TBPN Live - Google I/O Reaction, Microsoft Build Recap, OpenAI's Stargate Datacenter | Austen Allred, Jeff Morris Jr., Cliff Weitzman, Logan Kilpatrick & Tulsee Doshi
Episode Date: May 20, 2025TBPN.com is made possible by: Ramp - https://ramp.comFigma - https://figma.comVanta - https://vanta.comLinear - https://linear.appEight Sleep - https://eightsleep.com/tbpnWander - https://wa...nder.com/tbpnPublic - https://public.comAdQuick - https://adquick.comBezel - https://getbezel.com Numeral - https://www.numeralhq.comPolymarket - https://polymarket.comFollow TBPN: https://TBPN.comhttps://x.com/tbpnhttps://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV(07:21) - Microsoft Build Recap (25:34) - Ben Thompson: The Agentic Web and the Original Sin Breakdown (01:05:06) - OpenAI's Stargate Datacenter (02:03:07) - Austen Allred. Austen is the founder and CEO of Gauntlet AI, a company focused on helping developers evaluate and test AI systems with greater precision. He previously co-founded Lambda School and is known for building tools that improve access to education and technology. (02:21:53) - Jeff Morris Jr. Jeff is the founder and managing partner of Chapter One, a venture fund focused on early-stage consumer and crypto startups. He previously led product and revenue at Tinder and has invested in companies like Lyft, Replit, and Notion. (02:41:42) - Cliff Weitzman. Cliff is the founder and CEO of Speechify, a text-to-speech platform designed to improve accessibility and productivity. He started the company to support people with dyslexia and has grown it into one of the leading tools in assistive technology. (03:03:16) - Logan Kilpatrick & Tulsee Doshi. Logan leads Google AI Studio, a platform that helps developers build with Google’s latest generative AI models. He previously worked in developer relations at OpenAI and is known for his work making cutting-edge AI more accessible. Tulsee Doshi leads Product for Responsible AI at Google DeepMind, where she focuses on building ethical and inclusive AI systems. Her work helps guide both internal development and public tools for fairness and transparency in machine learning.
Transcript
You're watching TBPN. Today is Tuesday, May 20th, 2025. We are live from the Temple of Technology, the Fortress of Finance, the Capital of Capital.
And we are recapping Microsoft Build, which was yesterday, and Google I.O., which is today.
We have a bunch of folks coming on from Google, but we're going through what happened at Microsoft.
Obviously, they're going even deeper into artificial intelligence, talking about the agentic web, the future where the robots
do your work for you and work while you sleep.
This is why we do three hours a day, folks.
Yep.
So we can cover it all.
You're going to have time to watch and listen to the full thing every single day.
There's just a crazy amount of news and deep dives this week specifically into AI.
Obviously, Microsoft and Google are doing
big keynotes right now, but Bloomberg has a fantastic
deep dive into OpenAI's Stargate.
And then there's also a profile by the same journalist
in Bloomberg about Dario Amodei over at Anthropic
telling his whole story.
And so basically everyone's getting coverage and AI is the
only thing people care about and so that's what we're talking about today
on the show. But let's give you a little bit of a recap of the news. So
Microsoft Build 2025 opens with Satya Nadella calling for an open agentic web. There's
new copilot features, open protocols, they're going all in on MCP, model context protocol,
that standard, not a particular tech product,
obviously defined by Anthropic, but then open sourced
and then adopted by both Microsoft and Google.
So the narrative is MCP won, basically.
And MCP became a very hot button issue.
Lots of people posting threads on how to use it.
We're not sure if it's GOATed yet,
but it's definitely in the conversation.
It seems like it could be GOATed.
It's doing quite well.
It's getting massive adoption,
and most importantly, I think it's more of a standard
than a particular product,
and so you're not seeing a wave of startups,
but you're seeing startups use MCP.
And then Elon Musk joined Satya Nadella on stage
via Zoom, presumably, or probably Microsoft Teams, I guess,
that they called in from,
and confirmed that Azure will host XAI's Grok 3
and Grok 3 Mini.
And he kind of talked about his vision for Grok
being a truth-seeking LLM and focused on a foundation model built on physics and first principles, which has been his goal with that. But if you look at Satya Nadella's pitch now for the models that are hosted on Azure, it's really just everything you can run. You can swap quickly between models; even DeepSeek is on there, and there's a version of DeepSeek that's been fine-tuned by Microsoft's internal AI team. There's all the Llama models, all the OpenAI models, and now the Grok models. Maybe they found him, asked him to leave, so now they're hosting.
Yeah. And there are so many models now that Azure has a model router for OpenAI models, so you can import the Azure OpenAI model router and it will decide the cost-benefit trade-off for the individual model that you're using.
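To make the router idea concrete, here's a minimal sketch of what cost-aware model routing looks like from the application side. The model names, per-token prices, and the pick_model helper are hypothetical illustrations, not the actual Azure SDK or Azure pricing.

```python
# Hypothetical sketch of cost-aware model routing, in the spirit of the
# Azure model router described above. Model names and per-1k-token prices
# are illustrative placeholders, not real Azure APIs or pricing.

from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # assumed USD, placeholder values
    quality_tier: int          # 1 = cheap/fast, 3 = most capable

CATALOG = [
    ModelOption("small-chat-model", 0.0005, 1),
    ModelOption("mid-reasoning-model", 0.0050, 2),
    ModelOption("frontier-reasoning-model", 0.0300, 3),
]

def pick_model(prompt: str, needs_reasoning: bool) -> ModelOption:
    """Pick the cheapest model that plausibly satisfies the request.

    A real router would use a learned classifier over the prompt; this
    sketch just uses prompt length and a caller-supplied hint.
    """
    required_tier = 3 if needs_reasoning else (2 if len(prompt) > 2000 else 1)
    eligible = [m for m in CATALOG if m.quality_tier >= required_tier]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

if __name__ == "__main__":
    choice = pick_model("Summarize this paragraph.", needs_reasoning=False)
    print(f"Routing to {choice.name} at ${choice.cost_per_1k_tokens}/1k tokens")
```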
Oh, do you have a plug or, I guess we,
I guess we have, oh no, I got this one, cool.
Okay, and so yeah, you should go watch the full
conversation between Microsoft CEO Satya Nadella
and Elon Musk.
He says, with Grok 3.5, which is about to be released,
it's trying to reason from first principles.
Also, I saw a ton of Tesla battery packs show up.
Bad news for product managers who have been, you know,
spending their entire life figuring out how to think
from first principles.
That Grok 3.5 will just do it out of the box.
Yeah, yeah.
It's a golden age for people that never learned
first principles.
You can just say, think from first principles.
Yeah.
Do it for me.
Yeah, just do it.
This is golden retriever mode all over again.
Yep.
We've been calling it.
It was a bold prediction at the time.
Being dumb.
Dan Shipper summed it up.
Yesterday, Microsoft rolls out MCP support to Windows.
Today, Google officially supports MCP in the Gemini SDK.
It's over.
MCP wins.
And that makes a lot of sense.
It's a standard that everyone's building against now.
And we imagine that that will continue.
That's great.
Once you have Google and Microsoft and Anthropic,
everyone has to conform to it.
Coordinating with Dan, but hoping to get him
on the show Thursday.
He's got some news to share.
Another big announcement from Google I.O.
They have real-time Google Meet translation is live now.
I feel like we've seen this demo 10 years in a row,
but it feels like it's finally here.
And so there's a video where you can be talking
to somebody in a different language,
and it will live translate what you're saying
into the local language,
so you can just have a conversation.
Obviously, this is a,
it's like one of the most obvious AI use cases.
It's something that Google should do.
It's something that seems like extremely beneficial
for humanity across the board, just like pure alpha.
So excited to see that.
They're also releasing a diffusion language model
that I want to dig in with the Google team more on
because we've seen image models move away from diffusion
into more token-based structures
and transformer-based architectures.
Now language models are going into diffusion.
So I don't know enough about this, but I'm going to be digging in this week.
They also launched Deep Think for Gemini 2.5 Pro,
a new enhanced reasoning mode using Google's research in parallel thinking techniques,
meaning it explores multiple hypotheses before responding,
basically branching all the different trees.
There's already a little bit of this going on
in OpenAI's deep research product,
but obviously it's a very competitive space.
This enables it to handle incredibly complex
math and coding problems more effectively.
And I was excited about Google Gemini.
I've become more bullish on the product
since I sent you those incredible AI-generated videos from Veo 2.
And so I went in to Gemini,
because remember I generated three or four of those videos
and then it gave me a timeout and it said,
look, I know that you've spent millions of dollars
with Google over your career,
but $20 is the most money I could accept from you.
I couldn't possibly charge you an extra dollar
to make one more of these videos.
The GPUs are on fire.
Even though you would pay 2000.
And we have, even though we have $100 billion of cash,
we have to rate limit you.
We couldn't possibly light another GPU on fire
for you right now.
There's no price.
We should ask Logan later around rate limiting and the plans there.
Honestly, it was a very frustrating experience for me because I would have paid more in that
moment to generate one more video of a Lamborghini Urus with a TBPN livery driving around a track.
But I couldn't do it and it was not possible.
I mean, I guess I probably could have if I got a separate Google account with a separate
Gemini Pro subscription
Maybe on a separate phone. Yeah, there's a huge opportunity right now to create
basically a bundled Google subscription, where you subscribe to a service that then aggregates, like, yes.
Yes, yeah. It's kind of like how the banks have
FDIC insurance and you're insured for up to $100,000, now $250,000.
So some people will spread their money
across 10 accounts, or five, or four accounts.
You need to do that with Google so you don't hit the rate
limits, clearly.
So they put me on a timeout.
They said, John, you have generated too many of these GT3
RSs with TBPN liveries.
You need to take a break.
We'll see you one day from now, okay?
One day.
Come back.
Come back.
It's been three days now since I did that.
And you should be okay with that
because that's kind of a typical workflow with a human,
right?
You maybe get a few iterations and they say,
hey, I gotta go to sleep.
I gotta take a break.
I'll see you tomorrow.
Yeah, you're overworking me, boss.
Yeah.
And so it told me to take a break. It said one day. That is the agentic way.
I remember what it said. It said Monday night you can create more.
So I wanted to go and create one today, because it's Tuesday, and I said, generate a vertical video, 16 by 9, of Sundar Pichai
arriving at Google I/O in a Waymo and waving to a crowd of adoring fans who are taking pictures with Android phones.
And it says, too many requests in a short time period, try again later.
So again, a weird thing where like the product is amazing,
but I can't access it enough,
and I keep being frustrated with the actual rollout
of these products, but hopefully it's getting better.
I do wonder if they're getting slammed right now
because of Google I.O.
Everyone's testing out the new model,
and so they really are under a ton of demand.
What was interesting is that it didn't generate
any videos for me today and immediately it just said,
it just said too many requests in a short period of time.
It didn't say if it's too many requests from me
because this is the first request I've sent in today.
And so I was very unsure of what's going on there.
Is it just their system's under load entirely?
Can you prompt it to address you as Bossman?
Because it might be more, it would feel a little bit better
if it said, hey, Bossman, that's enough for today,
instead of just, you know, feeling like you're hitting those rate limits.
I'm working hard.
Going back a little bit, one thing that stood out to me,
seeing Elon and Satya having a conversation,
working on this product launch, felt significant to me
because it was not too long ago that Microsoft
was named as a defendant in the OpenAI lawsuit, right?
Elon's lawsuit against OpenAI.
Microsoft was a defendant.
And at a certain point, I believe it was in 2023 as well,
Elon was threatening to sue Microsoft for training
on X or Twitter at the time.
Oh, interesting.
I didn't know that.
And so these two haven't exactly had the best relationship.
I mean, clearly, they're fine on an individual level.
Yeah, yeah, yeah.
But the two companies.
The partnership now is cool to see,
just given that there had been quite a bit of friction
over the last couple of years.
But the funny thing is Musk was actually
way back in the day an intern for Microsoft Windows
in the 90s.
So very full circle moment.
Isn't there some story about him trying to get a job
at Netscape with Marc Andreessen,
and he didn't make it past the first round
and so he went to Microsoft.
That might be wrong, but there's something about Elon
before he started Zip2 was kind of moving around the valley
at different tech companies and it's funny to imagine
intern Elon writing some code for Microsoft Office
at the time or something.
But now they're partnering and it's all part
of Satya's strategy to be model agnostic.
It's always felt like Microsoft Research
has been behind the curve on LLM training.
Like Zuck's, although Llama has kind of hit a rough patch
with Llama 4.
Apparently, so Elon was rejected from a job at Netscape. Yeah, that's right. He
sent in his resume, showed up at the office,
yep, even lingered in the lobby, but no one acknowledged him. Wow, just Marc Andreessen walking by, mogged.
And now Marc's in xAI. Yeah, now they're boys. A big check, and a bunch of other companies too, but
It's great.
What a full circle story. But yeah, so I mean, Satya obviously is positioning
Microsoft to be model agnostic
and wants to be a vendor of this in the cloud platform.
And I think that's, it seems very good for Azure
in the idea that if you're a business
and you're building on Google, you might feel like,
oh, they're gonna really push the Gemini models on me.
Maybe I get locked in at some point.
Maybe that gets expensive.
But if I'm on Azure, I can dance around
between different OpenAI models and different Llama models
and different Grok models and have a lot more flexibility because
Satya has kind of said like,
hey, we're not gonna push you towards a particular model
or a particular like regime.
They also released an open-source VS Code,
sort of like Copilot product, which is big.
So anyways, reaction to that was-
Project Padawan.
It was positive as well. GitHub Copilot graduates to a full coding agent.
Project Padawan, GA for Copilot Enterprise and Pro Plus, capable of
autonomous feature work, bug fixes, and refactors. And so everyone's kind of
moving into these autonomous coding agents. We're having Scott Wu from
Cognition on the show on Wednesday tomorrow to talk about the landscape and
We're also having Leigh Marie from Kleiner Perkins on Friday to talk about the same thing, this idea of
how is the AI coding landscape shaping up? Is it one market, two, three, four?
As we were talking about it a little bit earlier, it feels like there's AI code that's written,
I mean, you just gave an example where you asked ChatGPT to count the number of white boxes
or something, what happened there?
So basically xAI received 168 Tesla Megapacks,
which are going to power
Colossus 2, which is their second data center.
And there's this sort of above ground image.
And I initially saw the image, and somebody else
had sort of counted them up.
So I asked ChatGPT to count.
And it spent 13 minutes attacking the problem
by writing code.
In that time, of course, I just looked at the X axis
and the Y axis and multiplied and got the right number.
But ChatGPT was running in the background,
attacking it in the most sophisticated way possible.
Ultimately, what the number I wanted to figure out
that was interesting is that the retail value
of the megapacks, which of course you can buy
on the Tesla website, $5 million,
comes out to around $850 million just on megapacks,
which is about half of their 2024 revenue, right?
Wait, half of whose?
Well, not XAIs, but X's revenue,
which is now combined.
Wow, yeah, yeah, yeah, yeah, yeah.
XAI doesn't have any revenue, really.
X has graduated.
Yeah, but they raised a lot of money,
so they're spending on Tesla megapacks, that's wild.
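Quick sanity check on the arithmetic in this exchange, using only the figures stated on the show (168 Megapacks at a roughly $5 million list price each; the "around $850 million" and "half of 2024 revenue" framing are the hosts' own approximations):

```python
# Back-of-envelope check on the Megapack figures cited above.
# Both inputs are the hosts' numbers, not confirmed pricing.
megapack_count = 168
retail_price_usd = 5_000_000          # approximate list price cited on the show

total_spend = megapack_count * retail_price_usd
print(f"Total retail value: ${total_spend:,}")   # $840,000,000, i.e. "around $850 million"

# The hosts say this is about half of X's 2024 revenue, which would imply
# roughly $1.7B of revenue under their own framing.
implied_revenue = total_spend / 0.5
print(f"Implied 2024 revenue under that claim: ${implied_revenue:,.0f}")
```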
Yeah, and I'm interested to see how,
ultimately exactly how this new round shakes out.
There's been rumors around the new fundraise for a while now.
Because the final details of that round
haven't been announced yet, right?
It's still just rumors.
That's interesting.
I wonder, 13 minutes is fast, but not that fast.
I wonder if there's a world where OpenAI has a function
it can call to, like, Mechanical Turk or Scale AI
to just have put a human on the case.
Well, it failed.
It failed?
So it didn't even do it in 13 minutes? It just basically timed out, because I could see it working, just writing code, taking all these different cracks at it.
Yeah, so to its credit, it would have taken a human, not 13 minutes to count those. No, just manually counting them,
like what I did. Yeah, I didn't actually go line by line, but like,
whatever, 20 seconds. 20 seconds, but if a human had to take that many attacks at
solving it, they wouldn't. What I'm saying is that you kick off this, how many white squares are
in this image, and it just displays that to a Mechanical Turker who's just sitting there doing random tasks all day long,
and they just one-shot it in two minutes.
Yeah, this is what I was talking about over the weekend.
I was joking around.
I'm sure some people took it seriously,
but I was saying I met somebody in the Philippines
whose job is to make investment decisions
for a firm that claims to use AI.
That's the final boss of using AI.
Yeah, but it's really, you know,
Mechanical Turk is just, you know, using outsourced labor to just make
all the investment decisions. Yeah, the overall announcement at any of these keynotes is
always, we're 90% of the way there, and the last 10% will be another 90% of the way there.
But I mean, there are some impressive announcements
and steps forward, but obviously there's a lot left to do.
So Copilot Studio, just running through
what else Microsoft announced.
Copilot Studio now supports multi-agent systems,
so developers can build agents that delegate tasks
to one another with the Microsoft 365 Agent Builder.
NLWeb debuts as HTML for the agentic web, letting any site expose a conversational endpoint
discoverable by AI agents, so an AI agent shows up and it can just chat with your website.
We talked about Project Padawan, GitHub Copilot graduating to compete in that space with Codex from OpenAI.
And now just in the OpenAI world,
you have three products that effectively write code.
You can just go into o3 or even 4o
and it can write a little bit of Python
like you experienced, which can have mixed results
but is sometimes extremely useful.
You have Codex that can actually plug into a GitHub repo
and go and write some code and fix bugs and whatnot.
And then you have Windsurf,
which can sit in your IDE alongside
and be kind of that ground up adoption.
And so there's one mental model where there's like,
the consumers are going to ask questions
and they don't even know that they should be thinking
about writing code, code will be written.
They might never see that code if they don't unfold the reasoning tokens
On the flip side, you might have someone who's using Codex to fix bugs and make small changes in a GitHub repo
without really opening up an IDE constantly, but they're just interacting with code,
and they're aware of what should be done, what should be built.
Then you have a product like Windsurf,
where for basically a full-time programmer,
they're working in code constantly,
and they're solving problems at a much higher level
than any of the agents can actually do,
but they are enhanced and sped up by these AI IDEs
that are speeding up their workflows.
And then potentially you have this top-down enterprise AI,
which is what Scott's building with Cognition,
where there isn't as much of a ground swell
around Devon as a consumer tool.
But as I understand it,
Devon is something that's been pushed top-down
on big corporations as something that needs to be rolled out
and it has much more of a, like an enterprise sales,
like go to market and sales cycle.
So interesting to see if that model holds
of like four distinct markets,
like consumer, prosumer,
ground up enterprise, top down enterprise.
I don't know if that's the right way to think about it,
but I think that's something we'll be digging into
over the next couple of days.
Yeah, I mean, ultimately, some combination of product led
and enterprise is probably the winning combo long-term.
They all seem to be getting adoption and printing money.
The question's just, yeah, churn
and how the market
looks in the really long term. It is interesting because AI coding tools, it does feel like
it's a massive market. You looked at the numbers for how much money is spent on software engineering.
It's huge. And then it's also just a completely new market in the sense that there isn't anything
that's really established. They're not really displacing spend on anything else.
It's all just incremental and additive,
so it's kind of hard to handicap
exactly what you're going after.
I thought this post from Caleb Harris was interesting;
he has a good call out.
He says, last night I dropped 12 backlog bugs
from Linear into Codex and launched.
Greptile found some nits in the reviews,
but they were almost entirely correct,
and then Spur QA tested it. Almost entirely human
out of the loop at this point.
That's awesome.
And yeah.
Well, good time to tell you about Linear.
Linear is a purpose-built tool
for planning and building products.
Meet the system for modern software development,
streamline issues, projects, and product roadmaps.
Anyway. Thank you, John.
They also launched Copilot Tuning,
which allows companies to fine tune their models
with their own data to create domain-specific agents.
Windows AI Foundry launches as a local platform
on Windows and Mac for training and fine tuning
and running LLMs with Foundry Local.
Azure AI Foundry Models adds Grok 3, which we talked about.
MCP gets first party support.
Windows Subsystem for Linux, a bunch of other stuff.
SQL Server 2025 hits public preview.
Let's go, SQL Server!
Huge for the database engineers in the crowd.
Let's give it up.
Let's give it up.
Anyway, I think the big definitive analysis
of the post game on Microsoft Build came from Ben Thompson,
but Derek Thompson, no relation I believe,
but maybe they're secretly brothers.
Derek Thompson.
That would be wild.
He wrote Abundance with Ezra Klein.
He says, fascinating, Ben Thompson's vision
of the future of the internet, i.e. the agentic web. The scenario is: before I go to sleep,
I tell ChatGPT, plan my five-year-old's birthday
next Saturday, budget $500. When you've made the reservation,
email these 20 people a printable invitation to attend.
Also, my wife wants to go to England in mid-July,
find five plausible flights for the family
and make several distinct itineraries.
Finally, please edit this work memo.
When I go to sleep, the AI
agent negotiates slots with two bowling alleys, buys a cake, emails, printable invites, plans the trip, copy edits, etc. This presents an interesting economic challenge. What happens to ad revenue when more traffic is just AI?
Ads make little sense when the reader is a robot.
Well, maybe instead, sites ask agents to make tiny payments that are
fractions of a cent every time they call up an article. Ben suggests the use of
no fee stable coins, blockchain dollars, but I think micro transactions could
work with old-fashioned dollars. I have no idea if this cashes out,
but very interesting vision of how AI would necessarily transform internet
economies as traffic shifts from humans to agents.
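As a thought experiment on the "fractions of a cent every time they call up an article" idea described above, here's a minimal sketch of how a site might meter agent requests. The price, the AgentLedger class, and the serve_article helper are invented for illustration; no real payment rail or standard is implied.

```python
# Hypothetical sketch of per-request micropayment metering for AI agents.
# The fee, the ledger, and the function names are illustrative assumptions.

from decimal import Decimal

PRICE_PER_CALL = Decimal("0.002")  # assumed tiny per-article fee, in USD

class AgentLedger:
    """Tracks what each agent owes a publisher across article fetches."""

    def __init__(self) -> None:
        self.balances: dict[str, Decimal] = {}

    def charge(self, agent_id: str) -> Decimal:
        # Accrue one call's fee against the agent's running balance.
        self.balances[agent_id] = self.balances.get(agent_id, Decimal("0")) + PRICE_PER_CALL
        return self.balances[agent_id]

def serve_article(ledger: AgentLedger, agent_id: str, article_id: str) -> str:
    """Charge the calling agent, then return the article body."""
    owed = ledger.charge(agent_id)
    return f"[article {article_id}] (agent {agent_id} now owes ${owed})"

if __name__ == "__main__":
    ledger = AgentLedger()
    for _ in range(3):
        print(serve_article(ledger, agent_id="research-bot-1", article_id="agentic-web"))
```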
Interesting, so I wanna dig through this
Stratechery article, but I also don't, yeah.
And this is significant because so every business,
every consumer tech company eventually becomes
an advertising business from Instacart to Uber to Apple.
And if you remove these ad businesses, it typically will just rip out a lot of the actual
earnings, right? Because advertising tends to be very high margin. And so what I think we're going
to see is ultimately this kind of drawn out war between agentic software and the old internet where the old
internet doesn't want to let go of advertising. And so I think that
microtransactions are potentially somewhere that you could land.
But it's interesting to dig through. I'm not entirely sure that language models
and AI agents will be entirely resistant to advertising.
Because I imagine that they're being-
It's more so if you're Uber,
and you have a great ads business,
are you thrilled to have an agent trawling
and booking things and buying groceries and all this stuff?
You're not, right?
You're like, I would prefer-
Unless the ads that you run on your platform
influence the future training of those AI models.
So for example, if an LLM is training on transcript
of this show right now, and then we start talking
about how RAMP can save time and money.
Well, it does, John.
Yeah, yeah, exactly.
We're stating the facts here.
And we're explaining that RAMP has easy to use
corporate cards, bill payments, accounting,
and a whole lot more all in one place.
Not even to mention ramp travel.
Yeah, we didn't even get to mention ramp travel,
which makes travel so seamless and streamlined.
We use it every time we hit the road.
We do.
And so an LLM, as it's trawling through the internet,
comes across a host-read ad like that,
that influences the future
recommendations it makes, right?
It gets baked in the training data.
So perhaps ads could be just as influential in the sense that LLMs are simulating human
behavior.
Everyone likes to say that advertising doesn't work on them, but time and time again, data
shows that it does.
And so if you have an agent
that's actually perfectly simulating a human,
it should also be simulating receptivity to advertising.
You would think that it would.
That's right.
But let's dig into Ben's article
because I think I agree, and I think you agree,
more than we're letting on right now,
but let's dig into it.
So he starts with this 2014 article in the Atlantic by Ethan Zuckerman about like
the death of the web. I remember this iconic article. He says, from Ethan
Zuckerman, I have come to believe that advertising is the original sin of the
web, the fallen state of our internet.
Right? No.
Nobody's taking this serious.
Nobody's taking this seriously.
Yeah, I forget what.
Jordy has a laugh track now.
It's called the Internet's Original Sin.
And.
Keep going, John.
You're going to make this very hard.
The fallen state of our internet is a direct,
if unintentional, consequence of choosing advertising as the default model to support online content and services.
Through successive rounds of innovation and investor story time, we've trained internet
users to expect that everything they say and do online will be aggregated into profiles
which they cannot review, challenge, or change that shape both what ads and what content
they see.
Outrage over experimental manipulation of these profiles
by social networks and dating companies has led
to heated debates amongst the technologically savvy, but hasn't shrunk the user bases
of these services as users now accept
that this sort of manipulation is an integral part
of the online experience.
Marc Andreessen.
Marc Andreessen.
Marc Andreessen, who was there when the web was born, explained in a 2019 podcast why
this sin was committed.
The quote is edited lightly for clarity.
One would think the most obvious thing to do would be building in the browser the ability
to actually spend money, right?
You'll notice that didn't happen.
And in a lot of ways, we didn't even think it's unusual
that it didn't happen because maybe
that shouldn't have happened.
I think the original sin was we couldn't actually
build economics, which is to say money into the core
of the internet and so therefore advertising
became the primary business model.
We tried very hard to build payments into the browser. It was not possible. We made a huge
mistake. We tried to work with the banks and we tried to work with the credit card companies.
It was sort of this classic single point of failure bottleneck or at least in this case,
two points of failure. Visa and MasterCard essentially had a duopoly at the time and so
they were just, literally, if they didn't want you to be in the switch, they didn't want you to be able to do transactions,
you simply weren't going to do it. And what's interesting is, like, I feel like the modern
entrepreneur, like the Sam Altman-style dealmaker, if you
transported them back in time, you put the you-can-just-do-things mindset in there,
you'd just be like, oh, yeah,
like whoever's building the browser did some crazy deal with Visa and he's bundling these transactions, and
so yeah, they're totally waiving the 3% fee, but they're aggregating this way and
they're also investing. It's interesting, too, about the
credit card companies at the time. You can imagine, I would pay an
incremental fee to have my credit card data perfectly stored in my browser.
You could capture bips on that pretty easily.
Totally, totally.
And so you'd think there would have been a deal to be done.
But remember, this was at a time when people
were scared of the internet.
People generally weren't even fully convinced at the time;
it still had nonbelievers.
I wonder if it was more of a failure of imagination
from Visa and MasterCard executives,
or a failure of aggression in the deal-making
of the Web 1.0 entrepreneurs.
Because the Web, whatever generation we're in right now,
you feel-
You think, you're basically saying
entrepreneurs today are built different.
It's not just that they're built different,
it's that they're very willing to go and do crazy deals
with huge companies.
Yeah.
I mean, think about that Ankur Jain company that-
One thing that's real, back then,
there was more,
almost technical risk to something like this
that feels simple now.
No, it's reasonable now, it didn't happen, but yeah, yeah.
I mean a lot of it is just the maturity of the industry.
People take startups seriously.
But I'm thinking of that, what is that Ankur Jain company where you can pay your rent with
credit card and Wells Fargo like totally took a bath on the deal, at least in the short
term?
You remember this?
It was some sort of credit card?
Oh, Bilt?
Bilt, yeah, Bilt Rewards.
And so like doing a deal with Wells Fargo at that scale as like a startup is crazy,
but like, Ankur is clearly just a really, really great like deal maker. And so he was
able to get it done. And if points on housing actually does play
out to be a real thing and a big business,
it will be on the back of that crazy deal
that needed to get done to make this happen.
It wasn't just gonna be some ground up,
small company deal, you had to go in and do this big thing.
Yeah, it's interesting in some ways,
now Apple Pay enables what would have been the goal
at this time, but it does it effectively at the hardware layer.
Yeah, yeah, yeah, yeah.
It's not even that good.
I think it's pretty,
it's getting to the point where it's pretty good.
Yeah, yeah, it's pretty good.
But I mean, still like you hit web checkout forms
all the time and have to hit the autofill,
which is like such a kludge
because it could just not even be there.
It could just be as native as you click a link
or even just like the integration of email
or the integration of the share sheet.
There's so many different aspects
that are built into the web browser natively.
The ability to play video on the web,
that was something that all the browsers needed to figure out.
And it just works now.
Every browser supports video.
Marc Andreessen was talking about
all the different things, like why hyperlinks are blue. He's the one that decided that, because he was like, we need hyperlinks, and he's the reason they made them blue. And it's like, there could have been a version of money.
So anyway Ben Thompson goes on to say I think Andreessen is too hard on himself
and I think Zuckerman is too harsh on the model Andreessen created the conditions for.
The original web was the human web, and advertising was one of the best
possible ways to monetize the scarce resource in digital: human attention. The
incentives all aligned. Users get access to a vastly larger amount of
content and services
because they're free, which I really like.
It's great that anyone can access the internet
for basically free and they can go all over
and most of these services are free.
The information spreads very quickly.
Content makers get to reach the largest possible.
The information doesn't spread quickly always
because there's a paywall.
But there's the black market of PDF as well that I think
it's Paula has talked about in SV.
Yeah.
Content makers get to reach the largest possible audience because the access is free and advertisers
have the opportunity to find customers they would never have been able to reach otherwise.
Yes, there are downsides to advertising,
the ones Zuckerman fretted about, but everything is a trade-off.
And the particular set of trade-offs that led to
the advertising-centric web were, on balance, a win-win-win that generated an astronomical amount of economic value.
Yeah, it's hard to go back and play out the Google story and say, like, oh, they left a lot on the table.
Yeah, and in so many ways content wants to be free, right? The second that you put web pages
or anything behind a paywall,
the attention they get drops by 99%, usually more.
Great.
Totally.
And...
Yeah, it's just way better to be advertising led.
It's way better to just, you know,
as you're talking about one story,
just start talking about Figma, you know,
think bigger, design, build faster.
Figma helps design and development teams
build great products together.
You can get started for free.
Anyway.
It is the backbone.
It's the backbone.
Of our show, and we can't thank them enough.
Okay. Thank you.
Moreover, I disagree with Andreessen
that we could have ended up with a better system
if the banks and credit card companies
had been willing to play ball.
In fact, over the last 30 years, the credit card companies in particular
have, in part thanks to companies like Stripe, gotten their digital acts together
and are integral to a huge amount of web based commerce, which itself is driven
through digital advertising, the largest category of advertising for both Google
and Meta. That too is human in that the biggest outcome
of digital advertising is physical products
and real world experiences like travel,
digital products like apps and games meanwhile
are themselves pursuing human attention.
What was not viable in the 1990s,
nor at any time since then was something
like micro transactions for content.
One obvious problem is that the fee structure
of credit cards don't allow for very small transactions.
Another problem is that the fee structure of credit cards don't allow for very small transactions. Another problem is that costs to produce content are front-loaded and the potential payoff
is both back-loaded and unpredictable, making it impossible to make a living.
The biggest problem of all, however, is that microtransactions are anti-human, forcing
a potential content consumer to continually decide on whether or not to pay for a piece
of content is alienating,
particularly when plenty of alternatives for their scarce attention exist.
I really believe that a large amount of the substack economy is effectively crowdfunding
somebody's ability to just nerd out on a specific set of topics.
I agree.
And I think it's beautiful.
I agree.
Yeah, it does seem like if there's, yeah, sorry.
Yeah, I was just gonna say,
but a lot of the content creators
that are paywalled on Substack
would be bigger and more influential
if it was just completely free.
Yeah, yeah.
Yeah, there's a big question in my mind of like,
where does investigative journalism go
in the era of going direct and modern tech media? Uh,
like where is tech's Seymour Hersh? Like, you need someone who can just be like
Seymour Hersh was just on the payroll of big newspapers for a long time and then
could go and like hang out at a bar and talk to former soldiers about what they
did in Vietnam and then figure out that the My Lai massacre happened
and like uncover this like massive scandal.
And we see this in Silicon Valley where there are companies
that are like, you're not gonna be able to put a bunch
of sponsored content on a take down of some company
that's defrauding their investors.
The win case is the John Carreyrou story with Theranos,
where he was able to be on the Wall Street Journal payroll for a long time,
hunt that story down for a long time,
do a lot of sourcing and a lot of confirmation because it's extremely risky to
write something negative about a $10 billion company that has incredible
resources. I mean, they hybrid.
You kind of need the heat shield of a big organization. Totally. Yeah.
And also just you need,
you need to be able to pay your mortgage while you're doing the research and not publishing and just doing
nothing basically and and being a hundred percent focused on on chasing
down a really big story and then when you do it there's a huge outcome because
he of course sold a book and then the rights to the documentary and the movie
and the TV show and all this stuff. And I don't know
what he's up to now,
but it seems like it worked out really well.
It'd be great to get him on the show,
hear if he's getting back in the game,
digging into some of the, maybe starting more times.
Theranos 2.0 is coming right now.
He's like, I'm back.
He's like, I'm ready to start talking to the employees
of this new blood testing company.
It would have been a huge missed opportunity
to not just name the company Theranos 2.0 Inc.
Yeah, that'd be great.
I mean, attention.
Do we know what the name is yet?
I don't know.
I missed it.
But we'll have to look at that.
Anyway, subscriptions do work at smaller scales,
says Ben Thompson, because they are ultimately
not about paying for content, but giving money
to another human, which you've mentioned,
or human institution. From The Local News Business Model, which Ben wrote: it is very important to clearly
define what a subscription means.
First, it's not a donation.
It is asking a customer to pay money for a product.
What then is the product?
It is not, in fact, any one article, a point that is missed by the misguided focus on microtransactions.
Rather, a subscriber is paying for the regular delivery
of well-defined value.
Each of those words is meaningful.
Paying a subscription is an ongoing commitment
to the production of content, not a one-off payment
for a one-off, one piece of content that catches the eye.
Regular delivery, a subscriber does not need to depend
on the random discovery of content.
Said content can be delivered to the subscriber directly, whether by email, a bookmark, or an app. Well-defined value: a subscriber needs to know what they are paying for. The last point is the crux of why many ad-based newspapers will find it all but impossible to switch to a real subscription business model. When asking people to pay, quality matters far more than quantity, and the ratio matters. A publication with one valuable article a day about a well-defined topic will easily earn more subscriptions than one with three valuable articles but 20 worthless ones covering a variety of subjects. Yet all too many local newspapers, built for an ad-based business model that calls for daily content to wrap ads around, spend their limited resources churning out daily filler, even if those ads no longer exist. Ben says, I expect that this model will endure in the age of AI. Obviously I'm biased at this point, but in a world of infinite content on demand, common content becomes community, and if I'm successful, this essay will generate a lot of discussion amongst a lot of people precisely because it is both original and widely accessible,
funded by an audience that wants me to keep writing
articles exactly like this.
So of course Ben writes free articles
and then also paywalled subscriber-supported articles
and goes back and forth.
So he gets a little bit of reach.
So he goes in here, the death of the ad supported web.
The ad supported web, particularly text-based sites
is going to fare considerably worse.
In fact, the most substantive pushback to my defense of advertising was in my own
excerpt: most ad-supported content is already terrible,
thanks to the bad incentives both Zuckerman and Andreessen bemoan, and the
impossible economics enabled by zero marginal cost content generation and
consumption.
Google in its most idealized form aggregated content consumers
by mastering discovery in this world of abundance,
directing users to exactly the site they were looking for,
which was monetized through ads that were sold and served
by Google.
Indeed, this is the great irony in the ads antitrust case
in which Google is currently embroiled.
Eric Seufert asked on Mobile Dev Memo,
I've heard arguments that because Google's suppressed competition in open web advertising markets,
those markets should flourish when Google's monopoly is broken.
But my sense is that this ignores two realities.
First, that consumer engagement has shifted into apps and walled gardens irreversibly,
of course, like most people get their news from X or Instagram or Facebook.
Yeah, and this was the logic for X to say,
yeah, we don't want you posting links.
We're fine for you to post an excerpt or a screenshot.
Let's build those walls a little bit higher.
Because we want you to click into the content
and see another ad and then click out,
go see something else, see another ad,
and just do that forever.
And second, that Google was keeping the open web
on life support, and the open web's demise
will be hastened when Google no longer has an incentive to support it.
What happens to the open web when its biggest,
albeit imperfect, benefactor loses the motivation
to sustain it?
Well, everyone's going in the walled garden.
Walled gardens, like social networks,
are both more attractive to most users
and also better for advertisers.
Google might soon lose what little motivation
they had left to support the open web.
However, that's not Google's and the web's only problem.
Why go through the hassle of typing a search term
and choosing the best link, particularly if search results
are polluted by an increasingly overwhelming amount
of SEO spam, now augmented by generative AI,
when ChatGPT or Google itself will simply just give you
the answer that you're looking for.
In short, every leg of the stool that supported the open web is at best wobbly. Users are less likely to go to ad-supported,
content-based websites, even as the long tail of advertisers might soon lose their conduit to place ads on those websites,
leaving said websites even less viable than they are today, and they're barely hanging on as it is.
I mean, can you think of a single?
The classic example of this is like,
historically trying to find an ingredient,
like a recipe, right?
You're like, I wanna make oatmeal.
And the site like buries the only information
you actually need, which is like how much,
how many cups of oatmeal to water the ratio or whatever.
And you have to scroll by, you scroll by two pop-ups and six ads
to get to that little nugget of information
and now it can just be immediately surfaced by an LLM.
I'm trying to think about the last time I visited
a truly ad-supported website,
I'm scrolling through here and it's just nothing in my tabs. Everything is either a marketplace that monetizes
by transaction fees or a paywalled website for news
or just Google search generally, finding stuff.
But there are very, very few websites that I visit today on a regular basis that are purely ad-supported
like I do go to the Wall Street Journal or the Economist or
Stratechery or any of these, but none of them are purely ad-supported. The Wall Street Journal does have ads, but
they also have a big paywall and an actual subscription fee. So it's interesting.
Yeah, the open web, I mean, is it dying
or is it already dead?
Like it seems like, like, you know,
the Vice News and the Buzzfeed,
like that boom and bust has already happened.
Yeah.
So I don't know.
Anyway, he moves on to the,
yeah, I mean, what do you think?
It'd be interesting to try to honestly
run a deep research report on trying to understand traffic
to these websites that are effectively dying,
but not dead, right?
I wonder what the best example is.
I would imagine like how many websites have recipes
for cooking various things that still get people trickling in
and they're technically still monetizing.
WebMD is probably out there.
Yeah, the question is like at what point
does it cost more to even host the website
than it's generating traffic, right?
Yeah, I mean, there's been a couple of those roll-ups, right?
Because if you're making, whatever,
if you're making $1,000 a month in ads
and it costs you a couple hundred bucks
to maintain the website, in theory,
somebody would just keep that running indefinitely.
Yeah, I mean, TechCrunch was obviously famously ad supported.
They were bought by Regent,
which is a private equity roll-up of brands.
Let's look at their portfolio.
It's all very hyper-specific stuff.
They own TechCrunch now, but they also own
Computer World, Network World, PC World, Mac World,
Tech Hive, they own Cheddar News, they own Defense News,
Air Force Times, Navy Times, Army Times.
So I imagine that Navy Times is probably ad-supported.
I mean, I'm just clicking around on the website
and there's not a lot of like paywalls I'm hitting,
but they are running ads and I'm sure that they have other
ways to monetize whether it's through premium, you know,
reports or maybe conferences or something like that.
But in general, it seems like there aren't that many
companies, I would struggle to name companies
that are generating over a billion dollars in revenue
off of a purely ad-based website that's not an ad exchange, for example.
Yeah.
Right.
I doubt it exists.
It doesn't really exist.
Anyway, so Ben takes all of this, all this preliminary thesis and weaves it into Microsoft
and the open agentic web, says this reality is the fly in the ointment
of an intriguing set of proposals
that Microsoft put forward yesterday
at the Build 2025 Developer Conference
about the open agentic web.
And so the quote from the CTO is,
the thing that is super important
if you think about what an open agentic web could be
is you need agents to be able to take actions on your behalf.
And one of the really important things about agents being able to take actions
and on your behalf is that they have to be plumbed up to the greater world.
So you need protocols,
things like MCP and A2A and things that are likely going to be emerging over
the coming year that will help connect in an open, reliable, interoperable way,
the way agents that you are writing
and agents that are being used so actively now by hundreds of millions of
people to be able to go access content to access services to take action on
behalf of users in fulfilling the tasks that have been delegated to them one
aspect of this vision of the agentic web says Ben Thompson was Microsoft's
commitment to MCP which we talked about a little bit created by anthropic Scott
told Neelay Patel an excellent interview on the verge that to MCP, which we talked about a little bit, created by Anthropic. Scott told Neelay Patel in an excellent interview
on The Verge that while MCP wasn't exactly
what he would have designed from scratch,
ubiquity is more important than semantic differences,
particularly when you're trying to create HTTP for AI agents.
The second part of Scott's vision was something
Microsoft created called NLWeb, natural language web,
and these are natural language interfaces for websites that make them more directly accessible for agents.
He says, if you think back to the web, we have HTTP, and then we have things that sit
on top of HTTP, like HTML mainly, that are opinionated about the payload.
So today we're announcing an NLWeb.
The idea behind NLWeb is that it is a way for anyone who has a website or an API
already to very easily make their website or API an agentic application.
It lets you implement and leverage the full power of large language models to enrich the services and products that you're already offering. And
because every NLWeb endpoint is by default an MCP server,
it means that those things that people are offering up
via NLWeb will be accessible to any agent that speaks MCP.
So you really can think about it a little bit like HTML
for the agentic web.
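To make the "every NLWeb endpoint is an MCP server" idea concrete, here's a rough sketch of a site exposing a natural-language ask endpoint that an agent could call. This is a conceptual toy, not the actual NLWeb spec or the MCP SDK; the /ask route, payload shape, and naive search function are assumptions for illustration.

```python
# Conceptual toy of a site exposing a natural-language endpoint for agents,
# in the spirit of NLWeb/MCP described above. Route name, payload shape, and
# the keyword search are invented; this is not the NLWeb spec or MCP SDK.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a site's real content index.
CATALOG = {
    "gt3-rs-track-day": "Track day listing: 2023 GT3 RS, $310,000, available now.",
    "oatmeal-recipe": "Oatmeal ratio: 1 cup oats to 2 cups water, simmer 5 minutes.",
}

def search_site(query: str) -> list[str]:
    """Naive keyword match over the catalog; a real site would use its own search."""
    terms = query.lower().split()
    return [text for key, text in CATALOG.items()
            if any(term in key or term in text.lower() for term in terms)]

@app.post("/ask")
def ask():
    """Agent-facing endpoint: takes a natural-language question, returns structured answers."""
    payload = request.get_json(force=True)
    results = search_site(payload.get("question", ""))
    return jsonify({"answers": results, "source": "example-site"})

if __name__ == "__main__":
    app.run(port=8080)
```

An agent that speaks this (hypothetical) interface would POST {"question": "..."} and get back structured answers instead of scraping the site's HTML.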
I always thought this was odd, because I thought that the
layer to go between APIs and HTML for an agent,
to just transform HTML into something that an agent
could understand, would be very, very simple. I didn't think that we'd need a new
protocol, but it seems like it's getting tons of adoption, so I guess it makes
sense. We've done a bunch of work already with our partners, who have been really
excited to be able to very quickly get to implementation and prototypes using
NLWeb. They've worked with TripAdvisor, O'Reilly Media, a ton of really great
companies that offer important products. So Scott concluded by reemphasizing how important it was
that the layers of the agentic web be open
and use the evolution of the internet as his reason of why.
So he says, so the last thing that I want to say
before handing things back over to Satya
is just sort of press on these two points
about why open is so important here.
You know it is unbelievable, what can happen
in the world when simple components and simple protocols
that are composable with one another are out there,
exposed to the full scrutiny and creativity
of every developer in the world who wants to participate
or who has an idea.
This thought game that I play with myself all the time
is trying to imagine what the web would have looked like
if one of the actors in the early development of the web, say the browser
manufacturers,
Browser manufacturers.
Marc Andreessen, a browser manufacturer. Yeah, he made his money in browser manufacturing.
Domestic browser manufacturing. American dynamism browsing.
So had decided that they wanted to vertically integrate and own the entire web.
A hundred percent of the web would have been dictated
by the limits of their imagination.
And it's just obvious with 30 years of history
that that wouldn't have been a very interesting web.
The web is interesting because yeah,
think of the iPhone app layer.
If Apple wasn't controlling it, it would be extremely chaotic,
but there would undoubtedly be sort of cool,
you know, things that would emerge.
Yeah, I'm kind of split on it because like,
there's so much creativity that you can do
within a JPEG or an MP4.
And so if you get on YouTube,
I mean, it requires more production value,
but you know, we, like, we are not constrained
by anything other than just pixels here.
And we can put tickers and do all sorts of things
within the confines of the MP4 box.
I guess it is a little bit more narrow
than a web UI that anyone can use and click around.
So in some ways we are being constrained,
but you just scroll on any social network and you
see the amount of creativity that, that occurs even within the constraints of
like no links, right?
Yeah.
Despite Apple's control of the app store, you still get the benefits of the web,
which he goes on to describe here.
The web is interesting because millions, tens of millions, hundreds of millions of
people are participating to make it this rich dynamic thing.
That's what we think we need with the agentic web. And that's what we're hoping you all can get inspired to go work on a little bit,
to riff on and to use the full extent of your imagination to help make this thing interesting.
Yeah. So he says, so back to Ben Thompson, he says, I think the widespread adoption of
MCP as a protocol layer and NLWeb as a markup layer sounds excellent. The big hole in Scott's
proposal, however, was pointed out by Nilay Patel in that interview.
That's the piece that on the web right now seems the most under threat.
The underlying business dynamics of if I start a website, put a bunch of schema on it that
allows search engines to read my website and surface content across different distributions.
I might add an RSS feed which is a standardized distribution everyone agrees on.
There's lots of ways to do this but if I make a website and I open myself up to different
surfaces, what will I get in return for that is not necessarily money, in almost every
case not money.
What I'll get is visitors to my website and I'll monetize them however I choose, selling
a subscription, display ads, whatever.
That's broken, right?
As more and more of the answers appear directly, particularly in AI-based search products, traffic
to websites has generally dropped.
We see this over and over again.
What's going to replace that in the agentic era, where we've created a new schema for agents to come and talk to my website and receive
some answers? What's going to make it worth it?
And this is a good question. Scott, in his answer, noted that websites would be able to communicate to agents
what they wanted to make available and on what terms, along with some vague hand-waving
about new advertising models and transactions.
That last point is valid.
TripAdvisor sells hotel rooms, O'Reilly sells
training courses, and you can see a world
where websites based on transactions can not only
benefit from exposing themselves to agents,
but in fact transact more and potentially pay
an affiliate fee, Patel.
So yeah, I mean, if you're searching for a car and you're searching across eBay and
AutoTempest and Cars and Bids and Bring a Trailer, all of a sudden it doesn't
really matter if someone's landing on your website. All that matters is that
they start transacting and that they found the car that they're
looking for. And so for all those transaction-based websites, they're gonna
do fine.
Yes, someone who's just making content
and putting ads on it is kind of screwed.
I don't know about these transaction-based websites, right?
I mean, thinking of cars as an example, right?
You have CarGurus, you have probably what?
Cars.com.
Bring a Trailer, Cars and Bids.
Yeah, the auction ones.
Imagine a world where, CarGurus is like an ad-based business model,
they want to drive leads to dealerships, and that might be a world where I just say,
hey, find me every single Ford GT between this year and this year. Yep, that's publicly available online.
Bring me the contact information and actually reach out to the people. Yes, yes, but,
I don't think you can disintermediate entirely.
I think that the agent would say,
hey, there's a Ford GT that's available on Bring a Trailer right now.
Here's the current price.
Do you want me to sit here and bid on it for you?
Yeah, but here's the issue though,
because you could have the agent, like in theory,
the agent would go to CarGurus and fill out the form
and pass a lead to the dealer.
And the dealer's like, thank you for this lead, CarGurus.
If I sell the car, I will pay you a fee.
Or maybe it's just paid upfront.
But the issue is an agent could theoretically just like
find the, see the image of the car
and then find, do an image search.
Maybe, yeah.
I'm just saying do an image search and be like,
oh, I found this exact car and this exact photo
that the dealer posted on their own site.
On their own website.
And I'm not gonna pay CarGurus, I'm just gonna find,
and I'll actually call the dealer
and confirm that they have it,
and I'll negotiate on your behalf.
And then CarGurus is cooked.
Yeah, yeah, they already do that,
but it's just much more scalable to say.
Yeah, but I mean, that's not a marketplace.
Like if you are on Bring a Trailer and you see a car,
like you generally can't disintermediate.
Sure.
Like so if you are a platform that's bringing liquidity
to the market and bringing listings
that don't exist elsewhere on the internet.
But yes, I mean, people see this with Airbnb
where they see a beautiful place and they're like,
I'm gonna go just look up this place
and see if they have their own website.
But a lot of people that are selling a one-off car
don't have a website where they're listing their car.
Like somebody might do that.
Maybe in the future, you would just post on X,
hey, I'm selling my car.
I have it listed on Bring a Trailer,
but you can just negotiate.
Well, it's funny.
So one of the companies that's probably pretty cooked
in this new era is Internet Brands, and they own WebMD. They own, all the way through, Ford Truck Enthusiasts. They own Rennlist.
That seems rough.
They own the Dodge Forum.
Like some of these more community-based sites,
I think are gonna be fine.
Yeah, yeah. Because it's a place where a discussion is happening
but again
Well, well, sites that are purely indexing... So yeah, Ben Thompson goes on, says the original sin of the internet, lacking native payments, was not in my opinion a sin at all. Advertising supported the human web not because Andreessen failed to make a deal with the credit card companies, but because it was the only business model that made sense.
No, the real neglect and missed opportunity
in terms of payments is happening right now.
Microsoft is on to the right idea with its adoption of MCP and introduction of NLWeb, but its proposal, by virtue of not including native payments, isn't nearly as compelling as it should be. The key difference from the 1990s is that on the agentic web, native digital payments are both viable and the best possible way to not only keep the web alive but also, in the process, create better and more useful AI. And so he goes on to talk about stablecoins and agentic microtransactions, starting with the viability.
This is from Bloomberg.
Stablecoin legislation overcame a procedural blockade
in the US Senate marking a major victory
for the crypto industry
after a group of Democrats dropped their opposition Monday.
The industry backed regulatory bill is now set
for a debate on the Senate floor
with a bipartisan group hoping to pass it as soon as this week.
Although senators said a final vote could slip until after the Memorial Day recess.
So hopefully this passes and we can talk to everyone on crypto day about this next week.
That'll be a lot of interest, a lot of fun.
Ben goes on to say, I know I've driven long-time Stratechery readers a bit batty with my long-running, and still enduring in the face of massive grift and seemingly unending scandals, interest in crypto, but stablecoins are genuinely a big deal. I wrote a brief explainer last fall when Stripe acquired Bridge, and so I'm sure most people are familiar with stablecoins. He goes on to say stablecoins solve several of the microtransaction problems I listed above, including dramatically lower or no fees and the fact that they are infinitely divisible and thus can scale to very small amounts.
Stablecoins by virtue of being programmable are also well suited to agents.
Agents, meanwhile, are much more suited to microtransactions because they are, in the end, simply software making a decision, unencumbered by the very human feeling of decision paralysis.
So an agent can just go around and make a bunch of microtransactions.
The entire digital ad ecosystem, he writes, is an example of deterministic agents making
microtransactions at scale.
Every time a human loads a webpage, an awe-inspiring amount of computation and communication happens
in milliseconds as an auction is run to fill the inventory on that page with an ad that is likely to appeal to the human. These microtransactions are only worth
fractions of a penny but the aggregate volume of them drives trillions of
dollars worth of value. The problem as I and Patel both noted is that this
ecosystem depends on humans seeing those web pages not impersonal agents
impervious to advertising, which destroys the economics
of ad-supported content sites, which in the long run dries up the supply of new content
for AI.
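As a toy illustration of the auction that runs on every page load, here's a minimal second-price auction sketch. The bidder names and CPM values are made up; real exchanges layer on targeting, price floors, and strict latency budgets.

```python
# Toy sketch of the per-pageview ad auction described above: bidders submit
# CPM bids, the highest bidder wins, and (in a second-price design) pays the
# runner-up's bid. Everything here is illustrative, not a real exchange.
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Hypothetical bids in dollars per thousand impressions (CPM).
bids = {"dsp_a": 2.40, "dsp_b": 3.10, "dsp_c": 1.75}
winner, price_cpm = run_auction(bids)
print(winner, f"pays ~${price_cpm / 1000:.5f} for this single impression")
```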
OpenAI and Google in particular are clumsily addressing the supply issue by cutting deals
with news providers and user-generated content sites like Reddit.
This however is bad for the sort of competition Microsoft wants to engender and ultimately
won't scale to the amount of new content that needs to be generated.
What is possible?
Not probable, but at least possible is to, in the long run, build an entirely new marketplace
for content that results in a new win-win-win equilibrium.
First, the protocol layer should have a mechanism for payments via digital currency, i.e. stablecoins. Second, providers like OpenAI should build an auction mechanism that pays out content sources based on the frequency with which they are cited in AI answers.
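A rough sketch of what that citation-weighted payout could look like, purely illustrative: a fixed revenue pool split pro rata by how often each source is cited. The sources and numbers are hypothetical, and this is one reading of Thompson's proposal, not an actual mechanism any provider has built.

```python
# Hypothetical sketch of a Spotify-style pro-rata pool applied to AI answers:
# a fixed revenue pool is split across sources in proportion to how often
# each source was cited. All figures are made up for illustration.
def split_pool(pool_usd: float, citation_counts: dict[str, int]) -> dict[str, float]:
    total = sum(citation_counts.values())
    return {src: pool_usd * n / total for src, n in citation_counts.items()}

citations = {"wsj.com": 1_200, "reddit.com": 3_400, "example-blog.net": 150}
payouts = split_pool(100_000.0, citations)
for source, amount in payouts.items():
    print(f"{source}: ${amount:,.2f}")
```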
And this is kind of already happening with Spotify, right?
Like there's a whole bunch of music on Spotify.
They have a whole pool of capital from all their subscribers.
And then the more listens that they get, the more money flows in.
It's not really an ad-based auction system.
It's more like just dividing up the pie. The thing there,
and it's a good comp, is that that was happening from the very, very beginning. And the initial
deals that Spotify did with record labels to get the music on in the first place.
And now we're in a situation where all the LLMs are just using the internet. That's not true,
though. Like OpenAI has a deal with the Wall Street Journal and they have some deals, but they certainly don't have deals with
Every book they've ever ingested. No, no, no, but you imagine that they'll kind of go down the power law curve of like the most important
information sources and
Broker deals more and more and more which is what Spotify did at the beginning
They'll do that if they're sued. Remember, like they had, and they have been sued.
They've been sued a ton.
And then. Over and over.
So they get sued and then they settle
and then they do a deal.
And like, yeah, it's a little messy, but it's fine
because at the end of the day,
the Wall Street Journal has a deal with OpenAI
where OpenAI can ingest information from this,
surface it in chat GPT results,
and the Wall Street Journal gets paid for that work.
And I don't know the structure of the deal,
but I imagine that it's something like, like, you know,
we're generating this much money from people searching and,
and you're driving 1% of our value.
And so we'll give you 1% of the cut or half a percent. We'll split it with you.
Just like, you know, if you're Taylor Swift on Spotify and you're getting,
you know, 1% of the listening time, they'll give you half a percent of revenue or something like
that, right? Is that not crazy? Okay, I love it. Theoretically, I think that's what's happening, and the problem is it just needs to go down the stack until it's self-serve, because if you want the open web, you can't do individual deals. And so eventually anyone can go on Spotify. Like, we're on Spotify, we get, I think we get a check. We get a check from Google, at least.
And in theory, it's like,
we don't need to do a deal with Spotify.
Of course, Joe Rogan did do a deal with Spotify directly,
like the Wall Street Journal did a deal with OpenAI.
But plenty of podcasts go on Spotify.
Spotify automatically runs ads and does premium plays,
and then sends them a check.
It's just such an interesting dynamic, right?
Because so often LLMs are not just serving the content.
They're effectively remixing it, right?
But the precedent for that is, historically, if somebody from the Journal goes into the public library, reads a bunch of books, and forms their own kind of summary, nobody's getting paid for that. Right. Yeah.
But so there's an argument, the models can make an argument that there's enough precedent that says there's nothing illegal about ingesting information.
But there is this, there is this win, win, win that Ben Thompson's talking about.
This, this idea that, uh,
Google has an incentive to keep the open web alive and the LLM providers might
have a similar incentive because they want more journalism and more facts to hit
the internet that they can scrape and pull into results.
And so what is the value of a single Wall Street Journal page loading logged out? Right? Yeah, I can see a scenario where it's like net new content or something like that. Like, why doesn't OpenAI do a deal with Substack? Why not every time? Why not on a per-user basis? I asked for a summary of the Wall Street Journal Mansion section. No, I'm saying it went and read ten articles. I'm saying every time. I'm just saying it's possible that there needs to be some type of cutoff date where, you know, again, how does this actually... it sounds really simple, and finding this win-win-win, but the actual mechanics of getting a deal done between the internet and a relatively small group of foundation model providers.
But again, are you saying that there's a tremendous amount of complexity?
I'm saying there's a tremendous amount of complexity, John.
Because that's exactly what Ben Thompson concluded with. He said, there is, to be sure, a tremendous amount of complexity in what I'm proposing.
And the path to marketplace for data generation
is quite unclear at the moment.
Who, however, could have predicted exactly how
the ad-supported web would have evolved
or centrally designed the incredible complexity
that undergirds it?
This is where Scott's exhortation of openness is spot on.
A world of one dominant AI making business development deals with a few blessed content creators and scraping the carcass of what remains on the web for everything else is a far less interesting one than one driven by a marketplace, auctions, and aligned incentives. To get there, however, means realizing that the internet's so-called original sin was in fact key to realizing the human web's potential, while the actual mistake would be in not building payments now for the coming agentic web.
And so, bull market in stablecoins. We could see stablecoins trading at $1.0001. So buy them now.
It's so funny because everybody's so bullish
and convicted on stable coins now,
but it's like, how do we make money on this?
How do we make money?
I don't think it's launching the next Circle, right?
Or another stable coin provider,
even though that clearly is a great business
if you look at some of the different providers.
I was actually wanting to get,
there had been rumors that Circle is talking with Coinbase
and Ripple about a sale because they put out their S1.
The market hated it.
And they're basically like,
you are completely dependent on Coinbase in many ways
and paying all such a large part of your revenue to Coinbase, you know, does this.
So anyways, could make a lot of sense
for those two to join forces,
but Coinbase is also dealing with a lot right now.
Yeah.
Well, that concludes Microsoft
and we've talked about Google a little bit.
Should we move over to Stargate and open AI?
We're taking a full tour today.
We're just going around the horn to all the big labs,
one story after another.
We got to get boots on the ground in Abilene.
Yeah, we got to go there.
I mean, Chase was texting me earlier.
We got to have him on the show.
But I think visiting would be worth it.
It seems like Shirin Ghaffary, I don't know if I'm pronouncing that right, but Shirin over at Bloomberg got an exclusive look inside Project Stargate, the $500 billion AI infrastructure project, including an on-the-ground tour
of its first massive data center in Abilene, Texas
with interviews of Sam Altman.
Love it.
She's been on absolute tear, wrote this today,
dropped the anthropic profile of Dario yesterday.
Just banger after banger.
So inside the first Stargate AI data center,
OpenAI, Oracle, and Softbank hope the site in Texas
is the first of many across the United States.
Let's dig in.
Trucks carrying concrete and electrical wiring plod over red clay, weaving in between cranes and excavators. Two symmetrical buildings stand on a massive plot of land where thousands of people in brightly colored vests work day and night to construct six more near-identical structures that will make up the first site for the Stargate project. Stargate is a collaboration between OpenAI, Oracle, and SoftBank
with promotional support from Donald Trump.
Love it.
Throwing that in.
Still unbelievable that they got him to just go
and announce it even though.
The guy loves building in America.
This is like his whole brand.
Showbiz.
Loves building the big things.
To build data centers and other infrastructure
for artificial intelligence throughout the United States.
The companies have pledged to spend as much as $500 billion,
a number so large it's hard to believe
it'll actually happen.
But at least for this one, in Abilene, Texas,
180 miles west of Dallas, Chase Lockmiller says,
they're good for the money.
Chase Lockmiller. This is an incredible name. He chases the energy, he locks it down, then it's Miller time. Chase Lockmiller, baby. The nominative determinism is wild. Let's pull up the photo of Chase. He's looking great. An absolute dog. Yeah, he's been on this for so long. You can go to the next slide, it's more centered.
I met him four or five years ago when they were doing oil and gas flaring for crypto mining.
And it was great.
And it was, I mean, the business was doing really well
back then.
It's funny because it was very low status
to do the crypto to AI pivot.
Totally.
Yet you get CoreWeave and Crusoe out of it.
Yep.
Just absolute monsters.
Always ignore the meme.
Whatever the meme is, just ignore it.
Build a GPT wrapper and sell it to OpenAI for $3 billion.
Pivot from crypto to AI and IPO for $40 billion.
Do it.
It's fine.
It doesn't matter.
Ignore what the haters think.
Yeah.
So Lockmiller is, of course, the co-founder and chief
executive officer of Crusoe, a startup that
helps develop AI data centers.
Crusoe's cost to build the one in Abilene
are expected to reach 12 billion, Lockmiller says.
And that's not counting the billions of dollars worth
of Nvidia Corp chips that will be installed
in the finished facility.
I've never built anything of this scale, Lockmiller says.
I've had to learn a lot on the job.
So the three main companies behind Stargate
could say the same thing.
OpenAI knows AI, but has relied on Microsoft
for data centers.
Oracle knows databases but only holds 3%
of the cloud market.
SoftBank once raised a $100 billion fund
but that didn't go so well.
Let's give them some time.
Let's let them cook, you know?
Where's the money?
Ten-year horizon.
We've got a few years left on that one.
Oh yeah, we got to pivot really quickly to the economist
because the economist has an article about
will OpenAI ever make real money?
We should do a whole deep dive on this one,
but the way they describe SoftBank is hilarious.
Here, they're talking about the latest OpenAI fundraising. Happily for the CFO of OpenAI, money men swept up in the AI mania need little persuading; they're falling over themselves to fund OpenAI. On May 13th, SoftBank, a Japanese tech piggy bank, said that its $30 billion investment in the firm was unaffected by OpenAI's recent decision not to ditch its odd governance structure.
A non-profit board will keep control of its for-profit arm.
It's a whole great article.
I mean, I completely-
We were talking about this off air.
I mean, it's absolutely credit to Sam for not completing the conversion but not having to renegotiate, at least, the top-level numbers. Yeah, the 30 on 330 or whatever it was. One of the best dealmakers of all time. And Masa clearly, yeah, clearly doesn't care what price he pays. So like, simultaneously the SoftBank deal is like crazy and unprecedented and wild, and Japanese piggy bank and all that. But the premise of, like, will OpenAI ever make real money, is like ridiculous to me.
Like, will the next consumer tech giant make money online?
Will a company with 500 million users make money?
I was saying this yesterday off air.
I was like, you remember it was only,
it was less than a year ago, people,
Anthropic was ripping.
Yep.
And everybody was just saying, yeah, OpenAI is Yahoo, OpenAI is Yahoo.
I was thinking about this a lot and I was like,
what was the problem with Yahoo?
Like they got like, like they got disrupted by Google obviously, but like more specifically
what happened, like it was, it was a shift from, from non algorithmic links, right. To
a better algorithm in page rank. Like it was a development of a better algorithm that really smoked them. And I'm wondering, like, is that,
is that a point of fragility for open AI?
Like you could imagine someone comes up with a different model for AI.
Like they say, like the transformer architecture is fundamentally not it.
It's V one and we're coming out with V2 and it's completely
different architecture. Fine. Maybe that happens. Is open AI agile enough to
implement that within six months? Like, probably, right? If DeepSeek can clone transformers and reasoning models in just a few months, you would think that, like... I guess the question is, Yahoo failed not just because Google invented PageRank, which sorted the links by the number of links to a specific website. And so if everyone is linking to Stratechery and you search for tech news, that increases the PageRank and that shows up at the top. And that was the key insight by Larry Page, the creator of PageRank. And it created
much, much better results.
Like when you went to Google,
it was dramatically better of a product.
I remember when it came out, like I was a kid,
and I remember using AltaVista.
You're what, like 30 at that point?
Yeah, yeah, exactly.
Like a little late, late teens.
I think I was actually like eight or something, but.
Anyway, so I had been using Yahoo and AltaVista
and you would search and it would just be
kind of like random links.
Or they'd be like kind of curated.
It would literally just be,
it would basically just search based on the keyword.
So if you searched for like tech news,
you would more than likely just see like technews.com
come up because it had really strong like SEO
just from keyword searching.
Google brought in PageRank and said, well, it doesn't matter that Stratechery, the name, the word, doesn't make any sense, and Ben Thompson never actually writes "I am tech news, technology news, hashtag technology." It doesn't matter what he says. All that matters is that everyone is linking to Stratechery and it's actually a great source for tech news. And so we're going to put that at the top of the results.
Now that was a better algorithm and it was a better product very quickly.
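For reference, the core of PageRank really is just a few lines: iterate rank scores over the link graph until they settle. Here's a minimal power-iteration sketch over a made-up graph; the site names are hypothetical.

```python
# Minimal PageRank power iteration over a tiny hypothetical link graph,
# to make the "sorted by who links to you" idea concrete.
def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical graph: everyone links to "stratechery", so it ranks highest.
graph = {
    "stratechery": ["technews"],
    "technews": ["stratechery"],
    "blog_a": ["stratechery"],
    "blog_b": ["stratechery", "technews"],
}
print(pagerank(graph))
```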
But if Yahoo had implemented PageRank, which I don't know if it was like patented or something,
but if they'd been able to keep up with that, people probably would have just stuck with
Yahoo for a lot longer.
But something about where the company was and the culture didn't allow them to upgrade
their search results. And so they fell behind very quickly.
And so for open AI to become the Yahoo of AI,
there would have to be a new company that comes out with a dramatically
different technology or paradigm and either is able to patent it or,
or keep it locked up in some way that open AI can't just port that innovation
back because they've obviously been able to do that
with other advances in image models
that are happening elsewhere, reasoning or tool use.
It would be crazy, and this is something
that's always interesting in tech,
is there just don't seem to be that many patents that hold.
How amazing would it be if you're just like,
I'm the person that thought of MCP
or I'm the person that thought of image diffusion
and I have the patent and so I'm the only one that can build the company
or you have to pay me to license this.
That's just not a conversation at all.
How many designers, for example,
have patents from big tech companies
and then can leave and effectively rebuild
the same features?
And the patents don't even seem to hold.
Did you know that the swipe down to refresh,
pull to refresh, that was patented by a designer at
Twitter, like 10 years ago, or 15 years ago, and
potentially, they created the slot machine for the mind.
Yeah, endless scroll was another slot machine. Yeah, endless.
Well, now now you don't even really pull to refresh. I mean,
I guess you do every once in a while, but mostly it's just
endless scroll. But that was another thing that was invented
at one of the big tech companies
and then immediately poured it across every other platform.
And like stories, Snapchat,
Evan Spiegel created the stories format.
Thank you Evan Spiegel for creating stories.
We love innovation.
Thanks for creating a low risk way
to just share what you're up to.
Really, thank you for the stories. But he wasn't able to patent it or, like, you know, keep Meta from rolling it out to every single thing. LinkedIn has stories now. And there's this question, like, it's kind of this weird failure potentially of the IP system. Maybe it's better because stuff just goes everywhere. But if you're OpenAI and you're on top of the innovations that are happening, even at other labs,
you would imagine that you'd be able to port back anything
and stay on top as long as you're that front door
and you have the scale that Yahoo or Google
eventually got to, hundreds of millions of users,
you should be fine.
Anyway, that was just my thinking on the Yahoo thing.
It feels like a stretch,
especially given the recent innovations.
Totally.
Anyways, back to Abilene, where we don't use Manus.
We use, we use chat GPT around these parts.
Anyway, so the article here is basically pointing out
that OpenAI, SoftBank and Oracle are funding
this $500 billion
project, and yet none of them have hyperscalers data center
experience.
No, Microsoft is not in Stargate.
It's OpenAI, SoftBank, and Oracle.
Yes, true, true, true.
And so the only companies that have built cloud infrastructure
on the scale are Amazon, Google, and Microsoft.
Yeah, Oracle knows databases, but they only have 3% of the cloud market, right?
Oh, Larry Ellison, don't count out Larry. Oh, yeah, right. Oh yeah, the truth is, Larry's good. Don't come for Larry. Larry's on top of it. We will defend Larry. He's 80 years old and he looks like he's 42. He looks great. Incredible. And he follows TBPN. Yeah, he's gonna be fine. We love you, Larry. He can build some cement. He's got Chase on the case. That's right. He's got Chase on the case. Anyway, so in
interviews with Bloomberg Businessweek, the CEOs of OpenAI
and SoftBank acknowledge that they've promised a lot and that
to some extent, they're figuring things out as they go.
And we knew this, right?
They announced this.
It was like, you know, Elon was pointing it out.
He was saying, where's the money coming from?
Yeah, it's very, you know, pulling $500 billion together is not not the easiest thing in the
world.
So they just dismiss critics, the loudest being Elon Musk, who says the full scope will never be realized because they don't have the money. We don't, says Masa, founder of SoftBank, the infamous Japanese piggy bank, as The Economist calls it in this article. We don't need $500 billion in one day. We'll go step by step.
Do you think more founders should be pitching stuff like Stargate? Like, yeah, I need to realize my full vision, I need 500 billion. I don't need it right now. So if you're gonna come at me for that, that's ridiculous. I didn't say I needed it right now, but over the course of this company we will consume 500 billion of capital. Yeah, I mean, a lot of these might, you know, who knows, trillions.
So OpenAI CEO Sam Altman knew they'd need construction crews and lots of computers. Yep. We're gonna need computers. Hey folks, we need lots of computers. But data centers also have extensive electricity and water needs to keep the machines running at a safe temperature, often requiring infrastructure upgrades that can take years. For Abilene, which will have a mighty capacity of 1.2 gigawatts, the team built its own gas power plant to get the place running more quickly.
Yeah, the actual...
We talked a little bit about Augustus, but there's this whole meme about like,
every time you search ChatGPT, it empties out an entire lake, and we're running out of water. And I mean, I'm sure it uses some water, but I have no idea exactly how that's calculated, and that feels like something that's being made up.
Yeah, the question is what percentage evaporates during the cooling process. Like, a lot of cooling systems are pretty closed loop, because you're just piping the water through tubes, and then the water goes onto the chip. The chip heats up, the water gets hot, and then you pipe the water out. That's like a water-cooled GPU rig on a gaming PC. Yeah, and you need to change out the water every once in a while, but not constantly. So I do wonder what the actual impact scale is here, and I bet there's a lot of people with the incentive to say you use too much water. I wonder if you could rip out the parts of an air-cooled Porsche and potentially use those. Yeah, air-cooled. I want an air-cooled data center, and then Elon can take the air-cooled Porsches and do an engine swap, put Tesla motors into them. I think what we want is a supercharged AI factory: just pipe the air back in, take the exhaust fans and reroute them right onto the chip. Yeah. So anyways, Sam knows he's gonna need a lot of computers,
but they also have extensive electricity and water needs. So Abilene is gonna have 1.2 gigawatts; the team built its own gas power plant to get the place running more quickly. One of the things that really surprised me as we were starting to dig into this, Altman says, was just how many things feed into the main line. The main line was still being assembled at the Abilene site in March. Some rooms are in the process of being wired and are off limits due to the risk of electrocution. In the future, tubes will pump cooling liquid around servers, which will be loaded with Nvidia graphics... or sorry, GPUs. GPU actually stands for graphics processing unit, yeah, as they thoughtfully noted in this piece. It's good to know. If you've been living under a cluster.
They also contextualize it by telling you what a GPU here is. It's not only a graphics processing unit.
You can think of it as the physical brain of AI.
Something there.
PBAs.
This is ripped straight from a softbank deck.
It's like they're talking to the same audience, retail.
For now, those tubes dangle from the ceilings beside half-lit hallways.
We're trying to deliver on the fastest schedule that a 100-megawatt or greater data center has ever been built, Lockmiller says from the front seat of a buggy on a ride through the construction site. In this moment, speed matters a lot.
The big reveal for Stargate was at the White House
on day two of Trump's second term.
I forgot, it was so early.
Altman, Son, and Oracle chairman Larry Ellison
stood behind-
This was so funny to remember
because it was Elon and Sam,
like both around the White House on the same day
in the midst of this battle.
Dude.
A little bit of an awkward.
Have you seen this picture of Masa getting height-mogged so bad by Larry? It's brutal. Really, John, you're gonna bring up height-mogging.
It's a sensitive topic on this show.
It's a sensitive topic.
But I mean, we gotta pull it up.
We gotta call it out.
Anyway, lavishing praise and crediting him
with help making the project possible.
The whole production came together in a matter of days, according to a person familiar with
the events planning who asked not to be identified.
Stargate's real origin, according to Peter Hoeschele, the vice president of infrastructure, strategy, and operations at OpenAI (Hoeschele? I'm not sure how to pronounce the last name), goes back to a research paper published
at the beginning of the decade,
written by a group at OpenAI that included Dario Amodei,
who'd go on to start one of the company's main rivals,
Anthropic PBC.
The paper describes so-called scaling laws,
this is GPT-2, which we've talked about a bunch,
which assume that more capable AI requires ever more data and computing resources. What's interesting is, like, we're talking about increasing the OOMs, but this has to be synthetic data at this point, because we've kind of already hit the data wall, I believe. Or have we? Because I don't know that anyone is ever expanding it. Yeah, but I don't know that anyone's really mapped out like 10x. Yeah, check out that photo.
Rough podium situation. Trump's also pretty big, too. Larry's pretty big. But yeah, Masa. Well, he's still a size lord in presentation. Yeah, he makes up for it with the checks he rips. Yeah. OpenAI, as many know, already had a
contract designating Microsoft, its primary
financial backer as its exclusive cloud provider, but Altman decided he'd eventually need other
options.
Oracle was looking for partners for the future complex in Abilene and was talking with Musk
before Altman came in.
It says a person familiar with the discussion.
Musk obviously chose Memphis for XAI, so went a different direction.
Oh, this is cool.
Altman picked the name Stargate
because of one of OpenAI's early data center designs
resembled the ring-shaped portal
that could open a wormhole
in the 1990s science fiction movie of the same name.
Also a TV show.
You ever watch Stargate?
Absolutely not.
Next question.
After half a year of Altman selling the idea to perspective backers stargate really came into focus in mid
2024 and he says he sound he found himself in a lot of zoom meetings also a
CIA project around mind reading. I don't know. I'm not familiar with that one
Look at that can the hardest part was figuring out what shape of the what would be. Would it be part of OpenAI?
Should it be a separate entity?
You know they love separate entities over at OpenAI, so we know where they landed on
this one.
Should we just try to get one company to build it all for us?
That took a lot of exploration.
Yeah, so the Stargate project was a classified United States Army program.
Wasn't affiliated with the CIA, but it was initiated in 1977 by the DIA and SRI International to explore the potential
of psychic phenomena in intelligence gathering
and military applications.
How'd it go?
Were they successful?
I mean, seems like it.
I'm sure they cooked.
So OpenAI is responsible for the operation of the business
and will be its main customer.
SoftBank handles the financial side, including raising additional funds. For Abilene, Oracle will lease the data center and fill it with servers. Crusoe declined to comment on who else is going in. OpenAI and SoftBank will each put in $19 billion to start. Oracle and another equity partner, Abu Dhabi-based investment firm MGX, are on the hook for $7 billion apiece, according to a person familiar with the matter.
So yeah, I mean 19 and 19 plus seven and seven,
you're getting up into the 40s, 50s, 60s, billions,
not bad, it's not 500, but they're getting there.
They're within one oom.
That's what they gotta do.
Striking distance.
Each project will be financed individually
with a mix of equity and debt,
with data center firm Primary Digital Infrastructure helping
orchestrate the financing.
JP Morgan Chase is doing some loans.
Yes.
So if you have money over at Chase, you're, I would say at this point entitled to say
that you're a financier of Abilene, of the project in Abilene.
That's great.
For future sites, SoftBank has held talks with dozens of lenders and alternative asset managers,
although it's yet to secure a deal,
and some of those conversations have slowed
due to global market volatility,
but now we're kind of flat.
Maybe the market's opened back up and things move on.
That's a structured financing
that SoftBank has done many times in the past,
so we are very confident.
And yeah, I mean, SoftBank took arm private,
and so they're clearly capable of operating at this scale. It's not that crazy. Not everyone is so optimistic about SoftBank's ability to execute. The last time Son did anything close to this scale was the Vision Fund, which depended on a $45 billion contribution from Saudi Arabia's largest sovereign wealth fund. Imagine clipping 2 and 20 on that. SoftBank's fund racked up losses, most disastrously on the office real estate startup WeWork.
Sometimes I get crazily overexcited
and I make a mistake like WeWork,
but when you have the conviction and the passion,
Masa's mistake was not backing, what's the new company?
Cursor?
No, no, no, the new real estate company by Newman.
Oh, I don't know.
Andreessen did.
Flow.
Flow, he should have doubled down.
But when you have the conviction in the past
and you actually learn from those mistakes and the scars
and that will make you stronger.
I've made so many more mistakes
that I think I'm a little stronger than in the past.
He's been very wrong, but he's also been very right
and he's been right more than he's wrong.
And so he's still in the game.
And as long as SoftBank is cooking as a company,
he can keep ripping checks.
That's the thing.
Yeah, I mean, his calculus is, I believe,
can be done on a napkin.
And this is just my guess.
It's a huge napkin.
He has a very large napkin.
And it's basically, if I buy 10% of OpenAI,
and it's a trillion dollar company someday
I'm gonna do very well, and I'm gonna lever up to do that, so my actual return on equity is gonna be fantastic. It's amazing, and he's gonna be laughing. And this is... he's a gambling man. It's not as crazy an investment as it sounds. One of the best books on Masa is called The Gambling Man. That's the context you need.
Yeah, we did a whole deep dive on it.
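For what it's worth, the napkin math is easy to sanity-check. Here's a tiny sketch with entirely made-up numbers (the equity/debt split and stake size are assumptions, not reported figures) just to show why leverage juices the return on equity if the trillion-dollar scenario plays out.

```python
# Back-of-the-napkin levered-return sketch of the logic described above.
# Every number here is hypothetical; it just shows how leverage amplifies
# the equity return if the bet works (and the loss if it doesn't).
stake_usd = 30e9           # total invested for an assumed ~10% stake
equity_usd = 10e9          # SoftBank's own equity (assumption)
debt_usd = stake_usd - equity_usd
exit_company_value = 1e12  # the "trillion-dollar company someday" scenario
stake_pct = 0.10

exit_value = exit_company_value * stake_pct   # $100B stake value in that scenario
profit = exit_value - debt_usd - equity_usd   # ignoring interest for simplicity
print(f"Return on equity: {profit / equity_usd:.1f}x")  # ~7x on these made-up numbers
```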
Complicating matters are Trump's tariffs. Cost calculations swing every time the president changes his mind, but the trend line is expected to go up on aluminum, steel, and other construction supplies,
not to mention Nvidia GPUs,
which are made mainly by Taiwan Semiconductor Manufacturing
Co. TSMC.
Son says he expects no significant impact from the trade policies, and I think he's right. I think this is what Trump wants, and I think all of these will get carve-outs,
and they've already been negotiated down,
so I'm not too worried about that.
I think the bigger question here is just, like, what is the return to scale? Because we've seen the scaling laws kind of maybe potentially drop off in terms of the pre-training wall and whatnot. And so the question is just, what does a 10X larger facility actually get you?
It better be worth it because we know the economics
of GPT-4 training run were great.
Like they printed tons and tons of $20 a month subscriptions
off of that and I think the total cost was like $100 million.
Now you're up at 10 billion.
Can you get everyone on a $2,000 a month subscription?
If you can, no big deal.
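A quick back-of-the-envelope on that, using only the figures tossed around above plus an assumed subscriber count (the 10 million number is illustrative, not OpenAI's actual base):

```python
# Rough breakeven arithmetic for the scaling question above. All inputs are
# assumptions pulled from the conversation, not OpenAI's actual economics.
def months_to_recoup(training_cost_usd: float, subscribers: int, price_per_month: float) -> float:
    return training_cost_usd / (subscribers * price_per_month)

# GPT-4-era ballpark mentioned above: ~$100M training run, $20/month subs.
print(months_to_recoup(100e6, subscribers=10_000_000, price_per_month=20))    # ~0.5 months
# A 100x bigger run needs either ~100x the subscribers or much higher prices.
print(months_to_recoup(10e9, subscribers=10_000_000, price_per_month=20))     # ~50 months
print(months_to_recoup(10e9, subscribers=10_000_000, price_per_month=2_000))  # ~0.5 months
```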
Well, yeah, and a lot of, I mean,
won't a huge amount of Stargate's resources
just go towards servicing models?
Yeah, inference potentially.
But I mean, I feel like a lot of the narrative
is like training, but obviously it will be both
Alibaba chairman Joe Tsai has raised the possibility that we're in the middle of a data center construction bubble, and obviously Amazon and Microsoft have pulled back on some data center plans. Altman is undeterred, still seeing compute capacity as a precious resource.
We need more compute, more capital.
He says, we want to have access to a lot more
of the machinery to make AI and run AI.
And I mean, yeah, like the GPUs do go on fire
and there are limited resources still,
even with the models.
I mean, obviously there's a ton that you can do on
compressing the models.
Yeah, the fact that you're getting rate limited
by Google as a paid customer.
It's insane.
Yeah.
It's insane.
Like, it is the biggest bull case for building more data centers.
But I honestly put that less on their cloud offering
than just some PM was like, oh, no one would wanna
generate more than three videos in one day or whatever.
Totally. Or like, let's just roll this out slowly. Um,
Altman is undeterred, still seeing computing capacity as precious. The Abilene skyline is made up largely of buildings constructed in the 1920s during the Texas oil boom. Among the city's major employers are a nearby Air Force base and a cheese production plant that opened in 2022. Well, meanwhile, activity at the Stargate site, spanning 900 acres, larger than New York City's Central Park, moves at the pace of a boomtown.
And I was talking to some real estate folks
and I was saying like, you should probably just go
to Abilene and start building like Walmarts
and gas stations and everything.
Cause like it really will be a boom town.
They're gonna be building this for a long time.
Yes, these are gonna be somewhat human resource light,
but there still will be a ton of people working there,
ton of people getting paid a lot.
They'll want bars and restaurants
and the local economy should,
it would be impossible for it not to flourish.
The economy booms when you put in a soccer stadium.
You're talking about $10 billion of capex.
I wonder if there's been any people going.
I mean, I'm sure that Stargate Project in general
has just been buying up surrounding land
to just sort of mitigate potential risks
for future issues.
But if you own like an acre, even just a single acre
that's not connected to roads at all within this area,
you could probably end up selling it for, yeah, it's smart, you know, a million. Guys, hold on and make them build the data center around you. You need a helicopter to get in. Look every year at this quaint little house on the prairie, and then it's a little microclimate, yeah, because of the heat coming off. Have you seen that photo in Virginia where basically there was this nice little house and then AWS, Amazon, just built up these massive data centers, and so you just see this nice house, and you pan up, and it's just data centers as far as the eye can see.
It's like, jeez.
Beautiful.
Rough.
The team behind the Abilene data center conceived under the code name Project Ludicrous.
Nice code name.
First put shovels on the ground in June of 2024.
Crusoe decided to fly in workers from all over the country
because the city of 129,000 people didn't have a labor force large enough to support it, Lockmiller says.
It's project-
129,000 is way more than I originally kind of imagined
for this town.
Probably got some good high school football going on.
That feels more like a city than a,
people like to position Abilene as like,
oh, this quaint little town.
Yeah, don't hit on Abilene.
It's projected to take slightly more than two years
for the entire project to go from start to finish,
although parts of the data center
are slated to go online sooner.
Towns like Abilene are usually on the losing end
of technological change, which tends to create
most jobs in the largest cities,
and data centers traditionally don't employ that many people.
Crusoe has promised officials that the Abilene site
will deliver 357 full-time jobs
once construction has finished.
That's not that much actually, that's pretty small.
I thought it'd be higher.
For a $500 billion project.
Well, I guess it's 10 billion in this version or what are they?
a little bit more ten or fifty, but yeah, that is crazy low numbers, but
The AI agents... I hate to be contributing to this, but I backed a company that makes robots for data centers. From 350, we're gonna get that number down to five. Yeah, that's the goal.
Yeah, I mean, it's basically the idea
that data centers are flat,
and there's routine maintenance that needs to be done.
No, it's reasonable.
And it's very much a controlled environment,
and just driving a little Wally-style robot around
makes a lot of sense.
We're talking about how much of a boomtown it is, and it's like 357 jobs for now, on a base of a hundred and twenty-nine thousand existing people. It's like nothing. I feel like there will be more knock-on effects, though. Anyway, the other one, about the Chipotle. There's Chipotle opportunities, yeah. Those three hundred and fifty-seven people are gonna be eating a lot of double steak burrito bowls. Yeah.
The Abilene government is hopeful
that it is the start of something bigger.
The city and county agreed to extend significant tax breaks
to get the deal done, and local officials
saw an influx of requests in the days
after Stargate was announced from housing developers,
infrastructure, energy infrastructure vendors,
would-be investors, and other municipal governments
desperate to bring jobs to their town.
It will impact the rest of the economy.
Our restaurants, our home builders, our Chipotle's,
with many new people coming in and taking these jobs.
The city was able to accommodate Stargate's water needs,
which are small compared to a typical server farm
due to a novel cooling method
that recycles most of the liquid.
Let's hear it for the fake news that ChatGPT is destroying the ocean. It's not true. They figured out how to recycle the water, of course, because water's expensive and why would you waste it? The trade-off is that the water is not being massively contaminated like in oil production fracking. And they need a ton of electricity; data centers at this scale are unprecedented.
This was cause for concern given the memory of the 2021 Texas blackouts, which led to the death of six people in the county, still fresh in people's minds. Much of the blame for those outages belongs to Texas's deregulated energy grid, but the flexibility that comes with the lack of regulation was part of the allure for Stargate. Crusoe went out and bought gas turbines and oversaw the construction of a natural gas plant on site, which supplements what the data center can get from the local utilities. Yeah, I talked to an investor who knows Chase and was like, he just has an uncanny ability to find energy,
Which is like I don't know something I wouldn't even think about
in terms of like what the skills of a founder are necessary,
but he's like extremely good at finding
like opportunities and pockets of energy
that are being underutilized.
So-
I'm technically millennial,
but I was fairly close to being Gen Z by a few days.
Was that like rizzing energy?
Like I'm just trying to put it. He's an energy,
he's an energy. No, it's like, it's like, I think there are a lot of, there are a lot
of companies out there that think kind of along the lines of like a single energy source.
So there is a lot of, of natural gas here. Chase is very good at putting together different
opportunities. There's a existing plant, we can build a new plant,
there's wind, solar, like, very energy agnostic,
but how do you piece all those together
to meet the actual demand on an ongoing basis?
To Stargate's critics,
the idea that the project's backers could maintain this pace
and scale across the country is reason
to think they're living in a fantasy.
Amodei, co-creator of the scaling laws,
publicly questioned Stargate's seriousness,
calling it chaotic.
Musk, breaking with Trump, called it fake.
Altman's retort, he says all sorts of crazy stuff.
Salesforce CEO Marc Benioff said in a post on X in April
that Stargate signaled an end to the honeymoon
between OpenAI and Microsoft.
The project does reflect a shift in the relationship
between the two companies.
They revised their cloud contract in January
to allow OpenAI to use other vendors, such as Oracle,
as long as Microsoft still has right of first refusal.
Microsoft doesn't intend to invest in Stargate,
but was listed by OpenAI as a technology partner
in the project.
And this was interesting because they-
Can I just put your logo on the deck, please?
Can I please just put your logo on the deck, please?
Can I please just put your logo on the deck?
Yeah, yeah.
There's this amazing...
Microsoft is providing exchange.
There's this amazing line from this guy, Aaron,
buddy of mine over...
We'll be using Microsoft Excel on site,
so they're a technology partner.
Yeah. That's it.
But yeah, remember the Satya quote, where he's like, I'm good for my 80 billion? He's just like, I'm good for my 80 billion, generically, CapEx, it'll happen somewhere, it's not necessarily an investment in Stargate.
I'm gonna butcher the exact quote,
but Aaron Frank has an iconic post at one point,
or maybe he just said this to me in person,
he just said, some people just wanna see my name in there.
Some people just want my name in their deck,
because there was like this fintech boom,
and people would
just come to him and be like, please, can I give you 50 bips to just put your name on it, because at some point he created the card that became the Apple... okay, what was it called again? You don't hear much about it. No, the Apple Card. Remember the Apple Card? I think it was called the Apple Card, but his company became that.
Yeah, Apple Card.
You don't hear much about the Apple Card anymore.
They had to deal with Goldman,
and then they kind of fell off,
and maybe they're not doing it anymore.
Did people get those?
I never had one.
Was it good?
Did it have cash back on stuff?
It was pretty good.
Okay, anyway.
Unlimited daily cash back.
I'm not really a credit card points maxer or into credit card stuff. Never been big into that world. It's management consulting coded. It's like every guy I know that works at McKinsey or Bain or BCG is chasing points. I'm just like, I don't care, just tell me which card to get and I'll use that for a decade, and they'll squeeze me for an annual fee probably, and I'm probably way underwater on it because I'm not points
maxing. Anyway, if things go as planned, Abilene could serve as a model for the next of Stargate's developments. The day after Altman first visited the Abilene site in early May, he testified before Congress, asking for help to fast-track data center permitting on future locations. The first site was incredible, we need a lot more of that.
Crusoe is evaluating another potential spot
in Amarillo, Texas, which it hopes
could be part of Stargate, according to a person
with knowledge of deliberations who asked not to be named.
OpenAI has said it's also looking in several states,
including Oregon, Pennsylvania, and Wisconsin.
And while Stargate might rent data centers
already being built elsewhere,
meanwhile, OpenAI and Oracle are taking the spirit of Stargate International with an AI
data center under development in Abu Dhabi, which will not be part of Stargate LLC, but
will count OpenAI as a customer.
That's simple.
If all that wasn't enough, Stargate could even expand to include semiconductor production,
according to Altman, perhaps as soon as next year.
I think of Stargate as the AI factory.
If we don't have to be involved here, maybe we wouldn't.
But if we could just magically spin up all the compute
that we needed in the sky, but at this kind of scale,
you can't do that.
And so there's another interesting angle here,
which is the idea of building an AI data center in space.
We're having a couple of folks to come on the show maybe
next week to talk about that. There's a company that's thinking
about doing it. There's some investors that are looking into
it. It's sound silly sounds extremely buzzwordy and sci fi
it's like hard tech space and AI and data centers. But think
about it, you get you get energy from the sun from solar panels
basically for free.
And then you also get cooling because you're in the vacuum
of space and it's cold for free.
And so you don't need probably water cooling
and you don't need as much energy.
And so the economics might work out potentially in GPU.
You don't need land.
Pretty, yeah, you don't need land.
And so you've cut out a bunch of costs
and then added Starship or Falcon
9 launch costs which are significant, but maybe small. I cannot wait for the day
where a satellite is ghibling for me. There's knowing that I can create the
most silly picture in my mind and then send it up to space to get made and then
have it sent down.
Just send it back whenever you get around to it.
It could be tomorrow.
And then it says, you've run out of requests.
We don't do much anymore.
When's the last time you saw a Ghibli?
It's been a while.
I have been using, I've been using images and chat, chat GPT a lot, but on a lot more
like practical sense,
just trying to visualize different things.
I like the library feature.
I told you I've been using chat GPT images
to get my three-year-old to eat dinner.
Yeah, you said that was working really well.
So basically bite by bite, I will generate a new one. If you take another bite of chicken and chew it well, I'll generate another picture of me as a dinosaur. My new go-to: instead of asking, turn this into a Studio Ghibli, I just say, turn the image into bodybuilders, and I've been having a lot of luck with that. I sent one to the chat. We'll try and pull it up. Get this pulled up, it's great. Oh, but that's the original picture. Yeah, this is the original picture first, of the OpenAI team. There they are. Yeah, okay, and then: can you make them look like bodybuilders? Yeah, for sure.
Yeah, I enjoy that.
Also just making, yeah, just a lot of Cadillac escalades
with GT3 liveries and a lot of TBPN merch ideas.
A lot of stringers, a lot of suits with ads on it.
Speaking of which, we should do some ads.
Go to Vanta.com, automate compliance, manage risk,
prove trust continuously.
Vanta's Trust Management Platform takes the manual work
out of your security and compliance process.
If you're doing anything important,
you probably should be on Vanta.
For sure.
You heard it here first.
You heard it here first.
Go to Vanta.com.
You should also get on Numeral, sales tax on autopilot.
Spend less than five minutes per month
on sales tax compliance.
Benchmark Series A.
Benchmark Series A.
Benchmark Series A.
Anyway, do we have time to do, we only have 15 minutes until our next guest.
Should we do a little timeline?
We should do a little timeline.
We got Austen Allred coming in.
We're going to ask him about AI coding in the era of agents.
We can cover Dario maybe tomorrow.
I also want to go through... oh, we didn't get a chance to dig through the Craig Federighi and Apple AI piece in Bloomberg, but there are some really hilarious points in here. One is that apparently when Steve Jobs was trying to acquire Siri, he called the CEO and said,
I'd like to buy the company.
I think this is the future of computing.
I think voice interfaces, artificial intelligence,
this is what Apple needs.
We wanna buy your company, bring them in-house.
The CEO of Siri said, no,
I don't want to be part of Apple, I'm good.
And Steve Jobs proceeded to call him every single day
for 24 days straight before he got the deal done
Insane, absolutely insane. You don't hear about that a lot. Yeah, you get this idea in your head of somebody that is a demigod figure, and you think that they don't have to be cringe to get things done. Yep, and turns out sometimes you gotta cringe-max
and do whatever it takes.
And sometimes that's calling somebody every single day.
It must just be so funny to pick up the phone.
Imagine you're like, yeah, walking your dog or, you know.
It's like Steve Jobs calling again.
Oh, he's leaving a voicemail.
Oh, hey Steve. Hey bud.
Still not interested. Hey bud. Yeah, I'm not interested. Still building my company, still grinding. Steve Jobs would get sort of total privileges to call people bud. Totally, one of the few. When you're at the top, one of the few that earned it. You're the greatest entrepreneur of all time. So if you're not Steve Jobs, don't use the word.
Just remove that from your vocabulary.
Let's go to some timelines.
Zach Kukoff's talking about, oh my god,
we're really going to speed run 2008 with collateralized bulls.
Klarna's losses have widened as more consumers fail to repay loans. Unfortunate. Klarna, basically, is probably speed-running the risk management that credit card companies have done for a really long time.
I wonder what's actually going on here because the whole like, yes, you can buy now pay later
on your burrito.
That's silly.
But that's probably not the vast majority of their underwriting, right?
Like that's the meme, but that's not actually what's driving buy now pay later activity. I wonder if in the lead-up to IPO they they loosened underwriting restrictions to try and ramp volume and
Because it seems like clarina have they gotten out no right?
They're they're they're not public yet, right?
They have not successfully IPO'd and so I imagine that in the lead-up to the IPO they're trying to do a bunch of things
There's that whole news cycle around, oh, we're not gonna hire anyone, we're gonna use AI for everything. Obviously that's sending a signal to the market that, hey, we're gonna be an ultra-efficient business, the margins are gonna dramatically improve, we are taking AI super seriously at this company, and so, you know, our opex should be really low over the long term. You can underwrite the IPO against a really low SG&A cost, right? Yeah. And then also on the revenue side, they're probably trying to do everything they can. Maybe they went too far and now they're generating losses, which is obviously not... yes. The issue is they doubled losses.
So their net loss for the first three months of 2025
was around a hundred million.
A year ago for the same period, it was 47 million.
Is this from like the S1 or something?
Like, why is this data public?
I think they're trying to get out.
And so I imagine they need to do some type of reporting revenue increased 13% year over year
to 700 million for the quarter.
And again, this is a company that in 2021 was valued
at $45 billion most recent private Mark was around 15.
So unclear what they'll do.
They put their IPO plans on hold last month
along with StubHub and again,
they had been on this marketing tear, right?
Remember the CEO came out and he was like,
we're using AI for everything.
We're letting everyone go.
And then he says, actually, no, we're not doing that.
It's not good enough yet.
I don't like when the IPO window's closed.
I like an open window. Yeah. I like companies going out every day.
I remember you punched a hole in the wall at the gym when you heard the IPO window closed. Yeah, I was like, John, why is your fist two feet into the wall? John, you were trying to get it out. You were trying to. Too frustrated. Pissed off.
Well, if you're trying to track all these IPOs, trying to get in on the action, head over to public.com, investing for those who take it seriously.
They got multi-asset investing, industry-leading yields,
and they're trusted by millions.
Also, Jordy, how'd you sleep last night?
I put up some good numbers.
I'm climbing up the ranks.
I'm getting back in the game.
I might even be beating you back to back days.
Let's see, how'd you do?
What's your number?
What's mine?
No, you go first.
Okay, well, one second.
Get a Pod 5 Ultra.
They got a five-year warranty, a 30-night risk-free trial.
I got an 80, John.
I'm sure you beat me. Oh, I got a 94.
Smoked. Get out of here. Try 96, actually. 96.
Boom.
John, I'm proud of you. Yeah, still not good on the consistency, going to bed at all random times, waking up pretty consistently at 5:45.
Anyway, all right
Come on, we kind of covered this already. This one: the lion does not concern himself with SOC 2 compliance. That's because the lion uses Vanta, right? Exactly. So I think this is sponsored, maybe. Yeah, this is an ad. Hey, this is an ad for sure. Andy, you've got to be disclosing. Yeah, you have to do hashtag ad. Or maybe he just loves Vanta so much. Possible. It could just be that, you see. Yeah. But anyway, obviously don't concern yourself with SOC 2. Be a lion and get on Vanta.
Art over at Brex announced a partnership with Zip, a big procurement platform. Apparently Zip is taking a step back
from the corporate card market.
And so this partnership allows them to eat.
Oh, interesting.
Okay, that makes a lot more sense.
So Zip had a corporate card,
and now they're just partnering with Brex for this.
And Brex had a procurement product,
which they rolled back.
And that kinda happened too with Stripe. Stripe had a corporate card for a little bit and then stopped that, right?
Isn't that what happened?
Yeah, everybody was like, wait, corporate cards, good business,
we should do corporate cards.
Turns out, hard to do.
Yeah, well there's always a question about
where does the corporate card sit
because Stripe does not sit in the same place
as like Ramp in terms of the CFO suite.
It's more of almost a developer tool.
It's less.
Well that was their initial wedge.
Yeah, but it's less about like an accounting product.
It's more about like the payments and engineering product.
Although obviously it's very tied,
but I don't think Stripe's ever gone into travel
or bill pay.
Like they haven't focused on that as much.
Anyway, Art says, this is two YC companies with cultures of shipping fast and collaboration creating something truly special. We love to see it. YC companies, obviously. The most recent collaboration was between Deel and Rippling, also YC companies. Yes. A little too much collaboration, maybe. They were partnered in some ways because they were both paying the same people. There was an employee, yeah, they were paying the same employee.
Yeah, anyway.
Back in 2021, I approached Zip's CEO, because I saw Zip doing something remarkable in the enterprise segment, a market Brex was determined to win at the time. The CEO of Zip pushed back, saying Brex was too focused on startups. What would he get from this?
Fast forward to March 2024, Brex has built their base of enterprises and many were
demanding our products to work seamlessly together.
Both companies had also retrenched to focus on core strengths.
Zip stepped back from Card.
We stepped back from enterprise procurement.
This is a classic competitors turned co-creator story.
Zip brings world-class procurement orchestration.
Brex brings global card capabilities and spend management across 50 countries. So congratulations to Art and Brex. Art, fellow LA guy, and I have some advice for Brex: why don't you run a billboard to promote this on AdQuick? Yeah, get some. Brex actually has a ton of billboards, right?
They, I mean, they ran the billboard in many ways.
They did invent the billboard.
They should get on adquick.
They put the billboard economy on their backs in 2020, 2019,
right?
Yeah.
It was like, it was hard to get a billboard
because they were on all of them.
So head over to adquick.com.
AdQuick: out-of-home advertising made easy and measurable. Say goodbye to the headaches of out-of-home advertising. Only AdQuick combines technology, out-of-home expertise, and data to enable efficient, seamless ad buying across the globe.
Double kill.
Great, great transition John.
We got a post here from TJ Parker
who will be on the show Thursday.
Again, he's coming back on.
He says, one thing I'll never understand
about healthcare startups is the instinct
to take a strategic route rather than an administrative, tactical approach. You don't need a top-to-top meeting to get in network with a payer. Just fill out the paperwork. You don't need a strategic relationship with Quest or LabCorp to integrate scheduling or get competitive pricing.
Just work with an existing vendor. If things go well, you can elevate to a strategic relationship over time, but you'll never get off the starting line with the strategic approach.
Will has a funny reply here.
It's very funny when people tell me they're struggling
with getting in touch with payers.
It's like, have you tried going in the front door?
Timeless advice, timeless wisdom.
I wonder what's going on here.
Is this just because there's a lot of tech people
who think I'm gonna build tech for healthcare
and then they're just completely unfamiliar with them?
They think it's all about partnerships
versus just making stuff happen.
Or they're just unfamiliar with this market structure
because TJ has been, I mean, he was a pharmacist before.
He really understands the industry
and has continued to understand the industry even post-exit.
Whereas I think a lot of people are like, I build software, I know that healthcare is a problem,
so I'm going to build healthcare software, but they don't understand like the structure of the
market at all or how the deals get done. And so they got to learn stuff from, from TJ's posts.
Anyway, we got a post here from Andrew Reed in the news. Authentic Brands buys Dockers from Levi Strauss,
a little mix up in the pants market.
Little switch up. And Andrew says, "Docker the containers overlapping with Dockers the pants, both making hundreds of millions selling to system admins."
Yeah, you know Andrew is a great investor,
but he puts his pants on one leg at a time
like anyone else.
Yeah.
Let's get him some dockers.
Let's see.
Get him some dockers.
I somehow think he's too stylish for that.
Yeah, I can't see him wearing dockers,
but we should get him a TBPN tie.
Yeah, tie would be good.
Which we've been working on with ads on, of course.
Yeah, last time he came on the show he was wearing an actually really nice Figma sweatshirt, but it didn't have a collar. So, you know, docked some points for that. Yeah.
Docked. Docked. Well, you know, we should also tell you about
wander. Find your happy place. Find your happy place. Book a
wander with inspiring views,
hotel-grade amenities, dreamy beds,
top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better, folks.
John Andrew, founder of Wander, was teasing
how excited he is about some big news,
and I couldn't possibly guess what the news is,
but we're gonna have him on the show next week
to talk about it. Tane, friend of the show,
says OpenAI tried to partner with Google
to use its search API to power real-time results
in ChatGPT's search offering.
Google said no, citing too many complexities,
and this is from Michelle Fradin.
Or Fradin.
Interestingly, she was at Sequoia,
she was on the FTX deal,
and then went over to OpenAI
right before the Sam Altman firing,
and so it was just like chaos for a while, but it seems like she's settled in over at OpenAI
and is doing well, but getting some pushback
from the Google folks.
She says, we hope you're doing well.
We wanted to follow up on our discussions
over the past few weeks regarding using Google Search API
to help power ChatGPT's SearchGPT prototype and search functionality in ChatGPT. As mentioned, we are currently testing a prototype, SearchGPT, to learn more about how users and publishers use AI chat experiences coupled with real-time information and search-like features.
On July 25th, the day of our public announcement,
we reached out to Google about using its search API.
We believe having multiple partners, in particular Google's
API, would enable us to provide a better product to users.
It's like, not just destroy you.
We also understand Google publicly offers its custom search JSON API to developers looking
to incorporate Google search results into their products, providing additional precedent
for us to work together.
So basically asking like, hey, you have this service that you offer to a lot of people,
like, can you offer it to us?
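For reference, the Custom Search JSON API mentioned in the email is something Google already documents publicly, and a minimal call looks roughly like the sketch below. This is just an illustration of that public endpoint, not anything from the OpenAI email; the API key and search engine ID are placeholders you would create yourself.

```python
import requests

# Minimal sketch of Google's public Custom Search JSON API.
# GOOGLE_API_KEY and SEARCH_ENGINE_ID are placeholders you would create
# in your own Google Cloud / Programmable Search Engine console.
GOOGLE_API_KEY = "YOUR_API_KEY"
SEARCH_ENGINE_ID = "YOUR_CX_ID"

def google_search(query: str, num_results: int = 5) -> list[dict]:
    """Return title/link/snippet dicts for the top results."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": GOOGLE_API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": query,
            "num": num_results,
        },
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return [
        {"title": i["title"], "link": i["link"], "snippet": i.get("snippet", "")}
        for i in items
    ]

if __name__ == "__main__":
    for result in google_search("openai searchgpt announcement"):
        print(result["title"], "-", result["link"])
```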
And they're like, it's basically, hey, do you mind letting the fox into the hen house? It's the fox sending emails to a chicken in the hen house: hey, can I come in? I'd love to check out the house. I thought I might ask, it seems like a great house. I think the other foxes in the fox house would like it. And Google, the chickens over in the hen house, are like, I think it'd be kind of complicated. Google said no. Declined, because we believe that you are a wolf in sheep's clothing. Well, yeah, cheers to Sundar for not letting the fox in the hen house. And he lives to fight another day, built his own Gemini, built his own apps.
And, frankly, built different. Built different. We're excited to talk to some folks over at Google
about the hen house protection strategy
and the Fox defense system.
The Fox defense network.
The Fox defense engine.
Should we talk about Coinbase?
This is a rough one.
Michael Arrington is very upset with the Coinbase news.
He says, I'm a long time investor and champion of Coinbase.
Something that has to be said though: this hack, which includes home addresses and account balances, will lead to people dying. It probably already has. The human cost, denominated in misery, is much larger than the $400 million or so they think it will actually cost the company to reimburse people.
The consequences to companies who do not adequately protect their customer
information should include without limitation prison time for executives.
Very disappointing Coinbase right now,
using the cheapest option for customer service
has its price and Coinbase's customers will bear that cost.
And so-
Yeah, so for context, if anybody was living under
a data center, Coinbase effectively was off-shoring
their CX function.
Certain CX reps had access to consumer data,
including home addresses and account balances,
which they then effectively sold or were bribed
into passing to nefarious groups
who then were threatening Coinbase,
saying they were gonna leak it.
Coinbase came out and said, you know, kick rocks.
We're gonna pay basically a bounty
that leads to the arrest of these people.
The issue obviously with home addresses
and account balances being online
is that bad actors could just find the home address of somebody with a high Bitcoin balance, go there, and with the threat of physical violence say, send me your Bitcoin right now, or even worse, abduct them.
And so Michael is basically saying that there are gonna be real, real consequences here. Coinbase came out and said that anybody who falls victim to various social engineering hacks because of this, they're basically gonna reimburse them.
Apparently they expect to spend quite a lot of money, hundreds of millions of dollars to reimburse those people.
But Michael is saying that the true cost is much higher.
And apparently there had been just like
a bunch of other issues here.
I mean, who knows what's actually going on.
I'm sure that...
I do wonder like how much of a cost center
is customer service for Coinbase?
It seems like it's a big company,
very profitable, has done very well.
If they doubled the cost of the expense,
would it get better?
Is customer service really something where spending more money on it is a lever, or is this something that's a little bit more intractable? The other interesting thing is, I do wonder; the internet is pretty locked down, I feel like. Yes, there are addresses that can be sold on the dark web, but it's pretty hard to just put up something that is illegal on the internet.
Like all the major platforms take it down,
Google shadow bans it, the links get banned,
the browsers can be, like, it's not that crazy to-
Yeah, but it's fairly easy for this information
to just be like a file, an Excel file.
Yes, but at the same time-
That then criminals can transact and sell multiple times.
At the same time, like, let's say that I'm like a,
you know, the type of criminal that does,
that does just show up to a house
and put a gun in someone's face.
Well no, the real, the immediate risk is social engineering.
It's somebody calling you from a number
that's spoofing Coinbase and saying,
hey, I noticed some activity.
I need you to verify this, give me this code.
So that's the immediate risk.
Second order effect is that a criminal could just show up at somebody's house
and demand. Yeah. Yeah. Yeah. Yeah.
So the type of criminal that just shows up at people's houses,
like how are they actually going to get this information?
Cause they're not just going to Google it and it's not just going to show up
there. They're going to have to find it somewhere on like through some sort of
hacker network. Are they paying for this?
Like it feels like this information can be somewhat controlled,
like credit card numbers leak all the time. And, and like the,
the global cyber security industry is pretty good at making these not just
proliferate to like script kiddies and like random criminals. Like, yes,
there are organizations that try and like programmatically like run up a ton of charges
and just like spam the network,
but in terms of these one-on-one dangerous interactions, I'm not so sure that it's gonna happen en masse. Obviously it'd be terrible if it did.
It's a very big deal and it should be taken very seriously.
But I do wonder if there's a way to kind of like
not allow this information to proliferate. Ironically, like what if it goes on the blockchain and it's just available?
It's a very knotty issue with, you know, censorship and things.
But Balaji had a good response here to Arrington. I don't have a clip, but we'll have to talk to him about the future of KYC. His argument was that part of the problem is that Coinbase is required to keep all this information when they might not actually need to if the government didn't require it. And so it's this double-edged sword: the more information that you keep on someone, the more information that can be leaked. Anyway, we'll dive into that more next week.
For now, we have Austen Allred coming into the studio.
How you doing, Austen?
Good to hear.
Hey, good, how you guys doing?
What's going on?
How are you?
I'm good, yeah, hanging out.
Yeah, I wanted to have you on the show for a few reasons.
Obviously, there's a lot of stuff happening
with AI coding agents.
What's this whole AI thing?
Yeah, what's this whole AI thing?
The whole AI thing.
Haven't heard about it.
Yeah, I mean, the high level that I would...
Maybe just start with an introduction on yourself and what you're up to now, what you've done
in the past, and then we can kind of go into the structure of AI in the workplace,
how people are learning to use AI,
and then kind of some of the disruption
that's potentially happening with AI tools
actually replacing jobs.
Yeah, for sure.
So a little bit of background, co-founder of BloomTech,
which used to be known as Lambda School
before a trademark lawsuit.
It's technically still the same company,
but about 18 months ago, we started talking
to people about this newfangled AI thing and if it had any impact within engineering teams.
And frankly, I was pretty skeptical in the early days, but we sent out a research team.
They looked at what everybody was doing. And at the time, it's not quite that way anymore.
But there were a handful of people who had sat in the basement
playing with LLMs, seeing if they could make them write real code for a couple of years.
And when we saw that, we were like, oh, wow, this is actually working. It's not mainstream at all
yet, and at the time, even less so. And then working with a couple of those companies that
we were... So we started training engineers to use AI, you know, use AI really well. And it's one of those things where there are no best practices.
Everybody's learning more every day.
The game completely changes every day, let alone every week.
Um, and then one of the companies we were working with brought us in to do a, you
know, fully intensive hundred hour a week, find all the people with 98th percentile
IQ and above, fly them into Austin,
train them over the course of 10 weeks to be expert at using AI and kind of staying
on that leading edge, riding that wave up. And then they hire them on the other side
and pay us to do that. Now we're expanding that to more companies, but I basically spent
the past several months holed up in a little office with a bunch of sweaty people
Trying to figure out how to max out AI and figure out how to stay on the cutting edge of using AI
to build software. Is the shape of that more like a PM who's now able to write code or instantiate their ideas in code, or is it a software engineer who understands some basic programming that can then write more functional code or just be more performant? Which angle is it? Both? Or is one narrative driving more of the AI adoption?
Yeah, I'd say it's more the latter so the more experience you have as an engineer,
the more it sucks to start using AI because you're going from here to here as far as code writing ability goes.
So in the early days of Gauntlet,
we force everybody and the more senior you are, the more you hate it.
We say you can't write code manually.
We put software on everybody's computer that watches them.
And AI has to do
all of the code writing for you. And everybody thinks that's the dumbest thing they've ever heard,
and they fight against you for a week, week and a half. And then the more senior you are,
the longer it takes for you to actually get to parity. But you'll never go back once you get there.
It has fundamentally changed the software industry forever in ways that people don't fully appreciate yet.
Entire companies are gonna have to rethink the way they do everything.
And we're only seeing the very, very,
very early stages of that.
Do you guys have your own kind of messaging internally
with the students around the framing of like vibe coding?
Because it's maybe, it's like this cool viral phrase people use.
Yeah.
But it's kind of different when Andrej Karpathy does it.
But it's also different if you're
doing it in this hyper intentional way,
trying to effectively get to the same quality and consistency as regular software
engineering but just doing it in a super AI native way. Yeah I feel like that's
muddied the waters quite a bit. I mean when I say vibe coding most people
imagine somebody who doesn't understand software, kind of blindly trying to one shot applications,
which it's incredible that that works at all. But when we think of, you know, whether you call it vibe coding, or, some people started calling it super building internally, I don't know if that ever made it out, but using AI to write performant, enterprise-grade, bulletproof software, there's not
a great distinction for that out in the market,
but we do view it very, very differently.
There's a time and a place for both.
If you're writing your own personal software
or a little internal tools, like, yeah,
just blindly telling an LLM to do something might work,
but most of what our engineers are doing
is work in very enterprise, very legacy, very big, large-scale applications, where
security is taken care of, the formatting's done correctly.
It's scalable and secure and all the things
that software needs to be.
What are you seeing the most success with
in terms of tools?
There's so many different options every week.
There's a new tool, right?
Maybe they were using Devin at some point,
then they're using Claude,
and then they're using now Codex.
What's your kind of, how are people prioritizing?
Are they using things like Replit?
What does that actually, what does the stack look like?
Yeah, it's not an exaggeration to say
that it changes every week in a very fundamental way.
So, I remember when Claude 3.7 came out. Before Claude 3.7 came out, pretty much everybody was using Claude 3.5 as their daily driver. And I feel like Claude 3.7 was the first kind of breaking point where that was no longer true. So before, Claude had basically a monopoly. If you were to walk around the Gauntlet offices and ask everybody what they were using, 95% was Claude 3.5. With 3.7, it worked better for some people's workflows and worse for other people's workflows.
Grok is gaining ground.
It doesn't, you know,
most people don't have API access to it yet.
So, you know, you have to use Repo Prompt or something to get it into your IDE. So that's a separate thing. I would say, on average, for people who are working on really big, really enterprise applications, Claude 3.5 still seems to be winning out even over Claude 3.7.
And Gemini, I think, kind of changed the game.
When was that?
Was that two weeks ago or a year ago?
I kind of lose track at this point.
But it has a much broader context window.
So if you have a small code base, you can get most of the code base inside the context
window as opposed to Claude 3.5.
Claude 3.5 has proven to be better for more surgical stuff.
Claude 3.7 is like Claude 3.5,
but it kind of runs away and tries to do a lot.
So some people like that, some people don't.
Every engineer ends up with their very own, well-orchestrated workflow at Gauntlet, and they do things their own way. Whether it's, you know, I build all the front-end stuff really, really well and make sure that's all working, and then I try to build a backend behind that, or I diagram everything and plan everything out with a model like o3, and then I'm creating a checklist and pulling pieces off the checklist. Everybody has slightly different workflows, but it's not like what you see on Twitter.
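As a rough illustration of the workflow he's describing, getting most of a small codebase into a long-context model and asking it to plan a checklist, here's a minimal sketch. It is not Gauntlet's actual setup; the model name, file filters, and prompt are assumptions, and the OpenAI Python SDK is used simply as one example of a chat-model API.

```python
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

# Minimal sketch, not Gauntlet's actual workflow: gather a small repo
# into one prompt and ask a long-context model for a build checklist.
MODEL = "o3"  # placeholder; swap in whatever long-context model you use

def gather_codebase(root: str, suffixes=(".py", ".ts", ".md")) -> str:
    """Concatenate source files into one labeled blob for the prompt."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            chunks.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(chunks)

def plan_checklist(root: str, goal: str) -> str:
    """Ask the model to produce a step-by-step implementation checklist."""
    client = OpenAI()
    context = gather_codebase(root)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are a senior engineer. Produce a step-by-step implementation checklist."},
            {"role": "user", "content": f"Goal: {goal}\n\nCodebase:\n{context}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(plan_checklist(".", "Add rate limiting to the API layer"))
```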
What are you seeing on the other side,
on the hiring side,
are companies expressing preferences already
to you to some degree around,
hey, this is the stack we're using internally,
we want people to kind of be ramped already
on this set of tools, or is it
more kind of, you know, bring your own toolkit? Yeah, so there's a spectrum, of course. On one side, I would say there are the companies that, I call them the companies that don't really get it yet. They're like, okay, give me somebody who has five years of experience with TypeScript,
and they understand a little bit of AI,
and we'll drop them into our traditional engineering org.
The companies that are on the other side of that spectrum
are more like, actually, what was this entire team can now be one person.
And what language we use matters a whole lot less than how good they are
at utilizing AI. Because anything that you don't know, AI is better at writing code academically
than any engineer you're going to talk to. Knowing how to get it to write the right code at the right
time in the right place is where all the magic happens. So on the other end of that spectrum,
there are companies who are,
I mean, the average company that we bring into Gauntlet
and we show them what students are doing,
they walk away saying, oh my gosh,
I need to rethink our hiring,
I need to rethink our entire strategy,
our roadmap needs to change.
It is a fundamentally different world
when you're utilizing AI in the correct way.
And then I guess even further on the other end
of the spectrum, there are a lot of companies
who tried it once and wholesale rejected it
and aren't going to really use it
or if an engineer does it at all,
it's on their own time and against the will of the manager.
There's more of that than people would anticipate
if you're spending all day on X like I am.
But yeah, there's a pretty broad spectrum. And I go back and forth with, frankly, how much time
I'm going to spend trying to convince people to do it all the right way or to experiment with new
ways of building companies versus just, okay, you know what, I'm going to go create a holding
company and spin up 10 companies myself and show everybody what can happen. I go back and forth on that. But yeah, it's wildly, wildly different.
What's your read on the hiring market right now? Obviously there were big headlines, maybe it was last week, I believe midweek, about Microsoft. Again, our read on that situation was Microsoft has always taken the approach of cutting the bottom 3%.
There still was a bunch of stories that came out,
people being like, I've been here however many years,
working 12 hours a day.
Klarna says they're not going to hire anyone, but kind of went back on that.
Overall hiring market temperature,
what's it looking like?
I mean, I think, so first of all, I agree with you
that when there's a 3% riff,
that's rarely actually a riff.
That's performance management being done
in as kind a way as possible
so you can give people severance,
which if you're a company like Microsoft
with as much cash as they have,
you probably should do it that way.
That said, generally speaking, what we're seeing is a,
if you're really, really good at being an AI engineer,
I mean, we had, there were times
when we had multiple billionaires wandering
around our office with printed out offer letters
for people begging them to come work for them.
That's what it looks like if you're really good at AI. Most
companies are not really understanding that. So that's not everybody. It is interesting.
I think it's fair to say the junior engineering market is completely decimated. It's pretty
well gone. Even I mean, if you're at Stanford, maybe you can get an okay internship somewhere, but it is, it's Armageddon.
Does that mean that like junior engineers should be vibe coding companies and
starting more apps and trying to just like build lifestyle businesses almost like
what, what is your advice for people entering the workforce in software
engineering capacities?
Yeah, that's a really good question.
It's a tough one because the way that I describe it is someone who's not very good at AI and
AI engineering can still get roughly the output of a starting junior engineer out of AI.
And that's what a lot of companies are doing.
Instead of hiring a junior engineer, I'll just get the output that I need.
There's still, and there always is crazy demand for senior and
high-end and everything else.
I think it'll take a while to figure out how that plays out
because the flip side of that is you don't get
super senior engineers if you don't have junior engineers.
Then you talk to the universities.
And the average university that I talk to
is closer to explicitly and permanently forbidding
any type of AI usage in the classroom
than they are to interweaving it or adopting it
in any material way.
Why is that?
Because it's cheating.
It's so much better at writing code than you are that the professor has no idea if you
know how to write code because the AI will write the code for you.
I talked to a...
I was at the park a couple days ago and was at the swings, pushing the little guy.
And there was another dad there who was a teacher
and he said that a high school teacher,
he said they basically now wait homework
at effectively zero.
It's like kind of check the box
that maybe contributes to a few percentage points
of your grade and then tests are now just a hundred percent,
because it's the only environment in which you can actually prove it.
Would you advise people to learn to code now or has the idea of
learning to code changed in some fundamental way?
So I still think, I feel like I'm almost on an island on this.
I still think you should learn to use AI.
And if you want to use AI in software environments,
you should learn to code.
Now, to me, that looks very different
than it did five years ago.
And our learn to code business,
we've basically shut down, it no longer operates.
I'm not interested in teaching people to code
in the traditional way.
Now the flip side of that is normally when you would train people to code, you would
start kind of close to the metal and at the bottom of the stack.
And so you would start with, okay, let's write binary and then we'll figure out how to write
some low level Java.
And eventually a year and a half later, you start actually building programs that can do things that are interesting to you.
The right way to learn to code is actually, in my mind,
the inverse where you start by building applications,
and then you figure out what's broken and what's missing and what you don't understand.
The thing that is different about that than just creating
a different curriculum is AI understands that.
AI understands.
We had software that would basically watch what you were doing on your computer and from
that we could derive what CS principles you understood, what CS principles you didn't
understand.
We could have AI generate a custom curriculum that meets those guidelines specifically.
And we can fill any gaps in a way that I would have killed for a decade ago when I was training people how to code.
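As a hedged sketch of the idea he's describing, infer which concepts a learner has already demonstrated, then ask a model to generate a curriculum covering only the gaps, something like the toy example below. This is not BloomTech's actual system; the concept list, model name, and prompt are all made up for illustration.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat-model API would do

# Toy sketch of the idea described above, not BloomTech's actual system:
# figure out which CS concepts a learner has shown evidence of, then ask a
# model to generate a lesson plan targeting only the gaps.
CONCEPTS = ["recursion", "hash maps", "asymptotic complexity", "SQL joins", "concurrency"]

def missing_concepts(demonstrated: set[str]) -> list[str]:
    """Concepts the learner has not yet demonstrated."""
    return [c for c in CONCEPTS if c not in demonstrated]

def generate_curriculum(demonstrated: set[str]) -> str:
    """Ask a model for a project-based lesson plan covering only the gaps."""
    gaps = missing_concepts(demonstrated)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Write a one-week, project-based lesson plan covering only these "
                f"gaps for a learner who builds apps first: {', '.join(gaps)}"
            ),
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(generate_curriculum({"recursion", "hash maps"}))
```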
So I think you can actually just start by building stuff and then figure out how it works, starting at the top of the stack and slowly working your way down.
And to what extent you go down the stack depends on what you want to do, right? The average engineer today isn't playing with the Linux kernel or writing C,
they're writing JavaScript and they're building applications. To be at that level, you can
probably get there relatively quickly. But I think, I'll credit Martin Casado from a16z with this.
He said, basically, to be a good engineer, you've always wanted to understand one layer
of the stack beneath what you were actually working on.
And I think that remains true.
Yeah.
Give me your view on the AI software engineer market.
OpenAI now has like three offerings: you know, o3 will just write code randomly for you, Codex is a new product, and they now own Windsurf. There's top-down enterprise companies like Cognition and Devin. There's bottoms-up enterprise like Windsurf and Cursor. How does this all play out? What do you,
what are you seeing in this entirely new market
of AI tooling?
Yeah, my first point here is,
we're in an incredibly lucky position
because as a consumer,
you don't have to predict who will win.
You can just figure out what's gaining momentum
and latch onto the best.
And so, I have a lot of empathy
for the companies trying to build those products and spend a billion dollars building a model.
And if something is 1% better, I'm switching to it tomorrow. So that's a great place to be in as a
consumer. How is that playing out? I think everybody's trying to fight similar battles.
Cursor and Windsurf, I think, are uniquely positioned
because they're focused primarily on the UX
of the experience as opposed to the data.
It's kind of in AI companies' DNA to think that every problem is a data problem, and they're, generally speaking, right.
But where Cursor and Windsurf are getting better and better
is, okay, it's really cool that there's this model.
I'd say on average, that is the problem
that AI companies have.
People have no idea what they can do with the models
or how they should be using them.
I think that gets simplified over time
the same way every product becomes,
you know, I shouldn't have to know
whether I want to use o3-mini or 4o.
No normal human's ever going to care about the difference
between those.
So I think they'll just get better and better.
Who wins?
Ha. I don't have a clue.
Trillion dollar question.
Yeah, it is.
If you knew, you'd probably be a VC, if you're in that game.
But thanks so much for stopping by.
We'd love to have you back.
This is great.
Yeah, keep us posted. We'll talk to you soon. Cheers, Austen. Bye, have a good one. Yeah.
Next up, we have Jeff Morris Jr. himself coming in from Chapter One. He's got some big news today. We'll let it... you're gonna press this button. Okay, welcome, Jeff. Jeff, how you doing? Welcome to the studio. Welcome. Come on, guys. There he is.
Hey, technology brothers.
Life goal achieved, I feel like. Let's go.
No, I've seen this progress. I'm just such a big fan of you both, and I won't flatter you for 20 minutes, but really, you guys are killing it. It's awesome.
Well, I remember I think we got breakfast burritos
in Santa Monica, like right around the time
that we were starting the show.
And you probably thought I was a little bit crazy
for going full-time on a podcast,
but you always had faith.
I think you saw the potential from the beginning.
So it's great to have you on and maybe give a quick intro
and then I'll let you talk about the news today
and then we'll get into a bunch of other stuff.
Yeah, so I've been an investor, I guess, for the past five years, full-time, at a startup firm called Chapter One.
We call ourselves the product fund
mainly because all of our backgrounds are in product,
engineering, design, data science,
and kind of take a product-centric view towards what we invest in.
Part of my background, probably most famously as an operator, was as a VP of product at Tinder for a bunch of years; I ran revenue over there kind of during the hyper-growth period of 2015 to 2019 when the company was growing, and now I run the venture firm full time. And yeah, we've evolved quite a bit; today we're sharing some of that news.
Amazing. What's the news? We're not quite set up to have multiple guests on, but we'll have to have your new partner, well, new GP, on soon.
So why don't you break down the news
and kind of talk about the evolution of the firm.
Yeah, so the big news today is Jameson Sidel
is being promoted to a general partner.
We started the firm in 2019 when, there we go.
You know, I think it kind of marks a moment in time
for us as a venture fund, but also maybe more
broadly in venture. We started as a solo GP firm when being a solo GP, I think, was pretty new but also, I'd call it, trendy in 2019. And it's been just really interesting to see the peer group go in different directions. Of the people who were solo GPs at the time, some have continued on that path. Other people have gone back to operating. And then I
think there was another cohort of people who want to build
partnerships and firms. And so I think, you know, there's all
these right ways to do venture, whether it's building a firm,
equal partnerships among the GPs, and there's this constant
debate,
but I think we've really been clear
on what we wanna become and today,
building a partnership is obviously
what we're announcing today.
That's amazing.
Had this been your plan from the beginning?
Did you realize quickly that you wanted investing
at the firm to be a team
sport versus something that was more of this, again, solo endeavor, at least like it was
in the beginning?
I think a lot of us in 2019, we had a plan, but it wasn't-
It was concepts of a plan.
What we've become.
And so I think I realized in like 2020,
probably a year and a half, two years in that
I didn't want to just do it on my own.
And I think there's a lot of people who are
really happy with that lifestyle.
And frankly, it's probably a better financial decision
just to do, raise funds, do it on your own.
But I think to be competitive and to
build a great product for founders,
you likely can do
a better job of that if you bring on partners and build a firm. And so that was the goal in 2021.
It took a while to figure out who that person would be and also the right way to do it. So
a lot of people will tell you to promote people from within and build, you know, people buy into the culture, they know what you're doing, and you elevate internally. Then other people say go find a superstar externally, which at times we also thought about doing, you know, do we want to go bring in like a fancy GP spinning out or maybe a former CEO type? Those
things are just really hard to get right. We're really lucky where Jameson was doing an outstanding
job internally, bought into what we were doing. And it's kind of like a head coaching search, right? You guys kind of have a TBPN model. It's really, I think, alluring to go hire the big fancy head coaching name, but oftentimes promoting the person from within is the best move, because they know your players, your players trust them, and they know the system. I think maybe the Boston Celtics did that really well, right? And so I think that's kind of the decision most firms have to make as they're growing their teams.
She's focused on, I mean, in her bio she says AI, ML, infra. How are you guys making money off of Stargate?
There's $500 billion floating around. What's the angle? We were talking about, just go set up a Chipotle there. Set up, like, basically recreate the product, the 2007 Chipotle, in Abilene. Massive opportunity.
There's 357 workers there.
But I mean, seriously, obviously there's this AI boom.
A lot of the trains have left the station.
I don't imagine you're participating
in the next OpenAI round.
But how do you make money in the current AI boom?
Or are we onto the next next trend?
Yeah, I think there's a couple of angles. One is location. So Jameson lives in London. I think it's a popular narrative right now that all the value creation is happening in the Bay Area. And obviously, I grew up in the Bay Area and I live in Los Angeles, and we've made a decision to focus kind of on the other most important areas in tech. And so her view, and it's my view from spending more time in London, is that there's actually a lot of talent in the universities, Cambridge, Oxford, et cetera.
And also we had a conference in London maybe two months ago
and we had the founder of Granola, we had Anthropic,
we had a lot of people from DeepMind.
And so I think this-
Is the plan to help those founders escape the backwater
that is the UK and Europe?
Because I can't imagine that they're gonna try
and build businesses there, right?
That's crazy.
Yeah. So like, you go there, you give them money and you tell them here, go to San Francisco,
because then you have an actual shot. Is that the plan? I think it's a bit like, there's the Project Europe movement, which, obviously, love Harry and think that's important, but there's another view, which is meeting founders where they are today, and often that is, hey, we wanna raise a seed round and go, you know.
I mean, that's the story of Anthropic and Granola, right?
Like international founders who eventually came to the US.
There's always this question I have about capital flows, right? There's also Sweden, right? Sweden's pretty goated, pretty good, if you get over there. The Nordics, the Nordics. Yeah. There's something special. There's something in there, yeah.
But yeah, I mean there's always the capital flow.
Like there are a lot of great LPs there
that want to invest in American companies.
There's a lot of founders that want to come
invest in America.
Obviously there are some great companies in Europe
where we're joking around.
But getting that capital flow right
so you don't just expatriate US dollars
to underperforming countries is probably a risk
you want to avoid.
But at the same time, it's probably really underpriced
assets that are gonna go find niche markets
all over the world.
Who knows, maybe the next power law company comes from
some bizarre country.
Do you have strong beliefs around the intersection
of AI and crypto? There's been a lot of different attempts and people attacking it from different angles. You obviously have backed a bunch of crypto companies historically.
Yeah, how do we make money off of stablecoins? Ben Thompson was writing about this today, how stablecoins are... we were joking, we've gotta get in on the action. I'm super long stablecoins,
but they haven't moved at all.
My portfolio's flat.
No, it is a tough thing where everyone's just so bullish
on stables and yet it's kind of unclear.
Can we do like a meme coin about stable coin?
USD on pump.fun, just USD coins. Potentially. Just one trillion coins there. USD should be a dollar eventually. No, but more seriously, zero fiat backing. It's just completely, it's just a complete meme. Yeah, unstable coins. Yeah, I think people have tried, everything's been tried. But seriously, with stablecoins, what is exciting? Or, again, with the Bridge acquisition, is
that the end of
the story or the beginning?
I think you've seen a lot of applications, payments-focused applications, using stablecoins to enable their payments. But there hasn't been a lot of value accrual to, like, net new Circle competitors trying to compete with Tether or USDC. And so stablecoins are like the easiest thing to talk about at an AGM or LP conference, because it's like, okay, I can kind of see that use case. And so I think there's a shift to emphasizing stablecoins today. I think the combination of stablecoins and agents starts to make sense,
but it gets pretty blurry in the current use cases of agents and what's possible today.
For what it's worth, we don't have a strong belief on the intersection between crypto and AI today.
The primary use case we like is giving open source developers
a way to actually make money.
I think that's a pretty clear thing that's
been missing if you look at any open source ecosystems.
You look at, I don't know, Bittensor is trying to do this a little bit too. But, you know, I think the worlds are very separate today. If you go spend time in the Bay Area and actually talk to the best AI researchers, nobody cares about crypto in the Bay Area for the most part. And that's totally okay. I kind of like that crypto is still very weird and unliked by most people.
Let's get Linus Torvalds a mega yacht. Let's get Guido van Rossum a private jet. This guy created Linux, Guido created Python, open source software. He didn't get comped. Immense, immense value. Let's get him some stables, some greenbacks on chain.
Exactly. I want to talk about the venture dynamics for these mega platform funds and smaller funds. There was this trend during the ZIRP era where it seemed like the mega funds were just preempting everything. You had the crossover investors, and that made it pretty hard for early stage investors to make the decision on whether or not to write pro rata checks into their earlier portfolio, because they just wrote the seed check. Maybe they did find the great company early, but then there was this pressure to say, hey, these companies are graduating to Series A immediately.
Defend your ownership.
Exactly. And so has that dynamic changed, or has the early stage market adjusted to spinning up growth vehicles so that they can ride that wave as the overall market gets hotter? I mean, some of these rounds for these AI companies, a Series A at a hundred million, that happens pretty frequently now.
And how do you set yourself up for that as a manager?
I think that's probably the hardest part of the 2019
to 2021 vintage, right?
Yeah.
And you look back at your follow-on decisions
and at least speaking for ourselves,
I think that was probably the part of our investment
we wish we could get back the most.
Yeah.
And so like today the markups are just as crazy
except there's obviously more revenue
that you can kind of lean on to underwrite the companies.
And so it's actually, it might be harder today
because you have, like before it's like, okay,
like Accel or Index is marking up the company,
we did the seed, you know, we should maybe do the series A
out of principle.
Now it's like, okay, one of those firms is doing it,
plus they have, you know, 10 or $20 million in revenue.
And so it's equally hard. And I think
for seed managers, the growth fund thing is pretty much done from what I've seen; people aren't raising growth funds. And if they do, they have to
have a very special reason why they're the seed fund who can do both. So I think there's
a lot more, like we've seen SPV volume pick up again.
That's become a bigger part of the ecosystem. And then I think a lot of LPs are looking
for co-invest. That's always been the case, but especially so today. I think you see a
lot of LPs coming to funds so they can co-invest. And that's, I think, a bigger trend today
than when I started in 2019.
What's the sophistication level when they're looking
at co-invest opportunities?
Are they saying, okay, I'm gonna back an emerging manager,
let's say a $30 million fund,
and then I'm planning to do one or two,
take one or two co-invest opportunities per fund
and really just try to get into those winners?
Or is there more of a, we just want broad direct exposure and, you know,
I'm curious what you've seen or what you're seeing broadly.
Yeah, I think it's both. Our preference is that you do broad exposure,
mainly because the odds of doing one co-invest
and having that be the right company
are just extremely low.
And so we don't do a ton of SPVs, but the messaging is always like, please don't overshoot
on any single deal, because if you're doing a Series B SPV, we all know the amount of risk even at the Series B, and especially today, that's even more pronounced.
And so, but yeah, I think
Series B risk is more pronounced now.
I think if you look at the amount of competition, the amount of change happening on a daily
basis in, in just technology advancements,
and then the pricing and size of these rounds.
I would think if we look back two or three years from now,
we'll see the Series B risk in 2025
was probably on par with the 2021 vintage,
would be my guess.
Calling a bubble.
That's crazy.
I mean, I just see this having probably 60 plus
different angel investments now.
I'll get an update and see that a company
is getting a markup.
And I'm like, I don't really think like,
they're there.
They've made progress.
Are they any closer to being a business
that truly has intrinsic value?
Series B used to be like pre-IPO.
It used to be like, this is an extremely solid business
like within sight lines of like true profitability.
I think we need a new stock exchange,
probably in Texas just called Dogs,
which is like venture-backed companies
that like have a little bit of revenue
still got a lot of work to do.
Let's just get them out in the public market.
Let's let them trade at like two, three million bucks,
clear the pref stack and then maybe they 100X, right?
Maybe they figure it out, right?
So anyways.
Well, what was that SPAC that holds private positions? There's that one, Destiny, right now.
I don't know if you've seen this.
Yeah, yeah, yeah, yeah.
The inverse of Destiny: the dogs, the companies that are underperforming in private markets. You wrap all those into a SPAC. Yeah, and investors can kind of sell to Dogs.
It's like a penny stock.
Yeah, exactly.
Take a little bit of a loss.
But it has such a huge portfolio.
And then you're like, yeah, maybe there's one banger in here.
It's actually, there's something there.
There's already businesses that buy underperforming companies. But yeah, this Destiny fund, I don't know if you've seen it, they hold a bunch of basically SPV positions. They have some look-through exposure to SpaceX, but they trade at 10 times the underlying asset value.
Yeah, that's crazy.
It's like, I mean, people want access to the stock,
so they're going for it.
The Destiny Tech 100.
Anyway, we'll build that one. Back to you.
Last question I have, what's your updated thinking
on incubations?
I know there's some stuff in the works
that you probably can't announce,
but is that part of bringing on a GP to free you up
to kind of be able to spend a bit more time on internal stuff?
That's been a part of our strategy we want to expand on.
I think we're doing one incubation,
which I'd love to come on the show
in like three months to announce.
And it's partnering with a really big piece of IP,
like a globally known piece of IP
to build an application for that property.
And so I'm really excited about this
because obviously building software.
It's Mickey Mouse payroll software for sure.
Honestly, good execution and payroll,
you're at least a hundred million dollar company.
And so you slap some Mickey Mouse IP on that.
That's actually kind of genius.
It's Mickey Mouse payroll.
Just hyper commoditize,
like you go after these markets that are just
hyper commoditized.
Yeah, yeah, yeah.
Yeah, we're partnering with Disney.
It's just going to be Disney themed.
Oh, you don't want the Batman VPN?
Why don't you want the Batman VPN?
You don't want the Dark Knight protecting you
while you're browsing online?
You're going to go with what?
Nord?
What's Nord?
Honestly, Jeff, this is your new playbook. You live in LA. Let's go. Enterprise SaaS, yeah, with licensed IP, buying it for pennies on the dollar. Yeah, I'm like, 5% of the company, yeah, good to go. Anyways, I don't know what Jeff's gonna do. Tony Stark built your ERP.
Ironman ERP.
Stark.
Stark ERP.
Stark Enterprises ERP.
This is the future.
Jeff's gonna actually run this playbook.
It'd be great.
I kinda love this.
Yeah, definitely come back on.
I think it's gonna really break the internet
when you launch it, so I'm excited to see it.
All right, thanks guys.
Congrats again to the whole team.
Cheers.
Next up, we got Cliff Weitzman coming in from Speechify.
Great entrepreneur, great founder, good friend of mine.
Are we talking about business or lifting?
We're talking about lifting mostly.
So he's been on an absolute tear in the gym.
He's super jacked.
No, it's not a joke.
It's not a joke.
Hit the Ashton Hall. Hit the Ashton Hall.
Hit the Ashton Hall.
Bring Cliff in.
Welcome to the stream, Cliff.
Welcome to the stream.
Oh, we were expecting shirtless.
John said you were gonna be shirtless.
We're gonna be shirtless as well.
Yeah, no, no.
We're making TBPN stringers.
Yeah.
I worked out with him in this building
where we record this.
He put up some fantastic numbers on the bench press, showed me a spreadsheet
where he's, what's it called? Thor 2.0. That's right. He's trying to become Thor 2.0. And so I've been telling him I'm trying to do Thor 3.0, and it's been getting under your skin. But he's lifted in this gym. He has the best PRs of anyone who's ever worked out in that gym.
Yeah, yeah, we're getting there. Are you in LA now? Where are you at these days?
I'm in LA, in Studio City. But we move every four months or so. So I'm about to do SF, New York, Prague, back to New York, London, Rome, back to London, back to New York.
Those are all four months?
Those are all four... New York stints.
So you're planning... no, that's next. That's the next 40 days.
Okay. Then back to base in Florida for a separate time.
Okay. Yeah. So, so I mean, take us through the structure of the business.
Obviously you can travel a lot,
but it's a fantastic story,
and I just wanna hear what you built,
how you wound up there,
and kind of the state of the union with Speechify.
Yeah, I guess to give people a sense for the scale,
which I think is important,
because typically people just think companies are big
if enough VCs post about it,
but you haven't raised very much money.
Speechify has over 500,000 five-star reviews,
over 50 million users, Chrome extension of the year,
app of the day.
I'm just gonna keep it, I'm gonna hit this a couple times.
But anyways, continue.
Yeah, take us through it.
Yeah, happy to share more.
So I'm super dyslexic and I have ADHD. So first, second, third,
fourth grade, I had a really tough time learning how to read. And when I was right about to
start college, I built this text to speech tool that would read out everything to me.
And in high school, my mom used to read my summer reading books to me. And just we didn't
have time to finish the summer reading book for college. So I cobbled together this thing
that would read the stuff into my iPhone
and then I'd listen to that on the plane and it worked.
And when I was in school,
I studied renewable energy engineering at Brown.
I ended up building about 36 different products,
everything from 3D printed skateboard brakes,
to iPhone apps and websites and payment systems.
Wait, skateboard brakes?
Skateboard brakes, I'll show you an example.
Yeah, tell me about that.
You wanna go slower?
Okay, so if you ride a longboard or a skateboard down a concrete hill, you don't want to stop by putting your face against the pavement. Oh, yeah. So I 3D printed a series of brakes that would attach to the back axle. Oh, that actually works really well. That's great. That's a good product. That'd probably be huge on Shark Tank. That's like the perfect product for that. We got accepted on Shark Tank.
Oh, you did? No way.
That's the best Shark Tank-coded product.
I was considering going on a show, but it's like you end up sitting for like 48 hours
in a trailer waiting to get called.
Oh, wow.
I had another big opportunity at the same time, so it was like, Shark Tank isn't worth it.
So at a certain point, I realized that software is much better. So I built this kind of meme maker on a flight from Logan Airport to SFO, published it,
checked in on it 30, 40 days later and it had 90,000 users.
And I was like, oh, software is way easier than injection molding stuff and way easier than, like, boron-doping silicon wafers. My thesis was on a more efficient solar cell that I was building.
I was like, all right.
And around this time, I read this paper
about narrow applications of deep learning.
A bunch of academic papers, one of them was called WaveNet
about autoregressive speech that came out of DeepMind.
And I was like, I can make a 100x better
text to speech experience
and a 100x better audio books experience.
And I wasn't sure what to work on.
I knew I didn't want to go get
a job at Google or Palantir or Meta. And I was like, all right, well, I don't know what I want
to do. So let me just write. So I wrote a 30 page paper about my worldviews. And the conclusion was
I wanted to be the person that I needed the most when I was a kid. And the thing I really needed
was someone to do my readings for me. Okay, I'm going to fully send it on this. So I convinced two
of my professors to sponsor me to stay in school as a visiting scholar.
Basically I guest taught classes.
I got to be on meal plan, live on campus, use the gym,
but not pay tuition or do homework.
Gym's important. Amazing.
I was gonna do that,
work teaching computer science over the summer,
indefinitely, until something took off.
And so six months in it took off.
And so now there's like 50 million people who use it.
But the goal is to make sure that reading is never a barrier for learning for anyone, no matter what your background is.
So if you download the speechify iPhone app or Chrome extension or Mac app or
Android app, it lets you take a picture of a physical book.
It reads, it gives you play buttons throughout the internet.
It's like the voice of the internet. You click play, it reads,
and it coaches you to listen fast. So I listened to two audio books a week.
I've done that since I was 14.
So I've read more than 1,800 books by listening.
Do you listen at like 3x or something?
Yeah, I listen at 3x.
That's so intense.
And so-
You can listen to the whole show in just one hour
because it's a three hour show.
Yeah, it's a one hour show.
This is why streaming doesn't work for me.
It's gotta be something that I can react.
We're in RSS feed.
We're on YouTube so you can pull it in there.
Listen up.
That's exactly what I do.
I show off the live streams 80% of the time.
Yeah, yeah, yeah.
Catch up at the end.
But look, I learned English when I was 13.
So in the beginning, I would listen at 0.75x speed,
build up to 1.5x, 2x, 2.5x, 3x.
And I'm obsessed with biographies, theology,
philosophy, fantasy.
And so in the beginning, nobody wanted to back a dyslexia education startup.
AI was not yet hot.
The toughest point was around 2017, when we didn't have the level of engagement and retention
that I wanted.
And there were a bunch of other sexy projects I could have done.
But from first principles, my conclusion was, I think that the trend that I most back
is narrow applications of deep learning,
which is now what we call generative AI.
From a supply side and from a demand side,
it's audio as a user interface.
And I was like, the intersection of these two,
if I wanted to build something in that intersection,
it's speechify, so I might as well stick with it.
I ended up putting a gigantic help button on the app.
It made like 20% of the screen real estate bright red,
help, as in, message us.
And if you clicked it,
it just put you into an iMessage conversation with me.
So like I didn't need you to enable notifications.
There was no intercom, there was no email.
And if I messaged you back and you didn't respond,
I would FaceTime audio call you.
And I had a spreadsheet of all of our users.
And if for whatever reason you didn't use the product,
I would call you every day until you used it. And then the product became really good.
That's amazing.
We were 25 people, and 18 of the folks who worked at Speechify were previously either CEO, CTO,
or VP of engineering at their last company.
A lot of them were ex-YC founders or folks who exited their company after their Series A.
Nice tidbit on fitness.
12 of the original teammates got six packs within 10 weeks of joining.
And we had at least 17 people who gained 10 pounds of
muscle in that first stretch.
And we get a big Airbnb in a different city every four months or so.
And so at the time we were in LA and I convinced one person to move from
India, one person to move from Bulgaria, one person from Mexico, one person who was
in San Francisco. And I went and bought a bench press and I
rented a truck, installed it in the apartment, in my room.
And so we had me and Valentine sleeping in there.
And that's like the only kind of weights you need, right?
Just a bench?
And actually, it gets even more deep because I went up to Marin to make sure
that I was buying food for my parents
because I didn't want them to go out during COVID.
And so two of the guys were in the apartment
working out just body weight.
And they did a DEXA scan before and after,
and they found that they had, like, not gained any muscle
even though they were working out like animals.
And I was like, no, you guys need more weight.
And so I brought the bench press. And we had two guys sleeping in a bedroom in the living room that
we partitioned with a window, two guys sleeping in the bed in the bedroom, and then one guy on
an air mattress. And that was like the best time ever, like incredible time. And then we found just
very strong product market fit and ended up scaling really quickly. We closed partnerships
with all the top publishers to resell their audiobooks and ebooks. My little brother Tyler started coding when he was seven,
building high-quality websites young and picking up assembly to hack video games.
He went to Exeter for high school, skipped four and a half years of math, skipped five years of
computer science, did Stanford as an undergrad, dropped out to run a cybersecurity company,
went back to Stanford to do his master's in AI. And I was like, hey, if you can help me build a
model that will run
in 3x real time, has better quality than any API that's currently out there, and meets the
following requirements, I will do anything. I gave him a very nice compensation offer. And he did
it. Took him about 10 months. He joined Speechify five years in as head of AI. So now we have a 40
person AI engineering team. And we make the highest quality digital voices in the world. We
are the largest supplier of speech AI in the world for consumers.
And then, so that's like level one, level two, level three.
The vibe now is we make everything multimodal.
So if you imagine that 5% of the population would read books for fun on their own,
and let's imagine 15% of the population would listen if you provided it as audio,
now we're using text-to-video models that turn everything into a full
audio and visual experience. That's like super high level for Speechify, but happy to riff on anything.
How big is the company now?
We're 176 people now, across many countries.
105 software engineers
on the engineering team and the rest is growth.
Yes, talk about the things that the venture industrial complex has maybe tried
to make you do that you've rejected,
because in an alternate reality,
you would have raised $500 million by now
and decided not to do that.
And that's influenced a lot of decisions, I'm sure.
So part of it is the fact that between the ages of 14 and 18,
I read 461 books.
A lot of them were about economics, philosophy,
biographies.
And I also listened to a lot of fantasy.
And when you listen to fantasy, you think,
would I make the decision this character would have
made in this point in time?
So my favorite book is The Way of Kings by Brandon Sanderson.
I'm obsessed with this character named Kaladin.
I based my entire leadership style after Kaladin.
And so when I was writing that 30 page paper
to figure out what I wanted to do in my life,
I became one of those lucky kids who when they were 21,
figured out what they were put on earth to do.
And for me, it was to solve dyslexia.
And so I love what I do. Like,
I wake up every morning early and I go to sleep very late. I sleep very little because I just,
I can't get enough of working on this. And so I don't want to sell it. I want to own as much of
it as possible. And I consider equity holy. Mind you, I was a solo founder for five years working
on Speechify. And again, we had an amazing leadership team that we figured out how to hire through blood, sweat, and tears.
Last year, 50,000 people applied
to work in technical positions at Speechify;
10,000 people did the asynchronous engineering challenges.
But we had like incredible offers for our series A
from like literally the top firms.
We said no, they doubled the offers.
We still said no,
because I didn't want to sell 20% of the company. And even with Seed, we said no.
Even with Seed, we just picked folks who we thought were really good.
Founders of Instagram, of Twitter, of Robinhood. That was key. So for example, Dylan Field from Figma taught me a ton very early on.
And my nuance is if you take money from a fund by and large,
they have a fiduciary duty to their LPs.
If you take money from an individual,
they have just a duty to themselves.
And so you get a Rolodex of someone who's really great,
but you're not beholden.
That being said, there are some businesses
where it makes a lot of sense to raise money.
In our case, we have a very simple business
and we understand the user extremely well
and it's a very niche audience initially
for people who have dyslexia, ADHD, low vision,
autism, anxiety, concussion, second language learners,
and then the entire productivity suite.
And at this point, we've worked on speechify
for eight and a half years,
so it's really hard to catch up to the product innovations
as well as the AI side.
And so we are always talking to investors.
And don't get me wrong, we have raised money from funds as well.
But we've just done it where we typically look for three things.
Number one, it's folks who either have an evergreen model so they can ride with us, because
I'm going to be CEO of Speechify in 80 years.
Number two, they have a history of philanthropy in education, or experience with
dyslexia or ADHD in their family,
and we think really highly of them.
Number three, they can see around corners that we can't, either because
they've done an amazing job taking multiple companies public or they've
backed founders all the way to going public.
We would go public, but in the future, not in the short term.
Yeah.
Well, what are you thinking about competition
from the hyperscalers, the big tech companies?
Google I/O is today, Microsoft had their Build keynote yesterday.
It seems like an obvious place
that competition could come from,
and yet on the product side,
a lot of the big tech companies just don't seem to be
iterating on the application layer side as fast as most
people expected. So what's that been like? Has anything slowed you down over the past
two years from the competition from the big guys?
Oh, they've helped us a ton. So I'll give you heuristics that are relevant to any founder,
and then I'll give you the specifics on Speechify. The first one is, in my personal opinion, value
accrues at the application layer more than it accrues at the model layer, and models become commoditized.
But if you ever, did you ever doubt yourself?
Did you ever think, oh, maybe value is going to accrue
to the model layer?
Because there was like an 18 month period
where everyone was like, models will do everything.
Well, I'm giving you what my opinion is today.
And the only reason I have a strong opinion
is I've gone deeper into it than almost anyone
has.
Right?
We bought millions of dollars of H100 GPUs.
We set them up in our own data center.
We have a 40 person team who does research and development on models.
So the second part I was going to say is, but if you really want to be indestructible,
you've got to own both the model and the data and the application layer.
That's what made ChatGPT and OpenAI: they had Brockman work on the user-facing side,
and they had Ilya work on the research side.
And if they just had the research,
they'd just be another lab.
It's the application layer that made it incredible.
And that's the first thing,
because you wanna be able to control the quality
and the cost as well as the user experience.
And then why did WhatsApp get acquired by Meta
for $19 billion?
It's because they had all the users.
And so you own the end relationship.
You want to have the phone number, the credit card, the email of that user, which
we can talk in a minute about, kind of the changes that came with Apple.
So that's the first thing.
The second thing is, and this is very unique, sorry, not unique, ubiquitous.
If you are Apple, Meta, Google, Netflix, there is no point in doing any project
unless you think that project is going to make you
one to a hundred billion dollars per year in the long term.
We had a point where we introduced translation
as a feature.
This is in 2020, 2021.
And Mike Krieger, who is the co-founder of Instagram,
was like,
you should consider removing this feature.
And I'm like, are you kidding me?
Do you know how long I spent building this feature?
But I have respect for Mike, so I took it out.
Install-to-trial increased, activation increased,
users used the product more.
And I was like, shit, I gotta remove this.
Why?
Why? Because it was distracting.
Speechify does one thing:
give you the best experience listening to something.
The second you add an extra button,
users go down that path.
So you don't clip me from work.
Interesting.
Great has a problem, but they had all-
And you guys have such insane scale
that you could immediately see something that,
again, just removing some complexity.
Back then we didn't have mad scale,
but you could see it in three days of usage.
Like there's nothing, no.
So Clippy is horrible because Word is horrible
because the design is terrible.
But you know who has amazing design?
Google. One text field.
Who else has amazing design?
ChatGPT.
One text field, one button.
And that's why they're successful.
If they had multiple buttons,
they would be far less successful.
Now here's the key.
Text to speech is buried five levels deep in the menu.
When I was 19, I made a video on YouTube
under an anonymous account,
because at the time I was embarrassed
for people to learn I was dyslexic. And it was called how to text to speech Mac free.
And if you search that phrase on Google, even today, my video from when I was 19 is number one.
Why? Because it's so difficult to activate text to speech that you need 19 year old Cliff to show
you how to do it. You can't pause, you can't play. And by the way, if you want to listen like me,
what I ended up doing in college is I would use the terminal to change the speed
because the GUI didn't go fast enough.
And when it bugged out, I had to restart my computer.
So I was restarting my MacBook multiple times a day
when I was in college.
And then I was like, screw this,
I'm gonna build a Mac app with keyboard shortcuts.
I'm gonna build a thing that OCRs the screen.
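A minimal sketch of the kind of terminal workaround described here, assuming macOS's built-in say command; the read_aloud helper, the file name, and the 450 words-per-minute rate are illustrative, not Speechify's code.

```python
# Drive macOS's built-in text-to-speech from a script at a custom speed,
# instead of digging through the system GUI. macOS only; requires `say`.
import subprocess


def read_aloud(path: str, words_per_minute: int = 300) -> None:
    """Read a text file aloud with the macOS `say` command at the given rate."""
    subprocess.run(
        ["say", "-r", str(words_per_minute), "-f", path],
        check=True,
    )


if __name__ == "__main__":
    read_aloud("chapter.txt", words_per_minute=450)  # a "fast listener" pace
```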
And so my dream is for Google in mobile Chrome
to add a gigantic play button
that appears in every website that lets you listen to it.
Why?
Because it will make everyone addicted to listening.
And then when they wanna listen to a long PDF
and they wanna do it offline,
or they wanna do it in my voice,
or Snoop Dogg voice, or their own voice,
or they wanna translate it to another language,
or they wanna scan a physical document,
Google doesn't cover that, Apple doesn't cover that,
and people use multiple platforms.
So you need to use speechify.
And so speechify is the premium experience
for text-to-speech.
What I need is for people to become educated
about the idea that you can listen to stuff.
And so a lot of our work over the last eight years
has been to educate the market.
And so there's really two goals.
Number one, build the most exceptional product,
like extreme product.
We have four core principles, extreme product quality,
leading with love, frugality, and speed.
So we talk to users a lot. We take really good
care of each other. We don't waste money on anything. We build our own SaaS tools. And
we move really fast. We ship a lot to production. And the second goal is to educate people that,
hey, you can practice being a fast listener. And if you practice, anyone could do it. My
dad is 65 years old and English is not his first language. And the joke we have in my family is he wishes he had a mute button for me and I wish I had
a speed up button for him.
And he now listens at two and a half x speed to everything. And so my life is better because
my dad listens and talks faster now. But anyone can learn how to do this. It's just a matter
of practice. When you come into first grade, no one expects you to be a good reader for
second, third, fourth, fifth, 12th grade. We expect 12 years for you to become a good reader,
but you can become a good listener after listening to 15 audiobooks. You're not going to be good on
the first one. You're not going to be good on the 10th one. And being a good listener means three
things. Number one, you can do something else and listen at the same time, drive, cook, walk,
whatever. Number two, you can listen at more than 2x speed. Number three, five weeks later,
I asked you about the book. You have great retention of everything
that you read in the book.
And it will not happen when you start,
you have to practice.
But here's the thing, reading is a hack we invented, right?
24 characters on some dead trees.
Listening, we evolved to be good at listening, right?
Telling stories over the fire.
So if you were a bad listener,
you were removed from the genetic pool.
If you're a bad reader,
you were not removed from the genetic pool
because otherwise I wouldn't be around. And so when you read 30% of your brain is dedicated towards decoding,
70% is comprehending. When you listen, like 3% is dedicated toward decoding. The rest is
comprehending. And so you can understand a lot better. And once you practice listening fast,
you have that skill for life. And especially if you have ADHD and you get distracted easily,
the speed of listening matches the speed
at which their mind is working, and that helps people focus,
especially in school but also in work.
And so that's the goal of the company,
is to just make that accessible to everybody.
And right now there's 450,000 audiobooks,
but there's 100 million books.
We make all those accessible, plus your emails, plus PDFs,
and everything else.
It's amazing.
Incredible.
You are correct.
Wish we had more time.
Yeah, we're gonna get more content soon.
We will.
Yeah, come by before you take off.
We gotta make sure you break that VR.
Yeah, yeah, I'm working on it.
I think I'm pretty close.
We'll talk to you soon, Cliff.
Thanks so much for stopping by.
Thanks for coming on.
You're the man.
You got it.
Bye.
Really quick.
Bunch of VCs are gonna reach out to you now.
I know.
He really is a fantastic business.
Let me tell you about Bezel.
Go to getbezel.com.
Your Bezel Concierge is available now
to source you any watch on the planet.
Seriously, any watch.
They got Rolexes.
They got the GMT Master II, the Batman,
the precursor to the Batman.
The Batman payroll suite. You're really onto something there. I think it's good. I think that's a feature.
Disney Payroll. I mean, Logan Paul has an energy drink, why not? Jake Paul has deodorant. Why doesn't Batman have an ERP system?
Why not? Or Shaq Payroll?
Shaq Payroll would be pretty good.
Before we bring in the Google folks, we should talk about Joe Weisenthal.
He says, your neighbor is online.
He thinks the BRICS are creating a new currency that will soon replace the dollar.
He thinks Blackstone owns 60% of single family homes.
He's long Cardano.
And this is a response to Buco Capital Bloke,
who says, your neighbor isn't online.
He doesn't think about tariffs. His
Honda CRV only has 80k miles on it, good for another 100k. He
doesn't have an opinion on the rating of our sovereign debt.
The only inflation he cares about is shrinkflation. Yet
his 401k goes up just the same. I think the Blackstone
owning 60% of single family homes is the funniest meme, this whole
idea of like, you know, Blackstone's the most powerful
financial institution in the world
just because they index everything.
Anyway.
They're turning single family homes into pods.
They are.
All right.
Let's bring in.
You know what time it is.
There they are.
Welcome to the stream.
Huge day.
How you doing?
What's going on?
Congratulations on today.
What's new?
Would you mind introducing yourselves and then just giving a quick breakdown of what's going on today over at Google?
Yeah, I can go first. I'm Tulsi. I lead the product team for Gemini. Cool. And I work on some of the Gemini developer tools.
What are the top three announcements today? What's the most exciting thing? What is the one takeaway for the audience? Ooh, I'll start with one takeaway
I think the one takeaway is like Gemini at least from my perspective Gemini 2.5 is just getting better
And it's not just getting better in terms of like the quality of the models themselves
But we're now bringing that capability everywhere. So AI Mode is coming to everyone, which is kind of our advanced version of search.
Gemini is coming to glasses, which is going to be amazing.
Gemini is coming in even better ways to the Gemini app with higher rate limits and just more access
and better features.
Yeah, I was going to have to spend...
Wait, Gemini and Glasses, is this like a return of Google Glass?
Is this a new product?
Well, yeah.
So part of the issue for us today
is that we got to watch like 20 minutes of I.O.
and then we started our own show.
So we still have to catch up.
But yeah, and then it's good you mentioned rate limiting
because I was worried I was going
to have to spend this entire conversation moderating
John asking for higher rate limits from you guys.
I hit a rate limit on
Veo 2.
Anyways, sorry, Tulsi, continue.
No, no, I think it's just an exciting time for us, because I think the models continue to get better. We
released a new version of Gemini 2.5 Flash today, we
introduced improved versions of 2.5 Pro with Deep Think,
kind of deeper reasoning version of the model. And so I think the models are getting better,
but it's also amazing to now see them actually really landing in Google products and in a very
real way. I don't know what are your takeaways or top three are? Yeah, I think my TLDR is Google's
firing on all cylinders across every vertical, across every dimension.
And like, IO is this like weird manifestation of this actually happening where you just like,
I'm like, holy, like even as someone who spends all day working with Tulsi, working with our teams
and seeing this happen, like IO is this crazy reminder of like just how much breadth Google
has and in the product areas and just how many people the model innovation touches.
And it's yeah, I think today's a celebration of all that hard work and making all that
happen. Yeah. How do you think about the messaging from like the continuum of like Google is
so big that like consumers will watch a developer focus keynote.
Some of the products that are launched are very consumer facing, but then there's obviously developers that want to build on top
of these tools. And then there's also
stock analysts that are probably watching for how things are going on benchmarks or MAUs or any new data points. And so
what are you focused on kind of messaging to the different audiences?
Is there a concept of speaking to multiple audiences or do you just try and laser focused in on on dev specifically?
No, I think we really do want to speak to multiple audiences.
I think you hit the nail on the head and saying like,
there are definitely experiences that we talk about here at IO that are very
consumer focused. We also know though,
that developers are trying to build for consumers, right?
So when you showcase amazing consumer experiences,
you are also able to speak to devs because you can speak to them about how you
actually were able to build that experience and what they can do to build
experiences like the ones that you're trying to build.
And so I think there's an art to try to figure out what are the right demos, what are the
right stories that really can land cross audience.
And then we try to create sections that are maybe more targeted.
So for example, I spoke earlier today, Logan spoke in the developer keynote, really targeted
at developers and what developers can do. And then there are obviously sections around search
or glasses or Gemini app that are much more targeted.
Yes.
Walk straight off the dev demo stage right here.
Yeah, it's true.
You said you could do two
and you also said your keynote was starting at 1.30.
I was like, are you worried somebody might ask
an extra question?
It could be in multiple places at once, it's a skill.
This is amazing.
How important do you think those demos are?
Because we saw ChatGPT break through
with the Studio Ghibli moment.
Everyone was memeing the Ghiblis.
I was recently using Veo 2 to generate video of cars
with specific liveries and decals on them driving around.
And the result was fantastic,
but it hasn't broken through in the same way,
even though I feel like the model,
and just more importantly, the accessibility,
because I was able to generate a video
from a Gemini chat box,
and I didn't have to go to a different app for that.
And I saw in the Apple App Store,
the Gemini promoted post
is basically like, Veo 2 is here.
And the output was so good. John had generated an image
of a Lamborghini that had full TBPN branding on it.
And I showed it to somebody and they were like,
when did you do that?
And I was like, and they were like-
That's right.
Was this how you made your promo video
is really just VO behind it?
No, that's our next promo video.
Actually also on that note,
I think you probably haven't seen that part
of the keynote yet, but we shipped Veo 3 today.
Oh. And it's awesome.
And it has audio.
Oh, that's very cool.
Yeah, I was thinking about that
because it was missing audio.
And I was like, this is the missing thing.
I need to go find another model
and then figure out how to inference that or whatever.
But yeah, I mean, in terms of creating these like
memeable moments or something that can go viral,
is that something that's even thought of
or is it more just like turn it loose and hopefully?
Well, yeah, I think the question is like,
you want to distribute the capability
and the question is like, is that Google's job
or is it the developers building on your guys' platform to do that sort of last mile delivery to users?
Yeah, yeah.
I think that use case is actually really important, because despite the fact that
I feel like we're all sort of seeing everything that's happening and
we're close to it, there's so many people who actually have no idea.
And it's like, every day I think we need a tracker of the number of new
people entering the AI world
every day. But like, I imagine it's like some reasonable
percentage of all people who are using these products. And like,
you know, most products are still a blank empty chat screen
somewhere. So like showing that use case of what's possible is
super important. And I feel like a lot of the demos from IO today
were like, pushing that frontier in a really meaningful way.
But I think also to answer the question of whose responsibility
is it, I think Google, we get to do both, which is fun.
We get to go and build a lot of these use cases for developers
and make it possible for them to bring that to their consumers.
But then we also get to dogfood it and really walk the walk
and build those consumer experiences ourselves
and find a way to make a memeable moment. Yeah.
How much of what is announced are more like
showcases that could be built into other products long
term? Because I was looking at, somebody was saying, like, Google
Assistant has fewer five-star ratings than Gemini.
But you could imagine Gemini being like a demo for what's coming in Google
assistant, but it seems like Gemini has just gotten so big that it's like bigger
than assistant now.
And so you have this weird thing where you're experimenting in this fun sandbox.
But if you're really successful, then people just start using that and then you
just move off of the other thing.
Is that is that like a deliberate part of the strategy or just like a natural
outgrowth of like the modern AI development strategy that you put
stuff out and if it goes really big, you just let your winners ride?
I think it's a, it's a good problem to have, I think if we're in a world where the experiments
we're trying or the new kind of surfaces we're building or the new types of use cases we're
building actually really resonate with users. And then we're in a place where we actually
want to build those and scale those out.
I think what we're trying to do.
So at the end of the day, what we want to build is products that people love and
really want to use.
Right.
And I think there's three ways you can do that.
You can try to build new features into your existing products.
I think in a lot of cases that makes sense.
Right.
So for example, like AI mode on search today.
But in some cases, if you do too much of that, you bloat the existing product, it becomes
harder to use and understand versus being able to create a new surface where you actually
can really rethink the UX, really rethink how users are supposed to think about the
product.
And then you can look at that and say, well, does this really make sense as a standalone
or does it actually merge with other ways that users are using different products, right?
And actually fits into their existing workflows in some way.
And so I don't think we want to pigeon-hole ourselves into just,
hey, we need to consistently build on one product, right?
We want to give ourselves the flexibility to create ways for users to build new mental models
and then figure out how those things come together.
And I think this search example is actually a great one because if you saw,
we announced deep research in the Gemini app came out,
I think all the way back in December and it was like pioneered the deep research
category and Google launched that in the Gemini app and people love it.
It's a crazy product experience. And then like that's now,
I think the feedback was, this is an incredible experience.
Not everyone in the world is a Gemini app customer.
There's a whole lot of people in search, and now, I think it's called Deep Search, it's
available as part of AI Mode inside of just basic Google search,
which is going to reach hopefully many, many
millions more users, out of the couple billion using search.
Talk about MCP.
Was there like a sigh of relief that Google's leaning into MCP?
Was it ever a question or was it something that was like pretty easily adopted across
Google?
I think it's been fascinating to see like how quickly the world has like, I feel like
with standards, like people I feel like are often like very
slow moving to be like, do we adopt this? There's like a whole, you know, many year
battles with different standards competing. I feel like everyone was just like MCP came
out. Everyone's like, okay, we'll use MCP.
It's that XKCD meme. There's 10 standards. We need another, we just need one standard.
Now there's 11 standards, but in this case we actually got the good ending.
It's important though.
I think developers are building stuff in this moment where the world is still trying
to figure out how to build robust agentic systems.
Having people not spend all the time just rewriting the same framework 50 times over
I think is actually a net benefit for the world.
And I think the team that built MCP at Anthropic did a really, really
good job. And it was a smart move by them to like actually
make it an open standard, which they could have easily not done.
And I think it's it's the right thing for the ecosystem.
Put developers first in the right way.
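For context on what "just use MCP" looks like in practice, here is a minimal sketch of an MCP server exposing one tool, assuming the official mcp Python SDK's FastMCP helper; the server name and the word_count tool are made up for illustration and are not anything Google or Anthropic ships.

```python
# Minimal MCP server: any MCP-capable client or agent can discover and call
# the tool below without a custom integration. Assumes `pip install mcp`.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")


@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so a local client can attach
```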
Yeah. What is the overall messaging to developers? It
seemed like Satya on stage at Microsoft's
Build was very much like, we host every single model, we'll
give you DeepSeek if you want, we're partnering with X. Google
in my mind has always just been like, look, we invented the
transformer, we have the biggest, best models,
we're dominating the Pareto curve. Are you guys seen more
as like a one stop shop? Or do you think there'll be more model
agnosticism in the future with regard to GCP?
I think GCP already has this. So GCP, like we do, like I've been saying,
you know, Tulsi and the team do a great job with Gemini models.
We want the world to use Gemini, but GCP does actually have like,
I think it's over 175 different models that are available through the model
garden. You can get Anthropic's models. You can get a lot of models.
So I think the, and like for customers on cloud, like that makes perfect sense.
They need that. Like that's a requirement. Like the world doesn't want to,
you know, be locked into a single model. Um, but I think like,
we get to see these really,
really interesting experiences that can only be built into Gemini because of
the stack of Google's model innovation. So we're going to keep pushing on Gemini models too.
That's the goal.
Yeah. And then on the developers that might be working with Google,
I imagine that the conversation is a little more nuanced
if you're actually scaling a growing app on Google infrastructure.
Like I ran into a rate limit as a random user.
But if you're, if
you're actually thinking about scaling a startup, unless you're, you know, thinking about building
Stargate, you're probably able to scale pretty, pretty efficiently. But have you been seeing
companies run into any sort of like scaling or rate limits or challenges as they kind
of grow on Google?
My email, my email's online.
I'm on Twitter 14 hours a day.
So whatever you need, email me. We'll make sure it doesn't happen.
I think in general, like this is like as far as like a class of problems
that developers have in this moment.
I think they're like very real.
Like, I hear this and see it all the time.
The challenge is, compute is like literally a currency almost.
So like we have to make sure we get the right
compute to the right people who are actually building stuff.
And like, it is a, it's not an easy process to do that.
But I think if people need stuff,
please reach out to the team,
actually reach out to Tulsi too.
We'll get, we'll get.
Yeah, spread the love, you know.
Don't just create CX tickets for Logan.
Maybe talk about a couple of the other kind of product
level announcements.
You guys had Jules, an asynchronous coding agent.
You want to speak to that a little bit?
I'm curious to get your.
Yeah, we're really excited about Jules.
So actually like at Jules.Google today,
you can sign up for the beta.
But basically the idea with Jules is we really want to start thinking through more and more use cases for how we can build, essentially,
agentic assistants for developers. Right. So you can think about
Jules as trying to take aspects of the workload that are frustrating, that are time consuming,
and can actually operate on them in the background. Right? So the example I gave earlier today is Node.js,
being able to update an outdated code base
to the latest version, right?
But I think it's really about how can you actually
be able to assign tasks and let the model run
and complete those tasks and come back
and actually then have this like relationship
with the developer where these tasks can get completed,
can come back,
can iterate together and build that kind of collaboration setup. And you can run multiple
at once? You can run multiple at once. And I think in classic Google fashion, you can sign up and use
it for free right now, which I think is really cool. I think part of this is if you look at
what's Google's job in the world right now, I think part of this is we have this stewardship role of making sure developers, people who are using search,
et cetera, et cetera, know what's
happening with this AI technology
and can actually get their hands on it.
And can try it.
Yeah, can try it and know that it's available to them.
So anyone can sign up.
I think the queue time for tasks is kind of high right now
because everyone is flooding Jules.
But hopefully as they scale up, capacity will.
Yeah, I mean, also what I think what's been actually really exciting today is to
see and get the feedback on Jules because we've obviously been trying it internally
and using it, but actually getting it in the hands of real developers, getting that feedback.
That's how we can actually prioritize what types of activities do we need to prioritize
in Jules? What use cases and features are most valuable for developers? How do we build that up? And so it's actually like a very self-fulfilling prophecy, I think,
for how this will go, hopefully. Have either of you had a chance to actually sit and get the
Project Starline demo? I remember a couple of years ago, this is the 3D, how would you describe
it? 3D video conferencing? Yeah, like video conferencing plus plus. Yeah, yeah, plus plus.
And there were a whole bunch of videos
from influencers who were there who got the demo
and like blown away and they were like,
I filmed it with my camera and you can't really see
the effect, but you'll see my reaction.
But it's exciting that it sounds like that's going
into production in partnership with HP,
but is there anything else to say there?
Have you had a reaction or any test demos? I'll be honest, which I've had that same experience
that the influencers had, which is I used the Project Starline to take a Meet call, but I was
meeting with someone who wasn't in one of the other groups. So I have no idea, like maybe I looked
different or something like that, but they weren't,
they were just like.
They were a normal mirror.
They were in their home.
Yeah.
So, but it looks cool.
I'm hopeful.
I'm hopeful.
Yeah, it's a very cool setup.
As somebody who works at home too much,
I'm interested so that people can sort of feel my presence
hopefully in the future.
Well, if you get in your car and drive,
you're gonna miss all the DMs from developers
that are asking for.
That's me looking to stay home with his Starline setup. Stay locked in.
Yeah, I mean, I remember I bought a 3D TV like a decade ago,
probably, and we've been waiting for like, okay, 4K, we can't really see anything past 4K.
But something like this would be very very cool
NotebookLM, number four in the charts, big launch, new mobile app.
I'm sure you guys never lost faith internally, but I think people externally had been frustrated
because they liked the product so much and they felt like it wasn't getting the
investment.
Any color to add there?
Lots of notebook LM progress.
I think people sort of felt this, which is interesting.
And we had a bunch of conversations around why,
but it was like, the team is still cooking,
like they're still shipping a bunch of stuff.
I think that, I think part of it was like,
the expectations were just like super, super high now,
because a lot of the stuff they shipped was awesome,
but they had a ton of new features that just landed
and it's been awesome.
I think also this team is one,
even if you look at the history of NotebookLM,
like we actually shipped NotebookLM relatively quietly and then it got a lot
of pickup because it was just such an amazing product to use.
And so I think we've taken the strategy with NotebookLM where we've sort of tried
to lead with the product leading almost, um,
as opposed to just trying to like cook up a bunch of noise.
And I think the NotebookLM
team has just been silently making the product better and better and better.
Yeah, we are going to have Tulsi cook up a bunch more noise for NotebookLM. This is
a great way for you to get more tweets. It's just firing off notebook LM updates.
Well, it's good. I mean, the fact that there was like organic real pull from the market
and it wasn't just like, Hey, we have a bunch of distribution. Let's make everybody aware
of this right away.
Yeah.
What was the original NotebookLM?
Because I mean, there's research previews, there's developer previews now, there's alphas,
betas.
I feel like there's 25 different ways to say like, don't hold us accountable to the
relaunching this product.
What is the actual flow these days for just getting a product?
Wasn't Gmail in beta for like a decade or something like that?
This is kind of like in tech culture broadly.
But I mean, a lot of things, people are just like, if I can use it,
I expect it to be a consumer product. But did NotebookLM go through
Like a particular pipeline or do you have like a structure for these launches over there?
Yeah, NotebookLM came through Google Labs.
Josh, I don't know if y'all follow Josh Woodward, but Josh Woodward runs Google
Labs and the Gemini app now.
Okay.
Gemini app is obviously generally available and not in labs, but labs does
all the early stage bets.
So actually AI Studio, the project that I work on out of Labs, same with the
Gemini API, NotebookLM, and Jules, which came out today.
Some things we showcased on stage: we showcased
Pinhole, we showcased Stitch, and Jules. All of these are products that kind of
came out of Labs.
Can you break those down really quickly? Those other Labs products, Stitch and...
Yeah. So I mean, you were actually demoing Stitch earlier, right? I didn't, Josh demoed Stitch right before I got on stage, but Stitch is a similar sort of
vibe coding, vibe designing setup, where you can go in and ask the models to like go and
build native mobile apps, I think is what it's actually built for.
And then Pinhole basically builds off the magic of Veo 3 and sort of multimodal generation.
So I think really what we're trying to do with labs is, what Josh is
trying to do with labs, what we're trying to do with labs is use it as a way to take these
technologies and start testing what new surfaces, what new UX, what new types of products can you
build. And then like ideally like Notebook LM, they're well loved and we actually like can hit
a moment in the market where we can find new product market fit. Um,
and that's really why we're trying to create this incubation engine.
And then we can take those ideas and both bring them to our products as well as
actually ship them as standalone products.
Principally, they're also very close to the model teams.
They're collaborating with Tulsi's team, they're all sitting in the same
places. So this innovation at the model level
turns into innovation at the product level.
And I think those teams are like some,
some of the first recipients of that stuff.
Yeah. It's interesting.
Cause like, I feel like Gemini has a voice mode now
that you can click over and talk to the LLM.
And it's kind of like almost a separate feature path
from just dictating and then having it read to you,
even though that's a very similar auditory experience.
You could imagine notebook LM being like a feature inside the Gemini app,
but right now it's kind of going down separate tech trees.
So we brought audio overviews and deep research over, and
I think the play between these things all kind of connects.
I think there's some very bespoke NotebookLM stuff that you'll probably not have in the Gemini app, but for the stuff that works,
there's no reason not to. It's all under Josh's umbrella, so he
gets to easily be able to do that.
Yeah. And I think the underlying technology, like Logan said, we're trying to make sure that the model teams and these products are very
Logan said, we're trying to make sure that the model teams and these products are very
close together. So for example, notebook LM has now native audio, which is one of the
things we talked about today,
which is really like Gemini being able
to natively generate audio,
which makes it more like multilingual friendly,
more natural.
And so that kind of technology,
A, we can use to make notebook LM better.
We're also bringing that to Gemini live.
We're bringing that to, you know,
other parts of the Gemini app
over the course of the next months.
And I think that kind of synergy means that you're going to get the best of these experiences
depending on your use case in different places.
Yeah.
It's an interesting product challenge.
I've run into weird scenarios with ChatGPT deep research.
I'll generate some huge research report and then I'll be like, listen, read this to me.
But it clearly gets lost once it's reading halfway through and starts hallucinating
and stuff. So like, there's some interesting challenge to actually make
all of this, like if you generate 5000 words of text, actually getting it into
the human's brain, it's not enough just to be like, here's your research report.
Like you actually have to make it accessible.
Deep research audio overview, I think, is that flow. Like, yeah, that makes sense.
It works. It's actually awesome, because the deep research reports are like 46 pages. Yeah,
exactly. I don't have time for that. 46 full pages. No, like give me one page or give me
like three minutes of someone, you know, having a casual conversation. Give me a YouTube
Short generated by Veo 3. That's what I need.
Eventually you two are just gonna be-
Yeah, with your, you know,
bespoke taste. Yeah, yeah, yeah, we're not letting NotebookLM disrupt us.
We're coming into the show completely unprepared,
but we listened to a 10-minute NotebookLM on the thing we're about to talk about, so we have a little bit of context.
Anyways, you guys have been shipping like crazy.
What can people expect from you now? Are you gonna rest on your laurels? Are you gonna, you know,
just get a little overconfident, or is it, you know, business as usual, time to get back to the grind?
A little overconfidence I think will be good, because I think, like, we've done a lot of,
there's just been so much stuff that the team has done.
So I think like people should be proud
of all the work that's gone into it.
But I think the work just starts now.
I woke up this morning and I was like,
I'm writing the doc about how we need to go heads down
for the next six months and land on 50 things
that need to happen because there's like so much stuff
that's still cooking in the pipeline
and it's gonna be awesome to see all that.
Yeah, I think also like we're really I really do think we're hitting our stride in so many
of these areas.
It feels like it.
Even even some of the posts like even Sundar's posts like over the last 24 hours, I was like
he's feeling good going into IO.
You can tell you can tell.
So the energy the energy is palpable.
Yeah, and so we want to I I think we wanna use that energy.
Like I think the momentum coming out from today
is really about like, okay, how do we take the excitement?
How do we take the energy?
How do we take the feedback and like actually really use it
to continue to just build awesome things?
So I think we're just at the beginning.
Like this actually really is the start
of what's gonna be an exciting year, I think.
And we'll make sure that you two are in person
for the next.
Oh yeah, we should.
Yeah, let's do it.
Yeah, we should give you more than 24 hours heads up
if you wanna come in person.
Yeah, yeah, yeah.
Yeah, we need a whole calendar for these things.
Yeah, amazing.
Well, thank you both for making time to come on.
It's really great to hear.
And congratulations on all the progress.
Congratulations.
We'll do it again soon.
All right, have fun guys. Yeah, the interesting thing is they are cooking. They're cooking. I mean, yeah, they delivered in a bunch of different ways. And I mean, obviously, like, you know,
tech Twitter is not real life in many ways. And you see that with the Gemini
MAUs and DAUs and just five-star ratings and stuff.
But I mean, you saw it with the video I generated,
like the models are really, really good.
They're really good.
Performing really well.
And I feel like Google has been,
there's so much chatter on Twitter,
people making fun of, oh, I was in Gmail
and I asked it to do this.
Yeah, some of the products are misses for sure.
But clearly they're aware of deficiencies
and rapidly shipping on a bunch of different dimensions.
And the energy they even are bringing to this call
and the fact that they're even just going direct is super bullish. I think the really interesting thing is like,
Mark Andreessen had that quote about like,
chat, open AI is becoming Google,
Google's becoming open AI.
You remember this?
He said this to us, and I think his take was that like,
open AI is becoming a consumer tech company
and Google's most advanced in research.
And I think that's, going back to the Yahoo question
we were talking about is like, if they're using,
like they invented the transformer, right?
And so if there is a new architecture
and it does come out of like an academic lab,
it's totally possible it comes out of DeepMind again.
Like they did invent the transformer there
and Ilya just kind of picked up the paper
and was like, this is amazing,
we need to implement this and go really hard at this
And then everyone did. Yeah, but it would be very interesting to see,
like, we didn't even get to the diffusion language model, but what other foundational innovations
they're trying, because they've been ahead of the curve on getting bigger
context windows, faster inference on
that Pareto frontier.
And if there's a new paradigm and they implement it quickly, like that, that could be an entirely
new thing.
Very interesting.
Yeah.
Overall, Google has been so successful for so long and so dominant that I think that
it creates this effect where people almost want them to lose or want them to, you know, get taken down a notch in some way.
But they're gonna have to go through us,
because we will take a bullet for Big Tech.
We will, we would, we will, we would, we won't.
We won't, because we won't have to,
because you will never challenge the supremacy of Big Tech.
Yeah, but Google is one of the greatest
high technology companies in history,
and it deserves to be celebrated,
and we need to just put this phrase to bed,
but founder mode, like Google feels like
at an individual level, people are taking massive autonomy
and really have a lot of momentum,
and even, I'm on jules.google right now,
and it's rate limited,
but it looks and feels
like the product that a startup would create.
It's good.
It looks like they just raised like $300 million on $3 billion.
They probably did from internally.
Internally.
Well, anyways, we should get out of here.
I did get some extra context on the Coinbase issue.
A friend of the show said,
Re Coinbase hack, he was sending it live
while we were in that.
He said, the information that is in the customer data
is sold at a granular level, i.e. info on the individual.
So it'll go on a dark web market and be sold individually
based on the Bitcoin balance of the individual.
Think of it like a standard three tier,
regular, silver and gold would be a balance over 50K.
And these accounts are already vulnerable
to like SIM swapping and things like that.
So anyways, appreciate the Intel,
the Lone Ranger shot that over to us.
Very helpful.
But anyway.
Last thing, congratulations to Tyler Cowen, a friend of the show who's made the Time Magazine
list of the 100 most influential people in philanthropy.
So congratulations to Tyler Cowen.
And we will see you tomorrow.
We will see you tomorrow folks.
Have a great rest of your day.
We'll see you on Wednesday.
Fun show.
Bye.