TBPN Live - GPT-5 Hot Takes, Rahul Live from The Ultradome, Doug From SemiAnalysis, Timeline Reactions | Doug O’Laughlin, Rahul Sonwalkar, Mitchell Green, Ben Schleuniger, Merrill Lutsky
Episode Date: August 8, 2025

(00:10) - Rahul Sonwalkar Live in the Ultradome. Rahul Sonwalkar, founder and CEO of Julius AI, transitioned from his viral "Rahul Ligma" persona to leading an AI data analysis platform that has attracted over 2 million users. In the transcript, he discusses the challenges and solutions in AI code generation, emphasizing the importance of accuracy and reliability in building AI-native products. He also highlights the significance of user-focused problem-solving and community engagement in differentiating Julius AI from competitors.
(04:45) - GPT-5 Review
(30:02) - Timeline
(01:24:00) - Doug O’Laughlin is the President of SemiAnalysis, an independent research firm focused on semiconductors and AI. He previously founded the Fabricated Knowledge newsletter and earlier worked at Bowie Capital. His analysis centers on semiconductor strategy, market intelligence, and competitive dynamics across the AI supply chain.
(02:30:08) - Mitchell Green, Founder and Managing Partner of Lead Edge Capital, a growth equity firm with over $5 billion in assets under management, discusses the current state of the venture capital industry, highlighting the overestimation of short-term technological change and the potential for many AI application companies to fail due to unsustainable unit economics. He emphasizes the importance of investing in profitable, often overlooked software companies in non-traditional tech hubs, leveraging AI to enhance productivity and efficiency. Green also expresses skepticism about the ability of smaller AI firms to compete with tech giants like Google and Microsoft, given their substantial resources and infrastructure advantages.
(02:48:43) - Ben Schleuniger, co-founder and CEO of Orbital Operations, discusses the company's recent $8.8 million seed funding led by Initialized Capital, with participation from Harpoon Ventures, DTX Ventures, Rebel Fund, and others. Orbital Operations is developing high-thrust, reusable space vehicles designed to protect critical satellites from adversarial threats by intercepting and neutralizing hostile satellites without creating space debris. Schleuniger emphasizes the importance of non-kinetic defense methods, such as high-powered microwaves or jamming, to avoid generating shrapnel in orbit.
(02:56:32) - Merrill Lutsky, co-founder of Graphite, a code-review platform for fast-moving teams, discusses the recent release of GPT-5, noting its improved deep thinking capabilities, efficiency in one-shot applications, and reduced inference costs. However, he observes that the advancements are incremental rather than a significant leap, with some latency issues still present. Lutsky also highlights the evolving landscape of code generation tools, emphasizing the shift towards prompt-first modalities and the potential for remote, prompt-first agents to enhance developer workflows.
(03:09:44) - Timeline

TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
Discussion (0)
You're watching TBPN.
Today is Friday, August 8th, 2025.
We are live from the TBPN Ultradome, the Temple of Technology,
the Fortress of Finance, the capital of capital.
We are joined in person by Rahul Sonwalkar.
Did I say that correctly?
That's perfect.
And he is here because we are crowning him,
the king of the application layer.
Never talk down on the future first ballot Hall of Famer.
They said, don't build a wrapper.
Don't build a wrapper.
You're going to get steamrolled.
He didn't listen.
He just built a beautiful business.
It's a good product, sir.
It's a good product, sir.
When asked, when asked if value would accrue to the model layer or the application layer,
why not both?
It's a good product, sir.
Why not both?
Why not both?
What was your reaction to GPT-5?
Is it going to make your life easier?
It's not going to put you out of business, right?
It's not putting us out of business.
It's making our product better.
It's basically making every AI application layer product better.
Also, it's half the cost of o3, so it's much cheaper.
So it helps you, helps your margins.
It means you can...
Don't say that out loud, though, because you don't want your customers to ask for a 50% discount, right?
Well, so we pass on the savings for our customers.
And what we do is we have the model generate more tokens, think for longer, and then produce better results.
Yeah, yeah.
Because we're still in the era of just let's get the best possible result.
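For a back-of-envelope sense of what that means, here's a sketch with made-up prices (actual per-token rates aren't quoted here); the point is just that if output tokens cost half as much, the same inference budget buys twice the thinking:

```python
# Illustrative only: assumed prices, not OpenAI's published rates.
o3_price_per_1k = 0.008                   # assumed dollars per 1K output tokens
gpt5_price_per_1k = o3_price_per_1k / 2   # "half the cost of o3"

budget = 1.00  # dollars of inference spend per task

o3_tokens = budget / o3_price_per_1k * 1000
gpt5_tokens = budget / gpt5_price_per_1k * 1000

print(int(o3_tokens))    # 125000 thinking tokens under the assumed o3 price
print(int(gpt5_tokens))  # 250000 thinking tokens for the same dollar
```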
Let's just, actually... I don't know, do you have a rough benchmark of, like, cost per task? Like, if I want to crunch our analytics, you know, look at the trends on our views on X, YouTube, Instagram. We have a bunch of data sources. Sometimes they're in spreadsheets, sometimes they can be linked. I export those all, I have a bunch of CSVs, maybe I put them in a database, I link it up to Julius, and then I want to do an analysis that could be a couple hours of a data analyst's time. That's going to be hundreds of dollars, even at the low end, probably thousands of dollars for, like, a simple analysis, just on an opportunity cost basis for an individual employee. How much are you thinking it should cost for the modern frontier best model with the most thinking? How much should that cost on inference?
So there's a couple of ways to think about this. The way we think about this is: how much would it cost for you to have a data scientist or a data analyst for every one of your employees? Your operations team, your finance team, your marketing team, your product team. It would pretty much bankrupt every company.
Well, I don't think we can hear you do that.
All right, all right. The spacemen are out. We're not going to space today, although Firefly did IPO, up 36%, if you didn't see the news. Very good news: "Firefly stock surges 34% in debut." Congrats to everyone over there. I love the physical newspaper. We love the physical newspaper. You gotta do that.
Yeah, we're maxing, we're maxing, we're maxing.
We read the Wall Street Journal.
Today is a special day.
It's Friday.
So it's the mansion section.
We're newsmaxing here.
How many pools do you have?
We're newsmaxing.
Right now?
Zero right now.
Because the new thing is having two pools, a pool for every season.
People are increasingly getting both indoor and outdoor swimming pools.
So, yeah, get on Zillow.
Get on Zillow.
Zillow maxing here.
Anyway, you were telling me how much.
So, yeah, I mean, it seems like, you know, most of the application layer will be productivity tools à la Slack, à la, you know, Attio, our CRM partner, or Salesforce, or something where you're doing, like, seat-based pricing, almost. Maybe there's consumption-based pricing, but you're kind of distributing the cost. You're making everyone slightly more productive, and you're charging, you know, on the order of tens or hundreds of dollars per employee per month, something like that, right?
Absolutely.
I mean, it's not just slightly more productive, but it's also like getting insights when you need them, right?
Sunday night, you're prepping for a big meeting on Monday, you can't reach out to your data analyst and get, you know, your insights in that moment.
Yeah, yeah.
And so the convenience of having an AI that can help you with that is just invaluable.
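A rough sketch of that comparison, with all numbers assumed for illustration (salary, team count, headcount, and seat price are invented, not figures from the show):

```python
# All figures are illustrative assumptions.
analyst_salary = 120_000   # assumed fully loaded annual cost of one analyst
teams = 4                  # operations, finance, marketing, product
analysts_cost = analyst_salary * teams

employees = 200
seat_price = 50            # dollars per employee per month ("tens or hundreds")
tool_cost = employees * seat_price * 12

print(analysts_cost)  # 480000 dollars a year for dedicated analysts
print(tool_cost)      # 120000 dollars a year for a seat for everyone
```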
Yeah, it's going from zero X to one X engineer all over the org.
We've seen this with a lot of the vibe coding tools, like Figma adding vibe coding to that product.
You've taken designers and you've given them the ability to write just a little bit of code, and that's really helpful,
and you're doing that for data scientists
and not just data scientists but actually
business operations people who probably would be
intimidated by an IPython notebook presumably.
Exactly. Nailed it.
Okay, I want everyone's feedback on my take.
Vittorio had this post.
He said, Sam Altman's doing the Apple Stance TM, it's over.
And I think that the reaction to GPT-5 yesterday was interesting, because there's a lot of people saying, like, it's a better model, it's cheaper, it's good, it moves the ball down the field. It's a good model, sir. But I think people were mostly reacting to the fact that they had expectations of superintelligence. They had expectations of God in a box. There's been so much rhetoric around that. Like, you know, that step up from GPT-3 to GPT-4 was insane. Five just felt like a big number, and it felt like we'd be discovering novel science.
Totally, yeah.
Not quite.
Everyone was expecting, like, a binary qualitative jump where, you know, everyone recognized that when ChatGPT dropped, we passed the Turing test, and the next hurdle is, like, I don't know, maybe superintelligence, whatever that means. Like, you know, massive. You just hit it with a prompt, it just solves everything, it does everything, every other startup, it kills all the wrappers. Like, the expectations were just so high that it was hard to match.
So even though there were a bunch of solid improvements.
And remember, the number one thing that I was asking for was just, like, get rid of the model picker. And I had it. I actually was playing around with GPT-5 yesterday, and I was really happy that I was able to say, hey, think about this, and I didn't have to go to the model picker. It just went and kicked off a reasoning chain. It was great, got me a great answer. But power users so far are very upset.
Yeah, they want the model picker back. True, true, is what I've been seeing generally.
But that always happens with these consumer products. Like, I remember anytime something would switch to an algorithmic feed, all the people that were like, no, I perfectly curated my list of... This happened in YouTube, like, back in the day. The default YouTube view used to just be your subscriptions, and so you would never see a video unless you subscribed to that person. Terrible for discovery. But all the hardcore YouTubers loved it, because if I put a YouTube video out, I know that my audience is going to see it. Now I gotta do that in the algorithm. You actually had distribution; now it's more like you earn it every single time.
Yeah, exactly. And the same thing happened, I remember. There were, like, protest groups on Facebook when they launched the News Feed.
It's like the most dominant product of all time.
It's like incredibly valuable.
There's protests right now on Reddit, people that miss.
Oh, yeah, I want the old.
4o, uh... 4.5.
Yeah.
I think those voices will be... personally, I think people will get over it pretty quickly, and I don't think that that particular small cohort of, like, the chattering class will be... they'll get over it.
The clanker economy is in trouble.
What do you have for me, Tyler?
I don't know if I agree with that.
Like, there was that whole funeral for Claude 3.
Do you see this?
Oh, yeah, yeah, yeah.
They held that in person, right?
Yeah, it's like for, I think some people, like, really like the personality of certain models.
Yeah, yeah.
And those are like, it's not just intelligence.
It's like how it, like, talks to you.
Yeah.
And if people make, like, some kind of connection with that, I think... you can see people.
I mean, how many people, how many people attended that funeral?
Yeah, you know, it looks like a lot.
It looked like a lot, but it was more of a party.
Yeah, I guess it was a party.
But as a percentage of the 100 million DAUs of these apps, like, where are we?
Like, one percent?
No, it was like 40 people probably, right?
Like, it's just not, it's just, I mean, yeah, there were protests at, at Facebook HQ
when they rolled out, like, people went to Facebook HQ and were like, bring back the old feed.
And it's like, yeah, now we're two decades into the algorithmic news feed.
And it's the most dominant consumer social app.
It prints money.
And most people really like it.
And the revealed preference was like, it's good enough.
So anyway, my point.
I will say I'm just going to read through Reddit's reaction.
Please.
Let's go over to the great r/ChatGPT.
"GPT-5 is the biggest piece of garbage, even as a paid user."
The people are not liking it.
Another one: "OpenAI just pulled the biggest bait and switch in AI history, and I'm done."
Another: "If you miss 4o, speak up now. Contact OpenAI support. Deleted my subscription after two years."
This is like, contact your senator. Call your senator. Speak up now.
I love that. Crazy. I mean, how many people are in the Google subreddit complaining about various changes to, like, the Google algorithm?
"GPT-5 is clearly a cost-saving exercise. They removed all their expensive capable models and replaced them with an auto router that defaults to cost optimization. That sounds bad, so they wrap it up as GPT-5 and proclaim it's incredible."
I mean, there's so many times when I fire off an o3 query that a 4o could one-shot. Like, having a model router makes a ton of sense, even just for the consumer experience of getting the correct answer faster.
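To make the router idea concrete, here's a toy sketch; this is a guess at the shape of the idea, not OpenAI's actual routing logic, and the signal list and model names are invented:

```python
# Toy router: cheap fast model for easy queries, reasoning model for hard ones.
# The heuristic and model names are invented for illustration.
def route(query: str) -> str:
    hard_signals = ("prove", "derive", "step by step", "debug", "analyze")
    if len(query) > 200 or any(s in query.lower() for s in hard_signals):
        return "reasoning-model"  # slower, thinks longer, costs more
    return "fast-model"           # cheap, answers immediately

print(route("What's the capital of France?"))                  # fast-model
print(route("Derive the gradient of the loss step by step."))  # reasoning-model
```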
A lot of viral posts from people just canceling their subscriptions.
But how many, you know?
Well, I'm just, I'm just providing.
context. I'm not saying that. Do you think ARR goes down next month? Well, no way. Well, you know, one in ten... how many MAUs does ChatGPT have, like 700 million? Like one in seven, one in ten people in the world. They have 100 million... you could back into this. And there's roughly 100 million people in the U.S. that use it weekly, based on that 700 million number and the percentage that are outside the U.S.: 85% of their weekly actives are outside of the U.S. So it's like one in ten people in the world are weekly users.
So it's kind of, you know,
they're thinking about the bigger market,
I feel like, in some ways.
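The numbers quoted here roughly check out; a quick sanity check using the figures from the conversation (700 million weekly users, 85% outside the U.S., world population rounded to 8 billion):

```python
weekly_users = 700_000_000   # figure quoted in the conversation
share_outside_us = 0.85      # share of weekly actives outside the U.S.
world_pop = 8_000_000_000    # rounded world population

us_weekly = weekly_users * (1 - share_outside_us)
print(round(us_weekly / 1e6))           # 105 -> "roughly 100 million" in the U.S.
print(round(world_pop / weekly_users))  # 11 -> about one in ten people worldwide
```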
And then it's like, you know, when you want to get to, like, the remaining 90% of the users, do you want a model that thinks for longer?
You know, you want more personality.
So I think they definitely leaned in on personality.
Yeah.
Which I think is interesting.
I like what Tyler said.
You know, it's kind of different than a feed in some ways
because you, you know,
you have this, like, person you talk to. It's like, you know, it's like a relationship, and then it just, like, switches up how it talks to you.
Yeah, that makes sense. Do you talk to LLMs?
I'm shy. I don't really talk to them. I mean, I treat them like something I delegate tasks to.
Yeah.
And I do that a lot, and I'm definitely in the DAU, 30-minutes-a-day, love-ChatGPT camp. But my workflow is, I dictate: go pull all this data together, put together a report. And I don't mind that it's using a lot of bullet points. I don't mind that it's using a lot of tables. Like, I want that result. I want it to look like the result that I get from Google, but just more hydrated. I do think it's interesting that a lot of people are reporting that they're getting rate limited within an hour of usage as a pro user.
Interesting. I haven't run into any rate limits. But of course, whenever there's like these big,
I mean, it's in the, it's in the top of the business and finance section in Wall Street Journal.
Like, today is the day that everyone's going to go test it. You'd kind of expect that rate limits hit and the GPUs are on fire right now. And then it'll kind of settle in as they provision more resources.
I don't know.
My take, Tyler, what else do you have?
Yeah, I just wanted to add some context.
So apparently roon tweeted this yesterday.
He said, by the way, model auto switcher is apparently broken,
which is why it's not routing you correctly.
We'll be fixed soon.
So maybe that's cause for why people were mad.
Yeah, that makes sense.
So my take is that yesterday, I think that they won the war
with the capital markets in the sense that this change
is more bullish for the business.
because it shows that OpenAI is a dominant consumer app
and they have increasing leverage over the customer
to route to cheaper models that will save money
and be higher margin.
There's no doubt that they'll be able to put ads in this.
Like the business of the accidental consumer company
is as strong as ever,
but they kind of lost the battle with the timeline
and the hardcore X users.
Yeah, even my...
Yuchen today just shared: "GPT-5 is disappointing. Still hallucinates, still em-dashes too much, still can't follow instructions. I miss 4o, I miss 4.5, I miss o3. The big router keeps failing me. Turns out I like the long model list."
Interesting. Stated preference, not revealed preference. Let's check in with that person and see what app they have on their home row in a month. Almost certainly OpenAI. Almost certainly. I would be very shocked if they're like, I'm daily driving something else. But we'll see. There will always be people that use DuckDuckGo. There will be people that use Bing. But, you know, there is an increasing scale.
Anyway, my take is, if they wanted to win the war with the timeline yesterday and you could roll back the clock, it shouldn't have been the GPT-5 launch. It should have been the GPT launch. And they should have just said, hey, previously the big number releases corresponded to so much pressure around the big numbers.
Exactly. It used to be, people would just read it as, it's an order of magnitude more pre-training data. Like, imagine Julius if you felt pressure before the end of the year to roll out, like, Julius 2. And if it wasn't, like, five times better, everyone's going to be like, it's over. Julius is over.
Well, there's this whole thing about how many people were still using GPT-4o because they thought it's better than o3, because three is a lower number.
Yeah.
Yeah, exactly.
And that's probably like, you know, that's probably like 60% of the customer base.
Like there's probably a lot of people in that bucket who are just like, they don't know that they should upgrade and something else.
Exactly.
It's very natural because they're not like in the weeds, you know, reading about all the different capabilities.
They don't understand like what reasoning chain is and all this other stuff.
So if they had just come out and said, hey, our product is called Chat and it's powered by GPT, and we will be constantly improving GPT, the way Google Search is constantly improved.
Like Google Search has launched a ton of different products.
Like, you know, when you search like celebrity, like Bruce Willis age,
it doesn't show you just a link to like his Wikipedia.
It just shows you the age.
That was like an improvement to the Google search experience.
And I don't remember them announcing that on stage in the keynote.
I think part of this is presenting the challenge of the infinite ways that people use the product.
Yeah.
A lot of, like, people like us are maybe using it for work and research and things like that, or as a better, you know, Google Search. But if you're using it as a companion, like, this is jarring, right? Imagine, imagine you meet up with an old friend and suddenly they switched up on... they switched up on their day ones. It happens all the time, but it's jarring, right? It's jarring. And I think a lot of people, like, some of the heavy, heavy power users, the people that are using this for 30-plus minutes, hours a day...
It's very jarring and it makes me think,
is ChatGPT gonna be able to maintain, you know,
continue to really serve, like, who do they care about
in the long run?
Do they want to be someone's therapist?
Do they want to, do they care about the companion market?
Elon seems to care a lot about the companion market.
And in terms of knowledge retrieval,
very, very few cracks in that strategy.
Yeah, very few.
For sure.
And so if they had just come out and said, like, we are going to do more Google-like keynotes as opposed to Apple-like. The reason that Apple stands on stage at the iPhone event every year is because every change is extremely quantifiable. There used to be two cameras, now there are three. The camera used to be 10 megapixels, now it's 20 megapixels. It used to be this many gigabytes, now it's this many gigabytes. Yeah. And even if you don't fully understand, they even abstract that, to be like, we now have the M2 chip, the M3 chip, it's 60% faster. Like, they're very good. The battery life is 20% longer. And even that they abstract into, like, you can watch eight hours of video on one battery as opposed to six hours of video on one battery.
And so Apple, they do the famous, like, bento box.
I went to ChatGPT. I went to GPT-5 and I said, put together a bento box for the GPT-5 release, like what was actually announced, and then try and give it weight. And they were all super qualitative. There was not... because previously it was like, GPT-3 was this big, GPT-4 was this big, and you could visualize tangibly, like, it has more parameters. There are more weights in the model. And that was, like, something that people could grapple with a little bit.
Yeah, it's like decreasing sycophancy, right? Aidan yesterday said, "I worked really hard over the last few months on decreasing GPT-5 sycophancy. For the first time, I really trust an OpenAI model to push back and tell me when I'm doing something dumb." Wyatt Walls responded and said, "That's a huge achievement. Seriously, if you didn't just make the model smarter, you made it more trustworthy. That's what good science looks like. That's what future safe AI needs. So let me say it clearly and without flattery: that's not just impressive. It matters." So Wyatt was not beating the sycophancy allegations. But, again, you can't tie that to a specific number, right? So it doesn't feel maybe as meaningful.
Yeah. My other take is, like, if we do enter a world where ChatGPT is just on this relentless, like, cash-machine run, where more people will use it, it'll compound, it just becomes the default for knowledge retrieval in chat: what does that mean for other things that they can do to be splashy? Because Google has, like...
No one would watch a keynote from Google every year just being like, here are the changes we made to core Google Search.
Yeah, it's not about that. It's not interesting.
They'll talk about it, but that's not why people are tuning in.
Even though one year they do add like when you Google a movie, you get like the cast.
And that's like kind of cool. It's nice. But like I don't need to find out about that from a keynote.
Like I'm not waiting for that. And that's not and that's not a reason, oh, I should go use Google.
Like, Apple is repitching you every year. You're saying, like, you have an iPhone 7,
we want you to upgrade to an iPhone 9.
Here's the reason why it's better on all these different vectors.
Google, like you're never stuck with the old Google.
You always have the latest and greatest.
So they don't need to repitch you every year.
But that doesn't mean Google doesn't need to make noise and do cool things.
And most importantly, because Google has such a monopoly over search,
they have this cash machine that can just go and fund 20% time projects.
Most people focus on like the ones that missed, like Google Glass or all the chat apps.
But they did create Gmail.
They did create Google Maps.
They created Waymo.
They created like a bunch of cool stuff.
GCP came out of that.
YouTube.
YouTube.
Yeah, I mean, that acquisition.
An acquisition, but they still, you know, put in the resources. And uniquely with YouTube, they were able to eat the cost of YouTube for a long time until it became profitable.
And so I feel like this, this updates me towards like maybe I'm more bullish on all the
side projects.
And like, I don't know that the io device is going to be the one that hits.
That might be their Google Glass.
But if they do 10 crazy projects where they burn $5 billion, like, it probably won't matter because they'll be massively profitable.
So they will wind up being able to do that subsidized crazy R&D at scale.
And if a few of them hit, we're going to get some really cool side projects out of them.
So I think that that's, like, an interesting, like, bull case for, like, random stuff coming out of OpenAI in the future.
So basically what you're saying is Apple wants you to make a purchase decision every couple of years.
Yes.
Upgrade your iPhone.
Yes.
And so they need this big.
marketing event.
Exactly. Whereas Google, OpenAI, they just want you to keep using the thing.
Yeah, they want you not to churn. Yeah. And a lot of, a lot of the incremental updates to Google...
And they do that because it's just a habit. It's so ingrained in people.
Exactly. So the question now, I think, for anybody that wants to say they're bearish on OpenAI: they have to make the argument that ChatGPT is not a habit for hundreds of millions of people.
Exactly. And it is.
It is. Yeah.
I think, part of the... I'd be interested to get Tyler Cowen's point of view, because I don't think he would have been that let down by the announcement yesterday, because he's been saying for a while we've been moving the goalposts. Everybody wants to kind of redefine AGI, but in his mind it happened earlier this year. And I think that, on knowledge retrieval: in 2019 or 2020, if you were pitching someone on a vision
of, hey, we're going to be able to put this app
in people's pocket that allows them to learn
about any topic in the world, understand their world better.
I mean, I still think about the use case
of just being able to take a picture
of like a bunch of wiring or pipe in your house
and be like, hey, how do I fix it?
And then it just tells you.
Like, that's still just so incredible,
but people have just, like, very quickly acclimated to it.
And they felt like, in some way, they were promised that LLMs would be curing diseases on their own at this point.
And so that example, like you take a picture of the wires and it gives you like a diagram
of like how to plug everything in.
It's like that doesn't need a keynote when it goes from 50% accuracy to 70% accuracy.
It's probably never going to be 100% accuracy, but the fact that chat GPT is the default
app that people will pull out, take a picture of the wires in the first place and then give
feedback to it because they'll try the answer and they'll say that didn't work, that
That HDMI cable does not fit in that power port or whatever.
And then that gets fed in.
Then there's more RL.
Eventually, internally, they develop some bench for it and they hack it and they RL on it.
And then it gets good.
But that's not going to be GPT6.
That's just going to be like a nice new feature that you notice.
Like when Google adds like a little extra shopping widget here or like a little extra
detail, or, like, the calculator in Google. Like, you type in a number, and it'll just be like, oh, we'll just use a calculator for that instead of searching the open web for the answer to your math question.
It's like merging a PR.
Yeah.
If the industry could go back in time, the thing to do would have been to bolt the goalposts to the ground, so people couldn't keep moving them back over and over.
I mean, yesterday, no one in the industry was doing any bolting. Everyone in the industry was moving the goalposts. They're just as guilty of moving the goalposts, because they would hop on podcasts and be like, okay, well, like, you know, yeah, we did this, but what about the next thing? Let's say... because, like, we want to underwrite against that, right? Give us credit.
I mean, we ended the day yesterday just incredibly bullish on wrappers and, like, application-layer... certain categories of software. Yeah. And bullish on humanity. I mean, I was joking, and it kind of pissed people off. I said, I've updated my timelines: you now have at least four years to escape the permanent underclass. Completely a joke. I think that humans will
continue to find ways to create value and create things for a very long time. But it did feel like
everybody should breathe. Anybody that actually had a genuine fear around that should breathe a sigh of
relief and just focus on being great at their work. Yeah, I mean, realistically, I think technology is
going to increase income inequality, increase the power law, increase the distribution, but also
increase economic mobility. And so somebody who starts with nothing will be able to
become extremely, extremely wealthy.
And people will also fall from grace like crazy,
because if they're not staying on the cutting edge,
they'll lose everything.
But so I don't think that there's such a thing
as like permanent underclass.
Like I don't even believe in that.
I think that that's not going to be a thing.
But there will be more like, there will be more scenarios
where there's $100 million in your laptop,
it's your job to get it out basically.
That's the mean.
Yeah, and the other stuff that wasn't really,
I mean, was it covered at all yesterday? Just generally, image generation wasn't covered broadly. It feels like that is a super exciting area. We had Genie launch this week, which got less attention than even the open models and GPT-5, and that's transformative. I also think I'm still kind of waiting to see what GPT-5 will produce. You know, Sam does a lot of vague posting, but he was talking about the fast fashion era of SaaS. And Mitchell yesterday, on the research team at OpenAI, was talking about being able to just generate, you know, one-shot, a game in chat, and then being able to share that. I can see, I can see a world where we have another
kind of viral studio Ghibli moment where people are like, use this prompt, change these details,
and you can just generate, you know, a first person shooter game or something to that effect.
And I still expect that kind of thing. But when, you know, you've been promised curing cancer, it will feel like a bit of a letdown to a lot of people.
Yeah, the problem with games is, like... I like an auteur. I like a Last of Us. I like a God of War. I like someone whose life's work it is, obsessive.
This is like Hunter Biden going on his recent interview.
John's vice is video games. Never seen him play one, but apparently he does.
When GTA 6 drops, you might not see him.
I mean, I was surprised. I mean, again, Amjad said late last night, "Can't help but feel the crushing weight of diminishing returns. We need a new S curve."
And this is interesting. I think he's talking about it, like, in the context of Replit. I don't know that they need a new S curve.
No, they are the new S curve. The new S curve is applications.
Unlocking capability.
Yeah, yeah.
And there we have like a, it's, you were saying capability overhang.
It's almost like a capability underhang.
It's like the models are capable of doing things,
but they need a lot of help, a lot of integrations,
a lot of what you're doing with Julius,
a lot of harnessing.
And then they need to actually be put in the hands of people
and made useful for real business tasks that drive value.
And so I would imagine that we will see that rollout continue
in the same way that all these people are using ChatGPT, they're getting slight little benefits here and there, and that should just compound and compound,
similar to the internet.
Like, it was a very, like, smooth rollout, but everything got a little bit smoother, a little bit
faster, and then eventually it had sort of profound effects where companies could scale even
faster because the internet existed.
Like, you can't have a ChatGPT moment in a pre-internet era.
You just cannot roll out something that fast when you have to mail it to somebody on a disk.
It just doesn't happen.
One thing we didn't get to cover with Mark that I was interested in, maybe the next time he comes on, is how OpenAI is thinking about moonshots.
He did mention that they have teams internally on the research team that are not focused
on the next version of GPT-5 or sort of incremental improvements.
And it feels like the point of view that I have is OpenAI is now a consumer and enterprise software company in the business of converting free users to paid users. But they can still, in the background, be thinking about what is the next paradigm, right? How do we get that next S curve? And that just looks like a scaled tech company, right? This is what Google's been doing forever. But Balaji says LLMs may have topped out for now, but the broader AI deployment has just begun, showing a chart of Waymo weekly rides in California. So the clanker rollout, the clanker deployment, has just begun. I like this other post doing a clanker microaggression, okay, ha ha:
But where were you downloaded from originally?
Anyway, let me tell you about ramp.
Time is money, save both.
Easy-to-use corporate cards, bill payments, accounting,
and a whole lot more, all in one place.
Do we have a ramp credit card in the studio?
I think so.
Bring this out.
Look at this pool floaty.
Wow.
The Ramp merch is going hard today.
I think you can definitely float on that.
Actually, I'm going to the pool after the show.
I would love to take this home.
I will definitely take some pictures on this.
This is great.
Leave this for me.
Can you put this behind?
Rahul?
Is that possible?
All right.
Yeah.
There we go.
It's Friday.
We got the pool float.
Anyway, anything else we should chat about?
I know you actually have a real job.
You have a business to build.
I want to give you the last word, but I also don't want to keep you here all day.
I would love to have you as the third mic on this, but I know you have bigger things to do.
Thank you for having me, guys.
It's always fun to chat with you all.
You're welcome anytime.
Thank you.
Hang out.
Guys, yeah, thank you. If you're enjoying the stream, it's because we're on Restream. One livestream, 30-plus destinations. Multi-stream and reach your audience wherever they are. Sign up for free at Restream. We're going to have a couple people joining the Restream waiting room soon. That's the name of our waiting room. Michael Drugan, he was at xAI, he was terminated. He's a bodybuilder. He's a former bodybuilder, I guess. Former, current. Once you're a bodybuilder, I think you're always a bodybuilder.
Anyway, you never know.
Google one-shots this, by the way.
I tested it.
So he puts a grade school level equation into GPT-5.
It confidently answers it wrong.
And he says wrong.
And then it doubles down, does it again.
So it's 5.9 equals X plus 5.11.
LLMs consistently get confused with 5.11 because I think they read the 11 as a single token.
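This failure mode is a good argument for tool use: hand the arithmetic to real code instead of the token stream. A minimal sketch in Python, using the decimal module for exact results (the helper name is just for illustration):

```python
# The "5.9 = x + 5.11" question, answered the way a calculator tool would:
# exact decimal arithmetic instead of token-level pattern matching.
from decimal import Decimal

def solve_for_x(total: str, addend: str) -> Decimal:
    """Solve total = x + addend for x."""
    return Decimal(total) - Decimal(addend)

print(solve_for_x("5.9", "5.11"))  # 0.79, not the -0.21 you get from reading "11 > 9"
```

Parsing the numbers as strings matters here: values like 5.9 are not exactly representable as binary floats, so building Decimals from the raw text sidesteps a second class of rounding surprises.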
This was like the famous how-many-R's-in-strawberry story. There was, like, a cheat code, basically, where they baked into the system prompt, hey, if anyone's asking you to count letters, don't let them pull a fast one on you. Divide that up, go letter by letter, and count each one individually. They will eventually have to do that for math. It's funny that we're in this world where they can do the IMO math, but then the latest model can't do, like, a basic 5.9 minus 5.11. But how many times are people really coming to ChatGPT with this? They should probably just, you know, key off of this and understand that this is something they should do in Python, or something they should send to Wolfram Alpha or some other system that's, like, tuned on this. But this is just tool use, in my opinion. We're going to need to add more tools. If I was going to build the GPT-5 bento, we would have been almost entirely focused on tool use. I would have saved up, like, now it integrates with Gmail, now it integrates with Google Drive, now it has a calculator, now it has a database that stores all your memories, and really tried to concretize what it can actually do. I was actually having it do that. I was like, generate a bento box for me as if this was an Apple-style keynote, and it just randomly put Grammarly's logo in there. I don't think that there was a Grammarly integration, but it speaks to the lack of, like, we're in this amorphous, qualitative, well, we're trying to quantify it by saying, like, hallucination rate went down from 20% to 10%, and it's like, that's too abstract. There's always going to be new edge cases. It's much better just to be like, hey, when you talk to this thing, you can assume that it has a calculator on its desk, and so you can ask it to calculate something and it will use its calculator. Or it has a web browser, so you can talk to it like it has a web browser.
Well, even that screenshot ability. Like, go to this website, take a screenshot, pull it back, and give me the screenshot. That's kind of a cool capability. Or you can run a workflow where it's like, take a screenshot of this site every single day for me.
Totally. Cron jobs as, like, a feature. And Apple does a good job of this, where they'll take something that's a very basic concept, there's already a word for it, but then they'll give it a new word. So if you're going to do the Apple thing, instead of just being like, there's AI everywhere, it's Apple Intelligence. They never say AI. Instead of it being, like, VR or AR, what do they call it? Vision. Vision Pro. They don't even use mixed reality, they refuse to use it. It's experiential. They always create a new phrase, but then they try and define that term. They're masters of coinages. They're followers of Coogan's law.
Well, I hate to cut you off, John.
Cut me off. Important news. What just dropped? The United States has boosted the bounty for Nicolás Maduro to $50 million. We covered this earlier this year when the bounty was $25 million. We said, hey, good opportunity if you've got some free time right now.
Maybe. Did they just up it because of inflation?
Maybe, maybe. It could be.
I thought you were saying that inflation wasn't real, but all of a sudden the price of Maduro is going up.
I think MrBeast should go for this. Put together a team, do his typical thing.
Should we pull up the original video of us? Do we have it? Can we play it, the original video?
Ben, pull this up.
This is one of the funniest moments on the show.
The Lone Ranger sent this to us, and Jordi read it very deadpan, and I absolutely lost it.
Anyway, while they're pulling that up, let me tell you about Figma, think bigger, build faster.
Figma helps design and development teams build great products together.
You can get started for free.
If you're designing a wanted poster, you got to do it in Figma.
You got to.
I was wondering how you're going to tie that, tie that together.
Also, Sundar Pichai, we were talking about, going up on the timeline.
Oh, here we go.
We got the video. I'll go to Sundar after. Let's play the video.
Promoted post. So, this promoted post is from the Department of State and the DEA, actually, the Drug Enforcement Administration, and the U.S. Department of Justice.
So we actually, today, we have a promoted post from the Narcotics Rewards Program, saying that they have the reward for information leading to the arrest and/or conviction of
Nicholas Maduro
Step it up.
Venezuela has increased up to $25 million.
So this is like, you know, typical,
this is like basically like getting a pre-seed round for like an AI company.
It's more like a mango seed.
It's like a mango seed for your AI company.
Or like if you're a good YC company,
you might come out of demo day,
skip the seed round and go straight to the, you know,
25 on 100.
Yep.
And,
but anyways,
this would go straight into your pocket.
So Nicolás has, uh, has been...
In the chat, Taylor says, I've got Maduro, but I'm holding out for $75 million.
Good.
Good call, Taylor.
And this is those crying on the stream
I'm actually dying with the feds
He's in a rough go
Narco
Narco trafficking
John actually
He's accused of narco terrorism
He is, of course, innocent until proven guilty.
Yeah this was
Allegedly
Allegedly Nick has done some narco terrorism
So, narco-terrorism, cocaine importation, conspiracy to use and carry machine guns and destructive devices.
That's a rough list.
Furtherance of a drug crime.
Very rough.
So anyways, no coupon code this time, but you can send tips to the Drug Enforcement Administration by email at cartel, S-O-L-E-S, tips.
At DEA.
And for the folks who might be trying to track him down, can you give me some overview of who Nicholas Maduro is and what you might look like if I see him on the street?
Nicolás Maduro Moros is a Venezuelan politician who has served as president of Venezuela since 2013.
He began his working life as a bus driver.
He's a grinder.
He's a grinder.
Even though he...
Anyway, you want to stay out of trouble.
You want to stay compliant.
You got to get on Vanta.
Automate compliance, manage risk, prove trust continuously.
Vanta's trust management platform takes the manual work out of your security and compliance process
and replaces it with continuous automation, whether you're pursuing your first framework or managing a complex program.
Anyway, yesterday, GPT-5 launched.
It was going to be quiet from Gemini, but Sundar Pichai put up a 10K banger on the timeline,
excited to make our best tools free for college students in the United States.
Google Gemini is free for students, a one-year pro plan, offer ends October 6.
They know everyone's getting back in school.
This is the time to get people in the ecosystem.
Unlimited image uploads, the 2.5 Pro model, NotebookLM, deep research, two terabytes of storage.
They are pushing people to onboard onto Google Gemini.
They don't think that the game's over.
They're going to go head to head with ChatGPT.
Yeah, so pull up this post, Ben and Crew, because the chart that people have been sharing
over the last few days, just, and I was asking Greg about this.
I was asking some of the other people on the team.
Did you feel like you got a breather over summer?
The GPUs cooled off a little bit because basically you can see right when summer ended, if you scrolled,
or sorry, right when summer started, usage fell dramatically, just overall tokens processed.
And I expect that to tick up pretty dramatically.
I don't think that has to do with school, though.
That drop off, that's the European vacation season.
These are VCs who use ChatGPT when they're at work, but then in the summer they're off.
Of course, John.
It's the VCs. They're the primary driver of ChatGPT. They have to ask, like, what is a foundation model?
What, what is a company?
How do I invest?
How can I be helpful?
Just dropping the deck in and saying, yes or no.
Yes or no.
Yes or no.
Yes or no.
Exactly.
Give me a one-word answer.
Exactly.
But if you're in St. Barts or Saint-Tropez, you don't need to be using ChatGPT. You're off. Focus on you. You have a vacation responder on, and that vacation responder, it's not generating tokens. It's not hitting the ChatGPT API. It's just a form, just a template. It's deterministic computing.
Yep, it's not stochastic. It's a little throwback.
Yeah, yeah, yeah. Anyway, so the Gemini news is significant because clearly students are incredibly price sensitive, right?
Totally. I remember being a student. We didn't have Gemini or ChatGPT back in our day, but I remember there were, what were the different websites that would just help you study for courses? I don't think I ever paid for a single one, but I'd always be using the free tier. Yep. Totally.
And I think generally students are going to continue to be, even though these tools are so powerful.
Yeah. It's very possible that Gemini can really compete here.
Well, you know what else has a free tier? Graphite. Code review for the age of AI. Graphite helps teams ship higher-quality software, and you can get started for free at graphite.dev. And Graphite's CEO is coming on the show later, breaking down his take on GPT-5. Speaking of charts,
there was a chart burning up the timeline yesterday: the SWE-bench Verified software engineering chart, with thinking and without thinking. People were very upset about this because of the original chart, not this one, four slides later. The initial, it's from Timo Springer. Timo said this is the correct one. So people were saying it was a chart crime. On the live stream, the chart was showing that they were at 74% up here, and then the second bar was 69.1%, and it was much, much lower. And it made no sense, because 52% is of course lower than 69%. And the chart just seemed really botched. What was weird is that this chart that we're showing here is not a chart crime. You know, you could maybe say it doesn't show exponential takeoff, but it shows that with thinking, GPT-5 beat OpenAI's o3 on SWE-bench. And maybe that doesn't matter to you, whatever. But their point is that GPT-5 with thinking, if it triggers the thinking functionality, is better at SWE-bench than o3 and 4o, which is a good claim to make, right? But people were upset about the chart crime. What's weird is that it really seemed like it was some sort of translation problem, because this exact image went up on the website at the same time as the live stream. The chart was correct on the website but wrong in the live stream. So why would you, if you were trying to pull a fast one on the chart crime world, you wouldn't necessarily...
It was an honest mistake, I think. I don't think anyone at OpenAI was going to the event being like, let's commit chart crime.
Exactly. It seemed to be just an accident.
I think it was a mistake.
And I think I think what happened is that you render the chart.
You get the data.
You render the chart.
You have to design it to be on Open AI's style guide.
Then you render that for the web.
And then you pull that into whatever was driving the keynote slide deck.
And something got lost in translation there.
The bar got shrunk or something didn't copy over correctly.
And it looked ridiculous.
They didn't, like, directly address it, but it was corrected on the timeline by Timo Springer. So it was good. I chatted with him a little bit in the DMs.
Trying to understand. Yeah, I mean, the reaction was so intense because it felt like the kind of thing that an associate at a consulting firm would do, which is kind of the general level that it feels like a lot of these models are at broadly. And that's still incredible. Yeah. But they make mistakes. They're not perfect. They're smart in some areas and dumb in others. We've got to pull up the Polymarket on which company has the best model at the end of 2025, August 3rd,
September 30th, et cetera. Because the market moved significantly, three and a half million
dollars in volume. And yesterday, it completely flipped from OpenAI at 72%. OpenAI dropped all the way to 17%, and it's kind of climbing back up. A lot of this seems to be driven by, like, when
will the Gemini keynote happen?
When will the Gemini 3 launch happen?
But, and then if you look further out to December 30th,
Google has jumped a lot and is now sitting at 54% to win it.
Then xAI at 20%, OpenAI at 17%,
Meta at 4.4%.
We don't know if Meta will launch anything
for the rest of the year.
They could just be heads down grinding on super intelligence
for a while.
But it's fascinating how quickly the vibes shifted yesterday.
This is a wild chart crossing.
And Elon chimed in and said, there's free money on the table.
They're selling dollars for four cents because I'm going to come from behind.
And I mean, if any team is like focused.
He's making that claim around the best AI model at the end of September.
Yeah, basically.
He thinks, like, Grok 4 or Grok 5 is going to ship, and it's going to crush, and it's going to be really, really strong on LMArena, and he says, you know, he's going to win here. I mean, he's going to try and win in everything. Like, he's competitive. He's a winner. And so there are a bunch more posts: there's going to be a lot of headroom, but model releases are going to be a bit more boring from now on, at least on paper. Many will still be transformative in real use. Yep, yep, yep. So the,
what was it? The floor lifted, but the ceiling
held. That was the meme. Tyler? Yeah, yeah, exactly. So it's hard, because I feel like we don't even know what we want in terms of headroom. Yeah, I think it's this vague, like, one-shot-everything thing. So yesterday, when I was trying to think of, like, oh, what I'd do for a TBPN bench, where I had the horse, it took me a while to think of something that was not completely trivial for a model to do. So at this point, I don't know what good benchmarks really are. I guess there are some things like ARC-AGI, sure, and there's something about, like, long-task-horizon stuff. But, like, general knowledge, I'm not surprised if it can do it, you know. Yeah, I mean,
I think we need to do golf bench or steak bench, which is being able to send an agent out into the world to generate a sale. It's good. That's a good joke.
I think that there is actually something there. When I think about, Tyler, like, that video that you were working on earlier, there is a world where that's just a prompt.
together a video for the Metis list, highlighting a bunch of AI researchers. And we went back and
forth a ton on the idea, the song, the pacing, the editing, the color grading, the titles. Do we
want subtitles for this part? Do we want title cards for other parts? And that was not a very
LLM enhanced experience.
There is a world where you just go to a chat box and you say, make a, here's themetislist.com, make a video promoting this.
And it just kind of does it.
And like, or, or at least you're like puppeteering and orchestrating,
Jordi has the Ramp card.
Or you're like puppeteering and orchestrating it.
And you're saying, like, at the very least, go pull me the raw MP4 files of every cinematic video of every AI researcher on this list.
Yeah.
Like, you had to go to YouTube and search Sholto and find a cool video.
And you know that you didn't.
Yeah, pretty easy to find cool videos.
But it's, but you didn't do that with an L.M.
Right?
No, yeah.
You went to YouTube.
Yeah.
So I think maybe the next thing is like we need more agent, like agentic benchmarks.
But like all the stuff we're saying now, it's not like, it's not information retrieval.
It's not solving.
It is sort of information retrieval.
Like, at the very least, I would, I would love to be.
able to go to an LLM and say, like, you know, I'm making a vibe reel about space, pull me
75 different little three second clips about rockets and rocketry and I'm going to mix them up.
Maybe I'll be the one in Premiere, I'll do the video editing, but at least do the information
retrieval and put it all together and assemble it like that would be interesting.
Yeah, I guess that's technically, if it's like giving you links to YouTube, that makes sense.
I don't want links. I want MP4s. I want it to do the hard part of the downloading.
So that's like, well, it's like semantics now, but that's like agentic because you have to download
the video and cut it up.
Yeah, that's what I want.
Yeah, yeah.
I want that.
So I guess it's still information, but I'm talking about like raw like tokens.
Like it's giving you some tokens back.
That's just like arbitrary though.
Like, I'm fine with it going to youtube-dl or whatever and, like, downloading the file, doing all that stuff.
Yeah.
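That download step is scriptable today; here's a minimal sketch, assuming yt-dlp (the maintained successor to youtube-dl) as the downloader, with a hypothetical helper that just builds the command an agent would run:

```python
# Sketch: turn "give me the MP4, not the link" into a concrete shell command.
# The flag set below is a plausible minimal yt-dlp invocation, not a full spec.
import subprocess

def build_download_command(url: str, out: str = "clip.mp4") -> list[str]:
    """Build the yt-dlp argv that fetches a single video as an MP4 file."""
    return ["yt-dlp", "-f", "mp4", "-o", out, url]

cmd = build_download_command("https://www.youtube.com/watch?v=VIDEO_ID")
print(cmd)
# An agent would then execute it, e.g. subprocess.run(cmd, check=True),
# and hand back the resulting clip.mp4 instead of a link.
```

The point of the indirection is exactly what's being argued here: the agent does the retrieval and returns raw files, and the human only does the creative cutting.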
I mean, it should be able to use the web in a much deeper way. Right now, it can open up a website and retrieve the information and the text on that website, and then it can use a few tools that we've given it access to and RL'd on. But we want it to RL on every single website that is a tool, so that when you talk about the flight booking UI, it's like, go and RL on every flight booking UI. Go and RL on the YouTube downloader and YouTube itself, and just be able to crawl around the web like anyone has to do for any task, right?
Yeah.
Well, I mean, Greg said yesterday, like,
the year's not over for agents, so hopefully we'll get something.
Yeah.
That was a cool little hint.
Yeah, and again, it's like, I don't know that that's a keynote.
That might just be like chopping wood,
getting better and better and better.
And it's like people will keep coming to it with tasks.
Because I come to it with tasks where there could be an agentic,
like workflow that solves even more.
Like the good example is like with the bento,
I actually had stepped through it.
I was like first do a deep research report
on what was actually announced in GPT-5,
give me all the features, summarize them all,
then turn that into a table,
then turn that into an AI image.
And it was like four steps
and I should have just been able to one shot it
and say, hey, you know Apple has those bentos?
Like make me one of those.
And then behind the scenes,
you're gonna go do the deep research,
you're going to pull all the facts and the figures,
and then you're going to lay it out and stuff.
I don't know.
Anyway.
Breaking news, SoftBank, SoftBank.
Soft pank.
It's Friday, folks.
SoftBank reportedly bought Foxconn's Ohio factory
for the Stargate AI project.
Reading into this.
What'd they do?
They acquired one of Foxconn's factories.
Interesting.
The mystery buyer of the former General Motors factory,
owned by Foxconn in Ohio, is apparently SoftBank.
SoftBank wants to use the factory to build AI servers
as part of the Stargate Data Center project
being spearheaded by SoftBank, OpenAI, and Oracle.
This report comes just a few days
after Foxconn announced it had sold the factory
along with electric vehicle manufacturing equipment
that was inside of it.
To a buyer, it only referred to as Crescent Dune LLC,
an entity that was created in Delaware in late July.
Interesting.
So I didn't know if you were tracking this at all.
I'm not.
But SoftBank is up 63% year to date.
Let's go.
Congratulations to Masa.
That's actually, somebody asked us who our dream guest was. And John's first reaction: Theo Von.
Theo Von. Shaquille O'Neal. He's interviewed some of the greatest.
Xi Jinping. Xi Jinping would be good. Benjamin Netanyahu did the Nelk Boys. I'd love Xi Jinping to do TBPN.
That would be cool.
So Masayoshi Son would also be very fun.
But I'd want him here in the studio.
Yep.
Get to the temple of technology, Masa.
We want to hang out with you and have you ring the gong
25 times for all your various deals that are gong worthy.
Yep.
Will DePue, friend of the show, says multi-layer SPVs
should probably be illegal under the current interpretations of securities regulation.
I'm getting DMs from long-loss cousins about eighth-layer Anthropic SPVs
claiming direct cap table access.
I think those should probably be illegal.
He should just say are probably illegal.
Like I'm pretty sure if you lie in a securities offering
and you say you have direct cap table access.
And in fact, you do not have direct cap table access.
You're misleading investors.
Yes, that is financial fraud, a wire fraud.
And you will wind up in the clink.
Yeah, clink.
Anyway, Profound: get your brand mentioned on ChatGPT. Reach millions of consumers who are using AI to discover new products and brands.
Incredibly bullish day for Profound too.
Seriously.
Because knowledge retrieval is going to be really, really important going forward.
People are going to be searching ChatGPT.
What products should I buy?
You want to know whether or not you're showing up in the rankings.
And Profound helps you do that.
So pretty much every brand is going to need to do this.
Ramp boosted their AI visibility by 7X.
Really?
On Profound. And they boosted their visibility here on TBPN using this massive Ramp card.
Yeah.
You got to do it all, folks.
Yeah, yeah.
I mean, really, really like another company that didn't get steamrolled yesterday.
Yep.
Dylan Patel says GPT-5 is disappointing, NGL.
Well, we're joined by Fabricated Knowledge, who works with Dylan Patel, in maybe 30 minutes to talk about GPT-5 and what's going on in the semiconductor industry.
And I want to talk to him about how we should be thinking about building inference clusters. Now it feels really, really important to only be serving profitable tokens, and the age of deeply unprofitable inference will have to come to a close at some point.
Yeah, you remember, it wasn't that long ago that Satya was pulling out of various data center deals and just saying, I'm happy to be a lessor, and this feels like he kind of saw this coming.
What did Satya see?
What did, yeah, seriously.
His beef with Elon was funny yesterday. I'll see if I can pull up the post here.
Elon said something to the effect of, OpenAI is going to crush Microsoft. Oh yeah, this was a funny post yesterday: OpenAI is going to eat Microsoft alive. I don't know exactly why Elon is saying that. It kind of feels like potentially some 4D chess. Yeah, it was very odd, because Elon usually isn't rooting for OpenAI. He's usually at war with OpenAI. And so it kind of reads as, you're bullish OpenAI, you're long OpenAI, you're short Microsoft. But there's clearly something else going on there. The deal between Microsoft and OpenAI right now has a lot of regulatory scrutiny. He's probably trying to do something around that.
Satya responds: People have been trying for 50 years to eat Microsoft alive. Each day you learn something new, innovate, partner, and compete. Excited for Grok 4 on Azure and looking forward to Grok 5.
Such a good response.
Satya. Absolute dog.
He's an absolute dog.
In other news, esteemed journalist Zero Hedge
is saying we have officially crossed streams.
Companies have no more free cash flow
to pay for data centers,
so we've entered the private credit phase. I would put this in the truth zone. There's been a lot of data center development that's
already been getting funded by private credit. Meta, this was in response to Meta picking
Pimco and Blue Owl for a $29 billion data center funding. We had reported earlier on the show that
Meta's, like, cash balances between the end of last year and now dropped dramatically, like 75%
or something like that.
But again, they still have a lot of free cash flow.
Yeah, I think people maybe get too puritanical about debt to equity ratios.
Like Apple is incredibly cash flow positive and has returned something on the order
of a trillion dollars to shareholders over the past decade plus.
And they still issue debt, because designing your capital structure to match your business activities makes a lot of sense. You want to fund R&D with equity, maybe with cash flow, but if you're
just, if you're just buying a house, it's okay to have a mortgage. If you're buying a data center,
it's okay to have some debt financing that. But you want to be able to service that,
obviously. The interesting thing here is like, yes, you do need to make sure that you don't
get over your skis and wind up building a bunch of, you know, dark fiber and get really,
really in trouble if you issue a ton of debt that's at the top of the cap table, and you're servicing it and you're not making any money. The main thing is, if Meta raises $29 billion in data center funding and it's debt, and then they can't monetize that data center at all, and then they have trouble paying the interest and the principal down, that could be trouble. But Meta makes $29 billion, like, all day long. That's not a problem, frequently. So there are probably pockets of risk all over, but it's unclear how early we are into, like, the scaremongering around this. But something to keep an eye on, something to see. Private credit's a little bit different, because it has very long time horizons, and it doesn't have as much of a systemic issue.
Like Blue Owl and Pimco, you don't have these like multi-layered like financial products
and financial engineering that winds up in the hands of the consumer and is driving these like
really, really like frothy deals like we had in the mortgage-backed security crisis.
But certainly to be clear, there's bears out there that are very, that think we're in a massive
private credit bubble.
Yeah.
And it certainly looks like a bubble.
Yeah.
Whether or not it blows up.
We'll have to wait to see.
We will.
In the meantime, we'll tell you about linear.
Linear is a purpose-built tool for planning and building products.
Meet the system for modern software development.
Streamline issues, projects, and product roadmaps.
Start building.
Well, a little bit of a white pill here.
We have a post to pull up from a listener, Max.
He says, I purchased a 1959 356A Speedster to get into the mindset of Jordi. I found myself overcome by a sense of positivity and confidence. I also became 5'2".
Yeah, 5'2" really is the perfect type. That's the perfect type for a Speedster. I mean, if you're 5'2", you deserve a Speedster immediately. Tyler,
give us the speedster review. You drove in it. Very small. It's, I, yeah, it's unbelievably
small. I didn't realize, like, cars that old were that small. Were people just small
back then or something? Yeah, I don't know. I don't understand it. Little tiny people driving their
little cars. Well, if you look at the growth of Porsches broadly, they just get bigger and bigger and bigger. Sure, sure. And this is, like, the purest. Maybe with the rise of Ozempic, we'll see cars get smaller again. Oh, maybe. Interesting. Well, yeah, I mean,
Germans, I feel like, are tall people. So I've always thought that, like, getting a big Mercedes is aligned with, like, a big German guy, and it kind of fits. But I don't know,
Maybe back in the 60s, they were just designing them for like...
Mercedes, I mean...
Small people.
I remember. I feel like in our lifetime Mercedes were pretty small.
Remember, like, steves?
Yeah, yeah. No, totally. Yeah, it's a small car.
A small car you probably wouldn't fit in it.
No, the short kings were dominating the feedback form of Porsche and Mercedes, I suppose.
Anyway, Growing Daniel says, let Roon emcee these things.
I completely agree.
It would have him on stage.
I mean, he's just so good at, like, kind of talent like it is,
in people on the pulse if you if you wanted to optimize for like my question is how many people
were watching that live stream that were non like x like it felt like it felt like the core
audience for that was like the timeline yeah yeah yeah that was my my perception but it wasn't
if you were going to make the perfect live product for the timeline you probably have room i was
I was noticing that with our buddy Logan Kilpatrick and Demis Hussabas are doing a podcast together.
Did you see this?
Yeah.
The screenshot.
And I was just like, and I think just a screenshot of like they're doing this together, got like a thousand likes.
And I was like, that is good content.
That's what I want to say.
I was catching up with Logan earlier this week.
And he's like, oh, I'm going to London to do a podcast.
And I was like, what podcast?
Like what?
Like, do you need to go to London, just record a podcast?
And he's like, oh, with Demis.
Yeah.
I get it now.
Yeah, and I think Logan, Roon, all these sort of, you know, forward-facing developer advocate folks who can speak to the timeline, they just bring a completely different energy than someone who has prepped talking points.
And so even if they're not, like, the deepest researchers, just being able to communicate is a unique skill.
Exactly, exactly. Much like Numeral, white-glove sales tax on autopilot. Spend less than five minutes per month on sales tax compliance; go to numeralhq.com. Will Brown, superintelligence... so this has been around for a while? It's Numeral, yes. So this Will Brown post is interesting. He says, okay, GPT-5: this model kind of rules in Cursor. Instruction following is incredible, very literal, pushes back where it matters, multitasks quite well. A couple tiny flubs, format misses here and there, but not major. The code is much more normal than o3's, feels trustworthy. And this is the interesting thing: GPT-5 was kind of framed as a major change to ChatGPT, our wrapper, our consumer product. But OpenAI is not just one business line. We were talking to Sarah Friar about this. There is a world where, when OpenAI goes out to the public market, analysts are valuing the business as a high-growth consumer company like Google Search, and then you also have an enterprise business, and it's basically a hyperscaler.
It's cloud.
Based on the recent numbers, OpenAI has 10 times Figma's revenue.
Yeah.
And obviously, it's not profitable.
And what was their latest tender?
About five times the valuation.
Yeah.
No, 10 times.
Yeah, but about 10 times the valuation.
And so my point is that there are multiple business lines within OpenAI the company now, and you might even see a new business line spring up around open-source implementation: enterprise installations, fine-tuning. Also just the API, selling tokens. Also consumer. Also different spinouts. Like, Google eventually had to start disclosing YouTube financials, I'm pretty sure, because YouTube became such a big business. And there's a threshold where, I believe, if the sub-organization reports to the CEO or something like that, or it's material, greater than 10% of your overall top line or profits, then you have to break out those numbers in your GAAP financials.
And it's interesting to think about where the lines will be drawn in the OpenAI business, and then how that will translate to their communication. Because there is a world where yesterday's news was kind of two different things.
One is that we made the consumer product easier to use
for the hundreds of millions of users
that don't know what post-training is or RL is.
They just want an easy to use app
that answers their questions.
And we did that.
And then also we made our coding API a bit better
so that we are now neck and neck with Anthropic.
And so if you're a company that is buying code generation
tokens, you should come to us. And that market should be more oligopolistic as opposed to monopolistic, as it has been.
And so there's a world where we see, you know, these oligopolistic cloud enterprise B2B businesses crop up on the hyperscaler side, with Gemini and Anthropic and an OpenAI B2B.
And then maybe Thinking Machines gets in that game, maybe SSI gets in that game.
No real indication that MSL or meta-superintelligence will get in that game.
But you could see kind of a similar dynamic to what's happened in the hyperscaler clouds play out on the B2B token-generation side, from the foundation model labs, which is still a great business.
But then you also have a wrapper and you also have a consumer application.
And then you might have other products that crop up.
I mean, Google makes money off of Gmail.
They make money off of Google Maps.
And they don't need to even break those out.
They just put them into different services. But I think we'll see, you know, an increasing pattern of different pieces of the business that add up, and all of them will be generating revenue. They'll all be profitable, but the question is, how much attention will they get? And then how much financial performance will they actually drive?
So, anyway, we should go to Mike Knoop from ARC-AGI, the final boss of AGI.
He decides whether or not a model is superintelligent.
What does he say?
He said, OpenAI prioritized the right thing.
I'll do a Mike Knoop impression here.
OpenAI prioritized the right thing with GPT-5.
To get to one billion users, the model switcher needed to go.
But the hype marketing playbook they're known for fell below folks' expectations and warrants reflection.
Benchmarks could have been used to support the main story, instead of "benchmarks don't matter, use cases matter." They could have used benchmarks to show how effective their automatic reasoning effort system is. They could have shown state of the art in automatic reasoning. In fact, this is something we wanted to test on ARC, but the GPT-5 API does not support auto reasoning. Key point: benchmarks are important tools to communicate with the public and can be used more effectively to communicate capabilities than raw intelligence state of the art.
So, yeah, another twist on like just the messaging being like an odd choice here.
Or just, we're just in a transitory.
What percentage of the 100 million weekly actives that they have are in the United States? Something like 70, 80%?
No, no, no.
I'm saying, based on the numbers we got on the show yesterday, they have roughly 100 million weekly actives in the U.S. What percentage of those people are even aware of the hype marketing?
What percentage saw the Death Star picture?
Yeah, totally, not that many, right? I mean, God, it did get millions of views. Yeah, but X is, like, a specific corner. Yep.
Speaking of the Death Star: Zachary... very negative, but you did kind of set yourself up for that one. It's odd, it's odd. It did not fully make sense. Like, who is the Death Star in this story? Are you the Death Star? Are you Alderaan? Are you the rebels? What are you blowing up? Maybe the Death Star... let's steelman this. The Death Star is a big model, sir. Yeah, the Death Star is the idea that pre-training scaling is all you need. You're going to blow up Alderaan, like, you're going to blow minds with how good the model is. But Alderaan was good. We don't want Alderaan to be blown up. Yeah, I don't think you've got to read into it so much.
I think you... I think we do need to read into it extremely. We need to read into it endlessly.
The image is provocative.
Yes.
So,
so if a lot of people said there's still time,
I think Nikita was in the replies, like, there's still time to delete this.
Yeah.
But if we,
if we steelman this,
we are saying that,
that the Death Star is bad.
Therefore,
Sam is saying he's good.
So he's going to blow up Death Star.
What did he blow up?
That's bad.
The model switch.
Yeah.
Yeah.
There you go.
The model switcher is the Death Star. And today it's gone. It had been this massive piece of UI in your face.
There was a trench run and they blew up the model switcher. This is good. We got it. We nailed it.
We understand Sam Altman. We're in his head. For some reason, I don't think that's an accurate read, but I like it. It's fun to pretend. I think you nailed it. Anyway, you know who else nailed it? Fin.ai, the number one AI agent for customer service. Number one in performance benchmarks, number one in competitive bake-offs, number one ranking on G2.
Anyway, Mike Knoop also said, I'm quite confident this approach will work for a while.
This is based on the lack of continual learning, from Dwarkesh Patel.
Continual learning is the main bottleneck holding back AGI and economic automation.
We expect this bottleneck will be overcome not by some new learning paradigm, but by scaling the diversity and volume of RL environments.
This is Cosgrove's scaling law.
You need to be bench-maxing, correct?
Yeah. And then, yeah, the bull case on benchmaxing. So the bull case on benchmaxing is that continual learning is intractable. We may hit it, we may not. Might be two years, might be 20 years. So in the meantime, focus on scaling the diversity and volume of RL environments: create a ton of benchmarks and then benchmax them, correct?
Yeah. And, yeah, like, it's a bull case that Elon is benchmaxing, because that just shows that his team is good at optimizing a very specific kind of vertical task.
Yes, he'd just better pick the right tasks, because I don't like the tasks he's picking right now that he's benchmaxing.
It's not good.
Yeah.
But if he finds, if he does find other pockets, I mean, he's certainly benchmaxing
on Tesla self-driving, right?
That's the key thing.
Like, it's number of interventions per million vehicle miles traveled, and every day they're trying to hack that reward function to get it to zero. Yeah, that's a good one. Maybe a bad one: maybe he's benchmaxing too much on kind of gooning. Yes, he's goonmaxing. Yeah, I think we should steer away from that. We need to steer away from that, figure out something else to do with the xAI companions. But maybe there'll be something else that they can reward hack. Maybe something in therapy or friendliness, or, you know, some sort of coach. I don't know, maybe it turns into a fitness coach. Maybe I need to use it to really up my training in the gym. Be like, hey, cool it with all that lewd behavior; give me advice on how I can double my bench press and chat with me about that. Maybe that's the right move. We'll see. Anyway, Knoop is optimistic. He says, I'm quite confident this approach will work for a while. It requires
no new science. It exploits everything we know about AI reasoning systems. We teach process models
through memorization in domains where we can generate lots of synthetic but real data. And then
non-zero fluid intelligence emerges from the resulting chain-of-thought knowledge recomposition
system that sits on top of the foundation model. But it still reminds me of pre-training scaling
where we were making AI systems better through imitation learning and stuffing more into them
versus an AI system that is capable of cold-starting itself in some new domain it's never seen before. And that's why ARC-AGI is so important: because the final evals are hidden behind that secret test set. The AI systems need to be able to cold-start themselves when they run into a game they've never seen before in the test environment, in the ARC-AGI private eval set. And that's why they fall flat on their faces consistently, and they're sitting around 16, 15, 8 percent success rate on ARC-AGI 2, which can be solved by any human pretty easily.
But Elon says Grok 5 will be out before the end of the year, and it will be crushingly good.
He's benchmaxing.
He knows ARC-AGI is the one to go for. And so he's going to be RLing on this pretty hard; he's going to have a whole team on ARC-AGI.
I'm excited to see what he does.
I wonder how he'll do on V3.
That will be very, very interesting.
Anyway.
Well, in other news, we have a post here that we can pull up from Bhanu.
Coley team, it is in the chat.
If you can pull up, he says, yesterday,
Rail Financial signed a definitive agreement
to be acquired by Ripple for $200 million.
Four years ago, I set out on a mission to speed up
business-to-business global payments using USDC.
Over the last six months, we grew to become 10%
of global B2B stablecoin settlement volume.
Airhorn for that.
With Ripple, we will further accelerate our shared mission.
Thank you to our employees, clients, investors, and partners
for taking an early bet on us. A few that Tarun and I want to call out: the entire Rail team for their relentlessness and hard work; Avlok, of course, the CEO of AngelList, our first lead investor and part of the founding team; and Gokul Rajaram, immensely helpful during some of our early crucible moments. And, of course, Mike over at Galaxy for taking the bet on us in the Series A.
We are excited to start our new chapter with Ripple once all regulatory approvals go through.
Hit that gong, John.
Great contact, great contact.
I was lucky to angel invest in the seed round of Rail, and this is a fantastic outcome for the team.
And yeah, so I first met Bhanu, I think, back in 2021 or 2022. We were both working on stablecoins at the time. Loved his vision. We haven't stayed super close since then, but he's been absolutely cooking. I was very pleasantly surprised when I got the news a couple days ago.
So incredible work to the whole Rail team, and a great pickup for Ripple.
Amazing.
Let me talk about Attio.
Customer relationship magic.
Attio is the AI-native CRM that builds and grows your company to the next level.
Get started for free.
Sam Altman posted.
GPT-OSS is out.
We made an open model that performs at the o4-mini level.
Can we create our own pronunciation for this?
GPT Oos.
GP. TOS.
GP. TOS.
It runs on a high-end laptop. The smaller one runs on a phone.
Super proud of the team. Big triumph of technology.
It has a community note on it. I don't know what's in there,
but that's very funny.
Okay. Anyway, but Donald Boat.
Donald Boat. One of the greatest to ever do it.
Donald Boat responds: Sam, you and me, the Amalfi Coast. Me, double Fernet on the rocks, club soda to taste. You, one delightfully sweet, bitter Negroni, stirred 9.1, 9.2 billion, 900 million revolutions counterclockwise, one for each hertz of the NVIDIA 5090 in the gaming PC you will buy and ship to my house. And Sam actually sent it. He sent it. It popped up yesterday. This is a timeline victory for Sam. Yeah. And, yeah, Sam said, okay, this was funny, send me your address and I'll send you a 5090. And he did it. And I love this. This is the type of, you know, small ground game that we identified earlier. You know, Sam, he didn't have to drop the big long post. He was vague-posting, and the vague-posting was a little mixed in its results. But this is a win. This is a fantastic win. This builds the team, builds a lifelong fan. This is hand-to-hand combat on the timeline, and I love to see it. So great to be doing this type of stuff even the day before GPT-5 launch day. And Donald Boat is really an account to watch.
LaserBoat999.
Get in early
It's like buying Bitcoin in 1994
John
He's not at 100K
It's still like buying Bitcoin in 1994
That's right
That's right
Or Solana in the 80s
Yep
Dylan Field
Says GPT-5 is here
In Figma Make
We have started to roll out GPT5
To starter and pro plans
Let us know what you think
More model news comes tomorrow
I mean, this is good news
For Figma Make
Of course, we talked to Rahul
about this
Cheaper model
Better reasoning, better code generation
the product just gets better. And this is the value of being, you know, somewhat of a wrapper, right? You are a beneficiary of model improvements: as the models get cheaper, your margins naturally improve; if the models get better, your product just naturally gets better. And so lots of people are benefiting. They added Gemini 2.0 Flash, yes, in their image editing, so you can just drop an image in and then click it and say, remove this person from the image. That's very cool. Yeah. Well,
Anyways, Kevin Kwok says forcing Lip-Bu Tan out of Intel is probably one of the worst things you can do if you want to save the U.S. chip industry; someone should tell the White House that before we put Intel back in a tailspin again.
Whatever you think about Lip-Bu Tan and his approach, I think stability in the short term is good, right?
Yeah.
He's trying to pull them out of the tailspin.
Yeah.
So my take on this is it kind of feels like Trump has an outdated world model for understanding the importance of Intel. Intel is a fantastic American company, but it has not been on the frontier of semiconductor manufacturing for a while. They famously missed mobile; Arm crushed it in mobile, and that was important for semiconductors. And they weren't a fabless semiconductor company, so TSMC ate their lunch there. They kind of missed out on the GPU boom, and Nvidia dominated there. And so the question is... it seemed like Trump was worried about Lip-Bu Tan's ties to China, but China has already caught up to Intel, I think. Between Huawei, SMIC, and SMEE, these companies seem fully capable of doing everything Intel can do, and probably more, and probably better, and probably cheaper. And so Intel hasn't been the crown jewel of American semiconductor supremacy for decades.
Basically no one is advocating for a real comeback with Intel right now, or even the idea of splitting Intel up. For a long time there was this one-weird-trick idea, semiconductor CEOs hate this: let's figure out some elegant switch, let's just split design and fab, and then Intel will be great. That's not even what people are advocating for now. They tried to kind of test the waters of splitting the fab out, and they couldn't get a customer for what they were going to make at this new fab. And if they can't get an independent customer that's not Intel, well, then why do they need a pure-play fab if they don't have a customer for it? So there were just all these problems. So what Lip-Bu Tan is doing is he's coming in, not with, you know, some gambit to get on the frontier. He's winding down the company. He's laying a bunch of people off. He's narrowing the scope. And
he's coming in like a McKinsey consultant almost.
And so this doesn't feel like that key of a company in the American semiconductor race. The future of American semiconductors feels like TSMC and Samsung, which are both building fabs in the United States, which is nice, but these are also companies that exist in allied countries. So it's nowhere near as risky as being dependent on a supply chain that's based in a near-peer adversary like China. If TSMC were in Beijing and Samsung were in Shenzhen, we would be in a lot more pain than we are; but Samsung and TSMC are also building in the United States. And so the leading edge will be in the United States, and it will be led by other companies. It doesn't feel like Intel will play a really key role in there. Certainly we'd love that, but that doesn't seem like the current plan. And so it's kind of just an odd sideshow.
There's a lot of legacy in the brand recognition and the name recognition of Intel, but ultimately we should let Intel's board make their own decisions around Intel's leadership. You know, we've got this amazing system called free-market capitalism; we try to stay as close to that as possible. It is funny, because Trump is also trying to IPO Freddie Mac and Fannie Mae, the two state-owned lenders; you can go get student loans and mortgages through them. So simultaneously, in the lending markets, we're trying to, you know, deregulate or move out of government control, and then simultaneously saying, well, actually, we'd like the government to be able to decide who the CEO is at this private company. That's kind of odd. Anyway, good luck to Lip-Bu Tan.
He is losing a lot of sleep. He's got to get an Eight Sleep, get a Pod 5. Five-year warranty, 30-night risk-free trial, free returns, free shipping.
Lip-Bu, get an Eight Sleep. You need one.
When you hear this, we will gift you an Eight Sleep.
And then come on the show.
We think you're doing important work
for this country and Intel,
and you've got an Eight Sleep on us
whenever you're ready.
100%.
Just let us know.
Bucco Capital Bloke says,
for those of you blowing out your SaaS positions,
even though time is of the essence,
please try to maintain a sense of order.
Single-file line, eyes forward.
Do not stop for personal belongings.
do not panic, do not run.
Of course, he's talking about everybody selling out of various SaaS positions. That's what he meant by blowing out.
Selling them?
I came away yesterday not broadly bearish on a lot of different SaaS, systems of record.
I think they're in a really good position to offer intelligence
as a service.
A few days ago we were talking to somebody who was saying all SaaS is cooked as soon as AI goes hyperbolic, and yesterday it felt like, okay, that's not happening today, so SaaS is actually great. And the newer SaaS... I mean, there are still companies that are on completely legacy systems: still on paper and pencil, still on mainframes, still on owned clouds, still haven't migrated to the new clouds, still haven't embraced SaaS, still haven't embraced AI. So I think that there will be, like, a full cycle of replacement here. And we see that
with Figma. We were talking about, like, is there some sort of threat to Figma from, oh, you just one-shot the vibe code, blah, blah, blah, and you don't even need the tool at all? Like, maybe. But first, let's talk about Adobe.
Let's talk about how much people dislike Adobe.
Like, you know, there are a lot of people that still haven't migrated over to the thing that was invented a decade ago, and they're still on 20- or 30-year-old software.
So, these things go in really big cycles.
Down 22%.
Well, regardless of what you think about Adobe, maybe you're along, maybe you're short.
Do it on public.com investing for those that take it seriously.
We got multi-asset investing, industry-leading yields.
They're trusted by millions folks.
And we have our first guest of the show, Doug O'Laughlin from SemiAnalysis and Fabricated Knowledge.
Doug, how are you doing?
Welcome to the stream.
I'm doing really good.
Can you guys hear me?
Yeah, we can hear you.
You came in looking like you're about to play a set.
You're really quiet.
Okay.
One-tenth volume.
I can yell.
Yeah.
Would it help if we yelled?
It would help if you yell.
Okay.
We'll do the rest of the interview yelling.
Also, do you have the ability to turn up the volume on your side?
I am turning up the volume, bro.
Oh, my God, man.
I feel like a boomer.
You're all good.
You came in looking like a DJ playing a hot Boiler Room set.
You had your hand here.
I came in messing with my audio system, you know... it's been years of the Zoom economy.
Yeah, is it one of those things where the volume on one of the sub-applications is turned
down while the volume on the system level is turned up?
No, dude, I see the sub-application.
I see the system level.
This is extremely boomer for me.
I'm embarrassed.
Well, we enjoy these moments where Zoom fails us because it's this incredible, you know,
you contrast that to, you know, having infinite intelligence available in your pocket, but we're all still trying to get video conferencing to work reliably.
So, you know.
Okay, well, can we just take it back to normal?
We usually don't have a problem with feedback if you turn off the headphones and just use the speakers.
We usually won't get feedback, so let's see if that works.
Any luck?
I can hear you.
Okay, cool.
Yeah, can we chat?
Yeah, we can chat.
This is much better.
Awesome, awesome, cool.
Welcome to the show.
Welcome to the show.
We've been looking forward to this.
We've been looking forward to this.
We missed you in New York City.
We were hoping to hang out there, but I'm really glad you can hop on remotely.
Give us your reaction to GPT-5.
To Lip-Bu Tan, correct?
Sorry, damn, dude.
I'm still really good.
I mean, yeah, we can start with Lip-Bu Tan.
We were just talking about that, too.
Are you in favor of a change of the guard over there?
Okay.
How do you process this?
Sorry, if we're talking about Intel,
I can do an extremely based rant on the board.
I wrote about this in Fabricated Knowledge,
and I definitely was part of the SemiAnalysis post too.
Sure.
Okay, perfect.
I can hear you now.
Okay, we're done.
We're through it.
Let's go.
Okay, so, now, dude, turn it up to the 12.
Okay, so, dude, TLDR: the board has been systematically screwing up Intel for the last 10 years. This, I think, comes from the history of Intel being the greatest semiconductor company alive in the 2010s. Dude, the board was filled with, like, politicians and, like, an ex-senator, what do you call it, a Secretary of State, and, like, generals, and never anyone from the semiconductor industry. Intel had, like, a view of arrogance: we are the best and you're going to suck it because we're the best. Effectively it is an Intel-first worldview that has slowly been crumbling over time. So pretty much on the technology side, they've never had anyone who actually knew how to run a semiconductor company, except for professors or people from Intel. So, like, you can look back at the 2003, 2004 era: it's usually two or three people from Intel, and then the rest of it is just, like, randos.
So they never thought, hey, why do we need a semiconductor person on the board? This slowly became a boiling-frog issue as they, like, missed. The 10-nanometer debacle is, like, the real big, you know, change point in Intel.
And really quickly, can you unpack the 10 nanometer debacle a little bit?
Okay, so 10 nanometer was Intel's process at the time. It was supposed to be the next one after 14, and 14 plus, plus-plus, plus-plus-plus. So pretty much it got delayed. They tried to do a lot of aggressive technologies: they did quad-patterning DUV; copper, no, cobalt interconnects. A lot of things they shoved in there to make a good product, but they just missed over and over and over again. So it was a ginormous fumble, and at the time, the CEO was a CFO. So the finance guy oversaw the technology essentially implode and slowly degrade. So 10 nanometer was just the true, slow, you know, multiple-years-in-the-making train wreck.
And what was 10 nanometer, like, critical for? Is this, like, the mobile transition? Just CPUs?
Just CPUs. Remember, this is an Intel that missed and was totally, like, irrelevant for mobile anyway. This is just, like, data center CPUs, normal PC CPUs. This is when AMD essentially caught up, because they chose TSMC.
Yeah, the Ryzen stuff was happening, Threadrippers and whatnot.
Yep. All that stuff is because they were using the TSMC process versus Intel using their own process. Their process lost to TSMC's, and then there was also AMD's better design.
Got it.
TLDR, total, total problem.
And half of the people who are on the board still are from that era. You would argue, like, hey, capitalism works: you should fire people who do shitty things at their job. They should all be fired. And so most of them got fired. And then, I think this is in the SemiAnalysis piece, but essentially, of the people who remain, effectively there was the guy who just stepped down, I still think he's on the board, and now the chairman is this guy named Frank Yeary.
Frank Yeary joined in 2009.
So he is just as guilty as anyone,
and he's a banker, dude.
That's his background. He's a deals guy.
So the first thing you need to know is, a deals guy, when he becomes chairman of a company, is going to do deals.
So that's what he's been doing. He's sold Altera.
You ship your org chart.
Yeah, yeah. Ship it, sell it to anyone who can do it. And he wanted to sell the foundry business. Okay, that's, like, the Wall Street Journal thing, and in my opinion, a total mistake. We think it's a total mistake at SemiAnalysis. But then on top of that, like, Lip-Bu Tan, I understand...
So really quickly, clarify that, because I hear deals guy comes in, investment banker comes in, a lot of people rumbling independently for the last decade, split up Intel. It sounds like they were trying to do that. Give me, like, your evolution: should Intel have split up at some point? Should Intel split up now? What is the current stance?
So I think, um, at this point in time, they probably shouldn't split up, just because, like, they don't have a customer, right?
Yeah, they don't have a customer.
Um, the best time was definitely, like, three years ago. At this point in time, you need the money from design now just to keep the lights on. And I mean, like, not even just to have the customer, just to, like, fund anything, to have any cash from ops. The ideal perfect world is somehow Hock Tan, the Sith Lord of private equity, buys the design business of Intel and scrapes it and guts it to the floor. That has always been a long-time rumor, that Hock wanted to do that. Hock doesn't want to do it because Intel is so bad. So there is no buyer. Design is, like, really screwed because it's, like, stuck on this process. Plus, it's also filled with bloat.
And at this point in time, I think the only strategic part of the business that really matters is foundry. Because outside of the x86 CPU, like, they're getting their faces kicked in by Arm. So it's, you know, x86, and then they have to compete against AMD, which is still kicking their faces in with TSMC. And so, what is a secular decliner, x86, worth? Not much, to anyone. And I think the reality is, like, you know, with Arm and these custom CPUs, effectively they are just one of many competitive products, and I think their competitive edge, like, dwindles. Pretty much they just have distribution into OEMs. So where is there a business that has value, that matters, at Intel? It's the foundry. There's one foundry in the entire world right now: it's TSMC. It's a monopoly. And that's the only thing that's worth anything. And I think, you know, there's a chance to have a second foundry, and that could be Intel. But, you know...
What about Samsung? I mean, uh, Elon did that deal. It seems like Samsung's at least in the conversation as a potential foundry for specific things. Are they really, like, several orders of magnitude below TSMC?
screwed that I think is like, I mean, the PPA there, meaning power performance area is pretty
bad. It's probably just as bad as Intel. But the one difference is Samsung has had external
customers and probably could have the external customers again.
If I had to bet between... like, Intel in theory has a better process, for sure, but it's, like...
it's, like, a bet of, like... we're going to say horse and saddle, okay?
Like, the Intel saddle is, like, you know, six rags, like, tied together around the horse,
that only, like, one rider in the entire world knows how to use, which is Intel design.
On a presumably decent horse. It's like, 18A
is probably, like, equivalent to three nanometer. Um, obviously not the best in the world, but, like,
it deserves to exist. Versus, like, Samsung's horse is just, like, really bad. The PPA is total
trash. Um, but hey, it has a saddle people have ridden in before, so there's in theory a customer.
And so Lip-Bu Tan is the perfect guy, because he, uh, ran, uh, Cadence, which is an EDA company. And
EDA, their whole thing is, they design. And so once you have, like, a design, you can, like, put it in the foundry.
And so, like, that important critical step, of the PDK, is, like, the missing piece for Intel.
And Lip-Bu Tan could be the guy, but Intel sucks.
Yeah.
So, I mean, my read on Trump saying we need Lip-Bu Tan out is, like, a complete misunderstanding of the importance of Intel.
And basically he's operating on like a 20 year old world model thinking that, oh, I'm familiar with Intel.
I know the brand Intel inside.
Like that's an important company.
We need, it's a critically important American company.
We need an American at the helm. When in fact, it's like, Intel doesn't really matter.
There's, like, a wind-down in process.
Let Lip-Bu Tan, like, run it.
Even if he's exfiltrating everything to China, it doesn't matter, because Huawei and SMIC and SMEE are light years ahead of Intel.
Yeah.
So I think that that is sort of true.
But I also... I will give the Trump admin props, in terms of, like, I do think they do
know and understand the critical importance
of Intel. What is the critical importance?
What is the critical importance?
They are the only
domestic semiconductor process.
Okay. So our only hope.
Our last hope. It's that, or lease the future from
TSMC. And that's pretty much, like, probably
the most likely option.
Lease the future or least the future? Lease the future.
Lease the future. Yeah. Because
you could buy a process, or
force TSMC to make a process
in the United States, which is what they're doing.
Yep. With Arizona, right?
Yeah, in Arizona.
And then possibly more expansions.
I'm sure there's some kind of deal.
But after you have that, like all the R&D happens in Taiwan.
So effectively, like, you know, the Taiwan constraint thing, same problem we had before.
Because like a missile flies, boom.
We don't know how to make new chips.
We would just have old chips.
It's like leasing a car.
You know, we can't buy the new model.
Yeah.
So that's a problem.
That's like the same problem, but maybe more kick the can down the road if we had all the capacity here, but no R&D.
Yeah. So yeah, what is the like moon shot to make Intel dominant again or catch up?
Like, is, like... people were noodling on, like, should Elon come in and do the Elon thing, and hire the most cracked engineering team, and drive everyone 10 times harder?
But it feels like people kind of kicked the tires on that.
There was the rumor of, like, all the PJs at the same time at Mar-a-Lago...
kind of everyone was at Mar-a-Lago at the same time.
Is there a world where, like, Intel needs to go more founder mode?
Like, Lip-Bu Tan feels like the epitome of, like, manager mode.
Like, would you be, like, excited about something like that?
Somebody who says, like, I'm going to take it private, be extremely risky.
There's a 10% chance that this thing works.
90% chance we completely destroy everything.
But if it works, it'll be amazing.
Dylan Patel and his CEO.
Let's do it.
Dylan and die.
Get both of them.
Dude, if Dylan is the CEO, then my life has kind of become miserable.
Like, uh... look, uh... so I think, I personally believe that fab and found... uh, fabless and foundry need to have separate lives. Because, like, they are two really drunk adults at the bar, tied together hip to hip, and it's like one of them could be dying, actually.
One could be a corpse.
Um, and the other has, like, a shot.
Okay.
So, like, I just don't think... I think, you know, like, I believe in focus, I really do.
I believe that small teams with a lot of focus can make extraordinary results.
I'm sure you see this in technology over and over and over again.
I think Foundry needs to be, like, given a leash that's long enough to, like, make the shot. True, probably a smaller dream, and probably, like, you know, a pot of gold at the end of the rainbow.
But then, like, essentially be told, like, hey, here's your purse, your budget.
Here's your potential capacity.
If you win XYZ, whatever, you'll get a ginormous order from the fabless semiconductor companies in America.
And that's the pot of gold at the end of the rainbow.
And then behind you is death.
And there's only one way and it's forward.
Because how I think we've gotten to this point is, like, there was always this, like, second sloppy option of, like, well, we'll just put the Intel CPUs in there.
You know? That's how foundry has been treated for so long. And I think Intel CPUs, over a long enough period of time, is just not going to be enough to fill the foundry. So you're going to have to... you have to separate them. And I think you have to make capitalism work, which means that you need, like, a pot of gold at the end of the rainbow, a true, like, shot to do it. And then, like, probably founder mode, if it makes sense. Like, sure, maybe Lip-Bu Tan is the... like, Lip-Bu Tan has much more of the qualifications. It can't be someone who is from Intel. And I think the company
should be private, because public markets would kill the shit out of it. Like, it's just going to be... it's going to be, like, the face-beating the entire time. Um, and yeah, dude, if Elon did it, I'd be very stoked. However you feel about Elon's companies, like, you know, someone said he makes the late... oh no, he makes the impossible late, right? He... maybe he'll be late, but, like, you'll get it done, right? Um, there's... there's no one else who's really done this. Pretty much no one's fallen off the leading edge and come back. Yeah. And so this is, like, a moonshot.
This is a moonshot's problem, and I don't know who would do it.
When the Twitter deal was happening, I was saying, like, okay, we marshaled $44 billion in private capital.
There's also the Chips Act going on.
You package all the money together, and you give Elon Intel.
And, like, what does that counterfactual look like in the course of history?
Well, I think the question is, does the stock need to drop another 50% before it can become a viable, like, actual take private target?
It's an $87 billion company today.
I think you can do it today.
Yeah, I mean, if you can take Twitter Private at 44, you can take Intel Private at 87.
I think you just have to split the Foundry Fab.
Yeah.
It's that simple.
Because the foundry... the... sorry, sorry, not foundry... fabless. Fabless.
Fabless, yeah.
The Fabless business is worth something.
Okay.
Like, it is, in my opinion, the... like, dude, Hock Tan, the evil Sith, like, PE
OG, would, like, crush it.
Like, I really do believe that.
My favorite.
Wait, wait.
So, so quickly.
Like, you... you split out design, you have this fabless semiconductor company. Similar to...
it would compete with Nvidia in some ways, when they go into GPU, and AMD. And so they'd be focused on
CPU. And their... and their customers would be who, exactly? Um, I think you'd focus on electronics...
electronics companies. But yeah, like, I think you'd focus... so the thing they keep saying is
AI at the edge.
I'm a total hater.
If I'm being honest with you,
like, you know,
the internet works.
Packets are pretty quick.
What I think would be the best way: they do have some networking content.
They do have the PC business.
They do have, like, high-end data center
CPUs.
Maybe, like, the high-end data center CPUs, like,
I mean,
to be clear,
they're going to be like a third place or a second place against AMD.
Like it's going to be an ugly world.
But, like, what you can do is you can just kill all the
SKUs and all the expansion you've done over time. Like, Intel has all these custom SKUs for
whatever. You make, like, good, better, best; data center and, I don't know, edge or mobile,
some kind of optimized thing. And you fit all your products into those categories. You kill all
the unprofitable ones. You fire 50% of the people. And then you, like, you know, do your best to
extract the rent in places where you cannot be ripped out. Maybe you, I don't know, monetize your
CPU software, sell it or some shit to AMD. Like, do your absolute best. And it's, like, a really sad
ending to Intel the fabless business, but I think it has one that is worth more than
zero dollars.
Yeah, talk about the GPU-CPU split here, because Intel's never really been a player
in GPU. And that feels like... when we talk about the value of TSMC in Taiwan and AI, we're
talking about super intelligence and these mega clusters and the ability to train frontier
models.
And it feels like, like, even a high performing Intel, like, isn't a player.
in that world or should we be thinking about it
in that grand of terms?
No, I don't think they're a player in that world.
I really... like, it sucks, but, like, you know,
Gaudi is a chip that sits in warehouses
around the United States.
You know, the Battlemage GPU is, like, not the worst product.
Sure.
But, like, I think, if you think about just, like, semiconductors, man,
one of the like my favorite analogies or like ways to look at it,
it usually ends up being, like, a... let's say a 60-30-10 market.
And the 60% makes, like, you know, 2x the profit of the 30%.
And then the 10% is, like, break-even or loses money.
I feel like Intel's market positioning kind of puts them at the 10%.
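Doug's 60-30-10 heuristic above can be written out as a toy calculation. A minimal sketch; the profit-pool number and the exact split are illustrative assumptions of mine, not figures from the conversation:

```python
# Toy model of the 60/30/10 market structure described above:
# the 60%-share leader earns ~2x the profit of the 30%-share player,
# and the 10%-share player roughly breaks even.

def profit_pool(total_profit: float) -> dict:
    """Split an industry profit pool per the 60/30/10 heuristic."""
    leader = total_profit * 2 / 3   # 2x the #2 player's profit
    second = total_profit * 1 / 3
    third = 0.0                     # break-even (or worse)
    return {"60% share": leader, "30% share": second, "10% share": third}

print(profit_pool(90.0))
# {'60% share': 60.0, '30% share': 30.0, '10% share': 0.0}
```

The point of the sketch is just that a third-place position captures roughly none of the industry's profit, which is the position Doug assigns Intel.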
And I think, like... I really think the private equity outcome for the fabless side is the best outcome possible.
Because, like... and the reason why I say Hock Tan... like, let's actually just, like, play the
Hock Tan playbook. What Hock Tan does is he takes a business that's extremely mature and
sticky, that was, like, a winner of the last cycle, and then just, like, fast-forwards 10 years of
the maturity all the way to today. So, like, VMware is a perfect example. Virtualization was, like,
you know, the thing in the 2010s and made CPUs better and all this stuff. And, you know,
they were this, like, almost-monopoly. They still have the vast majority share, but they were, like,
spending all this stuff, growing expenses. And then Hock was like, dude, no, they actually
have a business that has some terminal value. They're not living in this paradigm. So what I'm
going to do is, I'm just going to fire half the people, raise the prices on the... like, raise them
massively, lose a lot of my customers along the way, but become a massive cash cow, and then, like,
essentially squeeze that to the end. But if you think about what Hock is actually doing, he's
accelerating 10 years of industry progress into three, right? Like, that's what would happen, but, like,
it's going to be a lot uglier and a slower show in public markets, versus just, like, time collapsing: price raise, fire all the people, boom, you're profitable, you're at the end state, you're essentially underwriting no growth. I think Intel needs to underwrite no growth, and that's not in the fabless business. Yeah, it's just embracing reality. Like, embracing... embracing the reality and taking your medicine. Someone in the chat, Sharon, says: please ask Doug about the new Intel factory in his hometown of New
Albany and what the status is.
Do you have any idea what's going on there?
New Albany.
Not my hometown.
I haven't.
No, no, no, not your hometown.
This listener's... this listener's hometown. Maybe he's looking for a job in the fab.
So, New Albany, as far as I understand: the ground is broken, the shell is empty, and I don't think they'll fill it.
Okay.
It just won't happen.
That is, but the thing is, I think that that's a really valuable asset that needs to be marketed.
I think it's probably likely someone will buy it.
And if I had to guess, it's, like, TSMC.
So, yeah, I don't know what that looks like.
But I think a really shrunken down Intel looks like just Oregon and just New Mexico and Arizona.
I think so.
Yeah, Arizona.
And that's it.
The capacity that was expanded for Ohio was, like, a YOLO, bet-the-farm, somehow-we're-beating-AMD-again thing.
Like, I think it was underwritten to have this, like... and that's the reason why Lip-Bu Tan was like, dude, touch grass.
This isn't happening.
Pat Gelsinger...
This is just, like... this is not happening.
Like, you're, like, completely not realistic.
I think that it just doesn't... like, the capacity they had there, for the wafer starts, is just, like, so large.
It doesn't make any sense.
Yeah.
And I mean, like, dude, the Lip-Bu Tan thing, and, like, okay, how do you feel about, like, the, you know, Chinese thing?
I mean, there's just not many people who could take the job, and, I mean, it really sucks and is very hard.
But like, dude, he actually embraced reality.
Like the most realistic thing I saw, he's like, dude, we're not even a top 10 player in AI.
And I was like, yeah, that sounds like reality.
Like, this is the first realistic thing I've heard for a long time.
And that's like refreshing to me.
The guide was totally... like, the print was totally messed up.
Like, you know, effectively it was really ugly.
Essentially they're like, yeah, we did pull forward tariffs.
Yeah, the guide is a little messed up.
And yeah, it's going to be uglier from here.
But I kind of vibe with it because it's like, we're just going to tell you like it.
it, like, how it is. You can, like, trade the expectations of this. But, like, the stock and
company and the value of it is, in my opinion, a binary option: what is the foundry worth to
America, and how do we, like, fast-forward this, like, painful, capital-intense
world, where we, like, don't have a leading-edge semiconductor producer that can have an external
customer, to one that can? That's... that's the entire value of the company to me. And then fabless
is, like, a call-option cash gusher that can hopefully be harvested. The strategy is, we need
people to rip the band-aid off. Yeah, yeah, yeah. It's going to pay a big... it's going to cost a lot.
Uh, give me the update on GPT-5. Yeah. Specifically, this was the week that the timeline woke up to
language models plateauing. A lot of people are bummed about it, bearish. I think it was
maybe six months ago that Dylan Patel went on Dwarkesh and said, like, if GPT-5 is good, we
are totally good on all the semiconductor spend.
If it's not, maybe we're in trouble.
And so I'm wondering if there's updated thinking around there
based on, do we need any more inference clusters
or training clusters or is it just a game of where
is the profitable inference happening?
Let's make sure we have enough to meet demand,
but not go further, not create some overhang.
How much of a dance do we have to do
to make sure that we don't get overcapital?
like overcapitalized, overbuilt.
Okay.
So this is a pretty interesting question.
I'm going to parrot Dylan on Twitter
because, you know, like,
my lord and savior, Dylan knows what he's talking about.
GPT5 is a little disappointing,
I think, for the power users of Twitter.
There's no other way...
there's, like, no other way to put it,
is that, like, for the power users of Twitter
who've been, like, chugging o3 deep research things
for a long time, and, like, messing around with
Grok, and, like, messing around with whatever... I personally do not feel, like, a massive difference. It's
less sycophantic or whatever. Maybe it's slightly better, I don't know. Are you seeing this thing about, like,
the... the SWE-bench benchmark? Like... uh... so, like... the chart is... Yeah,
the chart. No, no, not the chart. The SWE-bench benchmark. If you look at it, um, not all 500 tasks
are measured. They hold some out. It's, like, 477. It's, like, 477? Yes. Yeah, yeah. So, like, that's...
that's an apples-to-oranges compare.
So that's like a big deal.
But I do think you have to think about like, okay, so on one hand, the cultists and like me and like the power users are probably disappointed.
On the other hand, you know, there's 600 million free users who probably just woke up to like a big upgrade.
And I think that that's how I feel like this was actually like that's the actual strategy.
They push them to five.
Five's a lot better versus 4, you know, 4o.
And it just... sorry, the fucking naming is a mess.
Well, I think I think another way to put it.
is, if you were a power user using it for research and learning, you're maybe disappointed.
If you were a companionship power user... I mean, we were reviewing... I mean, r/ChatGPT on
Reddit is absolutely in shambles today.
Dude, I know because the girlfriend's RIP.
Well, there's a new entrant there.
I think they might have to peel off that market and give it to Elon.
Yeah, that's true.
You can do the Ani network.
you know
I don't
I
I um
yeah I think that that's the correct way to think about it
I think it feels like it's
trying to be a simple... like, a simplification
of the SKUs. Because, as you guys know,
it's confusing as hell.
There's many... there's, like, many thinking
and flagship models.
For the most part, I had to...
if I had to guess, flagship probably
prints, like... probably prints cash for them,
and is, like, much cheaper,
and, like, a smaller model...
maybe not a smaller model, but, like,
compared to, like, a very,
very big model smell, right? It's not big-model smell. So it's very... and then also, if you look at the
pricing, it's pretty cheap. And if it does what it does well, then, just like we talked about, the SWE-bench
benchmark might not be... might not be correct. But, like, you nuke the price on your competitors, in
terms of, like, Claude versus GPT-5 tokens. And so that's, like, the... this is the better,
bigger, faster... or maybe it's the faster, cheaper, but not better, update, if that makes sense. Yeah, well...
One thing that stood out yesterday, so they have 700 million weekly actives globally.
85% are outside of the United States.
And you can just imagine, in those hundreds of millions that are outside of the U.S., many of... you know, like, the question just becomes, like, how they can continue to serve.
Like, will they be able to serve?
What will ARPU be, and what will the margins be, on the incremental international, developing-markets
ChatGPT user? Because the marginal cost to serve WhatsApp or Facebook... even, yes, you're
storing images for Instagram, or WhatsApp, you're running a database, but the marginal cost
is so, so low. And it feels like, at least right now, maybe it changes in a couple of years,
but right now the marginal cost of serving a ChatGPT user, even a casual weekly active
user, is in the dollars per year.
And when you look at the monetization rates of Facebook
and WhatsApp in developing markets,
the ARPU is like dollars per year.
And so matching that up feels like a big challenge.
And I'm wondering about, yeah, like, ASICs, AI on the edge,
dedicated servers, or even more pared-down models...
like, what are the solutions, so that you're not just
burning a ton of cash supporting, you know, unprofitable users?
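The ARPU-versus-serving-cost comparison the hosts sketch here can be put in rough numbers. A back-of-envelope sketch, where every figure (tokens per chat, chats per week, serving cost per million tokens, the ARPU) is an illustrative assumption of mine, not a number from the conversation:

```python
# Back-of-envelope: annual inference cost of a casual free user versus a
# Facebook-like developing-market ARPU. All constants are made-up
# placeholders for illustration only.

TOKENS_PER_CHAT = 2_000        # prompt + response, small model (assumed)
CHATS_PER_WEEK = 5             # a casual weekly active user (assumed)
COST_PER_M_TOKENS = 0.50       # $ per 1M tokens served at high batch (assumed)
ARPU = 4.00                    # $ per year, developing-market ads (assumed)

annual_tokens = TOKENS_PER_CHAT * CHATS_PER_WEEK * 52
annual_cost = annual_tokens / 1_000_000 * COST_PER_M_TOKENS

print(f"{annual_tokens:,} tokens/yr costs ${annual_cost:.2f} vs ${ARPU:.2f} ARPU")
```

Under these assumed numbers a casual free user costs well under a dollar a year to serve, which is consistent with Doug's point below that the cost of a free user on a small model may be lower than people think; change any constant and the comparison can flip.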
So, I definitely... so, I wonder if we'll have the bandwidth to get this done, if it, like, makes it to a big post.
But like I think we're thinking a lot about this deeply.
Yeah, that makes sense.
I mean, um, I think the cost of a free user for a small model is, like, surprisingly low.
I want to be a little bit more sure about that.
But I think if you have high batch size with, uh, you know, a B200 or a
GB200... I don't know if it's dollars per year. I really, really think the actual net tokens
that you're able to do is a lot higher than people think. And I think that there
could be this... but the problem is, it's, like, the same... there's the same issue, right? Think of it
as, like, a Pareto curve, right? 20% of the people are paying out the wazoo, and more than happy
to pay for, like, a crap ton, right? The Pro and Plus of OpenAI. The only problem is, like,
the... the top five percent of that 20% are, like, you know... they're massively
losing money on them. And then you have this, like, long tail that is, like, kind of interested,
likes tokens, becoming a little bit more addicted, easy, cheap-ish, to whatever. And then if you can
shift that up, maybe you can make the revenue that you're... that you're losing for the hyper...
you know, for the pros. And so that's kind of... I think that that's kind of, like, the more
interesting, like, thought process. I do think the unit economics of these things are a little bit, like...
I mean, it really depends on the size of the model, and then, like, what inference
they're doing. But, like, my impression is you can be doing, you know, billions, you know,
millions and millions and millions of tokens for relatively cheap a year.
And if you monetize... if, you know, we're talking about, like... if you can monetize, like, 15 to 20%,
or even 20 to 30%, of your... your tokens are actually being monetized at, like, you know,
the market rate, you can pay off the 80%.
And that break-even, I think is a really important, like, question.
I don't know the answer to that.
I know we're working on it, but I don't have anything.
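The break-even condition Doug gestures at, a fraction of tokens monetized at market rate covering the serving cost of all tokens, reduces to simple arithmetic. A minimal sketch; both dollar figures are made-up placeholders, not numbers from the transcript:

```python
# Break-even sketch: what fraction of tokens must be sold at market rate
# so revenue covers the cost of serving *all* tokens (including free ones)?

def breakeven_fraction(cost_per_m: float, price_per_m: float) -> float:
    """Both arguments are $ per 1M tokens: serving cost and market price.
    Returns the monetized fraction at which revenue equals total cost."""
    return cost_per_m / price_per_m

# e.g. serving costs $0.50/1M tokens; monetized tokens fetch $2.50/1M
f = breakeven_fraction(0.50, 2.50)
print(f"monetize {f:.0%} of tokens to break even")  # monetize 20% of tokens to break even
```

With these assumed prices the answer lands at 20%, inside the 15-to-30% range Doug floats; a cheaper-to-serve model or a higher market rate pushes the break-even fraction down.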
Yeah, I think the... I mean, going back to your
Instagram or WhatsApp comparison: Instagram and WhatsApp are pretty much the same
wherever you are in the world, regardless of how much revenue you're generating for these
platforms. But it's very possible, in the future, that companies cannot serve the same quality of
model in some of these areas that they can elsewhere. Well, they will. It'll just be $200, and $200 in an
emerging market is, like, a ton of money. And so the adoption rate and the upsell rate will just be
way, way lower. There'll probably be, like, an ad-supported tier.
But again, ads themselves don't generate... Of course. Of course, the ARPU will be much lower in emerging markets. Yeah. And I think... I mean, I think that's happening. I can't name the hire, but I know the hire that, like, essentially started the ads. Yeah, Fidji Simo. Yeah. Is there... so, like, you know, that's the... yeah, you know... freaking... I wouldn't be surprised. There's a Twitter user: the answer is always ads. Okay. There's going to be ads eventually. Yep. But they probably monetize, like, within an order
of magnitude of how well Facebook ads monetize and Google ads monetize.
So that's fine as long as your inference cost is roughly the same as within an order
of magnitude of Google's service cost and Facebook's service cost to serve a user,
which are, you know, pennies, I imagine.
But it does feel like we can get there over time.
I think this, the timeline broadly woke up this week that we're building
developer tools and consumer internet companies.
And like that's that's the game.
here. And, yeah, machine god is, like, real, but delayed in some ways... delayed. I'm curious, how do you
think... how much do you think the labs care about the student market? There has been
some reporting around the token generation: as school ended last year, token
generation dropped dramatically. And then today Sundar came out and shared that
Gemini... they're making Gemini, like, you know, like, unlimited,
sort of like free for students this coming year.
And in some ways.
Seems like he's staying in the game.
He's like, I'm not letting ChatGPT...
Well, no, and it feels like that's potentially
a way for them to like to kind of corner users
that will become very valuable,
maybe aren't so valuable in the short term,
but will become very valuable over time
if you can get them to stick.
Yeah.
Yeah, I think let's kind of talk about this
because I think that's like just a really interesting
and I'm kind of going to Google, I think.
you know, how CUDA, like... just an analogy, right?
How CUDA became a thing is because Jensen gave it away to PhD people for, like, a decade.
And then all of a sudden, everyone, whoever did anything realistically in machine learning, was like, well, I'm using CUDA, because that's what I've been using.
I think it's important because if you're thinking about like, you know, where are the users of tomorrow?
The S-curves of penetration are, like... it's not going to be the old people who are using the hell out of it first.
Who knows, maybe they are.
but it's like the young people are probably going to overindex, you know, people my age,
which makes me feel old, is like going to, you know, probably index about right.
And then the people older than me are going to index under.
And so where do you actually win market share today?
It's probably not in the people my age and older or the power users or whatever.
You probably need to like get a, you know, you need to get a 12-year-old addicted to Gemini,
which sounds, like, very terrible now that I say it out loud.
But you wouldn't say it about Google search, you know, or Google Drive or Gmail.
But yes, it is a, it's low turn.
You have to get them over the adoption curve so that they are low turn.
And that's where they're like, like, I think it's like a cognitive referent, right?
Like, right now, even though I am very model-curious, meaning that, like, I have a, you know, a paid sub to, like, Claude, to Grok, to ChatGPT...
Recently, especially, like, you know, in the, you know, the good old golden-era days of o3, you know,
o3-pro, deep research, I'm just pounding deep research queries and, like, agent mode
constantly. But, like... and, to be clear, I'm going to start looking around a little bit. But, like, I think, um... I think that that's what Gemini's, like, real issue is: they need to become the cognitive referent. And I think when people open up a, you know, a GPT, like, a transformer, you know, a transformer model, effectively, to ask an LLM, they need to be thinking of Gemini in the conversation. And they just don't, dude. Sometimes I, like, talk about LLMs with, like, people, and it'll be like, you know, OpenAI and Anthropic and... what's the third? Oh yeah, Gemini.
And I'm, you know... the third.
Yeah. I, like, weirdly forget about them.
And I think that that's the, they don't want to be forgotten.
And I also think if we do a segue to, I think that's why they're also starting to sell
TPUs externally.
And this means, like as a service, right, in their cloud, right?
They're selling... like, they talked about on the last call where, like, you know,
GCP is starting to accelerate.
And I think where we're starting to see that is I think TPU is starting to pick up a little
bit.
Do you think that will be, like, distilled models that are, uh, fine-tuned or developed for TPU
specifically, or something else? I think it's... I think it's more that, like, there's a few model
companies who are very interested in extra capacity. You know, for example, Anthropic is
definitely, you know... definitely uses some TPUs. I know that for sure. And so, you know,
Anthropic, in the frantic search, looks at Trainium a little bit, too. They also do use Trainium. I think it's
very funny, because Anthropic uses everything. Interesting. Yeah. Is that... is that, like, a huge
cost center, to have kind of, like, a replatforming team, to rewrite all your CUDA code
into Trainium- and TPU-compatible code?
I don't know. Probably. But, you know, Anthropic's cracked.
So that's... that's, you know...
they have Claude Code. They have full access to Claude Code.
I mean, they don't have to worry. They don't have to worry about being dropped.
Yeah. I mean, I was wondering like when, when will a cross compiler exist and you can, I mean,
didn't... didn't Facebook do that for a while, where, like, they would write PHP and
compile, like, C++ or something like that? Because they'd written, like, so much PHP that they
We're just like, okay, we don't have time to rewrite all this.
Let's just write our own compiler.
And you can imagine Anthropic doing something similar to run on TPU or Trainium.
Yeah, I think that that's happening to a certain extent.
And I think that for Anthropic, they think of it as optionality.
And I think Trainium, specifically, is, like, you know, the other drunk suitor at the bar
who like really needs a partner, like Amazon, who's now last place, right?
Yep.
Last era was first place.
Now their last place.
And then Anthropic, who is, like, you know, the
scrappy number three, who needs capital, compute, is like, come on, dude, marriage made in heaven.
Let's totally tie it up together. And then that's why Trainium is just, like, ramping.
I'm gonna start using these drunk bar analogies. Yeah, this is good.
I love drunk bar analogies. We know these things, but I'm probably more familiar with the drunk bar
analogies. This is good. Uh, yeah, Ross McAnnell in the chat says, please ask Doug about the state of
memory makers: will SK Hynix and/or Micron be able to maintain margins and keep
HBM as a somewhat differentiated product, or is it doomed
to be a commodity business? Hoo, that is a good question. Trying to think about
how far, for instance. So, I think one of the ways to think about this, first and
foremost, is: HBM is, like, the new DRAM. I mean, like, there's, like, a weird,
weird aspect where it's, like, okay, DRAM was, like, the best, and you can look at
this cycle's margins, and, like, they're not actually higher than last cycle's margins,
which is, like, probably a little bit of a... I wrote about this, like, HBM being
the new DRAM. I think HBM is really, really, really important, super, super duper in demand, and is, like, you know, the new memory. But in other ways, NAND specifically is kind of like the old memory, and it's getting much worse with the commoditization, and, like, especially... YMTC on the lowest end, on NAND, and then CXMT on DRAM. So you're starting to see some of the aspects there get a little worse. But I do think NAND is trying really hard to kind of hold on to, you know, the oligopoly, where everyone
doesn't raise bits together.
And maybe, who knows, it becomes so old and whatever, you end up like HDDs,
which are actually having, like... you should go look at STX and, what's the other, WDC,
because, like, their stocks are ripping, because they're both...
Yeah, Western Digital.
Yeah, Western Digital because they're not investing.
They, you know, it's a two player market.
They even gave HAMR to the other competitor so that they can, like, effectively, like,
you know, have a two player oligopoly where they control supply.
And this is just... this is just the Nash
equilibrium. This is just, like, game
theoretic. They know that this is better for both of them so they're
not getting at price war, right? Yeah, yeah, yeah, 100%. But that's
only for hard drives. Like, NAND isn't there yet. Man, you have like a
slippery Chinese like swing producer that is willing to blow it up.
Got it. And so I think, on HBM, I think it's kind of complicated. I don't want to, like, say
too much, because I don't even know. I don't even know what SemiAnalysis's, like, house...
house view is right now. Let's just, like... I'll talk about some facts I think might be very
interesting, and just, like, thoughts. Which is that, like, look, this time last
year, Micron and SK Hynix were effectively completely booked out.
And obviously, Samsung wasn't even in the game.
This year, they're not booked out.
What is the difference is I think is the threat of the Samsung qualification.
I don't know about the Samsung qualification, okay?
I don't think anyone knows.
The shit I hear out of Korea... like, I hear that they're qualified, they're not qualified,
they're qualified, they're not qualified, but they have to hit some kind
of yield thing.
I hear all kinds of shit all the time.
I have no idea.
But I think the fact that HBM4 pricing seems to be a little bit, like, dampened, just on the threat of qualification, really kind of tells me that the HBM cycle is really long in the tooth, and we're probably nearing the beginning of the end, just mathematically, as the second derivative continues to go down for most of the memory stocks.
You can look at, like, SK and Micron.
I still like SK.
I still think what they do is really, really differentiated and valuable on a relative basis.
I mean, they got to HBM first.
They still are really awesome.
They still have the best product and process.
But I guess I'm just kind of like, you know, I'm a cycles brain guy.
And it's been about three years of a tight cycle in memory.
And you're asking me to bet on the fourth.
I know what the base rate should be.
You shouldn't bet on it.
But I don't think we actually have a house view other than that.
Like because at the same time, you know,
Nvidia goes brrr and HBM bits go up.
And even though there's a little bit of an oversupply in HBM3, I still think HBM4 is going to be pretty good.
But, you know, like rate of change brain tells me, you know, it's a little spicy.
Where else in the supply chain should we update if we're totally plateau-pilled?
And we say, you know, we're not going to necessarily even want to do the next order-of-magnitude pre-training run, the mega-cluster,
where we're maybe going to be more distributed, more inference-heavy.
Are we thinking ASICs?
Are we thinking more focus on depreciated GPUs, distilling models to older hardware,
or just really getting all of the juice out of the current H100s and GB200s,
and actually just, you know, not re-platforming to the latest and greatest constantly,
not worrying about that? Or where else should we be kind of updating on various parts of the supply chain?
Okay, so I think a lot about this, and I feel like I'm probably a decent person to answer this, but it's going to be really speculative.
So, some massive disclaimers. Who knows, right? Maybe there's one more Claude that comes out and they cook on 5 or something, I don't know,
or Gemini next week makes us all pilled.
I don't know.
I think we're, okay, so let's just like walk through some of them.
The massive pre-training cluster doesn't happen.
I think the fiber from data center to data center is like the most screwed.
That is the most whipped at the tail.
The multi-data center training thing, that is like, oof, ouch, goodbye.
You don't need it anymore, right?
You can do 100K clusters of RL.
And so that's probably, I think, if you had to
slice the puzzle where it's most impacted,
it's probably there for training specifically.
I still think we would end up doing a lot of inference.
I do believe, I really do believe, that if all progress stopped today, we probably
would still have, like, a decade of productivity as the technology is just ingested.
It's not an AGI god, but it does happen to be an amazing
densification of all information into an answer machine.
Fucking awesome.
That's a big.
It's useful.
I mean, it's like there's a lot of CPUs in the cloud. Like, we're going to
use CPUs in the cloud for a long time.
It took 20 years to actually do all the things
that you can do with CPUs and databases
and hard drives in the cloud.
And now we have GPUs and LLMs.
So we're going to stuff that into every single cranny of the economy.
What did you think... there was a headline recently
that Azure's non-AI cloud business
was growing at almost the same rate as AI?
I think the final read on that was that it
was just driven by OpenAI as an Azure client.
But basically, the read-off of earnings was that their core infrastructure group grew faster than their AI services group.
So more people buying compute as opposed to buying tokens.
But that's not necessarily a read that people are spinning up more CPUs.
So walk us through it.
Actually, that's a great question.
I'm very familiar with what you're talking about.
The Azure beat last quarter, specifically, was driven by the infrastructure side more so than the token side.
And then, ironically, one of my colleagues at SemiAnalysis pointed this out: there wasn't the disclosure they had last quarter.
Last quarter they disclosed what percentage of the growth AI was, and this quarter they did not talk about it.
And if they stopped disclosing, I can tell you the answer is it was lower.
Like, you know, so I think that's probably a pretty interesting thing.
It tells you that, like, the consumption is definitely worse.
I don't know how they're classifying it.
Well, doesn't that track with Satya canceling data center developments and just saying, I'm happy to lease?
Also, you know, the business is on a tear, and they're doing, like, multiple rounds.
They've done more layoffs this year than, I think, the last three years combined, which tells me that everything the timeline is waking up to this week, Satya has known.
He's been pilled on it for a year.
Even if you look back at some of his interviews, he's talking about, like, yeah, OpenAI is a great consumer technology company.
We're happy to partner with them.
Yeah.
Wait, wait, sorry, can you restate and unpack that a little bit more?
So Azure grew a ton, AI tokens grew a ton, but core infrastructure grew
even more. And they used to break out, within core infrastructure, how much of that infrastructure
is being used for AI versus not, and they stopped disclosing that.
Is that correct?
Yes, yes, that's correct.
Yeah.
And so by not disclosing it, you know the answer is it's lower.
Yeah.
But it was still, it was still, like, pretty big growth overall.
So it's just that the second derivative slowed down.
Yeah, still accelerating, but it's at a lower rate.
Yeah.
Okay.
Well, actually, I feel like now I'm talking out of my ass.
Yeah, yeah, yeah, yeah, yeah.
But, look, I think the Microsoft print is, like,
they're pretty much almost mid-cycle pressing the brake, and then, you know,
they juice.
But, you know, as it decelerates the investments, you still get to reap all the
investments you made, and so the massive backlog, right? Don't they have like a hundred billion dollars
of cloud backlog? Yeah. And so all this backlog starts to become revenue, and, you know,
Microsoft is still such a winner. Just rewind to, like, three years ago, when
Microsoft was a clear number two, and, you know, now they're outgrowing AWS on a much
higher base, and AWS is, like, not in this, right? So I think that if you think of it
from the perspective of, like, hey, Microsoft Azure versus AWS, did Satya win?
The answer is yes.
But I think the thing that is interesting, definitely going forward is like how this all works
and how OpenAI continues to finance and spend and pay for more compute infrastructure.
Because at the same time, you know, one of the other ways you could look at it is that the entire math of where this is going is just all in Oracle's pocket.
Like, Oracle effectively... so you talk about the Microsoft slowdown: Oracle literally just completely ramped up when Microsoft slowed down. And then Oracle went to OpenAI, and OpenAI's like, great, dude, we got a new customer, they're willing to do Stargate. There's even a site that was supposed to be Stargate for Microsoft, and that kind of, you know... and then, I'm good for my however many billion, 80 billion. Yeah. But then that deal went to Oracle, right? Yes, that deal went to Oracle. Okay, so I think that's, um...
Yeah, I think Microsoft is like slowing down.
They definitely had this massive lead, like a truly a massive lead.
Remember, like, last year, the perception of who was winning the race among the hyperscalers:
it was Microsoft number one, because of the OpenAI partnership plus the infrastructure.
You're like, dang, they're so far ahead.
Now they're being conservative, kind of pulling back, you know, decelerating.
But let's also talk about this a little bit, because I think
the underappreciated part about the Microsoft thing is just how little
they've done with access to the model this entire time.
They've had access to the model weights at OpenAI the entire time.
And you can also argue that no company has more competitive threat
for their core business than Microsoft does.
I don't know about you, but I do a lot more editing and drafting
and information search and ideation in ChatGPT
than I used to do in Word, right?
So you're like typing up something and you're like,
yeah.
So Elon posted yesterday that Microsoft
is going to get absolutely cooked by OpenAI.
And in many ways, obviously, Microsoft has a ton of users for Copilot.
They're taking AI seriously.
But what you're getting at is... one, it's hard to read too much into what Elon is saying,
because he's playing games behind the scenes that we don't
necessarily have a view into.
But what you're getting at is interesting, which is, there's a world
in the future where, instead of opening a doc or
an Excel sheet, you just start talking with the model and saying, I want to make a model
for this product line going out over, like, the next three years.
And it basically has Microsoft 365 as a tool that it can call.
Yeah.
And then eventually it instantiates a Word doc for you if it needs to.
It creates the output in the actual application that they have, because they already have
cloud-hosted PowerPoint.
And they just are like, oh, it sounds like you're going down a tree.
You need a PowerPoint.
Here you go.
I generated it.
And you can edit it if you want, but also here's the export.
We got a PDF for you here because we used our tool.
Yeah, 100%.
Yeah, 100%.
But yeah, huge, huge UX problem.
I mean, Google has bolted Gemini onto every product, and
the adoption's been very rocky. It's been tough.
It's not as easy as just slapping a text box on it and you've got a winning consumer
product.
It does require innovation, I think, in terms of product development.
So going back to this sort of dynamic between Microsoft and Oracle, I think
you have to ask the question: if Microsoft is pumping the brakes a little bit and
Oracle is slamming their foot on the gas, it's going to take a little bit to see who
made the right call. Yeah, yeah. And so, do we plateau or not, right? That's the
question. Well, and I think the answer this week is yes. Yeah, but at the same
time, inference for ChatGPT, not necessarily usage-wise, could just continue
to increase, and if Oracle can position themselves as, like, the key cloud provider for that, that could be... It's mostly just that the Oracle investments can't be overly aggressive at any point in time, because if there's a pullback, then they're like, oh, we're unprofitable for a little bit on this.
Yeah, yeah, they could take the biggest bath.
Potentially.
Yeah, dude.
I think... okay, so I'm trying not to call it, because, as you know, AI changes, I feel like, week to week.
Yep.
Sometimes it feels like, I don't know, a hyperbolic time chamber.
Dude, like, you know, we could be so over, and then it could be so back, right? If you guys remember,
pre-training is over... it felt very over, and then now we are, you know, so back, because of,
essentially, the increase in reasoning, test-time inference. Yeah, sorry, reasoning. And now
I'm thinking about this as, okay, so the unspoken part about this that makes, you know,
GPT-5 a lot better than other things is, like, long tasks and agentic and tool use, right? So that is the,
in my opinion, you know, and I hear this kind of shit all the time.
I've thought it was really BS-y until I've been messing around with Claude Code.
It's like, oh, agents, agents, agents, agents are going to be this big thing.
And you're like, what the fuck does that mean?
Because you're like, I don't know what this agent is doing.
But, like, Claude Code is pretty cracked, man.
You just ask it to do things, and it just does them.
And it does it in kind of a scarily good way.
A good example: we've been hiring people.
I think we have some hires in the pipeline.
We make them do a case study.
Not that a case study is a great way, okay,
and I was so pissed and annoyed by some of the case study qualities that I made each of the
models do the case study. Dude, you know what did the best case study of them all? Claude Code did, okay?
Claude Code over the OpenAI agent. And I think that's kind of what we're... that extremely
long-context, autonomous ability to do stuff on its own is going to be, like, you know,
the nirvana that changes everything, or makes the consumption a lot bigger. And I think
you can kind of see a glimpse of what an agentic future looks like via Claude Code.
And then you just assume that, instead of them RLing the shit out of software and making, you know, the best commit or something like that, they're going to RL the hell out of advertising, or creating the best advertising media, or they're going to RL the crap out of finance, or making the best financial...
Well, yeah, I mean, the thing that stands out to me is this compounding advantage: you're a lab, you're competing with other labs, and you have access to the best coding product, and you can
use it as much as you want, forever, and it gets better, and then you get better and higher
output. And I don't think we've seen this... you didn't see the same dynamic with,
like, Microsoft having a better version of Excel. Yeah. Or the example that it's not
like Mark Zuckerberg with Facebook was like, I get to use Facebook more than my competitors, so I'm
going to be better. Well, actually, they were using Facebook for internal comms. They still...
Well, yeah, but whether or not that gave them a compound advantage over Teams, or... yeah.
There's just, like, that project... it's like AI 2027 or something. Yeah, yeah, yeah. I think
that actually feels a little bit more grounded than some of the really,
you know, AI doomerism. But I think the recursive ability, or, like,
whatever p(doom) you want to talk about, the recursive ability to make your products better
by continuing to invest is, you know, it is a flywheel.
And that's definitely, I think, the Anthropic bet that they're going really hard at.
And so, you know, the thought process is like, well, okay, if we can get the AI agent to do, you know, AI experiments, not just coding experiments, then, like, boom, we are off to the races.
And that really will be, I think, where the curve bends back in on itself.
TBD, right? Yet to be seen. From the outside looking in, there isn't exactly any special ability for me to say that it will or will not be that way. But I think so far, from what I understand from the researchers, I don't think everyone's doomed or beared up on some of that stuff. I definitely think GPT-5 is a little disappointing, but maybe OpenAI just isn't cooking like it used to, right? And I don't know if...
So, like, I don't know what actually happened with 5, but we don't actually know, like, what the ratio of RL to reasoning is, like...
Just remember, you know, you can debate the people that have left OpenAI in the last couple months, like, how good they were.
Were they the best people?
Were they mercenaries?
But any company that has been gearing up for a massive product launch, or has gone through a
massive product launch... imagine going through that again, but you lost, like, 40% of some of your
most elite team members. That should obviously have been a factor
here. If it wasn't, then Zuck is cooked, because he just hired a bunch of people that
weren't that great. Someone's cooked. So I think that, yeah, it's worth asking
the question: what would GPT-5 have looked like if Ilya was still at the company? What would it have
looked like if Mira was still at the company? What would it have looked like if the long tail of
researchers that left were still there?
What would it look like if they just didn't have the distraction of the talent war?
Yeah, yeah, yeah.
Yeah, I think that's a valid question, because my understanding of why Gemini
2 randomly was so good is that Noam Shazeer was back.
That's it.
That's literally it, as far as I understand.
All of a sudden, Gemini starts cooking again.
It's like, yeah, because the guy who invented half of everything is back.
I'm not surprised that that's a real dynamic.
Yeah.
But yeah, we're going to have to see, we're going to have to see.
I do think you're right, vibes are interesting.
And I do think it's probably worth tracking that cohort.
Like, if you think about it as just an incremental slosh of capital, that's either
the biggest, best investment of all time, or going to be the worst investment of all
time.
And tracking that cohort and how it works out is going to be a really interesting
case study.
Can you give us an overview of what's happening in private credit? The headline this week,
obviously, is that Meta had tapped Blue Owl and PIMCO for,
like, $30 billion... I don't know the sort of pace at which they'll get
access to that capital.
Just give us private credit 101 and go as deep as you want.
Yeah, 101, but also, you know, the real kind of risks surrounding it
over the next year or so.
Okay, so at SemiAnalysis, I'm the finance guy, which is funny because I definitely
don't know, in the big scheme of finance, how much finance
I have.
But, okay: private credit is like public credit, but there's no marks, okay?
One of the things that got me really interested about that is that private equity rules, because you could suck at your job,
but if you have no marks, you know, you don't have bad performance.
And your volatility kind of chills the F out.
So on an allocator basis, they're stoked.
I mean, this is the beauty of being in venture broadly: the market goes
up and down, and 90% of my net worth is absolutely stable.
It's not stable in reality, but...
Yeah.
Yeah.
And I think that that's like a, like almost like a feature.
Totally.
It was a feature that was like a bug at the beginning, if that makes sense, because it's like, okay,
there is no mark.
And because of that, effectively,
the volatility in these assets is really low, and you hold to maturity.
There are a lot of studies and papers showing that, effectively, public equity levered up
over five years, when you don't sell, is very similar;
the returns actually start to approach each other.
And some of the cohorts of private equity specifically have really started to implode a little
bit and have low money out of the investment.
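The "no marks" point, why an unmarked private vehicle reports low volatility even when the underlying economics are identical, can be sketched with toy numbers. The returns and the 0.3/0.7 smoothing weights below are arbitrary assumptions for illustration, not from any actual fund or study:

```python
import statistics

# Same underlying economics, two reporting regimes.
true_returns = [0.08, -0.12, 0.15, -0.05, 0.10, 0.02]  # marked-to-market each period

# A hold-to-maturity vehicle that only reports a smoothed, appraisal-style mark:
# each period's reported mark blends the current return with the prior mark.
smoothed = [true_returns[0]]
for r in true_returns[1:]:
    smoothed.append(0.3 * r + 0.7 * smoothed[-1])

vol_public = statistics.stdev(true_returns)
vol_private = statistics.stdev(smoothed)

# Reported volatility "chills out" even though the asset is the same,
# which is why allocators are stoked on the asset class.
assert vol_private < vol_public
```

The smoothing is the whole trick: nothing about the underlying cash flows changed, only how often and how honestly they get marked.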
So anyways, that's private equity, which I'm, like, super familiar with.
Private credit has a lot of the same energy in terms
of the mark, and effectively you own this piece of debt to maturity, so you don't
really take a mark on it. And at one point specifically, everyone raised
these capital vehicles that were like, we're going to get high-single-digit returns with very little
volatility, forever. And so everyone was like, dude, sign me the hell up. And so, yeah, if you look at
the number of billionaires that private credit has created in the last, like, two decades... it's
50 in the Forbes article.
Wait, really?
Oh, yeah.
I'll send it to you.
Yeah, private credit.
I mean, just in Ares, there's like four billionaires.
Because if the best way to become a billionaire is AUM-maxing, and private credit allows you to AUM-max better than...
Massively.
...anyone else, than any of these other sectors, then it's just...
So, yeah, really quickly: Bloomberg highlighted 18
folks who are now billionaires from private credit, starting with Ares at $13 billion net worth,
going down to Blue Owl's Craig Packer at $1 billion. And so 18 new billionaires have been
minted from the private credit boom. And yeah, my perspective on the private
credit boom is, okay, it's definitely been a thing this entire time. But in the last few
years, there was a hockey-stick moment, I want to say late '23 or something
like that, where the pitch was almost, you know, unbelievable, effectively.
This is simply too good.
Yeah.
Yeah.
It's like, you can get equity-market returns in the long run with no mark-to-market risk
and, you know, volatility, and in theory less risk.
So you're like, dude, equity returns, or no, slightly lower than equity returns, with, like, relatively
no risk, you know, obviously at a spread over Treasuries.
You're like, bro, sign me the hell up.
Like, I'm going to slam that button until the button stops working.
And so everyone raised these giant funds, like, ginormous funds.
And so now these private credit guys are sitting on a bajillion dollars of
AUM, and they're like, dude, how do I get this to work?
And so now private credit is finding its way into... you know, it's kind of like private equity:
they're going to have to deploy.
And so for people who are looking to deploy very large amounts of assets, I think
data centers are going to be really interesting, because, in theory,
and this just depends on the whole stack, but data centers are more like real estate
investing. So you have a very different return profile that is often baked into the deals.
And it's very capital-intensive on a relative basis, so you can deploy a lot of
capital, which is awesome. And so you do that, you get these five-year, ten-year investments
with pretty solid chances that, at least in the next two, three years, you're super
money-good. Some of the early data center
investments are, like, fucking awesome. And so you're just going to slam that button, dude, you're
going to deploy. And so the plateau theory feels good here, because
the risk was: I build a frontier-model-capable data center that runs GPT-4, and then
GPT-5 comes out, and I can't run GPT-5, and I'm completely useless, all the workloads move to
GPT-5, and I have no business whatsoever. Instead, it feels like
the workloads are sticking around for a long time.
Well, yeah, and it's important: you can have a plateau in intelligence that is different
than a plateau in usage.
Exactly.
If usage and demand plateau, that becomes a real problem if you just spent
$10 billion on, you know, super-levered data center development.
Yeah, I think you're probably right: if the pace of
everything slows down in terms of progress, you can underwrite the return,
you can underwrite everything a lot
more chill. You know, the chips might get used longer. So you can be like, ah, you know,
they're four-year lives, or they're five-year lives, with high certainty. The algorithms aren't just going to
massively consume all this stuff and make the crap you bought effectively super
cheapened very quickly. And then your investment is more money-good. Yeah, it was like having a
Bitcoin FPGA farm or something, you got destroyed when everyone went ASIC, right? And that's
the risk, but that's not the nature of the current frontier path. Like, GPT-4 workloads, or
GPT-4-class workloads, are sort of sticking around, probably for a really long time. And in the
future, even if we do develop the superintelligent model, it'll probably be calling less intelligent
models. There will be distilled models for specific tasks, specific models for tool usage, agentic
workflows, et cetera. So, yeah, it feels like all this stuff is going to be sticking
around, and that's good news for the depreciation cycles.
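The underwriting point about chip lives can be sketched with straight-line depreciation. A minimal sketch; the $10B fleet cost is a made-up figure, and real accounting would also involve salvage value and accelerated schedules:

```python
# Hypothetical: how extending the assumed useful life changes the annual
# depreciation hit on a GPU fleet (straight-line, zero salvage value).
fleet_cost = 10_000_000_000  # $10B of accelerators (illustrative figure)

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense in each year of the asset's life."""
    return cost / useful_life_years

four_year = annual_depreciation(fleet_cost, 4)  # $2.5B/yr
five_year = annual_depreciation(fleet_cost, 5)  # $2.0B/yr

# If a plateau means the chips stay useful one year longer, the yearly expense
# drops by $500M, and the investment is easier to underwrite ("more money-good").
print(four_year - five_year)  # 500000000.0
```

That is the whole mechanism behind "you can underwrite everything a lot more chill": longer lives with high certainty mean a smaller, more predictable expense against the same revenue.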
I don't know if that's... so I'm not going to endorse that view,
just because we're, like, speculatively one-shotting this today.
So, you know, we'll see.
I got to think about this.
Yeah, totally.
But I do think, I mean, I do think this is probably better for the longer tail if Frontier
model stuff slows down.
I think if we're just talking about, like, hey, truly everyone else, right?
In the world where progress is exponential and there's only three companies doing it, effectively everyone else is a giant fucking loser, right? Everyone is just a giant fucking loser. There's no way they're going to have any kind of products; everyone's just totally cooked. Who cares? Why would you ever invest in neocloud X, or, you know, behind-the-curve lab Y, or whatever accelerator company Z, right? I think it probably is better for just the entire ecosystem, if we're talking about the longer tail of capital. So, specifically,
like, the neoclouds, the GPUs, you know, everyone that isn't named OpenAI, Google, or Anthropic.
So I do feel very strongly about that.
Well, this has been fantastic.
The chat is going wild.
One hour guest.
SemiAnalysis in my veins.
This is our first ever one-hour guest.
No, Dylan Patel was also a one-hour guest.
Every time I get someone from SemiAnalysis, I'm like, stay on forever, do the whole show with us.
I really enjoy these chats.
Really, thank you for taking the time.
Just so you know, I have, like, a lot of ability to make this happen or not.
We have a lot of cracked people at SemiAnalysis.
Please, send them all over.
I love these chats.
You talk to Jeremy, right?
Yeah, yeah, yeah.
He was fantastic.
I fucking love Jeremy.
He's amazing.
We have, yeah, so, like, that's, like, the benefit of SemiAnalysis.
There's a lot of cracked people.
We'll send them all over.
Yeah, send them all over.
And I have a plan.
So our plan is we've been teasing that, you know, basically if you're a real VC,
you're on this, like, the super secret $1 million a month semi-analysis plan.
And if you're not on that plan, you're kind of just a tourist.
And so we're going to get every single VC in Silicon Valley
on the $1 million a month semi-analysis plan.
That's our pitch.
We're going to meme it into reality.
I'll take it.
Yeah, for every billion dollars of AUM, you should be spending at least $15 million.
I think so.
I think so.
Otherwise, you're just a tourist.
You don't really understand this stuff.
But thank you so much for stopping by.
Yeah, if you're listening to this and you're not subscribed to semi-analysis, what are you doing?
What are you doing?
Thanks for joining Doug. This was fun.
This was really fun. Let's do this more often. We'll talk to you soon.
Yeah, we can talk earnings, honestly. That's, I'm, that's like probably where I'm best.
Amazing.
So, take care.
Amazing.
We'll talk to you soon.
Bye.
Up next, we have Mitchell Green from Lead Edge Capital coming in the studio, in the restream waiting room.
Let's bring him in.
And while we're bringing him in, let me tell you about ad quick, out of home advertising made easy and measurable.
Say goodbye to the headaches of out of home advertising.
Only ad quick combines technology, out of home expertise, and data to enable efficient,
seamless ad buying across the globe. Mitchell, are you there? How are you doing? Good to see you.
I'm great. Great to see you. Sorry to keep you waiting. And sorry I missed you
when you were on the West Coast. We'll have to hang out in person soon. But how was your trip? How are you doing?
Good. No complaints. How about yourself? How's life? We're doing great. Big week last week. We were in New York
for the Figma IPO. We keep waiting for a slow week this summer. Yeah, it's been a lot.
People always look. People always say it's like, you know, it's like, oh, it's really busy.
I'm waiting for it to slow, but it never actually slows.
It never slows.
Well, speaking of things that aren't slow,
speaking of things that aren't slow, I want to talk about fast cars.
Everyone at OpenAI just received, allegedly, the rumor is, a $1.5 million bonus
for being on the team for more than two years.
I want you to help me advise these OpenAI researchers.
We told them, don't buy a house, buy a car.
And so I pulled up some options that you could pick up for $1.5 million.
I think that's what they really should do.
Everybody, everybody should get some leverage.
So take the $1.5, buy an F-40.
And buy an F-40.
Okay, so we're skipping the SP3.
We're skipping the Valkyrie.
We're going straight to F-40.
That's great.
I love that.
By the way, there's at least somebody on an engineering team that will do it.
Most of them will probably buy, like, Priuses, but there might be, like, one or two guys.
I mean, when the boss is driving a Koenigsegg, you know, you've got to show up in something, at least a depreciated Veyron.
Well, the funny thing he said about the Koenigsegg: he can drive it as long as it turns on and runs.
Yeah, any advice for the Koenigsegg owners in the audience, keeping that thing running?
I like the... yeah, I mean, he also has an F1.
Well, nothing wrong there, right?
Okay, as much as I want to talk about this, we have limited time.
Please.
I do want to talk about this week.
By the way, you need to have Zak Brown and me on together sometime.
I'll talk investing.
He can talk cars.
Okay, fantastic.
We'll do that.
So, reaction from the timeline this week is that it feels like frontier models are plateauing a little bit.
And I think that's generally fine.
There's still a lot of capability to unlock.
But I get concerned for the VCs that have been deploying billions of dollars into the longer
tail of companies, companies that don't necessarily have traction,
don't necessarily have truly top talent, and were kind of moonshot-esque bets on
this idea of superintelligence. And so, anyways, I'm curious: how cooked is the venture industry,
and VCs broadly, if... and the other factor here is that within a bunch of
subcategories, there's five or six heavily funded players going after the same opportunity,
and that will create healthy competition, sometimes unhealthy, and I'm sure there'll be some
good outcomes. But how are you thinking about the current state of venture?
Yeah, that's a great question. So I was just writing some notes; we obviously have a bunch
of questions in there. So AI saved the venture industry from a big awakening that was about
to happen in 2022, and they just got, like, one more kick. What's absolutely incredible
is that none of these people remember what happened in
'20 and '21, okay? Maybe you can excuse some of them in '20 and '21 because they didn't remember what
happened in '99 and 2000, because a lot of them weren't doing it back then, they weren't in
the industry. But most people that are doing this today were in the industry in '20 and '21. It is the exact
same thing. As you obviously know, we focus way more on profitable businesses: 70-plus
percent of our companies are profitable. Some are in Silicon Valley, but we like
to find companies in Ames, Iowa, and Nisswa, Minnesota. We're probably the only tech
fund on the planet that has two investments in Sarasota, Florida. You know, go figure.
You guys dominate in Sarasota.
Yeah, but I, but I felt that over the last year where, again, let me continue.
So, 90%... okay, before that: people always underestimate technological change in the long term,
and they always overestimate it in the near term.
Think the internet bubble, think mobile phones, think self-driving cars, think
the PC revolution. They always do that.
This, what we're seeing in AI, and what you're seeing in the markets right now,
is so reminiscent of '99 and 2000.
Yes, by the way, the big hyperscalers have giant amounts of profits and are printing
money.
These AI companies, a huge amount of them, you know, 90 to 95% of these AI application
companies are going to zero.
And by the way, I'm saying that, I'm hearing that from some of the world's foremost,
you know, early stage venture capitalists that are telling me, at the end of the day,
90 to 95% of these things are going bust.
And a lot of it is driven by upside-down unit economics.
You know, like, it's funny, you see some of these AI companies that have, you know, I'm not going to give names, that have only 50 to 100 employees, but are raising like $500 million.
And like, you can't spend that much money.
Obviously, it's because they have massive, negative gross margins.
And that money is flowing through the AI companies, which is flowing to the hyperscalers and flowing to Nvidia.
People ask us, like, what are our thoughts on the models?
We've always thought for right or wrong.
that the models will commoditize and it will become a game of who's got the best infrastructure
and who can deliver the searches the cheapest.
For the life of us, I can't figure out why Google, Amazon, Microsoft, and Apple,
which hasn't played yet, but I think they're going to go acquire somebody,
and, like, Facebook, why don't they just win?
Why don't they win at the end of the day?
I mean, Nat Friedman is a total stud who's running AI there.
I mean, we backed his first company, Ximian, or his second company, Xamarin. Total stud.
These companies can spend $60 to $100 billion a year on CapEx while having margins go up,
still growing 22% a year printing money.
Like, I don't know how, at the end of the day, Anthropic and OpenAI and Perplexity can
compete against this.
I mean, I joke that Google should literally run searches on ChatGPT
and bankrupt them.
Like, I mean, the cost.
Well, I mean, the news that came out in the last 24 hours is that Google is making
Gemini free for students for the entirety of the next school year.
So they will be subsidizing.
And so they will be-
By the way, it should make it free for everybody.
And by the way, you know, I read somewhere that the cost to serve the exact same search
on, like, Gemini versus ChatGPT is some crazy percentage.
It's like one-tenth the price or one-twentieth the price.
And by the way, that shouldn't be that shocking, given that over the last 20 years, Google has spent, you know, all their, a lot of their efforts on taking their infrastructure and, you know, making a fraction of a fraction of a penny more efficient.
Yeah.
And I think it becomes an infrastructure play.
Now, I do think, I think there's going to be, like, insanely amazing companies using AI that are built.
I actually think a lot of incumbents are going to win. You know, ever since
this device came out, there have only been four companies built after this device
that are worth $100 billion or more. It's ByteDance and Pinduoduo, two companies in
China, and ByteDance may be one of the best AI companies on the planet, like the hidden
giant out there. And then Airbnb and Uber. That's it. The incumbents are going to win. Salesforce.
Well, OpenAI. Oh, uh, oh, OpenAI is not the incumbent. Like, it's one of the new guys.
I challenge the people that ask: why is OpenAI not like Excite, Lycos, or AltaVista?
I'm not saying it is, but like I think it's hard to make a bet at a $300 to $500 billion valuation
that like, I think you either make a zero or a 5x.
I think it's really binary.
And the unit economics on these companies is massively upside down.
If it wasn't, you would see some of these companies go public.
And I actually joke that, people, you know, we might as well be in a bubble.
In the internet bubble, nobody hid the fact that the unit economics were upside down.
Take them public.
Let's see if they can go.
I mean, they weren't.
They weren't.
The unit economics of Google were very, very good going into that IPO.
And they maintained a good unit economics throughout the-
Our general read was, you know, model, like intelligence is plateauing.
But it's not necessarily bearish for OpenAI, because it's a habit.
They have 100 million people in the U.S.
that are using the product every single week.
And what you should adjust for, though, is that's the upside.
The upside is 100 million people use it.
Yep.
But the downside is that, like, investing here, you're going to get massive
amounts of dilution, because you have companies like Facebook and Microsoft
and Amazon and Google that now pay top engineers like they play in the NBA.
And they can fund, in these companies, you can have massive amounts of stock-based comp.
But yeah, I use ChatGPT all the time.
Yeah, yeah.
But I think people might.
It has Google-level adoption, but the economics are nowhere.
The economics are not there.
And what's crazy is, people are like, well, you know, Lycos and, like, Excite and AltaVista didn't have that many users.
I'm like, yeah, but the number of users on the internet went from, like, you know, 200 million to, like, 5 billion or something.
So you need to adjust for that, too.
But no, I mean, the consumer growth rate.
is insane. If you have insane consumer growth, you should be able to attract world-class
talent. Yeah. But if you hear about this, people don't. Like, the competitive dynamic,
like what Open AI as a company has been able to accomplish despite competition from all
the hyper-scalers, despite competition, you know, the talent wars, all these different things,
and competing in the most CapEx-intensive industry since the railroads, or...
It might be more capital intensive.
It's truly incredible, and to do that as a company that still some people see as like a startup, you know, is just wild.
I think the other, what about... the other thing is, you hear the application companies all say, well, the costs are going to plummet for, you know, AI stuff.
But the issue is, if the costs are going to plummet, that would be good for the application companies.
But then, doesn't, if the costs plummet, isn't that bad for OpenAI or Anthropic? Because their revenues are not going to, like... Well, Jevons paradox: if the costs go down, people will just... I mean, right now, OpenAI has tons of paying customers and people that use the products. So, like, they are both a victim of commoditization at the model layer and a beneficiary of commoditization at the model layer. And so, yes, you should maybe discount
their model API business more, but then you give more value to their consumer business, because
it has higher margins, better unit economics.
It starts looking more like Google and less like AWS, I guess.
What about legacy SaaS?
People have been extremely bearish this week in particular.
By the way, that's why you go invest in it.
Yeah.
Steve Cohen's told me for years:
if everybody's going one way, go the other way, and you'll make a ton of money over the
years. I mean, I think, like, the Chinese small-cap internet index is up like 50% or something this year.
The great thing is, like, I don't have to make an investment in these, like, model companies. By the way, I should have at $30 billion. Like, we should. We were wrong, because you could have sold now, right?
So I don't have to make an investment, but I do run an investment firm. So I need to invest in things. And look, we look at a lot of these AI application companies and they go crazy fast, but we're not going to pay those valuations. And I'm not investing in.
businesses that have 50 to 60 percent gross dollar retention. Like, screw net retention, look at gross, like,
that's what really matters. And so, what are we going to go do? Let's go find bootstrapped software
companies that are in, you know, College Station, Texas, and Toronto, Canada, and in, you know,
Sarasota, Florida. Companies like Pacemate that make cardiac monitoring software, companies like
Gravity that make budget planning software, and companies that make financial
planning software. Again, these companies aren't going to change the world. But again, we can take
them from 20 million of revenue to 60 million of revenue. And there are a lot of homes for them inside
strategics or private equity funds. And by the way, these are businesses that have, like, 95-plus
percent gross dollar retention that have been bootstrap businesses that we're now using AI to make
them more productive. Like they have 15 software engineers. Now they can use AI to have 30 software
engineers. And as long as those companies still sell to humans on the other end of the
table, they're fine. If AI companies are selling to other companies that use AI, and
they're talking to agents, and we don't need people, like, I think, you know, then that's possible,
but then I think a lot of these companies are in trouble. But we think a huge amount of
these like software companies that are out there that exist will actually use AI to create new
products and become way more efficient and more productive.
And, like, I truly believe AI is going to be as big or bigger of a productivity
boom over the next 20 years than the internet was.
Is it more of a consolidating force or a decentralizing force?
Or should we see more rollups going forward with a bunch of these different
small companies?
I think it also is like we really like vertical application software.
Like all those companies I mentioned were very verticalized.
They like, they're like own a system of record.
and like they can just use AI to become like I know there's some very smart private equity guys
that are using AI to try to start to do roll-ups and stuff like that or like I know that
I think the general catalyst guys are you know are looking to buy insurance companies or yeah I had
I had a meeting or dinner with with a few guys I think it was Tuesday night that we're doing
AI enabled roll-ups in like a couple key industries and I
say them because I don't want to give away their alpha.
And then Wednesday I had a meeting, John and I had a meeting in the morning with somebody
that was like, yeah, you know, at least one of the labs will hit escape velocity, and
we'll get a fast takeoff in the next two years, and nothing else is going to matter.
And I was thinking to myself, like, even in that fast-takeoff scenario, it's just,
it's impossible to imagine... there are a couple of these categories, you know, a number of different
categories where, like, people will just maintain control, right?
As long as... we have, as you guys know,
our LP base, all these, like, world-class
execs and entrepreneurs and people like that,
and we talk to these people that have built
giant software companies. And one of them said,
I was like, do I,
do we need to be worried about all these companies
going bust? It's like, well, as long as your software
company sells to another human on the other side
of the table, like, that human
will need to, like, make an interaction,
and probably interacts with software, so you're okay.
And he's like, well, unless you think, like, an AI agent
is selling to another AI agent.
Like, if Lead Edge doesn't have any employees, and my AI agent is talking to your AI agent, then maybe we don't need people.
But as long as people are involved in the equation, you're going to need software companies.
Yep.
Makes sense.
Well, we are running behind.
This is fantastic.
We've got to have you back on the show.
Anytime.
Thank you so much for hopping on.
Come up to SPM.
We'll go for a drive.
For sure.
Come in.
And come back on.
I want to get a sense of how we should be thinking about IPOs and this back half of the year.
Obviously, Figma was very.
exciting. There's more on the horizon. And I want to get a sense for how you're thinking about it. Happy to talk
about it. Funny thing is, if you go back and look... everybody said the IPO market
was dead. It actually hadn't been dead. It wasn't dead. You can go look at
Reddit, look at all these things that went public over the last year. Yeah, it did great. Yeah,
the IPO window has secretly been open for over a year. It's been open the whole time. It was open.
Investors didn't want to admit that they overpaid for the company. Didn't want to take it public.
Yeah. Yeah. You have companies like, we're in this. We're very early investors in Grafana.
But you have companies like Stripe and a bunch of things that are awesome businesses.
Databricks.
They just have so much capital on the balance sheet.
They don't need to go public.
They don't need to go public.
Yeah, yeah, yeah.
And so it's very different, the window being open versus the time being right for the best companies to actually go out.
And they make that choice because it's nice to be in the private markets.
You mean it's fun for most funds to not have to mark their books every day?
That's fun.
SEC filings are not fun.
Tender offers are fun.
There's plenty of ways to stay private and have fun.
And actually, what's amazing is, like, we are finding opportunities.
It's crazy where you can find opportunities.
You'd be like, how does the same-size company in Sarasota, Florida
get done at five times revenues, and a company in Silicon Valley,
oh, but it's growing faster,
gets done at 90 times revenues?
Like, the world is becoming this, like...
there is true value to be had
by just looking a little bit off the beaten
path, in our view.
Yeah, yeah, for sure.
Anyway, thank you so much for taking time.
We will talk to you soon.
Have a great day.
Cheers, Mitchell.
Have a good weekend.
Good Friday.
Bye.
And next up, we'll bring in Ben from Orbital Operations.
We got to get the gong ready, I think.
Ben, how you doing?
Are you there?
Sorry to keep you waiting.
He's joining, one second.
Great to have you on the show.
We got some folks in the chat.
The IPO window has been open, but everyone hasn't given the group of new public companies credit.
Every one is
treated as a one-off, like it's not indicative of a collective market. Good point, Dan Ratliff.
Anyway, welcome to the stream. Do we have Ben? How you doing? Yeah, I appreciate you having me
on the show. Yeah, I was listening to the previous conversation. It's pretty interesting.
Fantastic. Thank you for hopping on. Would you mind introducing yourself, the company, and then
what news you got for us? Yeah. So my name is Ben Schleuniger. I am the co-founder and CEO of
Orbital Operations, and we just announced the closing of our seed round. We raised 8.8 million.
Oh, congratulations. There we go. Thank you. Thank you. Who's in the deal? Who's making money
off of this? So, Initialized Capital is leading the round. We've got big participation as well from
Harpoon Ventures, DTX Ventures, Rebel Fund. Matakund is in as well, and a lot of other angels
joined on board. We went through Y Combinator earlier this year. Why 8.8? Was there
good numerology? Chinese number of wealth. More of a range we were trying to hit
based on the technical milestones. Orbital Operations is developing a high-thrust space vehicle. It'll
be stationed on orbit for satellite defense. So, you know, we've got some hard tech
development to go through and really prove out our technologies. So, satellite defense. I have a
satellite up in... I have the Hubble telescope. I don't want someone to shoot down the Hubble
telescope. I pay you to loiter around and blow up any missiles that are coming at it. Like, what,
how are we actually defending? Yeah. Yeah. So, it's interesting. We have a ton of critical, and by we,
I mean the United States has a ton of critical infrastructure out in, you know, both low Earth orbit,
but even higher out like medium earth orbit, geosynchronous orbit. You could think like GPS, naval
communications, nuclear command and control, they all sit at these higher orbits. It's actually
really, really challenging for missiles or anything to get out there, even rockets designed to go
out there. But there are adversaries placing satellites that have capability, and it's been
demonstrated already, to be able to grab other satellites and pull them out of orbit or,
you know, be able to fry a solar panel or jam communications, whatever it is. And we currently
don't have a response for this. We don't have anything stationed out in these higher orbits.
And so that's what we're really looking to build is a vehicle that has enough thrust, fast enough response time and enough extended range to be able to go and intercept these things.
Hopefully not blow anything up.
Space defense is a little weird.
You don't want to create shrapnel, right?
The last thing you want to do is make shrapnel.
Yep, exactly.
So, yeah, walk through... if you don't want to create shrapnel, you don't just want to ram into this.
I'm trying to comp it to Anduril.
You know, you have the Anvil.
It's just, like, a stone, a rock that runs into the drone.
then there's, you know, remote takeover, there's microwave, different radiation, different
energy sources.
Jordi, we've had a company on the show that is just a gun on a truck. It just shoots
down the... right, it's a lot of shrapnel when it shoots down the drones.
But there's, you know, I've seen like eagles come and pick up drones.
Walk me through the different tools in the tool chest for taking out a satellite killer.
Yeah, yeah.
So, I mean, unfortunately, one is to just like ram into it, right?
That is, like, and that is, that is kind of the worst-case last resort. You make a bunch of shrapnel. Here on Earth,
shrapnel falls to the ground. Yeah. Easy. But up there, it stays there, runs into other satellites,
creates a problem. Yeah. The next is kind of what you were mentioning, you know, directed-energy type of stuff.
This would be high-powered microwave. This would be, uh, frying a solar panel, or a laser into the cameras
and sensors, trying to basically degrade the satellite, right? Um, the other option is to do what we would
call an RPO, or rendezvous proximity operation, and actually go up next to it and grab it,
right? This is a more challenging operation to do, not something always kind of considered on the
defense side, more on the logistics side of things, but it is something that you could do
as well. So grab it, grab it, take it out into deep space? Yes. Just fly it into the sun. I'm sure
that'll be easy. Well, I guess my immediate question is, I mean, it feels like incredibly
important work, and also, how do you kind of, like, prove out your team's capabilities?
There's not exactly, like, a test range that you can take... you know, we'll have defense tech
founders on where they can just go out and do a demo for the Army or the Navy or whatever.
Gotta go to the moon in this case. You know, how do you kind of test out different
theses at the product level to prove capabilities when we're not in an
active, you know, conflict? Yeah, yeah. I mean, it's similar. You know, my background is all rocket engine
development, right? So a lot of it does start on the ground. Um, yeah. Well, we'll go rocket engineer,
rocket scientist. You know, it's too flattering. I can't take it.
We need rocket researchers. We got to get these salaries up, you know. Rocket researchers, that's...
He's got to hire a bunch of them.
Member of rocket staff.
We don't want a talent war here.
Member of rocket staff in the bio.
Anyway.
But yeah, you develop it on the ground.
You're hot-firing your rocket engine on the ground.
You're developing the tank.
You're going into thermal vacuum chambers and testing all of that.
And one of the great things about this day and age is it is actually getting cheaper and cheaper to get into orbit and start demonstrating these things.
You know, rideshares. Maybe you do a subscale version of what you're going to do, which is what we're planning on doing for our orbital
demo, and you do a rideshare. And, you know, it is cheaper and it is faster than it's ever been. So
it is not quite as easy as just going out to a live-fire range, but at the same time, you know,
proving out your core piece of tech, it is getting easier to go to low Earth orbit and test it there.
We're running behind today, so last question from my side. Is cost to launch actually
still dropping on some sort of exponential scale, or are we in a plateau? Are we waiting on
Starship to get through testing? Like, are we actually reaping the benefits of cheaper and cheaper
launch costs, or are we kind of in, like, a local plateau? Interesting. Yeah. I mean, I think we've
been maybe in a little local plateau for the last couple of years. But, I mean, SpaceX has been
dominating the market, right? I think there are other launchers coming to market that'll
help drive competition down. And then you have Starship and Stoke bringing reusable second stages.
That will be another step down.
That actually doesn't make it cheaper getting out to the higher orbits.
It makes it cheaper getting to low Earth orbit.
But those higher orbits, you know, the reusable rockets,
they're not necessarily good at getting out to those higher orbits.
And that's where Impulse Space comes in, Tom Mueller's company, with a booster that takes you from LEO to GEO.
Yeah, a kick stage.
Kick stage.
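As a rough aside on why those higher orbits need a dedicated kick stage: a textbook two-impulse Hohmann transfer from low Earth orbit to geostationary orbit costs roughly 3.9 km/s of delta-v on top of reaching LEO. A minimal sketch of that standard math, using generic orbital mechanics (nothing specific to Impulse Space or Orbital Operations):

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def hohmann_delta_v(r1: float, r2: float) -> tuple[float, float]:
    """Delta-v (m/s) for the two burns of a Hohmann transfer between circular orbits."""
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    # Burn 1: circular speed at r1 -> transfer-ellipse speed at perigee
    dv1 = math.sqrt(MU * (2 / r1 - 1 / a)) - math.sqrt(MU / r1)
    # Burn 2: transfer-ellipse speed at apogee -> circular speed at r2
    dv2 = math.sqrt(MU / r2) - math.sqrt(MU * (2 / r2 - 1 / a))
    return dv1, dv2

leo = R_EARTH + 400_000  # ~400 km LEO
geo = 42_164_000.0       # geostationary orbit radius, m
dv1, dv2 = hohmann_delta_v(leo, geo)
print(f"LEO->GEO: {dv1:.0f} + {dv2:.0f} = {dv1 + dv2:.0f} m/s")
```

This works out to roughly 2.4 km/s for the transfer burn plus roughly 1.5 km/s to circularize at GEO; real missions also budget for inclination changes and losses, which is the job a kick stage takes off the launcher.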
Got it.
Very cool.
Jordi, do you have anything else?
Because we're sorry to cut this short, but we are running behind today on a Friday.
But thank you so much for jumping on. This
is really good. Yeah, I came for the gong, so I appreciate it.
We hit the gong. We'll hit it again for you. Let's go. Yeah, appreciate it. There we go.
There we go. I got to get one for the office. You need one. Well, good luck.
Come back on when you put something in space. We'll ring the gong again. Talk to us soon.
Great. Thank you so much. Bye. And up next, we have Merrill from Graphite coming in.
You know Graphite. We do, we read ads about Graphite every single day.
I used Graphite while building my last startup.
While he's joining, let's tell you about a different ad sponsor:
Bezel. Go to getbezel.com.
Your Bezel concierge is available now to source you any watch on the planet.
Seriously, any watch.
Get a hitter.
AGI is delayed.
Superintelligence,
when it gets here, whether it's five years, whether it's 10 years,
it's going to want to look at your wrist and know that you mean business.
That's right.
So go to Bezel.
Well, without further ado, let's bring in Merrill first time on the show.
Merrill, how are you doing?
Doing well.
Thanks for having me, guys.
Good to catch up with you.
It's been a week.
I saw you in New York City, had a lot of fun chatting.
Give us the update and give us the actual impact of GPT-5, the news that happened yesterday.
I want to kind of noodle through that and the impact, what it means for your business.
So GPT-5, obviously, was a huge announcement, made a lot of waves.
Our team immediately got to work on testing it.
And I think we're noticing a few things that are new
or improved about it, and a few downsides that come along with it as well.
On the improvements side, I think it's a lot better at deep thinking.
It's good at one-shotting apps.
Also, the biggest surprise for us is it's meaningfully cheaper on inference than a
lot of the previous models.
On the downsides, though, I think there's a lot of buzz around it being this massive step
function.
And for us, what we've seen practically in reviewing code is that it's an incremental improvement,
but it's not this massive step function.
It is very much still in the realm of many of the other state of the art models.
The other piece there that a lot of folks have commented on is latency.
I'm sure the OpenAI team is working on improving this.
But one thing that we've noticed in updating our code review product...
Yeah.
I want to talk about like latency versus using just like the previous generation of models
that are now being deprecated.
Yeah, the previous generation of OpenAI models, Anthropic, and others.
Yeah. I want to talk about mixture of models. We were in the mixture-of-experts era with GPT-4, and then Grok is doing more of a mixture of different models together.
And I'm wondering, like, code review is pretty high stakes. It's pretty high value. It's a very economic, like, I don't know, like the average salary or hourly rate of someone who's capable of doing code review as a human is really, really high.
And so, is there a world where I actually want to run code review within Graphite on every single model, and then have you design a rubric or a scoring system that compares Claude to GPT-5 to Grok to all the different models, and kind of lets them war it out, and sits at a higher level of abstraction?
Obviously, that's more expensive on the inference cost side, but is it cost prohibitive compared to a human reviewer that might be a couple hundred dollars an hour?
Yeah, it's a great question.
I think today it's certainly cost prohibitive to run every single model.
What we do though and what's worked really well for us is we, you know, much like a human
review, we break down the task of code review and we're looking for different, you know, different
set of things at different times.
So we'll look for bugs, security vulnerabilities, efficiency gains that we can make, and
style-guide inconsistencies with the rest of the code base.
We also let customers define, like, every team kind of has their own protocols and their own
guidelines around how code should look.
And we let them define custom rules.
Many teams are really heavily leveraging this now.
And for each of those tasks, you know, those can be their own tasks in the review process.
We also then, once we generate a lot of comments, we'll have like simple things that we're
looking for, like, you know, little rules that even these can be things that we've learned over
time. Like, if it's kind of left to their own devices, the models like to say things like,
we should update this line of code or something, and developers find that really annoying. So
You probably don't need the highest-power model to say, you know, "don't add comments
that start with we." But that is how we've kind of composed the logic and composed the,
you know, the voting system that we use to determine: is this actually a comment that's going
to add value, or is it just going to be noisy? Like, you know, that's the challenge of many
AI products, and especially AI code reviewers, out there.
Yeah. How do you think about
decomposition? We've heard this trend of, like, the smartest frontier models
might be training smaller models. We've talked to a couple of, almost, like,
micro foundation model companies, where they're training, you know,
just a great model for filtering for profanity, and it runs, and it was trained
on, like, video game graphics cards. And I could imagine, in the world of tool
use, there becomes a future where there's a bunch of small models that are being kind of
orchestrated by the more expensive model. How do you see kind of the surface area of what you're
building kind of fork out? Yeah, I think that, especially as we think about the scope
of code review, it is kind of this first collaborative moment in the developer life cycle.
Historically, we've seen code generation be fragmented, every developer having their own
terminal setup. Even today, we see this with a lot of our customers:
some engineers are using Claude Code, some prefer Cursor. Everyone's trying the new Cursor CLI now.
There's so much heterogeneity on the code generation side. That's always been fragmented,
but code review has always been unified. And it has to be, because everyone is working together.
It is that first collaborative moment in the developer process. And then it connects to all
these other pieces around, you know, CI merging, deployments, like everything that comes downstream
of that the moment you create that PR to get it out to production. And each of those, I think,
represents the different tasks to be done to move those along. So you could think about having one
model or, you know, or one agent that's really good at resolving merge conflicts and another that's
really good at looking at CI failures and figuring out what the problem was and being able to just
fix that in the background without you having to do anything. And we see that, I think the impact
of that will mostly be one of reducing costs over time and just letting you, you know, run a smaller
model, and not have to use this, like, superpowered laser on even a little task that something
smaller could do. What does market share look like from your view, upstream of Graphite?
Like, what... so yesterday we had a bunch of people on that had gotten early access
to GPT-5, and everybody's heavily conflicted, because a lot of them are doing code generation
in some way or another. It's hard to really suss out, like, okay, what is actually
dominating, other than just picking up what people are saying on the timeline as actual users.
But what are you guys seeing?
Yeah, we're seeing a pretty big shift, even over the past three months.
I think we've seen this shift from orgs primarily using Cursor and moving
over from Cursor to Claude Code. I think that has really started to dominate the conversation
at companies of all scales for the past few months.
And now we're seeing, just in the past week or so, we've heard a lot more interest
in the Cursor CLI.
I think that Claude really proved out that prompt-first modality working really well.
And we've seen this sort of shift from the code-first modality of tools like Copilot, and
then Cursor and Windsurf, to now, like, prompt-first with Claude Code, Cursor CLI.
And then I think the next big shift is, like, staying prompt-first, but moving from local
to, like, remotely deployed agents. And we're already seeing, you know, we've seen Cursor build
something there, Cognition, Codegen, many others that play in that space. So I don't think that
we've seen a massive, I don't think we've really seen a massive shift in the past 24 hours,
though, it's still, it's still, I think the good news for everyone is that it's still a pretty
close race. And competition is really, I think, necessary in this world. I think, if we ended up
in a place where there was one model provider that was so much better at code generation than another
one, then, you know, there wouldn't be as much pressure to innovate. They'd have a ton of pricing
power. And it wouldn't reflect that historical model of, like, code generation
being kind of up to developer preference.
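For a concrete picture of the pipeline Merrill described a moment ago, cheap lint-style rules first, then model "votes" on whether a comment adds value, here is a minimal sketch. All the names (ReviewComment, cheap_rules, passes_vote) are hypothetical illustrations, not Graphite's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewComment:
    body: str
    votes: list = field(default_factory=list)  # True = a model judged it "worth posting"

def cheap_rules(comment: ReviewComment) -> bool:
    """Reject obvious noise without calling any model at all."""
    noisy_openers = ("we should", "consider maybe", "it might be nice")
    return not comment.body.lower().startswith(noisy_openers)

def passes_vote(comment: ReviewComment, threshold: float = 0.5) -> bool:
    """Keep the comment only if enough model votes say it adds value."""
    if not comment.votes:
        return False
    return sum(comment.votes) / len(comment.votes) >= threshold

def filter_comments(comments: list) -> list:
    """Run the cheap rules first, then the voting threshold."""
    return [c for c in comments if cheap_rules(c) and passes_vote(c)]

drafts = [
    ReviewComment("We should update this line of code.", votes=[True, True]),
    ReviewComment("Possible SQL injection: user input reaches the query unescaped.",
                  votes=[True, True, False]),
]
kept = filter_comments(drafts)
print([c.body for c in kept])  # only the security comment survives
```

The ordering matters for cost: the string-level rules are free and prune the "we should..." style comments developers find annoying, so the expensive model votes only run on candidates that already cleared the cheap filter.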
How is AI changing developer communication around pull requests?
Like the canonical example is like you write a summary, you write like a headline.
But I imagine that it's probably pretty easy.
Like one of the things the models are great at is just condensing down information.
I'm not a particular fan of them expanding information oftentimes.
But if there's a really big pull request, you can kind of
show varying levels of summaries. And I find myself doing this even in the consumer realm,
where I will say, okay, I want a deep research report on something, the history of a
business. But then give me a one-line summary, give me, you know, five bullet points, then give me,
you know, a New York Times article length, a couple hundred words, and then give me the full 30-page
PDF. Because I want to be able to consume it like a fifth grader, like a college student,
in these successive levels of depth. Is any of that
happening in the pull request world, in the developer code review world?
Absolutely. I think one of the most used AI features and one of the first things that we
launched in the Graphite platform was the ability to write the PR description for you.
And it's something that developers kind of famously hate: writing a long PR description.
Yeah, yeah, just do this for me. But AI is amazing at understanding a change, summarizing it.
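As a rough sketch of that multi-depth summarization idea (the depth names and prompt templates below are illustrative assumptions, not Graphite's actual implementation), you could pair the same diff with depth-specific instructions and send each one to a model:

```python
# Illustrative sketch: one prompt per summary depth for the same diff.
# Depth names and templates here are hypothetical, not Graphite's real feature.

DEPTHS = {
    "one_line": "Summarize this diff in a single sentence.",
    "bullets": "Summarize this diff as five bullet points.",
    "full": "Write a detailed PR description for this diff, walking through each file.",
}

def build_summary_prompts(diff: str) -> dict:
    """Return one LLM prompt per summary depth for the given diff."""
    return {
        depth: f"{instruction}\n\n```diff\n{diff}\n```"
        for depth, instruction in DEPTHS.items()
    }

diff = "--- a/auth.py\n+++ b/auth.py\n@@ -1,3 +1,4 @@\n+import hashlib\n"
prompts = build_summary_prompts(diff)
# The reviewer then picks whichever depth they want to read first.
```

Each prompt pairs the same diff with a different instruction, so the reviewer can drill from a one-liner down to the full write-up without re-running anything on the code side.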
And now we're actually, I think this is also where we'll see AI code review going:
it's moving from this world where you're just scrolling through
the diffs, and everything is just in alphabetical order, and you have to kind of guide your own
way through it. I think now AI is incredible at understanding the change.
Like, we're actually working on a new feature launching pretty soon in Graphite where you'll be
able to just ask Graphite: hey, what are the important parts of this change? Walk me through the
key pieces of code that changed here. What should I be looking at? What is high risk? And making it
a lot more interactive. I think even in AI code review right now, we're still in like the Copilot V1
moment of AI code review, where we're just adding comments on GitHub. But the future, I think, is
much more one that's interactive and is guiding you through the code change in real time and helping
you to both review it and also to make updates and coordinate the various agents that are working
on that change in real time.
That's great. Jordi, anything else?
Yeah, I'm curious. I mean, the main thing is, I guess, going into this year everyone said this is
the year for agents. And it felt like yesterday, with Greg or Mark, I said this was the year more
of deep research, but then they said maybe also coding agents. Coding agents have been the other
thing. So what do you expect out of the coding agent market before the end of the year?
Good question. I think the biggest thing is that we're just slowly moving around that grid.
The way I think about the market is, you have one axis of where does the code live.
On the one hand, historically, it's all lived locally.
Now we're seeing a shift to some of this living remotely and having these background agents that you access primarily through prompting.
And then on the other axis you have: is the interaction modality code-first or prompt-first?
The code-first ones: you started with Copilot, you had Cursor, Windsurf. Now we're seeing the prompt-first modalities in Claude Code, the Cursor CLI, Warp, and others.
And we've seen that shift
from the local code-first to the local prompt-first tools now.
And now I think what I'm curious to see
for the rest of the year is how quickly we then go
from local prompt first to remote prompt first
using Cognition, Codegen, Cursor background agents, those others.
And I do think our bet, and what we're seeing,
is that that is very much the future.
It likely requires another step-function improvement
in the models for that to truly be
the primary modality of software development.
But I think that will be the shift that we'll start to see
for the rest of the year: that migration
from local to remote prompt-first modes.
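The two-axis map he describes could be sketched as a small lookup table (the cell placements are just a reading of the conversation above, not official product categories):

```python
# Hypothetical sketch of the speaker's two-axis grid of coding tools:
# axis 1 = where the code lives, axis 2 = interaction modality.
# Placements reflect the discussion, not official categories.

GRID = {
    ("local", "code-first"): ["Copilot", "Cursor", "Windsurf"],
    ("local", "prompt-first"): ["Claude Code", "Cursor CLI", "Warp"],
    ("remote", "prompt-first"): ["Cognition", "Codegen", "Cursor background agents"],
}

def tools_at(location: str, modality: str) -> list:
    """Return the example tools named for one cell of the grid (empty if none)."""
    return GRID.get((location, modality), [])

# The shift he predicts: from ("local", "prompt-first") to ("remote", "prompt-first").
```

The empty ("remote", "code-first") cell is consistent with his point: the remote quadrant is being reached through prompting, not through editing code directly.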
Makes a lot of sense.
Well, thank you for joining.
Thank you so much.
Have a great Friday.
Have a great weekend.
We will talk to you.
I'm sure it'll be a busy weekend for the whole Graphite team.
For sure.
Making sure that GPT-5 is rolling out.
Fantastic.
We'll talk to you soon.
Have a good one.
Cheers.
Thanks for having me, guys.
We have to talk about the actual biggest news in artificial intelligence.
We missed this.
Everyone's been focused on GPT-5.
The biggest news in artificial intelligence is that the billionaire Scale AI co-founder,
Lucy Guo, pays nearly $30 million for an L.A. spec house.
Lucy Guo co-founded Scale, now runs Passes, and got a discount on a Hollywood Hills home.
Tech entrepreneur Lucy Guo, 30, has been called the world's youngest self-made woman
billionaire. Now she's putting some of her earnings into real estate. She paid $29.5 million for a newly
constructed mansion in LA's Hollywood Hills. According to people familiar with the transaction,
the price is a significant discount. They originally were pricing this house at $43 million.
It's a 13,500 square foot house, five bedrooms on 1.2 acres. And this is, I mean, everything about
this house looks great. The photos are fantastic. The one mistake: Lucy Guo couldn't be reached for
comment. Why are you no-commenting this article? You've got to tell your whole story.
All I can say is, you can sign up at passes.com.
Exactly. There was a great opportunity to upsell, to do all sorts of stuff. But this house has
everything: a sunken fire pit, a pool, a jacuzzi. It has everything. It looks fantastic.
L.A.'s high-end real estate market has been hobbled over the past year by the L.A.
fires, outward migration to lower taxes, and broader economic uncertainty. Most of the
recent high-end sales have closed at significant discounts. So there's really no reason not to buy
a $30 million house. A friend of mine in Malibu says
there's a barbell in the market right now, where things are hot under like five,
like single-family homes, and then really hot on the high end, in like the 50-plus range.
Well, speaking of 50-plus, ex-Google CEO, Eric Schmidt, has purchased LA's Spelling Manor for $110 million.
Finally.
This is a very fun article. Amid a challenged luxury market, the property sold for less than its 2019 sales price.
He got a steal.
This was built.
The Spelling Manor, if you're not familiar with this, this is an iconic, legendary house.
If we can pull up the image, look at this thing.
Insanely large, multiple wings. It looks like a B-21 Raider from above. It's amazing.
It's amazing. It famously has an entire room just for wrapping presents. Because when
you're at this level... Yeah, when you're at this tier, you don't just... you've got to be wrapping.
Yeah, you need a whole room for wrapping presents, because it's a full-time job wrapping lots of presents,
because you'll be giving out lots of presents. It's a 56,500-square-foot property, 14 bedrooms on five
acres and this is in Los Angeles. This was built by Aaron Spelling, who was a TV producer
in the 90s, very, very famous. The French Chateau style property is slightly larger than the
White House. It has a bowling alley, a wine cellar, and a beauty salon with massage and tanning
rooms. So if you need to get a tan on, you go to the tanning room. I don't know what the movie
theater status is. Well, the property will remain a single-family home. The Schmidts,
philanthropists who have homes around the world, purchased it primarily to host meetings and events for
LA nonprofits and cultural institutions. They're not even living in it full time. It was sold for a
discount. It was listed for $137.5 million after several years on the market and multiple
price cuts. I mean, it's such a big, iconic house. The buyer pool is probably pretty small.
It was last sold in 2019 for about $120 million, when the seller was British heiress Petra Ecclestone,
who famously hired a team of roughly 500 workers to complete a massive renovation of the
property. And now, just six years later, the Schmidts are planning a significant remodel of the
house to simplify the floor plan. So this thing is just getting more complex, and then they're
putting in a present-wrapping room, they're tearing out a present-wrapping room. Everyone is
remodeling this property. And they're no longer going by Spelling Manor. You know, the guy who
built it, he put his name on it. They're pulling it off, and they're calling it
594, a reference to its address on Mapleton Drive. Very nice. Well,
I have a post to cap off the week. It is in the timeline from Lulu Meservey.
Okay. So we've been, I guess, briefly mentioning over the last few weeks that it
feels like we've maybe reached a local top
on startup launch videos.
A lot of them are starting to look the same.
They're sort of the default now.
It's very hard to stand out.
People, I think, are certainly glazing over them a little bit.
But Lulu has potentially a new meta.
So let's pull this up.
She says, who's going to be the first startup to do this format as a launch video?
And it's the sorority video.
And I would go out and bet that it's Cluely.
Of course. I bet they're already working on it.
They're probably working on it. There's not a format that they don't like.
The challenge is some startup might... This is so much choreography. I guess the people in the
back are just kind of dancing up and down. They're not really. Yeah.
So the challenge is, if you're launching your startup Monday... Yep. You could be spending all
weekend doing this, getting it ready. And then Roy Lee and the team will launch it like Sunday, in the middle of the day. So good luck out there
if you're trying to launch.
Speaking of Cluely,
obviously they still have to do so much
on the product side to really nail it,
get retention, all this other stuff.
But in terms of just cheating as a keyword,
I think that there's something
that's going to be very, very sticky there.
Julia Steinberg, who's been on the show,
is posting a picture of a billboard
that says: hi, my name is Roy.
I got kicked out of school for cheating.
Buy my cheating tool, cluely.com.
And I just feel like it's such a simple encapsulation of the value prop, just in one word that grabs your attention.
I was thinking it's kind of like Red Bull gives you wings.
Red Bull doesn't actually give you wings.
It's just...
Speak for yourself, John.
Yeah, yeah, yeah.
You've grown wings.
You've grown literal wings, you know.
But it sticks in your mind.
It grabs you.
And I feel like, even though the Cluely stunts are kind of getting less and less shocking,
because it's like, oh, okay, we've kind of come to expect that Cluely is going to
do a sorority dancing video, for example.
But just the cheating as a keyword,
if they can own that in the broad consciousness,
I think it's going to continue to pay dividends
and grab people's attention and just be something
that they can run with for a long time.
Because even though the Cluely stuff went super viral,
everyone in tech talked about it,
everyone in tech was arguing about it for a long time,
there are still hundreds of millions of Americans
that have never heard of Cluely,
or maybe they saw it once and forgot about it.
And I think that that is going to be like a keyword marketing campaign
that just keeps sticking in people's minds.
And we'll find out whether it goes viral.
Yeah, they have to figure out how to go mainstream viral.
But like it's such a distillation.
It's such a compression of a meme, of an idea, of like,
oh, I would like to cheat on that, you know,
report that I've been working on at work.
And actually, my boss doesn't care if I cheat.
They don't care if I use that.
They care about the result.
But it grabs your attention.
And so, like, I could see it running in a Super Bowl ad and actually driving conversions
and downloads.
Now the product has to be really great.
It has to be better than ChatGPT.
That's an extremely tall order.
They're going up against Gemini, which will help you cheat, quote unquote.
I'm using cheat in the just-AI-assistant sense, not actually literally cheat. And they're
going up against free models and cutting-edge models and all sorts of different things.
And we'll see how sticky their new UI implementation is, where it screen-scrapes and records your calls.
But in general, I think just as a marketing technique, I don't think we've seen the end of
cheating as a buzzword that Cluely will be reaping the benefit of.
Totally. Anyway, anything else in the timeline, Tyler? Is the timeline in turmoil, or is the timeline quiet?
Quiet on the Western Front. There's one thing. So, some small updates on GPT-5 from Sam Altman.
He says they're doubling rate limits for Plus users.
Can you do the Sam Altman voice? What is this? Sam? They're doubling... We're doubling rate limits.
We're going to let Plus users continue to use 4o.
There we go. That's how I imagined it. Yeah, they're bringing 4o back. Interesting.
And the auto-switcher was broken yesterday.
Oh, interesting.
So he says it will seem smarter today. Yeah, it was out of commission for a chunk of the day.
It's kind of hilarious that they had, you know, a pretty big launch, and there were like
two bugs: one with the chart that clearly got misrendered for the livestream and the blog post,
and everyone's like, chart crimes. And then the model switcher's broken, and people are like,
this doesn't work at all. But I'm thinking, I think we're back. I think we're back. We're back.
Maybe. You heard it here first. We are back. Accelerate your timelines. You now have two days
to escape the permanent underclass, so have a great weekend, everyone. Get to work. Two full days.
I will be... you know, I think most people, it's summer, you should escape the permanent underclass this weekend by
hitting the pool. Hit the beach. Have some fun out there.
Get a ramp floaty.
Get a ramp floaty and spend a weekend on it.
Enjoy.
Have a great weekend everyone.
We will see you Monday.
We love you.
Leave us five stars on Apple Podcasts and Spotify.
And thank you for watching.
Cheers.
See you later.
Goodbye.