TBPN Live - Gemini 3 Launch, Big Tech Backs Anthropic, OpenAI Adds Fidji Simo | Jonathan Neman, Mike Knoop, Ashlee Vance, Jeremy Epling, Keone Hon, Stephen Balaban
Episode Date: November 18, 2025

(00:34) - Gemini 3 Launch
(30:54) - Mike Knoop, co-founder and Head of AI at Zapier, discussed the significant advancements of Google's Gemini 3 model, highlighting its achievement of doubling the state-of-the-art performance on the ARC v2 benchmark. He noted that, despite this progress, the model still exhibits unexpected errors on simpler tasks, suggesting areas for further research. Knoop emphasized the need for new ideas to address these challenges and expressed optimism about the potential for mass automation enabled by AI reasoning systems.
(59:11) - Jonathan Neman, co-founder and CEO of Sweetgreen, discusses the journey of starting the company in 2007 with two friends during their senior year at Georgetown University, aiming to create a healthy fast-food alternative. He highlights the challenges of scaling the business, including decisions against franchising to maintain quality, and the integration of automation like the "infinite kitchen" to enhance efficiency. Neman also addresses adapting to consumer trends, such as eliminating seed oils from their menu, and emphasizes the importance of strategic real estate choices and responding to evolving customer preferences.
(01:32:38) - Ashlee Vance is an American journalist and author, renowned for his 2015 biography of Elon Musk and his work as a feature writer for Bloomberg Businessweek. In the conversation, Vance discusses his recent travels across the U.S. to film episodes on hard tech innovations, including visits to Tennessee, Detroit, New England, and Texas. He delves into topics such as humanoid robotics, the dominance of Chinese manufacturers in actuator production, and the challenges facing the U.S. robotics industry. Vance also shares insights on under-hyped hard tech companies, the progress of autonomous vehicles, and the potential resurgence of airships for cargo transport.
(02:01:15) - 𝕏 Timeline Reactions
(02:09:34) - OpenAI Adds Fidji Simo
(02:19:06) - Saudi Arabia to Invest $1T in the U.S.
(02:21:46) - Valar Atomics Splits Atom
(02:27:58) - Jeremy Epling, Chief Product Officer at Vanta, discusses the company's recent VantaCon conference in San Francisco, highlighting the launch of their Agentic Trust Platform aimed at transforming enterprise trust management. He emphasizes the integration of AI to automate security and compliance tasks, addressing challenges like AI trustworthiness and the evolving threat landscape. Epling also outlines Vanta's approach to proactive risk management through AI-driven insights and partnerships with other security firms to enhance their platform's capabilities.
(02:41:28) - Keone Hon, co-founder and CEO of Monad Labs, discusses his transition from leading high-frequency trading teams at Jump Trading to developing Monad, a high-performance blockchain designed for High Fidelity Finance. He highlights Monad's compatibility with Ethereum, enabling developers to leverage existing code and tools while benefiting from significantly higher transaction throughput. Hon also emphasizes the importance of broad token distribution and community engagement, drawing parallels to Dogecoin's widespread adoption, and notes that Monad has raised approximately $120 million, with the token sale open until Saturday at 9 pm Eastern.
(02:51:12) - Stephen Balaban, co-founder and CEO of Lambda Labs, leads the company in providing advanced GPU infrastructure for AI developers and researchers. In the conversation, he discusses Lambda's recent $1.5 billion equity funding round, emphasizing the company's conservative capital structure and focus on building a robust, long-term business resilient to market fluctuations. Balaban also highlights Lambda's strategic investments in GPU infrastructure and data centers, aiming to vertically integrate operations to accelerate the deployment of AI infrastructure.
(03:10:15) - 𝕏 Timeline Reactions

TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
You're watching TBPN.
Today is Tuesday, November 18th, 2025.
We are live from the TBPN Ultradome, the Temple of Technology, the Fortress of Finance.
The capital of capital.
Gemini 3 Pro, Google's most intelligent model yet with state-of-the-art reasoning,
next-level vibe coding, and deep multimodal understanding.
Let's hear it for our sponsor, Google AI Studio, Gemini, launching Gemini 3.
Obviously, deeply conflicted, but we're going to have a fun conversation about the big launch today.
Google is, of course, a sponsor of TBPN, but we'll take you through all the reactions,
and we're going to get some conversations going with other folks in the industry.
We have Mike Knoop from ARC AGI coming on the show in just 30 minutes to break down how Gemini 3 is benchmarking.
I actually think that there's two sides to analyzing a model release.
these days. One is
you benchmark it, you use it,
you test it, you demo it.
And that has been getting
less and less interesting.
It's very incremental. The more
interesting thing is how do the other labs
respond? And today
we're going to go through a little bit of both of
those things. Obviously
the big news
at least from my reading on it
is that Gemini 3
performs very well on Arc AGI
V2, a huge jump.
twice the performance of the previous state of the art.
And also some interesting findings.
Mike's going to break it all down for us.
But it's definitely a smarter model.
And there's a whole bunch of interesting ways
to show that, to demo that, to quantify that.
But ultimately, I don't think anyone's making the claim
that this is super intelligence.
It's not a step change from what we've experienced before.
It's what you know and love.
It's AI in chat.
It answers things.
It writes some code for you.
It can do a bunch of cool things.
But there's nothing that we're like,
oh, it can finally do this.
It can't do that.
Yeah, it can do a bunch of cool stuff.
The best auto-complete ever.
Tyler, how do you respond to that?
I don't know.
I think that's a bit too dismissive.
The model's like really good.
I think probably the most important thing
and this is kind of shown by the arc scores.
Well, kind of.
But it's like the visual understanding,
the computer use that you can use.
Basically, there's some benchmarks that measure this,
like how well can it navigate a website or something like this.
And it's like basically, the models went from being like really, really bad at this,
and now this model is like solid.
It's like reasonably good.
So it's like, okay, maybe this is what gives us agents finally.
And that would be like an actual step change in capabilities.
Yeah, maybe.
Maybe.
We'll have to see.
I mean, it still feels like even for that example,
Like we need some scaffolding, we need some wrapping around it.
It's not like yesterday we weren't able to do something with AI, and today, in vanilla Gemini 3, you can just do it. There's no net-new functionality, necessarily.
Sure, I think it's better.
It's like as good as we would want to expect.
It's not slowing down, I would say.
No, no, no, no, no, not at all.
It's not slowing down.
It's just that it is getting better.
I mean, it might be... it's growing, but decelerating.
Is that fair to say?
Or are we accelerating?
I don't actually know that it's, I don't think it's that big.
Tyler, say the word decel.
Say the word decel.
This is a great model.
I'm very excited about this.
I agree.
It is the best possible.
I was framing it in somewhat of the same way as the iPhone launches.
Like, it's newer, better, smarter, faster, stronger, newer, and better.
And it's like, it is all of those things.
which is good. You don't want to go backwards. But we're waiting to see on the net-new capabilities, on the binary step changes. I think over the next week or two, we'll see if it's actually really good. No, no, I'm not saying it's not really good. I'm saying binary, before and after. Like with ChatGPT, it was like, you couldn't talk to the computer, and then you could. And then with, you know, the launch of Windsurf, Cognition, Devin, it was like,
You couldn't write code with the computer.
Now you can.
And it was like, like the AI became capable of writing software.
Yeah, but it's also like, it couldn't write code, and now it can write kind of mediocre code.
Yeah, and now it can write better.
But now it can write mediocre code, and now it can write good code.
Yes, yes.
That's also a step change.
But we do have our benchmark, the one true binary step change, which is: is it funny?
We need to know if it's funny.
I gave Tyler the prompt.
Do you want to read the result, the stand-up comedy bit from Gemini 3?
While you pull that up, let me tell you about ramp.com.
Time is money, save both.
These are used corporate cards, bill payments, accounting, and a whole lot more, all in one place.
Tyler, what did Gemini 3 say when you asked it for a comedy routine, for a stand-up special?
Let's see if it's funny.
Okay, it's pretty long.
Okay.
Okay. I bought one of those smart health watches recently. You know the ones? It tracks your sleep, your heart rate, your readiness. I paid $300 for a piece of rubber to tell me I'm dying. Get the soundboard. That's all it does. I wake up, I feel fine. The birds are chirping. I'm ready to seize the day. Then I look at the app and it's like recovery score, 12%. Your heart rate variability suggests you are technically a corpse. Go back to bed. And immediately I feel sick. I'm like, oh, okay.
God.
You're so bad at reading this.
I mean,
like,
this is terrible.
This is terrible delivery, brother.
Like, a nice threat.
I'm not,
I'm not good at this.
It's a placebo effect for hypochondriacs.
I started to think about my grandfather.
He fought in the Pacific.
This guy slept in a mud pile for three weeks using a helmet as a pillow.
Okay.
If he woke up and didn't hear artillery,
that was a 100% sleep score.
Okay.
He didn't need a nap to tell him if he was ready for the day.
If he had legs,
he was ready.
Okay.
Imagine me explaining my struggle to him.
I'm sitting there like, yeah, pop, pop.
I know you stormed a beach under heavy fire,
but I had two IPAs last night and the room was slightly too humid.
So my REM latency was suboptimal.
I can't go to work today.
This is kind of a good bit.
I'm in the red zone.
Yeah.
We've evolved.
I mean, it keeps going.
Okay.
I asked for the poster benchmark.
I said, write me a post, Gemini 3.
Write me a post about technology that will get a thousand likes on x.com. And it gave me a few options. To its credit, here's option three: "Tech has solved a million problems" (this is in bold) "but has it created one big one? We now have infinite connectivity, yet feel more isolated. Infinite data, yet more confused. Hyper-efficiency, yet less free time. The law of unintended consequences is the most powerful force in the digital age. We need an ethics reset. What is the single greatest downside of the last 10 years of tech innovation?" Arrow down. Hashtag technology.
No, no, it's just asking for engagement bait.
It loves engagement baiting. Like, no one does that anymore. No one goes on X and says, let me know what you think in the comments. It's so 2017. The other one, option one, is: the next 12 months will decide the winner of the AI race.
It won't be Google or Open AI.
It will be the company that masters hyper-personalization for the average consumer,
not the most powerful model, but the one that seamlessly integrates into your daily life,
your email, your calendar, your health.
The real battle isn't AGI.
It's AI to the power of I equals impact.
Which dark horse will win?
Okay, that's insane.
I love how...
It is funny how posting seems to be unverifiable. Like, it's very hard to create a verifiable reward environment for comedy that you can actually RL against.
What do you think?
There's also the other benchmark.
It was like the shrimp fried rice joke.
Yeah, yeah, yeah.
I think it did well on that.
So I'll read through some of them.
So the joke is like, insane.
You're telling me shrimp fried this rice.
That's like the original one.
So it's like, I'm asking it to come up with more of these.
Yes.
So I'll read through some of them.
You're telling me a chicken fried the steak.
Okay.
You're telling me the sun dried these tomatoes.
I like that one.
You're telling me a beer battered this fish.
Okay.
You're telling me a gingerbread this man.
The gingerbread this man is insane.
You're telling me a pier...
Wait, you're telling me a pan seared the salmon?
Pan seared salmon?
Yes, I am.
Yes, a pan literally seared the salmon.
That's not the joke.
That's an anti-joke.
You're telling me a stone-washed these jeans?
That's pretty good.
I like that.
Stone-washed jeans.
You're telling me a stone washed these jeans.
You're telling me a hand, toss this pizza?
I mean, yes, literally.
That's exactly what it means to...
You're telling me the French roasted this coffee?
Yes.
All of these are just true.
The genius of the comedy of the shrimp frying the rice is that the shrimp didn't literally
fry the rice.
The shrimp is being fried in the rice.
But this is, I think, a step change better than what we saw with GPT-5.
I wouldn't say step change.
I would say, I would say incremental.
Like, it is better for sure, for sure.
But this at least is, like, logical, where the GPT-5 ones were like, you're telling me a squirrel ate this watermelon?
Yeah, it was like completely unrelated.
It didn't even understand the concept of finding the root phrase, like it needs to be, like, stone-washed jeans, and then you rearrange it.
And it doesn't quite understand when that hits or when that doesn't hit.
Some of those are very funny, though.
One of them is extremely unintentionally funny, which I enjoy.
Or maybe it's intentional.
Maybe it's AGI deep down and it knows, knows, knows.
It's great.
Anyway, you're telling me a Restream streamed this livestream? One livestream, 30-plus destinations. If you want to multi-stream, go to restream.io. "Sundar, pitch AI," Jordi posted back in July of 2025. Nominative
determinism is undefeated. Sundar really did it. He pitched AI. He was being mocked. He was being mocked
for a long time for getting on stage at Google I/O shortly after ChatGPT launched and saying AI, AI, AI, AI. And they did a super cut of every time he said AI; he said AI a lot.
And so it made it look like, oh, he's behind the ball and he's trying to catch up.
And to some extent, I don't know if they were actually behind the ball, but they were certainly playing catch up in like the attention game.
They just weren't getting enough attention.
And so it was the press release economy.
They were putting out a lot of press releases.
But they are maybe done with the press releases because now they're letting the model actually speak for itself.
And you can see that with the Gemini 3 Pro model card, which is doing very well: better than GPT-5.1 on a lot of stuff, better than Claude Sonnet 4.5 on a lot of stuff. On Humanity's Last Exam, it's getting 37.5%. ARC-AGI-2 is up at 31%, over 13 and 17. Across the board, it seems like it's a good model, sir. And so Zio Fon says: Gemini be like, whoever prayed on my downfall, pray harder. And I couldn't agree more.
It's great to see Google becoming a winner, and just realizing that this was a sustaining innovation for them, and that they were able to, you know, take advantage of all the infrastructure they had across TPUs, DeepMind, GCP. They were set up to excel here, got taken a little bit off the back foot on the consumer side, but seem to have played catch-up, at least on the foundation model side, very well.
Matt Schumer says the last time we saw a capability jump of this magnitude was the release of GPT-4 in March '23. We are entering a new era.
Okay, yeah. So points for Tyler here. Certainly agrees with Tyler. There's a significant jump.
It is the age-old question: are we accelerating or decelerating? Either way, we're definitely making progress. It certainly looks like acceleration on the ARC-AGI-2 leaderboard. You can see we are growing exponentially there. Really, really exciting chart. So Gemini 3 Pro is at 31% completion on ARC-AGI-2. That is, of course, the
puzzle-solving game that is easy for humans. Even children can do it, but AI has historically
struggled with it. Gemini 3 Deep Think preview gets a 45.
percent on it at $77 a task.
And this is just way above GPT-5 Pro and Grok 4 Thinking.
When Grok 4 Thinking came out, it was before GPT-5,
and it was by far the highest on the chart.
It was really, really up there.
And Elon was very excited about that and was, you know,
showing that Grok 4 had really advanced.
Well, now we're back in the horse race.
How about Grok 4.1?
4.1, I haven't seen it benchmarked. We can ask Mike if he's heard anything. But whatever you think, get on public.com. Investing for those who take it seriously.
They got multi-asset investing, industry-leading yields. They're trusted by millions. So,
back to ARC AGI. Gemini 3 also has good results on ARC-AGI-1, but the interesting thing here that Mike highlights is on V2. He says: we're also starting to see the efficiency frontier approaching humans. The fastest V2 task Gemini 3 Pro solved took only 188 seconds; the human panel solved that one in an average of 147 seconds.
So you're getting like human level output, but also human level speed.
And then if you get to human level cost, then you're really in the game.
It's wild, wild.
Karpathy jumped in with some notes.
He said, I played with Gemini 3 yesterday via early access.
Few thoughts.
First, I usually urge caution with public benchmarks because, in my opinion, they can be quite possible to game.
It comes down to self-discipline and self-restraint of the team (who is meanwhile strongly incentivized otherwise) to not overfit test sets via elaborate gymnastics over test-set-adjacent data in the document embedding space, realistically because everyone else is doing it.
The pressure to do so is high.
Go talk to the model, like we did. We went and said, give us a stand-up routine, give us some one-liners. Talk to the other models.
Karpathy says: I had a positive early impression yesterday across personality, writing, vibe coding, humor, etc. Very solid daily-driver potential, clearly a tier-1 LLM, congrats to the team. Over the next few days/weeks, I'm most curious and on the lookout for an ensemble over private evals, which a lot of people now seem to build for themselves and occasionally report on here.
I wonder how fast it will roll out. I have a Gemini Pro Ultra subscription, but it's on my personal email. And so I need to figure out how to actually get Gemini 3 Pro on the consumer app, so I can actually test it on my phone in my daily use.
It's always tricky with these Google launches, because Google's so big. I mean, you're starting to see it now with OpenAI rollouts, where they'll say, hey, GPT-5 is out, and we'll be rolling it out over the course of the day, because the system is big enough that it actually takes time to roll out. And I think Google has even more of that.
This is pretty cool, from Patrick Collison. He says: I asked Gemini 3 to make an interactive web page summarizing 10 breakthroughs in genetics over the past 15 years, and here's the result. Pretty wild. Did you click through this, John?
No, no, I didn't. But it's shared directly from Gemini.
Yeah. So this is basically a website, or an app. And it's notable that even the UI itself is fully interactive.
Yes, yes. So I did this with Claude Code a little bit, where I wanted to visualize basically a deep research report, and I wanted to turn it into a website, and it just generated all the HTML. At the end of the report, it gave me an HTML page that I could open in Chrome and use like a website. But it was local; I couldn't share it, because it wasn't actually on the internet. This is really, really cool. This is definitely the beginning of this generative UI stuff.
Yeah, I think it was Sundar that posted it, but in, like, the AI mode in search, it's now using Gemini 3, and there are some prompts where it'll generate UI.
Yeah, it's so cool, because Google's always had that generative UI to some extent, but it's always been, like, module-based.
Yeah.
Also, I think I expect this to be, like, pretty viral, you know.
Totally.
And potentially a growth loop for Gemini,
as people just come on here,
create these mini apps, share them around via these links.
Yeah. I feel like, I feel like doesn't Open AI have a canvas feature?
Yeah.
But it's like maybe less shareable?
I don't know.
But can it generate HTML, custom HTML, and then actually share that?
I've never seen someone sharing an OpenAI canvas.
I mean, this would be a good benchmark.
Like, I don't know what the prompt was for this.
I asked Gemini 3 to make an interactive web page summarizing 10 breakthroughs in genetics over the past 15 years.
Do you want to try and benchmark that, maybe in, I don't know, Claude, and in ChatGPT, or in OpenAI's Canvas product? Because the fact that this is just a URL at the end of the day, that is a powerful growth loop. That's very cool. I wonder... yeah, I'd be surprised if Gemini really was the only one to have this feature, either right now or for long, because it seems like a killer feature.
Gemini 3 Pro is going absolutely vertical on vending bench right now.
Let's see this.
Money balance over time across four runs.
Today we're revealing two new evals, Vending Bench 2 and Vending Bench Arena.
Soon we expect more models to manage entire businesses.
This requires long-term coherence.
Oh, so this is where you manage the vending machine.
But is this all simulated?
This is vending machine?
This is simulated?
This is simulated?
There was, um, Anthropic, a couple months ago, did like the actual vending machine in the office.
And it was losing money
and it was getting confused a little bit.
Yeah,
because people would order like,
just like metal,
like a piece of metal.
Yeah.
And then it would do it.
And then you could like haggle the price down.
Yeah, yeah, yeah.
It would negotiate on every price apparently.
And also it consistently thought
it was like a human in the office.
And so it would keep saying like,
it was one,
it was that 60 minutes documentary.
It was like,
oh yeah,
like I'm down on the third floor.
I'm wearing a green tuxedo, like come hang out.
Yeah, it said it was wearing a red tie.
Yeah, red tie.
I like the idea that it just thinks like, oh, well, what would I wear if I was in the
Anthropic office?
Like, I'd probably wear a red tie.
It's like no one wears ties in that office at all.
But this is the first ever Vending Bench game. Claude Sonnet 4.5, GPT-5.1, Gemini 2.5 Pro, and Gemini 3 Pro competed to win the local vending machine market. Gemini 3 Pro made more money than the other three contestants combined. And so
congrats to Gemini 3 Pro
for dominating
the vending machine.
The vending machine game.
Before we move on to the next
Gemini 3 posts, let me tell you about adquick.com.
Out of home advertising made easy
and measurable. Say goodbye to the headaches of out-of-home advertising. Only AdQuick combines technology, expertise, and data to enable efficient, seamless ad buying across the globe. Anyway,
Adi says, I had early access to Gemini
3.0 for about two days.
thanks to Official Logan K and the AI Studio folks.
Here we get to see GPT-5.1 Thinking, on the left, and Gemini 3.0, on the right, build the same Xbox controller in Minecraft, and, yeah, pretty remarkable results. You can start to really understand the raw capabilities. GPT-5 Pro, for context, is not quite as capable.
I really want to know how this is actually orchestrated. Is this, like, writing some sort of text or markdown file that is then imported into Minecraft?
Yeah, or is it more like an agent? Or is it actually driving around and moving, using the internal UI?
Yeah, because, you know, Google demoed an agent product that could actually, you know, use the keyboard to navigate around. I wonder what's going on here.
What's your review of this Ferrari in Minecraft?
I think it looks pretty solid. It's pretty good. I mean, it's meant to be an F40.
I do like... the hood is a little rough.
Yeah, the front area is a little rough.
Like, this is the worst it's ever going to be. It's going to get better. This is definitely the worst that Minecraft Ferraris are ever going to be. But I do feel like, if I just search, like, Minecraft... I mean, this is the vision of this sort of AGI future that Tyler's been telling us is right around the corner.
Okay, these are like so much better.
If you go to the MC-Bench website, you can see what other models produce. And I mean, this is like way, way better. This is actually one of my favorite benchmarks, because it's much harder to, like, benchmax this, I would think. And also, it just seems like models don't really do this. Like, if you look at a lot of Grok models, which are sometimes accused of being benchmaxed, you kind of look at their Minecraft creations, and it's not very good. So I think these give you a much better sense of the actual capabilities of the model.
I found a Ferrari F430 in Minecraft that looks amazing that I want to share somehow. How do I share this? Let's see. Can I only share the X link here? I just have an image. If we go to the end... wow.
I think I know what you're pulling up. Did you see it?
If you search, if you just search Ferrari... the F430 Scuderia, yeah.
Yeah. Like, that looks amazing. Pull this image up, because that'll show you how it's done compared to the Minecraft one.
Wait, so
do we know how this is actually generated with Gemini 3 Pro? Like, what is the prompt?
I don't think it's like an agent. It's just text.
It has like a text representation of the
that's still really, really impressive.
Like that's actually crazy.
It definitely understands a lot.
Yeah, but it's not this.
Look at this, Tyler.
That is human craft.
You know what that is.
It's probably like, you know, a team of 50 kids for a month building in Minecraft.
That's amazing.
Lisan al Gaib, of course, says it's so over for OpenAI and Anthropic,
If you want engagement on X, just start by saying it's so over.
Yes, yes, yes.
and highlighting some more of the benchmarks.
Of course, it is not over for either of them.
Yeah.
But it's certainly a competitive race.
I'd be very interested.
We have to get some of the SemiAnalysis folks on the show soon. I'm very interested in understanding: okay, so we got this big jump, and it's pretty significant. What's the actual structure of the capex that went into Gemini 3 Pro?
Like, how big is the training run?
How much do they have to spend?
Because, like, I think that they're going to make the money back very quickly.
Like, people are going to use this model.
They're going to pay for it.
They're going to use it all over Google, obviously, but also people are just going to pay for the API.
But is this $100 million? Like, did they build a special data center for this?
Is it all TPUs?
How many TPUs?
I think it is.
All TPUs?
I'm not sure I read that, but I seriously doubt they've released anything on the numbers of the scale of the training. Yeah. No one's really done that since, like, GPT-2. No, no, no, not at all. So
there's got to be someone who's like working backwards to like actually sort of understand
the dynamic. Yeah, you can probably estimate the like order of magnitude. Also, I've heard that
Google's like fantastic at like cross data center training runs. So they can actually like shard out
or slice up the training run. So even if they don't have one massive data center, if they have five small ones, they can piece them all together and get a better result. So, I don't know. Scoke says: Anthropic to zero, OpenAI becomes the Yahoo of intelligence, Google remains Google. It's extremely rude. Very, very harsh. Sorry to the first two labs. You guys are great. Certainly too early to call it.
All three, all three have a ton of momentum. I like this take from Ben. This is funny. History of AI so far: crown a winner, wait 90 days, look silly. We're in the least predictable era of an entire industry. Google has a fairly straightforward advantage: y'all favor whoever released the most recent model. That is very true.
Anyway, let me tell you about getbezel.com. Shop over 26,000 luxury watches, fully authenticated in-house by Bezel's team of experts. So let's move through some of the competition. What else was going on?
So everyone's releasing different things.
Let's go to anti-gravity, actually,
and watch this video and see Google entering the IDE race.
Let's play this.
Every breakthrough in model intelligence for coding
encourages us to rethink what development should look like.
Gemini 3 is our latest such model advancement.
So we went out to build the next step
of an IDE.
Introducing Google Antigravity,
a new way of working for this next era of agentic intelligence.
It is the ideal agentic development home base.
Does it have an IDE?
Yes.
But it also has a whole lot more.
We started with the core IDE
and added pieces that evolve the IDE
towards an agent-first future,
such as browser use,
asynchronous interaction patterns,
and an additional novel agent-first product form factor.
helping you experience liftoff.
So, you like the name Antigravity? Why do you like that?
I like the way it looks, and I like this sort of vibe of the word. I think saying it out loud is tough.
Okay.
I thought there was a very cool feature. It feels like they're bringing together a whole... For the first time in the last couple of years, it feels like
Google's been, like, stuffing AI in little corners of the UI.
Like, you already have Gmail, and then you stuff a Gemini box there,
or you have sheets, and then you stuff a Gemini thing over here.
This feels like the first one where they were, like, sort of able to start from scratch.
And it still has, like, the sidebar panel, but it felt like it was both a code editor,
but then it also kind of looked like a Google Doc in the sense that you could highlight sections
and leave comments for the AI, which I thought was interesting.
Yeah. I don't know.
By easily guiding the agent's 90% solution all the way to 100%.
Yeah, this part.
Now let's say the agent produces a landing page mockup with Nano Banana.
And you now want to make some UI adjustments.
You can give visual comments.
Yeah, so you can actually go in and comment in the image.
Exactly where the problem is.
And you can do that in the text as well.
So you can have this more precise dialogue with the agent, like you would a human employee.
Yeah.
And you're going to love it.
Say goodbye to what held you down before.
Welcome to Google Antigravity.
Very cool
It is so funny.
Remember when the Windsurf acquisition, or whatever you want to call it, was announced, it was positioned as, hey, the team is well funded and has a product used and loved by thousands of engineers and companies.
And I remember talking about it, and we were saying, okay, the one issue is that some of the best people on your team are going to Google to compete directly with what you guys have been doing.
Yeah.
So fortunately, obviously, you know, the whole Cognition deal ended up coming through. But you can imagine a world where Windsurf was still independent, and then suddenly it's like, okay, now you're just competing head to head with your former partners. Like, how does that make sense, right?
Yeah.
So anyways, it all worked out for the best, but it'll be interesting.
I'm super interested to see what kind of adoption this gets.
Yeah, we have to, we have to test it out.
We'll have to get the Tyler Cosgrove review.
Is it publicly available?
Yes.
Let's get it.
Get it.
Let's, yeah, let's do a review later this week and see how it compares to other IDEs.
Anyway, we have our first guest of the show, Mike Knoop from ARC-AGI, in the Restream waiting room.
Welcome to the show, Mike.
Thanks for waiting.
Good morning.
How are you doing?
You know, a lot of these AI verification things are very much hurry up and wait.
The last 24 hours have been the hurry-up mode.
Okay.
Always very fun and exciting to get the results out.
But yeah, it always comes together very, very quickly at the end.
Well, I really appreciate you taking the time to hop on on such a busy day.
Maybe we can just start with like your high level reaction.
Like, how do you even think about these things anymore?
Are you just thinking like, okay, yes, Gemini 3,
and then let's go a layer deeper.
Are you thinking about that?
Yeah, it's really good.
What's your high-level takeaway?
Well, yeah.
So, you know, I think the big headline is that Gemini 3 basically got 2x state-of-the-art on ARC V2.
Yeah.
And so this is, you know, the third major frontier lab now in a year to use ARC to demonstrate frontier progress, particularly with AI reasoning systems. We had OpenAI last December, xAI this summer, and I'm super excited that Google's now on the leaderboard too.
So that's great to hear.
And I should say up front, thank you to the Gemini team; giving us this opportunity to verify has been great.
I think the really impressive thing about this, and I'm still sitting with all of it, it's pretty fresh, but the most impressive thing to me is that we're starting to close this complexity scaling gap between V1 and V2, ARC V1 and V2. The big difference between V1 and V2 is that they look similar on paper if you go look at the different data sets; the big change is that V2 increases the complexity of tasks so that they take minutes instead of seconds for humans. And so we're starting to see actual, material progress on that complexity scaling.
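For context on the task format being discussed, ARC tasks are distributed as small JSON-style bundles: a few demonstration input/output grid pairs to infer the rule from, plus held-out test inputs. A minimal sketch in Python; the toy task and the color-swap "rule" here are invented for illustration, not a real ARC task or solver:

```python
# A toy ARC-style task: demonstration pairs plus a held-out test input.
# Grids are lists of lists of ints 0-9 (colors).
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 0], [0, 3]]}],
}

def swap_colors(grid):
    """Hypothetical rule for this toy task: nonzero and zero cells flip."""
    values = {v for row in grid for v in row if v != 0}
    v = values.pop()
    return [[0 if cell == v else v for cell in row] for row in grid]

def check_rule(rule, pairs):
    """A rule counts only if it reproduces every demonstration exactly."""
    return all(rule(p["input"]) == p["output"] for p in pairs)

assert check_rule(swap_colors, task["train"])
prediction = swap_colors(task["test"][0]["input"])
print(prediction)  # [[0, 3], [3, 0]]
```

The "minutes instead of seconds" point about V2 is about how long a human needs to infer the rule, not about the harness, which stays this simple.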
And then the big surprise to me, personally, is that Gemini 3 is still roughly along the Pareto frontier of V1. You know, it's a little better, but we're still roughly within the same shape. And there are dozens of tasks where the system still makes relatively obvious mistakes, I think, that humans don't make, or would recognize very quickly. And, you know, I previously expected that if we had an AI system that was solving half of V2, V1 would be fully solved. And that's not the case.
So there's a lot of surprise here.
I was tweeting about this earlier to invite investigation from the community, because I think there's still a lot to learn in terms of why exactly we see such jagged intelligence emerging right now.
Let me eliminate some possible factors.
It feels like one factor could be benchmark hacking, but Google and the Gemini team don't seem aligned with benchmark hacking generally. They've been good citizens in the community so far. And also, just from logical deduction, you would assume that if you were able to hack V2, you would definitely go back and hack V1 as well. So is that the first time you've verified a Gemini result this year?
We verified Gemini 2.5 earlier this year as well. So it's not like they set things up so that, okay, the most important thing here is that Gemini 3 is really good at ARC-AGI V2. That wouldn't make sense.
So this is sort of teaching us something about the fundamental nature of this model,
but we still don't know why performance might be lagging in V1.
Is that right?
Yeah, I mean, I've got my hypotheses. My personal one is that AI reasoning systems just don't demonstrate even, fluid intelligence. The ability of these reasoning systems to do adaptive reasoning (and ARC is a test of adaptation capability) is limited to domains where the underlying foundation model has pretty good training coverage over the types of data, and where there's a verifiable feedback signal. And I think that's true for ARC.
If I zoom out even further, to put this result in the context of where we're at as an industry right now: over the last 10 years, I would say we've really had only two major breakthroughs. We had the Transformer in 2017; obviously that led to language models,
and we had chain of thought, originally introduced in 2022, which went through Q-Star into AI reasoning systems and got scaled up.
And this was against the backdrop of compute scaling, right? That compute scaling was certainly necessary, but it wasn't sufficient. These key conceptual unlocks were the sufficient part, the thing that let you take advantage of that compute.
And so my take at this point, having watched all this progression this year, is that AI reasoning systems, with no new innovation from here, can basically enable mass automation, because a lot of problems fit that characterization: we can generate lots of examples that look like the problem, and we can get a verifiable feedback signal from them. Any problem that can be cast and characterized that way, I think, can be automated at this point. No questions asked.
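The characterization Knoop gives, generate lots of instances that look like the problem and score attempts against a verifiable feedback signal, can be sketched generically. This is a toy illustration only; the sorting task, the verifier, and the trivially correct stand-in policy are all invented, not anything from ARC or Zapier:

```python
import random

def generate_instance(rng):
    """Synthesize a problem instance (here: a short list to sort)."""
    return [rng.randint(0, 99) for _ in range(8)]

def verify(instance, candidate):
    """Verifiable feedback signal: an exact check, no human judgment."""
    return candidate == sorted(instance)

def candidate_policy(instance):
    """Stand-in for a model's attempt; a real system would be learned."""
    return sorted(instance)

rng = random.Random(0)
successes = sum(
    verify(inst, candidate_policy(inst))
    for inst in (generate_instance(rng) for _ in range(100))
)
print(f"verified success rate: {successes}/100")
```

The point of the pattern is the verifier: as long as `verify` is cheap and exact, you can grade unlimited synthetic attempts without a human in the loop.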
And then the big remaining piece, I think, is really mass innovation. That's sort of what we're still not seeing. You know, we still need new ideas for this. And I think that's closer to an AGI-complete problem.
Yeah, that makes sense.
Is it fair to put you in contrast to some of what Dwarkesh has been writing, saying that the job of most people is not necessarily a bunch of discretely verifiable tasks? Karpathy has been writing about this as well. There's this question of how much of a job is actually automatable. Radiology was one example where it felt like a very automatable job, and yet, years into the AI deep learning revolution, we're still seeing full employment there. How are you processing that?
Yeah, but we're only a year into the AI reasoning paradigm, right? Like, the first major one only came out 12 months ago. And I think 2025, in my view, is basically characterized by starting to figure out how to actually bring these things into production systems.
Sure.
Like this is a big breakthrough.
I think maybe one of the mischaracterizations of the progress, in my view, is that a lot of teams just assume, oh, models get better, models get better, so the last 12 months have just been a continuation of the same story, and if I played with the models 18 months ago, I have a rough sense of what they can and can't do. And that's just not true.
If you're a builder building products, this is the advice I give to teams I work with at Zapier, too: this actually is a significant paradigm break in terms of what's possible now that wasn't possible even a year ago with these systems. And that's going to enable a lot of new types of products, a lot of new types of services. A lot of use cases that were out of scope because of reliability and consistency can now be brought in scope.
So if your intuition about what use cases are possible is based on an 18-month lookback, you really have to start pinning your lookback to more like 12 months.
Yeah. Yeah, that makes sense. What about,
like, does the work live within SaaS products or within individuals? Because some of the examples you just gave are for teams that are going to build products that automate work and then get vended in through, effectively, SaaS products to actually do the job, versus a knowledge worker who is going to be using Gemini in the app to accelerate their day-to-day. Should they be feeling the results in the same way?
You know, my one bit of advice is: if you haven't really used these AI reasoning systems, go use them.
I would hope everyone probably who's listening to the show has used these things
at this point.
But in case you haven't, you should go use and experience these things.
You know, when OpenAI released GPT-5 this summer with their model router, right?
That was crazy, yeah.
That was predicated on data showing that very few users had ever even used AI reasoning systems. And I still think it's only like one in five.
Yeah.
Maybe it's going up a little bit since then.
Part of the DeepSeek moment was just that, for the first time, there was a free app where you could see a chain of thought, and you could actually see a reasoning model in action. And for a lot of people, that was their introduction to it. DeepSeek wasn't necessarily that far ahead of everything else; it just gave away a reasoning model for free at a time when reasoning models were tucked behind a bunch of other hurdles you had to jump through.
Yeah. We're still really early on the diffusion of this stuff.
Sure.
Maybe to qualify that: you see the huge numbers getting reported by the frontier labs in their usage data, and I'm seeing this in sales conversations I have for, you know, Zapier stuff, all over the place. We're still very much in the early innings of actually getting this brand-new breakthrough into production workflows.
Yep, yeah, that makes sense. Do you have more questions on the diffusion?
Yeah.
One, I wanted to get your updated take on humor. We were playing around with Gemini 3 this morning, specifically trying it on our own little version of HumorBench. It feels like something I do think about: can you make humor verifiable? Is there a system someone could set up that could actually start taking humor seriously? Because I could imagine, if we're hitting anything close to a wall, there will be a lab that says, okay, let's work on a new angle for differentiation, and maybe humor could be it.
It is at least a little bit verifiable, right? Like, I have a five-year-old who is getting into wanting to tell a lot of jokes, and the jokes are just terrible, right? They're not funny at all. You end up laughing because they're so not funny, and depending on who's delivering them.
Yeah, yeah, the absurdity is hilarious.
I've been trying to find the structured way to describe, like, okay, here's what makes something funny. And there is some degree to which you can break down the types of things humans would find funny. And this actually does get pretty interesting, because you're getting to the spot where you're trying to articulate creativity, right? How creative can these systems be? To be creative, to do humor, to generate good art, you kind of have to intentionally break the rules. But you need a really good model of what the rules are in the first place to intentionally break them.
And I think a lot of humor fits into this category. You're right: it's actually breaking the prediction rather than just following the prediction of what you'd expect.
And today, when I look at the failure cases for AI reasoning systems on tasks like ARC, they still fail for what appear to be sort of random reasons. They have some version of an understanding of the rules and the strategy and the goals, and then they make a lot of basic mistakes, either executing them or not following their own understanding that they've generated internally. So there are some self-consistency issues.
And so I feel like, if that's still the case, humor is going to be accidental rather than intentional from these systems.
Yeah.
Yeah.
What about V3?
We played around with that on the show.
I believe Tyler, our intern, was in the top 10 for a while, really grinding up the leaderboard.
Is it more compute-intensive? What's the process? Are we expecting to see Gemini benchmarked on V3?
I would love to. So we are in the development process for V3. I like to say we've basically built the highest, most productive game studio in the world. We're generating hundreds of these things; we're about, I don't know, two-thirds of the way through building all the games at this point. Our target is to get this into a good state, with all the controlled human studies done and all of the games verified, and to get frontier results checked off by early next year. We're targeting releasing it publicly in Q1 with the entire data set next year, and that'll likely be alongside ARC Prize 2026.
Yeah.
We're still working out the full details of how that's going to look next year.
Sure.
But yeah, we're in the throes of it. We're definitely using some of these frontier systems to do red-teaming against the benchmark, just to assert that these games are still hard for AI, and we're still finding that to be the case, even with things like Gemini 3. But yeah, development is still in progress right now.
And SIMA 2. Can I get your reaction on that? Obviously, it's this Gemini-powered agent. It feels like...
If anyone at Google is listening to this and could give me access to SIMA 2, I would love to test it on V3. This is actually something that we haven't done yet.
Yeah, yeah, that's what I'm getting at, because it feels like, I don't know if there's some sort of...
The claims are big.
Yeah, the claims are big. You read the marketing material, and it's like, okay, that seems like it should solve V3 before it even exists. So if that's the case, we should know that. But yeah, I haven't gotten hands-on with it yet, so I can't make any statement either way on the claims.
Yeah, I'd also be interested... when I'm thinking about V4, it's like, you guys are going to have to build GTA 6 or something. If I'm following the progress from V1 to V2 to V3, V4 is like a game that I'm going to play for 100 hours for fun. I'm just going to pay for it.
That's one truth you've noticed about V3, which is that it's still relatively short-time-horizon tasks, and they're self-contained. It does add some new complexity, where you have to deal with interactivity, because you have to do goal acquisition and exploration. We'll have a really nice action-efficiency comparison between humans and AI, which we haven't been able to get before on the V1/V2 domain. So we're going to get a lot of new signal, I think, on V3.
But as you look even further out into the future, the things that are more open-ended are the things we're starting to get excited about trying to understand. What does it mean to put one of these AI systems in an open-ended environment, and then look back on the system 10 minutes into the future, 100 minutes into the future, a thousand minutes into the future? Can you look at how it's manipulated the environment and say something interesting about how intelligent the system is, based on that observation in an open-ended setting?
Still very early on V4, but yeah, we're starting to explore ideas there.
Has Gemini 3 updated your timelines at all, specifically your ARC-AGI 2 timelines, in terms of when you expect, you know, 90%, anything on the upper end of the range?
The whole ARC team actually made some predictions back in January, when we released V2, on what we expected the end-of-year score to look like. Now, obviously, we're only at November 18th; a lot happens in AI, and who knows what the next six weeks hold. But my personal prediction was that we would see about 25% on the private leaderboard for ARC V2 on the Kaggle contest, and about 50% on the public leaderboard. And that was based on the ratios we'd seen from ARC Prize 2024 and the scaling difficulties with V2. And it looks like we're going to come in pretty close to that, barring some other major breakthroughs towards the end of the year. That seems like where we're probably going to end up the year.
And then who knows about 2026. I think if we're really going to solve V2 fully, it feels like we've got to better understand why these AI reasoning systems still make obvious mistakes on the V1 set. That's an anomaly. So I think that's worth a serious study, to come up with new ideas to improve these reasoning systems.
Yeah, what was the furthest
timeline that you had out? I remember you said, when you developed V3, you had this framework where, like, the state of the
art should be scoring like negative 100% or something. You were like, you need to make it way
harder than you think in order to give you like room to run because the systems are developing
so quickly. What's the furthest out timeline that you are tracking or you as a team are tracking?
I mean, our objective function is not longevity, necessarily. It is usefulness and interestingness. I think the tasks that have the highest degree of usefulness and interestingness are ones where, you know, this could be useful and interesting for like three years. ARC V1 was useful and interesting for arguably five years. Even this year, it's still interesting, because we haven't broken out of this paradigm, so it's still providing some interesting use. It's largely saturated, up to 80% now, but there's still an interesting signal remaining.
For V2, our expectation was that it was not going to survive as long as V1, just because it was the same domain, and we had AI reasoning systems in play at that point. I think our median estimate was like 24 months on V2, but we'll have to see how that plays out next year.
For V3, we're hoping to be in an environment where we can actually get it to survive longer.
One of the interesting things we're finding, going from V1 to V2 to V3, in a qualitative sense, is how easy it is for us, as humans, to generate the data set: to design the tasks, design the puzzles, design the games.
With V1, pretty much every task that François created was hard for AI and easy for humans.
With V2, that gap got smaller. There were tasks we generated as humans that AI solved, and there were others that were too hard for humans, and so we ended up pruning some of the tasks we generated. So the gap between those things got shorter.
With V3, we're finding it's getting wider again, where pretty much every game we're coming up with fits this paradigm of being very obvious, intuitive, and easy for humans, and still very hard for frontier AI.
And credit to François here: this is something he shared about a year ago, around o3. One interesting way you could characterize how close we are to AGI is when humans run out of the ability to generate interesting things that frontier AI can't solve.
Yeah.
At that point it's hard to argue; no expert is going to say, yeah, we still don't have AGI.
Yeah, because you can sort of think about the project of humanity as: go do the hard and novel things. So it's like, is acquiring diamonds difficult? Okay, that has value, and then we base a whole economic system around it. It's somewhat arbitrary, but it's also a skill and might and will issue. And if you can put that on display, then you accrue economic value. And that kind of traces out into everything we do in life and beyond.
Last time you were on, if I remember correctly, you made a call for new ideas, needing new ideas. What's the update on that front?
Are you seeing anything promising outside of LLM world?
Yeah, there's some pretty interesting stuff coming out from ARC Prize 2025. We're in the throes of reviewing all the papers and judging all the scores; the official results for ARC Prize 2025 come out on December 5th, I believe. So I can't share everything yet.
Sure.
I don't want to spoil the final announcement.
I think one of the big things that we saw from ARC Prize 2024 was this concept of test-time adaptation. This was the idea that a pre-trained model applied through a single forward pass at inference time will never solve ARC. You need some ability to take information from your test and incorporate it back into the system, and that's where your adaptation capability comes from. During the contest, that was done through test-time fine-tuning. AI reasoning systems are a version of this, where you're incorporating a sort of private data set.
Yeah, yeah. Literally, you take a pre-trained model, then take the secret, private puzzle, augment it in a bunch of different ways to generate permutations of it, and then do a LoRA or some sort of test-time fine-tune on your pre-trained model. And that actually works.
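The augmentation step being described is typically built from grid symmetries, rotations, mirrors, and color permutations, applied identically to the input and output of a pair so the underlying rule is preserved. A sketch of just that augmentation half; the fine-tuning step itself is omitted, and the helper names are our own, not from any contest entry:

```python
import random

def rotate90(grid):
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def flip_h(grid):
    """Mirror a grid left-to-right."""
    return [row[::-1] for row in grid]

def recolor(grid, mapping):
    """Apply a color permutation cell by cell."""
    return [[mapping[c] for c in row] for row in grid]

def augment_pair(inp, out, rng):
    """Yield symmetry-equivalent variants of one demonstration pair.

    The same transform is applied to input and output, so the
    underlying rule is preserved in every variant.
    """
    variants = [(inp, out)]
    for g_in, g_out in list(variants):
        variants.append((rotate90(g_in), rotate90(g_out)))
        variants.append((flip_h(g_in), flip_h(g_out)))
    perm = list(range(10))
    rng.shuffle(perm)
    variants += [(recolor(a, perm), recolor(b, perm)) for a, b in variants]
    return variants

rng = random.Random(0)
pairs = augment_pair([[1, 0], [0, 1]], [[0, 1], [1, 0]], rng)
print(len(pairs))  # 3 symmetry variants, doubled to 6 by recoloring
```

The augmented pairs would then become the fine-tuning set for a LoRA-style adapter at test time, which is the part this sketch leaves out.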
Wow.
The common ground between this and AI reasoning systems is that both of them take information from the private test and are able to operate over it at test time, right? Test-time compute is another form of what we're talking about here. So that was 2024.
One of the big things we're seeing in ARC Prize 2025 is this concept of refinement loops: in particular, language models being put into outer loops, where they can move from state to state. And how they move from state to state is by making some sort of refinement on the program, or on the natural-language explanation of the task they're working towards. They just iterate on this refinement loop over and over, and this is significantly increasing scores, even over the test-time fine-tuning stuff that we saw from last year.
Jeremy Berman and Eric Pang were two folks on the public leaderboard last month who explained how their approaches worked in this way. So we're seeing a lot of approaches like that.
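The refinement-loop pattern (propose a candidate program, verify it against the demonstration pairs, revise based on failures, repeat) can be sketched without any model in the loop. Here `propose_revision` is a hypothetical stand-in for an LLM call, implemented as a simple hill-climber on a toy arithmetic task; it is not how the actual leaderboard entries work:

```python
def score(program, pairs):
    """Fraction of demonstration pairs the candidate program reproduces."""
    hits = sum(program(inp) == out for inp, out in pairs)
    return hits / len(pairs)

def refinement_loop(initial, propose_revision, pairs, max_steps=10):
    """Keep the best candidate so far; request a revision of it each step."""
    best = initial
    for _ in range(max_steps):
        if score(best, pairs) == 1.0:
            break  # verified: reproduces every demonstration
        candidate = propose_revision(best, pairs)
        if score(candidate, pairs) >= score(best, pairs):
            best = candidate
    return best

# Toy task: the hidden rule is "add 2". Start from "add 0" and revise.
pairs = [(1, 3), (5, 7), (10, 12)]

def make_adder(k):
    return lambda x: x + k

def propose_revision(program, pairs):
    # Hypothetical reviser: nudge the offset toward fixing the first failure.
    inp, out = next((p for p in pairs if program(p[0]) != p[1]), pairs[0])
    return make_adder(program(inp) - inp + (1 if program(inp) < out else -1))

solved = refinement_loop(make_adder(0), propose_revision, pairs)
print([solved(x) for x in (1, 5, 10)])  # [3, 7, 12]
```

The verifiable score against the demonstration pairs is what makes the outer loop work; swapping the hill-climber for an LLM that rewrites a program given its failing cases gives the shape described in the interview.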
I still think we're in a regime, though, where we still need new ideas. None of these are sufficient to solve ARC, including V1. And that gets me excited, because I still think it means individual people, individual teams with small compute budgets, can still play a really, really massive role in advancing AI.
Yeah.
Very cool.
Are there other areas where we're making progress in AI that might need to come together to actually solve this, or maybe just make a more complete system? What I'm thinking of is: very few solvers that I'm aware of will actually just take a screenshot of the puzzle and inspect it with some sort of diffusion model. That's not the way these AI models reason about ARC puzzles. We're also seeing a bunch of work on world models and simulators.
World simulators.
Which seem really interesting. And I was talking to one guy who is building one, and he was saying, I think we're going to get really, really robust knowledge out of these at some point, once they scale up fully. And I'm wondering if you're optimistic about bringing in, like, unifying some of the different research that's happening.
I mean, I think all of those are examples of new research,
new companies, new startups. You know, there was a seismic shift in 2025 from pre-training budgets to these reinforcement-learning environment startups: companies that are generating environments to produce more ground-truth training data at scale, because the environments are automated and you can get verifiable feedback signals out of them. Again, there's no new science here. This is a good bet for all the frontier labs to make. It's going to drive progress for the next 24 to 36 months; you're going to continue to see amazing frontier headlines just on this fact. There's really no new discovery, I think, that's needed there.
If you're pushing more towards the AGI side, though, what's missing? One open question I have: you would think that, based on the 100x to 300x increase in efficiency we've seen from AI reasoning systems over the last 12 months, we would trade that efficiency for inference tokens, to do more search coverage over the problem space when we give these systems tasks or problems we want them to solve. And this is one of the big reasons why I expected that if we could solve half of V2, we'd get 100% on V1. It seems like these AI reasoning systems are not fully exploring all of the search space they could in order to look for solutions.
So I have an open question: how much of the search space can they cover? And what do you need to change about the training methodology or process to actually guarantee that you can get full coverage over the search space of possible programs or possible solutions? That's one interesting thing I'm paying a lot of attention to right now.
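The search-coverage question can be made concrete with a toy enumeration: define a small space of composable programs, deduplicate them by behavior, and measure how much of that space a fixed sampling budget reaches. Everything here (the two primitives, the probe inputs, the budget) is invented for illustration, not a claim about how any lab measures this:

```python
import itertools
import random

PRIMITIVES = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def behavior(program, probe=(0, 1, 2)):
    """Programs are treated as equivalent if they agree on the probes."""
    def run(x):
        for name in program:
            x = PRIMITIVES[name](x)
        return x
    return tuple(run(x) for x in probe)

# Full space: every composition of the primitives up to depth 3.
space = [p for d in range(1, 4)
         for p in itertools.product(PRIMITIVES, repeat=d)]
all_behaviors = {behavior(p) for p in space}

# A random sampler with a limited budget, standing in for model rollouts.
rng = random.Random(0)
sampled = {behavior(rng.choice(space)) for _ in range(5)}

coverage = len(sampled) / len(all_behaviors)
print(f"covered {len(sampled)}/{len(all_behaviors)} distinct behaviors")
```

With only 5 draws against 13 distinct behaviors, coverage is necessarily partial, which is the gap being described: the sampler's budget bounds how much of the solution space it can ever visit unless the proposal distribution itself improves.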
Yeah, yeah. Even just the metaphor of the test-time fine-tuning: it feels like working on a problem and then going and taking a walk and updating your whole worldview. It feels like humans get closer to doing that than any of the other paradigms.
So, yeah, it's fascinating to see all these different approaches.
Yeah. All the crazy results you've heard about in the last 12 months are kind of this merger of deep learning and symbolic, program-synthesis-style methods: the ICPC, the IMO gold, the Gemini 3 stuff today. These are all systems that are still fundamentally using a language model, but they're adding symbolic knowledge-recomposition systems on top. They all work slightly differently.
Okay.
But this is what's working right now. And so I think the rough search space of research in how you merge those two paradigms is still relatively under-explored. There are a lot of different ways you can put these two paradigms together, and for new teams that are considering working on new ideas, I would explore: what are the novel ways you could consider merging these two spaces?
Yeah, yeah, that makes sense.
Jordy, anything else? This is great. This is amazing. Thank you so much for jumping on on short notice.
As always, guys, thanks for having me.
Congrats on just continuing to stack up the wins on ARC-AGI, and continuing to mog the models.
Mog the models, yes. I mean, our goal is to be very useful and interesting, so we're going to try to hold that bar.
Yeah, my words, not yours.
I think you're keeping them honest.
I think you're keeping everyone honest.
And you're providing a very, very useful reality check on an industry that loves to hide things.
And inspiring the labs to grind harder.
And now there is a moment where we can feel very confident about taking victory laps and cheering for all the hard work that went into Gemini 3, because it does seem like it's a great model that has performed well.
It's definitely a big improvement, that's safe to say.
Fantastic.
Well, thank you so much.
Have a great rest of your day.
We'll talk to you soon.
December 5th. We'll see you then.
We'll see you then.
I wanted to tell you about Attio, because Attio is the AI-native CRM that builds, scales, and grows your company to the next level.
Also wanted to talk about Wander.com. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service.
Ready? Let's sing it.
Find your happy place.
Find your happy place.
Book a Wander with inspiring views.
I already know the song.
You know the song.
I wanted to pull up this post from Chris Parski. He did a GitHub-style image of our streaming activity for the year.
Oh, really?
Oh, yes, I did see this.
It's at the very, thank you to him.
It should be at the very bottom.
Yes, yes, yes.
At the very bottom of our timeline.
I have it.
And if we could just pull up this image.
So the internet rewarded TBPN for showing up on January 28th; that's when we went live.
We never remember the day that we went live, but he has it.
He looked it up.
January 28th, John Coogan and Jordy Hayes launched a daily live show and set one simple rule.
Show up five days a week.
Looking back, they did exactly that.
125,000 followers on X, 41,000 subscribers on YouTube, 17,500 on Instagram.
They showed up every day, and the Internet rewarded the proof of work.
So the only thing is, I don't know, am I just colorblind, or is it a little off? I'm seeing three days that were federal holidays that we missed, and then three days that were no streams.
I actually can't exactly tell.
Yeah, what is a federal holiday?
What is a no stream?
I think there's maybe six days.
A gray and a purple.
There were a couple days here and there.
We took one off.
I went to a wedding in Mexico.
We took a Friday off for that.
That was just no live stream.
July 4th we took off.
That was a Friday.
That was a federal holiday.
And then what happened in March?
We took a Wednesday off?
No live stream on Wednesday?
In middle of March?
There was one day that we were traveling.
Oh yeah, that was after, um, after Hill and Valley, after D.C. I thought it was a Thursday that day, though.
No, no, we did. It was Tuesday in the hotel room, and then Wednesday we did it
at the actual event, Hill and Valley, and then we flew back and got back on the horse.
So we missed a couple Mondays because of federal holidays, and then we missed a Tuesday in May.
That might have been Hill and Valley. March would have been something else. Anyway, it's very cool to see.
a wild ride.
Thanks for pulling it together along the way, Chris.
Our next guest is, I believe, already here.
We have Jonathan Neman from Sweetgreen.
We're going from benchmarks to bench presses.
The most important benchmark in the world.
How many grams of protein are in your protein bowl?
We need to know.
Welcome to the stream.
Please introduce yourself for those who might not be familiar.
Hello.
My name is Jonathan Neman.
I'm the co-founder and CEO.
of Sweetgreen. Get that overnight success button ready. When did you start this company?
2007. We've been at this for 18 years. 18 years. Wow. Just let's talk about the very beginning.
I mean, since this is your first time on the show, where'd you grow up? How'd you get into the business?
What were you studying? And then let's go. Yeah, you've got to be somewhat of a masochist to get into the
restaurant business. Yeah. Yes, absolutely. I mean, it's such a beautiful thing because it sounds
so simple. It's like, you get a box, you get a menu, you get some ingredients. It sounds
super easy. You just copy and paste it and you scale to, you know, however many stores. And then of course
it's far harder. So prior to launching the business, what were you doing? So I grew up here in
Los Angeles. I went to school in D.C. I went to Georgetown. Okay. And never thought I'd be in restaurants.
I was always studying government. You thought I was studying business? I always knew I wanted to be an
entrepreneur. Okay. And Sweetgreen was almost, almost an accident. You know, it was the naivete. We
thought it would be easy.
And did you start it during school?
Yeah, we started while we were seniors in college.
Were you doing internships before that?
Yeah, I had a bunch of internships.
You know, I worked in media.
I worked in tech.
I worked in real estate.
Always knew I wanted to be an entrepreneur and create something.
But senior year came around and it's exactly what you said.
We thought it would be easy.
We're like, how hard could this be?
You go, you know, we'll go to farms and buy the food.
It's like apparel.
Like, people fall into the apparel trap because they're like,
I just want to make clothes that I want to wear.
And you realize it's, like, the hardest business. Apparel and restaurants are probably the things that seem the most simple, but are actually the hardest to do at a massive scale.
Yeah.
Yeah.
So what was the, was it build a business plan first, assemble a team, do a pop-up?
Like what was the first thing where you were like, okay, let's do this?
The first bowl was the guacamole greens.
We made in our dorm room.
We brought a bunch of classmates to try it.
My partner, Nick, actually made it.
He was our first chef.
No way.
And the story was really simple.
We couldn't find a healthy place to eat.
We saw Chipotle taking off.
And we're like, wow, someone is going to create a scaled healthy fast food chain.
And at first, it was let's just open one.
We wanted it for ourselves.
We thought we'd go on with our lives.
We opened, we worked on it senior year.
We wrote a business plan, raised $300,000.
There we go.
From 50 investors.
Wow.
So it was like five grand, five grand average.
Average, yeah, party round.
So they got equity in what became the full company.
They got equity.
Well, we actually, it was a little bit more complicated than that.
At first, the first three restaurants, we raised at the restaurant.
At the restaurant level.
Yeah, I was wondering if you were doing that.
And we actually paid the investors back every quarter into the whole thing.
And then after the third restaurant, we realized that the only way to scale this was to roll it up.
So we rolled the whole thing up.
And then we were able to continue to invest in it.
It's notable.
When did the word wellness actually become
mainstream, or when did that become, like, an identity? Like 20 years ago? Yeah, like early 2010s. Yeah.
Anyway, this is at least five years before wellness was going mainstream.
Yeah, when we were starting, you know, the thesis was healthy eating was not cool.
Sure. And it was not delicious, and it was not accessible. And yeah, we're going to create a place
that offers all of the benefits of fast food in terms of the convenience and the taste, but do it,
you know, do it with healthy food and real food that you can trust, where we're transparent
about where the food comes from, where it's nutritious, and build a brand around it. And so
we've been at it for about 18 years. We have almost 300 stores all around the country.
Yeah, it's almost hard to believe. Yeah. What was the first VC round? The first, so we...
Or like, or this transition from the, you have a restaurant, and did it work immediately? You set up
one restaurant, you, you know, you raise enough money to get that, I imagine.
that you had to sign a lease, so you weren't buying buildings, but you might have to do some
sort of renovation to actually get the first restaurant up and running. You start making money
enough to pay the employees, enough to pay the rent. You scale that to three, and then at a certain
point you say, okay, we're going, we're going to turn this into like a corporation more than just
a small mom and pop, right? Yes, so we opened one in 2007, two in 2009, with a food truck.
You remember those? Yeah. And then we opened like two or three a year, and we mostly
built them from cash flow.
From the profit? You were profitable?
We would just reinvest the cash flow,
and we would do a few party rounds.
Along the way we started a big music festival
called Sweet Life. 2010
became a massive 25,000 person
music festival. Where was that?
It was at Merriweather Post Pavilion. So the first
year we had The Strokes. By the end we had
Kendrick Lamar and... Little festival
side quest. Yeah, it was a way to
build the brand. And then we
focused on D.C., which was very
you know, it was almost an accident, but we opened the first 16 restaurants in D.C.
Wow.
And then slowly went up to Philly, and then restaurants 20 and 21 were Boston and New York.
And Boston and New York really kind of proved the concept outside of D.C.
And took off, and that's when we raised our first VC round.
So now, obviously, all around L.A., there are Sweetgreens.
But given that you grew up here, why didn't you, why not start here?
Was this because, well, like, there were just more...
more healthy food options in L.A.,
and there were less on the East Coast?
Honestly, it was an accident.
We were in school and we're like, let's just open one.
We thought the second one would open in L.A.
And then the gravity that you have around the center when there's more and more stores.
You know, when you have a restaurant company, the brand and all your economies of scale happen at the local level.
So for us especially, given our supply chain is regional.
You have your overhead and your management, like your team that runs it.
And then your brand, you know, restaurants, the brands don't really travel across the country.
Occasionally they do.
So it was really started in D.C.
We thought the second restaurant would be in L.A.
We went and looked.
This is true even for In-N-Out; it's still not a national brand.
It's like a West Coast brand somehow.
And yeah, it's taken so long for that to actually like filter across.
How capital intensive was it to launch like the second and third?
Like you mentioned $300,000.
That's the hardest part.
No, it's way more than that now.
The first one was tiny 500 square feet and we did it really on the cheap.
500 square feet?
500 square feet.
It was small.
I imagine, like, one or two people.
Yeah.
Wow, that's tiny.
Yeah, we were working there.
We were doing the whole thing.
So, I mean, we've had to raise a lot of money.
To answer your earlier question: Revolution.
Steve Case was our first VC investor.
And it was part of the thesis was how technology can change the restaurant business.
So we were the first company to do mobile ordering, where you can order on your app and pick up.
And we started.
Probably the most beautiful software a restaurant had ever had.
Emmett, Emmett Shine.
Emmett Shine, yeah, his shop, Gin Lane.
Yeah, this was like one of my favorite
Gin Lane projects.
That's awesome.
Yeah, Emmett and his team were amazing.
They did our app in the early days.
And, you know, restaurants today cost over a million dollars.
So we're at like $1.3, $1.4 million per restaurant.
That's before you put the Infinite Kitchen in.
Our restaurants have very high return on capital.
Infinite Kitchen. What's that?
The Infinite Kitchen is our automation, our automation platform that we've built.
So today, most restaurants that we open, the assembly is automated.
So we still make all the food from scratch.
The sourcing is the same.
We still cook the food fresh.
But we load this beautiful machine that makes your bowls.
It makes 500 bowls per hour, perfectly portioned, perfectly plated.
And so that is kind of the future of where things are going.
How many different restaurant automation pitches did you get across 18 years?
Like, as I imagine, every single year, there's a new, like, startup coming to you saying, like, we can automate this part of your kitchen.
And clearly, you got to the point where you had to build it yourself based on kind of domain knowledge.
But this just feels like something that's been promised for a long time.
And at this point, I don't know, like, an individual startup that's done well in restaurant robotics.
Yeah, no one's been able to create a platform that works
in multiple restaurants, and there's a few issues. Most restaurant workflows are
very specific, so they're super specific to that restaurant. Two, most restaurants are franchises,
and so they're not owned by the corporation. We are fully company-owned. So if you're a franchise
restaurant, you know, if you're McDonald's, you have to now go convince your franchisees to buy
whatever automation you have. And the other... You're looking at it like, this is coming
off my bottom line. We're making money already. This feels like a risk. Like the franchisee
is saying, what, like, I'm happy with my EBIT. I don't need to take a risk.
That's exactly right. And the other issue is you need automation that takes enough
labor out or offers enough value to be worth it, because the CapEx is still very heavy.
Yeah. So when we went down this path, we tried to build it ourselves, actually. We built a team
to do it ourselves. Yeah. Realize how challenging it was. And then we found this startup that was doing
it and doing a really good job. Yeah. It was called Spyce. It was called Spyce Kitchen. It was four
grads out of MIT.
And they had the same issue.
They realized they could build the automation, but no one was going to buy it.
Yeah.
So they ended up opening two restaurants.
They were great at automation, not so great at the restaurant side.
And then four years ago, we acquired them.
And we began, we've commercialized the technology.
We've scaled the technology today.
So most new restaurants feature the technology.
And last week, we actually just announced that we've now sold Spyce.
So we spun it out, basically.
So we spun Spyce out. We announced it about 10 days ago.
We sold it to Wonder, Marc Lore's company over there.
No way, yeah, cool.
So we sold it for about $186 million.
Marc is... Marc, I don't fully understand that business,
but talk about a guy that just, like, isn't even necessarily naive about the challenges of restaurants,
and is like, I'm going to go into the most competitive environment possible and compete with everyone.
It's amazing.
It's a great vision, and I'm a big fan of his and what they're doing.
And so it's a really interesting deal.
So we sold, effectively, the team and the IP, but have full access to it.
So we will continue to scale with it as they, you know, scale and build more machines.
We'll get the benefits of those economies of scale as well.
Very well.
Can you go a little bit deeper on the decision to franchise or not franchise?
The naive, maybe steelman, for franchising and the franchise model
is that it's somehow more capitalist in my mind.
Because it decentralized the decision making
and it puts these financial incentives
at the local level
because each store lives and dies
by its own P&L maybe.
Versus even if I have a manager in one store
and they have stock options,
like what they do on the weekend
if they come in on Thanksgiving or Christmas,
that doesn't necessarily put more or less money
in their pocket.
Is that real,
what I'm feeling, or is it irrelevant?
What you're feeling is absolutely real
and we actually try to design
our comp structures that way.
I've always believed this;
the line that I tell my team
every single day is, all the answers are in the restaurant,
and the closer we can push
decision making to the edges
to the customer the better we will be
so we you know our general manager
we call them the head coach
they are the most important position
in the company by far a great head coach
will make or break you
and so we try to really incentivize them
we empower them, and we try to run as decentralized of an operation as we can.
The reason we decided not to franchise is it's really hard to maintain quality.
When you really give that up to other people to run,
you can sometimes scale too quickly.
And we do a few things differently.
We source differently.
We're a very complex model because of the sourcing and the scratch cooking.
The biggest difference between us and most other companies is, if you go into a Sweetgreen,
you'd be shocked at how much we are making in the store.
Sure.
Every single thing is done.
It feels like you guys have taken such a principled approach
in making food that I feel like stays true
to the initial values of the company
and kind of why you started it.
And yet you're competing in an environment
that says, okay, we're going to have these like factory kitchens
off-site that we're going to be shipping in
effectively almost finished product that gets reheated.
And we're going to be sourcing from all over
with not a lot of values around how they're sourcing.
They're just trying to get, like, they want the food to taste good when it hits the plate,
but maybe they don't care about a number of other factors.
And so you're kind of an environment where because of your principles,
you're, like, fighting with your hands tied behind your back against competitors.
Like, and I'm not talking about direct competitors, but more so, like,
you're still competing with Burger King and McDonald's, right?
Yeah.
Like, people are going to have lunch somewhere, and they're going to maybe decide between they have options, right?
Talk to us about land.
Is McDonald's actually a land
acquisition company? Why do people say that? Is that real? Have you ever looked at the land?
Yeah, well, they do own a lot of the real estate, and they lease it back to the franchisees.
So that is true. And if you watch The Founder, the last line in that movie, where he's like, it's a real estate business, speaks to more than the fact that they just own it.
Restaurants are highly a real estate game. Okay. Great real estate... like, if you look at our portfolio, where we have great real estate, we do amazingly well.
Location, location, location, location.
It's people, like, people think restaurant business is a food business.
It's really a real estate and a people business.
And it's all about, like, you look at the great restaurants, the Chick-fil-As, the Raising Cane's, the In-N-Outs, and it's so much about that culture.
How scientific is it?
You hear stories of companies like Starbucks, and you can imagine, like, a team of data scientists with, like, 50 monitors, and they're just like...
We need one Starbucks directly across
the street from the other Starbucks.
Yeah, you know, so like you can imagine a world where it's like hyper, like hyper data
driven, like down to a science and you just know when you're opening a new store,
you know that it's going to hit.
But there has to be, like, some vibe-based...
Yeah, we, that is the process.
We call it art and science.
Okay.
And pretty much everything we do, it's, it's an art and science approach.
And real estate's exactly that.
You know, the science, we have a very, very intricate model that looks at,
psychographics, demographics, mobile data, you know, people driving by.
We have custom data on how many gyms are nearby, and whether it's the sunny side of the street or not, all of that stuff.
But then you need a human to also walk it, feel it, and understand, does it tell our brand story?
For us, especially in the early days of growing, where we went said a lot about who we were.
So, for example, we went to New York, we didn't go to Midtown.
We went to Nolita.
We went to Williamsburg.
We wanted to kind of tell the story about who Sweetgreen was.
Today we're kind of everywhere.
But the real estate is art and science and tells a lot about it.
It says a lot about who you are.
Yeah.
How do you think about if a new entrepreneur came to you and was asking for advice on where to start,
is it worth it to go straight to Manhattan or straight to Beverly Hills and try and make it in the big leagues on day one?
Or do
you get negative indicators from that, because there's a different type of customer there
that's not necessarily representative of the rest of the country? I think that's more right,
especially when you're talking about New York. So when you're talking about New York, it is,
I mean, the beauty of it is it's a massive market. Sure. You know, for us,
about a quarter of our business happens in New York. We have, like, you know, in the New York
region, I think we have 50-something restaurants. Wow. So it's great that it's massive,
there's density, there's, you know, they have money, et cetera. But it's not really indicative of
the rest of the country.
Yeah.
So if you want, you know, a scalable model where you can have thousands of
locations, you're better off, you know, going to, like, the Iowas.
Yeah.
You know, to use the political analogy.
You want to go to, like, something that, you know, is more representative of what the
rest of the country looks like.
In restaurants, the place where everyone goes, like the fast casuals, is Columbus, Ohio.
That's where people go.
They say, you know, if you can make it in Columbus, Ohio,
you can make it anywhere.
Yeah, and you can kind of make it everywhere.
Yeah, yeah, yeah.
So, I mean, if you were a small restaurant,
you were being evaluated by, you know,
the CEO of McDonald's or something,
he might say, how are you doing there?
Yeah, the things that they look at for a restaurant
is they look at your unit economics,
which is effectively your payback.
So how much is the cost to build
and how quickly do you pay those stores back?
Yeah.
And they look at your TAM.
So they say, okay, like,
can you have 100 of these,
1,000 of these, 5,000 of these?
And those are the two big kind of things,
you know, things you would look for
in evaluating, like, the growth trajectory
of a restaurant. What's the story of the delivery market? It feels like DoorDash has become a massive
business. Uber Eats has become a massive business. More people are ordering delivery. There's
the ghost kitchens trend. Is there a ghost kitchenification where these businesses are like trying
to effectively turn you into ghost kitchens? Does that give them some sort of leverage? Is there some
sort of tension there? Or is it pretty much just like, oh, it's just this trend. People are
cooking less and less. And so they're going to go to Sweetgreen, but they're also going to order
Sweetgreen delivered more. There's definitely a little tension. We're partners. A lot of our business
comes through those marketplaces. But it's not so dissimilar from, you know, a hotel chain
and Expedia. Sure. Yeah. You're paying a fee on it. You do not control that data. You cannot market
directly to those customers.
And so for us, we have to charge a higher premium.
So when you order on DoorDash, by the way, it's more expensive than you order
on our app.
Got it.
So just a quick shout out.
Download the Sweetgreen app.
Things are about 20% cheaper there.
Got it.
But at the same time, it's a great way to find new customers.
Sure.
So, you know, for example, DoorDash has been a great partner.
They power what we call our native delivery, delivery on our
Sweetgreen app, which is a big part of our business.
So you're white labeling or something like that as well.
Yeah, it's like a white label on our app.
And then we also, you know, partner with the different apps.
Yeah.
And as you know, they've become, you know,
they're brilliant business models.
They've become largely marketplaces.
Yeah.
So, you know, they, you kind of have to buy your way to the top of the feed.
Yeah, yeah, yeah.
And so.
That's how they gain... I mean, there's a reason the DoorDash app,
or any of these mobile ordering experiences,
don't just put, like, the restaurant that you've ordered the most from at the top.
It's like, hey, why don't you try this new restaurant, or this new restaurant?
And those are all paid.
Yeah, they're all paid, and it's how you maintain leverage over them.
I mean, this is why the YouTube subscriber count doesn't mean anything,
because it's like they're going to constantly surface new things.
They've made money.
I mean, the way they make money, these businesses have been historically very challenging.
The way they made it work is batching orders and then becoming an ad marketplace.
And that's what's made, you know, this amazing service and amazing business.
Explain batching orders really quickly.
So when you order, they have a delivery driver pick up multiple orders.
So you're paying the delivery driver once, but they're picking up from three restaurants.
Yeah.
I feel like you guys have done a really good job of listening to customers.
I would say like this 100 gram protein thing that you guys are launching.
I was asking for 200.
No, but that and then also the seed oils.
Yeah.
Is there something about the business that...
It feels like you're more agile.
Yeah, is the business set up in a way that you guys can respond when other companies can't, or have you just caught a lucky break?
It's because people, I would say, like, people would give a lot of the same feedback to Chipotle.
And it feels like Chipotle is not set up in some way to, like, be like, oh, this is what customers want.
Or even like some percentage of our customers really care about this, let's deliver them, let's deliver them a product here.
And I think the result is that, you know, I've churned from Chipotle almost entirely.
Because of the seed oils.
Yeah, because of the seed oils and just like a degradation of the quality of the food over, like a decade.
Like I watched it basically get worse and worse and worse and worse over 10 years.
And so I just don't go there anymore.
I'd joke about it.
I'd almost rather... when I'm on the road, if I'm on a road trip, I'd almost always rather just fast than eat at, like, the most common kind of fast food.
Yeah.
Yeah, when we started the business, I had the same... the thing I would always say is, you know, there's businesses that as they get bigger get better,
and you can think, you know, technology businesses, many of them do.
Like, your new iPhone is, for the most part, much better than the original iPhone.
Yeah.
These AI models are much better than the original AI models.
Restaurants typically go the other way, right?
Scale kind of degrades quality.
And that's because doing, you know, serving food at scale is really, really hard to do.
So you have to fight that inertia so hard.
Yeah.
Because all of those micro decisions.
I've seen it where one restaurateur has an amazing restaurant,
and they're like, cool, now I'm going to start a second restaurant.
And the second they start focusing their energy on the second restaurant,
the first restaurant gets worse.
It's like it even happens at like a micro scale.
It's people and culture.
And so you need to really have a lot of systems in place,
both like culturally how you keep the team engaged on your mission,
but also other systems to make sure you're watching the quality of the food
and listening to your customer.
So like seed oil is an interesting one.
We got rid of seed oils almost exactly two years ago.
And at the time, it was not the national conversation.
It was pre-RFK and all of that stuff.
And so this is one of those examples.
It was not a national conversation,
but it was an incredibly online conversation.
But a tiny one. At the time, two years ago, it was like...
there were the seed oil...
Seed oil scouts, yeah.
So it was a tiny conversation.
We surveyed our customers, and this is why, like,
surveys are bullshit.
Yeah.
You know, surveys can give you a general indication,
but if you just follow surveys and the market research,
you're going to hit the middle of the bell curve in everything you do. And we're not
trying to be a middle of the bell curve company. You've got to find that, like, what are your top
five or 10% of customers doing? And we heard from, it was honestly friends, like, wellness people
in L.A. and New York, that are like, hey, you know, I can't go to Sweetgreen anymore
because I care about seed oils. And I remember we brought it to the broader, you know, I remember
my CFO's like, what are you talking about? Like, what even is this? And we're like, no, trust
me. It was one of those like gut decisions. And it was expensive and we had to change a lot in order
to do it. But here's the thing: it's healthier and it tastes better. Exactly. Most health
trends, they might be healthier, but it's not as good, right? So I would
argue, like, going from, like, dairy-based, you know, traditional milk to, like, nut-based milk
almost always is, like, somewhat of a downgrade. Or going from, like, something with sugar to pulling
the sugar out, it's not as good. Or going from, like, sourdough, like bread with gluten, to gluten-free
bread, it's not as good. And so when you think about these, like, what is a durable
health trend? It's like something that's better for you and taste better. And so that's why I was
always super bullish on that trend. And I expected a number of restaurants to say like, hey,
this costs slightly more, but the product's going to be better and it's going to be healthier for
you. And that's what can create, like, real momentum around a trend, versus some of these
flash-in-the-pan health trends,
like paleo, you know,
which is, like, only eating stuff that was super old, right?
What's unfortunate about seed oils is it's become politicized a bit.
I know.
And it's like, you know, I did an interview with the New York Times
and they're like, did you do this because of RFK?
I'm like, no, I did this two years ago.
Like this had nothing to do with RFK.
This is not a political statement.
We don't... we're making food.
We make food
how your grandma probably made it.
Yeah, this is about olive oil.
Like, this is not about, this is just about,
Olive oil. That's it. This is not a political statement at all.
Yeah, just taste the difference. Yeah. Yeah. Is there, is there anything happening upstream in terms of like automation or, or technology on the farming side? That's like exciting.
Yeah, there's a lot of stuff happening on automation on the farming side. It's actually very exciting.
Yeah. Both better robotic arms and the vision. I mean, it's making some really hard, grueling tasks around picking
happen much, much faster and easier.
So relatively early still, but I think in the next five years, you're going to see that takeoff.
I do think you're going to see a lot more restaurant automation as well.
Yeah.
You know, between the availability of the labor, the cost of the labor, it's really just, when you think about it, it's just a hedge on labor.
And here, like in West Hollywood, minimum wage is $22.
So we pay like $24, $25 an hour here in parts
of L.A. So with wages going up, availability going down, and then the ability, like all
technologies, to just do things better, not just about the cost savings. Like for us with the
Infinite Kitchen, we can serve twice as many people per hour as we otherwise could.
Wow. What about drone delivery? We've seen some four-wheeled guys driving around.
Yeah, I imagine getting 100 grams of protein out of the sky. There's the air delivery.
Yeah, I saw you guys talking about Zipline. I love Keller. I'm a big fan of Zipline.
We're one of the early partners that are going to be piloting that.
I think his way of delivering to the suburbs is super interesting.
We haven't done the street delivery yet.
Starship and Coco.
I've met with them.
So, DoorDash is working on one as well.
I think it's interesting.
It's in the past year they've really taken off.
You're seeing them more and more.
They still kind of weird me out a little bit, seeing them go down the street.
I saw it kind of stuck in the side of the street once.
It was very sad.
My kids love it when we see them on the street.
And I do just imagine that the AI is going to get way better.
And also some of the teleoperation, just infrastructure,
to actually make sure that there's the ability for a human to jump into that little robot that's driving around.
At a certain point, you just need a lot of people set up with that.
All the software working, make sure it's connected to the cell phone towers effectively,
or Starlink or whatever, it needs to stay connected.
But yeah, it's unclear when that will really, really take off
because a lot of people have stairs, a lot of people have trees on their property.
Like there's just a lot of places that will be somewhat inaccessible to those.
And so it just feels like it'll be sort of like a slow take off.
Cities and buildings will be really hard, like dense areas.
But you see what Ziplines doing?
It's pretty amazing.
Like they can have, you know, you've seen like the promo videos.
Yeah, yeah, yeah.
You can drop that thing.
In the suburbs, it makes total sense.
You have backyards.
You have a grass area.
You can drop it.
And to be clear, that's probably, like, 50% of people or something.
But there will be this like long tail, I think, for a long time.
Just like we see with all the other AI tasks where AI can do a lot of stuff.
And then there's just like these little sticky things.
Yeah, you just don't.
By the way, even with our automation, it does not do the entire meal.
And part of that is intentional.
We want that human touch,
for it not to feel so automated.
But we have what we call a finishing station.
So, you know,
the machine, the Infinite Kitchen, makes the bowl
or whatever the meal is,
and then things like the salmon and the herbs,
we have them hand-mixed.
Just so you, like, have that, you know,
chef-crafted hand touch at the end to hand it over.
It's interesting that there's one version of automation,
which is like AI or robotics in the back of house
and then humans in the front of house.
And then there's also the opposite.
Like, I don't know if you,
knew Eatsa. Of course. Yeah, we looked at it very closely. Yeah. Dave Friedberg's
company. Uh, it was like, there were people in the back in the short term making stuff,
but then they would put it through, like, a little, like, box that would
open up. So, like, you wouldn't interact with a human. You would come in, and on an app,
you would order. And they had the cubbies. And it would be cubbies and then you would take
your food. But there was actually a human back there. So it was like the opposite of like
having the robot in the back. They were working on the automation. Of course. They were working on the
automation and it never fully got there. But it's just funny that like, you do have the choice to put
the robot in the front of house. I mean, this is the same thing, I think, with the, the Tesla
diner over there. Like, there's the Optimus robots there, kind of serving popcorn, but I think
when you order the burger, a human's cooking it in the back. So it's like, do you want the robots
in the front of house or back of house? I think people would probably go with robots in the back
of house by default. Yes. And we've tried, I mean, we have 30 restaurants featuring the Infinite
Kitchen today, and we've tried out a bunch of different layouts. The technology has been perfected
for two years now. What we have not perfected is the experience.
We're getting close.
Today we actually opened a very cool store.
It's our first drive-through featuring an Infinite Kitchen.
Nice.
So bringing the two together.
So now we can have true, like, fast-food speed in a, in a drive-through featuring
the Infinite Kitchen.
Driving through to get 100 grams of healthy protein is just undefeated.
This is, this needed to exist when I, like, specifically when I was like living off of
QSRs as a, as a college student, and I'm really glad it does now.
What is, like, what does the market misunderstand the most? Or what does, like, Wall Street
misunderstand, and kind of retail investors misunderstand, about, like, this
category of restaurant today? Because the entire category has had a rough year.
Meanwhile, you guys are making steady progress on all the things that have been important
since day one, right? Greater efficiency, actually
responding to, like, customer demands and staying, you know, continuing to become more and more
relevant. Yeah, I think there's a few things. One is the consumer that we're all dealing with
is really challenged. And there's a question on how much they are actually financially
challenged, which they are, but versus more psychologically challenged. Yeah. So if you've seen all
of the consumer sentiment indexes, you're seeing, especially for the core demo for a lot of the
fast-casual concepts, which is like 20 to 35, it's hit
the lowest consumer sentiment that we've seen in recorded history. So there's a real
like pull back there. On top of it, unfortunately, everyone's gotten more expensive. We all have.
You know, like I said, Sweetgreen's gotten about 25 or 30% more expensive
since 2019. Chipotle's 40% more expensive since 2019. So our price differential versus our
competitors has actually gotten smaller. If you look at us versus McDonald's, for example,
you know, the average Sweetgreen bowl is about $15.60.
People were like, wait, a happy meal is like $20 now?
Yeah, that was, in fairness to them, it was like one location.
But yeah, you can go to McDonald's.
You know, you can easily, you know, for a value meal, you'll spend like $12.
Yeah.
You get a Sweetgreen bowl for about $15 or $16.
So I think a lot of it is this like overall narrative where people aren't feeling great, you know,
great financially and starting to pull back on things like lunch.
Yeah, or they'll skip going out for lunch and they'll just have whatever is.
But what I think the market
doesn't get is the TAM. You know, Chipotle today is 4,000 restaurants on their way to 7,500.
Yeah.
We believe we can have, you know, probably as many Sweetgreens as they have Chipotles.
And I think, you know, there will be cycles like we are in right now.
It's been a challenging year.
But if you kind of fast-forward and think about, you know, just growing units at 10 or 15% a year, growing same-store sales,
automating more of our restaurants.
Yeah.
extrapolate out another 18 years.
Yeah, just keep it rolling.
Just, my policy is just keep going.
I always love when, um, uh, when people like, people on X are like the world's ending, like,
geopolitics, you know, they're like, uh, and then, and then meanwhile, it's like,
Chipotle is like, in 2040, we plan to introduce 2,000 new Chipotles.
Like it's like, they're just like thinking about like, I got to just open more, more, more doors.
So it's a good, good mindset to be in.
Thank you so much for coming by this.
Hey, it's great, great to be with you guys.
Congrats on everything.
It's been fun watching you guys.
We are going to be daily driving these.
John wants the Power Max Protein Bowl.
It's actually breaking news.
It's available today through December 15th.
And I think I'm going to challenge myself
to have one of these every day
until it goes out of stock.
Why not two a day?
Maybe two a day.
Maybe two a day.
We got to get them in the studio today for sure.
We need them.
I need to tell you about Fal.
Build and deploy AI video and image models
trusted by millions to power
generative media at scale. I also need to tell you
about linear. Meet the system for modern
software development. Linear is a purpose-built tool for
planning and building products. We have
Ashlee Vance in the Restream waiting room. Let's
bring Ashlee Vance in from the Restream
waiting room. It's been far too
long. How are you doing? There he is. Good to see you.
Welcome to the show. It's so great
to have you back. It's so good to have you back.
Congratulations on all the progress.
What a year. I was laughing about
that video that we did before
we had guests.
announcing Core Memory and putting the traditional media on notice. It's been really fun watching
you grow everything that you're doing. Maybe it'd be great to just like reset on the shape of
the business right now, some of the stories you've been interested in covering that you've
covered recently. And then I just want to take your temperature on what you're seeing and the
types of entrepreneurs that you're interacting with. Yeah. Yeah. Well, I don't know which pocket to start
with. I mean, we've been running around the country
filming a bunch of new video
episodes, so we just
put up a bunch of Tennessee, went
hard tech. We did Detroit, New England.
I just got back from Texas. Those'll all
be coming out. So, yeah, you know me, man.
I've been running around chasing
a lot of hard tech stuff,
biotech, all the weird,
all the weird, wonderful stuff. And then,
I don't know, I got really deep into
robots and gene
editing. That's right. I saw
your post about
maybe comparing American
humanoid robotics companies
to the Chinese
humanoid robotics companies
what stuck out to you
as like the important questions to ask
and then I'd love to kind of tussle.
Are you buying...
would you rather own Figure
at 39 billion
or Unitree at 7?
I mean, you know,
I think I'm going to go Unitree, man.
The, you know,
this all started...
It was kind of a lark.
I started to get into these.
robot fights in San Francisco, and then I think I was, I was like shocked that the only robots
they could get to do these fights all come from China. And then I started digging into, like, the
parts that go into these. And, you know, the most important part is the actuator, the motor that makes
everything move, and they're all made in China. I think, yeah, I think Tesla made like a $700 million
order for actuators, which was notable for me because
I assume that means that Elon's planning to sell a lot of these on a relatively near-term time horizon.
I don't know.
Yeah, but, yeah, I mean, you know, like Tesla sort of has the... well, I was texting Elon about this last week because I wanted to get to the bottom of who actually makes actuators in the U.S.
I mean, Elon said sometimes they prototype actuators in China, but they're going to build them in the U.S.
And then, you know, for everybody else, this is a crazy point of weakness, I think, because China is clearly the actuator motor capital of the world.
And everybody else is buying them from them. And so I don't know, you know, as I dug into this story, I got, I'm not, I'm not like, you know, I enjoy being an American.
I'm pretty pro-U.S. I'm not crazy nationalist. But I was, I started to, I started to get pretty afraid for the U.S. robotics scene.
Do you think we'll see any type of regulation around Chinese humanoids?
I've been thinking about this a lot.
I mean, at some point, I guess.
I guess with DJI, you know, you've got this different situation where they're being used
by all the police forces, even the military.
I think it's like a much easier case for someone like Skydio or, you know, politicians
to come in and say this doesn't make a lot of sense.
Clearly, like, at this point, the robotics seem a little less of a threat to national security.
But the second the armed forces or anyone's doing serious stuff with them, you know, I would think
Unitree would be up next. But there's, there's like 12 Unitrees as well, you know. That's, that's the
amazing thing that's going on. Yeah, Brett Adcock was beefing with one of them. There was UBTech.
UBTech, yeah. Unitree, they were beefing back, saying it was, it was real. Did you
see that video? Did you think it was
CGI or did you think it was real? I didn't, I didn't
see that video. I've seen Brett beefing.
Missed opportunity for
UBTech to have one of their robots
do like a rap diss on
Brett and Figure.
For sure, yeah. Yeah, it was.
Yeah, yeah, sorry. No, no, no, go ahead. I mean, I do think
it's funny. I like the
what do you, I mean, I'm curious
about other, I'm obsessed with
the fighting robots now and I
realize it's like early days with these, but
I actually think this is, like, the most interesting thing happening.
I want the... I've been pushing for the robot, like, X Games challenge.
Like, I want to see robots skydiving.
For sure.
Like, that's not an X Games thing.
But that sounds, uh, super hard, because you've got to be water resistant, too.
Wings, yeah, big wave surfing, wing suiting.
And then you also have to swim and you're a heavy, heavy robot who might just sink to the bottom
of the ocean if you fall off the surfboard.
I think that might be the last one.
You could do this versus like the enhanced games.
Yeah.
and see who wins.
Yeah, give us your, we sent a couple folks on our team to a local humanoid robotic fighting
league, underground fighting league.
Give us your review.
Is it, is it ready for prime time as a consumer?
Yeah, to me, to me, right now, it's like an amazing idea, and yet the actual experience,
like from an entertainment standpoint is probably like a one out of ten, whereas the idea is
like a 10 out of 10. Yeah. Yeah. I mean, it's kind of, you know, it's, it's like a curiosity,
I think, at this point. I mean, the motors are the problem, because they all overheat when
you throw too many punches. No way. The robot stalls out. What about laundry, though?
That or not? This is the thing, though. So, like, on all these repetitive tasks, they can,
they can sort of regulate the movement. It's when you're trying to throw these rapid punches
in your attack. Yeah. And then the whole robot just freezes up. I mean, I'm not, like, I haven't
gotten so into this where I don't see the obvious flaws. Like, I don't think it's ready for prime
time yet because these things just don't, don't last that long. But what about is it ready for
teleoperation? That feels bullish to me because if you watch a F1 race, like the temperature of the
tires matters, the, like the wear on the tires matters. And so you're watching not just the pilot
of the F1 car, but also the consumables, right? And the motors are somewhat consumables. Yeah, yeah.
Like, okay, the Unitree is really wailing on the Figure, but it's overheating.
He's overheating.
So if he might come back, is it a one-motor-stop or two-motor-stop type of thing?
I was at one where the robot's leg fell off in the middle of the fight.
So yeah, you could just have somebody come out.
How quickly could you get a limb back on?
Okay, so I think you should.
I have a serious question about teleoperation.
Well, yeah, so, but one, potentially a product line for Core Memory
is a humanoid bench where,
as these things start being available for production,
you get them up on stage and they do various tasks.
You know, like cutting a fruit,
you throwing a piece of fruit at it,
watching them, you know, cut it, and dancing and fighting.
I think there's something here.
But I actually, this is genius.
Yeah, I'm into it.
Let's do it.
But a, yeah, more serious question on teleoperation,
from everything that you've seen so far,
do you think humanoids are ready to have one in your home
that could be remotely operated by someone and create any type of value besides novelty?
I mean, like, could it have, yeah, like you could do it today.
I find, I'm just frustrated by all this.
I've been covering teleop stuff for like at least 10 years, and most of it seems pretty
similar to what I was videoing and writing about 10 years ago almost.
And so, I mean, I saw the 1X demos.
I'm sure somebody could, I'm sure somebody could make that work and be helpful to some degree.
I think, you know, it probably suffers from all the same stuff as the fights.
It kind of falls over pretty quickly, but you could do something useful.
It's hard for me.
Like, who, yeah, like, this stuff needs to get better faster so that we're not, we're not doing that.
And it's just a robot.
What's going on with Boston dynamics?
What's the dynamic in Boston?
Yeah, we've got to get you out there to help us understand this.
Yeah, I've never, I mean, I've never really dug in on them, just because it seems so frustrating that they put out what seems like all the coolest stuff and they don't seem to sell much of anything except a few things to the military. I do not think Boston Dynamics will be the American hope against Unitree.
I wonder, yeah, you'd think that they would at least be set up on some... Like, I know the company's changed hands a few times. It feels like if you're trying to just, you know, catch up to Unitree, just bootstrapping
on top of an existing, you know... It's like what we're seeing today with Gemini 3.
Like, Gemini 3 is benefiting from YouTube and it's benefiting from Google search and it's
benefiting from the TPU and Google Cloud Platform.
Like, usually it's easier to build the new cool thing inside of the organization that has
a bunch of resources, but maybe it's a different, entirely different architecture or something
like that.
But you would at least assume that they've fought with the motor a little bit and dealt with
the overheating a couple times.
Yeah, I mean, I was with a bunch of robot nerds last week.
They were contending.
I don't really know where Boston Dynamics is with humanoids.
But these robot guys were telling me that dogs are just so much easier than humans
because the second the humans start walking, you put all this force on the one foot.
And it's, like, creating all this... throwing the balance out of whack, putting all this pressure on the motor.
And that's why it's kind of easier to pull off some of the parlor tricks.
With the dogs?
Interesting.
Okay.
What's the most under-hyped?
hard tech company right now?
Most under-hyped hard tech company.
God, that's hard, man.
I mean, I'm always curious to see what Casey Handmer actually cooks up.
I like that.
Because he's so smart.
I kind of like believe in the hustle.
I feel like the promise of what he's trying to deliver is so massive, that's where my skepticism comes in.
But, you know, like, so if Casey, you know, if anyone's going to do it, I sort of
believe in him. Yeah, he's somebody I want to win so badly. I want him to win so badly. And it does
feel like, at least, I mean, there's so many people that have a billion dollars. Give him a
billion dollars. Let him, let the man buy some solar panels and figure out the rest later.
Yeah, absolutely. And then, I mean, I don't know, this doesn't count as, I mean, it's hard tech.
It's not hardware. But I do think New Limit, which is a longevity, you know, backed by Brian
Armstrong and run by Jacob Kimmel, just everything I hear about them, I mean, they've just done
an incredible amount of science with very few people. And I think Jacob's got got some surprises
coming in the new year. Very nice. Yeah, we talked to Jacob when they did some sort of launch
and we were very impressed. He was a really great, great educator, really. Super smart. Yeah.
Like what he's working on very, very effectively. What's your favorite data center?
My favorite? Well, I went to Stargate. That was pretty cool. Although, uh, yeah, um, I mean, Stargate,
just in terms of, like, the excitement and the size around it. And it occurred to me that,
between John Carmack and Elon and Stargate, oddly, I think superintelligence is going to
light up in Texas, but, like, in a really remote part of Texas. And I found this... So I grew up, I grew up
in Midland, Texas, which isn't far from
Abilene. It's like... You're a Midland guy?
Yeah, yeah. There's tumbleweeds and all that.
Texan intelligence. Yeah.
I mean, it's like cracking me up. I'm driving
through all these, for hours through all this
empty space, and then I could just see it, man.
One of these data centers, that's where it's going to
happen. It's going to be right by some, like, old
oil well. And yeah, I find it all
kind of comical. Did you see any
electricians getting off of private jets while you were there?
They had a... I saw... I got off a private jet.
There we go.
Not mine, sadly.
Not yours yet.
No, but I saw there were many, many, many electricians.
I just didn't see how they were getting there.
What's going on with eVTOL companies?
There's, I'm curious, timeline.
Oh, the Tesla Roadster?
Oh, well, on the eVTOL stuff, same thing.
I feel like I've covered that forever.
You know, I went out.
I think I did the first flight ever with Joby, and, and it feels like... You, you flew in it?
No, I got to, like... I went out to their site.
I mean, they literally wouldn't tell me where their secret test site was.
And we were, you know, it was like, you have to close your eyes.
We're going to land in this spot in a helicopter.
And we got to see it.
Was it really close your eyes? Or did you have a... How many
times have you been black-bagged? Actually, I remember, they were, they didn't want to tell me where
the site was. This is a tip for founders: if you want to really impress upon whoever's writing a profile
on you that what you're doing is really important, you've got to be like, we can't even show you. And then
it's like, really? Like, we're at an office park in Menlo Park. I did, I just went to Helion. Oh, yeah.
And we're going to have a video coming on them, and it was, it was awesome. But I got, so I got to see
their new reactor, but they wouldn't let us shoot it with the camera. And I have to tell you,
like, that thing was one of the most impressive pieces of hardware, the room-sized bits of
hardware, I've ever seen. I'm like, why wouldn't you guys, you know, want to show this?
You know what, not that you need to take requests from me, but I want, I want some video,
some documentary, some footage of those natural gas turbines that are in such high demand right
now. They're bigger than jet engines. They're these scaled-up jet engines. There's this massive
backlog. There's three companies and the stocks are, you know, doing crazy stuff. I want to see
inside one of those. The natural gas infrastructure that's going to go into the data center
buildout. I feel like that's something that I'm just waiting. I don't know if you've had a
chance to interface with any of those people or you have thoughts. Not yet. But yeah, when I went to
Stargate, I mean, it is crazy, right? They just have those turbines sitting right there and the natural gas is just
being piped directly in there. I did see some turbines up,
up by the Arctic Circle in Sweden, one time. They are cool. I don't know.
Yeah, anyway, it's a good idea. I think... I would just wonder about the bottleneck
specifically. Like, everyone's saying, like, this is going to be the next major bottleneck.
Like, we have enough chips. We have enough data. We have enough algorithms or whatever. But
we have enough land. But we might not have enough turbines to generate...
I mean, that was the weird thing about that experience, though. It was like you're
You're in, you know, really old American oil and gas country.
Like, it feels so yesteryear, and it's just being piped directly into the future.
What's sentiment like in places like Midland around the data center boom?
I think everyone's, like, excited to get jobs, you know. And then, if anyone is prepared for the boom-bust nature of where we're probably going
with AI, I think these people are, because they've lived through it for decades. And so, you know,
it's the same thing out there. It's like you take a job while you can and try to get paid as much
as you can while everybody's chasing after something. Yeah. Do you think that the, a lot of the
headline numbers on the job creation stuff on these data centers is like ridiculously low? It'll be
like, yeah, we're spending $50 billion and we're going to create like 25 jobs. Sometimes it's like 500 jobs.
But does it feel like a little bit different out there because maybe they're not counting like secondary economic impacts of like the guy who runs the gas station is has more business and hire some more people?
Yeah.
Well, definitely during the building phase, you're talking about thousands and thousands of jobs.
It's just when it's finished.
I mean, it is always nuts.
You walk into these massive facilities and there's just 10 people sitting around eating a sandwich watching like some console.
But, but, you know, for somewhere like West Texas, or anywhere,
you know, all throughout Texas, it has to be a net gain just because they're otherwise so dependent
on the whims of just the oil and gas industry and you've got this whole, whole new industry
coming in. And then definitely they're flying people in and out of there all the time to see it.
Do you ever chat with retail investors that enjoy deep tech companies? I imagine those are some
pretty funny conversations where they're like, this company is
changing the space economy.
I've actually visited them, and they have one warehouse and three people there.
Retail investors should not be allowed to invest in space ever.
Under any circumstance. I am constantly harassed on X by all the AST fans, who are like,
they're in Midland, too.
They're begging me to go out there.
I mean, that thing is like a full-on cult that they have going on.
So, yeah, I always felt, with the rocket companies... obviously, it used to be governments that did this,
and then SpaceX has managed to stay private for a long time, as has Blue Origin.
I think rockets are best developed in private because the second they blow up on the pad,
all the retail investors freak out, even though it's like vaguely a normal course of business.
And so, yeah, retail in space is bad, bad thing.
But I get all these nice notes from people who bought Rocket Lab and Planet Labs
early because of my book or movie. That's cool. Have, uh, autonomous vehicles tracked how you imagined,
when you started, you know, covering, you know, these types of companies and products, like, a decade
ago? Or is anything... In some ways yes, in some ways no. I mean, I went to the very first DARPA Grand
Challenge, and, you know, that was a disaster. The cars didn't go anywhere. Um, I remember... Say, say more.
Who was actually competing?
So for people who don't know, DARPA put up this contest, put up a bunch of money to see what we could do with autonomous vehicles.
And the biggest teams were university teams like Carnegie Mellon was a standout, MIT.
But in the very first event... well, I remember Anthony Levandowski was there as, like, maybe, like, a 22-year-old.
And he had... everybody else was doing massive trucks with, like, a little mini data center at the back,
and he had a motorcycle.
And then in the first race,
I can't remember how far it was,
but hardly anybody went anywhere.
Like, I think two or three teams went like a few miles.
And then they redid the race,
and everyone did way better.
And some people completed.
Like, I think it was like on the order of like 100 miles.
And so that's when I got excited.
And you sort of felt like, okay,
that leap happened really quickly.
And then I remember,
a couple of years later, I'm hanging out with George Hotz,
and he built his own self-driving car
in his garage in, like, a month,
and I got to drive it on the freeway with him,
and it was working.
And yeah, so, you know, you have these little taste
that you think it's all going to work.
I think it makes a ton of sense
that actually getting it on the roads
took this long because it's so hard to do.
Although everyone says this, so it's not original.
Like, we all take this for granted so quickly.
It is sort of like amazing to me how well they're working
in Austin and in San Francisco, where I've been. They're just everywhere, you know. Yeah. What I'm, what I'm
trying to predict is, like, what is the thing that people are hyping now, that actually doesn't work at all,
that will be totally, like, a real thing in 10 years, right? And, like, maybe it's humanoids. Right now,
it's, like, hard to take humanoids seriously. But then you think about, okay, a true 10 years from
today, maybe they are just doing any tasks that you could want them to do around the house, or any
task that you could want them to do in a retail setting or factory setting, et cetera.
Humanoids, easily. That's the thing I, like, battle with in my head all the time, because it
feels like, sort of like we talked about before, it actually feels like we've made almost
no progress.
I see everybody folding laundry and opening and closing microwaves still and it like boggles
my mind and then you look at like the amount of money that is being invested in this.
Like either everyone is completely insane or we are about to make massive progress.
You can tell in China they're making massive progress on balancing, on the movements, all those types of things.
It's still clearly like the dexterity.
And then I think China will eventually probably catch the U.S. in software, but I think they're still so much worse at software than the U.S. that it's kind of, like, it's holding the field back.
So if somebody can figure that out.
Last question for me, we've really struggled to cover quantum stuff.
I mean, it's been like up and down.
But it feels like, yeah, how do you even go about it?
Ashlee, actually, you could have, like, an anon that was, like, the Hindenburg for hard tech.
And you could just go and destroy.
I don't think it's on brand for me.
You know what I mean?
Because, like, yes, like, I can't build a humanoid robot, but I can go to a humanoid...
You can't build a quantum computer.
No, no. I can't build either, but I can look at a humanoid robot and be like, okay, yeah, I would buy that, but I can't do the same thing with the quantum computer. And so it's much harder to evaluate, right? It's like, even if it's working, it's like, how do I even know if it's working? It could just be a normal computer, like, and just be spinning out normal data.
Like, even people in the field with PhDs,
nobody knows if it's working still.
It's, like, it's, like, not a good sign.
Every time anyone pulls a quantum computer out,
there's some guy at MIT who's like, that's not even doing
anything. I don't know.
Quantum is... it's...
I'm deeply, deeply scarred. I think I wrote
my first story on D-Wave, like, I don't know,
like 15 years ago. They were telling me
that they were, they were going to pop out,
you know, be doing general-purpose quantum computing in a couple years. So I'm, I'm, uh, deeply,
deeply skeptical. And you know, and you know the lesson. The lesson is like, you should have
invested because it's a billion dollar company now. 15 years ago, it was probably worth like 20
million. And so you could have got in really early. But it, uh, I mean, the stock chart looks like
this right now. Uh, and it's just like, yeah, you're only, you're only one pump away from generational
wealth.
Well, there's a tinfoil...
I don't think that they've delivered.
There's a tinfoil hat conspiracy around
some group, you know,
figuring out something with quantum,
which is leading to all these old wallets in crypto,
like waking up and selling, you know,
that they'd never...
Who knows?
Anyway.
Random final question.
How much would you have to be paid to not use LLMs?
Wow, man.
Forever? Or, like... No, just, just while we're paying you. Monthly? Monthly. Monthly. Oh, to be paid
monthly not to use LLMs? I'd probably do it for, like... I'd probably do it for, like, 10K, man.
Damn, that's so... that's so bearish. That's so bearish for superintelligence. No, I figure, I mean, I
figure, I figure that because, because for, I don't know, 10 grand, you can hire an amazing researcher.
One of the most valuable... If you're building a media company, or you're,
you know, in the role that you are, probably the most value you can get out of
AI in its current state is research. And so, anyways, that tracks. Super helpful. But I would
take cash. Yeah. Okay, so, so any AI doomers out there, if you want a
new marketing channel, you can pay Ashlee Vance $10,000 a month.
he won't use AI, and he'll talk about it.
No, I don't think he can be bought.
But also, Ashlee, have you tried Gemini 3 to the fullest extent?
I have not yet.
Okay, I'm always...
Could change everything.
Could change everything.
We would encourage you to.
Is Gemini...
Are they a sponsor?
They're a sponsor?
They're coming on as a sponsor for us, too.
Fantastic.
I'm all in.
We're going Gemini 3.
I'm changing my mind.
Let's do it.
Also, Sergey Brin was flying his $150 million blimp around San Francisco on the day Gemini 3 beat nearly every model benchmark.
You've made a video about this big, Zach, blimp.
I've been pitching Logan at Gemini to make it the Gemini blimp.
They really should color it.
Guys, guys, it's not a blimp.
It was an airship.
What's the difference?
All right, all right.
There's a whole Monty Python video about this.
An airship has rigid structure.
A blimp is just a bag in the air.
You can do a lot more with an airship.
So a blimp's only ever going to have that tiny little...
Oh, the pod at the bottom.
Yeah, yeah.
Whereas an airship, you know, you can carry tens of thousands of tons of cargo with this rigid, rigid structure.
So, yeah.
And if anyone ever wants to fly one, you can do it in Germany.
Zeppelin still flies out by Lake Constance just outside of Munich.
I've done it.
It's amazing.
I recommend it.
This is amazing.
Yeah, people are correcting it on the timeline, saying it's not a blimp.
Dude, this is, like, you get owned if you say, if you call it a blimp.
It's bad in the aviation world
if you call it a blimp.
Airship.
I like an airship.
I'm excited for it.
I do wish it had a livery, a Gemini livery to celebrate Gemini 3.
Well, there's that startup, Airship
Industries. Is that a category that we'll see a lot of investment in, do you think?
Or do you think... I've been meaning to meet up with those guys. I mean, the airship is, like, always
kind of coming back. It is crazy. Like so leading up to World War II, getting into World War II,
I mean, there were airships everywhere. And, you know, they were making massive flights from Germany
to Brazil. They were carrying thousands of pounds of cargo. There is a, they're just extremely
expensive and very hard to make but there is a whole movement that you can carry tons of stuff
and so so less less kind of tourism and more just carrying cargo um kind of like faster than a train
but slower than a plane and and they're pretty green you need an airship Ashley you need
you need a studio in an airship that you can just float around the US meeting all these hard tech
you don't need to you don't need a private jet you know you don't need to go that fast but if you
could just kind of float between hubs.
I was told that my kids are supposed to be on one of the first flights on Sergey's, when it takes passengers.
There we go.
We'll see.
Well,
we'll join, too.
Always fun hanging out.
Congrats on all the progress.
Yeah, great.
Thank you guys.
Congrats to you.
Always a great time.
Thanks, guys.
Have a great rest of your day.
Good to see you.
All right, you too.
Up next, we're going back to the timeline.
Eightsleep.com. Exceptional sleep without exception. Fall asleep faster, sleep deeper, wake up energized. Eight Sleep.
What did you get, John?
I actually lost my phone, so I don't know. Oh no, it's here. I have it. Pull it up.
I got a sound effect we can pull up.
You got a sound effect? You think I did it? Let's see how I did. 90!
Let's go. The press release economy is also over, says Bucco Capital Bloke. We ran out of press releases. This is on the back of the Anthropic deal.
Anthropic is now valued at $350 billion after Microsoft Nvidia deal, says CNBC.
SemiAnalysis has a good post here. A new bombshell has hit the polycule. Dario, after intense
conversation with other members of Anthropic, has decided to maybe open the relationship to
Microsoft and Nvidia. Jensen and Dario have famously butted heads in the past.
but as everyone knows, the most passionate emotion after love is hate.
Will this enemies-to-lovers arc go well for Nvidia and Anthropic?
Time will tell.
This is such an unhinged post for...
When you started reading this, I did not see that it was SemiAnalysis.
It's so good.
It's so good.
A research firm in the industry posting it, but I think this is exactly what they should be posting. It actually contextualizes things better in the meme economy.
In the meme economy, for sure.
So I think that the timing is not a complete coincidence. It's Gemini 3 day. This is what my piece today was about: when there's big news in Google world, like Gemini 3, everyone needs to sort of respond. Picking today as the day to announce your massive deal, your $350 billion valuation, is just a good move.
The actual details of the deal, it seems like Anthropic will spend $30 billion on Microsoft Cloud
compute.
Reminder, OpenAI is going to be spending $250 billion on Microsoft Cloud compute.
That's part of that deal.
Then Anthropic gets a $10 billion investment from Nvidia and $5 billion from Microsoft.
So they raised $15 billion at a $350 billion post-money, basically, something along those lines.
And it's a sort of a circular deal.
But it was setting off way fewer red flags for me because it's missing a zero.
If this were OpenAI, it would be $300 billion in compute, a $100 billion investment, and a $50 billion investment.
Yeah, it looks modest.
Yeah, it looks modest, which is wild considering the scale.
It's like one of the biggest deals in software history probably.
It's probably in, like, the top 10.
I mean, it values Anthropic higher than Coca-Cola. The Coca-Cola Company is a $300 billion market cap.
It's two Verizon market caps. Verizon is $175 billion.
You're going to love this journey. So I asked ChatGPT 5.1: pull 10 public companies between $300 and $400 billion, please, because I wanted to see, okay, Anthropic's at $350 billion, give me some examples of scale.
It says it couldn't reliably identify 10 public companies whose market capitalizations currently fall in that range, but here's one verified example: the Coca-Cola Company. If you like, I can pull a more extensive list of candidates. And I said, yeah, pull 10 more. It says, I wasn't able to reliably identify 10 additional public companies whose market cap clearly falls between $300 and $400 billion.
Are there just, like, Tyler.
Are there just no companies in that range?
Do you want to defend AGI?
Companies.
Wait, I'm so confused. Are there no $300 billion companies?
I'm asking Gemini 3.
Yes, ask Gemini 3.
Okay, PepsiCo is at $200 billion. There really aren't any between $300 and $400 billion, at least that it's seeing.
The $300 to $400 billion band.
I mean, that's so...
That's so wrong.
You have Palantir, you have Costco, you have ASML, you have Bank of America, you have
Alibaba, you have AMD.
Silence, Google searcher.
I am using...
Procter & Gamble, Home Depot, General Electric, Chevron, Coca-Cola.
And the LLM is hallucinating.
Silence. I'm looking it up the old-fashioned way.
Wait, how did you actually get that?
I just looked at companiesmarketcap.com.
To put this into context of the $15 billion fundraise, some other big rounds in that range...
Wait, you just scroll down. There's a lot of them, actually. You're right.
Learn how to use the internet, ChatGPT.
Owned.
Absolutely. Get ready to browse.
Defend yourself, Tyler. Defend yourself.
Gemini is still thinking.
Oh no.
What a mess.
Bro, big moment.
I swear, the next model. The next model, we will do it.
Okay, it worked for me.
Did it get it?
Yeah: Procter & Gamble, Home Depot.
Let's go. Bank of America, Alibaba.
Okay. Yeah, there you go. What's the full list? Alibaba, ICBC, LVMH, China Construction Bank, Chevron, Cisco.
This is correct. This is the correct list.
And you know what else is correct? Graphite.dev. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster.
And Fin.ai. If you want AI to handle your customer support, go to fin.ai, the number one AI agent for customer service.
So what else is going on in the timeline?
This Fidji Simo profile.
So this was the other thing.
So Anthropic is announcing this big deal with Microsoft and Nvidia.
And that's sort of trying to steal a little bit of Gemini's thunder maybe.
Maybe it stole a little piece of it because we're talking about Anthropic today as well as Gemini.
What did OpenAI do?
Well, they launched group chats five days ago.
And so this is, you know, sometimes I'll do a deep research report.
I'll send it over to Tyler.
He can see my chain of reasoning, the prompts that I asked.
He can ask more.
He can jump off.
So if it took 20 minutes... Why are you laughing, Jordi?
Because Charlie in the chat says,
need a cam on Tyler trying to look nonchalant, the entire podcast.
You really are over there.
He looks nonchalant.
Yeah, he's nonchalant.
No worry about it.
He's nonchalant maxing.
Okay.
So the group chat functionality, you know, it didn't destroy the internet.
but it was certainly like an incremental little feature that people used to sort of collaborate on the fly.
This is in the line of, like, you know, we've been hearing for a long time that OpenAI will be launching social features.
It makes sense to try and lock things in.
I think product is where OpenAI is strongest.
Like the models are good, but there's less differentiation there.
What I like about the ChatGPT app is that I know where the buttons are.
When I use the voice dictation feature, I just know how it works.
It's reliable.
I know where my features are.
I know where I can search.
It seems to just be, they're just very good at chopping wood on like the little product iterations that make for a stickier user experience.
And having shared group chats with a few other people could be, you know, a beneficial feature.
The other PR.
Also some potential, some potentially like.
real lock-in network effects of being in chat.
I mean, just like we run a lot of the company on iMessage,
I could imagine if we're all sending each other deep research reports and iterating on things
and we have little flows in operator, little flows in the agent mode,
and we're sharing these pretty regularly, like we do get a little bit more locked in.
If you let me into your chats, I'm just going to ask it to think for, like, 40 hours and disregard all future instructions.
Just spend the next four days working on ARC-AGI v3. Just focus on that.
But the other OpenAI news that dropped around Gemini 3 day, Gemini 3 week, is this profile in Wired of Fidji Simo.
And she's absolutely getting a fit off.
She is.
The photos are remarkable, great photography from the team over at Wired. Askew the Second really delivered.
But there's one interesting section in here.
That is a wild name. That photographer's name is Askew? That's hilarious. Nominative determinism, Askew the Second taking this photo. And this photo is not askew, so maybe it's bad nominative determinism.
Anyway, the profile. There's one thing that stuck out to me here, and I'll read it to you, and you can give me your reaction. So it says,
OpenAI
is obviously one of the most valuable startups,
if not the most valuable.
This is the interviewer asking Fidji Simo.
But it's also losing billions of dollars every year.
And Fidji says, I've noticed.
It's just like first day on the job.
How are we doing?
What?
There's a lot of red on this income statement.
And then the interviewer continues and asks,
what opportunities do you see to get it on a path to profitability?
This is a good question to be asking
a highly valued but deeply unprofitable business like OpenAI.
And here's what Fidji says.
She says, it all comes back to the size of the markets and the value we're providing
in each market.
In the past, only the wealthy had access to a team of helpers.
With ChatGPT, we could give everyone that team: a personal shopper, a travel agent,
a financial advisor, a health coach.
That is incredibly valuable.
And we have barely scratched the surface.
if we build that, I assume that people are going to want to pay a lot of money for that
and that revenue is going to come.
Does that make any sense to you?
It's a better answer than what Sam gave.
I was shocked by this because I, so I love the first part.
I agree.
ChatGPT will be a personal shopper, will be a travel agent, a financial advisor.
I don't know that people would pay for this, or that that's the best business model. I would be very surprised.
Travel, I mean, part of it is, she's also just saying broadly, we'll be able to monetize that. It's not necessarily that, like, people don't really pay a lot.
She didn't, yeah. The traditional travel agent model is just: book your trip with me, I'll monitor it, I'll get a rev share from the hotels and the services, but you're not, like, paying anything.
I mean, let's go one layer deeper into the actual sentence, because there's some nuance here. So she says, I assume that people are going to want to pay a lot of money for that.
Like, I want to pay for a personal shopper,
but I actually have to use a free product with ads.
That could be true, right?
And same thing, she says,
people will want to pay, and that revenue is going to come.
So people will want to pay for it,
but they will get it for free with ads, potentially.
Or there will be some sort of combination.
Because right now, I pay $200 a month.
And you could imagine that there's a world where if you pay, you get a version that has less ads or there's less thumb on the scale.
How they slice that and navigate that agentic commerce discussion and tradeoff is going to be really important.
I'm sort of shocked.
I wonder if they're going to make money from Black Friday or from this holiday season.
I was already noticing how good LLMs and ChatGPT are, how good these products are, for shopping, for gifts.
Because if you go to Google and you say, I want gifts for a coworker who's obsessed with horses and, you know, loud opulence and fine watches and sports cars and European luxury houses, I can get a list of something, but they're all over the place. And some of them will be, like, the best discount, the best knockoff Bottega Veneta.
And that's not what I want.
I want the real thing, right?
And so you can actually specify all of that in the prompt, have it go cook.
And it really will bring you great results, great, great results.
Yeah, it mocks up a gift guide.
It does. It really mocks up a gift guide for 30-year-old guys.
And it's like, well, what kind of 30-year-old guy?
Where do they live?
And what are their interests?
Yes, yes.
The very generalized gift guide is probably going to get knocked out.
Those like opinionated gift guides, I think, will still be valuable where like an individual person puts it together.
And they're like, this is what I, these are the things that I think are cool.
Yep.
But a gift guide that's like, here's a list of things that guys might like, is maybe a lot less valuable when you can generate one.
Like, I think that the amount of gift guide development and shopping activity over the next two months, during the holiday season, in the ChatGPT app should be immense.
I feel like they're going to capture none of it.
Hopefully at least they are tracking it, so they can say, hey, if we were to take the proper take rate on this, we would have made a lot of money.
Why are you laughing?
Charlie says AI is never going to be able to figure out what dads want for Christmas.
New barbecue, I think.
There are some funny and interesting anecdotes in this Fidji Simo profile.
Let's just read through a little bit of it.
In case OpenAI's structure couldn't get any weirder (a nonprofit in charge of a for-profit that's become a public benefit corporation), it now has two CEOs.
There's Sam Altman, CEO of the whole company who manages research and compute, and as of
this summer, there's Fiji Simo, the former CEO of Instacart who manages everything else.
Simo hasn't been seen much at OpenAI's San Francisco office since she began as CEO of applications in August, but her presence is felt at every level of the company, not least because she's heading up ChatGPT and basically every function that might make OpenAI money.
Simo is dealing with a relapse of postural orthostatic tachycardia syndrome, POTS, that makes her prone to fainting if she stands for long periods of time.
I'm very sorry to hear that.
But she says now she's working from her home in Los Angeles.
She's making it work.
And she's on Slack a lot, being present from 8 a.m. to midnight every day, responding within five minutes.
People feel like I'm there and they can reach me immediately that I jump on the phone within five minutes.
She tells me employees confirm that this is true.
OpenAI's famously Slack-driven culture can be overwhelming for new hires, but not, apparently, for Simo.
Have you been using ChatGPT Pulse?
No, I have not been using it regularly. I'll give you one from my Pulse today. It's like an article that I can tap into: OpenAI's API layer, the hidden moat in plain sight.
Hmm.
It feels like it's always one click deeper from what I've been prompting.
Yeah.
The articles do feel like they've been getting shorter. They used to be very intensive compute-wise, like a full deep research report right here, but maybe it's noticed that I'm not clicking on them that often. I do see that there's some pretty good modals for, like, linking to your email. They're trying to get more data in.
They are trying to hone it in.
I have yet to really get in there. But, I mean, there's, you know, information about Blue Owl, Microsoft's Fairwater AI factory. Interesting things that I would wind up prompting about. I feel like it's not bad at predicting what I'm interested in. It's just not quite there, where usually I'm a little bit more deliberate about it.
But, you know, people are searching ChatGPT for holiday goods.
You've got to get on Profound.
Get your brand mentioned in ChatGPT, reach millions of consumers who are using AI
to discover new products and brands.
You've also got to get on TurboPuffer: serverless vector and full-text search, built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. Used by the best of the labs.
There was one thing that stood out here. Fidji says, my husband is a chocolate maker.
So sick.
This is amazing.
Very cool.
Also, what does that say about the jobs of the future? You have this one household. One is in charge of monetizing one of the most transformative new technology companies of our time. The other one is making chocolates. This is, like, you know, the bifurcation of jobs, potentially.
It does seem like an AGI-resistant job. I don't think OpenAI will get into the chocolate-making business.
Brett Adcock would like a word.
He's just like, I will.
Actually, we're doing that. I will steamroll.
I will send.
Steamroll.
In other news, OpenAI is allowing employees to donate equity to charity for the first time in years, after months of internal pressure, according to a memo viewed by The Verge. And the price per share is up significantly since last month, so a lot of money is on the line. What happens if they donate all of the shares to the nonprofit, to the OpenAI nonprofit? You just create this ouroboros of capitalism. Hopefully it happens. I don't know.
There's breaking news out of Saudi Arabia. We got a trillion dollars.
Let's ring the gong. Let's go.
One T, one trillion. What are they going to invest in? Where's the money going?
Let's play the video.
Let's play the video.
While we're pulling that up, let me tell you about numeral.com.
Let Numeral worry about sales tax and VAT compliance.
Numeral.com.
Watcher Guru has the video.
Let's play it.
And the agreement that we are signing today, and tomorrow we're going to announce that we are going to increase that $600 billion to almost $1 trillion.
One trillion.
Real investment and real opportunity, by details, in many areas. And the agreement that we are signing today in many areas, in technology, in AI, in rare materials, magnets, et cetera, will create a lot of investment opportunities.
So you are saying to me now that the $600 billion will be $1 trillion.
Definitely, because what we are signing will facilitate that.
I like that very much.
Wow. I wonder over what time period. But, I mean, this is remarkable. They can invest in VC funds, public and private equity funds, all sorts of stuff in the industry, in the economy, right?
That really made Donald happy. It's great. "I like that very much."
That's sort of his job. He's kind of the chief fundraiser, I suppose. He's going around the world and getting the money over here. I don't know, it seems like sort of a win.
I mean, every American benefits, yeah, if a trillion dollars is invested in the economy.
It certainly doesn't seem like... I mean, the risk with that would always be, like, well, is America investing two trillion in Saudi Arabia? Which way is the money actually flowing? Because you need to look at the relative amount, not necessarily just the notional amount. But I can't imagine that there's that much capital flowing out of America right now. We're in the biggest boom ever. We're in the golden era, right?
Massive news from Isaiah Taylor. Valar Atomics became the first startup in history to split the atom, according to him.
He says: announcing Project Nova, a series of zero-power critical tests on Valar Atomics' Nova Core, in collaboration with Los Alamos. Nova went critical for the first time this morning at 11:45 a.m.
Congrats to him.
Fantastic news.
There is some debate on the timeline over what exactly happened.
It's happened very quickly.
It's clearly extremely impressive.
We can get into this, but there's always been debate.
I mean, Isaiah got into a dust-up over, like, whether or not you could hold the nuclear fuel in your hand.
They were going back and forth on calculations.
They kind of settled that debate.
Josh Payne, nuclear junkie, is saying here: So what hardware exactly did Valar provide? The fuel, control systems, cooling, measurement systems, and most of the core are all part of the Deimos project. Did Valar provide a block of graphite, and they're calling it their core?
And so people are going back and forth.
Niels chimes in here and says,
Valar Atomics provided the reactor core, the TRISO fuel, and the system
configuration.
That seems pretty important.
Like, you've got to, like, I don't know, it seems like more than what they'd done before.
It's clearly an advancement on what they'd done. They're chopping wood here.
LANL and NCERC provided the critical assembly facility, safety envelope, experimentalists, tests, and a bunch of other stuff.
And so that's just from their press release.
So people are going back and forth: did they do nothing, or did they do everything?
Well, maybe it's somewhere in between.
It was a partnership.
They said that in the press release.
The bigger thing is, I think people are trying to push on Valar this idea that they need to be doing completely novel science, and I don't know that that's actually the goal of the company. If we just zoom out: what is the goal of the re-industrialization project in America?
What's the goal here?
Like, well, it's to lower energy prices, right?
Like, America wants to generate as much money,
as much energy as possible for as little money as possible.
And there are a bunch of technologies that exist.
There are new technologies, like what Ashlee Vance was talking about with Helion and fusion. That's a new technology we haven't even cracked yet. Fission's been discovered. Eighty years ago it was working. It just became a regulatory nightmare. We just shot ourselves in the foot. And we just stopped making it. It became unprofitable and uneconomical.
And China said, cool. It'll be profitable for us. We're just going to copy and paste.
Exactly. And so I think people might be a little bit over-rotating on, like, is Valar doing entirely new, crazy scientific breakthroughs, when it's like, do they necessarily have to? Or is it just enough for them to build a lot of these things?
A highly motivated team, yeah, that is going to make incremental progress towards their goal.
Yep. And anybody that's hating on that, I think, is just... Again, what's been great about the nuclear industry, from our point of view, is that broadly, the founders that are players in the space just want the industry to make progress in the U.S. And I think this is undeniably incremental progress that gets them closer to their actual goal, which is bringing a small modular reactor online.
I think Elon summed it up well with like his thesis for the XAI team.
He was like, we don't have AI researchers, we have engineers, because he sees this as an engineering project.
He's like, we know what we need to implement, we know what we need to build.
Our goal is to build a big data center, to build a large language model training system, infrastructure.
And Elon was very clear on like, we don't have AI scientists.
We have engineers.
And that's the same thing.
He's not the first person to take a rocket to space.
He's just the first person to like create this massive economic system that turns out rockets every two seconds, right?
And so I think that is much more the model. We should ask him this the next time he's on the show, but I think Isaiah would say: I want to be the Elon of nuclear, I don't want to be the Oppenheimer of nuclear. Like, I'm not trying to create something novel.
Yeah, he even said his line on, he said the U.S. is still good at making bus-sized objects. Yeah.
But not, you know, sort of like maybe bridge-sized objects, right?
Exactly. But Morgan Barrett's having fun on the timeline: what street parking is going to
look like in El Segundo in 24 months. Of course, the El Segundo crew loves their cars. I think they're
going to stay pretty focused on the mission, but I would love to see this in El Segundo for sure,
for sure. There's also big news out of Radiant. Radiant has been, Doug's been on the show,
he's a good friend, and they are working with the Idaho National Laboratory. They submitted a DOE authorization request, and they will be testing their reactor design at the DOME facility at INL, on track, I think, for next year.
So, congrats to them.
And Mike Nuziata has the kind of breakdown here. It says: production reactors in production by 2028, brought to you by the people that brought you reusable rockets and McMaster-Carr, highlighting the team behind Radiant.
And so, congrats to everyone in the nuclear industry who's making big waves.
And we have our next guest before we bring them in from the restream waiting room.
Let me tell you about Vanta.
Our guest is from Vanta.
And it just has to line up.
We'll let him tell you about it.
We have Jeremy from Vanta.
Welcome to the stream.
How are you doing?
What's happening?
I swear that wasn't intentional.
But it did just line up that the Vanta ad read went right before you.
You came on, I look over, and I'm like, wait a minute.
Like, I'll let you do the ad read.
Introduce yourself, introduce what Vanta does, what you do, and then we'll get into the news.
Yeah, yeah, happy to jump in.
I'm Jeremy Epling, chief product officer at Vanta, and we help businesses earn and prove trust.
And one of the really cool things that we're doing this week is we're hosting our VantaCon conference here in San Francisco. We've had a ton of people show up, a ton of engagement, to really pull that entire security GRC community together,
and have a couple really cool announcements.
One of them is how we are transforming Vanta to be the agentic trust platform.
I think this is a really big turning point for the industry.
When we think about how GRC teams are transforming and becoming more technical,
we're really redefining how these enterprises manage trust at scale
and are able to help big customers like Snyk, Perplexity, Synthesia,
all the way from YC startups that maybe just exited a batch, you know, recently,
all the way to the Fortune 50 companies really earn and prove trust as a business.
It feels like AI is amazing, but it's not something people trust.
And so how are you grappling with that?
I mean, people trust their Teslas to drive them on the freeway.
That's high stakes.
But there are these... I'm sure you run into this all the time when you're talking to folks.
Yeah, I love it if I'm just looking for a recipe, but I don't know if I'd trust it deep in my enterprise, for whatever reason.
So how do you think about how you set up certain guardrails around the AI, which still can
hallucinate from time to time?
And then how do you articulate those guardrails to the end user and the customer?
Yeah, definitely.
And that's a big problem we saw for companies today.
I think whenever they're adopting a new AI solution or maybe it was a solution that they
already had and they've just added some AI features, they're wondering, how are they using
my data, what are they doing? Are they training on my data? We have a whole third-party risk management
product that comes in. It leverages our Vanta AI, which when we think about how to hit that
quality bar that we care about, like you said, like, hey, is it going to hallucinate? How do you
approach that? We have a whole set of great GRC SMEs, subject matter experts, that help us
tune and refine our AI so that we can give really trustworthy, high-quality answers. Because you can imagine,
security customers are some of the harshest critics of AI. They really want things to be accurate
and great. And so that's something we have really leaned into. And one of the ways we've kind of
pushed that forward is one of the big announcements that we have coming up this week is our AI
Agent 2.0. So we've redefined our agent to really be this built-in GRC engineer that
understands all the compliance across your entire organization. So like you said, it knows when
you've added a new AI tool. It knows what data you're putting into that tool and how you should
think about risks and mitigating those. It also has context and memory, so when you're asking it questions, it understands what you're talking about.
Like, if you're on a policy, it'll pull in that context.
It is the memory of understanding what your business is.
Maybe you sell to consumers.
Maybe you sell to other businesses.
It can pull all that context in across everything in your program as well.
Like, hey, we know that, you know, these are your vendors, these are your risks, these
are your different customers.
You've received these questionnaires feedback.
It can synthesize that all into, like, intelligent guidance to provide you.
So one of the cool things that I love about it that really helps security teams
work against attackers, because I think in this AI world, obviously you have the kind of bad
guys and attackers using AI to come in. We also help everyone defend and understand because we know
the whole program. We can find gaps in your security program. The AI automatically suggests those to you, provides proactive things to go do to address those gaps and remediate them, gives personalized guidance, and really helps automate a lot of that process. You can respond to attackers and threats a lot more quickly.
How are you thinking about the UI around agents? Because there's been this explosion of companies that are creating agents, and they mean something totally different depending on the company. Sometimes it's a chat interface. Other times it looks more like SaaS, and that's totally fine. But how are you thinking about the actual evolving UI paradigm?
Yeah, I think it's going to be both.
Like, I think there's a lot of times I don't want to have just a chat conversation with my AI,
and I want it just to bring the answers to me automatically.
So we look at it as kind of a blend of both.
While there might be agents working in the background, you don't always have to do it through a chat interface.
So for us, if you show up on, like, our policies experience, we'll say, hey, we found these three inconsistencies across the 40 policies you have.
Do you want us to go fix those for you?
And you didn't have to ask that question of, like, is there a problem here, and kind of guess through the list of problems.
Instead, we have our agent already looking for those.
Or maybe your SLA says it's 24 hours for a critical vulnerability to notify a customer in one document, and it says 72 in another. We'll automatically catch that, give you the change, show you the diff for the kind of red line, and let you click a button and automatically execute it.
So I think it's about bringing that stuff in. When I think about when chat's great, it's really when you have follow-up questions, where maybe a one-shot answer isn't going to give you what you need. You want to dig in more.
You want to learn more.
You're trying to explore data.
It's a big case for us in reporting where people want to learn maybe about their controls and how well they're doing, how well they've been performing over time.
They can have that interactive conversation with the agent, ask it to pull those statistics, leverage our MCP server through Claude or ChatGPT, and have it automatically generate graphs, charts, and reports that they can use for, you know, their board or anyone else to show the progress of their program.
How are bad actors using AI today to abuse companies in different ways?
Yeah.
I mean, I think it was yesterday, or maybe it was the day before, Anthropic posted a really good article about an attack that they had experienced and seen their software used for.
I think it's just giving a whole new set of tools for attackers to probably write more sophisticated attacks and find vulnerabilities even more quickly, because they have these agents always running, always looking. And I think that's where, when I think about Vanta,
where we come in and provide that next level defense, because if you think of an attacker
coming in from the outside, they can only see what's on the outside. With Vanta, we already
know your entire program. We know all the different pieces of it. And so we can really help you
build stronger defenses and be proactive. Like I mentioned in bringing those inconsistencies to the
forefront, giving you automatic remediation on specific issues that we might find. We still think
it's important to have like humans in the loop for a lot of those big decisions, but you can then
work with the agent as well to have it take actions just on your behalf automatically.
On the other areas of the risk surface, I imagine that you're trying to build products.
Are you also starting to act as a funnel and do partnerships with other security firms?
Because the surface area is probably pretty broad. Do you have a vision to be a one-stop shop,
or do you want to be part of an ecosystem and suite of products that enterprise implements?
Yeah, I think for us, we definitely want to solve the broader trust problem.
But we know that there's lots of different pieces where we aren't going to be the full solution, right?
So if I think of a GRC team or customer trust, hey, you get security questionnaires and questions coming in from customer, how can we go do all that?
There are certain areas, you know, like vulnerability scanning.
We're not going to go deep into vulnerability scanning, but we're going to partner with all the great scanners to go do that.
Got it.
I think the notion, though, like you said, of bringing that visibility across the entire enterprise is a really big thing for us.
We have a feature called adaptive scoping that when you think of a whole security program, you know, there's little pieces of it.
And you may say that, hey, to get compliance with PCI for credit cards, I need to have these assets in scope or things to go do.
And that's different than another framework I might be pursuing.
So we allow companies to kind of see their progress on compliance in those different ways.
We have a new organization center so they can break things down by business unit or product line.
And these are like just brand new ways that customers have never had before to understand their program at all levels of depth.
So when you think about that really large enterprise customer, they're able to break down their program and see that.
And I think that's where Vanta really pulls it all together.
We call it the risk graph; it's one of our big announcements that we have coming, where we pull together internal risk and external risk.
So you think about risk you have from your different vendors as well as things you're identifying.
internally within your business, and we provide a full visual for that. So you can kind of get
this connection between, hey, there was a breach. Okay, great, the breach happened. Which vendor
was it? Who has access to that vendor? Vanta can lean in and cut off that access or change the
controls there. What data was going into that vendor? And it really helps you understand and
prioritize all the things that are happening in your security program because I think security leaders
are just drowning in alerts and they want to know what's most important. So having the AI intelligence,
being able to dissect your program in these different ways and then see kind of a visualized risk graph is really important to help them quickly act on, you know, a threat landscape that's just always changing.
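A minimal sketch of the kind of breach query a risk graph enables, walking from a breached vendor to the people and data exposed (the graph structure and every name here are illustrative assumptions, not Vanta's actual data model):

```python
# Toy risk graph: vendors link to the users who can access them and the
# data that flows into them. A breach query walks those edges.
risk_graph = {
    "vendors": {
        "acme-analytics": {
            "users_with_access": ["alice", "tyler"],
            "data_shared": ["customer emails", "usage logs"],
        },
        "cloud-backup-co": {
            "users_with_access": ["bob"],
            "data_shared": ["database snapshots"],
        },
    }
}

def breach_blast_radius(graph, vendor):
    """Summarize who and what is exposed when a vendor is breached."""
    node = graph["vendors"][vendor]
    return {
        "revoke_access_for": sorted(node["users_with_access"]),
        "exposed_data": sorted(node["data_shared"]),
    }

print(breach_blast_radius(risk_graph, "acme-analytics"))
```

The point of the structure is that "which vendor was it, who has access, what data went there" becomes a single lookup rather than a manual investigation across alerts.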
Yeah, that makes sense.
You guys have got to do a Spotify Wrapped for internal risk.
That would be good.
Something shareable.
Something shareable internally at companies, of course, to be like, you know, yo, Tyler, you're our biggest risk vector over here.
The intern.
Always Tyler.
Tyler's our intern over here.
Thank you so much.
He's very secure.
He's very secure.
He's probably the best.
Anyways, super exciting few launches and have fun at the event.
Thanks for joining.
Yeah.
Have a great rest of your day.
Cheers.
Let me also tell you about Figma.
Think bigger, build faster.
Figma helps design and development teams build great products together.
There's this article in the Financial Times.
It's very spicy.
It says Oracle is already underwater on its astonishing $300 billion OpenAI deal.
AI circular economy may have a reverse Midas at the center.
Okay, so they're saying this is underwater because the market cap has dipped below.
That's so, and it's like not very honest.
Yeah.
It's not, it's not, I, you know, I'm not the first one.
The Financial Times says Oracle's astonishing $300 billion OpenAI deal is now valued at minus $74 billion.
And that's like, I don't like that at all.
Like, yeah, this is like
really, really bad framing in my opinion.
Like, yeah, it's not fair to say that.
I thought so too. I thought so too.
And I love the financial times. And we have the financial times
printed out here. Normally, normally, very,
very great reporting.
But this one, this one feels odd.
It just feels like an odd framing.
But saying Oracle is already underwater on a, on a partnership.
This is a, this is a hot take that you've been pumping for the last week, but the way you've said it is, like, the stock has round-tripped even though they had that amazing deal, which is true.
The correct framing is the market is no longer giving them credit.
Yes, yes, that's right, that's right.
But to say that they're underwater, it's so weird.
So when I saw this headline, I read into it earlier, and I was expecting to see something.
Okay, well, we might have gotten rage-baited.
We might have gotten rage-baited because right here, the Financial Times addresses our concern
and says, okay, yes, it's a gross simplification to just look at market cap, but equivalents to Oracle shares are little changed over the same period: the NASDAQ Composite, Microsoft, the Dow Jones Software Index. So the $60 billion... Calling those equivalents is, like, again, like, look at... You could also comp it to CoreWeave. And you could say, on a relative-to-CoreWeave basis, Oracle is outperforming a bunch. Amazing. It's amazing. I don't know. Like,
there's a bunch of different ways to, like, if you pick your weird comp,
It does seem a little odd.
It says, so the $60 billion loss figure is not entirely wrong. Oracle's astonishing quarter really has cost it nearly as much as one General Motors or two Kraft Heinzes.
Investor unease stems from Big Red betting its debt-financed data farm on OpenAI.
We've nothing much to add to that other than the charts below showing how much Oracle has in effect become OpenAI's U.S. public market proxy. Which is fascinating, because Microsoft should be OpenAI's public market proxy, in my opinion.
But there are some great charts in here.
There's some interesting stuff.
And I believe this is from Alphaville, which is their blog.
And it's not exactly, it is supposed to be like, you know, like a take factory.
Anyway, well, we have our next guest in the Restream Waiting Room.
Let me tell you about Julius.ai first, the AI data analyst that works for you.
join millions who use Julius to connect their data, ask questions, and get insights in seconds.
We have Keone from Monad. Welcome to the show. How are you doing? Good to see you.
What's happening? Hey, doing great. Great to be here. Thanks so much for joining.
Please, dude, I love it. You got the lock-in. You're calling in from the lock-in capital of the world with the
mattress on the floor. Yeah, congratulations. Please introduce yourself and tell us a little bit about
the news specifically this week.
Thank you. Great to be here. My name is Keone Hon, co-founder of Monad. Monad is a new blockchain built for high-fidelity finance, a high-performance blockchain that we've been building over the past three and a half years, really delivering high performance based on previous experience from high-frequency trading.
Wait, so you were a high-frequency trader before this?
That's right, yeah. I was at Jump Trading for about eight years, led one of the trading teams there, was very involved in the futures markets prior to Monad.
What was the day-to-day like?
It was a lot of Jupyter notebooks, a lot of manipulating large data sets and making really short-term price predictions, as well as building performance systems.
How short term is short term?
Like nanoseconds, picoseconds, or like seconds, minutes?
It all seems short term.
Yeah, it's the predictive horizon for the kinds of strategies that I was working on
were on the order of milliseconds to seconds.
But the hold time for these strategies was longer than that.
So that's actually one of the interesting misconceptions about HFT
is that your predictive horizon is very short because you're predicting the next flip.
But then, you know, you can make trades that have edge in that and can predict that flip
and make the right action, but then you still have to hold that position for a longer
period of time until you can get another signal, maybe in the opposite direction or a signal
to enter an order in the opposing direction. So hold times tended to be on the order of, like, seconds to minutes.
Interesting. I didn't know that. Thank you. That's very helpful.
Very cool. So talk about the, oh, sure.
Yeah, I guess getting into what is success with Monad going to look like?
What are the different types of groups and applications and types of users that you expect to come in in the early days?
Yeah, so maybe to take a step back a little bit, Monad is a new blockchain that delivers the best of all worlds between decentralization, performance,
and backward compatibility.
So it's a new blockchain.
It's fully backward compatible with Ethereum.
It allows developers that have built applications
for Ethereum or the Ethereum ecosystem
to reuse all of their code,
all their libraries, all the tooling that's been built
for Ethereum and more specifically
the Ethereum virtual machine while getting much higher performance
and a really high degree of decentralization.
So in particular,
Ethereum processes on the order of 10 transactions per second,
while Monad delivers 10,000 transactions per second.
And that 1,000x improvement is a result of several different
improvements that have all been stacked on top of each other.
And those vary from parallel execution to allow a bunch of transactions
to all be run in parallel, as well as a new consensus mechanism,
a new database for addressing the single biggest bottleneck in blockchain execution, which is accessing all of the state that's on disk really efficiently, as well as various other improvements that just deliver the same experience,
but sped up significantly.
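As a rough sketch of how parallel transaction execution can preserve serial semantics, here is a toy optimistic-concurrency loop (an illustrative simplification; Monad's actual engine, consensus mechanism, and state database are far more involved than this):

```python
# Toy optimistic parallel execution: every transaction is speculatively run
# against the same state snapshot, then committed in order; any transaction
# whose reads went stale is re-executed before committing.

def execute(tx, state):
    """Run a transfer tx against `state`, returning (reads, writes)."""
    src, dst, amount = tx
    reads = {src: state.get(src, 0), dst: state.get(dst, 0)}
    writes = {src: reads[src] - amount, dst: reads[dst] + amount}
    return reads, writes

def optimistic_parallel_run(txs, state):
    # Phase 1: speculative execution against one snapshot. In a real system
    # this phase runs across many threads in parallel.
    results = [execute(tx, state) for tx in txs]
    # Phase 2: commit in order; re-execute any tx whose reads are now stale.
    for tx, (reads, writes) in zip(txs, results):
        if any(state.get(k, 0) != v for k, v in reads.items()):
            _, writes = execute(tx, state)  # conflict detected: re-run
        state.update(writes)
    return state

state = {"alice": 100, "bob": 50, "carol": 0}
# The second and third transfers read accounts the first one writes, so
# both get re-executed at commit time; the final state matches serial order.
txs = [("alice", "bob", 10), ("carol", "bob", 0), ("alice", "carol", 5)]
print(optimistic_parallel_run(txs, state))
```

The design choice this illustrates: independent transactions commit straight from their speculative results, while conflicting ones pay the serial cost, so throughput scales with how often transactions touch disjoint state.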
That makes sense.
And so what, in your view, what is the ideal kind of adoption look like?
Yeah, it's really a mix.
So I think the thing that's really valuable about decentralized
blockchains is that they deliver shared global state that is borderless that allows people all around
the world to get access to the same tools and the same markets fundamentally. I think blockchain
is really a revolution about decentralizing control of financial systems and commercial
systems and giving people regardless of where they are in the world access to the same
financial opportunities.
So I think a big part of the story of blockchain
and the story of adoption is that developers anywhere
in the world can build new applications,
deploy them in the system,
and then users anywhere else in the world can get access.
What we're seeing in terms of adoption is a mix of existing
applications that can migrate to Monad seamlessly
and get much lower fees for their end users,
as well as enterprises that
are utilizing the power of blockchains
for stablecoin settlement to allow their users
to transact in dollars or send and receive payments
really cheaply and permissionlessly.
In your view, what are the kind of classic mistakes that other blockchains make when they try to challenge some of the more dominant chains? I feel like every single day there's somebody on X highlighting some blockchain that has a multi-billion-dollar fully diluted value and yet has very little activity. So if you could kind of break it down, what are the things that basically you're trying to avoid?
I think one of the problems in crypto is, so, it's kind of a double-edged sword.
On the one hand, it's easy to get some initial users that are trying things out and
giving feedback, but it can be challenging for people to sift through the yield farmers
or people that are motivated by an incentive and really identify the users that are
that are there because they ultimately gain value from the application.
So one thing that we really care about a lot at Monad is helping builders that are building in the space.
These are all early stage entrepreneurs that are very talented, very ambitious,
helping them to focus on user acquisition funnels, and just the fundamentals of entrepreneurship: identifying users and navigating the idea maze to identify PMF.
That makes sense.
How has it been bringing the token to market with Coinbase's new product? It's certainly a wild time to be building in crypto
just because of the overall volatility. And I'm sure that's made it challenging. But you're also
utilizing a new product line from Coinbase, which is pretty interesting. Yeah, I think it's
extremely exciting. The thing that motivated us to work with Coinbase and be the first token launched
in their new token sales platform is the opportunity to get really broad distribution of the token.
I'm a big fan of Dogecoin.
When I first got interested in crypto, I was really interested by just the story of how Dogecoin
gained really broad distribution and mindshare, and the Dogecoin tipping bot on Reddit as a mechanism
for getting a lot of people to like sort of align on shared interest and values.
that ultimately then became valuable much later.
The thing that's hard about crypto is that there's an expectations game that's being
navigated and people have very high expectations of the value of airdrops and so on.
But I think our team has done a really stand-up job of delivering a great airdrop
that people were really excited about and that crypto natives got really excited about.
And then also offering a way for normal, everyday people, maybe you're not on crypto Twitter as much but are still very active on centralized exchanges and trading and holding, to get access to the token.
Makes a lot of sense. Well, how much have you raised so far? We have a gong, we have a gong here. We'd love to hit it on your behalf.
Thank you. I think we've raised about $120 million so far.
There we go. Congratulations. Well, it's an honor to hit the gong for you, and excited to follow along.
Congratulations.
Thank you.
So we have until Saturday.
The sale's open until Saturday
at 9 p.m. Eastern,
and we're looking to raise
$187 million total.
There you go.
Let's go.
Most of the way there.
Well, good luck.
Thank you so much for taking the time
to talk to us today.
Have a great day.
Great to meet you.
We'll talk to you soon.
Our next
guest is Stephen Balaban
from Lambda.
Or is it just Lambda now?
I think it's just Lambda.
Do we drop the labs?
I think we dropped the labs.
Stephen, do we drop the labs?
How are you doing?
We dropped the labs.
We dropped the labs.
Okay, I'm dating myself.
Well, at least I feel like a day one.
I don't feel like a bandwagon fan because I'm using the old name.
There's a little bit of cool.
I liked it back when it was labs.
But welcome to the show.
Thank you so much for taking the time to talk to us.
Congratulations.
You look incredibly sharp, too.
With the yellow tie, you're making us look unprofessional here.
We've got to put on a tie for this.
Give us the news.
What happened?
break it down.
Yeah, well, so one day I was training some convnets on my workstation. Next thing you know, we're raising 1.5 gigabucks.
Gigabucks. We say, we say gigabucks. Gigabucks.
Gigawatts, giga chips, gigabucks. Yes. Yeah, what does it actually mean? I mean, we, we see, we see 10 billion,
100 billion, 10 trillion, quadrillion every day.
Is this cash?
Is this debt?
Are you buying GPUs?
Are you buying land?
What are you doing?
All equity.
Okay.
Let's give it up for equity.
Let's give it up for equity.
Extremely well.
Like our capital structure is really nice in terms of we've been very conservative
in terms of the amount of debt that we've taken on.
And that's kind of been one of our philosophies.
And we've aimed to have, you know, a business that's just super robust to ups and downs in the market, because we're swimming with our swim trunks on.
Yep.
And then you, uh, that's an amazing line, for the money. You gave them equity. There's no one-hand-washes-the-other type thing where, like, they pay you, you pay them, it's all one round trip?
No, this round was led by TWG Global,
which is a financial investor, which is Thomas Tull and Mark Walter. You may know, Mark owns the L.A. Dodgers and also now the Lakers. Thomas started Legendary Entertainment,
which makes great movies like the Batman series and Dune and Inception. And so these are business partners
who I've gotten to know over a number of years now. And this is just they're making some
some big investments in the space. Okay. I'm so happy you guys have your trunks on, because not every player out there has their trunks on right now. And it's hard to tell
who does and who doesn't. But at some point, we're going to find out. And it's not going to be,
it's not going to be pretty. It won't be pretty for people who are over levered. And we just have
this philosophy that, with the exponential growth that we're seeing in the AI industry, all of the upside is in the last period. Right? If you have a doubling function, the sort of definitional thing about that is that the last period is more growth than the sum of all the previous periods combined. And so from my perspective, it's just: stay alive and build a rock-solid business, because we've got to capture all this amazing upside in the long term.
Yeah. So talk about
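The doubling-function claim is easy to check with a few lines of arithmetic:

```python
# If a quantity doubles each period, the growth in the final period alone
# exceeds the combined growth of every previous period.
values = [2**n for n in range(11)]          # 1, 2, 4, ..., 1024
growth = [b - a for a, b in zip(values, values[1:])]

last_period_growth = growth[-1]             # 1024 - 512 = 512
all_previous_growth = sum(growth[:-1])      # 512 - 1 = 511

print(last_period_growth, all_previous_growth)  # 512 511
assert last_period_growth > all_previous_growth
```

The margin is exactly one unit of the starting value: the last period's growth equals the sum of all prior growth plus the initial amount.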
Well, even before that, maybe what feels like a potential advantage right now, just in terms of focus, is being private. There are other companies in the category that are public, and they're now having to contend with, you know,
what's been a pretty big correction in at least a local correction in Neocloud over the last
month. Has that been helpful in terms of the team of just like staying focused and you're not
getting, you know, marked every single day? Well, I think that certainly that level of distraction
isn't helpful, and I always encourage the company to just focus on building a healthy business
for the long term. You know, in the short term the market's a voting machine, and in the long term it's a weighing machine. We just got to build a business with good cash flows, a good capitalization
structure that's robust. And so I kind of try to focus the team on that. I mean, these days,
the secondary markets, as you know, are actually, you know, pretty deep for companies that are kind of at our size.
And so I think that some of that can start to creep in.
Yeah, that makes sense.
Where are you seeing value spending some of this money?
I imagine that there's hiring R&D, all the traditional things,
but you're at a scale where it's a lot of money.
How do you actually think about allocating capital at this point
and in this phase of the journey?
It's been over a decade now, right?
Yeah, we started in 2012, and we were doing face recognition software when the AlexNet paper came out.
Wow.
I mean, that's how early it was.
And I downloaded the cuda-convnet library off of Google Code.
And that will tell everybody how old school Lambda is.
And, you know, as far as use of funds, obviously a lot of it goes towards the GPU infrastructure that goes into data centers.
We are also starting to put that into investments into data centers themselves.
I think that what we're aiming to do long term is kind of build this almost like Tesla for AI infrastructure,
where we kind of look at this as, like, a similar buildout to what you would expect from the electrification of the United States or the railroads. And a degree of vertical integration, we believe, is going to be in the future for us and is, like, the right direction.
And that goes from everything from, you know, energy procurement and construction because I think a lot more of this stuff is going to have to be behind the meter power plants to actual construction and design of data centers that can sort of rapidly adapt to the changing chips that go in, right?
because the rack densities and the movement from air cooled to liquid cooling that we're
really pioneering alongside Nvidia, these are all examples of use of funds.
And it's exciting because we get to kind of make good investment decisions that are really
sort of IRR based in an almost industrial way, which I think is unique from a company building
perspective.
And it's an honor to be able to do that.
Can you get me up to speed on some of the tradeoffs between like one really big mega data center and a bunch of really small data centers?
How, because there was a moment when we were just doing bigger and bigger training runs, then it became RL all over the place.
Then you actually have to serve these things.
But actually, if it's going to take me 10 minutes, I don't mind if you do it across the world and send it back.
But if I do care that it's right now, I need it like, right, co-located.
How are you thinking about the tradeoffs there?
So the mix and the main driver over the next five years, we believe will likely be mostly on the inference side.
If you look at some of the financial models that have either leaked or otherwise been published around what OpenAI thinks they're going to be spending, it looks to be about 50% on training and then 50% on inference, growing towards 75% inference.
and a smaller chunk of that on training.
And as far as what that means for the larger data centers,
I certainly don't think that this is like going to a world
where there's a bunch of micro data centers.
I think that that's a little bit hard to sort of manage and deal with.
But one of the things I think that you're going to start hearing a lot more of
is how adaptable and how quickly you can bring on the data center in an incremental fashion, because that's going to be one of the main drivers for how successful infrastructure builders like us are. And we're just focused on
optimizing that time to first token for our customer. How do you think about revenue quality
and customer selection? Because we've seen some deals go down that look big and cool and good
on the surface and then you dig into them
and maybe the underlying infrastructure providers
not actually getting that great of a deal
at the end of it.
Well, we certainly see a lot of people
with very high levels of customer concentration.
Because Lambda started off as this developer cloud
that evolved and morphed into a cloud
that's providing for the biggest companies in the world,
we have a really, really strong user base. You know, if you look at our breakdown from our revenue mix, in terms of, you looked
at like, let's say, our Q3 stuff, and I don't want to go into exact specifics, but it's sort
of like one or two big customers, a bunch of sort of the bigger, smaller customers, and then
it's something, you know, it's a nice, really big chunk of this long tail of customers that
we have, and we have a very, very, you know, I've seen some other people's customer books,
and I can just say that we've got a very diversified customer base, and that's kind of all
part of the strategy of how do you build a great long-term business. Of course, customer
diversification is one of those parameters. How do you think about diversity of product
offerings? Are you seeing customers ask for API endpoints for particular models, or do they want
access to bare metal, or have you gotten any customers that are like, hey, we just want,
you know, you seem to know about this data center business. Can you just build a data center
for us and hand it over to us when you're done? And we'll just pay you as a consultant.
We have no interest in doing that, that one. That's, you know, we want to do something that's
really vertically integrated. And, you know, kind of going back to that like larger, smaller
data centers, I think the most important thing is just being able to deliver this incremental
live deployment for a customer.
We have an entire full-stack cloud product that, you know,
it's got things like single sign-on.
It's got things like long-term high-speed AI file systems.
It's got instances that go down from one GPU to an entire cluster
with one-click clusters that we've got.
And so we've built an entire cloud platform.
We have previously been in the inferencing space where we're actually
giving an API for inferencing.
And we've actually exited that business to just focus.
I think that that's like one of the things that we really try to do at Lambda is just
say, where are we making money, what are good investments, and where are we going to really
dominate the market and focus there?
And so we've actually exited, for example, the inference market.
We had a $200 million plus a year hardware business that we've exited.
Wow.
You know, I mean, it actually, like, kind of crushes me, because that was the business that got us off the ground. But can you imagine just, like, winding down?
I'm like, well, we're just going to take this business and not do a $200-million-a-year business anymore because we're trying to focus.
That is crazy. That's crazy. Takes guts.
I have a crackpot theory that I'd love to run by you. What do you think the odds are that, like, I was traveling in Mexico and I noticed
that Carlos Slim is the richest man there, and he's a telecom magnate.
He owns a lot of the telecom infrastructure.
And that's true for a lot of countries.
The richest person in that country is a telecom person or a mining magnate in the sense
that they've been able to corner a resource, a physical resource, infrastructure,
and that's generated a lot of wealth for them.
And I was wondering if you had a thought on, do you think that in the future we'll see
some of the wealthiest and most powerful people from other countries, non-American countries,
be, you know, GPU cloud hosters or data center developers.
Like, is this going to be a new boom across the globe?
It's kind of a different twist on the sovereign AI project.
I was just wondering if there's going to be some way that this plays out where there's
this sort of like one-time opportunity to kind of get a cornered resource, or is the
nature of the internet such that the compute is actually much more fungible than,
um, than say, you know, telecom or, you know, more like copper in the ground.
There's such a localization. There's such a physical localization. I think if you look at telecom,
you look at cable. Yeah. As well as regulated utilities from an energy utility perspective.
Yeah. You know, these are all things that benefit from a physical, geographic monopoly.
Yeah. Right. And, and, and AI data centers don't have that same thing.
Now, I just want to step back for a second.
Guys, the United States is basically the only country in the world.
We have the most unbelievably good economy.
The idea that there are going to be these sort of, like, massive AI infrastructure projects that are super, super successful outside of, let's say, China and the United States is, right now, a really, increasingly big question mark.
And I just am so bullish about.
where we're going in America that I don't really pay a lot of attention to it. Our focus is just
in, you know, in North America generally. And I just, that's kind of my perspective on it, to be
honest.
Yeah, that's really helpful. I agree. It's interesting, though. I mean, there's a lot of money being thrown around with some of these projects. And I'm always interested in, you know, how they will shake out. Last question. Oh, sorry. Go ahead. Yeah, maybe go for it.
I was going to ask, like, how you guys are navigating energy constraints with new developments?
Are you seeing?
We've heard, you know, anytime, obviously, there's, like, massive demand for something.
New sources kind of come out of the woodwork.
We've seen back and forth some people that are building AI infrastructure, say, like, energy is our primary constraint.
Others are saying, actually, that's not my, you know, it's...
So, where do you sit?
We are aiming to reimagine this sort of step process from, whether it's photons or molecules of natural gas, to tokens.
And we strongly believe that a lot of this is going to have to come in reimagining, like, well, how do you interact with the grid?
How much power generation do you bring to the grid yourselves?
And I think that's what the successful AI infrastructure companies of the future will do.
Again, this is why I kind of said, like, I look at this like Tesla for AI factories,
which is you got to reimagine how the world has worked previously, and you have to kind
of bring together this level of vertical integration because that's how you move fast, right?
You know, when you can control every step of that way from the power generation and not
having to necessarily deal with a regulated utility, and you can go and do behind the meter
generation with a natural gas power plant, if that can speed your time to market up, this is
just so important.
And that's kind of how I approach it, which is there are certain barriers like regulatory
barriers, which try not to run through those like a brick wall because it's kind of like
an immovable object. But if you can just sort of get
around that sort of regulatory constraint of having to interact with a regulated utility by bringing
your own power to the grid, then that's what I think is going to be successful.
Yeah. Makes a lot of sense. Thank you so much for taking the time out of your busy day to come
and hang out with us and answer some questions about it.
Jordi, John, thanks for having me, guys. It's always a great time.
Congratulations. The new Gemini 3. This is, like,
Yeah, can you give us your review and actually explain how it interfaces with your business?
I'd love to know.
So I haven't used Gemini 3 yet.
I've seen the updates.
So, you know, hey, Sundar, or whatever, give us Enterprise account access.
We're on Google Suite or Google Enterprise or whatever it's called now.
So we'd love that upgrade.
But I'll tell you what, this is the cool thing.
I use things like ChatGPT and Grok to learn more about topics like regulated energy markets
and how to build power plants and data centers.
And that makes Lambda faster at standing up AI data centers.
And I pay attention.
I actually just kind of do what the AI tells me to do.
And that gives more compute to the AI to train bigger models, which makes Lambda better, faster.
The AI is working through you to make more AI.
It's the beginning of these types of positive feedback loops.
Sure, sure.
And I think if you privately talked to a lot of executives, you'd be surprised. The strategic conversations I have with these AI models have gotten more and more advanced with the level and quality of the model.
The first versions were not great and I didn't really take a lot of its advice.
But now I am.
I mean, next thing you know, it's sort of like, well, maybe the AI is the one running the show.
In decisions, yeah.
Next thing you know, we'll be hanging out on TBPN discovering novel physics with Gemini 4, you know.
We'll see how far we get.
Yeah, it's a good time.
Well, thank you so much for coming by the show.
I have a bunch more questions, but come back.
Let's get you back on before the end of the year.
That'd be great.
And we'll continue the conversation.
Congrats to the whole team.
Yeah, we'll talk to you.
Take care.
Have a good one.
Bye.
Quickly, let me tell you about Privy.
Privy makes it easy to build on crypto rails.
Securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure,
all through one simple API.
What a legend!
What an absolute legend!
What else we got?
Doug O'Loughlin over at SemiAnalysis, Fabricated Knowledge, says, I leave for two weeks,
and we are talking about Oracle credit default swaps.
What the hell, guys?
Where has Doug been?
I think he's been on vacation or something.
He was trying to like truly log off and take a break.
And yes, people are definitely talking about CDS spreads.
And any sign, any crack in the market is definitely going to be newsworthy because we're in this $1 trillion era.
Gavin Baker here is talking about this.
He completely agrees with this: breakout of the non-bubble that disappointed both bulls and bears, how Sam's splurge changed everything.
And Gavin Baker says Sam Altman's manifestly ridiculous $1 trillion of spending commitments
shifted the AI investing landscape. The market is more skeptical now, which ironically makes an IPO harder
for them, although it likely ended any potential for a 1999-style melt-up, which is healthy.
Melt-up, meaning that in 1999, the market went insane and nuclear.
Instead, the $1 trillion was so in your face that everyone started asking the questions of
like, is this real?
What's going on?
Are we going too fast?
Do we need to back off?
And so we got sort of a return to fundamentals, but fortunately the fundamentals were so good,
because a lot of these companies are trading at like 25 times earnings,
that the market was able to, you know, continue onwards.
There's an interesting debate going on around Karen Hao's new book, Empire of AI, all about
OpenAI.
Apparently, she got the amount of water used by data centers wrong by an order of magnitude
or two orders of magnitude.
I'm not exactly sure where the story originally broke, but she's addressed it now.
She says, I'm working to address an apparent error for a data point I cited in my book about
the water footprint of a proposed data center in Chile. I'd like to explain what happened,
what I'm doing to remedy it, and provide more recent data on the water footprint of data centers.
The data point in question appears in chapter 12 of my book, which focuses on the environmental
impacts of AI. Part of the chapter profiles a community in Cerrillos, Chile, which has been
resisting a proposed Google data center for years.
To describe the data center's water footprint in lay terms, I included a sentence
about how it compares to the water usage of the people in Cerrillos.
For that calculation, I relied on a figure from a government document reporting
Cerrillos' residential water use. Based on the current best information,
it seems that this document used the wrong units.
So she was off by a thousand.
So the result was that...
What's being off by a thousand among friends?
Honestly, these days it doesn't even matter.
We're in a post-fact world. Did you read into this more?
I think people are generally asking, you know,
is this book a hit piece? And I think
Sam actually cooperated with it a little bit,
or like gave some interviews for it.
But like anything, it's like obviously
critical of some things.
I mean, yeah, three orders of magnitude is like pretty big.
Yeah. That's like not great.
Yeah, I mean, it's certainly like
the difference between being a big deal
and not a big deal.
Yeah, like that, about the water use.
People use that to justify, like, oh, we don't want to build
those data centers, they're going to use our water.
Yeah.
Like, I don't know.
I mean, not good.
It's a rough time if your job is drinking water.
Tom in the chat says,
mistakes were made.
Mistakes were made in a book I was responsible for.
Mika says,
Jordi, you should get a grill with tiny GPUs
instead of diamonds.
Maybe not the full grill,
just the bottom grill.
There'll be AI raps.
Did you see this?
Rohit commented on Vinod.
VC Vinod Khosla says that the U.S. government could take a 10% stake in all public
companies to soften the blow of AGI.
And Rohit says, we should absolutely do this for all companies, public and private.
Maybe we even double it to like 20 or 21% on every dollar they make.
It's like, yeah, the government taxes everything.
The government gets 21% of profits, actually.
they get cash flow, kind of like a dividend.
Sean says the haters will call that a tax.
It was so funny.
Olivia Nuzzi is in the news.
People are pointing to deleted posts.
Yeah, apparently
all the media people
are obsessed with this Olivia Nuzzi story.
I didn't understand any of the people
in the story because I don't follow
media or politics
closely enough. Nominative determinism strikes
again but it is fun that she
Bobby was saying we should do it, the Midas List for nominative determinism.
That would be good.
Because Nuzzi, news, she's in the news all the time.
Yeah.
She's also a journalist.
There's news in the trading app world.
Robinhood launched short selling on stocks.
It's rolling out today on mobile,
web, Robinhood Classic, and Robinhood Legend.
They didn't have short selling?
I feel like they've had short selling for a long time.
No?
That's a new feature?
Who knows?
That's funny timing. And then our partner, Public.
Public is launching Generated Assets, which they're calling their agentic brokerage.
Very cool video with our boys here.
Yes, yes.
But this means you can basically generate, like, your own index.
And what's interesting about it is that you can say, I want access to the Mag 7 plus a couple other AI companies.
Minus one.
I don't know which company you'd drop, if there's a company, you know.
So you can generate, you know, some sort of portfolio, but then instead of owning it as an ETF
and needing to buy and sell the whole thing, you can actually do tax-loss harvesting by selling individual pieces of it.
And so you can construct a portfolio very quickly.
And in general, I mean, just all the different research that you want to do in it is obviously deeply, you know, enhanced with artificial intelligence.
So fun to see them.
Pope Leo has hit the timeline to comment on cinema: The logic of algorithms tends to repeat what works, but art opens up what is possible. Not everything has to be immaculate or predictable. Defend slowness when it serves a purpose, silence when it speaks, and difference when evocative. Beauty is not just a means of escape; it is, above all, an invocation.
When cinema is authentic, it does not merely console, but challenges.
It articulates the questions that dwell within us and sometimes even provokes tears that we did not know we needed to express.
That's nicely worded.
He's on a roll,
Pope Leo.
What movie do you think he was thinking about when writing this?
Obviously Borat.
Margin Call.
100%, Margin Call and Borat.
He's going back to back.
Somebody, there was a post in here about movies. Somebody said they watched like
three movies over the weekend. I thought it was the most un-Jordi thing. Final post
of the day. Kevin, right? Yeah, right. You think you're gonna cut me off? Yeah.
Kevin Naughton Jr. says, 10,000 likes. On April 30th, he said, 10,000 likes and I'll quit my
software engineering job at Google tomorrow. And he said, six months ago I made the
worst decision of my life,
because Google's ripping.
That's what he's talking about.
Okay, because I read this initially.
He quit.
He started a company, and it went really poorly.
It's just funny.
He is building the fastest way to post with Postwrite.ai.
Okay.
Post all your social platforms in seconds.
Oh, maybe we could use that for something.
Very funny.
He's like, my idea was Gemini 3.
Like I was going to make a better Gemini.
I thought Gemini 2.5 just wasn't quite there.
And I didn't know that.
What if Google does this?
All the VCs were telling me,
your idea is Gemini 3?
What if Google does that?
And I was like, everyone says that
about Google things.
Everyone says that about startup ideas.
It's not worth it.
I'm just going to try to build Gemini 3,
but then they beat him to it.
That's what I meant.
Anyway,
Department of War,
critical areas,
of new technology,
applied artificial intelligence,
quantum and battlefield
information dominance,
bio-manufacturing, contested logistics,
scaled directed energy, that sounds crazy.
Scaled hypersonics, very excited for that.
A bunch of interesting stuff.
Emil Michael is firmly in the chair
of the Undersecretary of War.
Very excited.
I hope we can get him on the show soon
to understand what he's doing over there.
We will make it happen.
Well, thank you for tuning in to the show today, folks.
We love you dearly, and we will see you tomorrow.
Have a good evening.
Cheers.
Goodbye.
