TBPN Live - Dwarkesh Patel, Nadia Asparouhova, Augustus Doricko, Casey Handmer, Ishan Mukherjee, Mike Knoop, Trump Pardons Milton, Coreweave IPO Winners
Episode Date: March 28, 2025
TBPN.com is made possible by: Ramp - https://ramp.com | Eight Sleep - https://eightsleep.com/tbpn | Wander - https://wander.com/tbpn | Public - https://public.com | AdQuick - https://adquick.com | Bezel - https://getbezel.com
Follow TBPN: https://TBPN.com | https://x.com/tbpn | https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231 | https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235 | https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV
(02:30) - Trump Pardons Trevor Milton
(15:26) - Coreweave IPO Winners and VCs
(30:55) - Dwarkesh Patel
(01:07:40) - Casey Handmer
(01:31:55) - Nadia Asparouhova
(01:47:21) - Ishan Mukherjee
(02:00:59) - Augustus Doricko
(02:30:02) - Mike Knoop
Transcript
Discussion (0)
You're watching TBPN.
It is Friday, March 28th, 2025.
We are live from the Temple of Technology,
the fortress of finance, the capital of capital.
This show starts now.
We got a great show for you today.
We have a whole bunch of guests.
Dwarkesh is joining.
We got his book here.
He sent me the PDF.
I printed it out.
Did you?
Out of respect.
Out of respect.
But if he doesn't, if he doesn't come on the show.
It's basically a manuscript.
If he bails on us, you heard it here first,
I'm gonna tweet out the link to the PDF.
Piracy's back, folks.
Napster's being sold for 200 million, and it's a threat.
Honestly, you could just upload it to chat GPT.
It's easier.
People can just query it.
It's a fantastic book.
I highly recommend going and picking it up.
Stripe Press never misses.
The folks over at Stripe are fantastic.
We have another Stripe Press author coming on the show.
We have a bunch of incredible thinkers and founders
and investors lined up.
Little bit of an AI focus day,
little bit of focus on geoengineering today.
Casey Handmer and Augustus Doricko will be coming in
talking about solar and also cloud seeding. Augustus
has been accused of being a part of the deep state, and he's breaking his silence,
yes, with TBPN, so we should have some fun with that,
obviously, but it is kind of getting to a serious point with his drama,
where I think that the tone needs to shift, and hopefully we can be
a part of that. But first, let's start with a great post by Delian
advocating that Trump should un-pardon Trevor Milton because of how bad his video is.
Nothing else. It's just the video. Yeah, which is so bad.
And so we're gonna go through it. Trevor Milton is the founder of Nikola,
an electric vehicle company making zero-emission trucks.
He went to prison over a fraud case,
and we'll break it all down.
Delian says, my God, can you be more narcissistic?
Maybe take some responsibility for your fraudulent behavior.
Yeah, it's not a good signal if Trump pardons you
but doesn't post about it.
Oh yeah.
Because if he was going to get positive attention
from doing it, you'd think he would, you know.
Even if he knew he was gonna get some negative attention,
but I don't think anybody, it's hard for me to see,
you know, prison is a sad thing,
but it's hard for me to see sort of whatever the argument is
around freeing Trevor, given what seems to be pretty clear cut pretty clear cut totally
But let's read the Wall Street Journal article on it and get some facts and then we'll make and then we'll give you our take
So Trump pardoned Nikola founder Trevor Milton. Milton was convicted of fraud
in 2022 for statements about zero-emission trucks.
Milton said in a video posted to social media Thursday that he received a call from Trump
who spoke about how much of an injustice this all was
done by the same offices that harassed and prosecuted him.
"'The greatest comeback story in America is about to happen,'
added Milton.
The White House on Friday confirmed
that the pardon had been granted.
A federal jury in Manhattan convicted Milton in 2022 on one count of securities fraud and two counts of wire fraud,
which is kind of always the framework for these cases, anything that goes wrong in business.
It's all either securities fraud or wire fraud because you're making statements that lead to
someone buying or selling a security that are false. During the trial, prosecutors portrayed Milton as a con man who duped investors
including in podcasts and on social media about the company's sales and the
capabilities of its vehicles. In one instance, prosecutors said he created a
video of what appeared to be a truck driving normally. This is the famous video of
the truck going down the hill, but really it was an inoperable prototype
rolling down a hill.
And that video always got me because I just feel like
such a skill issue there.
Like can't you just build one prototype truck?
Can't you retrofit a truck?
Even if it was a flat road.
I mean there are kids that are like
engine swapping Miatas right now,
and you can't, like, engine swap a truck
and put an electric powertrain in there?
It doesn't seem that hard to get something functional.
The hard part is actually the scaled manufacturing.
That's what Tesla is good at
and that's why Tesla's so impressive.
The first Tesla was impressive too.
But even with the first Tesla,
it was built on the Lotus Elise
and they just chopped it up and extended it and stuff.
It was very much like, the criticism was like,
that's just a Lotus Elise with an electric powertrain.
And it was impressive and it's great
for all the investors that saw that,
but it was never misrepresented.
Elon was never like, oh yeah, this design,
you might think it looks like a Lotus Elise,
but really we designed it.
No, he just said like, yeah, we used the body
from a Lotus Elise, we paid them, they're happy,
we're happy, and that's why the Tesla Roadster,
the first one, looks like a Lotus Elise, and it's fine.
And that's not fraud, that's just actually doing
the important parts of your business
and outsourcing the ones that aren't important.
Calder in the chat says, put him back in jail,
don't let him out until he's built a truck.
Yes, this is our thesis.
Yes, I wanna see breakthrough biology
from Elizabeth Holmes.
I want to see a truly trustless
ZK proof network hacked together by SBF in prison.
He's moving prisons, by the way, not a full story, but just an interesting fact.
His morning routine now includes getting woken up at 3 a.m. by the guards.
Yeah, the real, sort of, you know, crypto protocol is one that you could build and launch from within jail.
Just pure code.
You sort of get it onto one of the library computers and then it's just out in the world.
I mean, SBF was telling that story of like, I'm the Jane Street guy, I'm this genius, I'm
this amazing hacker.
And it's like, okay, show us. Show us.
Build the chain.
Build something better.
Build the chain.
Build the chain.
Write some code.
What's the language, Rust, that they all use or something?
I forget.
I don't know.
It's a hard programming language.
Programming for crypto is not easy.
And so if you can drop some amazing code, some decentralized network or something, he would
win a lot of fans back instead of just being like,
ah, it's all talk.
Yeah, and here's something.
Campaign finance records show that Milton and his wife
donated more than 1.8 million
to a Trump fundraising committee in October.
Wait, like from prison?
I guess.
You can do anything from prison.
That's what we've learned.
Going to prison, you can do podcasts,
you can tweet apparently, you can make political donations
What can't you do in prison?
Yeah, you would think that from within prison, yeah, there should be some guardrails around sort of
this type of thing. But yeah. Oh, I mean, also,
Milton's lawyer is Brad Bondi, who's the brother of Attorney General Pam Bondi.
And so clearly cozied up to the Trump administration
as things got worse for him.
Unclear if he was connected to the Trump administration
before all of this happened.
But if you're running a fraud,
it probably pays to start making some friends in Washington.
Also, if you're not running a fraud, probably need a lobbying group anyway.
Get a DC office.
The lesson is have friends in Washington.
Yeah, I guess no matter what.
Yeah.
And so Milton maintained his innocence
and he said he acted in good faith
accusing prosecutors of cherry picking his public statements
to build their case.
And of course there is an argument
that this company could have been successful
and was just kind of faking it till you make it,
but there's always this question of like,
when is too fake, when are you faking it too much?
And there's always this like fine line.
I think a decent amount of faking is actually acceptable.
Like it's fine to put up a CGI render
of what you want to build and just say,
hey, this is my goal.
It's not here today.
But it's so different for it to be CGI
or even a hype video versus a demo.
Totally.
Faked demos are wrong.
But even Google has gotten in trouble for faked demos.
And this is why, any time you see these
hard tech companies that won't be named on X
creating videos where they sort of blur the line
between hype video and demo,
I believe that that is wrong.
Totally.
You gotta kind of like draw the line around,
is this advertising for a future state
or is this the current state?
And the reason for that is if you're putting something out
that you're sort of positioning as a demo,
but it's not real,
and then you're using that to raise money, that's wrong.
Yeah, yeah.
I do, it is interesting, like Google had an AI demo
where you could just take a user camera
to kind of communicate with Gemini and use voice.
And it was a very cool demo,
but people dug into it and found out
that part of it was sped up.
And so they had done some light editing,
so it wasn't a completely unedited video,
and they hadn't disclosed that,
but they hadn't said that it wasn't edited.
And so it was more just like a bad day in PR world,
but you could imagine that if the stock had sold off a lot,
there could be a securities fraud case,
but we'd still be like 25 steps away from the founder CEOs in jail.
The Nikola case is like much more, you know, like just way, way more red flags.
And so Milton is 42 years old.
He founded Nikola in his basement in 2015.
He took it public in 2020 at a valuation of 3.3 billion.
He resigned from the company later that year
after a short seller's report alleged
he made misrepresentations about the status
of the company's vehicles and the production
of hydrogen fuel needed to run them.
So it wasn't even an electric vehicle,
it was hydrogen, I remember.
Nikola, whose market value briefly eclipsed
that of automaker Ford before the fraud case
against Milton, filed for bankruptcy last month.
Everybody has put Ford squarely in their targets.
Adcock clearly wanted to raise at a bigger valuation
than Ford, otherwise why would you price the round
exactly a hundred million dollars above it or whatever.
But Ford is Lindy, I think we're gonna be driving
Ford GTs in the year 3000.
They're just not, they're not going away.
Yeah, it's too great of a company.
Low-key the GOAT. Yeah, I'm 100%. I was about to say, many people said, why don't you just attach a motor to the horse?
Why don't you make robotic horses? Yeah, yeah, not a motor-operated horse.
He said, we have to wait a while, you know, robotics can get there, but for now we're gonna make cars.
But hopefully the horse makes a comeback. Hopefully, you know, all this humanoid buzz with
Figure and Tesla Optimus, we know we want the robotic horse first. Yeah, I want to be galloping to work. The self-driving horse.
The horses are self-driving. You can literally train a horse to, hey, go home,
and you can just kick him on the butt
and he'll just take you home.
Who needs a Waymo?
Who needs a Waymo?
Let's bring horses back.
Milton had sold roughly $400 million of stock in Nikola.
Wow, that's a lot of secondary.
It delisted its shares from the Nasdaq a few days ago.
Two weeks ago, federal prosecutors asked the judge
for Milton's criminal case to order him to pay back
nearly 661 million to shareholders.
The SEC sued Milton in federal court in July of 2021,
alleging he committed civil securities fraud.
That case, which was on hold during the criminal proceedings
remains active.
Court records show the SEC declined comment.
So this is interesting.
He got $400 million.
They're going to ask him to pay back $661 million, but in the meantime, he's spending
that money and making campaign donations.
What happens?
I mean, he will just be personally bankrupt, right?
So basically, he has $400 million
that he can just spend, and he can just give away,
and if there's no way to claw it back,
like if he buys a house, and then he goes into bankruptcy,
they'll force him to sell the house,
and then the house will go to the shareholders,
or the proceeds from the house, but if he donates it,
what?
He's out now.
He's also probably got another 30 years
of his career in hard tech.
I could easily see him running something back.
Come back.
I mean, I'd love to see it.
We've always maintained that it's always sad
when people go to jail and we always believe
in restorative justice and we certainly hope
that he gets back on the mechanical horse.
Yeah, the video, to Delian's point,
it just makes it feel like maybe he doesn't,
maybe he doesn't actually feel bad at all.
Yeah.
It's odd.
Odd.
Anyway, Trump issued a raft of pardons since taking office
and has pledged to crack down on what he has described
as the weaponization of the justice system.
Trump was twice indicted by the justice department
after his first term and separately convicted
of falsifying records in the State Court of New York.
Both federal cases have been dismissed and Trump is appealing the state
conviction. Earlier this week Trump also pardoned Devon Archer, the former Hunter
Biden business partner who gave congressional testimony about their
business dealings. Milton, Archer, and Silk Road founder Ross Ulbricht, whom Trump
also pardoned, were all prosecuted by the US Attorney's Office
in Manhattan.
The Justice Department has also asked a judge
to drop a bribery case against New York City
Mayor Eric Adams.
So, oddly, a couple of Democrats getting pardoned,
a couple of Republicans getting pardoned,
a couple of, seems like apolitical people
who have maybe just asked politely for a pardon.
I think the future, if stuff gets really, really political,
will just be, when you get sentenced,
you're in prison when your party's out of office.
And then you get pardoned as soon as your guy
gets into the White House, and then you get un-pardoned
on day one of the new administration,
and you're just in and out of prison every four years.
Exactly.
I think that's the future.
Yeah, and you're just donating to your team
while the person's in prison to try to make sure
like your team wins, otherwise it could be an eight-year
stint.
Schrodinger's prisoner, both in and out of the clink.
Dark.
But a lot of people made money,
a lot of people lost money on Nikola. If you were buying puts, you probably did pretty well. Yeah, those short sellers. Yeah, they probably did.
We should actually have them on. I wonder, what do you think it was,
Hindenburg that did it? I wonder who did the actual Nikola short seller report.
But anyway, if you're interested in going long or going short,
you've got to do it on Public,
investing for those who take it seriously, multi-asset investing,
industry-leading yields, trusted by millions, folks.
Go to public.com, sign up.
The best in the business.
And there's gonna be a new stock
that you can trade on public any day now.
Wait, before that, Nikola in 2020
had some coverage from Hindenburg Research,
not the firm that you want to get coverage from.
The report was titled, How to parlay an ocean of lies into a partnership with the largest auto OEM in America.
So, anyways, it's a little rough.
Hindenburg's kind of retired now. Yeah.
Anyways.
Out of the game out of the game, But maybe there'll be a new viral short seller
that emerges soon.
I think it's Shkreli.
Yeah, he's doing it.
Yeah.
He's been doing pretty well.
He's really doing it.
He's like, he actually realized you don't need to
do a report.
You just need to post.
Yeah, just post.
Just post.
But yeah, he's been good.
Anyway, new stock coming to the market.
Everyone's talking about CoreWeave's IPO.
And we're gonna go through The Information's report
on the winners in the CoreWeave IPO
and why VCs missed out.
We had Tane on the show and he had a great quote.
He said, CoreWeave is the web three to AI pivot done right.
So it was originally a crypto mining company
and now it's an AI data center.
Potentially the best to ever do it.
The best to ever do it, truly, truly.
And it looks like it's gonna be a fantastic outcome.
It looks like it's gonna price right around 40,
up from five cents a share in 2019, not too bad.
And so we'll take you through who invested,
when they invested and how much money they made.
Let's kick it off with the information
reporting from Cory Weinberg.
He says,
CoreWeave's IPO, which started trading today,
had a rocky path to the market,
but delivered a windfall to Magnetar Capital.
The investor bet big on the AI data center startup,
but presciently protected itself
from the risk of big losses.
And if you wanna pull up what Core Weave is doing on public,
I'd love to know how is it trading right now,
because this is the IPO day.
Today is the IPO day?
Today is the IPO.
I didn't even.
Boom, we got IPO news, folks.
I'm pulling it up right now.
And I'm gonna need some soundboard from you,
I'm gonna need some public.com data from you,
I'm gonna need a lot of stuff.
It's so crazy, I didn't actually realize
that they were trading it.
Yeah, well, there's always this trickle
of like the S1 comes out and then there's gossip articles,
will they go out, will they not go out,
will they scrap it?
Well, they went out, it started trading today,
it had a rocky path to market,
but delivered a windfall to Magnetar Capital,
the investor bet big on the AI data center startup.
But they also protected themselves
from big risk of big losses.
It seems like they sold at a certain point
in the secondary market.
So Magnetar invested in CoreWeave
when others wouldn't touch it.
And that helped it get favorable terms
that turned 850 million in equity into 4.3 billion folks.
Let's hear it based on the $40 per share IPO price. That's a good
return. And they weren't exposed to significant risk. Magnetar is the biggest of a handful
of winners in the offering, which priced Thursday night at a lower price than expected. So CoreWeave
has long been controversial. It borrowed billions to build data centers that serve the huge
demands of artificial intelligence. That debt really scared off Silicon Valley's
big venture firms and none of them will be celebrating
one of the biggest tech listings in recent years.
A lot of VCs have a blind spot, said Nic Carter,
former guest of the show, a venture capitalist
focused on crypto startups and he made an early investment
personally in CoreWeave.
We had him on to talk about that.
We'll have to reshare that clip today.
The conventional wisdom as a VC is you don't want to invest
in capital intensive businesses.
This is something that we've seen again and again.
They're harder to underwrite.
And they did quite the party round.
Oh yeah.
If you look at some of the friends and family,
there's a Russian poker player, Italian music producer,
a New York based plastic surgeon,
and a Sesame Street voice actor.
That's exactly what you wanna see.
Which is kind of what you wanna optimize for
in the early stages.
You never know, one day you might need some,
as a founder, the founder might need some plastic surgery,
the next day you need a voice actor.
People talk about this, it's diversity of thought.
Exactly.
You want diversity of thought around the table.
Yeah, the next day you might say,
you might be down to your last
hundred grand and need to take it to Vegas. Yep. Multiply it. Got your
poker player. He'll help you there. If you can get him into the country.
That's helpful. You need to make a jingle. Yeah, the jingle.
The Italian music producer. Yeah. The big one. The big one.
The surprise. You want to sing a CoreWeave jingle in an Italian
voice. What's their tagline? CoreWeave: big data centers, highly leveraged.
Yeah, I don't know if this is accurate,
but their debt to equity ratio right now is 800,
over 811X.
Let's go!
We love leverage.
That doesn't seem possible.
I don't know if it's right, but.
When CoreWeave started, investors were betting
more on the canniness of its founders,
Michael Intrator, Brian Venturo, and Brannin McBee,
than on their business plan. The first business of CoreWeave, Ethereum mining, became worthless
after Ethereum holders essentially cut miners out of the process in the fall of 2022.
That of course was the merge when Ethereum went to proof of stake as opposed to proof
of work.
I had mentally written off the investment, Carter said, I thought they were going out of business.
Instead, the founders repurposed chips and computing equipment they used to mine Ethereum
into other uses like graphics rendering and AI. And of course, Crusoe Energy also did something
similar. They were using peaker plants and oil and gas extraction, just the extra energy that's
sitting stranded on the grid, you can't use it to power homes,
you drop a bunch of server farm right where the energy
is super cheap or maybe even free,
and you start mining crypto, and then over time,
you start training AI models.
So let's go into Magnetar Capital.
Magnetar, which turned a $50 million loan
to CoreWeave four years ago,
into a multi-billion dollar stake,
is set to be
the largest beneficiary from the IPO.
So it sounds like they were...
They know what they're doing.
Yeah, they knew.
I mean, absolutely insane.
Magnetar, run by former Citadel traders, is CoreWeave's largest investor and has continued to back
the company through its most recent private fundraising.
Its name is mentioned 157 times
in the company's IPO prospectus more than twice
as often as the CEO.
I mean, you look at this chart,
it's like it came in with a convertible note in 2020
at $2 a share.
They did the series B at $5.5 a share,
or $5.58 a share.
And then they also came in the series C, $38.95 a share.
And then they did a tender offer at $47 a share.
And so they have done just again and again and again
been involved in this business.
So this was cool.
So the managing partner of Magnetar, David Snyderman,
says, sometimes the stars just align.
I think we were the first firm to get
comfortable lending
against that asset called high-performance compute.
He's referring to CoreWeave's inventory of high-end chips
produced by Nvidia to power AI products.
Some of Magnetar's investments came in the form of loans
that turned into equity.
This is the convertible note facility.
Yeah, if it was a convertible note,
it's sort of confusing
to just say this was a loan.
I think they did it all of the above.
Yeah.
I think they did do just straight up debt with warrants
on top, and they converted those because it went well.
It also vacuumed up shares from executives
selling privately and secondary offerings,
but it forged an agreement preventing founders
from selling more than 20% of their shares in the year following the company's IPO.
Remember, Tane was telling us that there was a lot
of secondary action by the founding team.
I think they had sold hundreds of millions of dollars.
And so they've created this agreement that,
hey, the founders won't sell more than 20% of their stakes,
but because the company's so big,
it's still a huge amount of money
and a huge amount of liquidity in a frothy AI market.
This is an important IPO because if it does well
and it becomes a really strong business,
it's incredibly bullish for the entire industry,
but at the same time, there is a lot of debt.
It's an early-stage company.
They've only been in business for a couple of years.
So we're watching closely.
I just hope that the angel investor
that's the Sesame Street Voice actor
was able to get some liquidity.
I think they probably did fantastically.
I mean, Nick Carter did great.
So why would the Sesame Street actor not do well?
As for Recruit.
I wonder if they were able to prevent a lot of dilution
just by being able to fund a lot of this growth
through the debt offerings.
Yeah, I mean, it seems like it was kind of 50-50
because they did take a lot of dilution
from the penny warrants, right?
That's why Magnetar is mentioned so many times
in the IPO prospectus.
But probably less than if they were just trying to,
hey, we need $50 million to buy a bunch of chips
and build a data center, and we have no revenue,
and so it's a $10 million pre-money
and we're gonna give up 80% of the business or something.
What is the alternative if you're not doing debt
at this scale?
As CoreWeave grew, Magnetar lent another $730 million,
collateralized against CoreWeave's
contracts to sell computing power to Microsoft
and Nvidia.
CoreWeave has paid $66 million in interest so far on those loans, so less than 10%, and
these mature by the end of the decade.
Magnetar was the second biggest investor in CoreWeave's fundraising last year, putting
in another $350 million, documents show.
Magnetar and tech investment firm Coatue
were able to hedge their bets. They got a put which allowed them to sell their stakes back to CoreWeave if the stock fell two years after the IPO.
That could become an important investment term, because investors in the round paid
$38.95 a share, meaning they stand to lose money on the deal if CoreWeave stock goes down much
after the $40 a share IPO. Of course, they might not exercise that immediately.
They might wait it out, but see where the stock is in a year or two, but they do have
that put option for a while.
If investors were to exercise the right, it would be very costly for CoreWeave, the company
said in its filing.
Spokespeople for Coatue and Magnetar declined to comment.
They're not talking because they're in a quiet period, of course.
Several funds are underwater on their investments
in CoreWeave at the IPO price.
Investment firms Jane Street, Fidelity, Macquarie,
BlackRock, Neuberger Berman, and others bought
$650 million in stock last November from early investors.
Moment of silence for the folks who are in debt.
This moment of silence is brought to you by public.com.
They bought at $47 a share, 15% above the IPO price.
Magnetar, best known for winning bets on dodgy mortgage loans before the financial crisis,
these are former Citadel guys after all, they know what they're doing,
is now deep in the AI investing frenzy.
It's expected to write one of the largest checks to OpenAI, also a CoreWeave customer.
In OpenAI's $10 billion funding round this month, Bloomberg reported, Magnetar launched
a VC arm to leverage its relationship with CoreWeave by investing in new AI startups
in exchange for access to the NVIDIA chips CoreWeave owns.
And CoreWeave put $50 million in that.
And we've talked about these VC funds attached to startups before.
You said maybe it doesn't make sense
for Perplexity, but in CoreWeave's case,
it's like they have the underlying data center,
they have something that's very complementary
to an AI company, an AI app, an application layer company,
and so maybe it makes a little bit more sense in this case.
And if you're building an AI application company
and training or inference is one of your largest
expense line items, maybe it's great to actually
have that relationship really tight and take some equity.
I think it makes sense once you're maybe past
the billion-dollar stage,
at least a billion dollars of revenue,
then do your VC fund. Yeah, like it seems like CoreWeave has product-market fit, right? Yeah. It's a pretty basic business.
Yeah, they build a data center and then they sell access to that data center. Yeah, and
they're doing great. And interestingly,
Dylan Patel just put out
on SemiAnalysis the GPU cloud ClusterMAX rating system,
how to rent GPUs in the US.
I'm gonna try and refresh this.
But I believe he gave CoreWeave a very strong rating.
It actually rated a few,
let me see if I can go to SemiAnalysis.
You wanna keep reading?
The article's mostly over,
I can cover a little bit from Vidya.
Yeah, so Semi Analysis rated every single Neo Cloud,
CoreWeave, Crusoe, TogetherAI, Oracle, Azure,
AWS, Lambda, Google Cloud, and interestingly,
CoreWeave got the highest ranking.
They're the only one that won
the SemiAnalysis Platinum tier.
In the Gold tier were Crusoe, Together AI, Nebius, Lepton AI, Oracle, and Azure.
But AWS got silver and Google cloud got bronze,
which is something that most people wouldn't expect from a hyperscaler.
But you know, we've seen with Google Cloud, they just had to buy,
they had to shell out $32 billion for
upping their security. And you know, even though they are known as the greatest hyperscaler
in many ways, there's a little bit of like,
maybe they were behind the ball on specifically running
AI workloads in the cloud for other companies.
Whereas CoreWeave built their cloud specifically with AI
in mind for the last few years, kind of had a fresh start,
maybe a Greenfield project and did very well
and earning high marks from Dylan Patel
over at Semi Analysis.
Anyway, there's another interesting line in here
about Les Wexner.
He's in here, he owns 3% of CoreWeave.
He invested $1 million through a trust
and that's worth something like $800 million now, of course.
Les Wexner is a controversial figure.
He's been linked to Jeffrey Epstein, and he also in here,
he used to own Victoria's Secret.
And so again, not your traditional AI data center
investor, but someone who got on the cap table early
and rode that stake in a really, really huge way.
Yeah, I think it was roughly 700x.
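As a quick sanity check on that figure, here is a minimal sketch of the arithmetic, using only the approximate numbers mentioned above (the roughly $1 million trust investment and the roughly $700-800 million the stake is reportedly worth):

```python
# Back-of-the-envelope multiple on the reported Wexner trust position.
# Inputs are the approximate figures mentioned above, not audited numbers.
invested = 1_000_000            # roughly $1 million invested through a trust
stake_value_low = 700_000_000   # the "roughly 700x" framing
stake_value_high = 800_000_000  # the "something like $800 million" figure

print(f"Low end:  {stake_value_low / invested:,.0f}x")   # 700x
print(f"High end: {stake_value_high / invested:,.0f}x")  # 800x
```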
Banger.
Banger.
Never thought we'd be hitting the size gong for.
Size is size.
Size is size, though.
Size is size.
Nvidia is also in the deal,
probably Silicon Valley's biggest winner from the offering.
Yeah, it's good to see them get a win, to be honest.
Finally, finally.
Yeah, Jensen needed it.
They've been sitting on the sidelines.
It's been a rough, yeah.
They really did need like an AI narrative for them.
They needed something to really pump the stock.
So they're, of course, the dominant chip maker. They have played a
crucial supporting role as one of CoreWeave's largest investors. It owns about 3% of the
company fully diluted after having bought about $100 million worth of shares in the
Series B in early 2023, which was, you know, when that ChatGPT moment was just taking off and
scaling laws were just kind of becoming popular.
It's so strange that Nvidia and Les Wexner
have the same ownership.
That is crazy.
CoreWeave.
Yes.
Narrative violation.
So Nvidia owns about $700 million at the IPO price.
The Victoria's Secret, Jeffrey Epstein guy.
Yeah.
And has the same ownership levels as Nvidia.
One of the greatest companies of all time.
And CoreWeave, one of the highest profile cloud computing ideas.
I mean, it really is just a bizarre company.
Like, they're doing amazing stuff.
You can see from Dylan Patel's analysis,
the product that they've created is clearly top notch.
I believe that Dylan Patel's very objective in his analysis,
and he's not, because of the nature of his business.
There are some lists in this industry
that you can more or less pay to get on.
Yes, but I don't think this one is. I don't believe that Dylan Patel's is like that.
Well, yeah, it's 11:30. Oh, it is, and we have a special guest. We do. I believe they are in the waiting room.
Let's bring him in
What's up guys? Hey, how you doing? There he is. What's going on? Good to see you guys. It's great to have you.
I heard you've been doing spaced repetition to practice for interviews.
So I've been doing it. It says,
your name is Dwarkesh Patel, you have a podcast, and you interviewed somebody named Mark who owns a website called Facebook.com.
Can you tell me about that?
Yeah, in fact, I made an entire Anki deck just for you guys.
Really?
No way.
No way.
We're honored.
That's great.
How's the book launch been going?
How's the press tour?
How are you doing today?
It's been going good.
The book is
called The Scaling Era,
and it compiles the interviews I've been doing about AI
with people, like as you mentioned, Mark and Demis
and Dario, the heads of the AI labs,
but also researchers and engineers
and philosophers and economists.
And it's been really interesting
because AI is one of those topics
where there's so many fractal questions you could ask
about what is its impact gonna be,
how are we gonna train it,
how do you even think about a super intelligence.
So been dealing with a lot of different kinds of questions,
which has made it interesting.
That's great. How contentious was the sort of
book process? Did you know you wanted to go with Stripe? I imagine you could
have had your pick of the litter in terms of, like, legacy publishers that
promised you all sorts of things, but you happen to own, you know, sort of your own
distribution, so maybe it was just about picking the right sort of underlying
partner for it. It honestly was never a matter of picking a publisher.
The main question was whether I should do a book
in the first place.
And so some folks at Stripe Press reached out
and if I was gonna do a book, it would be with them
because as you know, their reputation precedes them.
And so then it was just like deciding,
do I want to do the book or not?
And I think in retrospect it was the right call.
I think I'm really delighted with how it turned out.
I want to talk about acceleration.
Are you feeling the acceleration?
Mathematically, we are not accelerating GDP yet.
Although technically we are today.
I think GDP ticked up just a little bit, which means
we're technically accelerating.
It must be your podcast.
Yes, hopefully.
Yeah, hopefully we're responsible.
But energy use is not accelerating.
Even some of the benchmarks are kind of saturating.
We're not seeing acceleration curves.
We're seeing solid growth.
But at the same time, it feels like we're
on the precipice of acceleration.
But what does it mean to feel the acceleration for you?
I think it's a really good question,
because we have these models which we think are smart.
And as you say, we haven't seen them even automate
the things which we, like when we're having a conversation,
we'll be like, oh, the call center workers,
they should be really worried.
And they still have got their jobs, right?
So what's going on?
As you know, people have been talking about what we need to make the models cheaper.
When DeepSeek came out, they were like, oh, it's going to be Jevons paradox,
and we'll be using them way more now that they're cheaper.
I think the real bottleneck is just we've got to make them smarter.
I don't, like, they're already so cheap.
It's like two cents per million tokens or something. Ridiculous.
I think the real bottleneck for me using them more is not their price,
but them being more useful,
being able to take over more parts of the economy.
Yeah, do you think that intelligence is all we need?
Andrej Karpathy was talking about the importance of agency.
I've talked to other people about, maybe,
it's like, what makes humans effective?
It's not just intelligence, it's also agency,
it's also coordination, friendliness, networking.
Tyler Cowen's talked about, like, do we even need to map the different parts of like the
skill tree that humans have, like charisma, wisdom, like underrated is like these, like
the AIs are getting more intelligent, but they're already like maxed out on wisdom,
right? But do we need to think about a different taxonomy here,
or do we just need to max out intelligence
and everything else will come?
No, I think you're absolutely right.
I think you need a lot more skills.
There has been this trend in AI
where whenever there's a big breakthrough,
we think we've automated a large part
of what intelligence is.
And in fact, in retrospect,
it's clear that it was only a beginning.
So the big example here is when Deep Blue came out and beat Garry Kasparov at chess. Yeah.
People thought that this was like a big breakthrough in intelligence in general, because we thought that what chess required was general intelligence.
And you might have heard this concept of AI-complete problems,
where if you solve this problem, then you've solved intelligence.
So people said that about self-driving,
the Turing test was supposed to be AI-complete.
We've gone through all of these sub-components of intelligence,
and afterwards we realized there's actually still more left to it.
The thing that's sort of underrated is not even agency per se,
although that's a part of it.
I think the thing that's underrated is we humans have this global hive mind,
where the reason we can make iPhones and
we can make buildings and whatever is not just intelligence and also not just
agency, it's the fact that there's so much specialization, there's so much
capital deepening, people are just, like, doing things, trying different ideas.
AIs need to be smarter in order to do that, and they need to have more agency
in order to do that. But once they can, if you have millions of AIs
running around trying different things,
that's when we get the real acceleration,
and you'll feel it in your blood.
When you say millions of AIs,
you've said that you think that there will be
billions of AIs running around.
What does that actually mean?
How can we quantize?
Are we just talking, like, ChatGPT DAUs?
Are we talking about individual threads?
Because you can inference multiple threads
on a single chip.
That's right, that's right.
Like, how are you thinking when you think,
it sounds more concrete when you say
there will be billions of AIs,
but is that really just like,
there will be a hive mind that is equivalent
to billions of people?
Are we talking about like one,
or maybe each model is like one entity,
but then there's sub-threads? How do you think about that concept of, like, billions of AIs?
Honestly, I don't think anybody knows. I think it'll just, like, depend on how the tech tree shapes out. Sure.
Um, I have heard, like, these wild ideas in some of these interviews, where
one person, Ajeya Cotra, mentioned this idea of the blob. And the blob is,
right now, you know, if you have
an institution or organization or company, it's really hard for the person at the
center to have that much awareness of what's happening in the company, to
control it to any great extent. Xi Jinping has the same 10 to the 15 flops in his
brain as any other Chinese person, or any other person in general. And in the
future, you can imagine that, look, the thing at the center just has way more compute, and it's not clear whether
you think about it as, like, more copies of AI Xi Jinping or AI Sundar Pichai or
something, but you could just have this, like, huge blob that's constantly,
you know, it's learning more things,
it's writing every single press release the company releases, it's reading every single pull request,
it's answering every single customer response.
I don't know if that at all helped answer the question.
Yeah, it's just a very weird question of, like, how this will actually play out.
Like, do we need to recreate the concept of, like, an individual brain and then copy-paste it a billion times,
Or do we just need one really big brain?
It's unclear to me, I don't know.
Jordy, what do you got?
How did you feel on Wednesday,
this sort of like Ghibli moment?
A lot of people were saying,
oh, I can't even go on X, it's just all slop.
And my takeaway was, this is not slop, this is beautiful.
This is like actually the most beautiful
the timeline has ever looked.
And the most powerful thing about that moment,
it was the first time that you,
like I felt that the entire world could get consistently
perfect outputs without any sort of like prompt engineering
and basically just one-shotting these outputs
in this sort of very scaled way.
Are you, are you, so to me, I get very excited about it
because it's like having any human be able to create
beautiful images out of text is fantastic.
And I think we should be excited about that.
But how, how did you kind of react to some people saying
like, you know, oh, this is bad or like,
I'm going to log off forever, right?
You saw people that were just like, okay,
like, I'm going to delete my account and just leave X, now it's over. How many
times have those people said that? I think that's very zero-sum. Some people
were saying things like, oh, you're, like, eating up this, you know,
fossil fuel, which is, like, our affection for Ghibli, somehow by making these
things. I think it's just a very zero-sum view of the world,
where there can be a limited amount of beauty,
there can be a limited amount of joy.
I just don't think that.
And can I be honest?
Like the thing I was really feeling
when all these Ghibli images were coming out,
I became more of,
I became more convinced
of our glorious transhumanist future.
Where like, look, you're getting a glimpse
just from these early images of how cool and beautiful
the things AI makes or helps us make will be.
Just imagine this scaled up like 100x, 1000x
integrated into all our senses, maybe even into our minds,
integrated into the way we relate with the people
we care about and so forth.
Yeah, I'm just like, the future could be really beautiful.
I agree.
Yeah, I'm wearing VR goggles, and it's
making everyone beautiful.
It's really rose-colored glasses, right?
Yeah.
Yeah.
I thought you were going to begin with.
Yeah.
Yeah.
How do you feel?
So Manus AI is apparently doing a roadshow in the United
States right now, sort of raising for American VCs,
potentially. I saw some people pushing
back on the timeline saying like, you know, bad luck to any American VC that does that.
How do you feel, you know, you know, and the criticism would be like, you know, we're in this
sort of AI, you know, Cold War, American, you know, venture capital dollars shouldn't be sort of
funding companies that are potentially competing with US AI labs or application layer companies. What's your sort of broad take on this sort of cross-border investment in AI?
It was really striking to me how dismayed the venture capital system there felt, and the tech ecosystem generally, because after the 2021 crackdowns, people just, like, really pulled back.
And it sounds like after the DeepSeek moment, that sort of changed, at least in AI,
because now the state, you know, the city funds and whatever, are more willing
to pitch in. Yeah, how should people react to this?
I'm of two minds because one, I do believe that there is, like there could be an intelligence
explosion and you really want to be ahead of that and you don't want to help them get
ahead on that.
So I think, like, the export controls and whatever are wise.
As for this, man, it seems like it's in the middle ground here, where I wouldn't want to just generally
try to harm China by tariffing batteries or cars or something.
This is an application of AI and it's complementary to American AI Foundation Labs because they're
using the Claude model, right?
So I honestly don't have a strong thing.
What do you guys think?
I don't.
I'm exactly in the same position as you.
I don't have a, you know, if American, you know,
I did find it a little bit weird that some American venture
capitalists were just like cheering on DeepSeek,
just like blanket statement, like, you know,
open source is good.
This is good.
When it felt like the way the launch was like rolled out
and announced was done in a way to potentially sort of harm
American financial markets.
But my take has been that just just on a pure investment finance
level, like investing in a Chinese company just can be difficult to get your money
back at a certain point because the money just gets kind of stranded there.
And then depending on who's in charge of America at the time,
it could be very onerous to bring that money back.
But I want to stay on China real quick.
I thought, I mean, fantastic piece.
I don't even know what you call it, a video essay,
but one of the things that really stuck out to me
as a creator like yourself
was the lack of Chinese Joe Rogan, basically.
And I was wondering, have you thought about that more?
Have you unpacked that more?
You would almost expect that even if all the crazy
censorship is true, why is there no power law winner
and there's a Joe Rogan who just spouts propaganda
constantly? What is going on that's driving the lack
of these long-tail, like, you know,
country-renowned, country-famous people?
Yeah, I feel like you guys might actually have a good perspective on this because
Somebody might have said, why doesn't tech have their, like,
Joe Rogan equivalent, and you guys started your podcast network, and I think somebody could have said before you guys started your podcast network,
why doesn't something like this
exist already?
I honestly, I don't speak Chinese, and I don't,
this is secondhand stuff I heard in China.
I feel, like, reluctant to make grand conclusions about Chinese culture based on, like, why don't they have a Joe Rogan, and then say, because, like, this is,
Chinese culture is like this or something.
the sense I got was that they are more concerned, like
whether it's young people or just whatever people want to consume
is often more focused on practical matters.
And if you listen to Joe Rogan, it's very much like,
let's just shoot the shit about whatever.
I get the sense that it's like,
that is just not that interesting.
Just like less practical.
To at least the people I met.
Yeah, yeah, it's less practical stuff.
I wonder if there's also an effect where
just some of the first social networks
probably accelerated more quickly
to these highly diffuse algorithmic driven TikTok feeds
that allow for smaller micro celebrities essentially.
Whereas in America we've been building up
this celebrity culture for so long
that we have this sort of power law dynamic.
I don't know, I don't really have a thesis on it
that's super built out, but it is fascinating.
Yeah, that's interesting, that's interesting.
Have you been there?
I was there once on a layover
but Jordy did live there for a while.
Oh really? In 2016 I worked at a, I went to Fudan.
It was a little hedge fund called High-Flyer.
Yeah, no big deal. No, I just studied abroad there,
so I was there for a semester, and I worked out of China Accelerator, which is like a
startup accelerator. So a very, very interesting experience.
I mean, the scale is crazy.
I was in Guangzhou even just for a day
and the scale of the buildings there,
it's just, it really is remarkable.
Like you need to see it in person
because the pictures kind of compress everything.
You don't really understand until you're there.
The one thing that I found that was weirdly fascinating
is I would, for some reason,
I don't know if it was the iPhone camera at that time,
but I found it very difficult to get high quality images
because it was so polluted that something about,
your own eyes would be able to kind of like,
I don't know if it was a horror,
but I would take a picture and I'm like,
that's not what it looks like.
And I realized over time that it was just like,
certain areas were so heavily polluted
that the iPhone camera would just kind of bug out.
But I had a question for you.
Something from this week that I thought was funny
was that we were all kind of holding our breath
for the next, like, ChatGPT moment.
And then it was also ChatGPT with their image generation
product.
Do you think we should be holding our breath
for sort of another company to sort of experience
this sort of moment like that, where it's just full,
you know, complete takeover of the mind share, right?
Because mind share is just so important right now.
The sort of benchmarks come out
and everybody in our corner of the internet
is like hyper fixated on it.
But the average consumer, you know, one of the things I thought was fascinating is, you
know, John and I had a couple Ghibli posts each that sort of broke containment.
And a lot of people are quoting it and saying, all right, like, tell me what this app is,
everybody.
Like, what's the joke?
Like, they just still didn't know, like, ChatGPT or anything like that.
But I think it would be amazing for the industry if another company could have a moment that big.
Do you see something like that happening this year?
Or have we sort of reached a level?
Even the agent stuff, like flight booking,
you could see that there's some application that
would go real broad.
Is there any next milestone that you're waiting for?
If somebody did get a reliable agent to work,
I think that would just be like,
that would have a similar break to the internet
in a way that you personally could use.
And you could just like log in and,
I think it'll probably be one of the
foundation lab companies.
People have been for years trying to build agents
and they just haven't worked.
And it makes me think that that's a
fundamental limitation of the current models, and so it'll just be the
company that is building a future model that is geared towards computer use and
so forth. That is what I'd expect it to be. I mean, I was thinking the other day,
remember when Sam got fired and people were posting on Twitter, oh, there's
something, what did Ilya see? Q-star, yeah. Yeah, yeah. I don't think that's why he got fired, but it is notable that in retrospect,
if you were following the Q star rumors, you actually would have been in a good position to anticipate,
you know, that there would be this like reasoning breakthrough, that's kind of what they were talking about by the time.
Similarly, GPT-4.5 and Grok 3 being not that much better.
If you were following Twitter, six months ago you would have seen,
oh, pre-training may plateau, but we'll have to go to inference scaling or something.
So maybe my update has been that you can sort of know what's going to happen.
I mean, I remember at the time I was just like,
these idiots on Twitter are just like, they don't know what they're talking about.
They're just like a rumor mill.
And in retrospect, I'm like,
eh, kind of like, I mean, you know,
take it with a grain of salt,
but they kind of had the big picture.
There's like no secrets.
Yeah, there was always this funny dynamic
where people were criticizing Sam for launching ChatGPT
without telling the board,
but then at the same time,
people were criticizing Sam for being non-technical
and not driving the product forward.
And I was like, no matter what you think of Sam,
those two things cannot be true simultaneously.
You have to pick a side here.
You can't criticize it for both not innovating
and also innovating too fast.
But anyway, I want to talk about a flip side
of the P-Doom argument that I've been kicking around.
Basically, we've seen these, like, accelerating
trends before: nuclear energy, energy too cheap to meter, as they talked about before, and
we have hit stagnation. Like, it is possible that just something can break in our society
and all these different economic and political forces can align to just say, hey, you know
what, nuclear energy is not going to double every couple of years.
And it didn't. And I'm wondering, like, what is your p(stagnation)? Like, just your
probability that something happens, maybe people freak out, maybe there's just, you
know, one world government or something. But we actually see AI stall for a significant
amount of time, like 50 years, there's no intelligence explosion, purely for stagnation reasons. Do you think that's a possibility?
Like 10, 20 percent. I think there is a dynamic I talked about earlier, where in the past we have underestimated how much it takes to make a coherent intelligence
that has agency and so forth, right?
That could be part of it. Another is that there is no intelligence explosion.
So, sorry, I mean, the most important thing here is,
look, we can keep increasing compute
that we're putting into these systems
for maybe the next five, 10 years,
because compute is growing at this ridiculous rate,
where in three years, we're gonna have 10x
the amount of global AI compute that we have right now.
But at some point, right now we're spending 2% of GDP on compute and data centers and stuff like that.
You can't just keep like 10xing that forever.
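To make that concrete, here is a minimal sketch of the compounding, under the purely illustrative assumptions that AI spend tracks the 10x-every-three-years compute figure while GDP grows about 3% a year, starting from the 2%-of-GDP level mentioned above:

```python
# Naive projection: AI compute/data-center spend starts at 2% of GDP, grows 10x
# every 3 years, while GDP compounds at ~3% per year. Illustrative assumptions only.
share = 0.02
annual_spend_growth = 10 ** (1 / 3)  # 10x every three years, annualized
annual_gdp_growth = 1.03

for year in range(1, 10):
    share *= annual_spend_growth / annual_gdp_growth
    print(f"Year {year}: AI spend would be {share:.0%} of GDP")
    if share >= 1.0:
        print("...which is impossible, so the 10x-ing has to slow well before then.")
        break
```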
So if somehow this whole deep learning paradigm is wrong and we just like totally missed the boat
somehow, then I could see it happening. That's, like, the 10, 20 percent. Otherwise, if
we do get AGI, I'm of the opinion that it would just be so hard to
contain it. Like, it's an incredibly powerful technology. Even if there's no
intelligence explosion, even if it doesn't help you make an ASI or something,
just AGI alone would just make the economy explode and all kinds of crazy shit.
I mean, on the, like, there's a little bit of a force of deceleration, like the GDP question,
but also, I've had this idea that, like,
no matter how intelligent you are, you can't break the laws of physics. At a certain point
you need to, like, get the sand out of the ground and turn it into silicon, and
at a certain point, just moving the sand around fast enough,
even at light speed, you're not 10xing every two years.
And so it feels like there could be a slowing down even as we're having the robots do basically everything.
It's like the robots are still maxed out by physics. I don't know. I was thinking about this this morning, actually.
And the intuition I was thinking about is, so since 1750s,
we've had 2% economic growth in the world.
Before that, it was like a 10th of that, right, 0.2%.
If you were around in the 1500s or 1000,
and somebody said there'd be like 2% growth,
I think you might, given your reference class,
have been like, look, it just takes a long time to learn
how to artificially select crops and how
to build new structures and aqueducts and whatever.
That is a process that takes a while.
So why do you think you're just going
to be going through, like, increasing that 2%, 3% a year?
And in retrospect, it is, like, really weird.
You look at the last hundred years of history,
we're discovering all these new things
in physics and chemistry and so forth.
The last 50 years, we're like,
we start with the transistor
and now we're talking on this magical screen.
And that was just like, physics didn't bottleneck that.
I think, like, you get another 10x,
and I don't see any in-principle reason why,
at the next 10x, the physics
just would not allow the robots to move fast enough.
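For anyone who wants the 0.2% versus 2% comparison in numbers, here is a small sketch of the implied doubling times (standard compound-growth arithmetic, not a figure from the episode):

```python
import math

# Doubling time for a steady growth rate r: t = ln(2) / ln(1 + r).
for label, rate in [("pre-1750, ~0.2% per year", 0.002), ("post-1750, ~2% per year", 0.02)]:
    doubling_years = math.log(2) / math.log(1 + rate)
    print(f"{label}: the economy doubles roughly every {doubling_years:,.0f} years")
# About 347 years versus about 35 years: the shift Dwarkesh is describing is a
# roughly tenfold change in how fast the world economy compounds.
```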
Yeah, I mean, certainly on the GDP question,
I think the energy question is maybe a little bit murkier,
but then there's probably other ways to optimize
and still get those GDP lifts even with energy growing at a more reasonable,
less explosive rate. So I think I agree with you there.
I'm sure you've talked about this in other interviews and with some of the individuals
that are sort of leading initiatives at these companies, but what's your sort of broad take on
Apple's position and how they've been approaching everything in AI? You know, it's sort of like,
I've seen, you know, they've sort of led with, like, Genmoji almost as
much as they've led with, you know, everything, all the
potential of what you would want out of sort of an AI assistant. But
how do you think these, you know, companies like Apple and
Google sort of like figure out product development
and proper distribution of these products?
Because I feel like that's been the big critique.
It's just like they have every possible advantage.
Such talented team members,
and it has to be so frustrating internally.
And it should be a sustaining advantage
or sustaining innovation
if you're looking at the innovator's dilemma framework
and yet it feels like it might wind up being disruptive,
I don't know.
Yeah.
They're not AGI-pilled enough, you know?
Like if you treat it like another feature,
well, even if you treat it like another feature,
it's like mysterious why Siri doesn't work on my phone.
But AGI is like more people, basically.
And if you take that seriously, you're not just gonna be like,
oh, and the 25th department in our complex is about making Siri better at speaking or something.
No, it's like, this is the future.
Yeah.
And is that then the giga bull case for Safe Superintelligence, meaning none of the features or consumer applications really matter at all today, and you shouldn't even release them, and you should just accelerate towards the end goal that enables all the other goals?
Mm.
That's an interesting point.
I think maybe somewhere in between, where if you didn't release ChatGPT, you wouldn't have been able to know that this is a feature people really wanted and would get a lot of use out of, as compared to the other things people were using GPT-3.5 to do. And I wonder if other features of AGI will be similar, where if you don't deploy it to a bunch of engineers on Cursor, you just won't know what would actually make something a good coding bot. The counter-argument to this, and I think what the SSI people would say, is that they actually are deploying, but they're deploying towards the one thing they care about, which is accelerating AI research, and they don't need to do that externally.
They can just do that internally.
And so the basic question is,
can you get this closed loop where you build the AIs
which are helping you accelerate AI research,
dot, dot, dot, super intelligence.
I'm like 50-50 on that question.
But that other 50% is like a big deal.
Yeah, you mentioned AGI-pilled.
Is there a difference between AGI-pilled and ASI-pilled?
And why do OpenAI co-founders seem
incapable of starting anything but a foundation model
company?
Yeah.
I always wondered, I just want one of them to be like,
yeah, actually I'm starting a travel company.
But it seems to be like they are.
They're going to be traveling a lot after their jazz or golf.
They only know one thing. They're all one-trick ponies. I mean, I love them all, but it's just funny that none of them started anything else. It's the only thing that matters, maybe.
Yeah, I mean, I think you're right. I think some people I wouldn't even put as ASI-pilled, because maybe you don't believe in this God-like intelligence that's going to control the world. I'm not sure I believe it either.
I think there's AGI-pilled and there's transformative-AI-pilled, where you say, look, even if they're just like humans, if they have the advantages that AIs will intrinsically have because of the fact that they're digital, which is the fact that they can be copied with all of their brain, all of their knowledge, right? So think of the most skilled engineer in your company, like Jeff Dean or Ilya Sutskever. You can copy that person with all their skills and knowledge and everything. You can merge different copies, you can scale and distill AGIs. Those advantages alone, and the fact that there will be billions of copies as we increase the amount of compute in the world, that alone is enough for transformation in the sense of going from what we were like before the industrial revolution to the industrial revolution pace of growth.
And so I think somebody can be AGI-pilled in a sense, as in, yeah, I expect human-level intelligence to emerge in the next 10 years, but they still don't take that seriously, as in, okay, well, what does that imply about what is happening through the economy? Does that just mean, oh, you've got a smart personal assistant? Or does it mean, no, we're in a very different growth regime?
Two last questions, I guess. One question: are there any people in the book that you feel, in the fullness of time, are very underhyped or not getting enough attention, people that are unsung heroes, that maybe don't post a lot on X today, but when we look back, you know, 15 years from now, we'll be like, those were the people that were doing it? Because there's this weird phenomenon right now where if you're just loud on the internet, you suck up mind share and attention, and maybe there's somebody a building over that's doing more impactful work, or really at the forefront, that isn't posting at all because they're actually onto something.
That's a really good question.
I think a lot of the people I've interviewed have subsequently become, or already were, well known, right?
So even if you're not a lab CEO,
if you're like a Leopold or you're a Sholto or Trenton, people know who you are on Twitter as well.
The person who I think might be underrated still is an interview I did that we only released in the book.
So we have two interviews that we kept for the book. One of them is Ajeya Cotra, and she is somebody who has been doing, like since the 2010s,
these really interesting analyses of how much compute
did evolution spend in total in order to like,
over the billions of years evolution has been going,
like how do we model that as a computational
like pathfinding exercise and using that as an upper bound
on how long it will take to build AGI.
And then, how much compute does the human brain use?
How much like time do we spend learning as kids?
And how much compute is that in total, compared to what it takes to train these models?
What does that teach us about how much better
these models could get given this overhang?
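As a rough illustration of the kind of anchor-style estimate being described, here is a sketch with assumed round numbers. These are ballpark figures commonly associated with that line of research, not numbers stated in this conversation, and real estimates vary by orders of magnitude:

```python
# "Biological anchors" style back-of-envelope, with assumed round numbers.
BRAIN_FLOPS = 1e15                            # assumed: brain ~1e15 FLOP/s equivalent
SECONDS_TO_ADULTHOOD = 18 * 365 * 24 * 3600   # ~5.7e8 seconds of "training"

lifetime_anchor = BRAIN_FLOPS * SECONDS_TO_ADULTHOOD   # ~6e23 FLOP
evolution_anchor = 1e41                 # assumed: very rough total for all of evolution
frontier_training_run = 1e25            # assumed: rough scale of a recent large run

print(f"lifetime anchor:  {lifetime_anchor:.1e} FLOP")
print(f"evolution anchor: {evolution_anchor:.1e} FLOP")
print(f"frontier run:     {frontier_training_run:.1e} FLOP")
print(f"frontier run vs lifetime anchor: ~{frontier_training_run / lifetime_anchor:.0f}x")
# The debate is over which anchor, if any, is the right one: frontier runs
# already exceed the lifetime anchor but are nowhere near the evolution anchor.
```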
That has informed I think a lot of,
oh shit, sorry, this is the wrong answer.
Although Ajeya is excellent and she's also underrated.
The one who is also super, super underrated is Carl Shulman.
And I think, I don't know if this name rings a bell to you, but this man, you would not believe the amount of ideas that are out there in the AI ecosystem, from the software-only singularity, basically the intelligence explosion kind of stuff, to transformative AI and modeling out the economics of this new growth regime, and so much more. It's like, this one guy, it all came from him.
He doesn't like to write that much, so he tells other people his ideas. I had him on my podcast and we put his stuff in there. He just has all these galaxy-brain takes. One of them was, you look at the research on what changed between chimpanzee brains and human brains, and he's like, oh, there are a bunch of structural similarities. It's just that the human brain is bigger, so this lends credence to the scaling hypothesis.
I have one more question, and then we'll let you go. By the way, X is down. I think you broke it, or we broke it together. But yeah, we're still live on YouTube. It's the scaling era.
But how do you think about, more on the business side,
because I think everybody's been fascinated
with your journey.
When you started your podcast, everybody would have said,
there's enough podcasts, we just don't need more of them.
That clearly is not true.
There's plenty of white space,
and your growth is proof of that.
But how do you think about
value capture and what you're doing?
Because I'm sure you've had people come to you and say, hey, look, just come keep doing the show, but we're going to give you 50 million dollars of, you know, shares or whatever.
Nobody's done that.
Well, they should, I think. Yeah.
But yeah, how do you think about it? I think there's this general fear in the tech community
right now, where it's like, this is the last two to three years
where you can accumulate wealth, and then it's over.
And I don't believe that that's true.
But how do you sort of balance, you're
doing something you love all day long, which is just
talking to interesting people and thinking about the future and humanity's potential
and technology and all this stuff.
But how do you balance all that fear
and wanting to capture value from your work,
but also wanting to not be conflicted, right?
And being able to just be sort of this independent actor.
Yep.
I actually am very curious about the answer for you guys, because even though, the sort of network you're starting now, it just started with a much bigger bang than my podcast started out with. So I assume you actually got a bunch of these kinds of offers, right? As you're like, oh, this is new and exciting.
We care a lot about, you know, being like Switzerland, but specifically from like the investor, the investor side, right?
Like long term, we want to have any sort of investor be able to come on the show and talk
about what they're doing.
That's right.
And I would imagine the same thing for you.
Like, you don't want to be so tied in to one foundation model company that you can't talk about the incredible things that someone else is doing, right?
Yeah, yeah.
I have had like different podcast networks or whatever reach out to me in the past,
and I've seriously considered them.
In one case, I was like close to saying yes.
And in retrospect, it was like, this is many, many years ago before, like, the podcast had grown that much at all.
And it was like, we'll edit the show for you and we'll produce it for you, and all we ask is 50% of the revenue you earn into the future. 50% of your lifetime earnings.
But I had a couple of friends who were like, dude, it's working. Just do it yourself. And I'm glad, because another thing that might have influenced your decision as well is that talent is so key. Talent in the sense that I think it really matters to have one, or in your case two, people who are like, we care, this is our vision and we are going to instill it, rather than, this is an institution where I'm the face of it. But secondly, what I was going for is talent as in your editors, the other people on your team. For me, I've just been super delighted with the people I get to work with. The care and attention to detail they have just would not be replicated with, here's a team of editors from this podcast network. Instead it's people I've sought out and love working with, and I give them detailed feedback and they give me detailed feedback, and, you know, yeah.
That's also what makes it special.
Yeah.
On to the specifics: do you think the general techno-capitalist fear is warranted? I think a lot of 22-year-olds right now are coming into their careers and they're saying, well, everybody's going to be paper-clipped in a few years, so you've got to create value and capture it now so that you're okay in the AGI future. I do feel like that's maybe a common fear throughout history, right? People have this sort of impending sense of doom sometimes. But what would you say to somebody that had that sense?
Um, I think the way to model out the next few years from a career trajectory is that you'll just have 100x the leverage. But you want to be in a position where you can use that
leverage. There's a common thing and I'm sure you experienced this now as well is that as
you advance in your career, before you're like, I've got a bunch of time, but I don't
know what to work on. And after you're further along, you're like, I have no time,
but there's like 1,000 different ideas
I have for things that would be super valuable
or I think would go really well or something.
So I think what you should do is just get to a point
where in whatever you think is interesting or care about,
you're at the frontier and can see
what the problem space actually looks like.
If you care about AI, I would really recommend moving to SF.
And then just start working on problems and, you know, use the leverage that AI gives you. And if we end up getting paper-clipped, it's like, look, what's the point of you personally not doing anything and just worrying about that? In the 80% or 90% of worlds where we don't get paper-clipped, you'll get to say you worked on something really cool at a time that was really important in the history of humanity.
That's great.
Well said.
Thanks for coming on.
This is fantastic.
This is fun, guys.
We got to have you back.
This is really, really awesome.
Let's make it a regular thing.
I enjoy this.
And super bullish around you guys.
Thank you so much.
Well, your previous thing is a couple of months in, but this is a couple weeks in and you're
already killing it.
This is awesome.
Thank you so much.
We really appreciate you coming on.
And for the record, we're not building a network.
Yeah, it's actually a head fake. It's a head fake.
It's just a show. It's just a show forever. That's right.
A show that will never come to you and say, give us half for editing, and don't you want at least 75%? Yeah.
Great having you on. Fantastic. I'm excited for everybody to get access to the book.
Yeah, go get the book.
I printed it out.
We have a manuscript.
But you can buy the actual thing.
Yeah.
Thank you for doing that.
So again, Stripe Press.
The Scaling Era is here.
Cheers.
Thank you guys so much for having me on.
Bye.
See you.
Bye.
And we got Casey Handmer coming in.
We kept him waiting.
Sorry about that, Casey.
Hopefully, you're still here with us
because we wanna hear about Terraform.
We wanna, have you seen his office?
He builds from a castle in Burbank.
It's like amazing.
Some guy built these crazy castles.
It's a fascinating company.
Boom.
Hey Casey, how you doing?
Hello, very well, can you hear me?
Yeah, we can hear you. Great. Are you in the castle today? Are you somewhere else? Where are you?
I'm in the castle. I'm behind a fancy background of some synthetic natural gas samples we made
last year. But yeah, I mean, the castle's windowless wall is right by me here.
So, very cool. Can you give us just a little bit of an intro explanation of the company? And then
I mean, I do want to hear the story of like how the castle
got built again, because it's a funny story.
Sure, well, let's start with Terraform.
So three and a half years ago,
I rage quit my job at NASA JPL
and incorporated Terraform industries
to build cheap synthetic natural gas from sunlight and air.
And we've been at that three and a half years now.
We've made a lot of progress.
It's super exciting.
The castle itself was built in the 80s
by the Bandy family, who built many
of the industrial buildings in Burbank.
And they worked with a general contractor
who was down to do crazy stuff.
And it turns out if you're building from cinder blocks,
you can make them any shape you like.
So that's how we got a castle.
Yeah, yeah, it's amazing. Can we start with this quote tweet that went mega viral this week? You said, I think a general misunderstanding in either direction about how hard this is, is a major contributor to the outside context problem occurring in politics right now. Don't at me. It's a video of Elon Musk landing a rocket, and I think Elon quote tweeted you and said, just read the instructions. Can you give us some more context and unpack that? What exactly is the outside context problem? What's going on?
Yeah, it's a bit of a geeky deep cut. So the outside context problem is a concept popularized in Iain Banks's Culture novel series, in particular the novel Excession, which primarily deals with the Culture coming into contact with a previously unknown alien. I mean, the whole books are full of aliens, but this is an alien alien intelligence whose modality is very reactive, and so the usual ways of reaching out and probing and attacking and so on are just reflected with overwhelming force. And Just Read The Instructions is the name of one of the Culture ships in the Iain M. Banks novels. So in those novels there's millions and millions of fully sentient, very, very large spacecraft that fly around basically looking after humanity. The deeper point is I think Elon's a very interesting
person. I think he's obviously very underestimated, has been throughout his
entire career for whatever reason, and, um...
Sorry, sorry to interrupt, but is he underestimated in our corner of the internet, or just broadly? Do you think he's underestimated by the average Elon, you know, fanboy?
I think even then, um, yeah.
And I think, you know, conventional wisdom a year ago was like, oh, Elon is getting
involved in politics, he's going to shoot his feet off. And I think he genuinely took
a big risk backing Donald Trump for the election. If Kamala had won, I doubt it would have gone
very well for him. But now this person who a year ago, we're hearing all these op-eds like, oh, he doesn't
understand politics.
You're still hearing he doesn't understand politics.
He's taking a huge risk.
He should just focus on Mars.
What is he talking about, et cetera?
He's now in the White House, literally running the IT modernization process of the entire federal government, which is code for: he has root access to the entire government.
And I think ultimately, I think history will
show this is going to be a massive net positive for the United States. You know, obviously there's
going to be some mistakes and some struggles and pain along the way. That's always been
clear from the outset. I wrote a blog post about this last year saying, you know, why
do we need a department of government efficiency? But I think overall, we should probably regard
ourselves as extremely lucky that someone of Elon's caliber has taken
interest in fixing the processes that affect all of us, not just, you know, finding ways to serve the interests of his particular companies, or skirting around certain regulatory issues, which is just the normal way that you deal with these things.
But, you know, I think the conversation needs to be had here. And my general complaint is that the criticisms that are being leveled at the Doge process
are not constructive because they're not engaging with the reality of what's going on.
And I think this is a process that would be improved by constructive and high quality
criticism and suggestions and better ideas flying into the system.
I think that's always the case.
But there's an outside context problem, because most of the people who are jumping up and down about, you know, the various alleged outrages they're committing do not understand what's going on. It's beyond their context, which is not a huge surprise, but it is a major problem, particularly for their interests. I wrote a post about this as well in the context of California. I live there.
I love it.
It's great.
It's a great state, but it's not the state that anyone wants to be like right now.
That's a real problem for the future of the progressive movement.
San Francisco needs to be a shining city on a hill.
Yeah, I agree.
I have a random question, but I think you're going to have an interesting take on it.
Do you have any sort of like broad advice for investors that are fancying themselves as deep tech or hard
tech investors after years of SAS and sort of Web3?
And now it's sort of the hot thing.
I mean, even when you started your company in 2021, there's sort of the meme, right, which is deep tech would have loved zero interest rates? But there weren't a lot of companies like yours actually being started. There were plenty, but not as many as there are now.
Did you have advice to just sort of venture capitalists
broadly about how you're evaluating, you know,
if a bright entrepreneur comes to you
and says they want to do something that like many people say is impossible, like how do you actually, you know, how would you advise them to kind of like evaluate that?
Because like, it just seems like there's so many exciting companies right now and some are very clearly like fake and some are very like real and maybe they won't succeed, but like they're doing like very real work.
And, you know, very likely we'll have a dramatic impact.
Yeah, thanks for that. I mean, one can theorize about these issues and how one might go about, you know, acquiring the necessary expertise to make better than random judgments when it comes to
potential deep tech startups.
But the reality is it's extremely capital intensive.
It operates on a different set of both physical and legal laws than SaaS.
And that actually makes it significantly less efficient, I think, in terms of the health
and functionality of a typical VC ecosystem in this case.
But instead you can look at, well, what are the contemporary and historical
cases where major
innovations were successfully brought at scale into the market and in almost all cases
it was actually led within an organization. So you have an existing organization that has existing
you know, capital relationships and projects and in particular a large
team of very skilled
people who are extremely aggressive about delivering value.
And then you take that team and you throw them at a new kind of problem, and another new kind of problem, and so on.
And the most salient example in the recent past is the Elon Industrial Complex, where
Elon has this family of companies that are doing things that everyone else really struggles
to even kind of wrap their heads around in most cases.
But, you know, history is replete with examples. I recently have gone very deep on Henry Kaiser, who founded more than a hundred companies, built the Hoover Dam, built 1,500 ships in World War II, you know, stuff that we regard as impossible today, like, oh, how can we fix American shipbuilding? He just stood up a shipyard from scratch in less than two months, in 1941.
Like they didn't have computers back then.
It's just insane stuff.
So yeah, there are historical examples that are happening.
And I think one way of thinking about it is that capital allocation in hard tech puts a stronger emphasis on human capital than on just liquid money.
And probably the optimal place
to put the capital allocation layer
is within already successful organizations
that have already proved their worth.
Speaking of organizations, you said you rage quit NASA.
Is NASA's best work behind them?
Or now that Elon has root access,
you know, could we see, you know,
the organization sort of revitalized at some point?
I very much hope that the Doge team will devote some efforts
to helping NASA recover its historical capabilities,
which we have to remember were, within living memory, the envy of the world.
But unfortunately my experience,
it's personal experience,
I didn't see all of it obviously,
but my experience at NASA was that the organization is,
it's still stuffed to the gills with brilliant people,
but most of those people spend most of their time
not being allowed to do really extraordinary work.
Like actively being blocked and hindered
and punished for going above and beyond,
which is not how we should be running a space program
competing with China.
Like China's program managers in their space program,
they fear the consequences of failure.
And NASA's program managers do not.
And that is the key difference.
Can you talk a little bit about stagnation?
What happened in 1970?
There's a whole bunch of different theories.
Is it a culture issue?
Do we need to just adopt a you can just do things mindset?
I've always been tracking energy.
That's the point where the energy growth kind of broke
from 2.7% per year to 2%.
It feels like AI might be enough of a moment to kick us back into gear, or maybe it's this Doge stuff that's going on, or will it be political? But how important is unstagnating on the energy side to kicking everything into gear, and what are the stack-ranked priorities for actually seeing increased human flourishing, economic growth, energy, all the proxy metrics that are important?
Well, you hit on the most important aspect there,
which is we can measure this, right?
At the end of the day, we either have total factor
productivity growth or we don't.
And if we do, then our children will have a better life
than us, no matter what.
And if we don't, then they won't, no matter what.
It doesn't matter how innovative we get about, you know, social programs and redistributive spending and like various sneaky forms of communism.
At the end of the day, we're either growing the economy or we're not. And I think that, you know,
what the hell happened in 1971 is overdetermined in some ways. You know, Henry Kaiser died in 1967,
various OPEC issues and oil shocks and so on occurred. We had the passage
of NEPA and CEQA in the early 70s as well. All of these things were, I think, well-intentioned or kind
of contingent at the time, but since then, I think if we had not seen the emergence of Moore's law
and improved computing capacity, we would have been in much more dire straits as far as
economic stagnation goes. But I'm also extremely optimistic that we're going to turn this around, first of all because we seem to have gained the ability to talk about stagnation, to measure it, to worry about it, and to be conscious of the fact that this is probably actively impeding, for example, the fertility rate. And the second thing is that, you know, obviously AI will help, but we've also got this incredible technology in solar power,
which is allowing us to convert, in a sense, we're
going back to the land, converting sunlight into energy
about 100 times more efficiently than plants do. And electricity
is quite a bit more useful than coal. So I'm just very optimistic
that we're going to solve the energy problem, you know, this decade. We might take a good crack at the permitting problem, at least in the West, this decade as well. And I think those two things combined, plus, you know, some intelligent AIs, will really help us get back on the Henry Adams curve.
Can you talk more about solar?
Why are you so bullish on solar, and maybe less bullish on nuclear?
There's a lot of chatter and everyone loves both right now.
But why is solar what you've chosen to kind of make your life's work here?
Yeah, at least for now. I mean, to be clear, I'm not bigoted against nuclear power. I'm not
worried about nuclear radiation. I taught nuclear physics at Caltech for a while.
It's a fabulous technology. But again, like we can theorize about it or we can look at history.
And history shows us that nuclear reactors have been enthusiastically adopted by various
navies for operating clandestine underwater vessels that need air-independent power supplies
or nuclear aircraft carriers that need the ability to outrun said submarines.
But other than that, even the Navy moved away from using
nuclear reactors to power their surface fleet. And I think if you want to honestly understand
this, you need to kind of dive into why that is the case. At the end of the day, nuclear reactors
are steam engines. And steam engines operate on what's called the Rankine cycle. And there are certain irreducible costs associated with steam turbines and so on that just drive the cost up.
And that's even if NRC didn't exist tomorrow, right? Even if you could buy enriched uranium on amazon.com, it would still be the case that just the steam engine component
is going to cost no less than coal. Solar on the other hand is fascinating because
sunlight rains down on the earth every day for free. There's a fusion reactor up in the sky. Due to some like weird quantum effects and again, silicon trickery,
it is possible to convert that into high grade energy in the form of electricity with a layer
of silicon that's thinner than a sheet of paper, considerably thinner than a sheet of
paper. The silicon is enormously abundant on the Earth. We've gotten really quite good
at making it.
There are factories worldwide now churning out
more than a terawatt of solar per year
with no signs of slowing down.
If anything, production is increasing 30 to 40% per year,
which is just bananas.
It's a sort of growth rate you'd like to see
for any technology.
And the reason this is occurring
is because there's a positive feedback loop
that's already
been kicked off.
Right?
So we don't have to theorize about, well, what is it going to take to get to first of a kind, second of a kind, hundredth of a kind, and start getting those economies of scale down. In solar, that's already happening.
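For a sense of what that growth rate implies if it held, here is a small compounding sketch. The ~1 TW/year and 30-40% figures come from the conversation; everything else is simple arithmetic:

```python
# Compounding the cited solar manufacturing growth rate forward.
production_tw_per_year = 1.0   # roughly a terawatt of panels per year today (cited above)
growth = 0.35                  # midpoint of the 30-40% per year figure (cited above)

for years_out in (0, 5, 10):
    print(f"+{years_out:>2} years: ~{production_tw_per_year * (1 + growth) ** years_out:.1f} TW/year")
# +0: ~1.0, +5: ~4.5, +10: ~20 TW/year of nameplate capacity added annually.
# For scale, total world average power demand is on the order of 20 TW, though
# nameplate and delivered power differ by the capacity factor, so this is only
# a rough comparison.
```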
It's a commoditized product.
It has no moving parts.
You don't need any special skills or labor to install and operate it.
It's even easier to do than planting corn.
You just put it on the ground and it spits out power, which, coming back to our discussion about 1971, is basically wealth. It's a free money
printer you can put on the ground. And this is one of the reasons that I'm slightly frustrated
about the various tariffs on solar panels. I think if China, our geopolitical adversary,
is attempting to harm us by giving us solar panels subsidized at their taxpayers' expense,
we should do the textbook thing that you do when people are trying to do predatory dumping, which is buy
as much of it as you possibly can to hurt them even worse.
And I don't know, pave Nevada with them or something.
We could figure out something useful to do with them down the track.
But yeah, basically that's the key.
Some batteries on the side and the problem is solved.
Yeah.
Speaking of paving Nevada, what does the future of Earth look like in 20 or 50 years? Is it solar panels all over in space? Is it solar panels all over the Earth? Is it going to feel cyberpunky, or will there still be trees around at all? Am I just in a Matrix pod, but my total factor productivity is going up so I'm happy? What does the long-term future look like for you, or in your mind?
I think it's a bit hard to say. I'm quite optimistic. You'd have to be to start a hardware
company and I think really what the future looks like comes down to people like you and
I and what we decide to build. Right. Like if you want a great future, go and build it.
In terms of energy, we can give every man, go and build it. In terms of energy, we can
give every man, woman and child on earth the amount of energy we enjoy here in the United
States, which is about 20 barrels of oil per person per year, with something like 6% of
earth's surface under solar, which is much, much less than we currently use for grazing
or for row crops or for forestry. It's a little more than we currently have covered
in like densely populated cities,
but it's quite a bit less than we use for agriculture.
That's enough for everyone.
Like the scenario in the previous call, where you were talking to Dwarkesh about getting paper-clipped: we're not going to get paper-clipped. What's going to happen is, the net present value of land for agriculture is about 500 bucks per acre per season, but for solar it's about 100 or 200 thousand dollars per year. Now, if that's powering an AI, an artificial super
intelligence data center, which is a thousand times more economically productive than your
terribly poorly evolved human brain, which has to sleep eight hours a day, and, you know, browse
Twitter, then obviously, economically speaking, our farmland is ultimately going
to get paved over with solar and we're going to starve to death.
And that's the true AI doom scenario.
So again, that's something we should be aware of and something we should figure out how
to forestall.
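The arithmetic behind that land-use worry is just a ratio. Here is a sketch using the two figures he gives, treating both as roughly per acre per year, which is my simplification:

```python
# Ratio behind the "farmland gets paved over" worry, using the two cited figures.
farm_revenue_per_acre = 500        # dollars per acre per season (cited above)
solar_revenue_per_acre = 150_000   # midpoint of the $100k-$200k per year figure (cited above)

print(f"solar earns roughly {solar_revenue_per_acre / farm_revenue_per_acre:.0f}x more per acre")
# ~300x. If those numbers are even close to right, pure market forces push land
# toward solar, which is the scenario he says we should figure out how to forestall.
```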
We will obviously have a lot of solar in space, but I think mostly it'll be for powering applications
in space.
I don't think people are going to be beaming power down from space to the Earth any time soon.
Is that just because the economics don't work out?
I've seen a couple of these companies that are using mirrors or lasers.
Yeah. The mirror one is interesting actually, because it kind of exploits the fact that if you're a solar array owner, you have no other way of getting more power other than paying someone to reflect some more down to you, which is super cool. Don't get me wrong. As far as solar cost goes, the Starlink constellation
is solar powered. It's in orbit. It generates considerably more power than the space station
does collectively, obviously, like maybe 100 times more power than the space station.
And most of that power is used to amplify radio signals and transmit them down to Earth where they're received
by our antennas.
And the revenue per watt of electricity used to transmit microwaves through Earth's atmosphere
to the consumer, because it's chopped up and turned into internet data, as opposed to just
raw power, is about a billion times higher.
The revenue per watt is a billion times higher
with it being internet at a few hundred watts
or whatever transmission power,
than it being attempting to transmit gigawatts of power
down to power plants where we pay 10 cents a kilowatt-hour
or something like that.
Now, Starlink is actually fabulously profitable
and I'm super proud of the team there.
I think it's absolutely incredible what they've done.
But there's no way in hell that the profit margin is 10 million
percent. Right. So like, it's just it's just not possible.
So I don't think that space based solar power is likely to be a thing.
Do you think that the average seed stage deep tech founder doesn't take economics
seriously enough? Because almost every question we've asked you, you've had some economic rationale behind your answer.
And I think we've seen some companies emerge recently
that, you know, there's sort of this like sad scenario
where they like achieve the impossible
or do the really hard thing.
And then what's waiting for them is potentially a product
that like is not economically viable in the sense that you
made the product but then you can't actually get any value.
Yeah, I've heard about nuclear companies where it's like they literally just had one number
wrong in their spreadsheet model and they did the thing but it just wasn't competitive
on the grid at all.
Yeah, that's the nightmare scenario.
At the end of the day, if you're super passionate about a particular
technology and you think it has a chance of success and you have investors who agree with
you, then go give it a shot, right?
You don't know, I don't know.
Like customers and consumers, they like weird things.
We just don't know.
Like, that's the fabulous thing about capitalism.
It just firehoses options and potentials at the market to see what sticks.
And if you think back to when we were children, there's no way we would have known that like
the stable attractor form factor for cell phones would be something like this.
That's not how they worked back then.
So for the younger listeners, just like quiver in fear.
We didn't have cell phones when we were young.
So you should give it a go.
On the other hand, if you are trying to do a hardware tech thing at massive scale, you
absolutely have to have capitalism behind you.
You cannot fight capitalism to get to massive scale.
You have to grab the gravity.
It's not a mystery.
Yeah, exactly.
Would you rather have them on your team or against you?
And I just see this mistake time and time again, which is, well, this technology will work if we can bring about some massive behavioral change in the entire market or something. Probably that's not going to happen. But if you can produce
a product like an iPhone, where when the new one comes out, hundreds of millions of people
worldwide feel actively burdened by a thousand dollars of cash in their wallet, and they
just like shut up and take my money, then you've done the right thing, right? You're
thrilling people.
You're giving them something that they'll happily part
with their hard earned cash to receive.
And yeah, I mean, it's not like this is a mystery.
Build something that people want.
I have one too.
Okay.
We'll go fast.
Yeah, the Scrolls project, is it just about inspiration
or is there a practical application to that project?
I think there's a certain kind of person who really likes to curate and organize information.
And I think it will ultimately tell us a lot more about our historical origins in the past,
and I think that's worthwhile.
You mentioned briefly the sort of AI doomsday scenario where the AI just realizes that it should blanket the Earth with solar panels to, you know, feed itself. What else scares you in the world, to end on a high note?
Um, I think that when we figure out how to drastically increase human
lifespans, it will turn out that we would have been able to build
these drugs since the 1930s.
And so we've literally allowed billions of people to die painfully, lonely, in old age, when if we'd just had enough insight and enough effort, we could have figured out how to stop that a long time ago.
Wow. Interesting. Well, hopefully we go build them soon because the best day to plant a tree is today.
Well, unfortunately, we used all the GPUs to make Ghibli images this week.
Maybe next week we can put it towards extending human lifespan.
Next week we can put the AI to work on extending our lives.
Thank you so much for joining us.
This was a fantastic conversation.
Thank you so much. I really enjoyed this.
Yeah, what was that?
I was gonna say one last thing.
One of the major problems with Doge
is that we have these entitlements,
costs of Social Security, Medicare, Medicaid, and so on.
1% of the federal budget is spent on dialysis.
These are all diseases of old age.
If we can just increase human lifespan by 10%, we cut that whole tranche, trillions of dollars of annual spending, by 10%, because 10% fewer people die every year.
And they also get to not die.
It seems blindingly obvious. And yet we spend less than 0.1% of our national health budget on anti-aging research.
That's less than 0.1%.
Wow.
Well, we'll have to dig into that more.
Thank you so much for stopping by.
We would love to have you back on.
Maybe we'll come over and we'll do one in the castle at some point.
Yeah, I'd love to.
That'd be great.
Thank you so much.
This is a fantastic conversation.
Really enjoyed it.
Have a great rest of your day.
Have a great weekend. Talk to you soon.
What a brilliant mind. Yes, scroll enjoyers. You always get good stuff with scroll enjoyers.
The scroll guys. We've got to have the whole, maybe we should do a scrolls day.
Scrolls day.
But backing away from scrolls, we're going back to books. We got Nadia Asparouhova coming into the Temple of Technology.
She has just announced a new book called Antimemetics.
She's a former Stripe Press author, in the Hall of Fame of authors, in my opinion. If you publish on Stripe Press, you can publish.
That's what people have been saying. Yeah, it's really the NFL of publishing. Yeah. Yeah. Anyway,
but let's bring her in. Thanks so much for joining. We're having some fun.
Uh, yeah. Thanks for joining. Can you give us,
I'm surprised that Delian didn't join, by the way, because Pavel was about to have this amazing first... Do not send Delian the link. Yeah, he'd just bomb in. I think he has the interview. He does have the link. Don't let him know that he can use the link, because he's a great hype man. Anyway, yeah, would you mind giving just a brief introduction and a little high level of the book?
Sure.
I'm Nadia, writer and researcher.
I just published this new book called Antimemetics: Why Ideas Resist Spreading.
And it's about this idea of anti-memes, which are self-censoring ideas.
So ideas that we find really interesting and compelling in the moment, but for whatever
reason they kind of resist sharing or being remembered.
So I think we usually think that if an idea is good or interesting or compelling, it's going to find a way to be shared or spread on its own. But there's this whole other class of ideas. So things like taboos, forbidden ideas, stuff we don't really talk about in public, cognitive biases, so blind spots about ourselves that we don't really realize. All of this kind of forms this class of ideas that are antimemetic and don't spread super easily, even though they're interesting and compelling ideas. So that's what I wanted to write about.
How much of memetics is just actually compressing the idea down? I was thinking about how Lulu's going direct manifesto has this really pithy phrase, going direct. Founder mode, same thing. The Network State, kind of two words that really hit. Versus Leopold Aschenbrenner wrote a fantastic essay about AI and called it Situational Awareness, and we don't see people using situational awareness as a meme to talk about acceleration and where the future of AI is going. Is it just a packaging problem, or is there something about the ideas themselves that creates an antimemetic property?
Yeah, I think some of it is a packaging problem. Ideas themselves have these innate qualities that are memetic or antimemetic. Some of it is just about how the person perceiving or receiving it interacts with that idea. So you can think of certain ideas that spread
really, really easily in certain networks,
but not in others.
Conspiracy theories are a good example of this.
So yeah, I think it's a mix of both.
It's partly about the idea.
It's partly about the person that is like sharing it
or receiving it.
Sure, Jordi?
Well, one, I wanna make this segment the most beautiful ad for your book. So I'd like to go through a number of the ideas. But first, I did want to ask about a book that had been going viral; I got a copy of it and wanted to get your high-level take on it. It's funny. It was maybe released in 2021: There Is No Antimemetics Division. It sort of went viral a little bit last year, but then it's no longer for sale, or I guess you can buy it, but the paperbacks are trading.
Is that right? Hundreds of dollars on eBay right now.
Yeah, so I have one.
I should probably just sell it and buy like 20
No, keep holding. Keep holding.
No, keep it.
I just thought it was a great irony that they're maybe going after something similar to what you've been researching, but then that's not even available anymore. So it's nuts.
Yeah, I'm curious, more broadly, in researching this, what was the full breadth of what you were kind of looking at, right?
Because I imagine you were researching
and finding forbidden PDFs and books
in a bunch of different areas, right?
Of this sort of esoteric knowledge
that humanity ignores for some reason.
Yeah, I'll give a big plug for qntm, who wrote There Is No Antimemetics Division, and he actually coined the term anti-meme back in the day. And it originates from this fictional, horror sci-fi kind of online wiki. And so yeah, he published that book, I think in 2021. It's a horror science fiction book, and it's about anti-memes as these sort of anthropomorphized creatures that are
kind of like these black holes. They destroy everything they touch. They consume everything that comes in contact with
them. But for whatever reason, we can't seem to remember that we've interacted with them.
And so there's this intelligence unit that is trying to fight anti-memes, but can't even remember that they're doing it. Hence the title, There Is No Antimemetics Division. Super cool book.
Highly recommend it. And all of his writing, honestly. And that was the first book I read
back in 2021 that kind of kicked off my interest in the topic. I had never heard of the
concept of an anti-meme before this. And yeah, I was reading that book in kind of the depths of COVID, yeah, 2021, sad, dark times. But also, you know, everyone was doing stuff in group chats, Clubhouse was a thing, and I had been working at Substack. And so I was thinking a lot about how does this concept apply to the real world, and where do anti-memes show up in the rest of our lives?
So it was definitely a direct inspiration for this book.
But yeah, I wanted just a nonfiction
treatment of the concepts.
Totally, it was not practical in any way.
It was just like you said,
a sci-fi horror exploration of it.
I feel like it's the best sci-fi.
Yeah, it's very-
Can you talk about,
I feel like we're in such a weird time.
Our generation was sort of like not born on the internet,
but we sort of like gained consciousness on the internet.
And the internet is something that just like
circulates ideas, both like fact and fiction,
like extremely rapidly.
And so in many ways, conspiracy theories and these sort of forbidden, potentially true ideas, or maybe it's fiction, you know, you don't know.
I feel like I've grown up in a world where the only conspiracy theory I believe is that there are just many, many conspiracies going on, in all sorts of things, at all times.
Right. So what can people take away from your book in terms of how to process information online, when you don't know if something is completely false or factual, or maybe it's an idea that's just antimemetic and doesn't want to be known or identified as truth?
Yeah, I thought about that a lot while
I was writing this just because I think there's a lot of doom and gloom about kind of like where
the internet is right now or like social media didn't turn out to be what we expected.
And I really didn't want to write that kind of a book because it is partly about this
transition from, you know, once upon a time it was you could post whatever you wanted
on Twitter and you weren't worried about like getting, you know, trolled, canceled, attacked,
whatever.
And a lot of people, I think part of why we kind of withdrew to group chats and these more private spaces is because we kind of just wanted to tune out all of that.
But I really wanted to show that it's not that one is replacing the other.
It's not like, oh, Twitter or whatever is over and we're just going to all go back into
our little caves and only talk to our friends.
But both of these things actually feed off of each other.
And all the things that get workshopped in the group chats kind of make their way back out into public
channels and vice versa.
And I think like there's there are things about that that are good and bad.
And maybe to your point just about yeah not being able to even tell what's real anymore,
but like group chats aren't really like a safe haven from whatever is happening in in
public channels because they end up
kind of like becoming these super incubators for crazy ideas.
And so ideas can get even crazier and even weirder
when they're being workshopped by a small group of people.
And so in some ways, they're kind of mutating and making
ideas even crazier.
But yeah, not to be overly defeatist about it,
but I think it is also kind of like your reality
is what you make of it.
And where we decide to direct our attention, what
we decide to focus on, that is what your reality becomes.
And so maybe that's a little bit scary or destabilizing to people, to feel like they can't tell what's real anymore.
But I also think if you just can find ways
to harness your attention and focus on the things you actually
care about, then life is not really so bad.
Yeah, and you compared what's happening
in the town square, something like X,
and then what's happening in group chats.
Oftentimes, we're in this world where
there's this fiction appearing on the timeline,
and then the truth is racing through group chats.
And on a long enough time horizon, they intersect,
because an idea can be put out there and maybe people agree with it in the moment, but then eventually what's happening in the private group chats actually does start spreading and breaking containment. And in many ways it feels like that's happening faster than ever, right?
Yeah, I wanted to ask a question about
kind of like practicality here.
I think a lot of founders and business folks
think about can they align their company or their mission
or their vision of the future along with a meme
or condense it down.
Obviously the going direct thing
has been a great encapsulation of what Lulu does,
and it's helped her business, in addition to helping people that have adopted that strategy. Is it possible to do the opposite, to take an attack on you and somehow twist it into an anti-meme, so that the idea that is hurtful to you or your business or whatever becomes harder to discuss?
Interesting. Yes, definitely. Yeah, I do talk in the book about this idea of obscurantism, which Nick Bostrom coined, but the idea of, can you make things boring, or can you kind of hide them, as a way of sort
of suppressing them? So if you think about like suppressing ideas, people often think
about like putting a hard, like a hard lock on it so that, you know, make it forbidden,
make it password protected, whatever, but that often makes people more excited to figure
out what is actually going on under there. But if you make it really, really boring and really uninteresting, or just really difficult to parse, people kind of just lose interest and wander away.
And that would definitely be an antimemetic information warfare tactic, I guess.
I've seen this where I've seen people chirping at each other
on X and I've noticed that whoever puts up the longer piece
of content usually ends the conversation.
So you post, you know, I like SpaceX, and then I quote tweet you and you're like, SpaceX is terrible. But then if you quote tweet back with something that's just an essay, I'm like, I'm tired, I'm out, and you kind of win by doing that. And I've also seen these attempts to take a legitimate attack vector and then wrap it in political extremism on the left or the right.
And so if you're being attacked by someone, you can say,
oh, well, like that's an idea from 4chan
or that's an idea that's from the communist manifesto.
So we should disregard it
and put it in the forbidden category,
even if it's a legitimate criticism of whatever's going on.
And so, I don't know, it's a fascinating topic.
Can you talk about some of the differences between antimemetic ideas and anti-commercial ideas? For anti-commercial, I would put something like intermittent fasting in that bucket, where nobody makes money if people eat less, right? There are whole conspiracies around intermittent fasting, like big breakfast came in and said you've got to eat a big breakfast, and there's a bunch of people that benefit from the big breakfast complex. And then if you start going around and saying, hey, you can just have coffee or water and just eat lunch, it's hard to monetize that, right? It's not like I can say, hey, don't eat food, and also pay me to teach you how to not eat food.
And some people have done that, but I feel like there's maybe a line between ideas that are antimemetic and just don't spread for some intrinsic reason, and then stuff that's maybe slightly more obvious but just isn't spreading because there's no economic or commercial interest trying to spread it.
Yeah, that's super interesting.
I hadn't actually thought about that yet.
Yeah, I think like there's still plenty of ideas
that can spread memetically even without sort of like
a commercial engine behind them.
So yeah, folk wisdom, aphorisms, things like that.
But you're right, especially, I think, in the realm of health-related things. I'm just thinking about a lot of chronic health issues and stuff like that, where people kind of have to bumble around and find their own answers, and even when they're chronic health issues that tons of people are facing and dealing with, there's no clear answer to them. And I think sometimes when the answer is something that can't be monetized or sold, if we think of that as just another engine for driving the spread of information, and that engine is missing, then something might languish and not reach everyone that it should. So yeah, that's super interesting.
How do you think about Elon's positioning with xAI around the sort of truth-seeking AI? Is it possible that a properly trained LLM could be more inclined to spread ideas that are antimemetic among humans, where you ask the machine a question and it's like, well, obviously this is what you're dealing with, whereas a human would tell you some other answer that maybe was more correct in some ways?
I think it could be really useful for ideas that are antimemetic on an individual level. I know some people will use AIs for this, where it's like, based on the conversations we've been having, tell me something about myself that I don't know or don't realize. And maybe just having someone that can tell you straight up what the answer is, versus your friends who might not be super honest, that could be useful. I still think it's really hard to spread things that are taboos or forbidden ideas through a network. So even if one person really believes it and is totally bought into an idea, how do you get it to actually spread from person to person? It's a little bit harder to think about how that takes place.
Well, I'm super excited for the book. Congratulations. Once we get a full copy here, once it's actually orderable, we'd love to do a deep dive on it. Yeah, we'll have you back on and we'll help the audience figure out how to make money. How to make money with antimemetics. That's the theme of the show.
That's what really matters.
That's what's really going to get the book.
Oh, my competitor?
No one's ever heard of them now, all of a sudden.
No one can remember them.
They just disappeared from the internet.
Because we put antimemetics to work in capitalism.
Fantastic having you.
Thanks so much for coming on the show.
And congratulations.
Two-time author now.
Fantastic.
Incredible.
We're going the opposite direction.
We're going straight into the AI SDR scandal that's going on.
We're going back, revisiting 11x.
You might have heard about it earlier this week.
11x, there was a TechCrunch article,
said that maybe the ARR figures were inflated, maybe
they were using customer logos who had churned and they hadn't been removed from the website
quickly enough.
We have the founder of Rocks on, who is here to tell us all about SDRs, AISDR technology
agents, and a whole lot more. But I'll let him introduce himself.
Let's go. Welcome to the temple of technology. Great to have you here. How are you doing today?
I'm good. I'm good. Thanks for having me on, John. Jordy, nice to meet you.
Sam connected us. So excited, excited to join your Ramp-sponsored segment.
Yes, this is a Ramp-sponsored segment. Seriously, like, Sam... seriously, I went to Ramp and I was like,
okay, there's a whole bunch of noise in this whole industry,
but there's probably something that's working.
Just tell me what you guys use,
and they were like, well, we use rocks, and it's great.
And so they introduced us, and I'm glad to have you here.
And I wanna learn more about the product genuinely,
because it's not something that we're using yet,
but in the future, we imagine that millions of people
are getting phone calls and emails from us every single day,
encouraging them to listen to the show,
encouraging them to download,
rate us five stars on Apple Podcasts.
We want a really intense million AI sales force
and we're hoping you can help us with that.
We want to swarm.
We want to swarm.
The high tide.
The swarm.
Amazing.
But anyway, what do you actually do?
Yeah, absolutely.
So we're still pretty early.
Most of it is all word of mouth as yet.
So there's a big launch coming in a couple of months
and we actually have a sales and marketing team.
So we built the first enterprise ready agentic CRM.
So it's a new generation of software
where the CRM works for you.
So Rocks is the system, like in Ramp it gets installed,
it unifies all your customer data in one place,
keeps it in Ramp's environment,
and then feeds a swarm of agents.
And our agents are designed to supercharge
the Maxes and the Sams of the world.
Like how do you supercharge
the highest compensated frontline kind of winners or killers that go to market?
And the way we do that is we double or triple
their productivity by having these agents basically
do a lot of the back office work.
And that's kind of our vision is the winning companies
of tomorrow like Ramp are gonna have supercharged builders
with Cursor and Cognition, and supercharged sellers with, hopefully, Rocks. So that's what we do.
I love it. I have a ton of follow-up questions. I mean it sounds like one thing that we were
kicking around was this idea that maybe it's too early to have an agent at the front line actually
writing copy, hitting send, and maybe we're more in the Centaur era where a human with an AI is
more powerful than either a human or an AI alone.
Is that how you're thinking about it right now,
even if in the long term you're going full,
the AI will actually send the emails.
Yeah, and kind of going off of that specifically,
I think everyone can see why back office work
is getting sort of automated.
Perfect for LLMs.
And, you know, Cursor can have an amazing NPS,
even if it has a huge error rate.
Yet front office work, when you're
interacting with customers, everybody's
experienced this at some point.
Either the individual themselves or an employee
of the individual sort of messes something up with a customer.
And it's sort of like this frustrating experience
because it's actually lost revenue versus a lost five minutes correcting a bug.
So balancing that sort of front,
like the bigger challenge is probably
like the front office work where the error rates
just have to be much lower.
Yeah.
Absolutely.
As Lulu kind of gave me the best advice,
like we're building Batsuits, not butlers.
And the core idea comes from,
I've carried a bag all my life
and I ran go-to-market for a public company before.
We're not at a point where you can completely go FSD for your largest customers.
The core idea is how do you supercharge the Sam or Max or Eric with AI, but they are the
ones who are orchestrating or QBing the system where they review and send.
Because the cost of really messing up, in my mind,
in the small term feels like a bad email,
but you would degrade trust in the brand,
in the person.
And that's what we focus on is how do you build
an agentic system that you could use today?
It's not vaporware, it's things you can use today,
but how do we supercharge the kind of frontline
kind of bag carriers?
I think over time, like there's going to be dramatic seat compression or efficiencies
to be had where people who support those bag carriers, which is kind of 80% of the employee
base, they will have to kind of evolve to thrive or kind of risk basically not being
relevant.
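To make the pattern he's describing concrete, here's a minimal, purely illustrative sketch of a review-and-send gate: the agent can only propose drafts, and nothing reaches a customer until a human rep approves it. The names and structure are assumptions for illustration, not Rocks' actual product or API.

```python
# Illustrative sketch of the "human reviews and sends" pattern described above.
# All names here are hypothetical; this is not Rocks' actual implementation.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Draft:
    recipient: str
    subject: str
    body: str

@dataclass
class ReviewQueue:
    pending: List[Draft] = field(default_factory=list)

    def propose(self, draft: Draft) -> None:
        # Agent output lands here; it is never sent automatically.
        self.pending.append(draft)

    def review_and_send(self,
                        approve: Callable[[Draft], bool],
                        send: Callable[[Draft], None]) -> None:
        # A human rep decides, draft by draft, what actually goes out.
        still_pending = []
        for draft in self.pending:
            if approve(draft):
                send(draft)
            else:
                still_pending.append(draft)
        self.pending = still_pending
```

The design choice that matters is that the send path requires an explicit human decision; the agent only ever appends to the queue.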
Can you talk about the innovator's dilemma, sustaining innovation in agentic systems versus
disruptive innovation?
Logan Bartlett on his show said that, you know, if you go back to mobile, the Salesforce of mobile was just Salesforce, and
his thesis was that maybe the Salesforce of AI is just Salesforce.
But and they're obviously doing stuff in AI
But at the same time we're seeing Apple drop the ball on product development, Google's dropping the ball
on product development.
It feels like there's more fertile ground than ever.
So how have you reasoned through that
and why are you taking this bet,
even though there's some people that are saying,
hey, it's only a matter of time
till the big guys get it together.
100%, like Benioff's the goat,
like he's the goat of all goats.
Let's hear it for Benioff folks. You're a legend.
Yeah, like I grew up not in the Bay area,
like you read up on them and they're the big inspiration.
So if you look at ServiceNow and Salesforce,
they have a massive advantage
because they have distribution, but also data.
Like our kind of firm belief is that the alpha
is in working with the intelligence providers
and earning the right to be the custodian
for enterprise data and bringing in public data.
So our position, what we're arbitraging, is
what somebody who has carried kind of a bag
at a public company knows:
traditional systems have both lost usage, the users,
they don't wanna use them, they use new tools,
but they've also lost data.
Data is now in the warehouse.
So 40% of the data in a software
kind of data warehouse, like a Snowflake, is actually
go-to-market data.
So when we think about kind of building these new systems, incumbents, a $3 billion kind
of incumbent has a massive advantage.
But what we're focused on is how can we, A, earn the eyeballs of the Sams and Max Freemans
of the world, but do it in a way where we're doing a rug pull,
where we build a system of record
which indexes the data that's not already
in the system of record, right?
So Rocks gets installed, indexes everything in the warehouse,
brings in, you can connect all the other sources.
And I think the winning kind of companies would be folks
who bundle data and users in these platforms
and hopefully earn the right to be the next kind
of default platforms.
Talk about the pressure that you feel and then maybe
other companies specifically in San Francisco feel. You see these companies coming out, you
see these revenue charts, right? It used to be, seriously, in 2021 I remember
investors would say, you know, sort of best-in-class companies are getting to one million ARR in nine months.
Yep.
And now it's basically like, OK, best in class companies are getting to 10 mil in three months.
Yep.
And so the sort of potentially toxic thing about this pressure is that it's making some people feel like, well, if I want to win as a company,
we need to put up, you know,
just these ridiculous numbers. And, you know,
anybody that's built a company knows that, like,
revenue ramp is not necessarily
always tied to, like, customer satisfaction or product quality
or anything like that.
So I'm curious to know, like, do you think that sort
of pressure to grow extremely quickly is forcing people
to make sort of like short-term decisions
or riskier decisions or in some cases even, you know,
misstate facts?
Yes, I think it's ultimately the founder psyche.
Like I've been around the block, but I have to say two things
this time around make me the most insecure.
Like all of us are very insecure, right?
My wife's listening in, obviously, like we're all insecure monkeys in some ways.
So two things.
One is I think this new generation is about talking about these crazy kind of revenue
curves.
The way I internalize it is ultimately like the secret to building enduring high quality
revenue is to make customers happy and the right customers happy. So we focus on strategic, like
the Siemens of the world, enterprise customers like Redis and MongoDB and RAMP, like how do we
land there? How do we earn the right to expand and be essential in these businesses which
would generate enduring revenue over time, as long as we're kind of helping them kind of secure
and grow their own revenue. So that's how I
internalize it. I think the game is going to be about quality,
about enduring revenue. And there are businesses like Wiz
and Datadog to be built now. It's just that Datadog and Wiz,
although they were rocket ships, they always focused on essential
kind of solutions to really, really large businesses. And
that's what we're focused on.
The second part of the psyche, which I think is not spoken about enough, is the fact that
consumer expectations are on this invisible asymptote where they're expecting singularity.
Like, consumers, because they've been sold singularity and are using Perplexity and ChatGPT,
they expect all work to be done by software. So that is actually the one that keeps me up at night.
Like how do you deliver your agentic experiences,
which are inherently probabilistic,
to meet these kind of insane consumer expectations?
And that's where I think the winning products
are gonna come from.
So the revenue stuff kind of really, really hits,
but I kind of process it out,
but the consumers and their expectations,
there's some stuff that kind of keeps me up 24/7.
Is it is it?
Internally, are you thinking with the team?
Like how do we achieve this sort of, like, ChatGPT moment for the enterprise?
Right, like you could imagine like there is some place that you guys could hit in terms of product quality
That would be so magical that like you would get that
But cognition kind of had that where they launched Devin
and it was like this viral sensation.
Yeah.
Though it was an enterprise agent for coding.
But it's a great question.
Yeah.
Do you think that that's like maybe it's
the wrong way to look at it?
Because businesses will realize, hey, we
have something magic here.
We're not going to talk about it.
We're just going to, like...
Antimemetic.
You know, it's antimemetic.
You know, we just won't talk about it.
Right.
Yeah.
You're spot on.
So we're in the boring space of ERP and CRM,
the most essential but also the most lucrative software market.
Like, we've chosen the path to be the daily driver for all
customer-facing knowledge workers.
So we focus on building something that's a land-and-expand.
Land with the Maxes and the Sams of the world,
but grow to 200-plus kind of active users at Ramp.
That's kind of the core motion.
I think that agentic applications that win is where everybody's using it day in
day out. And so we optimize for like internal, like post-land virality.
So there are no seats, you land Rocks and everybody uses it.
And I think that's, that's at least my thesis: become a daily driver.
Can you talk about the actual instantiation of the product?
Obviously, it sounds like you're plugging
into the data warehouse, the snowflakes of the world.
It sounds like you're maybe not plugging
into the Salesforce installations.
Is there a Rocks app?
How does a salesperson actually interact with Rocks day to day?
Absolutely.
Pure ramp-like, kind of cracked engineering shop.
So we build product, we're pragmatic product builders,
and we work with the platforms.
So we have a web app, an iOS app.
The guy who built ramp and Robin is here.
We have a Slack app, we have an email app.
So we want to be the front application where
it is a swarm of agents working for you.
It's powered by our own warehouse native CRM,
so it's running in your warehouse.
And it's a two-way sync to Salesforce or HubSpot, right?
So that's kind of our core idea.
Got it.
In kind of large sophisticated organizations
who are already winning, like the Mongos and these folks,
we would definitely integrate our API into the internal tools.
In most of the rest, we wanna build an application
that humans actually want to use.
And then hopefully get them two to three times more productive
this year and put them on a path
to being kind of 10 times more productive.
Love it.
Jordy, you got anything else?
That's all I got.
Cool.
Well, this was fantastic.
We'll have to have you back.
We need to get our own.
I hear there's big news coming.
I want him back when he announces it.
I wanna hear it first.
Absolutely.
Well, one short story.
Sorry, I'm a sports nerd.
I grew up in India,
getting up in the morning, seeing SportsCenter
to see like Jordan and the Wizards.
That's when I started watching it.
So, congrats on the success.
Hey, hey, we talked about this morning,
like there's gonna be a one name entrepreneur
for AI B2B SaaS.
And you could be that guy.
So, you could be the LeBron, you could be the Kobe of B2B SaaS.
I agree.
We're rooting for you.
Wilt.
Let's go.
Jordan.
Well, thanks for coming by.
It was really fantastic.
Oh, my.
Cheers, Karen.
All the best.
Congrats.
Congrats, yeah.
All right.
See you.
Well, we got another one-name entrepreneur coming
in the temple of technology any minute now.
Augustus.
Augustus.
You don't even need to know his last name.
You might know him as, uh, the Rainmaker.
Former Roman emperor.
Former Roman emperor.
Yeah, yeah.
Accused member of the Deep State.
Accused member of the Deep State.
We're here to confront him.
Put on the tinfoil hat.
Break it down.
I promised him we wouldn't go too crazy with the jokes
because we know the internet is watching and this will be clipped. I won't go too
crazy. Augustus, I really appreciate you taking a shot and taking a
chance on this crazy show. No, I mean, the man has no fear, he faces his vocal
opponents daily.
Yeah, he really is good at duking it out on the timeline.
Never lets them get to the last quote tweet.
He's having fun with it too,
but it must be a high stress situation
because the numbers are getting crazy.
And we got Augustus here to break it down for us.
There he is.
Welcome to the temple of technology, Augustus.
Wow, I feel like your hair grew an inch.
You must be drinking a lot of milk or whatever
to stimulate that hair growth.
I'm drinking Justin Mares' bone broth, bro.
There we go.
Kettle & Fire, shout out.
Shout out, former brother of the week.
Former brother of the week, Justin Mares.
Can you give us, just set the table for us
for those who haven't been following the drama.
What's going on?
How'd you wind up in this situation?
Is it frustrating, is it hilarious, a little bit of both?
What's going on?
And it's great, it's the same effect as like
the sort of historical attempted hit pieces
where it's like your attackers are like,
he is the most powerful man in Silicon Valley.
Yes, we really did.
He is controlling the weather.
Yeah, yeah, the hit pieces have gone direct now.
Like you're getting the puff pieces
in the mainstream media and then the decentralized media, the citizen journalists are coming for you. So break it down. Yeah. Yeah
No, dude, my favorite so far has been the guy that was like, this is clearly a CIA psyop to make weather modification
cool. He's backed by Peter Thiel and the same clique, the same cabal of VCs as all these things.
The state of the union is: 29 states this year have proposed
legislation to carte blanche ban all weather modification and atmospheric engineering. And so these
bills they're essentially coming from a place of people being concerned about chemtrails
and people being concerned about solar radiation management. Chemtrails, it's the
suspicion that what you see as contrails, long streaks in the sky, is the government spraying something. They
suspect that's just like poison. Probably not. I haven't seen evidence of that yet, but open to
being convinced. And then solar radiation management is an attempt to dim the amount of sun that
reaches the earth to cool the planet down. So it's this climate intervention. And then cloud
seeding kind of just gets lumped in. Cloud seeding is what Rainmaker does. And it's nothing
like the other two things that people are describing. But because nobody knows the difference,
they're trying to ban it because they think that people are either getting poisoned or
that we're some sort of like agenda 2040 anti-human globalist initiative. And so surreal as it's
been, I'm both in the trenches in Twitter and in many state capitals throughout the union.
A lot of them I haven't testified at
because I realize that actually runs up
a lot more craziness than maybe is good.
But we're in this public knife fight
with the government of Florida and Tallahassee
just trying to, you know that tweet,
that iconic tweet where Trump says,
I just wanna stop the world from killing itself? I just wanna bring people water that need it.
And I think that if you ban cloud seeding,
you're basically banning like rocketry or fission.
And so that's what we're trying to stop right now.
Can you talk about water scarcity?
I think people, it's hard for them to process
because they turn on the tap or they turn on the shower
and just water comes out and it's not that expensive.
Yeah, water comes from the faucet.
You know, gasoline comes from the pump. We're here in Los Angeles, which is one of the most
sort of arid parts of the United States.
It's only possible because of some corrupt bargain
that happened to reroute the river 50 years ago, right?
Yeah, yeah. Well, there's one, the Owens Valley situation.
And so LA County basically had to send out agents posing as private people to buy up
all the land between like the Owens River Valley and then redirect the water.
Then there's a couple of water barons that like you can't even say the name of or you'll
be like here in California.
But the California State Water Supply Strategy, like from the Department of Natural Resources,
says that half a million acres of farmland have to turn into desert by 2030 in California in order to maintain water supply for the cities.
Like Phoenix, Arizona is banning new housing development because there's not enough water.
Salt Lake City, people are getting respiratory problems because the Great Salt Lake is drying
up and there's not enough water.
And then the wildfires that go on all over the country, that's because there's not enough
water on the ground.
And in the case of Florida, a lot of people have told me like, you know what, listen,
I'm convinced that cloud seeding is useful and beneficial for a state like California
that is in a drought and that is a desert.
But Florida, we get plenty of rain.
Like, why do we need to bother modifying the weather here?
Well, 14 million acres of farmland in Florida is currently in a drought.
And then 30,000 acres of Miami-Dade County just burnt to the ground because there wasn't
enough water, because there was a drought.
So like if Mayor Suarez is listening, please save us.
Yeah, we just want to bring people water and drought is much more ubiquitous and consequential
than people realize.
Like, if you want to make America healthy again, we have to grow food
domestically and if we don't have enough water to do that, then it's going to be some kooky
GMO, like, Chinese stuff that gets shipped over the ocean, or like a Soylent Green product. Not
to hear you. Deep cut. Yeah. That's, that's where we're at right now with respect to drought. No, no.
Yeah, a lot of distinctions.
What are these... is there anybody besides you
fighting to preserve the ability of
private and public groups
to do cloud seeding?
Like are you the last man standing or do you have a team?
So, well, I've got a great team at Rainmaker
and I'm really grateful for that.
And we've got a bunch of farmers that are beneficiaries
of our program who go to bat for us because, like,
they previously had to tear up their pistachio orchards
because they didn't have enough water for them.
And now they're better able to farm
because of the production of water that we're doing.
But you know that part in that, like, famous Stanford speech where Thiel says, you
know, you should go after a really small market, like if there's lots of people involved,
it's already too late? Well, I'm not like the last man standing. I'm kind of like the
first man standing. So other than the farmers that we're helping
and the other great team members I have
at Rainmaker running around the country,
it's Augustus Doricko v. a couple states, 30 states.
Yeah.
How do you even balance this sort of simultaneously
you have, you know,
you're a company, you generate revenue,
you have expenses, you're sort of planning timelines
and you're also potentially fighting like legal battles
and you're also dealing with like technical risk
in the business, right?
Like what you're doing is very hard, it can be done,
but like that's also a challenge.
Like as, you know, as an entrepreneur, you know, how do you kind of balance all
those things? And it feels to me like, you know, the, the sort of legal risk is
like the most asymmetric risk, you know, you can always launch and create a new
prototype, you know, take another flight, et cetera.
But then if there's sort of these like blanket bans, that feels like, is
that like taking up 60% of your time now,
just making sure that you're gonna be able
to do this in five years?
Well, so not quite.
I mean, it's definitely a large double digit percentage.
The interesting thing with respect
to the political dynamic though is this.
It's not really a right-left issue
so much as it is an east-west issue
because the west doesn't have water.
So Democrats and Republicans alike,
on the Western half of the US are like,
oh, this is sick, I would love to have water.
Then the people that have as much water
as they have conventionally needed
and aren't subject to drought,
they're the ones more wary of it.
So I'm not worried about rainmaker domestically
at a state level, just because there's a lot of states that
we haven't gotten into yet, like Nevada or Arizona or New Mexico. I do think that the
domino theory thing holds true to some extent. So yes, we do have to focus on it. The super
like funny, crazy outcome is that, if Rainmaker was totally banned from the US,
then we just double down on Riyadh, and then we, like, camp out there for five years
and come back weathered and, you know, wearing desert garb.
But...
Lawrence of Arabia mode. My Dune arc.
I watched Dune Two in theaters five times, bro.
Religious zealot comes to turn the desert.
I love it. That's great. Yeah. So no, I trust my team a lot.
We've talked a little bit about, um, just like the general
fear around cloud seeding. I think that right now with
health and MAHA there's just this general idea that like
anything new is risky. Yeah, fertilizer. Yeah, Luddites are having a heyday.
Yeah, bull market in Ludditism for sure.
But I mean, at the same time, like, you know,
you eat too much salt, your doctor will tell you
that's gonna hurt your heart.
And there's a million different things where,
even if it's safe at a certain level,
you take it up by 100X and it gets bad.
And there's just all these balancing.
And then also we do the mice studies, but does that really transfer? And what happens when
someone's vaping pink liquid for 50 years? We don't really know and we're kind of figuring it
out right now. What gives you confidence about the low impact of this?
So, you know, I really sympathize with people
that are worried about this because it sounds on its face
like crazy to be modifying the weather
and to be dispersing chemicals.
But to your point about like vaping stuff for 50 years
and not having longitudinal data on the health outcomes,
like cloud seeding, even though Rainmaker is very innovative,
even though a lot of new things have been done in academia that we're implementing from
like the last five years, cloud seeding is 80 years old, dude. This was invented in the
United States in 1945. GE has the first patent from 1946. People have done 80- or 30- or 50-
year studies, depending on the watershed, on the concentration of silver iodide that ends up in the water and the soil. And after decades of operation, you only see parts per
trillion of this stuff. Like you need super sophisticated instrumentation to even detect
it in the first place. So there have been no appreciable health impacts found to either
people or agriculture or the environment. And those studies, they've been done. And
like, should Rainmaker replicate those?
Should we continue to do it just to prove it out further?
Totally agree, but it's totally safe.
And if you think about the LD50,
like the lethal dose of silver iodide,
it's less than salt.
It's less than table salt.
It's milligrams per kilogram with salt,
and then it's 2,800 for silver iodide.
So this is a resoundingly safe material to be using
and the data from decades shows that.
Can you talk about a little of the history of,
I feel like when it's easy to look at the chemtrails thing
and be like, okay, well, like the most aggressive
conspiracy theory there is that it's like,
literally like mind control and it's like,
we're so divided in America.
I don't know.
The mind control drugs clearly aren't working because what are they mind controlling us
to do?
But there is a legitimate criticism that like when you fly a bunch of planes around those
contrails are emitting pollution and your skies get dirtier.
And I remember after 9-11 all the planes were grounded. It was the clearest day in the skies ever and there is some sort of like low key harm
and there's always these like knock-on effects.
I'm sure with the solar radiation management, same thing.
We have been experiencing global warming but we've also been experiencing global dimming
and those have been kind of counteracting each other in some way.
Can you just talk about some of the history of these things and,
and how you think about worrying about the second and third order effects of
cloud seeding?
Yeah, yeah, totally. So, um,
with respect to weather modification and climate engineering,
um, like the thing that I think everybody needs to realize is that we've been
doing it unintentionally for hundreds of years. Like build a city, then you have a heat island that affects cloud formation
and precipitation patterns. If you have a coal plant or a steel plant or a nat gas plant,
the steam and the aerosol you emit reliably creates more clouds and precipitation,
like tens of miles downwind as well. The emissions from our cars, right, like those have pollutants in them apart
from just like CO2 considerations. So we do modify the weather unintentionally all the time.
Rainmaker's thesis is that we should be modifying it intentionally to either, like,
unfuck the earth as it stands or make it more lush and abundant once we've done the former.
When it comes to knock-on effects, the common question is like, you know, if you're making it rain more
here, is that reducing precipitation downwind there?
Yeah.
Totally reasonable question. Like I think on its face that logic makes sense. Unfortunately,
the system's pretty complicated. It turns out that only 9% of all of the water that
traverses the atmosphere in the United States
precipitates over it.
The vast majority is either recycled by the oceans,
precipitates over the oceans, evaporates away,
and never condenses again over the US.
So if you just increase the utilized water
in the atmosphere, then it's purely positive-sum.
And you can select, to some extent,
for which clouds aren't naturally going to precipitate with the appropriate radar and probes. The evidence of
downwind drought is totally uncompelling. Nobody's provided reliable data there, but are people
concerned about it? Are they rightfully concerned about it? Sure, totally. And then the last thing
I'll say with respect to contrails,
there's this funny thing that people realized where if you fly a bunch of planes early in the morning
and then develop contrails,
you'll actually cool that area locally
and the planet to some extent
because it'll reflect sunlight
before it warms the planet up with the light of day.
And if you fly a bunch of planes at sunset,
then that will retain the heat
and kind of act like an insulating layer
and keep the earth warmer.
So like just plane flights, you know,
American Airlines, Delta, whoever,
they are doing weather modification unintentionally,
apart from any chemtrail stuff.
I'm just hoping to give a little bit more sanity
to the ways in which we're modifying the weather.
Earlier this week, there was a video that went viral
of a very cinematic desert in China
that has been reforested.
Do you think you'd have an easier time in the public eye
if you were just planting trees
or is any modification of our terrain
just a hot button issue regardless.
And do they have to do cloud seeding in addition to that
or are they just using traditional irrigation?
Like how does that, I wanted to talk to you
about that Chinese video that went viral
because it was so striking and it was like,
for me it was like I want that here for sure.
But I don't know how other people took it.
Yeah, it looks like you get like a thatched roof
in the middle of the Gobi.
Totally, yeah.
It felt like this idyllic little preserve.
You put a ranch house on that,
the golden retrievers running outside,
you're taking the horse down to the local.
Golden retriever mode.
Yeah, to the local saloon.
And it just seems like it's the West, right?
Again, you have new land that people can go live in
and that's just like a land of opportunity
and I think that'd be so cool.
But yeah, what's your take on it?
Dude, I mean, that's what the Central Valley was.
The Central Valley used to be deserts and swamps
and now it is the most productive agricultural region
on the planet.
It produces like 30% of all of the fruits
and then the majority of like the red
fruits in the United States. We used to terraform all the time. Like the Hoover Dam was an attempt
to terraform the West. I think that like people that, you know, like maybe it's a last man problem,
maybe it's Luddism, maybe it's just like general, like well no I won't say that.
Like we've just lost the desire to be great. I think in the United States we've lost the desire
to like build the future and see the future through. If I was just planting trees, yeah for
sure it would be easier. You need the water though, so unless you're setting up enormous conveyance systems, which China largely is,
and cloud seeding in addition to that, it's a non-starter.
There's a couple technologies, you know,
I mentioned last time, like soil amendments
that can get soil to retain more water, that would be good.
But, you know, trees, they emit dust that creates
like a majority of our clouds in some regions
and induces precipitation.
Cloud seeding, it mimics that natural process.
It's just with a material
that works a little bit better.
So, yeah, man, like I plan to plant trees in the future.
I plan to make as many things green as possible.
I hope that people get on board with the vision
to make deserts green and great,
but it's gonna take a minute.
I had a funny experience at Hereticon.
There was a guy talking about terraforming.
I'm blanking on his name,
but he put out this like amazing-
Tomas Pueyo.
Tomas Pueyo made this amazing presentation.
It was like 40 minutes long,
showing all the different places around the world
that we could terraform and how amazing it would be.
And one of the places that he highlighted was the Salton Sea and how you could
like theoretically like divert the Colorado River.
I've been there with Augustus. We went out there.
And then, and then I went up to him afterwards and I was like,
this is an amazing project. I would love to make the Salton Sea, like, Dubai,
which just, like, you know, it's, it's a...
Jordy comes up: how do we make money off this?
No, but I asked him, I was like, how do we actually do this?
You know, I was thinking like the way to do it
is through a hyper commercial project that says
all this land is worthless right now
because it's toxic surrounding this sort of evaporating sea.
Like if somebody were to come in and buy up all the land
around the Salton Sea, that's now very depressed
and then start lobbying to
actually make this stuff happen.
It feels like you need...
I asked him, how does this get done?
He was like, I have no idea.
Somebody should try it, but I don't know how they would actually get it done.
Is it really just one person caring that makes these kinds of projects possible, right?
Like in the case right now, there's like a good chance
that weather modification, cloud seeding, et cetera,
just would get banned if you weren't like
flying to Florida frequently.
Great man theory of history.
Great man theory of history.
But yeah, like it feels like, in this sort of DOGE era,
if we do enter a period of deregulation
in some areas, some of these projects might be possible,
but yeah, how do you think about the Salton Sea
opportunity and then just if we're even capable
of doing projects at that scale anymore?
So a related thing, interestingly enough,
is like I think that there should be way more
alternative finance that startups employ.
I promise this is related.
I'm gonna wait until Q1 or Q2 26 to do this,
but we're just gonna stand up a land fund
and we're just gonna start buying up the land
that currently isn't arable,
primarily in Arizona and California, because the same acre of land that's worth
like six grand in either of those places has the same soil quality that the Central
Valley does, which would sell for like 70 to 500,000.
Like we'll set up a subsidiary that we sell some equity in to get LPs involved
so that we can terraform that and then flip the land or keep it as an agricultural asset that is only made viable because of the water
that we bring. It has to be a commercial interest. I don't think that we... you know, maybe there's this idea
of a previous America that existed, I'm not
sure, where people did things just for the public benefit or because it was cool.
Now, yes, it has to be cool, but moreover, there has to be commercial interest. And yeah, I'm going to do exactly that.
And I hope that somebody beats me to it for the sake of all the benefit
that will come from it.
But if they don't, I'll make a lot more money because of them.
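Taking the per-acre figures from this answer at face value (they're spoken, rough estimates, not underwriting numbers), the implied uplift is roughly 12x to 80x before accounting for water and terraforming costs. A quick back-of-the-envelope:

```python
# Back-of-the-envelope on the land arbitrage described above.
# All figures are the conversational estimates quoted in the interview.
desert_price_per_acre = 6_000                    # ~$6k/acre desert land, AZ/CA
irrigated_low, irrigated_high = 70_000, 500_000  # comparable irrigated farmland

uplift_low = irrigated_low / desert_price_per_acre    # ~11.7x
uplift_high = irrigated_high / desert_price_per_acre  # ~83.3x
print(f"Implied uplift: {uplift_low:.1f}x to {uplift_high:.1f}x, "
      "before water and terraforming costs")
```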
This is hilarious, because you're going to be duking it out when you're buying that land,
because Casey Handmer, we had him on the show earlier, he wants to put solar panels all over it and turn it into a parking lot.
And you're gonna be like, no, I'll pay a dollar more per square foot.
But I, I... we can have the auction. We'll do the auction. Yeah.
Five dollars. Five dollars.
I do have a question about the cloud seeding stuff.
It feels like, I mean, all of this stuff is somewhat zero sum.
Like, we're not really creating new water.
We're kind of moving it around.
But desalination seems awesome.
I was looking at it pretty intensely a year ago.
And it seemed really difficult, honestly.
But desalination, like, what is your take there?
And it just feels like, you know, I would expect a Gundo company to be doing
this.
Like we have people working on nuclear now.
We have people working on solar.
You're working on cloud seeding.
Like desalination, if I heard, oh yeah, there's some hot startup who's working
on desalination, I wouldn't be like, oh, this is breaking my mind right now. I'd be like, of course that's the next
thing that the guys will go after in El Segundo, because they've kind of checked
the box on everything else. Uh, what's the state of desalination and what's your take?
Yeah. Um, desal is great. I'm not anti-desal. I don't think that we step on each other's
toes. Um, it is a modification of the water cycle.
Whereas they are taking salt water out of the ocean,
making it fresh and usable,
we're taking cloud water out of the sky
and bring it down to make it fresh and usable.
It's relatively efficient right now.
I think there's a lot of promise in catalytic desalination.
I'm not enough of an electrical engineer to come up with something
sufficiently innovative in the space, but I think that probably shows more promise than
RO. It's relatively efficient as it is. The problem with desalination for gigascale American
projects for a lot of the world that is commonly under supplied with respect to water is conveyance. We can set up a
huge desal project in LA. Sweet. No dice for Colorado. Doesn't matter for Nevada. Barely
even matters for the Central Valley because you have to set up hundreds of miles of huge pipes
and then maintain those pipes for as long as the project exists. And it's doable. The California
State Water Project is evidence of conveyance working, but it is crazy infrastructure and crazy eminent domain that goes into shipping
that water. So should California be more reliant on it? Totally. Could we like with tariffs or
something strong arm Mexico into giving us like the Sea of Cortez to desalinate for Arizona? Maybe.
But if you're in the interior of the United
States, the only way that you can make more water is if you bring it down from the sky.
And that's who we're primarily trying to serve. This was Blake Master's point that I really
enjoyed. He said, you know, how do we get water into the interior states, Nevada, Arizona,
New Mexico? Well, we're going to reroute all the water from California into those states.
And then California, the future of water in California is nuclear powered desalination.
And he said that like 10 years ago, haven't heard anyone try and build it.
Maybe he should have stuck with venture capital and back that company because I want it to
happen.
But maybe it'll happen soon.
That'd be great.
Okay.
Or no desal, no cloud seeding, just boiling the ocean.
Nuclear boilers on the surface of the ocean to make big enough clouds to fly inland. Oh,
interesting. Okay. So you just make clouds and then float them over.
Atmospheric rivers, right. Many people have wanted to boil the ocean, but not literally, so we need more literal ocean boilers for sure.
I have a more personal question. How are you doing with all this?
I care about you.
I'm lucky to be an investor in Rainmaker,
but I care about you even more on a personal
level. I think your prefrontal cortex is like,
about to maybe fully develop.
Right. But like you're under a lot of pressure for
being like, you know, in your mid 20s, like
I, you know, anybody who's at a post go viral or sort of break
containment knows what happens.
Like your, your message requests on every platform are probably
like, you know, very dark.
Um, and, uh, you know, are you, are you, uh, you know, you seem like
you're handling it incredibly well, but, uh, does it ever, does it ever get to you?
That's really, that's really caring.
Thanks for asking, man.
You know, like I view Rainmaker as the best means by which I can serve God in my lifetime.
And I owe everything in my life to Jesus Christ because of all that he's done for me in this
life and then hopefully in the next. So this
situation that I'm going through, no big deal at all. Like if it ends up on the other side with,
you know, ideally me having gotten to educate and convince some people and bring them more water
in places where they need it, I am happy to take the punches and the death threats in the interim. And even if it didn't work out,
I'm gonna give it my all.
And I was a high school debate kid.
So I live for this kind of dorky.
Do not mess with the high school debate kids.
They will go to war with you on the timeline.
Yeah, I mean, we kind of predicted, like,
the rise was so quick that the mood would shift.
But I didn't expect that this would be the instantiation of that mood shift around the vibe of the company, right?
Yeah, and I just have to say, like, there's so many people rooting for you, right? Like you really are...
you know, your critics that call you an industry plant, you know, in many ways, like, you are,
you know, you are a product of our industry, and there's just a lot of people,
you know, rooting for your success. So
proud of you for not letting the...
Never back down, never back down.
Always claim victory, always claim victory.
Don't let the haters get to you.
And I would say, you know,
like have a good rest of your Friday,
but I'm sure today's basically a Monday
for the Rainmaker team.
Job's not finished.
Yeah, I'm so sorry that the squat rack had to go.
How are you getting fitness in these days then?
That was tragic, dude.
I am walking to and from the office
'cause I live four blocks away in the Gundo.
And otherwise I'm going hard skinny priest mode.
I'm big on skinny priest mode.
Skinny priest mode.
You've been the muscular warrior.
I look forward to when you exit this
and become a portly merchant.
It's going to happen.
You're going to IPO the company, sell it all, and become a private equity guy,
roll up the rest of the farmland or something.
The portly merchant is the future for sure.
Now, I'm excited to have you back on when you're doing some of the land stuff,
because that does feel like verticalizing and capturing
the full value that you guys can create over time.
It's just very obviously you should
own the land that you're raining water down on
and gain the full economic benefit of the work.
So it's fantastic having you on.
Yeah, this is great.
As always.
Thanks guys.
Our weatherman.
Our weatherman.
We needed a weatherman.
We got it.
Weather forecasting for TBPN.
Yeah, yeah you should.
You can't do,
forecasting would just be too easy for you.
Cause like, yeah, there's gonna be rain
like in that square mile over there.
Yeah, we need the Zoom background, like the green screen, and, like, the map. Next time you're on, fake Zoom background
with the map of LA, and you can tell us where it's gonna rain
this weekend. We'll have you on Friday, hang out, it'll be fun.
Great. And we'll talk to you. Talk soon. That's great. And so next up we have Mike Knoop.
He founded Zapier. He also founded
Arc Prize, which we've talked about on the show before. So Arc Prize,
I highly recommend everyone go and check it out and try the puzzles. So Arc Prize is an
AGI eval, an evaluation that is designed to be
hard for AI but trivial for human beings.
And so they look like puzzles,
and there's a grid of squares that are different colors,
and your goal is to recreate the pattern,
or understand the pattern,
it gives you a whole bunch of references,
and then you are tasked with creating the solution
to the puzzle and it's been very, very difficult for AI.
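For anyone who wants to see what one of these puzzles looks like under the hood: the public ARC-AGI tasks are distributed as JSON files, each with a few demonstration input/output grid pairs and a test input, where every grid is a list of lists of small integers standing in for colors. A minimal loading sketch, assuming a local copy of the public task files (the specific filename below is hypothetical):

```python
import json

# Each ARC task file has "train" demonstration pairs and "test" inputs.
# Grids are lists of lists of integers 0-9; each integer represents a color.
with open("data/training/0a1d4ef5.json") as f:  # hypothetical task filename
    task = json.load(f)

for i, pair in enumerate(task["train"]):
    rows_in, cols_in = len(pair["input"]), len(pair["input"][0])
    rows_out, cols_out = len(pair["output"]), len(pair["output"][0])
    print(f"demo {i}: {rows_in}x{cols_in} grid -> {rows_out}x{cols_out} grid")

test_input = task["test"][0]["input"]
# A solver's job: infer the transformation from the demos and
# produce the correct output grid for test_input.
```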
AI has struggled with this even though it looks like
something that should be able to be one shot
and Mike is in the studio now.
Welcome Mike, how you doing?
Hi guys, thanks for having me.
I'm doing good, how's your Friday going?
That's great.
How's yours?
Quite excellent.
Spent a lot of time with the research team.
Oh, awesome.
Can you give us just a brief introduction, your background,
and then how you wound up working on ArcPrize
and what you're excited to announce most recently?
Yeah.
So I guess I'm probably best known for co-founding Zapier
about 15 years ago, an automation company.
That's what I've been working on the last 13 plus years.
And I had sort of like, call me AI curious.
I went sort of all in on AI early January, 2022
when the chain of thought paper first came out.
All these like LLM reasoning benchmarks
were sort of like spiking up in performance.
And it got me really curious,
like are we on path for AI or not?
And this ended up leading to Zapier deploying,
I think a lot of AI products really early into the market.
And there was this trend I had seen and heard,
I spent probably hundreds of hours
talking with customers around,
playing AI agents, Zapier's been deploying
for a couple of years now.
And the feedback was always consistent.
Hey, I get the hype, but they're not reliable, they don't work two out of 10 times.
And that just doesn't work for these unsupervised automation products. And this was sort of
in contrast to all this like AGI hype, right, that I see on Twitter all day long. Yep, same
thing. I'm sure you guys saw, you know, in the 2023, 2024 era. And so I tried to set out
to figure out how do I, like, explain this, because, like, I have two sets of lived experiences that
sort of don't match. And this is how I rediscovered François Chollet's
ARC benchmark that was originally published in 2019, and I sort of expected it
to have gotten kind of beaten by this point, you know, first looking into it,
probably, again, 2023, 2024. Surprised that it basically hadn't, and not only hadn't it
been beaten, there'd basically been no progress on it, which I thought was
really fascinating given the fact that we've like scaled up these language model systems by almost like
50,000 times over the last, you know, three, four years prior. And so that was kind of the genesis
of me leading to meeting Francois and kind of pitching with this idea of like, hey, I think
this is literally the most important unbeaten AI benchmark in the world. I think it makes a really
important statement that like pre-training scaling alone is not sufficient to get to AGI
And more people should know about this fact
And so we launched Arc Prize together to try to sort of raise awareness of the benchmark and just honestly inspire more people to work on
new ideas again towards AGI.
Can you talk a little bit about the prize money and then what happened with Arc Prize one and then what's happening now with
the second iteration? Yeah, so when we launched last year, you know, like I said, there'd been very little progress towards it,
I think GPT-4o had scored 4% on the benchmark.
That's four years in, five years in.
And again, like an eight-year-old
should be able to do these?
Yeah, especially ARC-AGI-1.
It turned out to be, in retrospect, quite simple.
It's a very binary sort of fluid intelligence test.
And yeah, all the knowledge you need
to solve this benchmark,
you acquire very early on in childhood
development.
You can take the tasks, in fact, yourself:
if you go to arcprize.org, we have a player up there.
You know, I think what's special about Arc is that it is a,
the sort of design concept of it is it targets something
that's quite straightforward and easy and simple for humans,
but hard for AI.
And this is in contrast to every other AI benchmark
that exists today,
what you're trying to like challenge this like PhD plus plus
frontier, you know, like frontier math,
or humanities last exam, you know,
these are sort of AI benchmarks that you really do need
to be like PhD level plus to be able to even solve as a human.
And in contrast, Arc makes a very different claim that, hey,
there's still a lot of very straightforward
for human capability that these frontier AI systems don't have.
And so that's why we launched Arc Prize.
During the contest, I think one of the things
Arc Prize 2024 will be noted for is probably
the introduction of this test time adaptation method.
That was some of the papers that came out from the contest.
We had a big million-dollar prize in order to beat the benchmark to get to 85%. No one did.
You also have to do it with a very high degree of efficiency. And we kind of expected that
was going to be the case. We also had this paper track where people could submit new
ideas and push the conceptual frontier forward, which is where most of the coolest progress
came out from last year. And then at the end of the year, obviously last fall, we started seeing things like O1 and O3.
And this is a very big update, I think,
because systems like O1, particularly O1 Pro
and things like O3, are not just purely scaling up
language models.
These are not language models anymore.
These are really AI systems.
They have a big model as a component of the system, but they have some sort of synthesis engine on top
that allows them to reuse their existing pre-training knowledge
at test time to recombine the knowledge.
And that allows them to make significant progress towards ARC.
And we saw it with O3 in December,
which scored, at a pretty high degree of efficiency,
about 75% on ARC-AGI-1,
so they're not quite at 85%.
And then OpenAI tested
an even higher compute version of it that probably was a couple
million dollars to test, and they got 87%, I think. And so that's kind of where we
wrapped up last year. We've been hard at work over the last couple of years.
Just real quickly while you were saying that fantastic setting the table. I went to arcprize.org slash play.
I did the daily puzzle.
I still got it.
I'm still at a job.
You could do it at home if you want arcprize.org slash play.
Do you think with all the billions of dollars
flowing around the ecosystem, do you think arc prize
is capital constrained in any way?
Like would we have seen better results to date
or is this sort of social status of just like achieving?
Just kind of like scaling, are we compute bound here
or is it algorithm bound?
Right now I think Arc AGI 2 that we just introduced
this week on Monday is pretty direct evidence
that we need new ideas still to get to AGI.
We shared, I don't know if we can throw it up
or we can put it in the show notes or something.
There's a chart I shared on Twitter
that shows the scaling performance
of even frontier AI systems like O1 Pro and O3
against the version one of the data set
that we've been using for the last five years
and then version two that we just introduced this week.
And basically it resets the baseline
back to 0% for language models.
Pure LLM systems are scoring like 0% now, again, on ARCv2.
Single COT systems like R1 and O1 score like 1%.
And the sort of estimates we have right now for the frontier systems that are actually adding pretty sophisticated synthesis engines on top,
like O1 Pro and O3, are single-digit percentages on their efficient compute settings.
So I think there's a very interesting point that like,
you know, we kind of moved from this regime
where people were claiming,
oh, we're just going to scale up language models, right?
More data, more parameters, we're going to get to AGI
and people realize that that's not quite the story now.
And then there's a new story that's emerged
over the last like five months, which is,
oh, we're going to scale up this test time compute
and that's going to get us to AGI.
And I think what V2 shows is that that's not quite either.
We still need some structural new ideas
in order to like get to a regime where we can scale up
to actually reach human performance on this stuff.
Do you think like with how much human capital
is sort of concentrated in San Francisco,
like do you think that, in many ways,
the sort of, like, geographic concentration
of so many of the brightest minds in AI is sort of, like,
potentially even holding back, like, new random ideas?
And that's why Arc can potentially sort of
if we need new ideas, like maybe they don't come from the sort of the center
of the universe and maybe they come from, you know, some random
kid on the Internet who's just like,
you know, has the time to think, you know, completely independently and...
This feels kind of scrolls adjacent.
Yeah, exactly.
You put this idea out and a bunch of people can take
wild, wild swings at it and you get just new idea generation.
I gotta say thank you to Nat Friedman, you talked about the Vesuvius Challenge.
Literally, I'm very inspired by what they were able to accomplish and that was one of the
motivations for actually running the prize the way we did. I think yes, part of the goal of
running our prize was specifically to try and reach independent researchers and teams and like
give them a problem to work on that they could actually make meaningful frontier progress on.
Right. I think the, especially in academia in the last maybe five years, there's been this
disheartening belief of like, hey, can we really
advance the frontier because like, you have to have this
billions of dollars in funding to scale up these language
models. And I think Arc shows that like, yeah, there actually
are frontier problems that are unsolved, that individual
people and individual teams can actually make a difference on
today. You know, part of our whole goal of launching Arc
Prize was to re-inspire a lot of independent researchers to work on things like this
and bring new ideas to the fold.
I remember when, uh, when ChatGPT got a computer and it was able to
write some Python code, then execute it.
Are there restrictions on kind of custom systems that are LLM or reasoning model driven, but fine-
tuned to help with the execution of ARC Prize tasks?
Is that breaking the rules or is that something that is actually encouraged and fine?
How do you think about like, I guess it all, it all, um,
boils down to like overfitting on this problem, but, uh,
it seems like even with the incentive to overfit, it hasn't really happened yet.
Yeah. Um, our...
this is one of the reasons ARC v1 lasted five years: it
had very good original design built into it, with a public test set
and a private test set. Sure.
The private test set really prevented folks from being able to sort of overfit on it.
And so there's actually two tracks for the Ark Prize Foundation. We have the contest track.
This is on Kaggle. This is the like big grand prize. All the prize money is attached to this.
Kaggle graciously donates a lot of compute to allow us to host there.
And the grand prize is basically to get to that 85%
within Kaggle's efficiency limits.
So you get about $50 worth of compute
per submission that you send in.
And this is like a pretty high bar for efficiency.
And in fact, this is not an arbitrary bar.
We do think that efficiency is actually
a really, really important aspect of intelligence.
You know, you can brute force your way up to intelligence,
but we really do wanna be shooting for like human targets
and efficiency for this stuff. That's the contest. When we launched last year, there was a
lot of demand from just the community, the world on like, Hey, I, okay, I get that. Like I can't
run my big language model on this, you know, in Kaggle, but I really want to know how do
frontier AI systems do. And so we launched a second track that's hosted on arcprize.org
where we benchmark and test all the like frontier commercial systems to showcase what are they, you know, capable of doing basically these
existence proofs that I think will eventually, you know, filter down to the
higher efficiency solutions and open source solutions that can be run on
something like Kaggle.
Um, and even there, like within the sort of same efficiency, uh, you know,
accuracy specs, we still haven't seen any frontier AI system beat, uh,
ARC-AGI-2 or ARC-AGI-1.
Yeah.
So there's still a long way to go.
What does that track actually look like?
Does that mean that the team from OpenAI is writing the specific prompt?
Are they doing prompt engineering, or is it your team just taking the API
and giving everyone kind of the same goal?
Is there any standardization there?
There is.
We, so ARC Prize does most of the testing today
ourselves, like we basically either get early API access
or wait for the public access.
And we have a GitHub repo where we just have a standard,
very simple prompt that we use to baseline across
all the sort of frontier systems that we test.
OpenAI situation was a little different
because they had reached out to us and said,
hey, we think we've got a really impressive result
on the public eval set, and we'd like your help to verify
on a semi-private set, which is what we created
that data set to be able to do.
And in that case, there was really, you know,
very little prompt engineering at all.
It was mostly just a verification of what we'd seen
from them.
So yeah, in all of these cases, the amount of prompting
that goes into these is extremely minimal.
We're basically just giving it the grid with numbers
and asking it to solve it directly.
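For readers who want a concrete picture of what that kind of minimal baseline prompt could look like, here is a rough sketch. It is not the actual prompt from ARC Prize's GitHub repo; it only assumes the publicly documented ARC task JSON shape (train/test pairs of grids whose cells are integers 0 through 9), and the wording and the toy task are invented for illustration.

```python
# Rough sketch, not ARC Prize's actual baseline prompt; it only assumes the public
# ARC task JSON shape: {"train": [{"input": grid, "output": grid}, ...],
#  "test": [{"input": grid}]}, where each grid is a 2D list of ints 0-9 (colors).
def format_grid(grid):
    # Render a 2D list of ints as rows of space-separated numbers.
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def build_prompt(task):
    # Show each training input/output pair, then ask for the test output.
    parts = ["Each number 0-9 is a color. Infer the transformation rule."]
    for i, pair in enumerate(task["train"], start=1):
        parts.append(f"Example {i} input:\n{format_grid(pair['input'])}")
        parts.append(f"Example {i} output:\n{format_grid(pair['output'])}")
    parts.append(f"Test input:\n{format_grid(task['test'][0]['input'])}")
    parts.append("Test output:")
    return "\n\n".join(parts)

# Tiny made-up task (rule: recolor 1 -> 2), just to show the prompt shape.
toy_task = {
    "train": [{"input": [[1, 0], [0, 1]], "output": [[2, 0], [0, 2]]}],
    "test": [{"input": [[0, 1], [1, 0]]}],
}
print(build_prompt(toy_task))
```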
Very cool.
Can you talk about consumer AI agents?
We've talked about them on this show.
Like, it feels like, you know, once a week,
someone comes out and says they're building
the consumer AI agent that's gonna like book you a hotel
or book you a flight, or do these things that sound really simple,
and then nobody's delivered to our knowledge
a truly magic, agentic consumer experience
to book a flight.
You could even use OpenAI's Operator to do this,
but even when you're watching Operator work,
it doesn't feel truly magic yet.
And just given, in many ways,
like Zapier is just like such a foundational company
when it comes to automating anything on the internet.
I remember that was magic to me as a kid, well, I say a kid, really in college, right?
It's like, wait, data came in here and then it just went and did this other thing
after I did this one thing. It feels like a superpower, right?
And so in many ways, it's like criminally underrated in terms of getting to that sort of
agentic internet. But why haven't people been able to? I even saw
Perplexity saying last week, like, we're building, you know,
consumer agents that are gonna book your flight.
So like, and Apple's promised this
and everybody's promised it,
but we haven't seen that magic experience yet.
And I'm curious why you think that is.
We've been trying to deploy agents
for two years at Zapier.
And this was kind of what led me to sort of originally
getting interested in ARC, is, you know,
that's this reliability problem.
Language models fundamentally are stochastic
in terms of how they work.
And so you just never get that 100% accuracy
directly from the model without adding
a lot of extra guardrails
or processing on top.
We've done some of that for Zapier.
The way that I kind of think about it,
you've got these concentric rings of use cases
that kind of get unlocked as reliability
and consistency goes up.
And today we're pretty much still in,
and this is true at the frontier,
we're pretty much still in the kind of concentric ring
of like personal productivity and team-based automation,
where the risk to failure isn't very high, right?
Like if you're gonna start deploying one of these
like completely unsupervised agents into the wild,
you're gonna have a pretty high bar, right?
If there's a sort of high degree of risk
if this thing goes wrong,
you know, you'll burn customer trust or brand trust
or something like that.
So that's what we've been seeing is companies and users
slowly increasing those concentric rings
as the technology is improving.
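As a purely illustrative aside, one way to picture that concentric-rings idea is how quickly per-step reliability compounds away over multi-step tasks. The thresholds and the outer two tier names below are invented, so treat this as a mental-model toy, not anything Zapier actually uses.

```python
# Toy illustration of the "concentric rings" mental model: which automation tiers
# open up as end-to-end agent reliability improves. Thresholds and the last two
# tier names are made up for illustration.
RINGS = [
    (0.90, "personal productivity"),
    (0.99, "team-based automation"),
    (0.999, "customer-facing workflows"),
    (0.9999, "unsupervised agents in the wild"),
]

def unlocked_rings(per_step_accuracy: float, steps_per_task: int):
    # A chain of stochastic steps compounds: overall reliability decays with task length.
    overall = per_step_accuracy ** steps_per_task
    return overall, [name for threshold, name in RINGS if overall >= threshold]

print(unlocked_rings(0.98, 5))   # ~0.90 overall: only the innermost ring unlocks
print(unlocked_rings(0.999, 5))  # ~0.995 overall: team-based automation unlocks too
```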
I actually think we're going to start to see agents start
to work this year, specifically because of progress in Arc.
I think this is an underappreciated fact.
People always ask, hey, doesn't ARC just look like puzzles?
What's the economic utility of this stuff? One of the most important things
that Arc tests for is your ability to adapt to problems you haven't seen before. That's really
the spirit and the essence of what it is. It's saying, hey, can you solve problems you didn't
just memorize, but where you actually use knowledge you did memorize to solve new things that you've
never seen before. This is fundamentally the same thing we're testing for in terms of
generalization or reliability with agents. So I think because
we're starting to see AI reasoning systems that are able to make progress against ARC-AGI-1, albeit
relatively inefficiently, the capability now exists. There are AI systems out
there that exhibit some degree of fluid intelligence. That's going to start increasing their reliability,
and more and more of those concentric rings
are gonna get unlocked, I think, starting this year.
I wanna talk about just like AI, AGI, ASI metrics.
I feel like ARC Prize is a fantastically important
benchmark, I agree with you on that.
Ray Kurzweil has been benchmarking against just like
flops to human flops, basically.
He's put the singularity at 2045.
He kind of nailed the Turing test date, I believe.
But I've been kicking around this idea of like,
maybe the real test of like AI hitting some sort of like
tipping point is just like, how much economic value
is being created by AI. And once that hits some sort of like 10% of total global GDP or 50%, then we've reached
the AI singularity or something.
Are there any metrics that you're looking at for kind of this like intelligence explosion
ASI, AGI, but outside of an individual moment or, you know, application use case benchmark?
Something more global and human relevant.
You know, pragmatically, you know, I think the definition that I use for AGI these days
is basically when we stop being able to easily find tasks that humans can do that AI can't do, we've got AGI.
And I think that's, like I said, a bit of a pragmatic answer.
One of the things that surprised me about ARC-AGI-2
was the relative ease
with which we were able to create it.
Given this big moment in December
of like o3 beating ARC-AGI-1,
it was actually not too hard for us to come up with tasks
that we were able to sort of verify. We actually did a controlled human study to make sure
every task in the V2 data set was solvable by at least two humans on the same rule set.
And we were able to find that relatively easily. So I think that shows we still have ways to
go. I think once we hit that bar, though, when the gap between easy-for-humans,
hard-for-AI is zero, I think it would be extremely difficult for anyone to argue the opposite side, that we don't have AGI. I'm not,
I don't find the economic like measurements quite as useful for understanding the capabilities
of what frontier AI does. I think it's a good measure of impact. And I have no problems
using it for that purpose. But I think if you want to use an economic measure to make a statement about capabilities, the challenge is
that it obscures that there are multiple ways to get to that outcome. And this is one of the
challenges with language models. Language models generally work in a memorization-style
regime where they're learning lots of data, they're able to apply it to very similar types
of patterns that they've seen before, but not novel patterns, which is what ARC-AGI shows.
And like those have economic utility.
It turns out that intelligence that is just memorizing
actually does interesting things for us,
and that's what's making money with the current regime
of AI today.
And so I think if you really wanna like understand
capabilities of frontier AI systems
to make predictions based on the capabilities,
you need more precise, narrow like wedges
on that capability. And that's really what ARC-AGI tries
to be, is this wedge on, you know, do these AI systems
have the ability to actually adapt to problems that they
haven't seen before and acquire skills, you know, rapidly and
efficiently to apply to new things that haven't been done
before?
A little, a little more fun question, what was your reaction
to this week's sort of Ghibli moment?
It felt like we talked about this on the show earlier.
Sputnik moment.
The funny thing for me is, so we had a couple of posts each
that sort of broke containment out of traditional teapot
or whatever.
And there was a lot of people quoting.
Like normies.
People quoting our posts being like, all right, the jig is up.
What filter app did you use for this?
So there's still people out there that are just completely unaware of progress.
But for those that use these tools every day, it felt like this magic moment where people
are just one-shotting these image generation models and getting these sort of incredible outputs.
But, and yeah, just felt natural.
And one that we're excited about,
like, I guess my question to you guys is,
do you think that the degree by which we saw
like the ghibliization of like Twitter and X over this week,
do you think that was the degree of baseline interest in that series
and movie prior to this week? Or do you think, for most of the people that were posting
that stuff, this was the first time they'd ever really gotten exposed to that kind of content?
Yeah, I kind of ran a test on this inadvertently, because I posted both a scene from Oppenheimer
Ghibli-fied, it got like 30,000 likes. And then I posted an actual picture from Spirited Away,
a real still from the actual movie, 24 likes.
And so there's a little bit of like-
I love the fact you're running experiments,
that's fantastic.
Yeah, and so there's a little bit of like,
okay, it's not just the filter, it's this recontextualization.
I thought a lot of it was like,
there's a reason we're
using Studio Ghibli specifically, because there are other art styles that you've been able to
recreate with style transfer for a long time, but Studio Ghibli fits this
perfectly, it doesn't quite go into the uncanny valley. And if you say to the AI,
hey, I want you to recreate a picture of me, but use a Hollywood
VFX pipeline to recreate the 3D model and then do exactly what they do in the Avengers
and take it to the top level, it'll just look like a photo, because VFX has now become photoreal.
And so you can't go full photo real or else it just doesn't look like a filter at all.
And if you go to stick figure, it's unimpressive. And so you need this like perfect art style
that looks clearly different, but still recognizable.
And the level of alteration that happens
in Studio Ghibli's animation style
is that perfect intersection where it feels like
impossible to just create with a vanilla style transfer
or like edge detection and just pixel manipulation.
You're not just recoloring, it's not a filter, but at the same time, it's not
so photoreal that it's just like, oh yeah,
you took a picture of me like this and then it looks like this, and it's like, that's impressive,
but it just looks like you took the photo like this. I don't know, that was my take.
Yeah, I guess, you know,
my own answer to that is,
I think it significantly increased the awareness
of like this art style, this media.
Totally.
I don't know by how much, but clearly like,
people are doing it who've never even like seen
Spirited Away or heard of the director
or like anything like that.
And so I do think there's a really important,
you know, thing around the ease of use of the tools,
that it actually gets people, I think, exposed and empowered to
actually do this stuff on a very mass scale and very rapidly.
Like I think what we saw this week shows that ease of use actually
really matters for mass adoption on this stuff. Yeah. Yeah.
I think I see it.
What I'm waiting on is the human creativity. Like we had
basic AI image generation for a while, and then we got the Harry Potter Balenciaga
video.
And that was clearly AI generated, but the idea to combine Harry Potter and Balenciaga,
which are these two very orthogonal concepts, that creates the humor, and that came clearly
from a human.
And so I'm excited to see where this goes
to where I will see a Ghibli or a Ghibli inspired video
and be like, oh wow, this is actually entertaining to me,
not just a tech demo.
Jordy, what was your take?
So I'm curious to get your take.
So if you were running an image generation model,
it was a bad week, unless your users can prompt and
get that quality of output that consistently. And there's just sort of a distribution
effect too, where I'm sure the user base increased dramatically
just off of this one meme cycle.
So where would you, you know,
if you were running one of these sort of image generation
models, where would you go?
And do you think that some of these players
should be thinking in a more weird way,
like kind of trying to generate these new ideas
versus making like sort of derivative
foundation model efforts?
There's an interesting tie into things like Arc
that we were talking about before
and trying to assess capabilities
and pushing new ideas forward, right?
Like, this is steering the conversation
a little differently from how you sort of framed it there,
but like, you know, one of the risks of
a frontier of AI that is dogmatic in its view of scaling up is that it's going to produce systems
that end users are using that all look the same and have the exact same capabilities.
ARC shows, like, hey, there are still frontiers that are unsolved,
there are interesting things, here's a benchmark that you can go measure against and actually
direct progress and inspire folks towards,
that potentially could have some sort of large step function
change in capability of what these systems are actually
able to do.
You want people, I think, pushing and exploring
that frontier.
And measures of benchmarks, I think,
are actually good ways to help direct and guide research
attention there.
And I think, I don't know the full details
on how the new image generation from 4o works,
but at a high level, it does seem quite structurally different from the diffusion approach that even DALL-E 3 and
a lot of other, you know, image generators are using. They're doing some sort of
tokenization and decoding system. You know, they clearly have tried
something new that didn't exist before this week that's allowing
them to get these frontier results. I think we want to see basically that mindset continue
to get pushed across like the entire field, both on, you know,
media, obviously, but also on the sort of reasoning and AI side.
I think that's gonna unlock a ton of use cases where people have
sort of written off AI today, they're like, okay, I
think I know what AI can do and can't do. And, you know,
now I'm comfortable in my sort of, you know, my job or,
you know, the things I'm gonna ask it to do.
And I think there's still a lot that we want it to do
that it can't yet.
And things like ARC are hopefully useful measures
towards those tasks.
Yeah, my reaction was just generally like,
if you have a hundred million dollars in the bank
and you were trying to do
what OpenAI is now doing remarkably better
than you are doing, then maybe you need to focus on true innovation
and trying to create these novel approaches
to do said thing,
otherwise it may be difficult to compete.
Or maybe some unique distribution or something.
We talked about the enterprise.
I had this hot take a year ago.
I think anyone who's just doing model training at this point
is lighting money on fire.
If you really wanna make a unique difference,
especially if you're a small startup, like a founder,
like you gotta go take an orthogonal approach.
You gotta try something different
than what everyone else is doing.
That's the only way you're gonna be able to potentially,
like I think capture attention
and provide a lot of new value to the world.
Yeah, I know we're probably over time,
but there are no capital constraints
in early-stage AI right now.
Right. And with Zapier, I think one of the number one
reasons that I would see you guys in headlines is another story of like,
oh, Zapier raised X, you know, single-digit millions or whatever.
You want to hear the funny quick quip on that? We raised about a million bucks back in
2012. We never spent the money.
By the time we actually got the round closed, figured out who we wanted to hire, got the payroll
started, like revenue had caught up.
And so literally, I think you could trace every dollar we
raised all the way through. Yeah, that's sort of, like, it's
potentially a problem right now that people aren't forced to be
hyper-creative because they just are like, oh, well, we have
30 million dollars, we might as well, like, we're supposed to spend
it. And like, spending money to be innovative is different than
like, you know, like being innovative because you are
constrained on the capital side.
Yeah, I have a follow up here. How should we be thinking about
Gary Marcus these days? He wrote that deep learning alone isn't
getting us to human-like AI; he's been an advocate for symbol manipulation.
He's been kind of on the outs in AI circles.
Rich Sutton wrote the scale-is-all-you-need essay,
'The Bitter Lesson,' of course.
But we're in this weird scenario where we're like,
we've done the scaling, we're still bullish on scaling.
We are gonna build bigger data centers and do bigger training runs. But then we might also need
new algorithms. So he's like, kind of right. How do you interpret, maybe not just his legacy, but
just the puzzle of how scaling fits into all of this? I generally think he's been more right than
wrong. I think if you just take a limited five-year view on this, from 2020 up until the end of 2024,
you know, I think he was generally right, like he was making the right points. You know,
I think one of the reasons I personally find things like Arc more useful is they provide a
direct way to do sense finding on this stuff. You don't, I don't have to like rely on my trust
of another individual or another human in order to make statements or build my confidence or intuition
or my personal model of the world based on trusting someone else's analysis.
I can just look at reality.
ARC is a point of contact with reality.
We can have AI systems go try to do this stuff,
have humans try to do it,
and measure the difference and actually look.
And I've always found that going straight to the truth is a much faster way to get to
the frontier of knowledge.
If you
really want to know what's true or not, you just have to look for things like that that
measure it, as opposed to relying on proxies like other folks' analysis. But yeah, I would say Marcus
has generally been more right than wrong. And for what it's worth, I think actually
the bitter lesson is somewhat underappreciated, I think it often gets misinterpreted. It makes a statement
about search and learning as these general methods that scale. But it also makes the key point in the
paper that, hey, the thing that we are actually
applying search and learning on top of is an architecture that
was invented by a human in the first place. The core idea of
the thing that we are scaling fundamentally came from
a person, from a human, and that's still true today. And I think
that is very inspiring, even in the current regime we find ourselves in here in March 2025, where,
yeah, I think we actually do need some idea changes. I think we need some structural changes
in terms of how the architecture, how the algorithms work here, in order to
beat something like ARC-AGI-2 at a high degree of efficiency. And yeah, there's going to be a
scaling component to it. But like, don't miss that like,
ah, yes, there's actually an idea component too
that often gets like kind of brushed over.
Final question.
Who do you wanna highlight that you feel like
is doing important research in AI
that's maybe under hyped, but not super online,
but whether they're attacking Arc
or just generally doing work.
Because there's a lot of people that love the hypercommercial approaches to AI,
going and working at the labs, but it's totally valid if you just want to do research
somewhere and just focus on that.
There's probably a few individuals that I think are doing really interesting work in and around program synthesis, which is a sort of parallel AI paradigm to deep learning. I actually don't think either is sufficient. I think some merger of the two is what's necessary to get to AGI.
That's a story for another day. But there's quite a few people that are sort of working in this alternate paradigm that are doing some interesting work. On Twitter, this person is extremely online, but Victor Taelin is a really good follow
on Twitter.
Been working on this sort of alternative system called HVM that does this crazy enumeration
of program synthesis really quickly.
There's a couple of academics that I respect a lot.
Josh Tenenbaum at MIT, Melanie Mitchell at the Santa Fe Institute are two folks
who've been really deep in this, the Arc world and the program synthesis world for many years and have
sort of cultivated and stewarded some of the community over the last five years,
working towards new ideas. So I really do respect and appreciate those folks for that.
I have one last question and then we'll let you go. Would you recommend against learning to code and do you think it's possible to try and build
an ARC prize for programmers?
A task that a novice or reasonable programmer could do
that no AI could solve?
Yes, in fact, let me answer the second first.
ARC is that challenge itself.
So one thing that is somewhat maybe confusing
is like we present the puzzles very visually for humans to take, right?
They look like grids, you like draw the pixels.
When these challenges are presented to computers, it's not an actual image processing challenge
at all.
It's just a 2D list of numbers.
It's a matrix of numbers that represent the input grid.
Each number, zero to nine, represents a color.
And then you get an output matrix for the output. And your job as a programmer is to write a program that maps the input to the
output. And that is absolutely a challenge that every human programmer today could do, because
they'd look at this, you know, look at the pattern and say, okay, I'll just write the program
that transforms this, I can figure out the rules, because I'm a smart, intelligent human.
I have the capability of adapting to problems that I haven't seen before.
And this is actually a program synthesis challenge.
You literally are asking a human
to write a program to solve it.
And that's the same thing we're asking the AI to do as well,
is to produce a program that can solve it.
And things like O3 are kind of unique
because they're language programs, right?
They have this chain of thought.
You can think of a chain of thought as a program
in English that transforms the input to the
output. But yeah, literally, Arc is the second thing that you asked for there.
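To make that concrete, here is a toy sketch of what "writing the program by hand" looks like. The task, its grids, and the hidden flip-left-to-right rule are all invented for illustration; real ARC tasks are harder, but the shape of the exercise, spot the rule from a few train pairs, then write a function that maps input grids to output grids, is the same.

```python
# Hypothetical illustration of ARC-style program synthesis by hand: a made-up task
# whose hidden rule is "flip the grid left-to-right," with grids as 2D lists of ints 0-9.
train_pairs = [
    ([[1, 0, 0],
      [0, 2, 0]],
     [[0, 0, 1],
      [0, 2, 0]]),
    ([[3, 3, 0],
      [0, 0, 5]],
     [[0, 3, 3],
      [5, 0, 0]]),
]
test_input = [[7, 0, 4],
              [0, 9, 0]]

def solve(grid):
    # The "program" a human would write after spotting the pattern:
    # mirror each row left-to-right.
    return [list(reversed(row)) for row in grid]

# Sanity-check the candidate program against the training pairs before trusting it.
assert all(solve(inp) == out for inp, out in train_pairs)
print(solve(test_input))  # [[4, 0, 7], [0, 9, 0]]
```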
What about the first question, should you learn to code? I haven't, I guess, thought
super deeply about this. My hot take is, yes, you should still learn to code, primarily
because it gives you leverage over technology today. And yeah,
like I don't see that leverage over technology going away anytime soon, particularly if you want
to work on like the frontier of this stuff. And so, you know, I sort of think that the arc of
humanity's history has generally been to produce tools that give individual humans more
leverage over the universe around us. And I still think code is that thing today. I think AI will probably eclipse
it, but I don't think code is going to go away in that future.
Got it. Well, thanks so much for joining. This was a fantastic conversation. We'll have
to come back. I know that there's going to be more developments with ArcPrize 2. I want
to stay abreast of them, so please come back when there's big news.
Thanks guys, arcprize.org if people wanna enter the contest.
Yeah, go check it out.
Cheers. Have a great day.
Thanks for coming on.
Bye. Bye.
I mean, I love ARC Prize.
I think that they should take out like a huge billboard for it.
Go to adquick.com.
They should be doing out of home advertising,
especially with AdQuick.
They could make it easy and measurable.
They could say goodbye to the headaches
of out-of-home advertising
with technology only AdQuick provides.
Wait, I have an idea.
Yes.
Let's donate them enough money
just to run a billboard with AdQuick.
That'd be fantastic.
Now we're talking.
I want everyone working on ARC prizes.
It's a fantastic project.
It's very, very fascinating.
We'll have to have François Chollet,
the creator of the actual test,
on the show at some point as well.
And maybe we can get him to pick up a watch on Bezel because they have over
twenty three thousand five hundred luxury watches fully authenticated in house
by Bezel's team of experts.
And, you know, any company running on Ramp will save time and money,
and then they can sort of actually, you know, potentially do more distributions, and then the shareholders would actually
be able to spend more money on Bezel. Oh yeah. Which is one of the many reasons to
use Ramp. That's great. And I mean, if they're doing all that, like where should
they stay when they're on vacation? I would say find your happy place. Find your
happy place. Book a Wander with inspiring views,
hotel-grade amenities, dreamy beds,
top tier cleaning, and 24-7 concierge service, folks.
It's a vacation home, but better.
Do you wanna do any of these timeline posts
before we get out of here?
It's already past two.
We've done a great job this week.
I think we've streamed a lot.
I'm excited to get home and just completely sleep
on my eight sleep.
'Cause Eight Sleep brings you nights
that fuel your best days.
You can turn any bed into the ultimate sleeping experience.
Go to eightsleep.com slash TBPN.
I love the, I love the head fake.
Oh, should we get into some content?
Oh, actually another ad baby.
And I mean, there's, there's some other news
we should cover. Crusoe raised $2 million in debt to buy more Nvidia GPUs,
and they're using current Nvidia GPUs as collateral. People are saying this is very circular, but it makes sense.
I mean, Nvidia GPUs have value, and of course they can be priced, and you can
lever up, do it again ten times. You know, just one hand washes the other.
I love it.
We also have a post that we should shout out
to Rob Schutz over at Snagged.
They're officially launching domain sales
and brokerage service.
We're starting with a list of 50 plus premium domains
like try.com, tuscany.com, geeks.com, and beverage.com.
You had me at tuscany.com, Rob.
Yeah.
No, but here's the real story.
Rob got us TBPN.com and he did it
in a shockingly little amount of time.
It was like a message.
A lot of people have been like,
oh TBPN kind of a mouthful.
It's a lot of letters, but I'm like,
okay, think of another four-letter .com
that we could get very quickly.
And we have the four letter handle on X.
And so I think that it's underrated
to find
one of these short single word domains,
single acronym domains,
and then build your brand world around it.
And I think even though it's been,
it's probably harder to get people to remember TBPN
the first time they hear it.
After they hear it about 10 times,
it has the chat GPT effect.
Where you're like, oh yeah, TBPN.
It's a weird thing, but it just,
it stands for what it stands for.
It's great.
Anyway, also Xi Jinping advocates for stable trade
at Beijing business meetings.
This is a bigger story I'm sure we'll be digging into later.
But the Chinese president, Xi Jinping,
engaged in a series of meetings with global business leaders
and foreign dignitaries in Beijing,
highlighted by his discussions with representatives
of the international business community. Xi emphasized China's continued openness
to foreign investment.
He's saying, let's open the floodgates.
Let's get some capital in here.
Let's build some tech.
We'll see how it goes.
And we'll close out with an Elon Musk post
that you were alluding to earlier.
Kitsy, he's quote tweeting Kitsy,
who says, finishing touches, we need more compute
to cure cancer, right?
And you can see that the image is loading,
it looks like a Simpsons character.
And so it's very funny, we're in this weird thing.
Do you think that loading effect is
just to make it more addictive?
It's not, I think it actually does relate to the change
in the algorithm or technology,
because we're no longer using pure diffusion
where it's all random noise
and then you're denoising it iteratively
at the whole image level,
which is what you see on Midjourney
when they're sending you those updates.
Now it is more going line by line
because it's a token-based system.
So there's some diffusion that's going in,
but then there's also some tokenization.
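For what it's worth, here is a purely illustrative sketch of why those two approaches preview differently. The "denoise" and "decode" steps below are fake stand-ins (random numbers, no real model); the only point is the loop structure: diffusion-style refinement touches the whole canvas on each pass, while token-based decoding reveals the image sequentially.

```python
# Illustrative-only sketch of the two generation loops described above; the denoise
# and decode steps are stand-ins, not real models.
import random

def diffusion_style(height=4, width=8, steps=3):
    # Whole-image denoising: every refinement pass touches the entire canvas at once,
    # which is why previews sharpen globally, like Midjourney's progress updates.
    image = [[random.random() for _ in range(width)] for _ in range(height)]
    for step in range(steps):
        image = [[pixel * 0.5 for pixel in row] for row in image]  # fake "denoise" pass
        print(f"diffusion step {step}: whole image updated")
    return image

def token_style(height=4, width=8):
    # Autoregressive decoding: the image is emitted as a sequence of tokens, so it
    # appears to fill in line by line as each row's tokens are produced.
    image = []
    for row_index in range(height):
        row = [random.randint(0, 255) for _ in range(width)]  # fake "next tokens"
        image.append(row)
        print(f"token decode: row {row_index} revealed")
    return image

if __name__ == "__main__":
    diffusion_style()
    token_style()
```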
And I think OpenAI, you know,
the Ghibli-fication kind of drowned out
the technical discussion that should follow.
Ben Thompson broke it down a little bit.
I'm sure there will be other deep dives.
And I'm sure there'll be open source versions of this.
Like every time a new, you know,
Manus came out and there was open Manus
and people always love to dig into,
even though the papers aren't released,
you know, this is not an open source technology,
I'm sure we will see an open source version
of this technology within six months to a year,
probably two weeks if we're being honest,
because as soon as people see this tech,
they wanna build it, they wanna figure it out.
And so that's the story.
Anyway, thanks for listening.
Please go leave us five stars on Apple Podcasts and Spotify
and stay tuned, we have a bunch of stuff lined up
for next week already.
Gonna be doing more special days.
We had a really fun time with Defense Day.
Today kinda turned into AI Day, Author Day,
and there was also some Terraforming Day in there.
But we're
very excited for it.
You never know what you're going to get.
You never know what you're going to get. So stay tuned. Follow us.
We are excited for Monday.
Have a great weekend, folks. We'll talk to you soon.
Talk soon. Have a good one.