TBPN Live - David Senra, Samir Chaudry, Aidan McLaughlin, Will Depue, SoftBank's $40B OpenAI Deal, Inside Sam Altman's Firing, Luxury Show-Jumping
Episode Date: April 1, 2025TBPN.com is made possible by:Ramp - https://ramp.comEight Sleep - https://eightsleep.com/tbpnWander - https://wander.com/tbpnPublic - https://public.comAdQuick - https://adquick.comBezel - https://getbezel.com Numeral - https://www.numeralhq.comFollow TBPN: https://TBPN.comhttps://x.com/tbpnhttps://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV(04:17) - SoftBank's $40B OpenAI Deal (19:09) - Inside Sam Altman's Firing From OpenAI (01:10:57) - Intel's New CEO Plots Turnaround (01:25:01) - Luxury Show-Jumping (01:29:08) - David Senra (02:03:51) - Samir Chaudry (02:18:35) - Will Depue (02:30:56) - Aidan McLaughlin
Transcript
Discussion (0)
You're watching TBPN. Today is Tuesday, April 1st, 2025. We are live from the Temple of
Technology, the fortress of finance, the capital of capital. This show starts now. We got a
ton of news. OpenAI is worth $300 billion now. Hermès is going back to their roots and
focusing on horse saddles, both equally important stories in the world of technology journalism.
We're going to take you through it today.
And we have a bunch of great guests for you.
We got David Senra coming on the show.
Samir from Collin and Samir, we're going to talk about the creator economy.
He just did a fantastic interview with Mr. Beast and Mark Zuckerberg putting two of the
greats together.
You'll love to see it.
Where do you go from there?
And it's OpenAI Day.
They're raising $40 billion at $300 billion.
We got two folks from OpenAI joining the show,
Will and Aidan, so we'll talk to them.
Probably not about the financing,
probably about the product and all the technology
that they're building, but we're very excited for that.
Anyway.
We'll be ranking their top 5,000 Ghibli memes.
Yes, and we'll be pressuring them
into buying watches on bezel.
That's right.
Undoubtedly.
Anyway, let's kick it off.
Do you have any breaking news? I think we should address, uh, we are officially at 32,000 followers, right? We
have doubled, another doubling. The show is bigger than
ever. Uh, we're taking over. Uh, so we're at 32.3 now, and that means that, uh, it's
a Dom Perignon episode technically, but we are shifting our strategy. So these Dom Perignon episodes now
are in partnership with ZBiotics,
one of my friends' companies actually.
They are not an official sponsor,
but I do love ZBiotics and ZBiotics,
I actually introduced the founders.
Steven Lam I went to high school with.
I met Zach, the CEO, through YC.
I think he did the early YC, like the startup school,
sent out an email, hey, I'm looking for people,
and Steven was looking for a co-founder
wanting to get in the startup community,
kind of introduced them, and they hit it off,
and they've been running this company
for like half a decade now, it's great.
Almost a decade, I guess they went through YC saying-
And people are really obsessed with this product.
Oh yeah, yeah.
So if you're not familiar, ZBiotics,
it's a pre-alcohol, probiotic drink
designed specifically to kill hangovers, basically.
It was hugely popular at weddings.
People buy these and they hand them out to all their guests.
You take them, it really, it can make up for a lack of sleep,
so you will be tired, but it really does a great job
of killing that terrible feeling. You know what, it's great.
And if we were gonna be drinking this bottle of champagne today, it would be perfect, because it's the middle of the day.
You don't want to go home at the end of the day with a hangover. Yeah, right.
Yeah, and so we want to shift from drinking the Dom Perignon on stream to giving away bottles of Dom Perignon.
And this Dom actually has a sponsor itself,
Slow Ventures.
Oh yes, Slow Ventures.
They sent it to us.
They sent this to us.
Good friends of the show.
So thank you to Slow.
We're gonna pass it on.
We're gonna regift it.
Well also, if you go to a founder,
yeah, that's kind of,
regifting can be a little bit taboo,
but in this case, it's really a celebration.
I think you're a founder, you listen to the show,
you win the bottle of Dom Perignon,
and then you send this to Sam Lessin,
and yeah, it'd be like, hey, I wanna pitch you.
And it's just a snake eating its tail,
an Ouroboros of Dom Perignon.
What are we,
This could be the most storied bottle.
How are we gonna give it away?
I wanted to see a super creative five star review
on Apple Podcasts, Spotify, I think that's fun.
If you've already left one, get a friend's phone,
leave another one.
That's right.
Screenshot it, send it to us, just tweet it at us,
and the one that we think is the most funny,
we will get in touch and mail this to you.
How about that?
Boom, perfect.
You gotta be over 21
I think it's kind of a contest. So I guess there's no cost to enter.
No purchase necessary. Yeah, but I guess this isn't a purchase anyway, so, yeah, I don't know. Just please don't sue us, like, we're just trying to give out a bottle of Dom. It's fun.
I don't know, people get worried about these, like, contests, like they need to be, like, regulated and stuff.
Like, I buy it. I want to be a good Samaritan.
They're not regulating Dom giveaways yet.
They're not yet, but if we get any bigger, they might.
Anyway, let's go through the SoftBank deal, OpenAI.
Masayoshi Son got the deal done,
is putting in $10 billion.
Our leverage king.
Leverage king.
Yeah, people are like, oh, he's using debt.
So the Wall Street Journal's article,
this hit piece against debt,
which we love here, says, how is SoftBank funding
its mega investment in OpenAI?
A lot of debt. Like that's a bad thing.
Baby, he's going long.
He's going terribly long.
We love it, we love to see it.
Masayoshi Son's company said the first 10 billion
would be financed by borrowing from Japanese bank Mizuho
and other lenders.
Of course, we've seen this stuff before
and SoftBank has a lot of assets,
so it actually makes sense,
but we'll talk about how they're collateralizing it
in a little bit. What do you got?
No, I was gonna say, so to be clear,
this is a staged investment.
Yes. They're doing $10 billion now.
And 30 billion in 2026.
Seemingly entirely from Mizuho
and a handful of other lenders to be completed this month
with the remaining 30 to come by the beginning of next year.
So Masa is gonna be under, this guy likes to put himself under pressure. He's going
to be under pressure for the rest of the year, knowing he's got this sort of
$30 billion investment that he needs to make.
Yeah.
That he needs to come up with some combination of debt and equity.
It is funny, because, like, couldn't OpenAI have just gone to Mizuho directly and gotten
debt if they wanted debt? But now they have preferred equity
on their balance sheet and Masa has the debt.
It's amazing.
But potentially everyone wins, right?
Like if OpenAI does really well,
Mizuho gets paid back, they want interest,
they want to be paid back, Masa sees the upside
and OpenAI doesn't have any debt on their balance sheet.
It's like, it's beautiful
This is capitalism, baby. I love it. And to be clear, to give Masa some credit, couldn't he
fund this entirely by selling off his Arm holdings?
For sure. Yeah.
But this deal is the largest investment ever in a startup. We seriously need a bigger gong.
Hold on to that gong.
There we go.
We gotta hit the gong hard for this one.
This is a big deal.
Whatever anybody has to say about Sam,
the guy knows how to do deals.
This was the whole thesis of OpenAI from day one.
It's like Greg Brockman, the CTO of Stripe, Sam Altman,
he's invested in a billion things, made so much money.
It's like the founding team was really, really incredible.
And if your whole thesis is just like bet on founders,
like Sam, pretty good at making money.
It's not that crazy.
This is your, this has been, this is like,
I feel like your favorite investments are always
into companies where the founder is just like,
you can tell, like they're good at making money.
He's commercial, he's commercial.
Like, he doesn't like losing money.
And I think that's great,
and I think that's exactly what you want.
And that's why I'm so excited
about it not being a non-profit anymore.
Like, it just clearly is a consumer tech company.
It needs to be a for-profit.
And yeah, when you think about like,
could this be the next Google?
Could this be the next, like,
ChatGPT for the last year has been on the home row
of my iPhone.
And like, it's not because like,
I'm shilling for Sam or something.
It's just like, it's a useful product.
It needs to be there.
And like, when I think about what else is in the home row,
it's like Google, Apple, and OpenAI.
And so like, can it get to, you know,
hundreds of billions of dollars of valuation?
Like, sure, it seems fine.
But I mean, I'm sure there'll be a lot of good takes
about how, where this could go wrong.
Obviously, it is a very high valuation. When Masa is buying, Sam Lessin says you
want to be selling. Who knows. I'm just excited to see a lot of money change
hands and a lot of debt stack up. Together with SoftBank's pledge to lead
the $100 billion Stargate cloud computing initiative with OpenAI, the
investment marks a massive bet on the artificial intelligence startup. It
intertwines the fortunes of SoftBank
with a company that expects to lose billions of dollars
for years to come.
The hope is that OpenAI emerges as the leader of the pack
in a race to spread artificial intelligence
throughout society and commerce,
a market that many believe could be worth
trillions of dollars a year.
This is what Sam was saying about like,
it's either worth zero or a trillion.
And so like you average that out
and you're like, yeah, half a trillion.
Yeah.
And it's hard to do a discounted cashflow analysis
of that binary of an outcome.
I mean, this is one of Peter Thiel's takes is like,
when they were doing the DCF on Google
and Andrew Reed talked about this too,
like they just, they didn't have the spreadsheets
go out far enough to understand that, oh yeah, if you really play out this trend, like Apple and Google,
these companies could be throwing off, like, hundreds of billions of dollars of cash
a year, and the numbers just kept going up. They never got to that. And
no matter how far you went out, if you discounted back, it would have been a much bigger number,
but no one was really thinking that long-term. And that's when tech decided to say, no thanks, finance.
We're done, we're done with you.
We're done with your models.
It's purely value based investing.
And now tech has gone the other direction,
which is like, oh, this thing isn't gonna make money
for two decades and then it'll make a trillion dollars.
Like sure, I'll pay you a trillion dollars right now.
And maybe we've gone too far, who knows, we will see,
but good luck to everyone involved,
and congratulations to all the OpenAI shareholders
and employees who seem to be doing very well right now.
Yes.
Yeah, you have to imagine a good amount of this
is going to secondary,
even if it's a small percentage. Even if it's $500 million.
Yep, still pretty meaningful. SoftBank, always delivering great quotes.
The statement says the information revolution has now entered a new phase led by artificial intelligence, and called its partner OpenAI the partner closest to achieving AGI, in which computers operate on the level of humans. And Sam was tweeting about this, he's saying, like,
AGI has been achieved externally, and it's unclear.
I think it might just be an April Fools' joke, I don't know.
But clearly the product works, people like the images,
people like the text responses, people like deep research.
I like these products and I think people
will continue to use them.
Obviously it's extremely competitive and there are several hundred billion dollar plus efforts now to win this war.
And it's the greatest time to be a podcaster in history.
The edge that OpenAI has is if you talk to people that aren't as terminally online as the rest of us, they will tell you, I love OpenAI.
I don't just like it and use it.
It, they have a passion for it,
just as everyday consumers.
And they'll be like, Sam who?
And it's hard to get them like, oh,
Helen Toner who?
If you tell somebody it's like OpenAI,
they're gonna say like, but I love OpenAI already.
Yeah, exactly.
You're like, no, no, no,
but it's two points better on MMLU.
They're like, I don't know what that is.
This one won an IMO gold medal.
Yeah.
What?
What's that?
Yeah.
Yeah, it is very funny.
SoftBank is taking a lot of risks for a piece of OpenAI.
Ratings agency S&P Global said Tuesday
that SoftBank's financial condition will likely
deteriorate as a result
of the OpenAI investment and that it plans to add debt
that could lead the agency to consider downgrading
SoftBank's ratings.
None of the startups with early leads in generative AI
have shown that they can operate profitably.
I don't know how true that is.
You could probably look at the GPT-4 training run
and amortize just that, but yes,
on a quarter to quarter basis,
these companies aren't profitable.
In terms of the overall business, it's a good point.
And the sector is pouring tens of billions of dollars
into data centers based on assumptions not yet proven
of a future in which AI rapidly permeates the globe.
It feels like it already is rapidly permeating,
but I guess it's up for debate.
Early tech leaders often falter, a point SoftBank learned when it made a dot-com-era bet that Yahoo
would be the dominant force in search.
In the background, debt has been a common feature of Son's risk-heavy strategy.
The CEO borrowed heavily for the company's successful acquisitions of Vodafone's Japanese
unit and chip design company Arm.
More recently, SoftBank has been licking its wounds
from piling tens of billions of dollars into startups
just before values plunged in 2021.
And Son repeatedly said SoftBank would stay on defense.
Now having pivoted to offense,
SoftBank ramped up spending,
including a $6.5 billion acquisition of chip startup Ampere.
Go back and listen to our deep dive on
Masa and SoftBank generally, it's very fun and fascinating. I think what people
fail to consider is that Masa is a
gambler, and he's not
oblivious to the risk that he takes, but he's constantly searching for that next
absolute banger. Totally.
And there's a very good,
there's plenty of non-technical,
but powerful vibes-based analysis that you could do
to make the case that this could very well
be a fantastic outcome for SoftBank.
It's one or the other.
It's either a good or a bad investment.
It's probably not neutral.
We talked to the experts and we determined
that it's gonna be either good or bad.
Yeah, it's gonna be either good or bad.
You heard it here first folks.
But it's gonna be extremely good
if you participate in that secondary deal
and then you head over to getbezel.com
and you shop for over 24,000 luxury watches.
I can't imagine a more AGI-resistant asset than a luxury watch.
They're not making any more FP Journes.
Or Rolexes or Pateks?
No,
I'm just saying the actual companies themselves. Sure.
If you want to make the next Rolex,
you have to go back 100 plus years ago and start it.
Yeah.
At least if you want to run it today.
Yeah, exactly.
If you want to run what could be the next Rolex,
but not be alive for when it becomes said asset then.
Godspeed.
So, yeah.
But go to.
Getbezel.com.
And can we just give a shout out to the graphics department.
We're a small team here, but I think it's incredible
what we can do.
I mean, we're just shy of 75 people now,
and they have really been over delivering,
and I really love the new tickers.
Of course, you know we're sponsored by Public.
We have the Public stock ticker there,
but we also got the Polymarket ticker,
and we hope you're enjoying staying up to date
on what's going on in tech markets.
It's such a beautiful way to understand the world.
It is.
And we should actually get some Bezel,
we should get, like, watch listings.
I just want more tickers.
We need more tickers.
Yeah, so I think every sponsor gets a ticker.
Over our eyes.
Yeah, yeah, eventually it's just 99% ticker.
I think so.
But to close it out, go to getbezel.com,
download the app.
The app is fantastic.
We are DAUs.
And you should be too.
And it's like the greatest time in history
to be a watch fan, because there's a major
watch announcement.
The watch world is exploding with news.
Rolex launched the Land Dweller.
Patek launched a Complications watch
that's worth over a million dollars.
AP has made some updates.
A lot of things are launching.
So I wanted to highlight this Polo 79
in white gold from Piaget.
It's about a $100,000 watch,
but I think it's a lot of watch for that money.
It's certainly a lot of money, but it's a lot of watch.
It's a lot of money, but it's a lot of watch.
And remember, I mean, you can drive a GT3RS,
but you can't take a GT3RS into a board meeting.
That's right.
So, yeah.
Many people have said this.
You can't take a 911 into a board meeting,
but you can take your Piaget Polo 79 in white gold.
Yeah, if you drive your GT3 into a board meeting, you will get arrested.
Except I think more
Sand Hill Road VC firms should have massive roll-up doors for board meetings. If you want to drive your car,
I think you should be able to drive in, like a drive-up, and you should just be able to roll down your window and be
like, yeah, KPIs were good. Okay. Yeah, we'll be raising in Q3. Okay.
Well, yeah, so we're in the hunt for a new studio.
Yep, and roll-up doors are not, like, a deal breaker, but they're certainly nice to have.
For sure, it's a nice to have.
Anyway, we're gonna have Quaid from Bezel,
the CEO on the show tomorrow,
to break down exactly what's happening in the watch world.
I follow all this stuff.
He's in Europe right now.
And I noticed I have the worst takes
because I'll see a new watch drop
and I'll be like, this is amazing, I want this immediately.
And then he'll be like,
that's the most controversial watch that's ever launched, everyone in the watch world hates it.
Like you have like the most contrarian position,
I'm like, I'm kind of fine with that,
like I don't really mind.
Cause I'm like, dipping my toe in, I know the brands,
I know the different models,
but I'm not like, you know, in the discourse.
We're watch enjoyers, we're not watch experts.
Exactly. Big difference.
Yeah, so I would gladly wear a knockoff Royal Oak
from Rolex called The Land Dweller.
I think it looks great.
Anyway, let's stay with OpenAI
and move on to an excerpt from a book
that's coming out on OpenAI.
There's a segment in the Wall Street Journal.
It's a long read.
And this is different than Ashlee Vance's
OpenAI book?
It is.
It is.
And this is an interesting,
I'll give my take at the meta level.
This reporter, I don't remember who actually wrote this.
It's from a book that's coming out in a few weeks.
Adapted from, it's called The Optimist:
Sam Altman, OpenAI, and the Race to Invent the Future
by Keach Hagey.
Or Hagey.
It will be published on May 20th, 2025.
And I'm sure it'll be an interesting read.
It is incredibly well-sourced.
So this writer, this author,
was able to get insider accounts
of private dinners between Peter Thiel and Sam Altman.
You've been to some of these parties,
people don't love talking to the press
that go to those parties.
It's pretty rare.
Well, and the way they're positioning this dinner
just from the graphic is that it was actually
just a dinner between the two of them.
It was a birthday party.
There were other people there.
But still, it's crazy that this leaked at all.
So it's incredibly well-sourced,
and there's very interesting facts and quotes
that bubble up in this story
that we'll kind of go through today.
But I think the analysis is god-awful, and so.
Truth zone.
It's definitely truth zone. It's not even truth zone, like the facts are correct.
I think it's more just like the follow-up questions that are obvious to ask.
Yeah, the takes, you know, need to be put in the truth zone.
Anyway, let's read through this, because it's beautifully written. I mean, it's a great writer, a great author.
I'll take it off.
On a balmy mid-November evening in 2023,
billionaire venture capitalist Peter Thiel
threw a birthday party for his husband at Yess,
an avant-garde Japanese restaurant located
in a century-old converted bank building
in Los Angeles' arts district,
not far from where we are at this moment.
Seated next to him was his friend Sam Altman.
Thiel had backed Altman's first venture fund more than a decade before and remained a mentor to the younger
investor when Altman became the face of the artificial intelligence revolution as the
chief executive of OpenAI. OpenAI's instantly viral launch of ChatGPT in November 2022
had propelled tech stocks to one of their best years in decades. Yet Thiel was worried. Years before he met Altman, Thiel had taken another
AI-obsessed prodigy named Eliezer Yudkowsky under his
wing, funding his institute, which pushed to make sure that
any AI smarter than humans would be friendly to its maker. That
March, Yudkowsky had argued in Time magazine that unless the
current wave of AI research was halted, literally everyone on
Earth will die.
You don't understand how Eliezer has programmed
half the people in your company to believe in that stuff,
Thiel warned Altman.
You need to take this more seriously.
You wanna take it over?
Yeah, I mean, it's a fascinating history.
I mean, Altman's first venture fund
was called Hydrazine Capital, I believe.
He was actually an investor in my first company.
And Hydrazine, I haven't asked Sam if this is the reference,
but Hydrazine is rocket fuel, but it's also extremely toxic
and will kill you if you breathe it.
And so there's this weird, I like to think that it's this
weird double entendre about venture capital can accelerate
your business, but it can also kill your business.
It's kind of beautiful.
Anyway, under-reported, Sam is like so known
for OpenAI now, I think people forget
that he ran a very, very successful venture fund
and made some incredible investments during that time.
And then also what's missing from this is that
Peter Thiel is listed on Wikipedia
as a co-founder of OpenAI
and was one of the initial backers.
Obviously Elon was a major, major backer
and then has fought and there's all this
controversy there so that has become the bigger story
but the initial team behind OpenAI was crazy.
I mean there was a YC research project
so it was heavily YC influenced in that.
And YC actually has a stake in OpenAI now
that must be worth a ton now.
And PG famously like found out about that on Twitter
because he was asking someone and they were like,
don't you own a stake?
And he was like, yeah, I just asked legal
and we actually do.
It was very funny.
But yeah, I mean, they go way back.
And at one point, Peter Thiel had been an advisor to YC
and Sam Altman was running YC and Sam brought Peter in
and they did, I think they did a podcast interview together
and they've talked about each other.
So there's just a lot of history here
that's kind of interesting.
So moving on, Altman picked at his vegetarian dish
and tried not to roll his eyes.
This was not the first dinner where Thiel had warned him that the company had been taken over by the EAs,
by which he meant people who subscribed to effective altruism.
EA had lately pivoted from trying to end global poverty, which was what we saw SBF doing.
In the EA thing, the whole idea was like mosquito nets.
We got to get mosquito nets because mosquito nets are super cheap
and they stop malaria.
And so for just like a dollar,
you can save someone's life
and it's the highest ROI on reducing human suffering
versus if you're dealing with like Alzheimer's treatments,
like older people, very expensive,
mosquito nets are really cheap.
And so everyone, EAs were originally very focused
on ending global poverty. Then they kind of pivoted
to trying to prevent runaway AI from murdering humanity.
Thiel had repeatedly predicted that the AI safety people
would destroy OpenAI.
Well, it was kind of true of Elon,
but we got rid of Elon, Sam responded at the dinner,
referring to the messy 2018 split
with his co-founder Elon Musk.
They did not fully get rid of Elon.
That's so true.
Who once referred to the attempt
to create artificial intelligence as summoning the demon.
That's a great Elon quote, I love it.
And I mean, it's the whole nuance here,
which is like everyone is somewhat AGI-pilled,
somewhat AGI-doomer, no one has a P-Doom of zero,
no one has a P-Doom of 100,
everyone's kind of on this gradient,
but there's just a question of like,
how do you deal with that risk,
and is it with more innovation
and building a more positive future, like the Thielian
optimist, like definite optimism,
and hey, okay, we need to build an AI that doesn't kill us,
versus the kind of indefinite pessimism of Eliezer Yudkowsky,
which is like, it's going to happen no matter what,
there's nothing we can do.
Yeah.
But give me money for research.
Yes.
Give me money for my institute.
Yes.
Nearly 800 OpenAI employees have been riding a rocket ship
and were about to have the chance
to buy beachfront second homes.
Let's hear it for some Beachfront Second Homes folks.
We love Beachfront Second Homes on this show.
And even if you're not in the position
of those 800 OpenAI employees, you can go to Wander.com.
Yeah, and you can rent a beachfront second home.
Fantastic.
By the day.
It's pretty cool.
You can just get homes by the day.
By the day.
Fantastic innovation.
Yes.
Luxury homes by the day.
By the day.
Hyper fractionalized.
Fractionalized.
I knew you were going to say that.
And so there was a tender offer going on at the time,
valuing the company at 86 billion.
There was no need to panic,
so all the OpenAI employees are very happy.
At least it seemed that they were.
Altman, at 38 years old, was wrapping up the best year
of a charmed career, a year in which he became
a household name, met with presidents and prime ministers
around the world, and most importantly,
within the value system of Silicon Valley,
delivered a new technology that seemed like
it was very possibly going to change everything. And that really is true.
Like OpenAI, yes, the transformer paper was written at Google. Yes, like other people
were working on language models, but OpenAI was the company that really went full scale-pilled,
scaled this thing up, and then also productized it, and also figured out, like,
hey, people just want to chat. They verticalized research through product, and
they were the first to do that in a really, really meaningful way. Clearly,
whatever was going on at Google was not allowing for that verticalization. There
were fantastic researchers, and fantastic product people selling ads and making
YouTube a great product, but they were not talking to each other.
But as the two investing partners celebrated beneath the exposed rafters of LA's hottest
new restaurant, four members of OpenAI's six-person board, including two with direct ties to the
EA community, were holding secret video meetings, and they were deciding whether they should
fire Sam Altman, though not because of EA.
This account is based on interviews with dozens of people who lived through one of the wildest business stories
of all time, the sudden firing of the CEO
of the hottest tech company on the planet,
and his reinstatement just days later.
At the center was a mercurial leader
who kept everyone around him inspired
by his technological vision, but also at times confused
and unsettled by his web of secrets and misdirections.
From the start, OpenAI was set up to be a different
kind of tech company,
one governed by a non-profit board
with a duty not to shareholders, but to humanity.
Altman had shocked lawmakers earlier in the year
when he told them under oath that he owned no equity
in the company he co-founded, which of course was true,
but very much mocked because the idea
of somebody doing something not for money is considered.
The senator with the line saying,
you need an agent.
You need an agent, yeah, that's great.
He agreed to the unprecedented arrangement
to be on the board, which required a majority of directors
to have no financial ties to the company,
of course, because it's this nonprofit structure.
In June, 2023, he told Bloomberg TV,
the board can fire me, and that's important.
Behind the scenes, the board was finding, to its growing frustration, that Altman really called the shots.
Of course, through, like, soft power, not through legal means or voting shares, and it's an entirely new way of controlling a company.
It's a really great way to expose that
you don't actually understand the company, because I'm sure if you went to the OpenAI office and talked to anybody,
you would find out almost immediately that Sam did in fact call all the shots.
Of course, of course.
As every CEO does.
And there's a lot of just, this is like what we're talking about with like, there's intelligence
and Sam is intelligent, but there's other intelligent people around the table.
It's really will and agency and vision and coordination and communication and all these like EQ things
that really push an organization forward. And there are tons of smart people at Google.
They couldn't launch a chatbot faster than OpenAI somehow. And it's like, why is that?
It's probably not because they didn't think of it. It's probably not because of a
lack of intelligence. It's a lack of volition.
For the past year, the board had been deadlocked over which AI safety expert to add to
its ranks. The board interviewed Ajeya Cotra, an AI safety expert at the EA charity Open Philanthropy,
but the process stalled largely due to foot dragging by Altman and his co-founder Greg
Brockman, who was also on the board. Altman countered with his own suggestion. There was
a bit of a power struggle, said Brian Chesky, the Airbnb CEO, who was one
of the prospective board members Altman suggested.
There was this basic thing that if Sam said the name, they must be loyal to Sam, so therefore
the board is going to say no.
The dynamics got more contentious after three board members in the pro-Altman camp stepped
down in quick succession in early 2023 over various conflicts of interest.
That left six people on the board,
on the nonprofit board that governed
the for-profit AI juggernaut.
And you're getting this weird dynamic where
the nonprofit board is set up to just like,
hey, we just wanna do research and advocate for AI safety,
but all of a sudden we've birthed this $300 billion.
The demon.
The demon is really just a consumer tech company, I guess.
Yeah.
Consumer applications, actually.
Yeah, yeah, just Apple.
Your mother was right.
Your mother was right.
Altman and his close ally Brockman, their fellow co-founder
Ilya Sutskever, and three independent directors,
the independent directors were Adam D'Angelo, the CEO of Quora
and a former Facebook executive, Helen Toner, the director directors were Adam D'Angelo, the CEO of Quora and a former Facebook executive,
Helen Toner, the director of strategy
for Georgetown's Center for Security and Emerging Technology
and a veteran of open philanthropy,
and Tasha McCauley, a former tech CEO
and a board member of the UK EA charity Effective Ventures
and I believe the wife of a celebrity, right?
Tasha McCauley, she's the wife of, I forget who,
someone who is in Looper?
Joseph Gordon-
Joseph Gordon-Levitt, that's right, yeah.
Concerns about corporate governance
and the board's ability to oversee Altman
became much more urgent for several board members
after they saw a demo of GPT-4,
a more powerful AI that could ace the AP Biology test
in summer of 2022.
God forbid.
God forbid.
The AP biology test gets aced, it's over for us.
It's over.
It's not a neurotic statement at all.
That was humanity's final exam.
Yes, yes, yes.
The real one.
Yes, it's over.
Once you can ace the AP bio.
AP bio.
Once you understand AP bio, you can do anything.
You can reverse engineer anything you want, I mean.
To steelman it, like, you know, you understand bio, maybe you understand bioweapons.
Maybe you help someone in the basement create a new smallpox. Like, there are guardrails that need to be on these things,
obviously.
But it's, you know, it's a little bit overblown, I think.
Things like chat GPT and GPT-4 were meaningful shifts
toward the board realizing that the stakes
are getting higher here.
Toner said, it's not like we are all going to die tomorrow,
but the board needs to be functioning well.
And yeah, that's a fair critique.
Like the structure of the company was not aligned
with what was happening at the company.
At the same time, it feels like they're acting like
everyone was gonna die tomorrow.
Right?
Yeah, and we actually talked about this
before we got on air, but
I think what,
if they wanted to have a sort of strong case
for their actions, there should have been sort of
information being fed to them
from active members of the research team
and the team broadly saying, hey, we're
very worried about this.
It's not just a cute chat app that's going to go viral.
Yep.
This could sort of snowball into something else.
But it seems like they were doing
their sort of own vibes-based analysis of the situation
and just frustration feeling like
having an inflated sense of self-importance
when realizing they weren't a real player in the game.
Yeah, I completely agree.
So, I mean, most of the board was non-technical,
not actually interfacing with the changes
at a technical level that were happening
in the iterations of the GPT models.
Yes, they were scaling up,
but how were they doing on evals?
How were these systems integrated?
What systems did these models have access to?
All of those things, completely agree with that.
The other thing is basically what you're advocating for
is a whistleblower program,
which kind of exists in every company,
like if you've ever been on a board
and some low level employee emails you,
like it's probably not good.
They're probably complaining about something
and a lot of times they have good points.
What was interesting is that there was a story
that came out that OpenAI had some really onerous
exit process where if you left and you didn't sign like an NDA,
they could claw back some of your equity.
And everyone was like, this is terrible.
And I was like, yeah, it seems like maybe overly aggressive,
but it's kind of super aligned with the EAs
because if you actually think that OpenAI's new model
is going to kill all of humanity,
then your money is worthless. Right. And so you should absolutely say, well,
the money's useless.
Let me go to the media and say what's happening and then,
and then I'll stop it and then I'll be, and I'll write a book and I'll make money
that way. Like, like there's a million ways to make money.
If you're a successful whistleblower and you actually do uncover, hey yeah,
I'm the person that saved humanity and I can prove it.
Because this model, I have proof
that it was gonna kill everyone.
Yes, I lost my shares in OpenAI,
but it doesn't matter because I saved the world.
And like what would be more edifying
and then also financially rewarding over the long term
than saving humanity?
So there's always a way.
Ben would go to the press and say,
John and Jordy are considering going to
eight hours a day live streaming,
which I think might massively slow GDP growth
or potentially even go negative
because people just stop all work
and just listen to the show.
It's a real asymmetric risk.
Yes, definitely. Yeah, I mean, we've advocated for, uh,
for podcast safety very significantly and potentially even creating like a
regulatory capture, regulatory capture and like monopoly.
Basically there should be 10,000% tariffs on podcasts.
There should be like an FDA for podcasting where you have to submit an
application with what your show is about. Then you get approved, it takes years, costs
hundreds of millions of dollars, exactly. And then you can't just have any willy-nilly podcast just drop an RSS feed.
Yeah, just like pharmaceuticals. Exactly. We don't want anybody coming to market with a new pharmaceutical.
This makes so much sense. You shouldn't have anyone just come to market with a podcast, for sure.
Anyway, let's move on to equally ridiculous things
Toner and McCauley had already begun to lose trust
in Altman to review new products,
and this is interesting because we haven't actually heard
before how the safety process worked within OpenAI,
and they do a good job breaking it down here.
So to review new products for risks
before they were released, and again,
I agree that when you're launching new products,
like you should assess them for safety.
Even Instagram, when they launch a new algorithm,
they should be like, is this promoting anorexia?
Is this promoting extreme content or not?
And to what degree is it making people sad?
Is it making people happy?
Let's understand the impact of the technology
that we're building, that's great.
And here's how they wound up doing it.
So OpenAI has a joint safety committee board with Microsoft,
a key backer of OpenAI that had special access
to use its technology and its products.
So Microsoft can vend GPT models into Azure
and then deliver them into different segments
of the Microsoft ecosystem.
During one meeting in the winter of 2022,
as the board weighed how to release
three somewhat controversial enhancements to GPT-4,
Altman claimed all three had been approved
by the Joint Safety Board, Toner asked for proof
and found that only one had actually been approved.
And so this is like the smoking gun,
like Sam Altman like lied about the approvals.
And it's like, okay, author, like, what are the controversial enhancements?
Like, I remember what the difference between GPT-3.5 and GPT-4 was.
It was, the context window went from 4K tokens to 32K tokens.
They trained on 10 times as many tokens. Like, we saw the small dot and then the big circle, and everyone
was scared. I get that everyone was scared.
But like, what is the problem?
Like GPT-4, yeah, okay, now you can do PDF upload.
Is that gonna kill everyone?
Like, what is the problem that you're actually upset about?
I understand that like, yes, he might've like
not followed the right rules in this case,
but like habeas corpus, like we have to produce
some sort of corpse here.
And like, what is this controversial enhancement to GPT-4?
We've all been playing with GPT-4 for years now. It's no big deal.
It's been copied, open-sourced. DeepSeek, you can run it on your phone.
You can download it on GitHub, Mistral, and, like, all these GPT-4 level models.
Like, what are we worried about? I don't understand.
It's so confusing and infuriating that the author didn't dive in deeper
and say like what were the controversial enhancements?
Because to me, it was a controversial lack of enhancements.
It was like GPT-4 was great, but it wasn't good enough.
It should have had more.
Immediately I was like 32K context window, not big enough.
What did Google do?
They launched 100K context window.
They launched a million token context window. Is that controversial?
I don't know, but it's super useful if you're trying to deep-dive a book. Like, I don't get it.
I'm so confused by this.
Anyway, and then this is the really funny thing, because obviously, like, all the hate's been on Sam.
But Microsoft is just, like, doing crazy stuff
this whole time. So around the same time, Microsoft launched a test of the still-unreleased GPT-4
in India, and they're just like,
yeah, let's just try it in India.
Who cares?
We don't need to ask anyone.
Let's just do it.
This is the first instance of the revolutionary code
being released in the wild.
That's not the first time Bill Gates
has tested something on India.
Bill Gates' organizations have done things like that.
Yeah, it's like, what if it was,
Microsoft, you're afraid of AI safety,
what if it had gone wrong and completely screwed up India?
That would be bad, right?
It's so bizarre.
Well, it's also internet products, so.
Yeah, I mean, I think it's fine.
I think it's great to go test these things wherever.
I don't think it's a big deal,
but it's just very funny that Microsoft is just like, oh yeah,
let's not deal with that stupid joint safety board.
Let's just rip it.
Yeah, like my boys back in India need GPT-4.
We really need chat summary in Teams.
Like we're gonna take the risk.
We're gonna risk humanity for chat.
It's so funny that that's what we're talking about.
We're talking about co-pilot in Outlook and stuff.
It's like it's so low stakes in some ways.
It's like better autocomplete in many ways.
Anyway, so Microsoft launches GPT-4 in India.
They don't get approval from the Joint Safety Board
and no one had bothered to inform OpenAI's board
that the safety approval had been skipped.
The independent board members found out
when one of them was stopped by an OpenAI employee
in the hallway on the way out of a six hour board meeting.
Never once in that meeting had Altman or Brockman
mentioned the breach, probably because it's not
that important that Microsoft is testing GPT-4 in Indian
office products. It's just a very funny dynamic
to be in this sort of hyper scale startup mode
where you're scaling head count rapidly,
you're raising all this money,
you're scaling users sort of dramatically, right?
And then you have this board that's sitting
over your shoulder kind of like nitpicking
every small decision that you make.
And for a completely new class of like risk vectors.
If I was running a company and somebody told me, hey, a partner of ours just launched our product,
kind of white labeled it and they launched it in India.
I would be like, okay, well, what does that mean
for my brand?
Maybe I want my brand to be front and center
in that market, and so I don't love that
because they're kind of stealing my brand building that will eventually happen when I get the chance to roll out in
that country. So that's an issue. Are we paying taxes appropriately? Are we legal
there? Do we have a registration properly? Is this going to be a PR
nightmare? Like, do we have another partner that was expecting to work with
us in India, and now they're going to be upset?
How are they ensuring that they're putting enough spend behind the go-to-market to make sure that they're successful?
Yeah, there's like a million real questions and then the second one is like, is ASI going to explode in India and like kill everyone?
It's like that's not at the top of my stack.
So anyway, then one night in the summer of 2023,
an open AI board member overheard a person at a dinner party
discussing open AI startup fund.
This is the venture capital fund that they used to invest
in startups that could potentially use open AI's technology.
Basically giving money to the companies
that they're gonna steamroll with the next generation
of chat GPT, it seems like, but I guess there are
some companies that probably took money from startup fund
and wound up doing something application-layer
that was specific and doing great.
I think Harvey took money from OpenAI.
Yeah, yeah, that makes sense.
Because it's much more narrow than just being like,
oh, we're like slight prompt engineering
on top of chat GPT and we're gonna get rolled.
Sam's a dog.
I don't think he was wanting to use a startup fund
to fund potential competitors.
It was very oriented around,
you're gonna be using chat GPT,
we'll give you some money.
Yeah, truly like you're gonna be an API customer forever
and you're never gonna train a foundation model.
So the fund was announced in 2021
to invest in AI-related startups,
and OpenAI had announced it would be managed by OpenAI.
But the board member was overhearing complaints that the profits from the fund weren't going to OpenAI investors.
This was news to the board,
so they asked Altman. Over months, directors learned that Sam Altman owned the fund personally. OpenAI
executives first said it had been for tax reasons, then eventually explained that Altman had set up the fund
because it was faster and only a temporary arrangement.
And knowing everything about OpenAI's insane legal structure,
I believe this.
Like, it's hard to set up a fund as like a Delaware C-Corp
or like an LLC.
It's tricky.
There's not exactly like,
like, Carta or AngelList for setting up a fund
when you're a for-profit that's owned by a nonprofit
that's operating all these different things
and has this ownership by Microsoft
and it's all this complicated cap table.
Like, I'm sure it was very onerous
to set up something like this.
So, but this is where-
It's a funny dynamic too because
Sam Altman, even if he was fully embracing the sort of EA mode,
should be in the mindset of, like, I need to ignore, yeah, some aspects of the board's, you know,
desires, because if other competitors,
like, totally, beat us, you know. That wasn't really on the radar at the time.
I'm sure it was if you were building a foundation model. Yeah.
If other people beat us to this, like, none of it even matters. Like,
all this stuff doesn't matter. So we need to, like, take some shortcuts. Yeah.
Yeah. It's like, it's like, what, what did Ilya see? What did Sam see?
Sam saw commoditization of the foundation model layer and a knockout,
drag out fight in the application layer. And it was like, I gotta go fast.
And this is a consumer tech company. Um.
But here is a line that I think we need to take
much, much more seriously.
So the truth came out about the structure
of the startup fund.
OpenAI said Sam Altman earned no fees or profits
from the fund, an unusual arrangement.
So he had no carry and no fee from that fund.
And that is something that I think is unacceptable.
Like any fund manager should be getting two and 20,
at least the great funds, you know, three, four percent.
We've seen 30 before.
And so this is a big problem.
If I was investing in a fund and someone was like,
oh, like no fee, no profit,
I'd be like,
yeah, straight to jail. Yeah.
But of course, that actually aligns
with OpenAI's incentives, we're joking,
but it is very odd.
And I mean, it does kind of align with this idea
that he was really fast.
My question on this is,
OpenAI said Altman earned no fees or profit from the fund
that he could have had to. And I think your interpretation is right, but there's also another interpretation where he just hadn't
hit the hurdle yet.
You know, yeah, whatever. But I think you're right.
Yeah, if I had to guess on what the actual structure was, it was like technically he owned the GP,
but OpenAI was an LP in the fund.
And so he had legal control over it,
but all the money and profits would flow through
without anything passing to him.
And that seems like the fastest and most current.
Clearly Sam Altman, ridiculously talented venture investor,
should be the one at OpenAI making the final decision
on what companies are getting invested in or not,
and that's in the best interest of OpenAI.
And also it's like, in what scenario is he making,
is the idea of him, he could just start
a separate venture fund
and go raise a billion dollars and just be like,
hey guys, I have two jobs now.
And everyone would just be like, yeah, that's fine.
So if you wanted to make two and 20 on a lot of money,
he could go do that.
And so it doesn't quite track that like,
what frame of mind would he have to be in that it's like,
okay, I'm like the CEO and leader of the next generation
consumer tech behemoth, but really I want to run
like a micro fund and get like, you know,
have control over it.
Like, it's like, the economics don't really make sense there.
And this is what I'm talking about,
like about the follow up is like,
you follow up and be like, what was actually at stake
if he had taken the fees and the profits from the fund?
Like 10 million bucks, 100 million bucks?
Like certainly not 10 billion.
That would have been sort of a smoking gun
if he was out in the world saying,
I don't have any profits in OpenAI
and then was like running the side fund and managing it
and it was branded.
But the fact that he's not earning fees
or profits from the fund just goes to show like, okay,
that should be like, the board should be able to be like,
small slap on the wrist, like next time,
go through the proper process,
make off balance sheet investments from OpenAI
in the meantime, just do it the right way.
And he would still say, yeah,
probably should have just done it,
but whatever, we are moving quickly.
It'd be so funny if like, it's like, oh yeah,
he set up this fund, he personally owns
30% of Anthropic now.
He's just like, I can't deal with these guys at OpenAI,
I'm just gonna go turbo long with direct competitor.
No, it makes no sense.
And there really is no smoking
gun in the portfolio that I've seen at least. But again, like
follow up. To the independent board members, the
administrative oversight defied belief. I can't believe it. It
cast previous oversights as a possible pattern
of deliberate deception. For instance, they also hadn't been alerted the previous fall
when OpenAI released ChatGPT.
At the time, considered a research preview
that used existing technology,
but ended up taking the world by storm.
In late September, Sutskever emailed Toner
asking if she had time to talk the next day.
This was highly unusual.
They didn't talk outside of board meetings.
On the phone, Sutskever hemmed and hawed
before coughing up a clue,
you should talk to Mira more.
And so yeah, the launch of ChatGPT is funny
because I've seen, I talked about this with Dwarkesh,
saying that there's two criticisms of Sam.
One is like, he's non-technical, he's never done anything.
He's not creating really new technology.
And then the other one is like, he moves too fast.
He launched ChatGPT without telling the board. And it's like, only one of those can be
true, right? Like, you can take one or the other, but I've seen
people try and take both, and I'm like, that just doesn't match. But yeah, I mean,
I don't know, probably should have told the board you're launching ChatGPT. I
understand that, you know, it was this research preview. They did launch a
lot of different things, the OpenAI sandbox.
They launched Playground, and none of the other things had taken off like ChatGPT.
Yeah, maybe it was just lucky, and the use cases for that early research product were pretty tame.
It was like, generate the, you know, subject line for your e-commerce brand.
Yeah, that's what people were building on top of it. Totally. There was nothing agentic. Yeah, yeah, it's just like a text generation tool. Yeah. That was good,
but not even great. It was magical. Yeah, but it wasn't
anything. Yeah. If you just think about, like, the
frame of mind that they're in, they're like, we are training
this, like, AGI god,
and we're spending $100 million on a training run.
Like this, and like if it comes out wrong,
it could be like really misaligned
and it's like really high stakes.
And so like, I imagine that that's where the board's focused.
They're not focused on like, oh yeah,
like, you know how we're vending in, like, the GPT-3.5 davinci-002
to, like, a bunch of companies,
like we created a wrapper on that
that just lets people like chat with it directly.
And it's just like a new UI on top of the LLM
that we already trained.
Like when you frame it like that, it's like, oh yeah,
like they did just like kind of whip up
the chat GPT interface in like a couple days, I imagine.
I mean, it was a couple weeks,
but like the first version of chat GPT was not,
it was really just like a wrapper around their LLM
and it didn't feel like that would be the thing
that broke it through.
It felt like the thing that would break it through
is scale it up another 10X, spend a billion dollars,
do Stargate, you know what I mean?
Anyway, let's go into Mira.
So Mira Murati had been promoted to Chief Technology Officer
of OpenAI in May of 2022 and had effectively been running
the day-to-day ever since.
Absolute dog, operational mastermind, you'll love to see it.
When Toner called her, Murati described,
corporate athlete for sure,
Murati described what she saw
as Altman's toxic management style
had been causing problems for years,
and how the dynamic between Altman and Brockman,
who reported to her but would go to Altman
anytime she tried to rein him in,
made it almost impossible for her to do her job.
And so that is a very odd dynamic.
I believe Brockman was a co-founder
and obviously, you know,
Altman and Brockman go way back
because Altman was an investor in Stripe
and Greg was the CTO.
And so they go way, way back.
They co-founded the company together.
And now there's like this layer in between them
that's Mira Murati,
who's clearly like an incredible, talented corporate athlete,
but it is a little weird.
Like imagine if like, it was like,
oh yeah, I'm actually hiring someone
and you report to them and they report to me.
Like you would still come to me and be like, yo, like what?
We go way back, it's weird.
And so however they got in that situation,
it just seems bad, right?
Murati had raised some of these issues
directly with Altman months earlier,
and Altman had responded by bringing the head of HR
to their one-on-one meetings for weeks
until she finally told him she didn't intend
to share her feedback with the board.
Brutal. That sounds not fun in
general. Toner went back to Sutskever, the company's oracular chief scientist. Love that word.
He made it clear that he had lost trust in Altman for numerous reasons,
including his tendency to pit senior employees against each other. In 2021, Sutskever had mapped
out and launched a team to pursue the next research direction for OpenAI.
That's probably the reinforcement learning,
the O1, the Q-Star technology that they were working out
at the time that was very controversial,
but then DeepSeek open sourced it.
So it's like, was it really that big of a deal?
We don't know.
Sutskever had mapped this out and launched a team
to pursue the
research, but months later another OpenAI researcher, Jakub Pachocki, began
pursuing something very similar, and so the teams merged and Pachocki took
over. After that, Sutskever returned his focus to AI safety, and so Altman later
elevated Pachocki to research director and privately promised both of them they
could lead the research direction of the company
which led to months of lost productivity
because you have two groups.
It's like the Google strategy basically.
And so, you know, just like kind of like, you know,
a lot of weird dynamics of like who's really in charge,
the board is misaligned with the management,
the management team is misaligned with like the co-founders
who are clearly, like, brilliant but maybe not operational,
and so you throw some operational talent in between them,
and then all of a sudden you have a co-founder
who's super technical that's reporting to someone
who's an executive, and it just gets very, very funky.
Yeah, there's two challenges.
One, you have a non-profit research group
that's transitioning into a for-profit hyperscaler,
and then you also have this business that's verticalizing,
that's doing everything from research
to the end product layer.
They're also doing an API,
and they're also still worried about AI safety.
And then you combine that with products
that are generating millions and then tens of millions
and then hundreds of millions of revenue
that's also losing so much money
that like if Sam can't continue,
he's on the sort of fundraising treadmill.
So it's like, well, we need to keep growth
at this sort of ridiculous rate.
And this is like even before all the non-profit,
for-profit conversion stuff.
So again, I know how to juggle.
You can juggle on a corporate level,
but not, you can do a Rubik's Cube.
Yeah.
I can do a Rubik's Cube, but I can't juggle, unfortunately.
It's on my 2025 goals, I gotta learn.
Anyway, this is another wild, wild story again.
Treading carefully in terror of being found out by Altman,
Murati and Sutskever spoke to each of the independent board members over the next few weeks.
It was only because they were in daily touch that the independent directors caught Altman in a particularly egregious lie.
Toner had published a paper in October that repeated criticism of OpenAI's approach to safety.
Altman was livid.
He told Sutskever that McCauley had said Toner should obviously leave the board over the
article.
McCauley was taken aback when she heard the account from Sutskever.
She knew she had said no such thing.
And so how would you feel if I published a paper saying TBPN is the worst thing to ever
happen to media and podcasting generally?
I was just catching up here. Like it just seems like an offensive thing to do.
Like, I don't know.
It's just weird.
It's weird to be on the board of this company
and then publish a paper that's like,
oh, you were referencing that.
I was looking at the dynamic of Sutskever,
McCauley.
Yeah, yeah, yeah.
Yeah.
I mean, this alleges that Sam was basically
like mom and dad.
Like mom said I can watch TV.
Go ask dad, what does your mom say?
It's like that dynamic, right?
But I just think the whole core is so weird
to not just, you're on the board.
Like if you're not comfortable with the approach of safety.
Sounds like that should have been an internal memo.
Should have been an email.
And you decided to risk what was already, I'm sure, a tense situation,
as somebody on the board, putting out
basically like activist materials
when you're supposed to be working internally
with management to sort of get,
like it seems like a very extreme step
to just sort of, you know, put this thing out randomly.
And I mean, this is what I keep coming back to is like,
like, let's use our example.
Like if you published a paper saying like,
TBPN's going downhill, like it's a disaster.
John's not taking podcast safety seriously enough.
Like, like first I would be like offended like,
hey, why didn't you just talk to me like off mic
or like even on the show?
Like you could talk to me whenever.
We're live for three hours a day.
You could email me.
There are a million ways that you could bring that up.
But.
I could use the horn.
Yes, you could use the horn to let me know
that you think I'm not doing a good job. But the thing that would really irk me is if you did that and you were wrong.
Like, like,
okay, so you're Toner, you're criticizing OpenAI for the approach to safety.
Habeas corpus, show me the damage that OpenAI's approach to safety has done. Like, what are we talking about here?
What is the damage? And there's a million ways, you don't even have to do the killing-everyone one. You could say, hey, like, the
election is going to swing because of OpenAI's products. Like, did that happen? No, no, not at
all. I think it comes down to, if you're the safety guy or girl, yeah, and you care a lot
about your job. It's part of your identity. it's part of your entire brand, AI safety,
all this EA stuff, and then your job's actually
really not that important at the moment,
and you feel like you're not getting enough attention.
Like this just screams, like, I want attention.
Like, give me attention, validate my beliefs,
make me feel important, which is like a board member
who has an inflated sense of self-importance
and wants to be a star, is like a pretty toxic dynamic.
Yeah, also it's just like,
there's no relative assessment here.
Like, okay, you have a problem
with OpenAI's approach to safety,
let's compare it to DeepSeek's,
let's compare it to Google's,
let's compare it to Grok's, let's compare it to Mistral's. Like,
rank it for us, Helen. Like, tell us where we're sitting here.
Because, like, I'm not seeing any damage from OpenAI products.
What I'm seeing is like a minor product boost,
productivity boost for knowledge workers. Like, people can find answers to things
a little bit faster, write some copy a little bit faster.
Yeah, basically it's a great, you know,
co-pilot for your work.
I know, Satya.
It's like, wait, so it's what I've been using it for?
Even the job displacement thing, right?
Like if we were seeing like, oh wow,
like unemployment is really ticking up,
like this is having an impact, I'd be like, okay, yeah, like,
like we do need to talk about this.
We need to really think about, okay.
Yeah, yeah.
And maybe, maybe it's, you know,
some sort of regulation.
I don't want to like lock this down too much,
but maybe there's some sort of like, you know,
incentive or tax, tax benefit for human workers.
Like we're doing something to take care of Americans.
Like I, I'm all here to debate that and discuss that
There's probably a good way. Sam's talked about UBI before. Like, there are ways to address serious job losses.
But we have, like, 3% unemployment. Like, we're not in some massive AI disruption to jobs yet.
Like, there's no damage being done. And so, like, what are you talking about?
Anyway, yeah, it's almost like that board-level role
around AI safety and thinking about the impact of AI
is like if Toner was writing research papers,
like advising governments globally,
saying like, here's how we see this technology panning out.
Totally.
Here's how you need to be planning around employment and what the future of sort of
various careers looks like.
And that'd be super useful.
That'd be very valuable and beneficial.
Totally.
But that person should report to the CEO and be like, hey, here, I have a bunch of ideas
about how we should be positioning this.
How should we be messaging this to governments?
I'm almost in like the PR department
and I'm doing and I'm publishing all this,
not higher up like taking shots of the company
you're in charge of.
Anyway, I totally get the criticism
that like getting mommy daddied is annoying.
I totally get that.
I think that that is a legitimate criticism here.
And so it clearly also pisses off Ilya and Mira
because they'd been collecting evidence
and now Sutskever was willing to share.
He emailed Toner, McCauley, and D'Angelo
two lengthy PDF documents
using Gmail's self-destructing email function,
which I didn't know was a thing.
Google, what are you doing?
Like, you haven't surfaced that to me.
Did you know that there's self-destructing emails in Gmail?
Never heard of that.
Fun. Give it a try, folks, when you're whistleblowing.
One was about Altman, the other about Brockman.
The Altman document consisted of dozens of examples
of his alleged lies and other toxic behavior,
largely backed up by screenshots from Murati's Slack channel.
In one of them, Altman told Murati
that the company's legal department had said
that GPT-4 Turbo didn't need to go
through the Joint Safety Board review.
Again, it's like, what is GPT-4 Turbo?
Just like, say model runs faster?
Like, do we really need a Joint Safety Board review?
It seems like he kind of has an argument there,
but then Murati checked with the company's top lawyer,
and the lawyer said that he had not said that.
It's just, again, I keep going back to this,
but being in this sort of existential fight for your right
to be a company, which is you're losing money, but you're growing
quickly, but you need to be raising more money. And you have
a lot of heavily funded competitors. Yep. And then you
have the Joint Safety Board review who says we need another
review before you can release a slightly updated version of a product that's already been out in market.
I mean I can't imagine anything more cumbersome
than having to do like heavy, heavy review.
Honestly, it's such, you kind of understand
what was happening and again, like the critiques
of Sam not being consistently candid,
which was what they eventually came out and said.
It was like, yeah, that's fair.
But at the same time, the fact that he would,
has been able to lead the company
to hundreds of millions of users
and staying at the edge and winning
at the sort of app layer right now, despite all of this,
is just a testament to his executive abilities.
No, I agree.
Yeah, yeah, it's odd.
So anyway, on the afternoon of Thursday,
November 16th, 2023, he and three independent board members
logged into a video call and voted to fire Altman,
knowing Murati was unlikely to agree to be interim CEO
if she had to report to Brockman. They also voted to remove Brockman from the board. After the vote,
the independent board members told Sutskever that they had been worried
that he'd been sent as a spy to test their loyalty. That night, Murati was at a
conference when the four board members called her to say they were firing Altman
the next day and asked her to step in as CEO. She agreed. When she asked why they
were firing him, they wouldn't tell her. And this was kind of odd, because she had been in earlier
conversations, and just a couple paragraphs earlier the article says Sutskever and
Murati were collecting evidence. There's a whole bunch of these
examples where you'd think that Sutskever was communicating with her,
so she should know, but then
Murati's like, I don't know why you would fire him,
which is kind of weird, because earlier in this article
they're saying she's really annoyed
by all the things Sam's doing,
so shouldn't she just immediately know,
oh yeah, they're firing him
for all the stuff we talked about?
But that's like very odd, like why would she even ask?
And then she really goes on the offensive and says,
like have you communicated this to Satya Nadella,
knowing how essential Microsoft CEO's commitment
to their partnership was to the company?
They had not.
I remember when this was happening,
it was so haphazard and so shocking and sudden
and the team didn't know what was going on.
And there's all these sort of,
it's like a family in fighting, right?
They're all on the same team,
but clearly there's power struggle and dynamics.
And then you know Satya is kind of like the grandpa
that he's like, really guys?
You didn't consult me on this. Yeah, that's insane
And, like, from his perspective, you guys are fighting over autocomplete and Clippy. Like, the new Clippy.
You guys are building the new Clippy. He's like, I just wanted the new Clippy faster.
And I already tested it, and it's a hit.
Yeah, and so all the board said was that he had not been consistently candid with the board.
Friday night, the OpenAI board and executive team held a series of increasingly contentious meetings.
Murati had grown concerned the board was putting OpenAI at risk by not better preparing
for the repercussions of Altman's firing. At one point, she and the rest of the executive team gave the board a 30-minute deadline
to explain why they fired Altman or resign.
And again, this is odd because it's like,
it feels like she's kind of on the executive team
saying, we're all wondering why Sam got fired,
but she knows, or she should know.
And so it's like, why didn't she just turn around and say,
hey everyone, I actually know why he got fired.
I was in these conversations.
It's very odd.
The board felt they couldn't divulge
that it had been Murati who had given them
some of the most detailed evidence.
They had banked on Murati calming employees
while they searched for a CEO.
Instead, she was leading her colleagues
in a revolt against the board.
Isn't that a crazy thing to do?
That's so wild.
So the board is like, well, this is our next CEO, Mira.
She's asking us to explain why we fired Sam,
but she's the reason that we fired Sam, and we can't say that, because then we lose her as the CEO.
It's like this crazy power struggle. It's so wild. It's a Game of Thrones over there.
A narrative began to spread among Altman's
allies that the whole thing was a coup by Sutskever, driven by his anger over
Pachocki's promotion and boosted by Toner's anger that Sam Altman had
tried to push her off the board. This was like the coded
tweet from Sam, where he signed his tweet "I love you all," which is ILYA,
Ilya. And so he was pointing the finger at Ilya.
And so then this whole meme spread of, like, Ilya's responsible,
when really it seems like a lot of people were involved, and you've heard the whole story now.
Crazy. So Sutskever was astounded. He had expected employees of OpenAI to cheer,
but people love consumer tech companies.
They're like, we're making Clippy here, guys.
Clippy but good.
I don't care about all this safety mumbo jumbo.
Clippy but good.
Cracked Clippy.
Cracked Clippy.
What are you talking about, Elia?
I just want to
get a second beach home and vend Clippy into every Microsoft product.
It's crazy that crackedclippy.com is available for $12.
Go get it. Go get it.
Get sued by Microsoft. I'm sure they still own the thing.
They will come after you hard. So by Monday morning, almost all
of them had signed a letter threatening to quit if Altman wasn't reinstated.
Among the signatures were Murati's and Sutskever's. Such a twist. Actually, I
really like the guy. Oh yeah. I mean, this is intelligence versus EQ, right?
The intelligence thing says that like,
well, legally we can fire Sam,
but the emotionally intelligent thing is like,
actually a lot of people like Sam
and OpenAI is nothing without its people, right?
And for a while, Sam was actually talking,
he was like, I'll just go to Microsoft Research
and hire everyone back that likes me,
and a lot of people like him. Yeah.
And no one else for sure was getting 40 on 300 done.
For sure.
Outside of Sam.
For sure.
So let's hit the size gong one more time for Sam.
Yeah.
And the OpenAI team.
What a funny story.
Anyways.
It's one of the greatest tech stories of all time.
I mean, I know it was very stressful
and frustrating for a lot of people involved
But, you know, I don't know. The story's not over.
It's still a knock-down, drag-out fight. Anyone could win. Is OpenAI Yahoo?
Is it Google? Let's see.
Jessica Livingston, co-founder of
YC,
has a podcast, and she had Sam on to talk about basically that weekend and the craziness. So worth a listen. Just him walking through the sort of step-by-step
conclusions. And he's a deal guy, so at certain points that weekend he was
basically like, yeah, I can just go to Microsoft
and keep doing what I'm doing.
Yeah, I mean, I know that if I was in his position
and I was super stressed during this weekend,
I would wanna just get out of the city.
I would wanna go to a Wander in Sonoma Vineyards.
I would go to wander.com
and I would try and find my happy place.
I would say, I would actually be singing to myself.
I'd be singing.
Find your happy place, I'd say. I would actually be singing to myself. I'd be singing: find your happy place. Book a Wander with inspiring
views, hotel-grade amenities, dreamy beds, top-tier cleaning and 24/7 concierge
service. Wander is such a fantastic idea. It's basically take some of the greatest
homes in the world and make them available by the day. You love this too
much. Two days, three days, four days, thirty days. This Wander in Sonoma Vineyards is fantastic.
Four bedrooms, three and a half baths, five beds,
eight guests, 3,400 square feet.
There's a pool.
You love it.
In Sonoma County.
So if you go stay at that wander, DM me.
I'll give you some recommendations.
Yeah, you almost got sucked into that whole
East Bay rationalist thing, the EA thing.
I was born in the dark.
On the dark side of the valley.
You merely adopted it. I was born in it.
Oakland Children's Hospital. We'll get Eliezer on here, you can
debate him. Yeah, you'd be like, but what if I turn the server off? What if I pull
the plug, Eliezer? What happens then? What if there's bad robots,
but I have a good army of robots powered by Brett Adcock's Figure AI? What then?
What are you gonna do about that, Eliezer? Owned. They don't even walk like Biden.
Yeah.
Anyway, let's move on to Intel.
Everyone's wondering what's gonna happen with Lip-Bu Tan, the new CEO of Intel.
Tons of debate over what should happen to Intel. Of course, Intel, the storied chip manufacturer, is integrated,
so they both design and manufacture the chips.
They have a design arm and a fab.
Very different from Nvidia and TSMC.
And it's crazy.
Nvidia just designs.
If they had one different letter in their product,
they would have been a trillion-dollar company.
If there was just a G instead of a C.
The GPU. Oh, true, true. They really were just one letter off. And, you know, they actually do make a GPU. Really? Yeah, you don't hear about it much. It's not very popular. Anyway.
So Lip-Bu Tan isn't signaling a major departure from Intel's past strategy. When he came on, he said, hey,
we're staying the path. And we were kind of debating:
is this just signaling to employees, like, hey, don't worry,
everything's going to be the same with the new CEO, you just keep doing what you're doing,
we're going to figure out a new strategy, but your job is safe right now?
Or is he a genuine believer in
the strategy that's been going on for years? He has long agitated for, hey,
we need some change, and we think that's why he was brought in.
but the Wall Street Journal has some new reporting
that we're gonna take you through.
And so he's only been on the job for two weeks,
but time isn't really on Lip-Bu Tan's side.
Tan became Intel's new CEO on March 18th
and has already started laying out some of his vision
for the storied but troubled chip giant.
In a letter to shareholders,
he spoke of the need to up our game.
We love that.
To make Intel's products more competitive
in the crucial market for artificial intelligence systems.
He also said he was equally focused
on building up the Foundry business,
where Intel manufactures chips designed by other companies.
Of course, Intel has lost to TSMC on the TPU,
on the Apple silicon chips,
on Nvidia's stuff. TSMC really has a lot of great
clients. And Intel can do a lot of what's called trailing-edge
fabs. So think about the chip that goes in your car. Yeah,
it doesn't need to be an H100, but it does need to be made with
quality at scale. And so we saw a lot of this during COVID; a lot of the chip delays led to cars being out of stock.
Daytona SP3 with an H100 would go pretty hard.
Pretty hard.
I mean, you're saying that,
but I actually would love a car
that was able to inference whisper
and like,
you know, chat GPT or grok or deep seek or something,
like in real time.
So you can have a conversation with zero latency
because it's all done in the car.
And there's enough energy in these cars.
There's enough like power, it's possible.
Just, just burning a V12 to power your H100
while you're talking to your chat bot on the way to work.
That's the future.
That's great.
Hey, make a to-do list, bot.
It's great.
But yeah, I mean, this is one of the things
we talked about with Tesla, and Sam was saying,
maybe Tesla and XAI and X all kind of combined,
and you can imagine that if you're driving a Tesla,
you want to have the best interaction with the AI features.
And if you have autopilot enhanced by XAI,
you could be using, you could be scrolling the timeline.
Exactly.
I didn't know this, but my buddy Ben Taft
and I were hanging out over the weekend
and apparently with the Tesla autopilot,
they make you keep your hands on the steering wheel
and they use a camera.
And it used to be that you could hack it
by like putting weights on it.
So you could just sit there.
But now having your car on autopilot,
but being forced to keep your hands on the steering wheel.
And then it gives you strikes.
If you get five strikes, you can't use autopilot anymore.
That's insane.
Yeah, it's actually kind of miserable.
But you get used to it and you kind of adapt.
I mean Mercedes has lane keep assist
and you have to check in with it every like 20 seconds
and like give it a little wiggle
or like hold your hands on it basically.
But you can take your hands off for like 10 seconds.
George Hotz's Comma AI does not need you
to touch the wheel at all.
It's amazing.
It just has a camera.
But if you look down
or you look at your phone, it will blink at you and disable.
And so George has, I think, correctly argued
that that is the future of self-driving systems.
Just camera on the face is the person paying attention.
If they fall asleep, it detects that.
Doesn't matter if their hands are on the wheel.
If they're sleeping, that's not good.
And so I think that's the future,
and I think that's where Tesla's moving.
And I think that the situation that you described is very
temporary. I think that we will see Tesla move long-term to a purely AI mechanism.
But still, yeah, that was hilarious.
Yeah, but I mean, that's basically why most people just use chauffeurs,
and then they sit in the back.
That's right, because the chauffeur will keep the hands on the helicopter.
Exactly.
Yeah.
In other words, the same things Intel's last CEO was trying to accomplish.
Tan reiterated those points in a speech on Monday, kicking off the company's Intel Vision
Conference.
Beyond a few aspirations, including better AI strategy and custom built chips for niche
computing work, there was little to distinguish Tan's playbook from his predecessor.
Stay tuned for Intel's plans on humanoid robots,
he told the audience.
Let's go.
Let's go.
In short, his short tenure on the job so far means Tan
could well have more significant changes in mind,
but one option Intel doesn't have is more of the same.
Tan's predecessor, Pat Gelsinger, absolute dog,
was effectively booted following an ambitious multi-year effort
to both improve the company's chip designs and catch its manufacturing processes up to
those proffered by TSMC.
And yeah, it's hard to do two things at once.
They're trying to be a jack of all trades, but they are a master of none
right now.
That effort hasn't worked, or at least not yet. Intel's annual revenue has shrunk by 33%
over the last four years.
The once flush chip giant has been burning cash since 2022.
The Foundry business still mostly produces
Intel design chips and it lost 13.4 billion last year.
Little size gong moment of silence.
Anyway.
This moment of silence is brought to you by RAMP.
Save time and money.
They need to be on RAMP for sure.
One change that Tan hinted at is taking more whacks
at Intel's cost structure.
The company reduced its workforce by 13% last year,
but still employs far more people
than any other company in the industry.
You know the Anakin meme?
Yep.
Where you're a chip company,
you must have done great over the last five years.
It's like, you must have done great, right?
Intel: down 60% over the last five years.
Yes, so rough.
Yeah. So Tan's plan involves listening
more closely to customers. That sounds like corporate speak,
but it's meaningful at Intel.
The Wall Street Journal, pulling no punches.
Decades of technical success and near monopoly
on personal computer chips nurtured a culture of arrogance.
An Intel recruiter who interviewed Gelsinger
for his first stint at Intel out of a technical school
called him somewhat arrogant and noted,
he'll fit right in.
You'll love it.
But Intel's been in trouble.
Look at this graph of annual revenue per employee.
Nvidia is putting up historic numbers,
$3.5 million per employee.
Intel is down at less than $500K per employee in revenue.
That is not good.
Right there with the manufacturer of the TI-84.
Texas Instruments.
Texas Instruments is one of the best corporate names
of all time. It's a great company.
Also, the founder of TSMC worked at Texas Instruments,
was passed over for a senior role,
then went to Taiwan and was like,
you guys want to do this?
And they were like, yeah.
Yeah.
It's great.
And Texas Instruments manufactures calculators,
but they also manufacture weapon systems and stuff.
Yeah, yeah, yeah.
It's a great company.
Yeah.
But it's very funny that they also make the calculator.
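The revenue-per-employee gap they cite is just division; here's a quick sketch. The absolute revenue and headcount figures below are illustrative assumptions chosen to match the rounded per-employee numbers quoted on the show, not exact filings:

```python
# Revenue per employee, using rounded figures consistent with the
# per-employee numbers quoted in the discussion (illustrative only).
companies = {
    # name: (approximate annual revenue in USD, approximate headcount)
    "Nvidia": (105_000_000_000, 30_000),   # ~ $3.5M per employee, as quoted
    "Intel":  (53_000_000_000, 110_000),   # works out to under $500K per employee
}

for name, (revenue, employees) in companies.items():
    per_head = revenue / employees
    print(f"{name}: ${per_head:,.0f} per employee")
```

Either way you round, the ratio between the two is roughly 7x, which is the point being made about headcount.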
What so far has been absent from Tan's strategy
is a deeper shift into Intel's business.
Andy Grove, the storied Intel chief who mentored Gelsinger
would have called the current AI wave
a strategic inflection point that required decisive action,
much like when Intel itself abandoned
making memory chips in the 1980s.
Back then, Japanese producers were making memory
more cheaply, rendering it unprofitable for Intel,
so Intel took up the then-nascent market
for personal computer processors.
Effectively, an entire pivot for the business.
Andy Grove famously said,
only the paranoid survive,
and he was very aggressive about steering the ship.
Was he the original person to say that?
Yes, that's his quote,
and that's like the name of his book. I think that's why, yeah.
He might have gotten it from somewhere else. He probably stole it from a group chat.
One of his boys.
One of Frank Slootman's boys. Like, Frank, you guys gotta amp it up.
There's a book about that.
Yeah.
Today some analysts suggest Intel should split off
its manufacturing operations from its chip design
and marketing functions following a long established
industry trend of the fabless designer
or the pure play fab TSMC or Nvidia strategy.
It could gather outside investors in the manufacturing
operation to bring in more capital, something the company is already in talks to do.
Investors so far have welcomed Tan, sending the company's stock up 10% since his appointment in March.
And so if you have a take on Intel, if you want to get exposure to Intel, you want to go long or short,
you gotta go to public.com. Investing for those who take it seriously.
They've got multi-asset investing, industry-leading yields, and they're trusted by millions.
And Public is the sponsor of our ticker.
You'll see at the bottom of the show all day long.
Crypto.
Let's stay with Intel for a little bit more.
Intel's New CEO Plots Turnaround: we need to improve.
So this is heard on the street.
A little bit more information here.
We gotta get some audience members' stocks up on the ticker.
Oh yeah, that'd be great.
We certainly have some public market corporate athletes.
Some Oscar maybe?
Yeah, yeah.
Yeah, that'd be good.
So there's some Palantir.
Yep, of course.
There's some other good ones.
I think Tesla's up there right now, friend of the show.
Thank you, public.
So in Las Vegas at the Intel Vision Conference,
there was one more line that stuck out.
We will redefine some of our strategy
and free up the bandwidth, Tan said.
Some of our non-core business, we will spin it off.
And so people are wondering what businesses
are technically non-core, what will he spin off?
And now Intel is leaning into AI,
which will include humanoid robotics,
which Tan said has the ability
to redefine manufacturing in the future.
And they also need to regroup its existing pool of workers
while attracting new talent
with a clear vision of the future.
But I mean, as I was thinking about this,
if I was sitting down with Lip-Bu Tan,
the new CEO of Intel, and I had one piece of advice for him,
I would tell him to go to ramp.com because time is money
and he should save both right now.
They're losing, they lost $13 billion.
I mean, if they had easy to use corporate card
bill payments, accounting and a whole lot more
all in one place, I think we'd see a very different Intel,
for sure.
Yeah, they're focused on what should we spin out?
Should we focus on design? On the fab side?
They really need to be focused immediately on
just basic financial operations, and it's one
of the most basic things you can do,
but also one of the most intelligent things you can do.
So we should have the CFO of Intel on the show,
give him the pitch for Ramp, really dial it in,
understand it.
What corporate card is he using? And is that the source of all of Intel's problems? It's possible. It's very possible.
Well, I would like to see this bet expressed.
I want to know if Intel is gonna split this year, and I hope we can get a Polymarket set up for it.
I don't know exactly how we would define that, but we're happy to announce that Polymarket is now an official partner of TBPN, and
we want to give a shout-out to the ticker down at the bottom. The ticker is powered by Polymarket. We had a friend of ours,
Cosgrove, write some code and pull in their API, and we will be
iterating on that and hopefully making it better and better.
There's currently no market on an Intel spin-out; I would
like to see one, and we will work on getting one set up.
I mean, you can kind of express that in the public market,
but I love Polymarket for just defining these
like narrow events and just creating
another information source and underrated,
there's a great conversation section
that happens in the comments, people debate these things
and Polymarket does a great job of surfacing
interesting markets on their X account and
online, when things are shifting, when trades are changing. So highly recommend
a follow. It's an alternative to traditional news for understanding the current thing. With news, you go on,
you know, the New York Times, and they're sort of being very sensational about things. It's not really oriented around truth.
It's oriented around attention.
And Polymarket lets you know what's actually important.
Like I was talking to Shane about this.
I was like, I want a newsletter from Polymarket, and maybe we should kind of use our show
as a version of this. When a market moves significantly in the
Polymarket tech markets, when there's a big jump in the prediction,
like there was recently a big change in the expected best LLM by the end
of May, and I think it flipped from OpenAI to Google really quickly, really suddenly.
When something like that happens and the odds shift all of a sudden, that's when I want
to read the news story.
That's when I want the deep dive.
That's when I want to go deeper on that story.
If we're just seeing everything's humming along at 50-50, I don't need another breaking
news story about it because that's probably just a press release that's pitched by some
PR person.
Right?
It's not really news.
And so I say it's not news unless it's moving the market. That's my thesis. But you know what else is moving the market?
Horses. That's right. This story has shaken the world of technology
journalism to its core, and I think a lot of tech journalists are gonna be very
excited to hear this. So: Hermès, inside a luxury show-jumping
competition. If you're a technology journalist and you grew up doing
dressage or show jumping, you are going to want to
pay attention to this, because it's a really big deal.
There was an annual equestrian event in Paris and the French
brand has been keeping in touch with its saddle making roots.
This is a big deal for everyone who follows horses and show
jumping, which is obviously pretty much everyone in tech journalism.
Yeah, and for those that don't know,
Hermes has always said the horse was the first client.
Yes, the former chair of Hermès, Jean-Louis Dumas,
used to say the horse is the first client.
And that's because horses were,
for the French luxury brand, the first market, in 1837,
when it opened as a harness maker.
They're not starting new companies that have that much of a lineage today.
Hopefully they will, if we don't get paperclipped.
Yeah. Horses are also its original muse, inspiring stirrup-inspired closures on bags, designs
on silk scarves and in the company's horse and carriage logo. Last week that connection was on vivid display under the Grand Palais glass ceiling where
some of the top showjumpers in the world competed for branded rosettes and a
400,000 euro grand prize. The Saut Hermès is
among the most challenging equestrian competitions in the world, with
hurdles reaching 1.6 meters, or just above 5 feet.
Saut is French for leap.
The three day event is also a display of everything Hermes, a reminder that in addition to selling
bags, scarves and coats, the brand still outfits top riders.
In a luxury downturn, when many brands are cycling through executives and creative directors, Hermès posted a 13%
increase in sales in 2024 compared to 2023.
And I think it's probably entirely driven by
increased demand from technology journalists, right? You have to imagine their power in the luxury market
downturn. Of course, because of all the family money, yeah.
Its power is in its heritage,
which appeals to athletes as well as fashion clients.
It's one way to show that our equestrian roots
are very much alive, not just a narrative,
says the managing director of Hermes's Equestrian Category,
or Metier, as the brand calls it.
110 horses with names such as Hello Chandora Lady,
Al Capone de Carmel, and Cocaine Duval traveled
to the temporary stables along the Champs-Elysees.
Their white-tented enclosures were lit
with the same round chandeliers that decorated the palais.
They could warm up in an enclosure right near the intersection, on view for passers-by, in the colors orange and brown of the facade of the brand's flagship on the Rue du Faubourg,
I can't even pronounce that, less than a mile away. Anyway, very fun story. A little
bit more information that I liked: best known for its colorful scarves and
leather handbags, Hermès famously makes the Birkin bag, but they also sell
equestrian accessories and these are some things that tech journalists in the audience are going to want to pick up.
They sell $1,200 breeches and, for $460, a felt sugar box.
And that's a box that's $460.
You open it, you store your sugar cubes in there.
So when you want to give your horse a sugar cube, you have it in a nice presentable box.
Yeah.
And for those that don't know, horses are highly opinionated about this type of thing. You try to give them
sugar out of a regular old plastic Tupperware, they are gonna be pissed.
That's just a microplastics thing. Yeah, you don't want to be giving a horse
microplastics. You're not gonna have an enjoyable ride. Definitely not giving them, you know, a Tupperware
sugar box. Definitely not. Anyway, speaking of someone with as relentless
as a drive as a horse, we got David Senra in the building.
He actually has a, he told us just before joining
he's got a good horse story.
So why don't we start there.
Okay, tell us the horse story.
What's up guys?
Good to see you. How you doing?
Good to see you.
Did you get some more books since last time
or is this just a different angle?
No, I'm using a different camera.
Looks good.
So I absolutely love the horse story.
I was talking to one of the most successful tech company CEOs
a few months ago.
And he was telling me how stupid he
thought it was that his wife picked up a horse habit.
And there's, I guess, a famous horse
training facility in Wellington, Florida, which is like the middle of nowhere, kind of by West Palm.
And he goes, I thought it was the biggest waste of time.
And he shows up at this event and like Michael Bloomberg's there.
And like the guy from Goldman Sachs is there and like all these
essentially like fabulously wealthy and successful.
Yeah.
He's like, oh, maybe this isn't a waste of time.
If the horse, even if a horse costs a million dollars, you get one deal done at that equestrian
event and it pays for itself.
So I, I read two books last week.
One was terrible, which I told you about, and then one was good, but not episode worthy.
So I had to republish an old episode and I republished an old episode
about this guy named Daniel Ludwig
who was the richest man in the world in 1980s
and no one knew his name.
There's no pictures of him anywhere.
And he made his money,
the first way he made his first fortune was hauling oil
and then he had huge cargo ships
and he was getting smoked for contracts
for like the big Middle East oil providers
because Onassis and all the other Greek shipping magnates
would build the biggest yachts in the world.
And then they would invite the people they want to sell
onto the yacht and they got all of the contracts.
So Daniel's like, okay, I'm gonna do the exact same thing.
He didn't even, he just worked all the time.
So he didn't even go on the yacht.
He let them use it.
And he said later on that he made more money
from that ship than all of his super tankers combined. Wow. Yeah, it's amazing. Um, Jordy,
what should we talk about the Ken Griffin episode that's coming up? Are we leaking that?
No, no, I just finished it this morning. I haven't eaten anything, and I'm on like, uh, 600.
I had three Cometeers so far today.
So I'm shaking right now.
We got some caffeine, like caffeinated Senra.
To match your energy.
But I got it out.
So what was that like for you?
Tell us about the inspiration.
Tell us about why you did the episode.
Obviously everyone knows Ken Griffin, founder of Citadel,
but what inspired you?
Because I study psychos for a living and I love them and they're the most interesting,
fascinating people to me.
And so you guys actually made a video on this tweet that got like 1.5 million views, right?
It's John Arnold talking about, Hey, what did Ken Griffin do when Enron blew up?
And he talks about just like, you know, a lot of people were like, oh yeah, I knew Enron was making money. All this talent is going to essentially scatter to the wind. You know, some people will recruit, maybe they'll build like a good commodities business.
And he tells a story where the day blows up and keep in mind in this story, Ken Griffin's
like 33.
He's like really, really young, and he immediately charters a Gulfstream jet, goes
to Houston and interviews every
single person that was important in the Enron trading business. And then he winds up hiring
all the best talent. And in the talk that I used as the basis for the episode, because there's no biography written on Ken, he goes, and since then we've made about $30 billion
trading commodities. So the main reason I wanted to profile him is because
I get to meet a lot of really interesting people because of the podcast and I always
ask the same questions. It's like, who's the smartest person you know, who has the best
business you know. And if they're into finance, even if they don't know each other, I kept hearing Ken Griffin, Ken Griffin, Ken Griffin, and they would say two things about
him. They say he's a winner and he's a killer. And you just find all these stories,
they're not in a book,
but they're just like spread across the internet.
They might be in other books, stories of him taking something and just taking it to the next level.
And then John Arnold, the one thing where I knew
as soon as I read that line, I was like,
oh, I'm gonna do an episode immediately is John's like,
listen, I'm not gonna take a job, but I respect Ken.
So yeah, I'll talk to him.
I'm headed to Aspen for an event.
Tell him he can call me when I get back to Houston next week.
And then Ken's assistant calls back like two minutes later
and is like, hey, would you talk to Ken
if he flies to Aspen tomorrow?
And he's like, yeah, I'll do that.
So it's just like, there's this recurring theme through all these biographies in the history of great entrepreneurs,
is like, how bad do you want it?
And I think reading these stories
like really stretches like what's possible.
Did Ken have a jet at that point? Cause you've got to discount the quick flight a little bit if he already had the jet.
He flew Southwest, actually. Standby.
It was rough.
If he didn't own the jet,
he was definitely chartering the jet.
So I don't know if he owned it, but yeah.
Still meaningful.
Did you find the story of Ken Griffin in college getting the stock tips from State Street? Did you cover that?
No, the one thing I heard was that he convinced Harvard to let him install
like a satellite dish so he could get real, up-to-the-minute data. He was the only person at Harvard that had this data flow.
So that was true for the publicly traded companies that he needed stock prices for, but he also
had this interest. So he's trading convertible debt at the time, and convertible debt doesn't just have a ticker that you can look up online or call someone for. You actually have to go to a trading desk and talk to a sales and trading guy to get, like, hey, what's the market trading that particular bond at? Because it's not incredibly liquid.
And so what he would do is he would take the Red Line from Harvard in Cambridge down to Boston, go into State Street, which was one of the largest, it still is, like, a huge global asset manager. He would bring flowers for the ladies at the front desk and sweet talk them, kind of just be like, oh, so good to see you, Susan, you're the best. And then he would walk through and just walk the desk.
I don't know if this is apocryphal; they just kind of told this story at Citadel. Yeah. But he would just walk, and then he would tap a guy on the shoulder and be like, hey, what's the spread on the convertible debt on Microsoft
today or something like that?
And then he would get the quote, go back, run his pricing model, and then decide whether or not he should actually put in an order.
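The loop being described, get a dealer quote for an illiquid convertible, compare it to your own model's fair value, then decide whether to put in an order, can be sketched in a few lines. This is a toy illustration, not Citadel's actual model; the function name, the edge threshold, and the numbers are made up:

```python
# Toy sketch of the workflow: quote in hand, run the pricing model,
# trade only if the bond looks cheap enough relative to the quote.
def should_buy(model_value: float, quoted_ask: float, min_edge: float = 0.02) -> bool:
    """Buy only when the model says the bond is cheap by at least min_edge (2%)."""
    return (model_value - quoted_ask) / quoted_ask >= min_edge

# e.g. model says the bond is worth 98, the desk quotes 94.5: a ~3.7% edge
print(should_buy(98.0, 94.5))   # True
print(should_buy(98.0, 97.5))   # False, edge too thin to trade
```

The point of the threshold is that an illiquid quote is noisy, so you only act when the model disagrees with the desk by a wide margin.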
Everything he does is like that. That's why it was so fascinating. He just takes to the extreme this idea of buttering up, you know, the gatekeepers inside of a business. Like in David Geffen's biography, he talks about how he had a huge advantage doing that. Michael Ovitz, I just did two episodes on Michael Ovitz, same situation. They're always able to obtain information that other people can't, because there's somebody in the way, and they just figure out how to get around that person, and it's usually through small acts of kindness.
Did you get into the story of the IP theft that happened in 2011?
No, so basically there's no biography of Ken, right? And so I watched every single
interview I could find with him. But the problem is, most of the interviews are all timely, and I was looking for timeless, right? It's like, hey, what do you think about this political candidate, or what do you think about the market now? And so the best thing I found was
this talk at Yale, the guy interviewing him, kind of horrible,
but I transcribed that talk at Yale
and then went through the transcriptions
just like I do the books.
And then there's another thing, it's this book right here,
that I found because Josh Wolfe talked about it. I think Ken was an investor in Josh's first fund, and Ken recommends this book; I think he makes people, or he strongly suggests, that people at Citadel read it. And so I read that to get context, because if people tell you, hey, this book is really important to me, or, you know, here are the five books you have to read, it gives you an insight into their personality, right? And that book, the subtitle is literally, are you playing to play or are you playing to win? And think about the tweet from John Arnold. Was he playing to play or was he playing to win?
He was playing to win.
And so I think there's a lot of analogies in the Yale talk, where he gives a brief overview of his career and then a bunch of principles for you to apply to your own career, which I thought made it more of a Founders episode.
Can you talk about what it means for Ken to be a killer,
like really getting into like your definition of that,
because you described him as like a winner and a killer,
but I think killer can have plenty of different definitions.
You can think of a killer as like somebody
that just like figures out how to win and is aggressive,
but then I still think of someone like Eric Gleiman
as like a killer,
but he's also extremely kind. And so it's possible to be like kind and a killer.
How do you describe Ken Griffin? I think every single person that I've profiled on founders
is a killer. And I don't mean it as a pejorative. It's like, they take what they're doing very seriously. This idea really stuck in my mind because there's this biography of Bernard Arnault called The Taste of Luxury. It's really hard to find, I think it's like $3,000 online, because there are very few biographies of him in English.
Right.
And this one ends when Bernard is like 40, and at the end of the book, he calls his shot. And you guys were just mentioning earlier, I heard you on the show, it's like, well, if you wanna do a competitor to Rolex
or some kind of luxury brand,
like you had to start 100 years ago.
And Bernard had that insight, you know, 40 years ago,
35 years ago, and he's just like,
oh, these things are very valuable.
I think they're gonna get more valuable in time.
And I had no competition because you have to start them,
you know, two centuries ago or two generations ago.
And, but there's a line in that biography,
I never forgot, where they're describing him, and they say only killers survive. So, what I mean by killers: I was actually out to dinner in Miami on Saturday night, the night before I was going to record this episode, and somebody we know came over from another table.
And he was asking me, oh, what's the next episode you're working on? He works for another hedge fund, and the guy that owns that fund is really close to Ken. So when I brought up that I was doing Ken, he's like, oh, I have so many fucking Ken stories. And one thing he told me, this is an example of being a killer even away from your competitors, right? There was a tiny business inside of Citadel
that had something to do with the new technology
they were developing.
And it's actually Ken sat and talked to the guy
for a few hours about tiny details,
like where are the servers?
What are we doing with them?
He was just completely obsessed.
He has no other hobbies than just building
this massive empire. And then one thing that I think is directly related to your question, Jordy, is in the talk he's like, you don't want to just win, you want a landslide. You want to beat your competitors so bad that they do not survive, because if you let them survive, they will come back, and you don't want them to come back again. And when I got to that part of the transcript, I'm like, this isn't just how Ken thinks. There's a great line in this book called Invent and Wander, which is the collected writings of Jeff Bezos. Walter Isaacson collected all of Jeff Bezos's shareholder letters and then transcribed and edited all of Jeff's important speeches.
And Jeff has a great line in there.
He's like, do you really want to prepare for a future where you might have to fight somebody as good as you? And he goes, I don't. And so it's the same thing: I don't want to win two to one. There's a line in Hardball, it's like, fuck winning two to one, you need to win nine to two. You need to stomp them. I just had dinner with, I can say this, Mark Lore, the one that founded Diapers.com, and then he went on to sell Jet and all this other stuff. But I was lighting him up with questions. I was like,
I want to know what it was like competing against Jeff Bezos at that time.
And he was just like, you can't, there is no contest. He's going to steamroll you. I had no choice.
That's amazing. I mean, the killer mindset with Ken Griffin just takes me back to 2012. This trader on the quant team stole code, they call them alphas, basically code that's going to generate returns, stole a bunch of stuff on a hard drive. Citadel finds out, realizes he's stealing the IP, the code. The guy freaks out, he dumps the hard drive into the river in Chicago.
And Ken Griffin and the Citadel team get scuba divers
to go into the river and find the hard drive
just to send it to the FBI to get this guy busted.
And I was like, yeah, he's not gonna let you just take his alpha. He's just, like, flying in scuba divers.
Yeah, flying them in from the Caribbean, basically. He talks about that in the talk, I didn't include it in the podcast, but he's like, listen, if somebody's gonna leave, you know, you work at Citadel and then you want to be a doctor? I'm gonna write you a letter of recommendation, I'm gonna support you in every way. You leave for a competitor, and that just brings up different, very different feelings within me. And so that's what I mean. It's just like he's completely bought
in. I think the important point here is like, go ahead. On that note, and maybe you can just
wrap this into the story you're about to tell, but how does he think about investing in other funds?
Right. Because in many ways, funds may start with a singular strategy, but in a long enough time horizon, the manager
says, well, okay, I'm going to run an empire. Now I'm going to do the Ken Griffin playbook.
We're going to have a bunch of different verticals. He's invested in some venture funds.
He's invested in... Okay, so I think maybe it was on your show, I forgot who told me. Oh, no, no, no, I know who told me.
Like you'll see this over and over again.
When people start to understand how valuable
their industry is,
they essentially get in all the good deals.
So like think about, you know,
Nvidia is in every single AI anything right now.
I just did an episode on Jerry Jones.
And I talked to a bunch of people that live in Texas,
a bunch of people that are around him.
And they're like, yeah, the Cowboys is one thing, his early oil and gas, but you don't understand the investing he's been doing since then. He's in every single thing. I didn't track how, I just hear over and over again that he's got money everywhere.
And again, these are private companies, so you never know. But you kind of see this, because if you come to Miami, he's buying everything, not just like, literally everything. He's building what I think was gonna be the most expensive house in history in Palm Beach. And so what I was told is, forget the enterprise value of the companies, which is, who knows, 80 billion, whatever the number is, right? The guy's been pulling out billions and billions and billions in cash. He just doesn't know what to do with it.
So yeah, I'm sure he's invested in everything.
I heard stories about Bloomberg. I wound up talking to somebody that knows a lot about his family office, and I was asking questions around this, trying to get a sense of how large his fortune is. And he was like, well, I can give you a little hint. Look, we spent X on charity, we have this many people in the fucking family office. And you kind of piece this together. It's like, oh, well, what are you going to do if you're making five, seven, ten billion dollars in cash year after year after year? Like, eventually you fight for an AMG One allocation, of course.
I mean, it is crazy how diverse the Citadel team has become, because they have this high-frequency trading arm, but they started with convertible debt. The story of Citadel is, he got so good at trading convertible debt that he just straight up maxed out the market size. He was TAM constrained on that. And then he went into high-frequency trading
and then got the global equities team up and running.
And the equities team, when I was there,
they were doing like 2000 CEO interviews a year
or something, just like interviewing every single person that runs a public company,
taking the temperature, figuring out, oh, do we like this manager, do we like this leader, should we invest? And then on the other side, you have the high-frequency guys, who are like, I haven't talked to a human being in months, and it doesn't matter. And they're both printing. They're both absolutely printing. Yeah.
What I love too about him maxing out markets earlier in his career: there's something in the Q&A he takes from the Yale students at the end. They're like, how do you decide what business to go into?
And he's like, you have to think
about total addressable market.
We have to be in deep liquid markets
because we're gonna do the best research on the planet
and we need to be compensated for this research.
Another thing that I absolutely loved,
I think is really important too,
is he talks about how much of an influence mentorship and apprenticeship are going to have on your career as a young person and as you continue to go. And this is something I see over and over again. There's this book on my desk, it's impossible to find. You'd actually like this, it's called Autopsy of a Merger. And it's about this deal that
Jay Pritzker did, one of his most infamous deals.
And what I thought was interesting is, I remember reading Sam Zell's autobiography and then getting to speak to Sam. Jay Pritzker was like Sam's older brother and mentor. And Sam said that Jay had the greatest financial mind of anybody that he ever met.
And so I was like, okay, well, Sam is studying this guy.
I need to study this guy.
But I want to go back to the point I was about to make. Think about every single person covered on Founders: they were so good at their job that somebody wrote a book about them. This is the smallest percentage of the people that have ever lived. But if you take all these lessons as abstractions, you can apply them even to businesses far afield. This is actually advice that Ken gives,
where he's like, as an entrepreneur,
you need to be looking for edges.
So obviously you study inside your company,
you study your competitors,
but then you study businesses that are far afield.
So he talks about how he built this thing called a risk wall at Citadel.
It's 30 feet by 10 feet.
It's a giant screen.
And he says, before, they were getting like a B, B-plus on how well they were managing risk, and, you know, Ken's not going to be satisfied with a B anything. And he's in the office of Saudi Aramco in Saudi Arabia, and he sees that they have this giant fucking board, like 30 feet long, 10 feet wide, and it has all the important metrics, like where the ships are, how much they're producing, everything else. He's like, oh, I'm going to take that idea and apply it not to ships and oil, but to getting all of the data I need so we can start managing our risk better. And he says that one idea took them from, you know, a B, whatever the case is, to one of the best risk departments in the world.
So the reason I think about him is because, I don't know if you guys told Truman to do this, but I have been getting some threatening emails and text messages. I'll tell the audience, because they might know. I'm getting calls from Jordy at like 5:30 in the morning California time, just letting me know he's on his way to the studio. I'm getting text messages from John at 5:45. I get a text today, and all I see is the back of John, Jordy, and I think the rest of the TBPN crew, and they're doing curls till failure. And this is the caption: TBPN coming for your neck.
It was a good time this morning.
I mean, that's every morning. Podcasting never sleeps.
Yeah, but you guys are taking it, you know, to a completely different level.
Our killer mindset is, it's not enough for us to be the biggest. Other people need to quit.
They need to quit.
We like to say never podcast weekly, always podcast strongly.
I love that line.
And that's why we train every morning.
Let's talk about frameworks for focus. You covered a little bit of this, but has Ken put anything out there around how he's evaluating opportunities? They're running all these business lines, and then he's got somebody that comes to him with an opportunity internally, and he's evaluating it based on the market potential. But he didn't roll out every business line in the same two years, it was sort of staged, right?
Okay, you just accidentally hit on something that's very important, which I put in the episode description as one of the other reasons, in addition to all these crazy stories. Let me back up, and don't let me forget where I'm at. The first time I came across Ken, because I just don't pay attention to finance, right? Now I do, because I watch TBPN. But I'm reading.
Now you need a fantastic liquidity provider like Citadel Securities.
You gotta get your size up.
This is one of my favorite books, okay. I think it's episode 222 of Founders; I think I maybe did it as an episode in the 80s, like six years ago, whatever. It's Ed Thorp's autobiography, A Man for All Markets. And Thorp is a fascinating character, a legit genius. He was the person that made the first quantitative hedge fund ever. He built the world's first wearable computer with Claude Shannon.
He was the first LP in Citadel.
So the first way I came across Ken Griffin,
the way I was exposed to him,
was at the end of the book,
Ed Thorp had closed down his hedge fund, right? It was called Princeton Newport Partners. He'd already made more money than he could ever possibly spend.
And what happened is, Ken Griffin was 19 years old, comes to Ed Thorp's house, and Ken Griffin's mentor is with him. And essentially, Thorp's like, I always wondered how far I could have gone with the strategy. I wasn't the right person to pursue it, but I wondered what would happen if somebody did. And he's like, well, the prodigy came over and I gave him all my files. And this wasn't publicly available information. You couldn't get it anywhere, to the point you were making earlier, John. And then he wound up being the first LP. And you fast forward, I think the book ends when it's like six billion, with like 15 billion of assets under management or something. And now it's multiples of that.
But I guess the way I think about this is, I'm obsessed with people who do things for a long time. And even Ken says, he founded Citadel 35 years ago, and he founded Citadel Securities 23 years ago, if I'm not mistaken, right? And he's like, we made more money in the last four years; we are making more money today than we ever have in the past. And he talked about something that maybe gives you insight on how he's able to scale up all these different businesses: one, they take a long time to do, but also he said last year he had over a hundred thousand people apply to work for him. So you just have a huge labor pool. You have all the talent in the world, you have all these resources, and you just build it slowly, over time, over many, many decades.
Yeah.
Citadel Securities, originally I think Ken was thinking about doing an investment bank, and then eventually spun that out and sold that arm of the business, and now it's morphed into a market maker. The business itself has been through pivots, much like the core hedge fund.
I do have a question about the core hedge fund.
There's always the question of, as you scale up a fund in size and outside LPs, you put up some amazing numbers, 20% returns, everybody wants in. Is that sustainable when you take it an order of magnitude bigger? Did you get any insight into how Ken Griffin thinks about scale and outside investors? Because at the end of the day, you know, it's his fund. He wants to grow it, but sometimes having a bigger fund is advantageous.
No, I didn't get that. There were a couple of things where I was getting kind of frustrated with the interviewer, because Ken says he has the most successful hedge fund of all time. Which, I asked a friend of mine who knows about this, like, I thought Renaissance? And it's like, yeah, but most of that was their own money. They closed the Medallion fund off to outside investors a long time ago.
Jane Street. Yeah.
And then the other thing
Ken said, that they didn't follow up on. I heard you earlier when you guys were going over the Wall Street Journal article on OpenAI, where you're like, follow up, goddammit.
I know. Why aren't you following up? It drives me crazy.
It drives me crazy. I'm going to have to do an interview show now just so I can get these questions out. But he mentions that he had made more money than anybody else, but there's also years where he says, I lost over a hundred billion dollars.
I was there. When I came in, during my training seminar, the guy's like, oh yeah, we had a really rough go during the financial crisis, during the housing crisis.
Our fund lost 50% of its value.
And then he was like, I went outside the next year and I was talking to my neighbor, and he was like, oh, how's it going? Any better than last year, when you lost 50%? He's like, actually, we're having a great year. We're up 50%. And the next-door neighbor was like, oh, that's great, you're back to where you were. And the guy was like, no, that's not how it works. You'd have to go up 100%. And he was just kind of exposing the basic math illiteracy in the general population.
I thought it was an interesting anecdote.
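The neighbor's mistake is the classic asymmetry of drawdowns: a fractional loss d needs a gain of d / (1 - d) just to get back to even. A quick sketch with illustrative numbers (the function name is just for this example):

```python
# A loss of fraction d requires a gain of d / (1 - d) to recover.
def required_recovery_gain(drawdown: float) -> float:
    """Gain needed to break even after a fractional drawdown (0 < drawdown < 1)."""
    return drawdown / (1.0 - drawdown)

# A 50% loss needs a 100% gain to break even, not 50%.
print(required_recovery_gain(0.50))  # 1.0, i.e. +100%

# The neighbor's math: down 50%, then up 50%, leaves 75 cents on the dollar.
print(1.0 * (1 - 0.50) * (1 + 0.50))  # 0.75
```

Note how the required gain blows up as the drawdown deepens: down 90%, like Long-Term Capital Management in the story below, means you need a 900% gain just to get back to even.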
So there is an interesting principle to take away from that, because likely most of us three and the people listening are not gonna have a hundred-billion-dollar loss in a year.
Unless Masa is listening.
I'm just saying.
Or ever have the opportunity to lose a hundred billion.
Exactly.
This goes back to, again, just being really gifted at a young age, because he says that
one thing that helped him very well is going to the proverbial scene of accidents.
So when other funds or other companies blew up.
That's the Enron story.
Well, yeah, but Enron happened in 2001.
There was a story in 98.
So keep in mind, in '98, Ken is 30, okay? In 2008, which you're describing, when he loses half of his equity, he almost goes out of business, which I talked about in the episode and he talks about in the Yale talk as well. He was 40.
But he says what saved him from going out of business in 2008 is that in '98, when Long-Term Capital Management blew up, he went and he's like, you guys lost 90% of your equity; once they crossed the 90% threshold, they lost control of the business. How the hell did you not lose control before then? He's like, it shouldn't have happened.
And he said, and then this is another thing
where the interviewer didn't follow up.
He's like, so what I learned from that was pivotal for me not losing control of Citadel and not going out of business.
And it's like, okay, well, the follow-up question is like,
what did you learn?
But he never does, he just talks about going to the scenes of the crime.
He didn't actually specifically say, hey, this is what I learned, Y or Z.
Do you think that Ken has another 40-ish years in him, in the same way that the Warrens and the Charlies do?
Yeah.
Or is he building the biggest house, the most expensive house ever, and he's just gonna post up there and ride it out?
One of the things, again,
I pick the people I cover very carefully
because I wanna be inspired by them,
and one of the things that I really loved
at the beginning of the research was,
I would watch some interviews for hours
and literally only pull out one line. And it was that he says he's been obsessed with the stock market since the third grade, for reasons I don't entirely comprehend. That's the line. He doesn't even understand it. There's a great line
that Jeff Bezos says that I believe in where it's like, we don't choose our passions, our
passions choose us. And Ken clearly is completely obsessed with this. He was even saying, I don't know if I was interested in entrepreneurship, I was just interested in solving problems. I am addicted to solving problems. And guess what? Entrepreneurship and investing, he says the public markets, that's the biggest game in the world, with the biggest, hardest problems to solve. So that's what I'm addicted to. Usually people like that, I don't think they ever stop.
Yeah.
Did you get a chance to read Good to Great, Jim Collins?
I read it a long time ago, yeah.
It's the book that's on your desk when you join the firm. He requires everyone to read it. I think he's grown the library now, but it's an interesting framework.
I thought you would have a good take on this takeaway from the book, which I think Ken probably embraced. He says, when you combine a culture of discipline with an ethic of entrepreneurship, you get the magical alchemy of great results. And there's the focus on technological accelerators: good-to-great companies think differently about the role of technology. I wanted to get your feedback on that.
So I'm going to answer that question in one second. There are three other books that he recommends: Good to Great; Hardball, which is on my desk, which I read; and then another one, I can't remember it, I went to buy it, I think it was published in like the 1960s. The problem with these books is, you go and read them, like even Hardball, there's nothing wrong with Hardball, but I felt I had better examples of the principles based on the reading I've done, because a lot of the principles are like, this company is so great, and then you look up and the company's gone. Same thing in Good to Great. So I think it's less about tying the principles to specific companies and more about asking, hey, does that idea make sense in your business and what you're doing, even if you only use the idea temporarily?
But I think one of the main themes, if you're studying, think about it: I've been studying the history of entrepreneurship, with such a focus on the market economy, maybe the last 200, 250 years, mostly taking place in America at this time. There is one principle that recurs over and over again, and it popped into my mind when I read Andrew Carnegie's autobiography years ago: they all have in common that they invest in technology. The savings compound, it gives you an advantage over slower-moving competitors, and it could be the difference between a profit and a loss. And if you go back and look at Rockefeller, Carnegie, Ken Griffin, Sam Walton, all these
people, their edge in many cases was that they invested in better technology at a faster rate than their competitors. Like, nobody thinks of Walmart as a fucking tech company, right? But there's this great, hard-to-find book that I did an episode on, it's called Sam Walton, Rich Man in America, published a year or two before Sam wrote his own autobiography.
And there's a story in the book
where in like the 1970s,
Sam was in his 60s,
and these guys come to him,
and I think it's 1979,
and they're like,
hey, we need to invest $500 million
in this new computer system for inventory.
Computers in 1979,
and at first Sam said no,
he thought computers were overhead,
and slowly over time his top guys convinced him,
like no, this is an advantage.
And then they spent so much more on computerized logistics
and inventory management and everything else,
they just had an unfair advantage
that compounded decade after decade after decade.
I love it.
Last question.
Last two small questions.
Ken owns 80% of Citadel LLC.
Who owns the other 20%?
I don't know, but I talked to the guy,
I talked to one of the guys that owns part
of Citadel Securities.
I can't say who, but that business is a monster
from what I hear.
Yeah, it's huge.
What is it about great investors
that make them maybe not so great at picking spouses?
Is it just the intensity?
No, it's not funny.
It's sad.
He's been married twice, divorced twice.
I'm sure he's very intense dude.
He's clearly focused on winning you know, winning at all
costs.
He's going long, short matrimony.
That's so brutal.
High frequency marriage.
High frequency trader. High frequency matrimony.
It's not just investors, it's entrepreneurs. I remember, I think the best
episode I've ever done is on, actually on James Cameron, which is hilarious. And I start
that episode where I'm reading from this, like, GQ profile of him. And I don't say
anything, but if you read between the lines, you're like, he moved to New Zealand with his fifth
wife. It's just like, what kind of personality type do you think that person is like easy
to deal with? They're flexible. They get along easily with other people. I think in many cases, and I think I'm definitely at risk for this, I don't know.
Well, I mean, you guys are grinding hard as hard as hell right now.
I am curious, like your personality types, I think you're a little bit more guarded,
like you would avoid this.
You just are so addicted to what you're working on that you destroy everything
around you.
And this is why I say like, A Man for All Markets is my favorite book.
One of my favorite books.
I think the subtitle of that episode is like, my personal blueprint, is because Ed Thorp
is one of the few people, you know, I've studied almost 400 of these great entrepreneurs,
and it's just like, in many cases, they're fucking cautionary tales. They destroy their health, their marriages, they're, you know, in many cases, bad
parents. They burn everything to the ground because they're so obsessed and dedicated
with what they're working on. And I think that personality type is like, you know,
it's a lot of that is internal. And so I always look for positive examples,
him being one, Sol Price is another one,
where it's just like, I want to learn from what they did
where Ed Thorp took care of himself.
You can Google him, and I've shared pictures of him,
he's 93, and I'm like, how old do you think this guy is?
And they're like, he's 60.
He worked out, he started picking up a physical fitness
habit like 70 years ago when no one was working out, because, again, he's brilliant.
He's like, oh, I think of every hour I spend on fitness as one less day I'll spend in the
hospital at the end of my life.
He made more money than he could ever spend.
But then once he passed that threshold, he stopped trading more time for more money.
So all these crazy deals, guaranteed money, would come to him.
He's like, no, I'm not interested.
His kids are in the book.
They talk about how great a father he was,
that he was present.
His wife had just died,
but they were happily married for 50 years.
I mean, the guy just nailed it.
He was like super smart, had fun, great dad, great father,
lived a life.
His book, I say in the Ken Griffin episode,
his book reads like a thriller.
But yeah, I think there's a ton.
I mean, I would say most of the great entrepreneurs,
you're not gonna see a large overlap, unfortunately,
between like, you know.
Well, John and I are gonna go read that book
because my wife who's watching live with the kids
just texted me, yikes.
Yikes.
I love you, Sarah.
And we'll go to the end.
So I just, I just had this. We have another guest. We got to get you out of here.
Yeah. I just had this example last week because in the Ovitz
episode, he talks about, if I could do things over again, I should have worked 10% less
and I wouldn't have changed my professional success at all. Could have been 20% less.
So I was bitching about going on vacation last week
because I don't want to travel
and I want to like just work on a podcast.
And my wife's like,
Ovitz said you should work 10% less.
So I'm like,
you're 20, you're 20.
Well, thanks for stopping by.
This is fantastic.
That is all.
Of course.
I love you guys.
Love you David.
We'll talk to you soon.
Talk soon.
All right, bye.
Great.
We are over time.
I'm excited to get into that episode. Yeah, me too.
That'll be great.
Let's get some air in here.
Yeah.
In the temple of technology.
Welcome to the stream.
Samir, are you here?
Hey, great to see you.
Sorry we're running a little late.
And look at the lighting. Oh my God.
The 4K.
Yeah, beautiful.
It is looking great.
I would expect nothing less.
This is a professional.
You've got the best setup of any guest.
We've had 70 guests in a month and...
Knocking them all out of the park.
Knocking them all out.
Can we start?
That's high praise, thank you.
Can we start with the latest interview?
You did Zuck and Jimmy, MrBeast, the biggest tech god,
the biggest creator god, you're putting them together now.
What'd you learn from that and how'd that come together?
Yeah, well first of all, what's up guys?
Thanks for having me on.
What did I learn from that?
I mean, I think actually, first and foremost,
it was fascinating to talk to someone who,
we consider to be like the auteur
of this generation of human connection, right?
Like if you really take a step back and you go,
the way we're all connecting,
even what we're doing right now,
there was this era that happened that pushed the way
that we as humans engage with each other
and how information travels.
And I think Mark was like obviously heavily involved
in that, and arguably the auteur of that,
this era of human connection.
So I find that to be super fascinating.
We met Mark up at Meta a while ago to do an Orion demo.
You guys know what Orion is?
Like his new glasses.
Yeah, I'm super jealous.
It sounds amazing.
So that was one of the craziest,
that was truly one of the craziest tech experiences
I've ever had in my life.
And we met and it was nice and casual conversation
and started talking about having him on the show,
but with a guest like that, it takes time.
And then adding Jimmy to the mix, it was like,
a lot of the context was around us talking
about Facebook video, us talking about creators on Facebook,
and it felt like we only have a handful of friends
who have found success on Facebook.
Jimmy's one of those people.
So it felt like having him be a part of the conversation.
And when you say Facebook, do you mean actual Facebook?
Actual Facebook.
Like legacy Facebook?
Yeah.
So does he just post the full YouTube video there,
or does he have a different strategy on Facebook?
So most people post clips, like short form clips on Facebook.
Now, for conversational content,
like what we're doing right now,
I find it really hard to do,
because there's so much cultural nuance.
You have to dub, which was one of the main things
Jimmy talked about, was that you have to dub content for other cultures to understand it.
And the majority of the users are not US-based and not English-speaking.
So Jimmy talked about how a lot of his content that is highly viewed is shorter-form content that is
language-agnostic. So there's like a video of him running with bags of money, I remember. You can understand that no matter what.
but I think that the main thing that we talked about that I couldn't stop thinking about was
around the premise that maybe a lot of
the engagement of the internet in the future is gonna happen in the context of messaging and DMs, a lot of the human engagement
in the future, as compared to probably us engaging with a lot of bots
and AI agents and you know.
Yeah, I kind of noticed that on Instagram,
I'm seeing more and more reels where the number of shares
is higher than the number of likes,
which means that people are not going to the comments
to have the discussion.
They're sending, oh, you got to watch this
and that starts a conversation in a DM.
Well, I think as creators,
we think about that quite a bit,
that a short form piece of content
is a unit of conversation.
And thinking about that and through that lens,
you actually create differently, right?
Like your thought is, OK, I'm going
to create something that somebody can share in a DM
with a friend.
And that's like a gift.
You're giving them a gift to reconnect with someone,
social currency to make someone laugh.
It's a very different thought than like a long form,
38 minute piece of YouTube content
that you're gonna watch on a connected TV.
Yeah.
Can we go back to the Orion demo?
Yeah.
It sounded amazing.
Everyone's review was unanimously, it's incredible.
Yeah, put it in the context of the Vision Pro.
Exactly, the Vision Pro had the same thing
where people tried it for an hour
and they were like, this is incredible. And then it had this churn
curve where people were returning them and they were collecting dust, and that seems to be like the classic VR problem.
Are you convinced that this is the one that I'll be wearing for six months straight?
No, but I find it I find it fascinating and we've sat with a lot of you know tech CEOs most most recently
Evan Spiegel and Mark Zuckerberg and
You know both are very focused on AR as the future
of how we'll engage with tech.
Orion was the most compelling
because it was the lightest weight, to be honest.
I think two reasons Orion was compelling.
One, it was the lightest weight.
It feels like just glasses on you, which is very different.
Like the Vision Pro does not feel like glasses.
And when you have glasses, I think you retain the level
of social engagement
that human beings are used to. Or like if you've ever worn the Ray-Ban Metas, I think
the Ray-Ban Metas and Orion, like they're tangibly different in weight and size. But
if you take what the Orion can do and put it into the Ray-Ban Metas, that's the most
compelling product I could think of because it's lightweight. It looks cool. You can still
look someone in the eyes. But the thing that was really unique about the Orion
glasses was that you wear this like neural wristband and before we did the demo, you start to like,
they ask you to do some movements like click your finger like this, each finger is a different
function. And your brain waves, like your neural signals, are fed into this computer, and basically it learns what it looks like when you
go like this or go like this. And the reason why is because eventually, when you're wearing these glasses,
they want you to be able to be holding a cup of coffee and still be able to click on something, because this command is different from
this command is different from this command. And we did that, we held something, and basically you almost just think the motion and it happens.
So I think what was shocking to me was like how intuitive it is.
Love it.
If it can be in the form factor of the Ray-Ban Metas or, you know, something that looks cool.
I think we're too aware of our own image as humans to walk around wearing Apple
vision pros.
I mean, I've got a bunch of questions. What's going on with TikTok?
Do you have an update there?
There's supposed to be some decision made,
or like early April is what we've been hearing.
But it's been radio silent.
I've been looking at Polymarket and trying to figure out
what's going on.
Doesn't seem like Mr. Beast is in the lead
to buy it at the moment.
But do you have any insight there?
I mean, look, I know I've seen probably similar things
that you guys have seen.
I've had some conversations around the industry about it.
My perspective is I don't think,
I think it'll keep getting punted.
That conversation will keep getting punted.
I think TikTok is a dramatic economic driver in the US.
I think it had a lot of sway in the election.
I think it's a way that politics is getting communicated.
I think it's a way news is getting communicated.
I think it's probably the biggest search engine
for Gen Z right now.
Even for me, I'm going, you know, going on a
trip this summer. I'm searching stuff on TikTok.
It's a crazy platform.
We started with a dry account.
We got 300k views on like the first video we
posted. Yeah, it's like just the
opportunity is like staring you in the face,
no matter what you think about the politics.
Yeah. Do I know if it's actually going to sell
to a US company? I have no idea.
Like I saw that Mark Andreessen and Andreessen
Horowitz is putting their name in the hat.
I know what Alexis Ohanian is doing
and Project Liberty is super interesting
in terms of trying to like reshape
what it looks and feels like.
But, you know, there's a, right now it is the most powerful
for you algorithm that we have access to.
A lot of American businesses are powered by TikTok,
meaning young startups, young media entrepreneurs
are powered by TikTok.
There's a lot of brand dollars
that are exchanged through there.
When we sat with Adam Mosseri,
when we sat with Evan Spiegel,
obviously maybe the ideal scenario,
even a conversation with Zuck,
is those ad dollars shift to US companies versus TikTok, but like I
don't know, we haven't really seen, or we don't really have precedent for it in the US, unless you guys know that
we do. But, like, I don't know that we have precedent for
creator businesses.
Follow-up on that. Yeah, we could start there. I wanted to get your take on AI and creators.
Yeah.
Buddy of mine, Trevor, had Brud,
that was like super, super early and ahead of its time.
Trevor McFedries.
Yeah, yeah.
Doing Lil Miquela, he was like almost 10 years ahead.
Yeah, that's crazy.
Executed very well, exited the business.
And then last year was funny.
We covered a few of these.
There was like a period where the new like
e-comm style info product was like,
everybody should have an AI creator
that makes like $5,000 a month for them,
you know, like doing brand deals.
And like, obviously that's not actually like
happening at a wide scale or it's certainly not easy to do.
Are you seeing instances yet of AI,
like fully AI generated creators
that are actually building loyal followings,
meaningful ad businesses or digital products, et cetera?
So I mean, look, like what you're talking about
with Lil Miquela is considered a VTuber, right?
And I think VTubers are picking up.
Like at the end of last year,
the most subscribed-to Twitch streamer was Ironmouse,
which is a VTuber.
So virtual creators, like animated creators,
I think will continue to pick up steam.
Like we've always been interested in animated content.
Like what happened with GPT's, you know, image model,
immediately everyone's like turning themselves into anime.
Like we're into animation.
Fully AI generated is not something I've seen.
I think AI enhanced is like, it's almost comical
to suggest that content is not at, in some level,
AI enhanced, whether it's the audio of it, the video,
but I have not seen an instance of a fully AI generated
creator, yeah. And probably the most
compelling fully AI generated content is the podcasts that
have been created through NotebookLM.
That's the most compelling to me.
Like audio, I think is where it'll happen first before it
happens in video.
I think we're too aware of video for it to happen
through true human.
Yeah.
I mean, Lil Miquela proves otherwise,
but I think audio, it could happen.
Because what Notebook LM showed me was,
hey, if I have a 20-minute drive,
couldn't I just customize, hey, I wanna know about
what's happening in the world of politics,
what's happening in the world of the creator economy, and the NBA scores from last night? Yeah, and just tell NotebookLM,
I have a 20-minute drive, just build me a podcast that's bespoke to me, for this exact moment, for this exact use case.
So that, I think, is where we'll see it first, is in audio. Yeah. Last question.
Do you think that
some creators have moved on
from this idea that the goal is scale, right?
Like it used to be, you know, you sort of like start out
and you're like, I want to get millions of views.
Like I want my videos to go viral.
I think now people are realizing that virality
is sometimes like-
Curse.
Curse or doesn't even necessarily help
with like building a sort of sustainable business.
Cause there's some creators that can get 10,000 views
of video and they can have a fantastic lifestyle
and sort of pursue their creative passions.
And that's like the right size for whatever niche
that they're in.
But you know, we see this like,
the video John's referencing,
like we got a video that gets 300,000 views on TikTok
and it like doesn't do anything.
We get a text message about it.
Yeah, like nobody saw it, like it's fine.
But I'm curious if you see sort of a broader
sort of systemic shift where people are like,
really orienting around like quality of audience
and sort of the business side as well.
I think you guys are an example of that.
I think you guys have built a really cool brand.
And I think brand is really hard to build.
I think what people have learned is that views
are available across platforms.
If you solve the math equation, you can get viewership.
That's not very exciting.
Brand is exciting.
I think brands are timeless.
Brands give a feeling to people.
And I think we're just maturing to that point
where like brand matters.
Views are somewhat interesting,
but we're also in a world where like,
you could spend an hour on TikTok
and not remember what you watched.
So Colin and I always think about this concept
of memorable views versus forgettable views.
And you wanna be producing memorable views.
And oftentimes that means you're actually
targeting a smaller group.
I think our world has gotten increasingly fragmented,
right, like everybody is famous and nobody is famous
because there's this stat that 47% of Gen Z
are a fan of something on the internet
that no one they know personally is a fan of.
Whoa.
And that is one of the most, that's from YouTube,
is one of the most interesting stats
I've seen about the internet.
That will only increase over time.
And so if we play that out and explore
like how fragmented attention is getting,
how bespoke our online experiences are,
we're moving away from monoculture.
We're moving into a world of like smaller tribes, right?
Smaller ideologies across the internet.
And I think the opportunity is to build
a really meaningful brand for one of those
small groups of people.
I think the challenge to be like the next Mr. Beast
to be like a massive monocultural brand,
like I find that to be not even exciting to me.
Like I like meeting our audience and meeting them out
in public and being like, oh yeah, these are my people.
This is people like me.
That's exciting to me, and I think as the internet gets
more and more fragmented, that will continue to happen.
Yeah.
Well, that's fantastic.
I wish we had more time.
We gotta have you back.
This is fantastic.
Yeah, yeah, next time we'll block the full 30 minutes.
I mean, really, we're sending our best
for the entire Palisades rebuilding effort.
Thanks, man.
I mean, I know most of the fans will probably know
that you were affected, and we really hope
that everything's going well there,
and if there's anything that folks can do to help,
yeah, we'd love to be supportive of that.
Appreciate it.
I love what you guys are doing.
Thanks so much.
I think it's super cool.
It's unique, so yeah, keep it going.
Yeah.
We'll talk to you soon.
Thanks for coming on.
Thanks so much.
Bye.
That's great.
Yeah.
I mean, we can talk to him for like two hours.
It's like, you know, it's like our business.
There's so much there.
We didn't even get to, like, I wanted to hear the Doug DeMuro strategy versus what MrBeast
is doing versus what other folks are doing on the monetization side.
Slow Ventures and stuff.
I know he'll have some great takes there, but of course, we will have him back in the
temple of technology any day now.
Let's hopefully bring in our next guest, Will from OpenAI.
Let's see.
Will, how are you doing?
How are you guys doing?
Yeah.
We're doing great.
What's going on?
Congratulations on everything.
I mean, everyone seems to be on a tear.
For those that don't know, Will created artificial intelligence.
Yes, that's right.
Hand built.
Yeah, he's on that.
He's an author of that paper, Artificial Intelligence.
Yeah.
How it's done.
He also invented the transformer.
Yeah, yeah.
We appreciate you being here. Do you want to give a little bit of overview of like what
you actually do day to day? I think that'd be interesting.
Yeah. Yeah, totally. I mean, I joined OpenAI like two years ago, have like a startup background,
but worked on like video gen for a while, worked on Sora for about a year. And then
I work on like RL post-training, ChatGPT stuff.
So cool. How did you process the Studio Ghibli moment?
Was that expected?
And I really want to know, internally,
did you think that images in Chatchity was going to go viral
and you didn't know what the prompt would be?
Or were you like, everyone's going to give you stuff?
Yeah, I want to know.
Were Ghiblis flying around for 48 hours ahead in Slack?
Give us some secrets here.
Yeah, I mean, honestly, I was very bullish on this. I think we definitely under-provisioned GPUs. I was surprised, our team predicted like less traffic than we should have expected.
Yeah, I think the thing that I was thinking about the whole time is that, like, I mean, I was walking around SF and some of these like nail salons have like clearly DALL-E 3 generated
image fronts or whatever that is, right?
And it became clear to me that, like,
pretty much every business in the world very soon
is just gonna use this to like, you know, pick logos
and you know, we can do transparency,
we can do controllability.
I think like my favorite eval is the Fiverr eval
or just go on Fiverr and count how many of these things
have been automated.
And like this takes off like a lot of those things.
And hopefully there's a lot of new things as well
that this creates, but I just think of like total utility
to the world is just massive and yeah.
Have you thought about why it was Studio Ghibli
in like precisely because you could do Simpsons,
you could do South Park, but there was something
about the anime style of Studio Ghibli that was like,
it kept enough of the human in there.
It didn't just become a completely generic stick figure.
But it also wasn't a stick figure.
It was a magical experience.
You needed to do nothing to prompt it.
I love that you could misspell Ghibli.
It's still got a perfect output.
It was great.
And then the output was just magical.
I would generate the same image with four different styles
of art, and the
Ghibli one, every single time, just felt right. What was your take on it?
No, yeah, that sounds about right to me. I don't think I predicted Ghibli
specifically. Last time, for the DALL-E 3 launch, people made a bunch of Pepes,
and that was the thing that really went off. But yeah, I remember, I mean, for a
while, the version one of image generation, it felt very ArtStation-informed,
it felt very like sci-fi dystopia,
like it was really good at those,
but people weren't really doing,
I mean there were certain like, you know, anime styles,
but this one was just everywhere.
What else?
Can you talk about, like to me,
to me the most, like, the thing that's most exciting for me is the Ghibli-ification of everything,
right?
It's like on a long enough time horizon, everything is Ghibli, right?
It's just sort of abundant and very free.
And if you thought even a year ago that you would have this sort of like, anybody could
instantly basically for free get this sort of beautiful hand-drawn anime style artwork
that historically and you know somebody else posted about this but you know
there'd be like a four-second scene in Spirited Away that took a
year and a half to animate, right? And so like to take something that's just so
time-intensive so expensive and then make it free and abundant
is so powerful.
And to me that is, that's why when I think about everything
that's happening in sort of politics and tech drama
and everything like that, and then I think about like,
we're moving towards like everything being ghibli.
It's like everything today feels like a distraction.
Is that sort of like how you guys like as a team stay focused
with an OpenAI, which is just like,
hey, there's gonna be a lot of noise.
There's gonna be like, you know, this benchmark today,
that release over here.
And it's like, we're sort of moving towards
this like future target.
Now that's sort of a ramble,
but I'm curious how you think about sort of focus.
Yeah, I think OpenAI is extraordinarily focused on real progress.
I mean, for so long, you see the diffusion models, they can't spell, and it takes us
like two years.
There's no DALL-E update, and then finally we come out with this, and it's not for no reason.
I think people who are focused on really what is pushing the frontier forward and what's
actually the evolution of this technology.
Yeah, I don't know. I think there's a lot of potential here.
I think, I'm actually not sure what the ImageGen team
thinks about what the next steps are here.
I think their view is mostly about making this truly useful.
Can we go from diffusion's a fun toy
to this is a truly useful tool in the world?
And clearly it's already having that impact.
Yeah.
Yeah, I mean, you said your backgrounds in startups,
can you talk a little bit about just being
a consumer tech company now and like the focus on product and some of the product decisions
like images and chat GPT, it's not a separate app like Sora.
That seems intentional.
How are you thinking about product development?
What are the strategies?
How much feedback are you doing?
Are you just testing stuff internally?
Are there A B tests running?
Is this secretly running in India six months before we get it here? How do you
inform product decisions?
Totally. I mean, I think it's hard to pre-launch models because they create so much noise,
so we don't really test them. But I guess, like, I think OpenAI has moved from being
a research company into a deployment company over time. And we're still in the middle of
that kind of, you know, we're doing both. And I think we're just doing a lot more deployment than we've ever had in the past. I think people are
very motivated around like how do you actually make this thing useful? How do you deploy into the world?
I think it's really cool to be in a position where like we can launch new products that are
completely unlike anything people use ChatGPT today for, and people are so receptive to them.
like it's so like you know I don't know I think it's hard to get product market fit and get people
to kind of try new products but to have this like kind of all in one AI thing,
we can try deep research, we can throw out ImageGen, and people are just excited
to try it out and like see if it works for them. It's just like so rare to find.
Yeah. Are you excited?
Like one of the things that I felt like really worked with images in ChatGPT was,
like, I could go to my camera roll, click share, share it into ChatGPT, type
Studio Ghibli style, and boom, it gets it. And it takes the pain of inspiration out of the process.
And I'm wondering if you're looking at ways to create those magical, easy to generate
moments.
I remember Midjourney was talking about this too, where like part of why Midjourney worked
was if you just give someone a text box, they'll just be like, cat, and it'll be like, okay,
that's a picture of a cat.
But if you show them the Midjourney Discord and it's all these different ideas, then they can spin off that.
Yeah, I mean I think generally like it's just very difficult of a problem
Like I think it's one of the biggest things we're thinking about right now,
which is like, when you open ChatGPT, if you've never used it, you get dropped in a text box,
and like the average person has no idea how to use it.
I think, I mean, we have a lot of views on how to do this.
I think personally, I think I'm most excited about
is that, like, I think, if we're talking about, like,
what is the DALL-E to ImageGen,
what is the, you know, kind of whatever,
each evolution to like the next kind of order of magnitude.
I do think that the long-term thing
that's extremely exciting
is just like maximizing personalization,
super long context, the model knows who you are,
the model can follow up being,
okay, by the way, I've seen you doing this,
do you wanna try an anime-style image?
Or, you know, like maybe create things for you
with proactive, like I think there's a lot to do
on like kind of, yeah, moving the agency away from you
into the product as well, yeah. You call yourself a master of slop, which is a great line.
But does slop even exist in five years when people were saying, oh, the timelines all
slop on the Ghibli moment.
I was like, this is not slop.
Everybody says this is fantastic.
In fact, I would argue that the raging politics was slop.
Yeah, yeah, yeah. We were free of slop. It was the first day in a long time that it was free of slop.
No, but it seems like the trajectory that we're on,
it's like eventually, you have to try to intentionally prompt
it and say, make me an image that's like the ImageGen
outputs from 2022.
And it's like, yeah, they've got six fingers
and the hand is connected to the body.
I want a vintage DALL-E 1, the DALL-E 1.
But like what happens?
Like I feel like we're in the slop.
The slop era feels like it's coming to an end.
It's like, we're not going to be, you're not going to be an image model that's
constantly producing slop that is getting funding going forward.
Yeah.
That's a good point.
You have like a new bar has been set.
It depends what you mean by slop.
I feel like I went to the SF MoMA recently and like half of it felt like slop, and I think this,
like, I mean, half of it did not, to be clear.
But a friend of mine always quotes that, like, you know,
art is like the search for the periphery, and like you're kind of constantly looking, you know,
kind of pushing things forward or whatever that is. And, you know, I don't know, when things get played out,
they get boring, like they get cringe, they get slop, right?
Like I don't know. Maybe the problem is that
humans are slop, not that the models are slop, right?
Yeah, that's a good point.
Are you worried for college students today that have access to these tools, and making
sure they actually learn how to do things like writing, which, as we know, is associated
with thinking? You know, if you can't write well... and, you know,
the models are getting better,
so it's less obvious when I'm seeing something that's model output.
When I was in college, if I had this tool,
I would have aced my Studio Ghibli animation course.
Yeah, yeah, and AP bio.
And AP bio.
Yeah.
Which I did not do well in.
No, I'm curious, like, you know,
how do you think universities, like,
and just curriculum should adapt?
Yeah, future of learning, yeah.
I'm extremely worried.
I think the world just gets a lot more extreme.
And I think this is just true with AI across the board.
Where I think the bottom percentile students
are just gonna get a lot, they're gonna fall behind.
And I think the top percentile students
are just gonna get extremely overpowered.
I think if I had this tool as a kid,
I would just be vastly smarter than I am now.
Even now, today, I just talk to ChatGPT every day.
I'll be asking it about random concepts, learning random things.
I went super deep down a bio rabbit hole yesterday
and rare supplements I could take from the internet.
I don't know, this is like,
I think it's an incredible tool for learning,
but yeah, I think it's kind of bimodal.
I think a lot of people are gonna get a lot dumber.
I think a lot of people are gonna get smarter.
I don't know what to do about that, but yeah.
Fine-tuned on Gwern.
How's your experiment where you just text a number
to post going?
Oh no, it's gonna make me way more radical.
Because then I just send something, I forget about it,
and then I check again.
Are you gonna-
Would you recommend it to someone fighting brain rot?
Yeah, totally.
I think also, I think the problem with tweeting nowadays is it's not as fun once you have more than,
like, 3,000 followers. There's this risk, and then you're like, oh, now I care if people are mad at me or something.
And, yeah, you're even just seeing the numbers and being like, oh, like, this one didn't do as well as the others.
Like, you know, I know from the first 60 impressions if it's a banger.
Yeah, and then you have this desire.
I had a post last night that, like, John and I both thought was, like, hilarious,
but, like, I realized that, like, nobody really got it.
You needed to have, like, read Ben Thompson's, like, Friday article.
Low TAM.
But anyways, you got anything else, John? No, no, no.
It was fun. Yeah, it was fun.
For the record, we didn't plan this around
the fundraising news.
We just wanted to chat.
So congrats to you and the whole team
on a cool milestone, and job's not finished.
Job's not finished.
Get back to work.
Please make the model better.
Make it better.
This was just you talking to customers.
Yeah, yeah, this is customer research.
We enjoyed it.
Anyway.
Okay, sounds good.
See you guys.
Cheers.
See ya.
Talk to you soon.
Yeah, little mini OpenAI day.
We got a couple folks coming in.
We got Aidan coming in, talking OpenAI today.
We'll see if he can join right now.
And I'm working on a surprise guest,
but I don't know if it's gonna come together.
He hasn't read my messages yet, but I'm hoping it happens.
We'll see.
Fingers crossed. Double, double message.
We got Aidan coming in.
We got a guest coming in.
How are you guys? Good to see you both.
We're great. How are you?
Good, good. Good to be on. I love this show.
Are you celebrating today?
And if so, what?
Alright, over the fundraise?
A fundraise or just the Studio Ghibli moment.
I mean, it was a massive moment for crossing the chasm again.
That's so last week, John.
Yeah, you're over it.
Yeah, it's like a million years ago here.
Time moves differently inside the lab, right?
Yeah, we're excited.
The team morale is really, really good.
That's awesome.
And it's awesome to see my colleagues have worked really
hard to make these tools more liberal, to make them more fun,
kind of look out into the world and see
the fruits of their labor, right?
Yeah, that's awesome.
How are you guys?
We're great.
So I guess talk about evaluating...
like, how do you guys evaluate internally
a successful launch?
Because the reason that I knew it was successful
is I had a few posts that were sort of breaking
tpot containment.
And I would have a lot of people quoting it saying like,
all right guys, jig is up.
Tell me what app you're using for this.
Like, there's still people out there
that just don't know.
They don't know the name OpenAI.
Yeah.
But they know that this filter exists and they want it.
And that's a very, very good sign.
I do wonder if there was like this interim moment
where people are like, wow,
Snapchat's gotten really good.
Yeah, totally.
Like what's going on here?
Like, you know, I haven't used this app for a while.
Yeah, Spiegel's been cooking.
Yeah, this must be TikTok.
Yeah, I mean, I got a couple cold DMs from people saying, like, hey, I ran out of images. Can you make one for me? Clearly you're making a lot of these.
Yeah, exactly, exactly.
But I mean, how did you process the moment? Did you predict this, and what was the shape of your expectation versus what wound up happening?
Yeah, so I did not predict this, to be candid.
You know, like, we had it internally,
and I played around with it, and, like, that was cool.
It didn't quite, like, you know, hit in the way that I felt it immediately after launch, right?
Yeah, I do think our leadership predicted this pretty well.
I think, like, Sam and others had, like, you know, well-calibrated intuitions for this.
And I think at this point, like, you know,
they've done a lot of viral stuff, right?
Like, you know, I think that, you know, at some point,
you start to update all the way down.
But yeah, like I think, go ahead.
Yeah, on the tech side, it does seem like
there's a new algorithm involved,
and we don't need to dive into that,
but I wanna know, like, there is a different path
where we're seeing more incremental updates, I imagine,
and we're seeing, like, the text getting slightly better every week. But there's something
almost more viral about being discontinuous,
and all of a sudden just hitting the internet over the head with, like, text is good now.
Right.
And, and is that just a function of your development
in the product cycle,
or is there some deliberate strategy there?
Yeah, I think, I can't speak for too much
in terms of deliberate strategy,
but it is interesting where, to your point,
for these text models, we update them pretty often.
Yeah, of course.
Every two months or so, they're getting better,
people are always seeing these steady improvements.
But the funny thing was, when DeepSeek came out, it was this really interesting moment
where a lot of people hadn't tried our most recent models.
A lot of people hadn't tried our most powerful models, and they saw that discontinuity.
They're like, I've been using 4o from earlier this year.
I used this new reasoning model.
Like, holy shit, this is amazing.
And I do think that, like,
even things that are continuous to us, right, like, you know, as people that use these tools often, are still sometimes weirdly discontinuous
to, like, outsiders, right, or to people that aren't as close to the product.
But, like, you know, image gen is, like, a great example of that, right? Like, we went from
DALL-E to this, and it was, like, this massive jump. And I think that, like, that jump does add to the virality, as you said. Totally, that makes sense.
How do you guys... you know, I feel like even people
that use and love OpenAI's products daily
sort of continue to give the feedback on naming:
it's confusing, blah, blah, blah.
Has that not been a focus because on a long enough time
horizon,
you just sort of ask it to do things
and it just sort of selects the correct underlying model
and it's just routed perfectly.
And so it's just like not,
cause like if you were like listening to customers,
you'd be like providing more,
basically like more information for the average user
in the product to be like,
well, you should really use this for that
or maybe try it again with this other thing.
So is it just like, we're accelerating so quickly
that like none of this is even gonna matter,
you're not even gonna know the names?
Yeah, at this point, like, OpenAI
is so bad at naming that, I...
we've, like, dug our own grave here, right?
Like, internally, things are even worse, right? Where it's like, what the hell does this mean? Like, Sam tweeted something this morning where it's like, you know, run-redo, like, this-time-for-real.
Yeah, restart, o3-one-final-final-2-restart-for-real.
Yeah, I mean, I was laughing at it, like, is it 4D chess?
But at the same time, like, you could imagine that you gave every model, like, ice cream flavors, and, like, you give them cute names, and then it makes
even less sense. Like, at least I know o3 is a higher number than o1, so, like, it's probably better.
Well, like, sometimes we don't number things in order.
I think it could be worse, but yeah.
I do think that, you know, Sam, like, tweeted this a month ago, but we do plan to start unifying,
yeah, you know, our model selection, right? Yeah.
So I think, like, you know, soon we will have, like, a much simpler, easier-to-grasp thing. And even for me, like, you know,
I'm the one, like, testing these models all day, designing them and such,
it's still, like, unclear.
Day-to-day, like, what do you actually do?
Yeah, I work on model design here.
So I work on behavior, the way the models act,
the way that people experience them in use.
But I also just work on general capabilities,
so making the model smarter in many domains,
not just kind of narrow areas.
Talk about making the model...
And I don't know if you touched this,
but you posted, I think it was a couple of days ago,
you said, nowadays I don't spend money
without first deep-researching.
Everything I buy, hand soap, toothpaste,
lunch, house plants, is just better now.
Sure, waiting five minutes for paper towel
recommendations is weird, but in retrospect,
and you specifically called out Amazon
and Amazon Choice here,
because I've wanted for a very long time
to just have an Amazon where every,
only products that appear are companies
that have existed for like 100 years or more
because, like, I don't want, like, the drop shipper
who's like, yeah, I found the way to make this,
like, 10% cheaper, and we can spend, you know,
that extra margin back on ads, and suddenly it's a business.
And so I can totally imagine a world in the future
where I just go to, you know, ChatGPT and
say that I want to buy paper towels,
and then you guys are routing that, basically, based on what you know about me as a consumer,
and I will pay 10% more to get 50% better.
Operator, buy me paper towels from a company that's been in business for 200 years.
A real American paper towel.
Ben Franklin's paper towel.
But there's been people that are... the critique has been like, okay,
it's not being used for commercial search,
it's just being used to generate answers to questions.
But then, yeah, obviously, and I'm saying this,
you're not saying this, but it's obviously a threat to Google
and to Amazon and other marketplaces when I'm going there
and I'm saying, this is what I want to buy,
like help me buy the best version.
And it can be more aligned with my interests
than an ads-based business model.
I'm, like, you know, kind of an extreme libertarian
when it comes to, like, sales and such, right? I do think there's gonna be this, like, really, really cool world that we move into in the next, like, year or two here,
where our tools for selecting, like, products become, like, super, super smart, right?
Like, way smarter than me. Like, it's gonna be so much better than I or any other human
I've ever met will be at, like, knowing exactly what I want, exactly, like, what other people love.
It's just gonna create, like, increased competition, right?
It's gonna increase, like, selectivity, right?
Like, you know, truly great companies
who build, like, actual products that people love
are gonna, like, float to the surface way easier.
And then, like, kind of all the, like,
the drop shipping, like, random guy doing a,
you know, thing out of his basement
who got the Amazon's Choice label,
like, that's just all gonna, like, fall away,
like dead leaves off of a tree.
I'm excited for that.
So it's just gonna make the market better, right?
Can you talk about other maybe underrated uses of OpenAI
models?
Obviously, everyone knows you can Ghibli-fy your dog.
Everyone knows that you can ask it to explain AP bio questions.
But that use case is pretty interesting.
Where else are you seeing usefulness in the models?
Yeah, one thing I'm on a kick with recently,
that I'm really, really trying to get better at, is just remembering to use these tools for learning.
I think that, you know, there are a lot of things where I'm like, okay, I do this
every day.
I want to do it better.
Like, here's a way for me to use deep research, et cetera.
But I think the opposite is kind of interesting too.
It's, like, when I don't use these tools, or where, like, I did not have something previously
in my life that would have demanded use.
And I think, like, an example of this is, like,
now that I have something that, like,
can give me a McKinsey-style report in, like, minutes, right?
Like, I should be using this all the time,
like, every single day, right?
So before I go to bed,
I, like, generate, like, a deep research.
I like to remind myself,
like, you gotta read the deep research tonight.
It's just like brushing your teeth or something, right?
Yep, totally.
The remembering to use these systems
is like such an important thing to me.
Yeah, if you're very intentional about using them,
they can make you smarter.
Yes.
If you're unintentional about them
and you're just like, last second,
I got to send this email.
Just do it for me.
Maybe that's making you less.
I mean, even for silly stuff, I was like,
yes, I would like a 50-page report from a McKinsey analyst
on the Metal Gear Solid fandom, or, like, the history of Sword Art Online.
Stuff that I would normally, like, have to puzzle together from, like, wikis and YouTube videos.
I'm like, I just want it all in one place to understand this stuff.
And it's, like, these stupid things that you would never think to, you know, actually get a full report on. My gosh, available at
your fingertips. It's amazing.
As a kid,
I loved all those, like, you know, kind of fan fiction wikis or whatever, and I'd dive through, and they're, like,
terrible, right?
Like, it's ads everywhere.
Yep, totally.
Like, I'm so excited to just remember
that I can use these things now, right?
Exactly.
Oh yeah, it's amazing.
Talk about how we figure out...
I feel like, the way people disrespect robots.
You saw this with Bird scooters.
You remember there was the whole Bird era
where, like, you know, in LA at least,
like, there were just entire fan pages dedicated
to, like, throwing Birds off of, you know, buildings.
You posted last week, you said,
I suspect in six months it'll be safer to Uber than Waymo.
And going into this, you're basically saying, like,
people are just, like, messing with the cars, like, on purpose.
Like, when are humans gonna learn to respect robots?
And, like, do you have a sort of solution
for this or anything?
I say thank you at the end of my prompts.
I love my waymo, yeah.
Always.
I treat my Waymo like a dog, right?
I'm like, oh my god, good boy, thank you for the ride.
Good boy. Yeah, yeah, exactly.
I think it's... I'm, like, very outspoken about this, and this is just my take. This is not, like, the view of my company. But, like, I do think that we should think a lot about, like,
model welfare and, like, you know, the well-being and, like, health of these models
as they get much smarter than they are today, right? Today, this is, like, not at all a concern.
But we could imagine, though, that as these systems get quite complex, you know, we care about their internal state. We want them to be well. And it might be that this
is just more economically valuable too, that models that feel like they're in a good spot
emotionally or whatever actually just help humanity out more. But I think that this is
something that we should get ahead on rather than let it hit us. And I hope that as these
systems get incredibly intelligent
and as humanity co-evolves with them,
we can get a partnership that's mutually beneficial here.
And to be candid, I am a bit disturbed.
People throwing fruit at my Waymo.
Maybe even more disturbed by watching videos
of people kicking robot dogs on Twitter and stuff.
It's a small thing.
And again, these systems, they don't feel pain or whatever today, like obviously.
But I do think that like sentiment though, like,
you know, played out over the next like five, 10 years here
as these systems again, become incredibly,
incredibly intelligent, way more intelligent
than we kind of expect today.
You know, it does worry me a bit.
And I think this is something we should just think about now.
Hmm.
What do you... you posted sort of, kind of, giving high-level props to Manus, with people
saying, oh, it's a wrapper, but then you're saying, like, no, it doesn't really matter,
like, the capabilities matter a lot more.
Do you think that developers are not taking full advantage of open AI products in some ways because, you
know, I think there's a concern of like, you know, a talented team might not build an agent
right now that was like, they felt like was going to compete with operator because they
feel like, oh, we're going to get steamrolled. But like, do you feel like there's like just
enough verticals and sort of places that you can build that like people aren't taking enough
advantage of the underlying tech?
I certainly do.
Yeah, like, I think, you know, Manus is a great example, right?
Like, you know, we have deep research as a product, we have Operator, and, you
know, they put it out and people still loved it.
Like, it did still feel like some area that was kind of, you know, left
uncovered.
I also, you know, before I joined OpenAI, I was, like, a founder, and I was working a lot
with these APIs and such. And there really was this, like, sentiment,
I think, in the community, where people were, like,
so scared of the wrapper-company label
that they really did leave, like, you know,
kind of low-hanging fruit unpicked, right?
They were, like, so scared, not even of, like, OpenAI or Anthropic steamrolling them,
but more of just, like, people being like, ah,
it's just wrapping a language model,
like, it's adding no value.
Like, no, the right wrapper is a ton of value.
You know what I mean?
I completely agree.
Yeah.
I thought the whole wrapper meme was, like, a VC psyop, basically,
because there are plenty of, like, fantastic businesses
that maybe wouldn't hit hyperscale.
And so the VCs were like, don't even bother.
But they could build, like, really cool businesses,
especially for, like, these young kids who are, like, in high school
and, like, could have a business that's making $100,000. That's life-changing, and it sets you up for so many different opportunities, so much experience in product development.
I thought the wrapper thing was a little annoying.
Talk about humor. John has this thing where,
every week or so, he'll say, like, you know, write me a joke, and kind of prompt it in a way around, kind of, the stand-up
comedians, and it's not consistently hitting yet.
It's sort of like, it's structured like a joke.
So you keep waiting for the, you know.
It's unintentionally funny because if you read it
like you're a standup comic,
it will sound like you're bombing intentionally
and that's funny, but I'm still the one
that's making it funny.
Like the punchline is crickets, right?
Like, yeah. Oh yeah, yeah.
But when I think about the next Ghibli moment,
it would be being able to consistently generate
something that makes the user laugh.
And think about the magic of that experience.
I can't wait.
I mean, it's actually a bear case for humanity:
we're all just, like, you know, generating, like, perfect jokes
over and over and over, just laughing.
That's when we all wirehead for sure.
We all go crazy.
Yeah, it turns out like the ultimate addiction
is like, you know, humor, right?
Yeah, yeah, we're just laughing ourselves to death,
entertaining ourselves to death.
I will say, you know, GPT-4.5
was, like, a really interesting moment for humor for me, right?
Where, like, you know, I was testing this model
internally a lot.
And, like, one of the ways I kind of realized, like,
wow, this actually is an interesting step change above
our existing models:
it's actually funny sometimes.
The greentexts are great.
Its long-form jokes maybe left a little bit to be desired, to your point.
Sometimes it would construct something that looks a lot like a joke, and we'd get to
the punch line, and I would just cock my head, like, what?
I have no idea what you mean there.
But to be fair, though, if you asked me to, like, write a good long-form joke, and you only gave me, like, 500 words to do it, and,
like, 500, you know, words' worth of thinking, I could not do it, right? Like, you know, I
actually, I would do much worse, I think, than these models. In fact, I actually don't know if I've
written, like, any great long-form jokes, you know what I mean? Like, this is, like, one of those things where,
like, we hold these models to a crazy standard, but I am so excited for them to get great at this. So I do think that, like, to your point, like,
this is, you know, this could be, like, the next ImageGen moment, where people, like,
do this, it just brings joy into the world, they're having a lot more fun, like,
yeah, yeah. I want that, right? It might solve your issue of, like, the
beating up on the Waymo. Like, if there's a friendly robot that's telling you a joke,
like, it's a lot easier to give him a pat on the head and be like, good robot, you
know?
You have any... are there versions of various models in the past that you felt an emotional
sort of bond to? That, you know,
like, they got away. Like, you know,
the one that got away. Or it's like an old friend that you just, like, you know,
you know, went different directions in life with,
and, you know, you don't talk to them anymore,
but you still, you know, every now and then you're like,
oh, like, you know, you smile to yourself
thinking about the good old days.
3.5 DaVinci, they just don't make them like they used to.
That one actually had, like, the bottomless pit jokes,
you know what I mean?
Like that was pretty good.
Yeah, yeah, yeah, yeah, that was always fun.
Yeah, but not even in just humor, but-
Generally, yeah.
Yeah, you're so close, you know, working.
And I could just imagine like, you know,
a version being so good for one thing
or something that you cared about.
And then-
I mean, Ben Thompson still talks wistfully about Sydney.
It's just like, oh, that was the most amazing experience.
Sydney was pretty great.
Cause Sydney was, like, this sassy online character,
clearly from, like, Tumblr,
some, like, you know, convexity that it fell into. I don't know. But who's your Sydney?
My Sydney, easy answer for me, it was Claude 3 Opus. That was, like, such a great
model, and not even an OpenAI model, too. So, yeah, yeah, yeah.
There was a ton of life to it. I think, like, I coined this phrase, big-model
smell, when it came out. I was like, this model, if you gave me
a blind test between this model and other models that
score as well at academic benchmarks,
I'm pretty certain that, much better than chance,
I could pick out Claude 3 Opus.
I was just sold on it.
And it's a bit hard to see these things.
When you do regular day-to-day prompts,
I'm just asking it for a recipe or something.
It's not going to be like, oh my god, it's a big model. But I do think, though, it's easier
to tell the difference in, like, model capabilities and kind of, like, character
when you push them to extremes, right? When you deploy them into, like, you know,
agents that are doing, like, crazy things, when you give them tons of context,
when you have them, like, tackle really out-of-distribution hard problems, then
you start to see, like, the creativity emerge, right? And this is, like, a really
fun thing, and I think that it's a good lesson for playing with models.
It's a good lesson also for designing benchmarks
and evaluating the models too.
But yeah, 3Opus is pretty great.
And I think my daily driver right now,
outside OpenAI is GPT 4.5.
It's just such a fun model.
I do feel closer to it than I do for previous models.
It feels like it is more live, if that makes sense.
And are you using GPT-4.5 in a different way,
because it doesn't have some of the functionality
that you get from an o3-high deep research product?
What does an interaction with 4.5 look like for you?
Is that just random questions throughout the day?
Yeah, yeah, just like chatting, casual stuff, right?
These models are like great colleagues,
great therapists, right?
It's good to talk things out with them.
Like this is one of those things,
just like you can generate a deep research report
before bed every night
and just become a way smarter person, right?
You can also just chat with these models about your life
and I think become a really well-adjusted person.
There are some people that I talk to in SF
who are really, really plugged into the model system, right?
And you can kind of tell, yeah, these guys, like,
they got that sorted out.
I don't know who their therapist is.
It must be, like, 4.5 or 3.5 Sonnet or somebody.
But they seem stable.
I wonder if in a few months you'll be at a party
and you'll be like, oh, that guy's clearly a Sonnet guy.
That's a 4.5 guy over there.
It's rooting for a different football team or something.
Talk about when somebody asks you,
like, you know, hopefully they don't ask you this too much,
but if somebody asks you like,
oh, what do you think about the tariffs?
Like, do your eyes just like glaze over
because like none of this stuff matters
on like a 10 year time horizon?
Like I was talking to John the other day
and I like sounded like so just AGI-pilled,
like as I was saying it out loud,
John was like, you sound ridiculous, but like, I agree.
And it was this basic sense that like,
there's just so much chaos in the world right now,
politics, tariffs, all this stuff, every, you know,
people burning down, you know, Teslas or whatever.
And then you kind of zoom out a little bit,
and you sort of, like, understand,
you just sort of, like, look at the sort of adoption
of this technology, and sort of understand the potential, and it's almost like, yeah,
none of this stuff matters, because the world is going to look so, so different. What, you know,
are you able to just be 110% focused on the work and ignore all the noise? Or do you still care to look up every now and then
and look around?
Maybe I should be like more locked in than I am, right?
Like maybe I should be like in a dark room,
like just talking to the models and like, you know,
like feeling it out, but I think-
Maybe we're models.
Maybe the OpenAI team is generating us to test on you.
Yeah, yeah.
Real-turning test.
Simulation goes deep, man.
Like I was gonna tell you,
but you found out too quick.
I think, you know, when I joined OpenAI
I, like, tweeted this thing:
when you, like, shoot an arrow,
and, let's say, you're, like, in outer space, right?
There's no gravity, and you shoot an arrow, right?
You know, the arrow is gonna go forever,
unless it, like, runs into some black hole or something.
And, you know, the little degree difference
in where you point it, if you, like, point it up a bit,
point it down a bit, like, compounds to light-years,
right, as it, like, kind of travels
to different destinations, right?
And in some sense, like, you know,
if I, like, shoot this arrow from, like, Earth
without gravity or something,
and I, like, move my hand to the left a bit,
it ends up, like, half the universe away, right?
That's crazy.
And I kind of, you know, believe in the same thing,
because we, like, push these models to crazy limits, right?
That little, little differences in the initial conditions of AGI being born, or of the politics
of the time, or of the capital distribution, or whatever, I think can compound to incredible
differences in the limit. And I think that while it seems that these systems will be
incredibly powerful and very robust to maybe human will at some point,
I don't think that is an argument for the initial conditions
not mattering, if that makes sense.
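The arrow image is just small-angle geometry: the lateral drift of a straight-line path grows linearly with distance, offset ≈ d · tan(θ). A quick back-of-the-envelope sketch of the point being made here (the specific distances and angles are illustrative assumptions, not figures from the conversation):

```python
import math

def lateral_offset(distance_ly: float, angle_deg: float) -> float:
    """Lateral drift (in light-years) of a straight-line path after
    traveling `distance_ly` with an `angle_deg` aiming error."""
    return distance_ly * math.tan(math.radians(angle_deg))

# A 1-degree aiming error after just 1 light-year: ~0.017 ly off target.
print(round(lateral_offset(1.0, 1.0), 3))          # 0.017

# The same 1-degree error across ~46.5 billion light-years
# (roughly the radius of the observable universe) drifts
# hundreds of millions of light-years off course.
print(round(lateral_offset(46.5e9, 1.0) / 1e6))    # 812 (million ly)
```

The linear growth is the whole argument: the error never shrinks on its own, so tiny differences in initial conditions dominate in the limit.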
Great.
How are you thinking about safety, AI doomers,
these days?
It feels like the conversation kind of moved past it.
It feels like the doomers lowkey fell off.
Yeah.
Where are they?
I don't see people worrying about it
on Twitter as much now, right?
Yeah.
Well, I mean, they were telling me
I was going to be paper clipped for like 24 months straight,
not paper clipped yet.
And so I'm like, OK, yeah, move on.
But what's it like internally?
What is it like internally?
I can't say too much about kind of that.
I think just like your vibe in SF, maybe.
Yeah, I think there are a lot of sharp people that, like, care a ton about, like, this going well,
right. Yeah. There's this, like, great quote, I forget from, like, you know, a
NASCAR driver, I'm going to butcher it, Mario, Mario something, right. But, like,
you know, it's crazy, at the Formula One level, how many people think the brakes are
for slowing down, right. And, like, sometimes, to go as fast as possible, right, like, to, you know, get this out into the world as quickly as
possible, to make the most economic impact, you do have to do a bit of safety.
I think this should increase over time, as the expected impact also increases.
I think OpenAI is incredibly good at managing this.
I think that our leadership have done it for a while.
They will continue to get better at it too.
And I think it's kind of a cool thing, right?
To build systems that have unexpected behaviors
and can do things that maybe we weren't aware of originally.
And it's an important science to figure this out
before we release and to build things that get safer.
Yeah, I completely agree.
Last question for you.
Do you see us having a Ghibli-style moment
around agents this year?
Not necessarily at OpenAI, but just potentially broadly.
Because we've talked about this on the show a bunch,
but people have just been promising for decades now.
You're going to be able to talk to the computer,
and it'll book you the flight and the hotel and make your restaurant reservation
and we haven't had that magic experience yet.
Operator demos.
But I haven't seen that viral where it's like,
literally everyone that has access to Operator
has used it and posted the results
because they got something cool going or whatever, right?
There's some sense, too, where, like,
Ghibli's awesome because it takes three seconds
to look at an image, right,
and be like, oh wow, that's beautiful. Like, that made, like, you know, your engagement photo way cooler, right?
Like, it's really quick. Then I do think that, like, some of the most important agents,
like, some of the, you know, most important things that we'll build over the next few years,
might not have results that you can look at in three seconds and be like, wow, great job, model, right?
And in fact, I actually think that as the economic potential
and power of these systems increases,
maybe it actually becomes slightly harder in the limit
to tell the difference between capable systems
and very, very capable systems.
Right?
Just as models get better, it might be a bit harder
to tell the difference between a good model
and a brilliant model.
Right?
The cool thing, though, is that nines are a reliability
scaling problem.
As you push these nines out
and make them more reliable
in many more contexts, at some point they are just doing
a sizable chunk of economic labor, right?
So, you know, do I expect a Ghibli moment?
I'm not sure, but I do expect
these things to just provide a lot,
a lot of economic value, if that makes sense.
Yeah, that makes a lot of sense. Beautiful. Well, thanks for joining us.
Great. We'll have you back soon. That's awesome. Fantastic. Yeah, you're welcome anytime.
Good to see you guys. Talk to you soon. Congrats on the milestone, and keep it up. Yeah.
Boom, size gong. Bye. Fantastic.
Yeah, yeah, great conversations.
The OpenAI team, great guys.
And fun that they can talk as freely as they can
given their role, you know?
Obviously it's a very important and valuable company.
Yeah, it's actually, if you think about it,
something most companies just, period,
would not allow.
Yep, posting or any of that stuff.
Well, posting is one, but joining a live stream,
a live show.
High stakes.
That's high stakes.
But it's fun.
It's great.
But it's fun and I thought they had fantastic answers.
Well, let's.
Makes me very optimistic.
Let's go through another one: Numeral,
which puts sales tax on autopilot.
They do.
Spend less than five minutes per month
on sales tax compliance,
backed by benchmark.
Numeral's a new sponsor on the channel.
And we want to give a shout out to Numeral.
Shout out to Nate, Sam, and Matt.
For those that don't know,
Numeral works with over a thousand different e-commerce and SaaS businesses,
including a few of our favorites. Yeah, Ridge and Graza. Yep, many, many more.
If you're running an e-commerce business or SaaS company, you would be absolutely silly not to use Numeral.
I mean, it's funny that they're saying spend less than five minutes per month, because I feel like the alternative is not spending more time,
it's billing some lawyer $10,000.
It's actually insane what you had to do before.
It can be extremely expensive to do sales taxes pre-numeral.
Big shout out to Numeral.
Go check it out.
Yeah.
You want to get ahead of this early.
I think a lot of founders both in e-comm and SaaS aren't really taking this seriously
until they're at scale.
There's real money on the line.
So start early, start often.
And we have some breaking news that was sent to us
about the Newsmax IPO.
Have you followed this story?
Newsmax. Newsmax.
I'm Newsmaxing.
So Newsmax is up 110% today.
It IPO'd yesterday.
It's up 1900% in two days from a $10 price.
It's over $200 now.
The price is all over the place.
It's a 26-year-old
digital media company founded by Christopher Ruddy in 1998.
It has been variously described as conservative,
right-wing, and far right.
Newsmax Media's divisions include the cable and broadcast channel Newsmax TV.
I can't even keep track of all these right-wing media companies, because there are so many.
There's Truth Social, there's Parler, there's Gab, and Newsmax, and a whole bunch of these.
I'm so happy we never discuss politics. But we will discuss an IPO.
We will discuss an IPO.
Especially one that's popping.
I saw some funny jokes about how CoreWeave priced exactly
at $40, which is what you want to do to not leave
any money on the table.
Newsmax obviously left a ton of money on the table
by pricing the IPO so much lower than the price
that it's floated at now.
It could have, in theory, sold shares
and put a lot more money on the balance sheet
at a lot lower dilution.
So it's being valued beyond $30 billion right now,
which is about 1x.
That's great.
Well, if they hit a stumbling block,
maybe Jeremy Giffon will have to come bail them out.
He's hiring an investor,
and we want to highlight it here on the show.
He says, you'll be the first hire working with
him on the most interesting complex and asymmetric special situations in tech.
You'll be a good fit if you 1. trade annual letters like Pokemon 2. are high
paced and biased to action 3. are unapologetically money motivated 4.
have created and sold a product. Ideological
minority at a top 10 school,
debated risking it all with an SBA loan, bought stock before you could drive,
work in PE but long for higher MOIC, work in VC but long for higher ROIC, went
through YC but yearned to invest, can get a meeting with anyone on earth, made
money online in high school, value heterodoxy over consensus. Value making money over being right.
Steve Schwarzman defines eights
as those who can follow marching orders,
nines as those who can execute and strategize,
and tens as those who can sense problems,
design solutions, explore new directions,
and make it rain.
Eights and nines need not apply for this job.
This role will have an equal focus
in sourcing,
analysis, execution, and operations.
In essence, you'll wear every hat
as we build the firm together.
It will be all encompassing and demanding,
but autonomous and yours to shape.
You'll invest and transact with the best operators
and financiers in the world and produce work
at an exceptionally high standard.
So go hit up Jeremy.
Switch your business to ramp.com.
And go hit up Jeremy and let him know that TBPN sent you.
Undoubtedly a dream job, and
working for Jeremy will change your life.
Yeah.
He used to be a Tiny man, but now he has big aspirations.
That's true.
He worked for Tiny.
Now he's all about size, always going size.
Yeah.
But some of the smartest people in the world
call Jeremy to get advice
before they make important decisions.
That's true.
And so being able to work for him is like,
many people have said it'd be like getting
2,000 MBAs at once.
Yeah.
So.
It's like buying shares in CoreWeave in 2004.
Yeah. Right.
Yeah.
It's like buying Solana in 98.
Yep. Exactly.
Anyway.
Where should we go to next?
Oh, I mean, I don't know if we actually,
I think we did post this in the timeline.
You might not have seen it,
but Jeremy Gifford obviously does special situations
if your company raises too much money from VCs.
You build a decent business,
but you're just underwater on the pref stack.
He will come and unstick your business.
And it's somewhat adjacent to kind of injury lawyers,
if you have a cap table accident.
So we put up a fake billboard on the 101 that said,
a cap table accident called Jeremy Giffon,
put his real phone number on there, he didn't like that,
so he censored it out before he posted it.
But if you're looking to buy a billboard, go to AdQuick.com. Out-of-home advertising made easy and measurable. Say goodbye to
the headaches of out-of-home advertising. Only AdQuick combines technology, out-of-home expertise, and data to enable
efficient, seamless ad buying across the globe. So whether you're a startup or an
agency, anybody can go and leverage AdQuick, and you'd be silly not to. Yep.
I think you threw this in the chat.
Angry Tom says it's over: Meta just announced MoCha, a new model that turns text or voice into super realistic talking characters.
There's no way to tell anymore.
And we actually saw a fan of the show turn us into Lego characters, which is very cool.
And I think this repurposing and Ghibli-fication, we've only seen the very beginning of it.
It is interesting. We talked earlier about the VTuber thing. This type of model, obviously,
is very expensive and very slow right now,
but in the future you could have a model that's running on your live stream,
and we could be Lego characters the whole show one day, for the people that watch the whole show.
Ghibli mode. Ghibli mode.
And be able to toggle it in the sidebar.
Yeah, just like a color filter.
Yeah, I thought this was interesting.
I think Meta's struggling from a lack of quality posters
on their team because nobody's pumping us in the timeline.
Well, they're probably over on threads
and Zuck is posting on Instagram.
He's actually very good at going direct.
He posts the direct camera videos
when he launches new stuff. But I want more.
I thought some news that was interesting.
So Brett Goldstein launched a new all-in-one CRM today
on the anniversary of Gmail launching.
They launched on April 1st, oh yeah.
April 1st.
So apparently back in the day,
they launched and nobody actually believed them.
Because launching a product on April 1st is not a good idea.
Huberman launched a funny product this morning.
Contact lenses.
Blue light blocking contact.
I fell for one April Fool's joke today.
Very sad.
I fell for Trey's.
He said he was buying a grain silo in Ohio,
and I was like, yeah, that's believable.
Yeah, too believable.
You can't make it too believable.
He was just teeing himself up for a joke
that he wanted to be a cereal,
serial entrepreneur with a moat.
Yeah.
It went over my head.
So Brett says the email form factor
hasn't changed at all since then.
After many attempts, it's clear that we're actually
just looking at email wrong.
And so what Brett is building is an email company
that's sort of embracing email as a database,
so basically building applications on top of email,
everything from to-do lists to project trackers
to news, things like that.
So that's exciting.
And then clearly someone else had a similar idea.
Notion also launched Notion Mail, a new email service,
also coinciding with the launch.
I don't, I think the strategy of launching products
in general on April 1st,
when every company is doing fake product launches,
is I get the sort of like launch on-
Underrated, you think?
I don't think it, I think potentially,
you just gotta keep launching, right?
So like Brett should just keep launching
over and over and over.
Notion should figure out, you know,
how to launch new features, things like that.
But anyways, cool to see these two launches.
You know, Superhuman was fantastic when it launched.
I remember the FOMO from everybody of like,
how do I get an invite or whatever.
And then-
I really want a new AI-first email client.
And I know, yeah, everyone's gonna say
Google will kill this
in a few years, but I'll pay you $100 a month
for two years straight.
And I think that's a reasonable business.
Because I'm drowning in email,
and it's not filtered properly.
There's these buttons now where it's just like,
do you want Gemini to summarize this?
It's like, no, I can scan. That's not what I'm looking for.
I'm looking for a better spam filter.
I'm looking for a better spam filter.
Yeah, it's crazy that Gmail values a newsletter
the same way as, you know,
some critically important thing.
It's crazy. I mean, they do put it in a separate box
sometimes, but it's very tricky. It's not good.
Yeah, I mean, I want to almost find a product. I bet it exists.
I want to find a new RSS reader where I can basically take all my newsletters and just put them there,
so it's a different app,
essentially, because I don't want them in my email inbox at all. And then I want a proper unified inbox, because
Gmail and Chrome now want me to use different Chrome profiles
with different logins, and it's like,
it's just getting so confusing.
I just want all my email in one place.
Anyway.
How'd you sleep last night?
I slept.
You went to bed at.
I like that.
That's good, that's drama.
That's drama.
Hopefully that's not copyrighted.
So I'm gonna.
No, no, no, no, no, no.
Vivaldi's 300 years old. We're good.
We're good.
How did you sleep?
Tell me, Jordy.
88.
Ooh.
How did you sleep, John?
I slept 75.
Nights that fuel your best days.
Turn any bed into the ultimate sleeping experience.
Folks, go to eightsleep.com, use code TBPN.
Use code TBPN, get $350 off.
I'm very excited we're going to be having Mateo, co-founder
of 8sleep, on the show soon.
I'm going to bring you down.
We're going to make him our official sleep coach, not just
a sleep expert for the show.
The Eight Sleep's so helpful, but at this point,
it's doing everything it possibly can,
and clearly I need to do more.
Yeah, like the reason I'm not getting 100, it's not on Eight Sleep anymore.
Autopilot has optimized so much, and it's pushing you harder. It's pushing you harder.
I want to be at consistent 100s, and I need to know the stack that I need to get there.
Probably taking fewer stimulants. Also, waking up
between four and five daily
means that I basically am getting into bed
when it's light out now, after the time change.
And you're like, oh, I guess I'm five years old now.
But whatever it takes, anything for the show.
Yeah, anything for the show.
Should we close out with this Dwarkesh Patel post?
I thought this was interesting.
It's a long one, but I thought it'd be good to read through.
He says, that feeling when Gwern casually articulates a more insightful framework for
thinking about your life's work than you've ever sketched out yourself.
Gwern wrote six days ago, and there's a screenshot.
You can see it as an example of alpha versus beta. When someone asks me about the value of someone as a guest,
I tend to ask: do they have anything new to say? Didn't they just do a big
interview last year? And if they don't, but they're big, can you ask them good
questions that get them out of their book? Big guests are not necessarily as
valuable as they may seem, because they are highly exposed, which means
both that, one, they probably have said everything they will say
before, and there's no news or novelty.
This is really important when you're doing interviews.
You need to get people off their book, and I think you're very good at that with
asking people about the current thing, essentially.
Because there's a lot of people that
pass out their timeless wisdom seven different times on various podcasts,
but they haven't talked about humanoid robots yet.
Yeah, and so you get them on that and it's interesting. Or they are message-disciplined and not fun to talk to.
Nobody's asked Senra why great allocators cannot find the right life partner.
That's true. Good point. In this analogy, alpha represents undiscovered or neglected interview topics,
which can be extracted mostly just by finding them
and then asking the obvious question,
usually by interviewing new people.
Beta represents doing standard interview topics slash people,
but much more so: harder, faster, better,
and getting new stuff that way.
That's us.
Lex Fridman podcasts are an example of this.
He often hosts very big guests like Mark Zuckerberg,
but nevertheless, I will sit down and skim
through the transcript of two to four hours of content
and find nothing even worth excerpting for my notes.
Fridman notoriously does no research
and asks softball questions.
They're going hard on Lex.
Oh, it's rough.
And invites the biggest names he can get
regardless of overexposure.
And so if you do that, you will get nothing new.
He has found no alpha,
and he doesn't interview hard enough to extract beta,
so he's sort of the high-expense-ratio index fund of podcast interviews.
I think this misses the fact that for a lot of Lex Fridman's audience, you might not
have heard much from Mark Zuckerberg, and it's fun to just listen to him for a couple hours.
So I still think there's value there.
Anyway, Sarah Paine, on the other hand, who was a guest on Dwarkesh and blew up,
seems to have been completely unknown
and full of juicy nuggets, and is like winning the lottery.
You can make your career off a really good trade
like Paine before it gets crowded.
Great trade.
And we gotta have her on.
We're taking her.
She's coming on our show next, Dwarkesh.
Just kidding.
But you're welcome to come opine on the current thing,
Sarah, if you'd like to join.
But we're not gonna... I do want to hear her thoughts
on Unitree. Yeah, robotics. Yeah, break it down for us, break it down. What's your p(doom)?
However, if
another successful podcaster has her on, they will probably not discover Paine is their most
popular or growth-productive guest ever; the well is dry. Paine may have more to say someday,
but that day
is probably closer to five years from today than tomorrow. And that's a very good point.
So a good interviewer adopts an optimal-foraging mindset. Once you have harvested a patch of
its delicious food, you have to move on to another patch which hasn't been exhausted
yet, and let the original patch slowly recover. So a great guest for Dwarkesh's blog would
be, say, Hans Moravec or Paul Werbos.
Moravec hasn't done anything publicly in at least a decade and is fallow, while Werbos
has been more active and in the public eye, but still not much, and is such a weird guy
that just about any question will be interesting.
Reich was also a good guest, because while Reich is very public in some senses, he's
written popularizing books even, he is still obscure. Almost none of what he has published is well known, and he is involved in
so much fast-paced research that even the book is now substantially obsolete, and he has a lot of new
stuff to say. And Reich will have more stuff to say if revisited in, say, two years for an update.
So a harvester will be making a note to revisit him if the current crop of interview candidates
in the pipeline is looking marginal.
A difficult and mediocre guest would be Tony Blair.
He can surely say many interesting things about the current geopolitical context and his work since being PM,
but he is a super experienced career politician who has survived countless Question Times and may eat you for breakfast and exploit you
for ulterior purposes, rather than vice versa. Similarly, Mark Zuckerberg and Satya Nadella are tough nuts.
There's meat there, but are you willing enough to bring down the hammer or will you settle
for a mediocre result that mostly just fills space and is not a must watch?
A bad guest might be someone controlling and extremely PR-savvy like MrBeast.
This is the sort of guy who will give you a bad interview, pushing his book shamelessly,
and then might wind up spiking the interview
anyway if he felt he wasn't getting enough out of it, and just drops it as sunk cost,
though it was weeks of work on your part and blows a hole through your schedule, and that's not his problem.
That's why we only spend two minutes texting guests, and then no minutes
prepping, so that we just have them on for 15 minutes. We do it live, throw them off.
No, I mean... we
try to...
It's less interesting to talk to a founder and have them only talk about their business,
because you sort of know it and you've heard it elsewhere. Yeah, you've heard it elsewhere. It's on their website, et cetera.
It's much more interesting to kind of get their opinion
on the market broadly.
But this is great.
In other news, Circle has officially filed for an IPO.
That was on the Polymarket ticker.
Yeah, so this was cool.
So I actually have the Polymarket pulled up right now.
Yeah, did it jump?
And it had been sitting,
a week ago,
Circle IPO in 2025 was at 54%.
And yesterday morning, it just absolutely popped up to 88%.
And then now it's sitting almost at 100.
And yeah, I'm excited to like dive into this.
We should try to do an S1 breakdown
tomorrow.
Cool.
And yeah, this is a big one.
Well, congratulations to everyone over at Circle.
And that is a good place to close out the show.
Remember, if you want to get a bottle of Dom Perignon.
Leave us five stars on Apple podcasts.
Be creative.
You can put an ad.
Be creative.
Put an ad in it.
You can do whatever you want. Send it to us, tweet it at us, and we will send you this delicious bottle of
Dom. And we are going to make a decision by the end of day tomorrow, and we will
also send you the Z-Biotics to go with it. There you go, not hungover. Today was
fantastic. It was great, and I can't wait for tomorrow. I'm so happy it's only
Tuesday. Yeah, me too. Three more days of shows ahead of us. Big shows.
Thanks for watching. Thank you, folks. Have a great day. Bye.