TBPN Live - Ilya Sutskever on Dwarkesh Patel Reaction, NVIDIA’s Response to Google’s AI Progress, Trump Unveils Genesis | Diet TBPN
Episode Date: November 26, 2025

Our favorite moments from today's show, in under 30 minutes.

TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.a...pp
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
fal - https://fal.ai
Privy - https://privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
Timeline was in turmoil over the weekend and yesterday.
We covered a little bit about the nucleus dust up on the timeline.
The biggest news in tech and AI is that the Ilya Sutskever episode of the Dwarkesh Patel podcast has dropped.
The opening clip is iconic.
It's very funny.
It's a bit of a hot mic moment.
Listen to this.
"All of this is real."
Yeah.
Meaning what?
Don't you think so?
Meaning what?
Like all this AI stuff and all this Bay Area.
Yeah.
that it's happened, like, isn't it straight out of science fiction?
Yeah.
Another thing that's crazy is, like, how normal the slow takeoff feels.
The idea that we'd be investing one percent of GDP in AI.
They haven't even set up the cameras, you know, and right now it just feels like...
And we get used to things pretty fast, turns out, yeah.
But also it's kind of like it's abstract, like, what does it mean?
But it means that you see it in the news.
Yeah.
That such and such company announced such and such dollar amount.
Right, that's all you see, right? It's not really felt in any other way so far. Yeah. Should we actually begin here? I think this is an interesting discussion.
Sure. It's like one of the greatest podcast intros from the average viewer's point of view. So good. Really good. So good that's going to be a new meta. Yes, yes. You can't fake that. It's amazing.
Also, it's just funny because, you know, it's effectively getting caught on a hot mic. But I was joking, it was like, of all the things that you could say on a hot mic before you sit down, oh, okay, we're actually recording... His is just completely reaffirming everything we know about Ilya Sutskever.
It's just completely the same. Like, okay, he is a true believer. It's not like he was sitting down and being like, Dwarkesh, we've got to go on my private plane. I just sold so much secondary. It's crazy what's going on with this stuff. Like, if people really think this AI thing's going to pan out... I'm making billions of dollars. I'm cashing out. I don't believe any of this stuff is real. No, he wasn't caught on a hot mic like that.
His hot mic moment is like, wow, it's exactly like science fiction.
Everything is all real.
It's all real, yeah, which is just iconic.
Tyler, did you have any other takeaways from your speed run?
You're listening to it at 5X, right?
Does he pop the scaling bubble?
Does he give a bearish take about AI at any point?
So I wouldn't say he's like anti-scaling, but he does kind of give this interesting take,
which he basically says that like AI companies, like there's too few ideas for the amount of companies.
And for the scale that we're at, you can think of AI progress as being in these kind of distinct ages, right?
So he says 2012 to 2020 was like the age of research
where you're trying all these like different ideas
and the scale of things is very small, right?
Like to train the original AlexNet was like two GPUs
to do the original transformer was like eight, maybe 64,
but like, you know, very small amount of GPUs.
Once we kind of figured out that transformers work,
we entered this age of scaling.
And that's basically from 2020 to 2025.
And now we're basically at this point,
where, like, yes, you can keep scaling, and models will get better.
But even if you scale 100x, like, are we really going to get super intelligence?
It'll get better on the benchmarks, and they'll become more useful.
But it's not like this, he doesn't think that just raw scaling alone is basically what's going to bring us there.
I mean, this has been echoed by a lot of people.
We still need a couple different kind of paradigms for this to work.
The reason that Opus 4.5 was better is not just because they scaled pre-training.
It's scaling generally.
The scaling has gone from pre-training, and now it's RL.
Yeah.
And so we basically, we need to find another paradigm.
And the way you do that is just doing, like, research.
And so he talks about SSI as basically being this, like, all they do is research.
Return to research.
Yeah, it's small kind of training runs.
Even though, you know, they only raised $3 billion, which is, like, small compared to other research institutions,
the fact that they're basically putting it all on these kind of, I mean, I don't know if they're moon shots,
but they're these small training runs where they're doing experiments.
Yeah.
And then they're going to scale it up eventually.
but they're not just basically trying to win the AI race
by just scaling up and doing the same thing as someone else?
Yeah, yeah.
They're trying to find a new,
a way to actually bend the scaling curve,
find a new scaling law,
or find a new technology that they can scale against.
I was thinking about Ilya's talk at NeurIPS last year.
He pulls up this chart of the relationship
between the mammal's mass and the brain volume,
and it's a pretty linear graph.
And so, like, the elephant is a lot bigger than the mouse.
And so it has a proportionally larger brain to its body volume.
And it's this perfect, it's this perfect linear curve.
I should, I should just try and figure it out.
If I, uh, I can maybe text it in.
Boom.
So basically the, the mammals have this like very clear linear trend.
But then the, uh, non-human primates are a little bit higher up on the chart.
and they're just doing a little bit better.
But then hominids, the actual humans,
have a different, there's a very distinctly different curve.
It was making me think, like, maybe that's what we're supposed to see
when we think about, yeah, this, this.
When we say, like, straight lines on log graphs,
when we say we are seeing scaling happen
with the current architectures,
which line are we scaling against?
Are we actually scaling on the human curve?
or are we waiting for divergence from that current scaling wall?
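The chart being described is a classic log-log allometric plot: a power law brain = c * body^k appears as a straight line whose slope is the exponent k, and "a different curve" means a different slope or intercept. A minimal sketch of fitting that slope, using made-up masses purely for illustration (not the actual data from the NeurIPS talk):

```python
import numpy as np

# Hypothetical brain/body masses (kg), for illustration only -- not the
# actual data from the NeurIPS talk.
body = np.array([0.02, 3.0, 60.0, 4000.0])    # mouse, cat, primate, elephant
brain = np.array([4e-4, 0.03, 0.4, 5.0])      # rough brain masses

# An allometric law brain = c * body**k is a straight line on log-log axes:
# log(brain) = log(c) + k * log(body). Fit the slope k by least squares.
k, log_c = np.polyfit(np.log(body), np.log(brain), 1)
print(f"fitted exponent k = {k:.2f}")
```

The point of the hominid curve in the talk is that a population can sit off the fitted line entirely, which is what "a new scaling law" would look like on such a plot.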
Scaling has taken all the air out of the room, right?
Where, like, basically, we have more than enough compute
to try these, like, different ideas,
but they're just all going straight into training the next big model
using the next paradigm.
And maybe it's slightly different, right?
You have a different way of doing RL or whatever,
but it's still fundamentally the same thing, right?
And he talks about maybe continual learning
is really the better approach, right?
We've been in this era of, like, having a pre-training thing
for so long that we think of, like, AI as, like,
you train this thing, and then you release it, and it's like done.
And RL's like a little bit different now because there's this idea of post-training
and you can kind of integrate different things.
I also thought the interesting thing was with pre-training, you use the whole internet
so you don't have to decide anything.
You're just applying this algorithm to just all the data, all the compute, and there's no decisions.
But then with RL, you have to decide, okay, we're putting in these math equations
and we're maybe not putting in something else because we're actually creating the data.
And it's not just this simple thing.
This is maybe why we see these kinds of models that do super well in evals, but not so much in the real world.
Yeah, some of the overfitting.
And the reason is because the data that we choose is not the correct data,
because researchers are basically being reward-hacked maybe into, like, just solving for benchmarks.
It's interesting to hear the conclusion be, like, "we need another breakthrough," and then simultaneously the consensus be, like, "but we're definitely going to get that breakthrough in the next decade."
I feel like it echoes a lot of what even Mike Knoop has been saying, right? He's been saying "we need new ideas" for months.
But it's way harder to predict the rate at which breakthroughs will arrive, as opposed to, like,
you can actually chart out, okay, the formation of capital, the time it takes to build a data
center, how long it takes to, you know, manufacture a bunch of GPUs, rack them, run the
training round.
Like that's much more predictable than like human came up with new algorithm.
That's sort of random.
And he brings this up as the reason why you see companies doing this.
It's just, if you're raising money, it's so much easier to justify the raise by saying
we're going to buy this data center and we're going to do this training run.
It's going to cost exactly this much.
It's going to monetize this way.
Yeah, and then the model will be this good, and then we can use it to monetize this way.
Totally.
Where if you're just saying, like, oh, yeah, we're just going to pay a bunch of, like,
really smart researchers to do a bunch of research and then they'll figure something out.
That, like, you can't really underwrite.
Yeah, in some ways, it feels like SSI is set up for, like, somewhat of a mini AI winter
or, like, at least riding the hype cycle down.
Yeah.
because it doesn't sound like he's sitting there being like,
we raise $3 billion and we're spending it in the next 12 months.
It's like...
2.9 was debt.
No, not...
No, no, no, no.
That's the point, it's not.
It's like equity.
It's just sitting there.
It's like he can clearly pull back.
I'm going to give each researcher, or all these different teams, like, a few shots on goal.
No, I love it.
We're going to keep taking those shots. And obviously he'd be able to raise, like, another $10 billion
whenever he wants, especially if he has, like, a key breakthrough insight and they can be first
to scale that.

"We're delighted by Google's success. They've made great advances in AI, and we continue to supply to Google. NVIDIA is a generation ahead of the industry. It's the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions."

That is a crazy thing to post. Crazy, crazy, crazy thing to post.
Sometimes we get stuff from NVIDIA.
I don't know, boys, but having the largest company in the world sending tweets to defend their main product is not very reassuring.
I feel like this would be so much better delivered.
I actually don't have that much of a problem with the actual text here.
This should be delivered by Jensen with some nuance in a conversational setting.
It just hits a lot different when this goes out at exactly 9 a.m., like, clearly scheduled, clearly typed out in a document.
And it feels like a press release, which is just an odd thing when it should be, there should be
an answer to a question.
Someone, Bobby Cosmic in the chat, was saying, like, oh, the mainstream media is just now
picking up on the Gemini 3 story.
And there's articles in the Wall Street Journal and other places saying like, oh, maybe Google's
back.
Like, you know, buy Google.
Like, it's very exciting.
And so, Nvidia feels the need to respond to that.
But it's a lot different when it's actually a response instead of just like a,
we're putting out a press release.
Like, who knows why?
Yeah.
As opposed to, like, Jensen saying, well, since you asked, to a talk show host or news anchor
or top podcast host, whoever he's talking to, Dwarkesh, whoever, maybe us, we'd love to have him.
I can ask him that question.
He can defend this here.
Well, the timing seems important because they are coming under a huge amount of pressure
right now.
There's an article in Barron's this morning by Tae Kim.
The headline is not what NVIDIA's comms team would have liked it to be:
NVIDIA says it's not Enron in private memo refuting accounting questions.
That's a crazy thing to say.
Let me get to the coverage.
So Tae says a series of prominent stock sales and allegations of accounting irregularities
have put NVIDIA in the middle of a debate about the value of artificial intelligence
and its related stocks.
Now, NVIDIA is pushing back in a private seven-page memo sent by NVIDIA's investor relations
team to Wall Street analysts over the weekend.
The chipmaker directly addressed a dozen claims made by skeptical investors. NVIDIA's memo, which includes fonts in the company's trademark green color,
begins by addressing a social media post from Michael Burry last week, which criticized the company
for stock-based comp and dilution in stock buybacks. Burry's bet against subprime mortgages
before the 2008 financial crisis was depicted in the movie The Big Short, of course.
NVIDIA repurchased $91 billion of shares since 2018, not $112 billion. Mr. Burry appears to have
incorrectly included RSUs and RSU taxes. Employee equity grants should not be
conflated with the performance of the repurchase program, NVIDIA said in the memo.
Employees benefiting from a rising share price does not indicate the original
equity grants were excessive at the time of issuance. That makes sense.
Barron's reviewed the memo, which initially appeared in social media posts over the weekend,
and confirmed its authenticity. Burry told Barron's he disagrees with NVIDIA's
response and stands by his analysis. He said he would discuss the topic of
the company's stock-based comp in more detail.
Burry is, of course, now over on Substack. He's charging $380 a year,
and if you are a permabear... I can't. This is like Christmas coming early.
NVIDIA didn't respond to Barron's request for comment, but they also responded to claims
that the current situation is analogous to historical accounting frauds, Enron, WorldCom,
and Lucent, that featured vendor financing and SPVs. Unlike Enron,
NVIDIA does not use special purpose entities to hide debt or inflate revenue.
NVIDIA also addressed allegations that its customers, large technology companies, aren't properly accounting for the economic value of NVIDIA hardware.
Some of the companies, we've talked about this, use a six-year depreciation schedule for GPUs.
Burry said he believes the useful lives of the chips are shorter than six years, meaning NVIDIA's customers are inflating profits by spreading out depreciation costs over a long period.
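The depreciation dispute is simple arithmetic: under straight-line depreciation, the assumed useful life sets the annual expense, so a shorter life means more expense per year and lower reported profit. A sketch with a hypothetical purchase amount (not any company's actual figures):

```python
# Straight-line depreciation: the assumed useful life directly sets the
# annual expense. Numbers are hypothetical, not any company's actuals.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Spread the full cost evenly over the assumed useful life."""
    return cost / useful_life_years

gpu_fleet_cost = 12_000_000_000.0   # hypothetical $12B of GPUs

six_year = annual_depreciation(gpu_fleet_cost, 6)    # $2B/yr expensed
three_year = annual_depreciation(gpu_fleet_cost, 3)  # $4B/yr expensed

# A shorter assumed life means a bigger yearly expense and, all else
# equal, lower reported profit by the difference:
print(f"extra annual expense on a 3-year life: ${three_year - six_year:,.0f}")
```

This is the whole Burry argument in one line: if the chips really wear out or obsolesce in three years rather than six, the hyperscalers' current income statements are understating the true annual cost of their fleets.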
The "TPUs equal bad for NVIDIA" take is up there with the dumbest, maybe worse than DeepSeek, as it completely misses what actually happened in the last six weeks,
and I will remember who is who in the zoo. My view: one, demand for AI is bananas. No one can meet demand. Everyone is spending more. Google said just yesterday they have to double capacity every six months to keep up. Two, scaling laws are intact. He's referencing Gemini 3. The flywheel is about to speed up. Somehow the mid-curve crew thinks this is zero-sum competition. None of this suggests that. If you think the race is hot now, wait until you see what comes out of large coherent Blackwell clusters. All the magic
from the quote god machines is pretty much still Hopper-based. Lastly, a quick GPU/TPU lesson: the cost
and performance specs on the box aren't what you get in real life, and Google is going to get a
fat margin too. What matters is system-level effective tokens per watt per dollar, and TCO.
NVIDIA GPUs have higher MFU because they're already embedded in workflows, slash, the ecosystem is massive.
By the way, this is a good test: if you have an opinion on this topic but you have to look up MFU, then perhaps curate better sources.
What? MFU? MFU. What did I say? FMU.
The above effective tokens-per-watt gap also likely widens with Rubin. Add in that Jensen can actually deliver volume in a tight
market, plus future flexibility: multi-cloud capable, programmable for paradigm shifts. And he'll sell
every GPU. Google will too, since everyone wants a second supplier, and TPU is a fantastic chip,
but this is as far from either-or as it gets.
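MFU here is model FLOPs utilization: the fraction of a chip's peak FLOP/s a training workload actually achieves. A back-of-envelope sketch, where every number is hypothetical and the 6N-FLOPs-per-token figure is the common rule of thumb for dense transformers, not a measured value:

```python
# MFU (model FLOPs utilization): the fraction of a chip's peak FLOP/s
# that a workload actually achieves. All numbers below are hypothetical.
def mfu(flops_per_token: float, tokens_per_second: float,
        peak_flops_per_second: float) -> float:
    achieved = flops_per_token * tokens_per_second
    return achieved / peak_flops_per_second

n_params = 70e9                    # hypothetical dense model size
flops_per_token = 6 * n_params     # common ~6N FLOPs/token rule of thumb

util = mfu(flops_per_token,
           tokens_per_second=1.2e3,        # per-chip training throughput
           peak_flops_per_second=1.0e15)   # ~1 PFLOP/s peak spec
print(f"MFU = {util:.0%}")
```

That gap between spec-sheet peak and achieved throughput is exactly the "specs on the box aren't what you get in real life" point in the quoted post.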
The one benefit of this confusion is that it is likely to give Google a brief stint
as the world heavyweight champion, the most valuable company.
I would guess the midwits put the strap on them in less than two weeks.
Put the strap on them?
What does that mean?
Just like pile in?
It seems like he's predicting that people will overplay the NVIDIA bear take
and overplay the Google opportunity, and that will result in Google becoming the most valuable company in the world.
And he uses the phrase "put the strap on them," in less than two weeks.
According to today's Wall Street Journal, AI-related investment accounts for half of GDP growth,
a reversal would risk recession.
We can't afford to go backwards.
The article is how the U.S. economy became hooked on AI spending.
President Donald J. Trump unveils the Genesis mission to accelerate AI for scientific discovery.
Today, Trump signed an executive order launching the Genesis mission
and new national effort to use artificial intelligence
to transform how scientific research is conducted
and accelerate the speed of scientific discovery.
The Genesis mission charges the Secretary of Energy
with leveraging our national laboratories to unite America's brightest minds,
most powerful computers, and vast scientific data
into one cooperative system for research.
The order directs the Department of Energy
to create a closed-loop AI experimentation platform
that integrates our nation's world-class supercomputers and unique data sets to generate scientific foundation models and power robotic
laboratories. The order instructs the assistant to the president for science and technology to
coordinate the national initiative and the integration of data and infrastructure
from across the federal government. There's one more note here on strengthening America's
AI dominance. Trump continues to prioritize America's global dominance in AI to usher in a new
golden age of human flourishing, economic competitiveness, and national security. Yeah, I'm very
interested to hear how, like, how the public-private partnership actually works here. There was a time
when every, basically, every cool technology was coming out of DARPA, coming out of the U.S.
government. The U.S. government landed on the moon. And since then, you know, I think a lot of
people in technology have lost faith in the U.S. government overseeing the development of technology.
Even academia. I mean, people think, like, you know, AGI will emerge from a private
C-Corp. That's where people believe that the best work will be done. Give Ilya Sutskever,
give the best scientists $3 billion, let them go cook. Like, that's the thesis currently. This feels
give the best scientist $3 billion, let them go cook. Like that's the thesis currently. This feels
like somewhat of a rejection of that in some ways. There's obviously lots of different places
where having AI resources, having science and technology resources within the government
make a ton of sense. But it'll be interesting to see like where are the interfacing
points between the two categories. By default, I think most people in our audience in technology
would say, hey, let's leave the space travel and the AI research to the private sector.
Should we run through the Astral Codex Ten piece on trait-based embryo selection? This is from Scott Alexander
in Astral Codex Ten. He said: suddenly, trait-based embryo selection. In
2021, Genomic Prediction announced the first polygenically selected baby. When a couple uses IVF,
they may get as many as 10 embryos. If they want one child, which one do they implant? In the early
days, doctors would just eyeball them and choose whichever looked the healthiest. Later,
they started testing for some of the most severe and easiest-to-detect genetic disorders,
like Down syndrome and cystic fibrosis. The final step was polygenic selection,
genotyping each embryo and implanting the one with the best genes overall.
Overall in what sense? Genomic Prediction claimed the ability to forecast health outcomes from diabetes
to schizophrenia. For example, although the average person has a 30% chance of getting type 2 diabetes,
if you genetically test five embryos and select the one with the lowest predicted risk,
they'll only have a 20% chance. So you get a 10-percentage-point reduction there. That's nice. Since you're taking
the healthiest of many embryos, you should expect a child conceived via this method to be
significantly healthier than one born naturally. Polygenic selection straddles the line
between disease prevention and human enhancement. In 2023, Orchid Health, founded by Noor,
who we've had on the show, entered the field. Unlike Genomic Prediction, which tested only the most
important genetic variants, Orchid offers whole genome sequencing, which can detect the de novo
mutations involved in autism, developmental disorders, and certain other genetic diseases. Critics
accuse GP and Orchid of offering designer babies, but this is only true in the weakest sense. You
couldn't design a baby for anything other than slightly lower risk of genetic disease,
you're basically just selecting out of what you already got.
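The 30%-to-20% arithmetic above can be sketched with a toy liability-threshold simulation: give each embryo a polygenic score, implant the lowest-scoring of five, and count how often disease still occurs. Every parameter here (prevalence, predictor strength) is an illustrative assumption, not Genomic Prediction's actual model:

```python
import numpy as np

# Toy liability-threshold model of "implant the lowest-risk embryo of five".
# Prevalence and predictor strength are illustrative assumptions only.
rng = np.random.default_rng(0)
prevalence = 0.30   # baseline lifetime risk (as in the 30% diabetes example)
r2 = 0.10           # share of liability variance the polygenic score explains

# Disease occurs when total liability (score + everything else) exceeds
# the threshold matching the baseline prevalence.
threshold = np.quantile(rng.standard_normal(1_000_000), 1 - prevalence)

def risk(n_trials: int, n_embryos: int) -> float:
    g = rng.normal(0.0, np.sqrt(r2), size=(n_trials, n_embryos))
    g_chosen = g.min(axis=1)   # implant the lowest-scoring embryo
    e = rng.normal(0.0, np.sqrt(1 - r2), size=n_trials)
    return float(np.mean(g_chosen + e > threshold))

print(f"baseline risk (1 embryo):  {risk(200_000, 1):.0%}")
print(f"selected risk (best of 5): {risk(200_000, 5):.0%}")
```

Even with a predictor explaining only a modest slice of liability, taking the minimum of five draws shifts risk meaningfully below baseline, which is the whole mechanism behind the claimed reduction.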
They're not editing the genes.
They're merely sequencing them and then allowing you to select.
These companies refused to offer selection on traits, the industry term for the really
controversial stuff like height, IQ, or eye color.
Still, these were trivial extensions of their technology, and everyone knew it was just a matter of time before someone took the plunge.
Last month, a startup called Nucleus took the plunge.
They had previously offered 23 and Me-style genetic tests for adults.
Now they announced a partnership with genomic prediction, focusing on embryos,
although GP would continue to only test for health outcomes.
You could forward the raw data from GP to nucleus,
and nucleus would predict extra traits,
including height, BMI, eye color, hair color, ADHD, IQ, and even handedness.
And it's worth noting that nucleus is now being sued by genomic prediction.
Even though they have this partnership.
I'm assuming the partnership is no longer, we can ask.
But I'm assuming it's no longer because one of GP's co-founders left the company to join Nucleus.
Interesting.
And allegedly turned off all the security cameras.
Is that metaphor?
Or is that actually?
The lawsuit alleges that he turned off all the security cameras on his last day.
That's not a metaphor for like, you know, sharing a Google Drive of people.
It's his last day at work, and he was allegedly, like, rounding up.
Okay, so he turns off the cameras, allegedly, and the implication is that maybe he was rummaging around,
like literally taking documents or something like that.
That's at least what's being alleged on the timeline.
Okay, wow.
People at Nucleus were emailing the former co-founder at his old email address,
evidence of them
violating the agreement that they had.
So anyways, it's very, very, very, very messy.
We can ask.
Yeah, there's like four or five companies involved in this.
And all of them are controversial because this is the most, I think the most
controversial, probably like category that you can be in.
Yeah, it's certainly up there.
And also, there's just, it's so easy to throw, I mean, in the same way that people are throwing Enron
at NVIDIA, it's so easy to throw Theranos at any biotech company
that's accused of anything. And also with biotech, it's pretty hard to understand the
underlying science. It's not as simple as, okay, does the website
work? Does the business make money? You know, what's the cash flow like? It's way more
complicated. And so it does attract even more attention. So one of the other companies in the
space is Herasight. And Astral Codex Ten continues here.
They entered the space with the most impressive disease risk scores yet, an IQ predictor worth
six to nine extra points, and a series of challenges to competitors, whom they call out for
insufficient scientific rigor.
Their most scathing attack is on nucleus itself, accusing its predictions of being misleading
and unreliable.
Let's start with the science and then move on to the companies to see if we can litigate their dispute.
In theory, all of this should work.
Polygenic embryos, polygenic embryo screening is a natural extension of two well-valvents.
validated technologies, genetic testing of embryos, and polygenic prediction of traits in
adults. So genetic screening of embryos has been done for decades, usually to detect chromosomal
abnormalities like Down syndrome or single-gene disorders like cystic fibrosis. It's challenging.
We've talked about this before. You need to take a very small number of cells, often only
five to ten, from a tiny proto-placenta that may not have many cells to spare, and extract a readable
amount of genetic material from this limited sample, but there are known solutions that mostly
work. And so the companies that we're talking about today aren't necessarily doing, like, the
fundamental lab equipment development, building the machine, figuring out how to sequence data
from scratch. It's more about the analysis that happens on top of the results. And the
recommendations, which is probably, which I would say is the most controversial part of this.
I don't know that any of them are recommending, hey, we think you should take, we think you should pick this baby.
They're more just saying like, we think that according to the data, this baby might be taller than this one.
But if you're giving somebody risk factors... If you're giving... Yeah, but that's not a recommendation.
If I tell you, this car is 700 horsepower and does zero to 60 in two seconds, and this one is 800 horsepower and does zero to 60 in 2.4 seconds, this one's faster in a straight line.
This one's faster on the curves.
And then, like, you pick, like, I didn't make a recommendation.
I just told you the stats.
If a company engages in malpractice, e.g. plagiarism, providing products they should know are
bad to customers, et cetera, is it water under the bridge if they can clean up? That's obviously
a reaction to... my question was, you know, is there a redemption arc in his mind? Somebody says
Volkswagen can answer this question really well. I think that's because of Dieselgate.
I just feel like the next turn of discussion needs to be, okay, we tested the models, we tested the
data. We tested the claims at a lower, at a higher level of rigor, I guess.
Sajuan Mala is also accusing Kian of using a chad filter. This happened before. So when
Kian came on the show, maybe six months ago, Growing Daniel accused him of using a chad filter
and it went super viral. And I was kind of like, oh, like, that's...
I don't know.
I don't know how to, you know, even respond to that.
That's a very silly claim.
I have no idea if this is real.
I can't tell at this point on a Zoom call at this resolution.
What do you think?
Do you think this is real?
Are you guys just cracking up?
Because everyone, does everyone think it's real?
I don't think that he used a filter.
I don't think he used a filter.
I don't think he used a filter.
I don't think he used a filter either.
I think he just grew a beard.
I think he's just been mewing maybe.
Maybe he's just photogenic.
Yeah, it is possible that he just,
He just, you know, flexed his jaw muscles and, like, you know, he has low body fat.
I don't know.
I feel like it would be extremely high risk to run a chin augmentation filter.
The filter goes down for a second.
I mean.
Because you know that's what happens, right?
When you're using, like, the Snapchat filter or, like, the TikTok filters, like,
sometimes they pop in and out.
And if they pop out, like, you're done.
He's got to get the Nucleus test, the gigachad test, and publicize the results.
Everyone's cracking up in the studio.
We're having a wild time.
Anyway, this is actually insane.
Apparently, according to X, I don't know if this is true,
but the robbery that took place yesterday
in which an armed thief posed as a delivery driver
and robbed somebody of $11 million of Ethereum and Bitcoin,
it was Lachy Groom who was targeted.
Whoa, what?
An armed thief posing as a delivery guy finessed his way
into the $4.4 million Mission District home
shared by investor Lachy Groom,
yes, Sam Altman's ex-boyfriend,
and another tech investor named Joshua.
Okay, so it was not Lachy, but Joshua?
Garry Tan posted the footage, panicked enough to delete it minutes later.
Crypto security experts are now saying
what everyone thinks, self-custody is great
until someone shows up at your door
with a fake UPS label and a Glock.
San Francisco's tech leader
about to hard pivot into vault custody,
private security, zero public flexing because this heist wasn't random. It was a warning
shot. Very ChatGPT-written. Mario Nawfal. But, uh, anyways, very sad.
Yeah. We will be back on Friday. Friday. For Black Friday, we have a fantastic lineup of a bunch
of different entrepreneurs, e-commerce, founders, brand builders. Some of the most savage
operators.
It will be a lot of fun. It will be a lot of fun. Cannot wait. It's going to be a great time.
A lot of friends.
So we'll talk to you.
Happy Thanksgiving. We are thankful for each and every one of you. Thank you for being a part of this,
and we'll see you Friday. Goodbye. Cheers.
