Big Technology Podcast - Why Extreme Risk Takers Are Winning — With Nate Silver
Episode Date: August 14, 2024
Nate Silver is a statistician, election prognosticator, and bestselling author. He joins Big Technology Podcast to discuss his reporting on extreme risk takers, and why they seem to be winning. Tune in to hear Silver's theory on how society bifurcates into the risk-forward, probability-oriented thinkers (The River) and the safety-seeking, status-oriented community (The Village). In this episode, we discuss poker, sports betting, effective altruism, VC, AI research, and existential threats to humankind. Tune in for a fascinating deep dive from one of The River's foremost instigators. To buy Silver's book, On The Edge, check out his book page: https://www.penguinrandomhouse.com/books/529280/on-the-edge-by-nate-silver/ --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Why is a group of extreme risk takers dominating our society and how is their mentality
influencing everything from tech startups to artificial intelligence? We'll find out in a
conversation with election prognosticator and bestselling author Nate Silver all coming up
right after this. Hey everyone. Let me tell you about The Hustle Daily Show, a podcast filled with
business, tech news, and original stories to keep you in the loop on what's trending. More than
two million professionals read The Hustle's daily email for its irreverent and informative
takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or
less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation
of the tech world and beyond.
Boy, do we have a show for you today, one I've been looking forward to for a long time,
because Nate Silver is here.
He's the best-selling author of The Signal and the Noise.
He's also the author of a new book called On the Edge, The Art of Risking Everything.
It's out this week.
It's one of the best books I've read in a long time, and I'm really stoked for this conversation.
So, Nate, welcome.
So great to have you here.
Thank you so much, Alex.
I really appreciate it.
Your book goes into this group of risk takers.
You call them the River.
We'll get into that in a minute.
But I'm just curious how you think about risk taking because risk is something that's all around us all the time.
Some people have frameworks for it.
Some people don't.
A lot of people are risk-averse.
A lot of people sort of approach risk as it comes.
But you've thought a lot about risk.
So tell us a little bit about the way that you think about risk.
And is it good to take a lot of risk?
So that's a lot of good questions.
I think in general, most people are too risk-averse.
Although in the book, you meet lots of exceptions.
In the book, you meet some of the people who are overly into risk, who are degenerate gamblers.
We sometimes call them that; it can sometimes be a term of affection, sometimes not.
But in general, look, we have millions of years of human evolution and societal and
civilizational evolution, which is premised on like barely eking out a living in a subsistence
lifestyle where you might have a life expectancy of 40 years.
If you catch a disease, there's a good chance that you'll, that you'll die from it.
And so most people are safety oriented and sometimes don't take enough opportunities where they
can better their life or change their life, or take what I would call a +EV, meaning positive
expected value, gamble.
And there are studies that show, for example, from Steve Levitt, the famous economist: when people flip a coin to make decisions and it comes
up heads and they make a change, meaning like changing careers, moving locations, getting out
of a relationship that you feel ambivalent about, on average, they wind up happier.
So that's kind of like the short version, but people should also be better about how they measure risk. People have a lot of sloppy heuristics when it
comes to assessing potential courses of action. You know, I come from the world where my kind of
lodestar is poker. I took kind of a low risk consulting job out of college and quit that
after a couple of years to go play poker professionally during a so-called poker boom years of the
mid-2000s. Now, in poker, you can calculate things quite precisely, right? There are 52
possible cards. You can run the permutations and combinations for the order in which they might
come out. Real life is not that easy sometimes. But yeah, what the book is trying to do is give you
like a guide to the mentality of people that are in this world. And it's mostly about people who
are taking mathematical, quantitative types of risks. So, you know, poker players or venture
capitalists or hedge funders. But there are also people who take physical risks. There's like an
astronaut I talked to in the book and like an explorer, an NFL player. And oddly enough, actually, they have some of the same types of mentality. Being cool
under pressure is obviously a huge thing. Not being results oriented is a term we use a lot
in poker where you have imperfect information. You have to make the best choice you can
given the information you have. And sometimes you have to make a choice quickly because in business
and other things, if you wait to have perfect information, well, somebody else is already
taking advantage of that opportunity. And so being kind of very focused on process.
and being focused on the long run and having a long time horizon, these are all helpful qualities.
Right. And we're already in it here. I mean, we're talking about expected value. Now, one of the things that perked up for me when I was reading the book was you bring up expected value and talk about it a lot. And I, you know, thought, oh, this is an effective altruist term. We've talked about EAs on the podcast in the past. And we're going to definitely talk about how effective altruists think about risk and especially the way that that approach to risk has been reflected in the business community and the tech community in particular. But can you just define what expected value is?
Because clearly this is something that extends beyond EA and is also like, I think,
pretty foundational when you're thinking about the type of risks you want to take.
So expected value is what outcome you get on average in situations where there's randomness
or uncertainty. So if you invest in a portfolio, you start a fund to invest in whatever 20
early stage startups, then the expected value might be that you expect a third of them to
never make a dollar, right? A third of them to break even and then a third of them to make money.
And the question is, what's the chance of like a 100x or a thousand X outcome in that,
in that bunch of startups that you invest in? Poker, it's even more clear since there is literally
an element of randomness in poker. But that's all it is. It's a situation where, it's important
to make the distinction, by the way, it's a situation where you are playing out a scenario
some number of times, or at least some theoretical, multiverse version of you, right?
You know, if there are 52 cards in a poker deck, then every card will be dealt one time or another. And figuring out kind of what the average result is in those
situations. Now, look, if it's a one-off decision, should you risk your life or something
when the risk of death is tangible, maybe not the best framework. But for repeated decisions,
even things like, let's say you're going to work, and there's one route that's faster on average,
but there's a drawbridge that's unpredictable, and if you hit the drawbridge, then it'll be slower.
That's a situation where you might think about, like, okay, if I get in five minutes late, does it matter,
or will my boss, like, fire me, that kind of thing.
But, like, I think it's actually in some ways a fairly intuitive concept when we have repeated practice with uncertain situations.
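Silver's drawbridge-commute example can be sketched as a quick expected-value calculation. The probabilities and travel times below are made up for illustration; they are not figures from the episode:

```python
# Expected value: the probability-weighted average outcome
# over many repetitions of the same uncertain situation.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Route A: a fixed 30-minute commute, every time.
route_a = expected_value([(1.0, 30)])

# Route B: 25 minutes normally, but an assumed 20% chance the
# drawbridge is up, which turns it into a 45-minute trip.
route_b = expected_value([(0.8, 25), (0.2, 45)])

print(route_a)  # 30.0
print(route_b)  # 29.0 -> faster on average, but higher variance
```

Over repeated commutes, Route B wins on average even though any single trip might be much worse, which is exactly the repeated-decision framing Silver describes.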
Right. And I get to think another way I just thought about it is, and tell me,
this is right. Like if you're thinking about asking out like a romantic prospect and you think if,
all right, if I do this a hundred times, they're going to say yes 60 times and no 40 times based
of like my interactions with them and all the people I've asked out in the past. That would be a
positive expected value thing to make you go and ask them out because the average, if you take all
of it, is a yes. Yeah, look, and I think it gets into slightly cliche life advice. But I generally think
people who are willing to take a risk and ask for things they might want, right,
whether it's a romantic partner or a new job or a friendship or whatever else, you know, I think
they're usually rewarded by that. I mean, people are often paralyzed by uncertainty where
they do nothing, and that can often be the, the riskiest course of action. If you're taking
physical risks, certainly, if you're in some type of military battle, just standing there and
doing nothing when you're getting shelled is the worst course of action. Maybe you want to
retreat. Maybe you want to attack and fight back and try to disrupt the source of the whatever incoming
missiles. But the kind of, you know, the fight, flight, or freeze reaction. I mean, sometimes
people freeze when that's the worst possible choice. So you divide society into basically two
different groups, right? This is like a good foundation for it. The first is the river, people that
think about life in these terms and are willing to take risks. And then the village, which is
I guess, like people who want that safety.
Yeah.
Talk a little bit more about the river and the village, and why was it necessary to make that distinction?
I mean, a couple of reasons.
One is that I'm a very, like, geographic kind of visual thinker.
So I imagine things as, like, kind of a mental map.
The river is kind of a term borrowed from poker.
So in poker, the last card that's dealt in Texas Hold'em is called the river,
but also poker kind of comes from the Mississippi riverboats, right?
That's its origin, where you have different people with lots of time on their hands,
people who like to take risk, and that's where poker emerges from.
But what I found in writing the book is that like these different communities that we've
already touched on, so like poker or VC or crypto or to some extent even effective altruism,
although they're a little bit different because they're not kind of, they're altruistic,
at least in principle.
You meet the same types of people, which are people who are very analytical, so good with
numbers, good with probabilities, crossed with being very competitive, right?
Sometimes in a way to make money, but also just wanting to, I mean, Elon Musk is insanely competitive, right? You're getting people who are at the extreme right tail of
how much they care about proving themselves to themselves or proving themselves to others.
That combination is very powerful. The counterpart to that is what I call the village,
and that's like a little bit more of a familiar term. I mean, I think of the village as kind
of academia, government, especially when you have a democratic or progressive government, academia,
journalism, nonprofit sector, fields like that where it's less individualistic, it's more risk-averse, you're maybe more concerned about your social status, right? In the village, the ultimate
punishment, although this might be changing a little bit, is ostracization or cancellation might be
the more modern term for it. You know, if you look at how political parties behave, there's a lot
of groupthink. When Joe Biden dropped out, and full context for the listener, that happened very
recently before recording this, when Joe Biden dropped out, all of a sudden, every single Democrat
goes ahead and endorses Kamala Harris. They're all kind of playing follow the leader, which is
kind of very different than the poker player mentality when you're trying to be maybe more of a
lone wolf, or in Silicon Valley, where they're explicitly trying to be contrarian and look
for bets on companies and ideas that have been overlooked by the rest of society.
And so your thesis is that the river is winning? Why is that? I mean, if you look at
the profits for venture capital. I mean, the top decile firms are making rates of return of 20%
per year. That's really high. I mean, finance continues to grow, meaning Wall Street
as a proportion of the economy, Las Vegas continues to grow every year. I mean, part of it is that,
look, it doesn't necessarily mean that, like, the average VC fund is doing that well. Like,
the non-elite firms, their returns aren't particularly extraordinary. But if you're looking at
what kind of happens, I mean, how do you end up with people who wind up being worth $200 billion?
They almost, by definition, have to be both really lucky and really good and keep doubling
down and making more and more bets.
I mean, Elon Musk, for example, chose a very incentive-compensation-heavy deal with Tesla
and the other companies that he's been in, even something like investing in Twitter or buying
Twitter when he kind of has his life made.
And, you know, I think buying out X, I suppose, that's kind of a high-risk thing in a way.
It's not quite the same as the risk you face if you're, like, struggling to make an income and you cross some international boundary as a refugee; that's also an admirable, maybe more real risk in certain ways.
But the people who keep doubling down and are skilled enough to win not every bet, but more bets than not are the people that by definition in a world of 8 billion people kind of wind up on top.
And, like, the very richest people, the top 10 richest people in the world, are worth about twice as much as they were 10 years ago. You keep seeing this tail get pushed outward and outward. And by the way, those people are
by most definitions, to some degree or another, all self-made. If you inherit a huge fortune,
you don't have that much incentive to keep gambling with it. But if you're kind of selected for
being a maverick, rebellious founder, and a lot of people who are in that department
had some type of trauma or challenge in their childhood,
where they kind of keep feeling, over and over,
like they have to prove themselves
and they feel like they have some duty to themselves
or others to continue to compete and gamble,
then that's where you get the extreme part of the distribution curve.
Right, and we're going to go through some of these things,
including poker, VC, effective altruism, AI.
But I first want to ask you a couple questions before we move on.
First of all, okay, so you're saying,
basically capitalism rewards risk.
Is this such a new idea?
I mean,
this seems to be something
that's fairly self-evident.
I think it is self-evident.
I think, though, also, you know,
it can take a long time
for industries to mature and play out.
Like, poker is a good example
where if you took like a,
I mean, now you have what are called
computer solvers in poker.
So a solver is literally,
technically speaking,
the Nash equilibrium.
It's a game theory optimal solution to poker.
Now, the solution is very complicated.
It involves a lot of randomization and mixing different strategies even with the same
hand. But I think people don't realize what happens when you have exponential growth, right? You have
exponential growth when the world grows at 2% per year globally or 3% per year. That begins to add up.
But also when you have sectors like tech that grow at 10% per year, that really begins to add up.
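Silver's compounding point can be checked with a couple of lines of arithmetic. The rates mirror the ones he mentions; the 50-year horizon is an arbitrary illustration:

```python
# Compound growth: (1 + rate)^years. Small rate differences
# produce enormous differences over long horizons.
for rate in (0.02, 0.03, 0.10):
    growth = (1 + rate) ** 50
    print(f"{rate:.0%}/yr for 50 years -> {growth:.1f}x")
```

A 2% economy roughly 2.7x's over 50 years; a 10%-a-year sector grows well over 100x, which is why a fast-compounding sector like tech ends up dominating the whole.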
So it's like not a new idea, but the kind of manifestations of it. And also the fact that you're
now getting into technologies like AI. And if you read the book, I have complicated feelings
about AI; I kind of present arguments for and against p(doom), so-called, the chance of
catastrophic outcomes from AI, as well as a chance of very good outcomes. But, you know, this is kind of
the trajectory of capitalism, is that people keep making more and more bets. And also, in some
ways, government capacity in the U.S. is getting, I believe, somewhat weaker. There is still a very
large governmental sector. I mean, you know, there are a lot of taxes. The federal government's
very active in state governments, but like, look at AI.
It's not like the Manhattan Project, which was developed by the U.S. government.
It's being developed by the private sector.
You know, medical innovations happening more and more in the private sector, for example.
And look at things like social media even.
I mean, you know, someone like the owner of Twitter or a big technology platform
might have more power to affect how the world works than even many governments
around the world do.
So I think it's not new.
It's just we're kind of like seeing the manifestations of it.
And despite a lot of missteps from Silicon Valley and big tech,
I mean, you can point to lots of failures of judgment and lots of personality types that are flawed.
The fact is that they do keep growing and keep winning and become a larger
and larger segment of the American economy.
Right.
It's almost like there's so much more action happening in the river, quote unquote, now that
there's the potential to win even more. But, you know, we'll discuss this throughout.
I have one more question for you before we, we keep going, which is that, I mean, you pointed
to Democrats as like a fall in line type of group, but I mean, I'm not even saying this from
like a partisan standpoint. The Republicans certainly are a fall-in-line group. If you, you know,
see what they've done with Trump, for instance, like they are definitely a group that doesn't really
tolerate, like, you know, sort of the lone wolf type of thing and standing out and speaking out
against the consensus. For sure. I mean, and this is kind of why I've always felt like I never
really fit in in politics in general. I mean, you know, a lot of people in the river are people
who are dissatisfied with the two-party system. They might over-index toward third
parties, for example, or over-index toward more centrist or libertarian candidates, for instance.
But yeah, politics in general is about in-group coalition building and reinforcement, right?
You don't want people to think for themselves particularly much.
There's a lot of social signaling.
You have to form consensus around a nominee.
And so, yeah, part of it was, part of the inspiration for the book was me feeling alienated as part of a world where I was mostly known for covering politics and kind of feeling like I was bashing my head up against a wall where, you know, I think political partisans are happy with you if you say their candidate is winning and unhappy if you say they're losing in your forecast.
and there's not a whole lot of nuance and complexity beyond that.
But there is this group of people that are more numerous than you think
that do think in terms of probabilities and expected value.
And, you know, I mean, it's a common personality type on Wall Street and in Silicon Valley,
which, again, are like, you can have mixed feelings about them,
but they are industries that are kind of, in some ways, eating the world more and more.
I think that, again, that combination of like the analytical part mixed with the competitiveness
and the desire to prove people wrong,
it's just some type of like skeleton key
for unlocking high variance.
I mean, by variance, I mean,
you can also have like very bad outcomes.
We talk a lot about SBF in the book,
who not only ruined himself,
but also did a lot of damage to other people.
And there are people who believe that AI
has some very serious tail risks as well.
So we're talking about this idea of the river,
people that are interested in taking chances,
a little bit more risk forward and how they're winning.
And it does also seem like maybe society is moving in a way that indicates that it's
feeling these shifts and responding to them.
And I think, you know, we're going to go, again, we're going to go through all this
effective altruism, AI, VC.
But we're going to briefly touch on this sort of, I think you talked about like a poker
moment or the growth of poker in the U.S. and worldwide, whereas a few years ago, the
World Series of Poker was like held, uh, not televised, not really paid a lot of attention to.
And now it's become this thing that's like on ESPN, you've been in the World Series of
poker. You've been, you've done quite well in the event. But it's become almost like,
maybe tell me if I'm wrong here, I think it's the most watched, like, non-sport sport.
Like no one's like, you know, running around and stuff. But like it is very, uh, captivating on
ESPN and people really pay attention to it. Poker players have become
celebrities. And so what's behind the rise of poker? Is it just a reaction to the fact that
people see that this type of river-style thinking is going to be a winning ticket for them?
I think that's part of it is that, you know, if you're in fields like finance or tech,
I mean, all the, you know, all the guys I know who work for hedge funds or the women who
work for hedge funds, like poker is like your training mechanism to practice decision-making
under uncertainty. And I think it's actually fairly similar in a lot of ways.
It partly grew, so by the way, we just had the main event of the World Series of Poker back in July, and it set another record with over 10,000 participants paying $10,000 each, so you have a $100 million prize pool, basically.
I think ironically the pandemic helped to boost poker.
So part of what happened is that people were trapped, I guess, in isolation or indoors, began playing a lot of online poker games.
And when the real world opened back up again, there were record numbers of players who felt stir crazy and were playing in poker tournaments.
Like, I think, look, I think we talk a lot in the book about the gender issues in poker.
I, you know, my podcast partner, Maria Konnikova is a great female poker player.
There are some great players we talk to in the book.
I do think also you're in a world, however, where a lot of men feel a little bit lost.
And, like, poker provides an opportunity for a bonding experience,
that you can go and play poker, hang out with the guys and the girls,
and have some degree of socialization and competition.
I mean, it is a game that combines a lot of things that I think are very interesting.
There's math, but there's also a lot of people reading.
There's risk.
There's reward.
It feels really, really, really good to, like, make a deep run in a poker tournament.
Where everyone else has been eliminated and you're kind of one of the survivors and your chip stack grows over and over again.
I mean, poker is also a classically very American game.
It did kind of originate in the American, you know, Mississippi River Delta,
although now you have people from all around the world who play the game too.
But there's something about, like, part of it, I think, Alex, is
I think we're getting more bifurcated in our risk preferences,
meaning that there are some people now who are, you saw during COVID,
that some people are very risk-averse and that might be sensible if you're in a high-risk situation or setting.
And some people are more yolo.
And so instead of having this nice bell curve, because you want both, right?
You want people who are cautious in society and you want people who push the envelope,
and probably society benefits when you have a mix of those in the aggregate.
You have such a great line in the book.
You say people are bifurcating in their risk tolerance.
You've got the Netflix guy in the country and the strip club guy in Miami.
Yeah, you got, like, I mean, it feels like we have more and more options,
but options can be, like, paralyzing
and crippling to some people.
I mean, I think people have felt there's less and less trust in almost every institution
in the U.S., less trust in media, less trust in government.
In COVID, people often felt like they got contradictory advice and had to make up their
mind for themselves, and that can be both wonderful and terrifying.
If you feel like I kind of rule my own roost, then that can be great for some people
who are comfortable with uncertainty and optionality and things like that in a more virtual world too
you can like have more influence with more people around the world but if you're someone who's like
paralyzed by choice and just want safety or not paralyzed but just in a life situation where
where you have things to take care of you have kids to take care of or health problems to take care
of you know that can be more challenging and i i kind of worry that we're having a certain
type of, new type of inequality almost where people who understand risk and understand new
technologies like AI and understand how to be entrepreneurial and how to kind of craft their
own way in life, have like never done better. But then, but then it's harder for the average
person, I think, to cope with this. And sometimes, I don't mean this in a conspiratorial
way, but sometimes things are kind of like rigged against them, right? If you go into a casino
and play slot machines, literally the big casino companies,
Caesars and MGM, literally hired teams of analytics nerds
to figure out how to manipulate those slot machine probabilities
to make you gamble more and more and more.
Yeah, it's amazing.
It's like all marching down to zero.
It's just the question of how you get there.
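The "marching down to zero" point is just the house edge compounding over repeated bets. A minimal sketch; the 8% edge is an assumed, illustrative figure, not a number from the conversation:

```python
# Expected bankroll against a house edge: each bet of size b
# loses b * edge on average, so n bets cost n * b * edge in expectation.
def expected_bankroll(start, bet, house_edge, n_bets):
    return start - n_bets * bet * house_edge

# $200 bankroll, $1 spins, assumed 8% house edge, 1,000 spins
print(expected_bankroll(200, 1, 0.08, 1000))  # 120.0
```

Any individual session can win, but the expectation only moves one direction, and the rate of decline scales with bet size and speed of play, which is what the slot-machine analytics teams are optimizing.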
Although I will say the best way to play slots is you go play penny slots.
I know they have worse odds, but you put five bucks on a card,
play those penny slots,
get like three drinks, and your night is made as you play. And there can be good, look,
the word I use a lot in the book is agency. If you're a person who has high agency and you say,
you know what, I'm spending an amount of money that I can afford. I'm going to go play slots
or blackjack. And I have 200 bucks in my wallet. I'm not going to go back to my room and get
any more cards or chips. And like, I'll get free drinks out of it. And now and then I'll win and
have a good story. Then like, I think that's totally fine, right? I mean, I'm kind of at least a
lowercase-l libertarian. So I'm not in favor of, like, banning any of
this. But, you know, people who are prone toward different types of addiction or
manipulation, them I worry about more and more. I mean, most people are not problem gamblers,
but like problem gamblers gamble so much, they make up a lot of, you know, probably not the
majority, but a decent share of casino revenues. And this is where we sort of put your river
versus the village thesis to the test and ask if the river is really winning, because you have
this stat in your book saying that Americans alone lost $60
billion in gambling in 2022 and then an extra $40 billion in unlicensed gambling and $30 billion
in state lotteries. So, I mean, is it that the river's winning or like a select portion of
the river that actually understands what it's doing as well? A select portion of the river.
Because like another game, for example, that in a certain way is kind of rigged against you
and you have to be careful. Let me be careful about how I say this. But like, but sports betting,
the problem with sports betting is that if the casino, if the sports betting site,
thinks you're good,
then they'll probably
limit you from betting
more than a few dollars a game.
It varies from site to site.
Draft Kings, for example,
is more eager to limit people
than Caesars, at least in my experience.
But yeah, I mean, sometimes it's,
you know, the river isn't necessarily
the average person in a casino
in Las Vegas or New Orleans or whatever.
And by the way, I should say
both of these communities
are communities of elites.
relatively small numbers of people and the average American is trying to navigate between these.
And that's, by the way, why they have all these rivalries.
You know, when you saw it's a little bit, if you're not a politics fan, this is a little bit esoteric.
But when you saw like Bill Ackman and hedge fund people going after like the presidents of Harvard and MIT and whatnot, that was a classic river versus village confrontation.
Yeah, our listeners are familiar, for sure, yeah.
Um, but like, yeah, Harvard is a classic village institution, and hedge funds and VC are
classic river institutions. So that was a fight between the two. But you could argue Harvard's
a hedge fund itself, right? Well, I mean, that's a deep irony of, of all of this, right?
Is I think, and you can point out lots of hypocrisies on, on both sides that Harvard does like
print money with a huge endowment. On the other hand, you know, lots of,
people in the river think of themselves as being nonpartisan and above politics, but,
but you see that partisanship is a very powerful drug. And I'm probably not in a mood to
point toward individuals and say, oh, that person's a partisan and that person's not. But like,
they often get like quite pilled, quote unquote, by politics and, and it's, you know, are not
always able to remain neutral. They tend to wind up deep team blue or deep team red
before too long.
Politics can be intoxicating in that way.
So, yeah, both these groups
are full of personality flaws
and although I'm kind of
more in the river
mindset, I have experience in both
and part of what the book's trying to do, like,
you know, my pitch to people
in the village is like, I can
help you to understand
how these people in the river think.
Because I'm one of them and I talk to them and I think I have
very honest conversations with them.
But they can be trying to take advantage of you
or can be trying to run circles around you, and they are very capable of lots and lots of lapses
and errors in judgment, as well as just things that are kind of self-serving in the way the
different technologies are regulated, for example. And we will get into that in a moment. Also,
for listeners, towards the end of this conversation, we're going to ask Nate for his like three
key poker tips. I'm excited for that. I recently joined like a once a month weekend poker game,
so this will help me, but we're going to hold off on that for a moment because we do
want to talk a little bit about sports betting and then get into the VC stuff. The sports
betting section of your book is fascinating. The first part that I found so interesting was,
or I would just say the most interesting part to me, was the fact that the casinos don't really
have such a sophisticated operation setting lines. They're waiting for professional
bettors, who have their own formulas or are crunching numbers, to make their bets, and then they
sort of adjust the line based off of what these pro bettors do. So effectively, if you get into a bet
pretty early on, like basically or immediately once it goes up, you might have better odds than if you wait.
For sure. No, look, you might have some, like, smart kid who's 25 years old who has, whatever, a PhD in economics or math, or not even a PhD, probably, but more likely is just good at math, and is getting paid $100,000 a year to go work for a casino somewhere. And, you know, the college basketball lines must go up, and he used, like, a computer program to estimate Duke versus North Carolina or whatever else, and, like, maybe he's talking to his colleague on the other side of the desk. But no, I mean, they are not that sophisticated in terms of setting lines. What they're good at is understanding how to account for information and the actual act of what is called bookmaking, right? Maybe, Alex, they think that you're smart, so the line starts out with the Kansas City Chiefs favored by two points.
When you bet on the Chiefs, they'll move the line to Chiefs minus three.
Whereas for me, they think I have the profile of a recreational bettor.
I bet on the Chiefs every week.
So when I make that bet, they're not going to change one thing.
They'll want to keep the line there to encourage me to bet, to bet more potentially.
So that act of gaining information is a traditional process of bookmaking,
and it's a market discovery, almost like an auction type format, a price discovery format.
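That weighting of who is betting, not just how much, can be sketched in a few lines of Python. This is a toy illustration; `update_line`, the sharpness score, and the one-point step are my own assumptions, not an actual sportsbook's algorithm:

```python
# Toy sketch of how a book might move a line based on WHO is betting.
def update_line(line, bet_side, bettor_sharpness, step=1.0):
    """Move the point spread toward the side a sharp bettor takes.

    line: current spread for the favorite (e.g. -2.0 means favored by 2)
    bet_side: +1 if the bet is on the favorite, -1 if on the underdog
    bettor_sharpness: 0.0 (recreational) .. 1.0 (known sharp)
    """
    # A recreational bet (sharpness ~0) barely moves the number;
    # a sharp bet moves it by up to a full point.
    return line - bet_side * bettor_sharpness * step

spread = -2.0                           # Chiefs favored by 2
spread = update_line(spread, +1, 0.9)   # a known sharp takes the Chiefs
print(spread)                           # line moves toward Chiefs -3
spread = update_line(spread, +1, 0.0)   # recreational Chiefs bet: no move
print(spread)
```

The sharp's bet drags Chiefs -2 toward Chiefs -3, while the recreational bettor's weekly Chiefs bet leaves the number where it is, which is the asymmetry described above.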
In your book, you talk a lot about sports betting, but the one thing that you sort of stay away from, or maybe it's not applicable, but I want to get your opinion on is the fact that this is potentially dangerous for the sports.
I mean, I think probably the Shohei Ohtani thing happened after your deadline, but we've already had, you know, Shohei potentially implicated in a sports betting scandal, or definitely implicated.
We don't know if he was involved, but, you know, who knows?
We may never find out.
We've had an NBA player suspended.
I'm sure there's been, you know, a lot of these platforms won't let college games
be bet on. Do you think that sports are getting a little bit too close to the gambling platforms
because, you know, we're going to just end up seeing scandal after scandal of people being
unable to resist. Like, gambling can be a compulsion, and especially if you have a way of trying
to swing the outcome. It can really change the balance of what might be one of the last pure things
in the United States or in the world, which is like sports played by the rules.
I think the leagues are getting a little bit greedy in some ways. So, for example,
in the NBA, Jontay Porter is a player who was suspended for betting on his own prop bets, right?
He would bet that he wouldn't get as many rebounds as the over/under line had, and mysteriously, he would find ways to miss rebounds and have injuries that required him to be taken out of the game.
So why is anyone betting hundreds of thousands of dollars on this obscure player?
Like, that doesn't make sense.
So having, like, what I call the kind of In-N-Out menu, where you only have like six items, but they're all done
pretty well. Like, that's more rational, where if you want to bet on, you know, big-time sports,
the point spread, the money line, which is just the odds that each team will win, or the over/under
line, like that's probably fine. When you have things like player prop bets, when you have
things that are based on news, there are many casinos that will let you bet on like who is
taken first in the NFL draft. If you work for ESPN and you're an NFL reporter or you work for
an NFL team, you're going to have a lot of inside information about that. So I think it's been
a little bit of a money grab in the short term and there's not enough focus on making it more
sustainable. I mean, look, I live right in New York City. I go to Madison Square Garden a lot. At one point
there were literally three different ads for three different sports books at New York Rangers games
on like the rink around the ice, the sideboards. Usually people care about category exclusivity
in such a narrow venue,
but they so didn't care.
I mean, I think it was overbought,
and you've seen some
consolidation,
where actually DraftKings and FanDuel
are doing reasonably well
they have a bit more market share
relative to the others
but it's always been a tough business
and the reason it's a tough business
is because, like, it's not that hard.
I mean, it's hard, don't get me wrong,
it's very hard to beat the sports books,
but when they offer this gigantic menu
of hundreds of different bets on every single game
in thousands of leagues around the world,
and, by the way,
there are nine legal sports books in New York
and however many gray market offshore ones,
so I can look for the best line
out of thousands of options,
it's not that hard to find a line
that I can beat by 4% on average,
which is what you need to beat the house.
You have to beat it by, you know,
they take about a 4%, 4.5% cut of every bet.
So I think it's always been,
in Vegas, sports betting has always been seen as an amenity,
A-M-E-N-I-T-Y, meaning, like, you come in,
there are certain types of customers that want to go in for the NCAA
tournament or for an NFL weekend or, you know, a big soccer match
and bet a few hundred bucks and gamble while watching the big screen.
And, like, sports books are fun, kinetic environments. If you go, like,
I was there for Copa America, I was just in Las Vegas, and, like,
it's fun to see kind of the Colombian fans
and the Argentinian fans and all the people who have action on the game.
Like, that's a fun piece of eye candy for the casino.
But sports betting's never been a major profit center.
It's like one or two percent of overall Vegas revenues.
And I think it's been like a little overbought in a way that isn't fully sustainable.
Okay, one last thing on sports betting.
I don't know if you guys started this at 538 when you were there.
But this was something that's always kind of annoyed me about the analytical approach to sports.
Is that, like, sometimes the sites will be like, this team
has a 70% chance of winning, or this player has like a 30% chance of hitting a home run.
Like you watch baseball on Apple TV right now and it's ridiculous.
Like, they adjust the win probability with every, every pitch.
And for me, as like a massive sports fan, I've always thought the word chance
is so wrong here because it's not a chance.
Like, it's not like you're putting a ball in the roulette wheel and letting it land
where it will.
It's actually something that's determined by human outcome, human
intervention here. And I just think chance is the wrong word for this. What's your response to that?
We can get into some existential questions here. I think it literally depends on the sport to some
degree. In a sport like hockey, for example, which there's a lot of luck in hockey, generally speaking,
teams don't have that large an edge. Like literally, if the Zamboni made a certain patch of the
ice slick in between periods and maybe the puck bounces and goes in the goal and like otherwise
it doesn't, certainly.
But look, this is actually a very deep,
Alex, existential debate about the kind of
nature of uncertainty.
Look, epistemologically,
you don't know, right?
It might be true that it's not literally
a matter of chance, although some things actually are
kind of random. Like, this is part of what, like,
chaos theory says, is that some
systems are irreducibly
complex, like weather systems, for example.
Now, I don't know if, like,
a baseball game falls in that
category. Um, but there is probably some element of, like, literal, a literal chaos, I
suppose. Okay. I feel like I could do a full show on this, but I'll accept the argument.
All right. One more thing, uh, along these lines: politics. So Polymarket, um, you know,
watching it predict like who's going to be the nominee, um, who's going to win the election. You can
bet on anything there. It's a betting site on sort of news outcomes. I mean, I think you're on the
board as well? I'm an advisor to Polymarket, that's correct. Yeah, an advisor. Are these
prediction markets getting to be better than actual polling in terms of predicting an outcome? So I have
my own polling-based model at my newsletter, so I'm a little bit partial to it. However, one thing about
polling is that it takes a few days to incorporate new information. You actually have to contact people on the
phone or online and have them take your poll, and you go back and do your weights and run your numbers,
whereas they get to react to things in real time.
So, for example, in the disastrous debate for Joe Biden,
was it June 26 or 27th or something like that,
you could see reflected in real time prices at Polymarket and other markets
changing on the chance that not just Trump would win,
but that Biden would drop out.
You could see Bitcoin reacting in real time
because Trump is seen as bullish for Bitcoin.
So, you know, these were not people who were waiting for information
to come in. These are people who were making their best guesses in real time. And that skill's
underrated too. Look, I'm a guy who loves to be rigorous. The book is very rigorous. It involved
three years of reporting and writing. The models we build are thousands of lines of code. But the
skill that people in the river have to make decisions based on incomplete information is often
vital. Because if you wait for complete information, then someone else has already made that bet
first, right? There's no more profit in that bet anymore. You have to be willing to make judgments
when the data is noisy. And that's kind of what makes the actual skin-in-the-game, risk-taking
communities actually have the potential to make profit, and makes, like, academia or the village
too slow most of the time. Okay. That's interesting. So you're a river guy,
but you're also a polling guy. Yeah. And the answer that I'm hearing you give right now,
is maybe the river will eventually have an edge on polling?
I think this is...
In politics, or am I reading this wrong?
I think this is changing too, in part because, you know,
the markets are becoming more like...
The politics markets are becoming more like sports betting markets,
which, again, are at least pretty smart
once you have price discovery.
I think the reason why is that everything is kind of becoming eaten by politics now.
And so you have more and more financial firms
hedge funds, investment banks, so forth, who think they have to like understand political risk
because political risk can affect things like different classes of stocks, how different industries
are regulated, interest rates, whether foreign countries, if you're investing in some foreign
exchange, is this country going to be stable or not? Will the leadership be good for the
economy, good for equities? So you have more and more investment, I think, and this is somewhat
firsthand knowledge based on going around and talking to people about consulting
opportunities and so forth. You know, I have noticed more sophisticated conversations happening
in the last few years. Some of these companies will trade in different ways, but like
prediction markets are are a part of that. I mean, the critique was always that when you just had
kind of hobbyists, it didn't get that far because you only have like one election every four
years. Building an election model is hard. People are are very partisan. But you have kind of more
institutional investment now in building models of things involving political risk or other
types of exotic derivatives where there's actually adequate liquidity and actually adequate
opportunities to trade, which I didn't think was as true, say, five years ago, 10 years ago.
That's crazy, Nate, because, I mean, we know that money has influence on politics.
Are we going to end up getting to a place where, like, I mean, I guess this was always the case,
but like companies or financial institutions that may end up placing big bets on candidates in one arena,
just work hard behind the scenes to manifest that outcome that they're financially linked to?
I mean, there are a couple of things here.
One is that there have been accusations before.
In 2012, for example, there were accusations that pro-Romney traders were deliberately buying up Romney stock at prediction markets
because they thought that would create favorable buzz
and therefore that's like free earned media, for example.
But remember, that's at a point 12 years ago where there's less volume,
so you have to spend less to influence, or manipulate,
if you want to use that term, the market.
Yeah, look, you do get into some gray areas here
where if you have like, you know, traditionally organizations
like the CFTC, which regulates commodities exchanges,
have not loved betting on real world events
in part because they were worried that it could create moral hazard, for example, you know,
if you're betting on some insider trading case or something like that, then you have access
to, like, manipulate the markets.
However, the fact is that, like, you know, if I were talking to the CFTC, this risk is priced
into lots of commodities and so forth anyway, that when the Donald Trump price
increases, it's probably good for maybe defense contractors or Bitcoin or
whatever else. I don't want to speculate too much. That's not the part of it that I do.
But, like, yeah, you know, political risk has huge effects around the world.
If there's some, you know, dictatorship that emerges in what had previously been a capitalist
country, then that can have profound effects on the global economy. Things like, like, for
example, if you're investing in Nvidia or semiconductor supply chains, then understanding
the China-Taiwan conflict is going to be, like, very important potentially.
And so these things are things that people have to price anyway.
And the argument for prediction markets is that they give you a more efficient and explicit way to do it.
Okay, let's talk about VC and founders.
Yeah.
We're like 40-something minutes in.
We have to do this.
It's the show mandate.
But VCs and founders, they think about risk in really different ways, which is another thing that you highlight in the book that to me was surprising.
So can you talk about, like, the different types of risk that a VC and a founder try to take on,
and how does that influence what they do?
Yeah, I mean, the main difference is that most VCs want founders
who are going all in on an idea with a very long time horizon.
So for 10 years or often more.
I mean, these companies can have, I mean, the J-curve means you can often take 10, 12, 15 years
before your fund is actually making money, even if you've made good investments.
Right.
I love how you call these founders hedgehogs.
So this comes from...
Basically, get in that hole and keep burrowing.
Yeah, so this comes from the typology from Isaiah Berlin,
the hedgehog versus the fox, where a fox is crafty and kind of roots around for different things,
and it's kind of like a hunter-gatherer type, right? Whereas a hedgehog very much, like,
digs in and stays, burrows in the hole, and is very much, very much all in. Um, so for a founder,
I mean, being a founder is, I think, legitimately often a very risky thing to do. It's pretty
rare for a company to become the next Google or SpaceX or whatever else.
By definition, you're pursuing an idea that probably is not adequately addressed in the market.
Usually the market's smart.
It's for a good reason that your idea wouldn't work potentially.
And so you have all these VCs who are foxes, right?
They'll say, yeah, actually, we don't want to have too much domain knowledge.
You talk to like Michael Moritz, right?
He used to be a journalist and then became one of the best investors on Earth.
Right from Sequoia.
He's like, yeah, as a journalist,
you kind of parachute into a place and you have to, again, use incomplete information to make
decisions quickly. And you don't necessarily want to be a domain expert. Sometimes domain experts
get blinded or biased, miss the forest for the trees a little bit. You want to be like a good
generalist. Whereas for a founder, you want them to solve a problem in a different way
and devote all their resources to it and be a true believer in themselves, right?
You know, Paul Graham, another VC I talked to, says you need founders to be optimistic.
It's kind of irrational in some ways.
You would think you want a founder who's a perfect assessor of risk and very well calibrated.
But if you're not like a true believer in your idea, then it's very hard to sell other people on it.
And kind of in a world where things are getting more.
and more quantified and analytically driven.
Maybe you need, maybe there's more of an edge now from being a true believer and saying,
I don't care what the barriers are.
I don't care what the conventional wisdom is.
I'm just going to kind of plow through things.
Now, to be clear, this often ends in disappointment or ruin.
And if you're a VC, then you can collect 15 hedgehogs into a fund.
And then that turns out to be a very nice thing.
I mean, somewhat contrary to the conventional wisdom, before I
started this, before I knew anything about VC, I thought, well, actually, it's like a poker tournament
where you lose money most of the time. Like, that's not true. If you talk to, like,
Marc Andreessen at Andreessen Horowitz, or funds like that, or Sequoia, they'll say, actually,
we make money from maybe 60% of our investments, they return something, and then 20% return a lot. And that's,
it's actually a pretty nice business if you are in a top-tier firm. But that's from aggregating
risk and that's again, that's where you realize expected value is if you can literally kind of
run the experiment a hundred times, you can be almost guaranteed to make money. We run simulations
in the book where it's like, if you really are returning 20% ROI per year on average with a lot
of variance, you're almost guaranteed to make money. And there's enough empirical evidence
that, like, oh, even if a certain A16Z fund has a down cycle
and only makes 11% per year for five years,
it's still better than you can normally do in the S&P 500 or things like that.
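A toy version of that simulation idea (my own sketch with made-up payoff numbers, not the book's actual model): draw many independent high-variance investments whose average return is positive, pool them into funds of 15, and see how rarely the pooled fund loses money.

```python
import random

# Hypothetical payoff distribution, loosely matching "60% return
# something, 20% return a lot" — illustrative numbers only.
def one_investment(rng):
    r = rng.random()
    if r < 0.40:
        return 0.0                       # 40% total losses
    elif r < 0.80:
        return rng.uniform(0.5, 2.0)     # 40% roughly return their money
    else:
        return rng.uniform(3.0, 30.0)    # 20% big winners carry the fund

def fund_multiple(rng, n=15):
    """Average multiple-on-invested-capital across n equal-sized bets."""
    return sum(one_investment(rng) for _ in range(n)) / n

rng = random.Random(0)
funds = [fund_multiple(rng) for _ in range(10_000)]
losing = sum(f < 1.0 for f in funds) / len(funds)
print(f"mean fund multiple: {sum(funds) / len(funds):.2f}x, "
      f"share of funds losing money: {losing:.0%}")
```

Each individual bet loses outright 40% of the time, but pooling 15 of them makes a losing fund rare, which is the "aggregate the hedgehogs, hedge the risk" point.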
And so it's a really good,
it's a really good business to be able to find these dogmatic founders
who are taking huge risks and then have a piece of all of them
and hedge that risk.
Right.
And you talk in the book about how Silicon Valley marries risk-tolerant venture capitalists
with risk-ignorant founders.
Basically, founders go all in.
and it's win or lose.
I mean, they'll make a salary, probably.
But basically, they could do all this work,
destroy themselves physically for nothing.
Whereas for a VC, that's okay,
as long as they pick enough.
The VC mindset is such that when you had FTX blow up,
I guess it was Sequoia, said,
yeah, you want high-variance bets,
and so we would make the same bet again.
They'd do it again. Which is, they lost a lot of their,
I would say, sheen, Sequoia did,
with the FTX thing. But also, I think that you made a good point, which is that one of the things
that makes Silicon Valley unique, and it goes back to this quality of being willing to be
embarrassed, is they're willing to be embarrassed. Whereas, like, it's not necessarily, like, in the more
villagey status, you know, safety type of occupations and fields, like that actually matters a lot.
But for Silicon Valley, it's just another day.
Yeah, and look, you see lots of behaviors in Silicon Valley, some of which are maybe asocial
or antisocial behaviors, but everything from, like, tolerance for psychedelic drug use to tolerance,
which may not be a good thing, for, like, difficult, kind of, you know, asshole founders.
And look, I think within itself, like, Silicon Valley is kind of conformist relative to
itself in certain ways, but it really doesn't care about what the village thinks or what the
rest of society thinks, or at least it shouldn't.
I mean, I think you have like, like, you know, obviously certain VCs and founders are, are more
politically sensitive, but, you know, you talk to Mark Andreessen, he thinks the rest of the world's
becoming more conformist. And that's great for me because it means the returns to being like
different are higher now. If there's more of a penalty, a social penalty for nonconformity,
then I don't care about the social penalty, and I'll make money investing in cool things
and kind of make my own universe out of this. But isn't that a little bit, isn't that
bullshit, like, a little bit from Andreessen? Because, you know, as I'm reading the VC chapter, and I think
you even put this in, like, I started having these questions, like, all right, so VCs, if you're in a
position like Andreessen Horowitz, you're probably guaranteed at least a 15% return as you spread your
bets. Okay, so that's safety. VCs, they're herd followers, right? So, like, they are pack animals in a way
that, like, you would typically associate with the village. And then you ask, like, all right, well,
are VCs disruptors or are they incumbents? And in some way, you know, they are the incumbents making
money off of the disruptors.
So I'm curious what you think about that.
Like, Andreessen can talk a big game about time to build and change the world.
But with this whole, like, raise your hand up and fight type of mentality, like, aren't they
the man just as much as like Wall Street institutions are or the traditional village
institutions are?
For sure.
Or the comparison I'm sure they won't like that I'm making in the book is that it's a lot like
Harvard or Stanford.
Because a lot of these guys have become anti-big universities.
But, like, if you're Harvard and you're going
to be able to recruit, you know, an 85% yield rate from the best students around the world,
it's really hard to fuck that up, right? If you're a startup, then it's such a positive signal
to have Andreessen Horowitz invest in you that you'd kind of have to be crazy to turn them down.
And so, I mean, Andreessen even told me that it's a self-fulfilling prophecy at some point.
I respect the fact that Silicon Valley understands the importance of making high upside bets
and understands the importance of having a long time horizon, being willing to project out 10 or 15 or more years.
I do think sometimes that some of them are spoiled winners.
I say that slightly affectionately, but like they have a lot of things going for them.
They control a greater and greater share of the world's wealth.
And I understand, as someone who himself often gets annoyed by the village, I understand the, you know, the impulse
just to be annoyed by it. But, like, going back to being a little bit more apolitical, I think,
might be a better look, maybe, in the long run, because there is some risk of regulation.
I mean, certainly when it comes to things like AI in particular, then government's going to
have an opinion about that. In crypto, government's going to have a lot of say over kind of
how crypto works and what the upside potential of those businesses and those platforms is.
So I think they are a little bit asking for fights.
And again, I have this tendency, too.
I'm competitive.
I like to pick fights.
But like it doesn't always, it's not always leading to decisions that I think were good decisions at the end of the day.
When you kind of wake up and look back on things later on.
Yeah.
And look, I mean, I appreciate VC.
They find a lot of interesting things.
We talk about them on the show.
They're fun to speak with.
But with some of the top-end ones, there does feel like there's some of the same sanctimony
that you would find, like, on a Princeton campus, and that kind of the horseshoe theory.
Exactly. Like, you objectively have it really good if you're, like, a tech billionaire. You get to
spend all day meeting interesting people and investing in cool businesses and throwing cool parties
anywhere in the world. I mean, if you have infinite money, then that's a pretty
nice life. And I think I understand where the competitiveness comes from. But, you know,
we are seeing more signs of a tech backlash, and, you know, so far they're still making lots of money.
But I do think AI in particular is a sector where there is a pretty big appetite for regulation.
It's actually one of the few things that both Democrats and Republicans agree upon is they actually are concerned about AI.
Not as concerned as like the effective altruists or whatnot.
But the notion that, like, hey, it's going to take over your job, it's going to lead to discrimination,
it's going to lead to whatever else.
You're going to cede your agency to the machines.
Like, that is intuitively scary for most people.
And I think they have to be careful about that. I mean, you know, sports betting is a much lower stakes industry.
But, like, you know, VCs should have a long-term outlook.
And that includes the kind of political environment in which they're operating.
Because you could have a shift toward more regulation, for sure.
Totally. Okay. So we'll take a break now and we're going to come back. I think we'll try to talk about a handful of the following: Sam Bankman-Fried, effective altruism, AI existential risk, nuclear stuff, and Nate's tips for poker. So I'm going to try to get to at least three. We'll be back right after this. And we're back here on Big Technology Podcast with Nate Silver. He is the author of On the Edge: The Art of Risking Everything. You can see it if you're watching on the video here. I've plowed through it. Really great book.
Let's talk a little bit about effective altruism.
So, and Sam Bankman-Fried. We'll start with SBF.
Then we'll zoom out a little bit.
I guess, like, effective altruism takes this, like, expected value type of thing to the extreme.
And with, like, SBF, there's a moment in the book.
You meet with him as, like, the whole FTX thing is going down.
And he's such a compelling character.
It's, like, the third book I've read that has, like, a lot of SBF.
And I feel like my appetite for reading about him is just, like, endless.
But there's an interesting moment where, like, basically Sam says if he can flip a coin, I'm going to get details of this wrong, but flip a coin three times and two times the world is twice as good as what it is.
And one time the world is obliterated, he would do it every time.
Or even less than that, right?
If you win 51 out of 100 flips and you make the world twice as well off, then he literally would take that coin flip.
And I think he's actually not even being tongue-in-cheek about that.
I mean, he told Caroline Ellison that, in court testimony that I obtained.
He said similar things to, like, Tyler Cowen, the economist.
Yeah, it's a philosophy known as utilitarianism taken to a quite logical extreme.
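To make the arithmetic of that wager concrete (my own sketch of the bet as described, not a quote): a 51% flip that doubles the world versus a 49% chance of obliteration has positive expected value on a single play, but replaying it drives the probability of survival toward zero.

```python
# Sketch of the "51/49 double-or-nothing world" wager described above.
p_win = 0.51

# A single flip has positive expected value: the world is worth
# 1.02x its current value in expectation.
ev_one_flip = p_win * 2.0 + (1 - p_win) * 0.0

# But if you keep taking the bet, expected value compounds (1.02**n)
# while the chance the world still exists collapses toward zero.
def survival_probability(n_flips):
    return p_win ** n_flips

print(ev_one_flip)               # 1.02
print(survival_probability(10))  # ~0.0012: the world survives ten flips ~0.1% of the time
```

This is the core tension: an expected-value maximizer keeps flipping, while almost everyone else sees near-certain ruin.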
And I think this speaks to, look, I think effective altruists tend to be, A, I think SBF is not representative of the whole group, certainly.
I think they are very smart.
They ask questions that other people don't ask, which I always think there's value.
in doing. Um, but I do think... So explain what they are from your perspective. I mean, sorry,
I know this is tough to try to do in, like, a minute, because everybody has so many different
definitions of them, but you actually went through the reporting process of, like, meeting and learning
about EA, and then you can go back to, like, the weaknesses. So effective altruism is a brand, is the way
I call it, created by, um, Will MacAskill and Toby Ord, who are professors of philosophy, I think at
Oxford. And it kind of is what it sounds like. So the original idea was how do you more effectively
do altruistic things, for example, give more money to charity, right? If I give money to like
the American Red Cross or something, you can calculate that actually X percent of it is wasted
on administrative costs and then the way they distribute it is for things that may or may not
be good, right? And so, you know, the original idea was that if you give money to things,
for example, like mosquito nets in Africa
were found to prevent loss of life
and prevent severe disease
at a pretty low cost.
You can save a life for $5,000 or something on average.
So it began, I mean, I'm deliberately simplifying the story here
because the origin story gets a little bit convoluted,
but it's kind of like this short version of it.
It began as a way to do more effective philanthropy.
But then people began to apply it to all sorts of other things,
like animal welfare, for example.
You know, if I decide to eat a chicken or some chicken nuggets, right,
what is the utility for the chicken's life versus my pleasure
and my nutritional value in, like, eating those chicken nuggets?
So I know some EAs, for example, who won't eat chicken but will eat beef
because you can create a lot more portions out of a giant cow than you can out of a
chicken and cows are also kind of dumb and therefore they are valued less in some EA systems.
So it gets very weird, very fast.
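The chicken-versus-beef logic is just per-portion arithmetic. A sketch with hypothetical numbers (the portion counts and "moral weight" values here are my illustrative assumptions, not figures from the book or any EA source):

```python
# Hypothetical numbers for illustration only: portions of meat per
# animal, and a crude per-animal "moral weight" in EA-style accounting.
animals = {
    #          portions, moral_weight
    "chicken": (8,       1.0),
    "cow":     (1500,    4.0),   # even weighted 4x a chicken...
}

def welfare_cost_per_portion(name):
    """Moral weight 'spent' per portion of meat from this animal."""
    portions, weight = animals[name]
    return weight / portions

for name in animals:
    print(name, round(welfare_cost_per_portion(name), 4))
# The cow's cost per portion comes out far lower, which is the
# (strange-sounding) logic behind "eat beef, not chicken."
```

One cow feeds vastly more meals than one chicken, so under this accounting the per-meal welfare cost of beef is tiny by comparison, even if you value the cow several times more.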
And then kind of where the interest in things like existential risk and AI come from is that
if you think that there's a chance that something could destroy civilization, like
unaligned AI or, like, nuclear weapons or, like, bioweapons, for example, or climate change,
although climate change is not traditionally an area of EA focus, we can talk about that,
that has a very high expected value gain or loss based on what you do.
So, yeah, it's kind of taking this gambler's expected value maximization mindset and
applying it to all sorts of real world problems that it's not traditionally applied to.
And I'd say with kind of very, with very mixed success.
Right.
And the point is that this is sort of making its way into the business world.
And if you think, if expected value is sort of like your North Star and you think about, like, do I want to risk the company on this behavior that may have, like, a two in three chance of working but a one in three chance of doom. Most founders would never do that. But, like, SBF, for instance, seemed to be open to making this type of decision. Yeah, I mean, literally he told me that if you're not willing
to risk completely ruining your life, you're a wimp, was a term he used, and not undertaking
enough risk. And believe me, that is atypical. But, you know, you talk to, like,
Vitalik Buterin, who founded Ethereum, and he's like, yeah, I thought there was only a 10% chance
that it would work out like this, right? But if you make 100x your money in that 10% of the
time, then the expected value is positive. I mean, my issue with someone like Sam is more that
I think he's miscalculating a lot of things. If you talk to people who were in
his orbit, then, you know, he was a person who might make a good first estimate, but then not
refine it and become very hedgehog-like and get very dug in. And also not thinking about the
harm that would occur to other people. But, you know, look, I'd be the first to tell you that, like,
you know, I like to work on problems that are kind of closed-ended problems. So we were talking
before about sports and how, is it really random or whatnot? You can get into some philosophical
discussions there about sports or elections. What's really random? What's not? But
The fact is that you can represent these things reasonably well with the model.
Sometimes there are little fudge factors where you might round up or round down the model based on things
that are a little bit hard to quantify but that your experience tells you are important.
But these are kind of closed world problems that you can have repeated trials of.
I worry that sometimes the EAs are, like, operating in a world where you assume everyone has, like, a 200 IQ instead of us regular stupid human beings,
and often have, like, a lot of short-sighted rationalizations.
Because when I look at like work even in sports or something like that, there are lots of bad
election models out there and bad sports models.
And that's with relatively good data and closed systems.
So, you know, you can very easily make very bad models if you're not careful.
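As a side note, the expected-value arithmetic running through this exchange can be made concrete with a toy calculation. A minimal sketch: the 10%/100x numbers and the two-in-three/one-in-three odds come from the conversation, but the 3x payoff assigned to the "company works out" outcome is an assumption for illustration, not a figure from the episode:

```python
# Toy expected-value calculator for the bets discussed above.
# Payoffs are multiples of the initial stake; probabilities should sum to 1.

def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Vitalik-style bet: 10% chance of a 100x return, 90% chance the stake goes to zero.
vitalik_ev = expected_value([(0.10, 100.0), (0.90, 0.0)])
print(vitalik_ev)  # 10.0 -> 10x the stake in expectation, so clearly positive EV

# SBF-style bet: 2-in-3 chance of working (payoff assumed 3x), 1-in-3 chance of doom.
sbf_ev = expected_value([(2 / 3, 3.0), (1 / 3, 0.0)])
print(sbf_ev)  # 2.0 -> also positive EV, yet most founders refuse the 1-in-3 ruin
```

The contrast is the point: both bets are "positive EV," but only a pure expected-value maximizer treats them as equally obviously worth taking.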
Exactly.
And that to me is, like, the huge hole in effective, sorry, in expected value that, you know,
we've been talking about through this whole conversation, and I think you explicitly like talk about
this in the book that like basically expected value can be good for things that are quantifiable,
but the problem is that people will take it and say like, you know, this has an expected value
of this, but they're fudging it. They're really, I think you called it improv. And to me,
it sounds a lot like a religion. So this one, maybe this is from the book, maybe I just wrote this as a note. But there are strict values up until there's something that has self-interest, and then you can squeeze it in and sort of fit your expected value calculation. And these are some examples you gave about effective altruists.
They say that, you know, future lives are important, but they also are okay with abortion,
which is an ideological inconsistency, and I'm not taking a position on it.
They're okay with not having kids, although they value future lives.
They defend immigration restrictions because they say that, like, if we're anti-immigration, then, like, it means that there's a chance that more pro-immigration politicians will come in and allow more immigration. Like, it reminds me exactly of religion, where, like, you have these tenets and you have these self-beliefs, and you make the tenets work to advance your self-beliefs. It feels like the same exact thing. And I know religion is sort of like a pejorative in these types of worlds, but that's what it sounds like.
And that's a huge, huge liability when it comes to expected value. I mean, there are some similarities, right? Just like in some churches you tithe, I'm not sure how to say that word, but give 10% to the church. Some EAs believe in earning to give, where you go work for a hedge fund or something and then you give 10% of your income away.
Look, one of the ironies in the book is I think people who are operating in the world of like
VC or poker, in some ways, the profit motive grounds you a little bit and kind of gives
you maybe more of a BS detector. Whereas if you're a true believer, like the people
who worked for SBF, I think were actually very nice, conscientious people, but that may have
made them less skeptical than they should have been of like Sam himself. And don't get me wrong,
like I think a lot of EAs have done a lot of smart things with respect to like poverty reduction
and things like that. I think overall, having more focus on existential risk is a big win for EAs. I mean, there's been a paradigm shift in the awareness of and the conversation about AI and what risks and opportunities it might pose. So, look, I think they've done good for the world on that. I think it's a small community. It is fairly demographically homogenous, right? It's, you know, a lot of kind of 30-somethings, mostly white people, a lot of men, very concentrated in some areas like Oxford or Silicon Valley or things like that, maybe people who don't necessarily have that much life experience. So I think it's a
useful movement. But, like, yeah, there are things that, I mean, I think religion is a little bit too disparaging of it. But they proselytize, right? They want to spread the word. They tend to be pretty good at PR. When there's an EA book coming out, it has, like, big book release parties and things like that. And yeah, I mean, there are definitely some parallels. Yeah, I mean, look, my first introduction really to EA
was learning about it through SBF. I'm more impressed with it now than I was previously,
but there are still, like, some flaws. But it is a very interesting movement, and we'll be doing more about it on the show. Let me ask you this, then. And this is sort of what it all
builds up to, which is that EA is extremely influential in the AI field. Yes. And just
not even, not even just EA, just this similar thinking, River-style thinking, thinking in expected value. And then this sort of hypothetical that you had with SBF of, like, flip a coin
a hundred times: 51 times, the world is twice as good; 49 times, it's destroyed. And a lot of people
who think in this expected value mentality will be like, all right, I'll flip that every time.
Maybe not a lot of people. Maybe you assume he's the outlier. But then you think about AI. And it's like,
oh, is AI that exact calculation where AI, if it works out well, will make the world twice as good.
And if it works out poorly, it might destroy the world.
And do you want people with this type of thinking to be running AI projects?
Yeah.
So Sam Altman, who, believe me, I think is not in the same category as SBF.
I talked to different sources.
I mean, he also got ousted by EA-aligned folks.
Yeah.
But Sam Altman, and I talked to him, I think, in August 2022 for the first time, that was the main interview,
which was a good time because it was like when he was a little bit less guarded.
Right before.
Yeah, so they kind of knew what was coming with 3.5 and 4, and it wasn't quite public yet.
So he was, I think, in an optimistic and unguarded mood.
And he's like, yeah, I think that AI could end global poverty.
Like, he meant that very literally.
Like, it could end global poverty.
We'd have so much economic growth and plenty and automation of tasks that used to be burdensome for human beings that you could end poverty, he thought.
But he also thinks that it could also destroy the world.
I mean, I think the chance that he would put on it is not as high as some people's.
But he's like literally willing to make not the 50-50 bet that Sam made, but maybe a 90-10 bet, maybe a bet where there's a 10% chance of really bad things happening because he's convinced there's a 90% chance of really good things happening.
I guess that kind of gets more into expected value terms.
But then you get into questions of like what right does he have to make that bet for all of humanity?
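Silver's worry about who gets to make this bet has a quantitative edge, too: with the 51/49 coin flip from the hypothetical above, each individual flip has positive expected value, but repeating it makes ruin almost certain. A toy illustration, where the 51/49 numbers come from the hypothetical and the rest is just arithmetic:

```python
# Toy sketch of the repeated coin-flip hypothetical: 51% chance the world
# becomes twice as good, 49% chance it is destroyed.

p_good = 0.51
p_doom = 0.49

# Single-flip expected value, with the world's current value normalized to 1.
ev_one_flip = p_good * 2.0 + p_doom * 0.0
print(ev_one_flip)  # 1.02 -> a 2% gain in expectation, so a pure EV-maximizer flips

# But the probability the world survives n consecutive flips is p_good ** n.
for n in (1, 10, 100):
    print(n, p_good ** n)
# Survival after 100 flips is about 5.7e-30: near-certain destruction,
# even though every individual flip was "positive EV."
```

This is the sense in which a strict expected-value maximizer keeps flipping forever, while almost everyone else stops.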
And there are parallels.
I mean, when the Manhattan Project was experimenting with atomic weapons, there was one
experiment they thought had some chance of, like, actually igniting the Earth's atmosphere on fire.
It was very unlikely, but they figured it might be a one in one thousand chance of, like,
literally incinerating the Earth, right?
And, like, you know, who has the right to take that sort of chance?
At least the Manhattan Project was, at least, kind of an offshoot of the U.S. federal government, which is an elected body.
But, yeah, and look, when you have people
who are kind of taking risks on behalf of
everyone else, then they often wind up
being villainized by society.
I mean, that's an area where, again, the potential for, like,
I think political backlash is
fairly high
and, you know, it's also, like, if you
have these GPUs that have lots
and lots and lots of compute,
I mean, one good thing about
AI risk is that there are a finite number of entities that we know have enough compute, for now, to build top-tier models, large language models and other models. That might not be true as things get faster and faster, in five years and ten years, which could potentially be more dangerous. But I think, in one way, a reason to be optimistic is that, like, you know, something doesn't quite add up to me, where you have these, like, smart, rich nerds in San Francisco and Silicon Valley making bets for all the future of humanity. Like, something doesn't quite add up there. I mean, I would think humanity might have a tendency to want to have something to say about that, maybe, right? And the people that are closest to this are the ones that are most concerned, right? It's like, you met people who are so afraid of, I think, AI risk that they won't save for retirement. You met people whose p(doom)s were, like, in the 30 percent range, and they felt they were considered moderates in their community.
And yours is between 2 and 20%, which is interesting.
But it struck me that you took this fear of existential AI risk pretty seriously.
Look, the fact is that the relevant experts in the field are pretty unanimous that this is at least a risk that we want to monitor, right? So, like you just said, Alex, the fact that the people who are closer to it are more worried is worrisome.
Also the fact that, like, we don't fully understand the technology.
I mean, people call it like a giant bag of numbers.
How large language models do what they do is a little bit of a mystery, and it's considered, like, a little bit of a miracle. I mean, people now kind of take it for granted, in part because maybe we're at, like, a little bit of a plateau, kind of post-3.5 and 4.
We'll see where we are in a year or so.
But, like, you have this machine that, basically, can almost pass the Turing test, right? You can, like, ask it a question about anything, and some of the answers
are stupid, but some are quite ingenious and brilliant. It can draw pictures. I mean, there's just
like this thermostatic increase where you just get used to it for now. But, like, you know,
few experts in the field would have expected anything like this to occur 10 years ago, right?
The reason why OpenAI was originally a nonprofit is because it was enough of a long-shot bet that it wasn't clear if it would work. Oh, you just kind of throw a bunch of compute through this transformer architecture, and maybe it'll keep scaling up and produce a miracle where you have, like, a machine that has comparable linguistic intelligence to human beings. And all of a sudden it does, or at least it's pretty close.
I mean, that's kind of a miracle.
And, like, the fact that it's not fully legible what it's doing exactly, I think, ought to be frightening. All right, Nate, before we leave, I'd love to get your three tips for poker players. What are the three things that anybody who wants to be good at poker needs to know? I think the most important basic tip for poker is: be aggressive. If you're new to an activity, there's a tendency to take the middle ground. So if you play, like, a backyard game with people who have never played poker before, they tend to call too much when they should be raising or folding.
That mechanic of like, take more decisive action is a good decision-making framework across
any type of risk-taking, I think.
You know, number two is being cool under pressure, and partly that involves not trying to be
a hero, right?
Executing basic strategy, kind of, you know, because when you are operating in an environment where all of a sudden you're playing a $10,000 pot at the World Series of Poker or something, you're actually operating on different physical and brain chemistry. Like, literally, right? You have a stress response that kicks in, an adrenal response,
and you're kind of in, like, a flow state or a zone state, whereas some people will panic and go on tilt, and you have to be able to avoid that. That's something that can be learned
a little bit with experience.
And number three is being really observant.
One of the things I learned just playing a ton of live poker over the past three years
is that things like physical tells, getting a read on someone, having an intuition for
what someone's doing through their just general demeanor.
Like, that's a real thing.
I used to think this was like exaggerated kind of BS, but like it's a real thing.
I mean, human beings making decisions under stress are not very good at concealing information.
So whether it's, like, a micro-expression or something like that, or a betting pattern or a speech pattern, or involuntary things, you know, looking at someone's neck muscles, for example, or the way they're hunched over, what they're doing. If they look away, it can be a sign of strength.
And so just being attentive, the importance of attentiveness, is a big lesson, I think, for poker players.
And why don't I throw out a fourth one that I learned from your book, which is that you gotta bluff.
And you have to bluff.
Yeah. Poker is a game. The whole premise of the game is based on bluffing. There are lots of fun card games that don't involve bluffing per se, although they may involve deception. But the only reason in poker in theory to ever pay off someone when they have a good hand is because they are capable of bluffing. And even until you get to the very top 100 best poker players in the world, believe it or not, most poker players bluff too little and not too much.
Yeah, I'm always like, I want to hold on to those chips, but I'm learning after reading the book.
Don't do that.
No, you have to take risk to win a poker tournament.
It's unavoidable.
And so that's part of why it's an attractive metaphor for life, et cetera.
The book is On the Edge, The Art of Risking Everything by Nate Silver, who's with us here today.
It's out this week.
I encourage you to go grab it.
Definitely one of the more interesting books I've read in a long time.
Nate Silver, thanks so much for doing this.
Of course. Thank you, Alex.
All right, everybody.
Thanks so much for listening, and we'll see you next time.
on Big Technology Podcast.