Bankless - Superagency: The Bull Case for AI | Reid Hoffman
Episode Date: January 27, 2025

In this episode of Bankless, Reid Hoffman, co-founder of LinkedIn and author of Super Agency, explores how AI is set to transform humanity and amplify individual potential. We dive into Hoffman's vision of "super agency" and examine AI's potential to democratize technology and reshape society, along with an analysis of four distinct perspectives on its impact: pessimistic, cautious, ambitious, and balanced. Hoffman shares lessons from past tech revolutions, the risks of AI misuse, and how innovation can unlock a more optimistic future.

------

📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24 https://bankless.cc/spotify-premium

------

BANKLESS SPONSOR TOOLS:
🪙 FRAX | SELF SUFFICIENT DeFi https://bankless.cc/Frax
🦄 UNISWAP | BUG BOUNTY PROGRAM https://bankless.cc/Uniswap-Bug-Bounty
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🛞 MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle
🌐 CELO | BUILD TOGETHER AND PROSPER https://bankless.cc/Celo
🎮 RONIN | THE FUTURE OF WEB3 GAMING https://bankless.cc/Ronin

------

✨ Mint the episode on Zora ✨ https://zora.co/collect/base:0x4be6cd4d402fed49eb2de95fbc8e737e8ffd3e7f/22?referrer=0x077Fe9e96Aa9b20Bd36F1C6290f54F8717C5674E

------

TIMESTAMPS
00:00:00 Intro
00:09:10 Is superagency a superpower?
00:11:09 How to identify a humanoid AI agent?
00:13:28 Which people will get this AI superpower?
00:17:22 Propagation of AI
00:18:47 AI religions
00:24:31 Zoomers are e/acc?
00:25:56 Thoughts on Gloomers
00:29:09 Mainframe computers' relation with today's AI
00:37:03 Negative and positive impacts of AI
00:50:28 Benefits of AI to the average citizen
00:52:58 What if AI misuses/sells our data?
00:59:31 How AI will deliver value worth millions
01:02:17 Why innovation is safety
01:05:59 AI choices shaping our freedom?
01:10:07 America's approach towards AI
01:12:27 Is AI overhyped?
01:14:05 Closing & Disclaimers

------

RESOURCES
Reid Hoffman https://x.com/reidhoffman
Blitzscaling https://www.blitzscaling.com/
Super Agency https://www.superagency.ai/

------

Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Can you guarantee me that killer robots will never be built?
Killer robots are not the only existential risk for human beings.
There's pandemics, there's asteroids, there's nuclear weapons, there's climate change,
and the list kind of goes on.
And so you have to look at existential risk as a portfolio, namely, it's not just one thing,
it's a set of things.
And so when you look at any particular intervention, you say, well, how does this affect
the portfolio?
My very vigorous and strong contention is that AI, even unmodified at all, is net, I think, very positive on the existential risk portfolio.
Welcome to Bankless, where today we explore the frontier of AI. This is Ryan Sean Adams. I'm here
with David Hoffman, and we're here to help you become more bankless. The question for today,
will AI give us super agency, or will it be used to enslave us? We have Reid Hoffman on the
podcast today. He gives his bull case for AI, why it's good, why we should accelerate AI into
the future and how it will turn each of us into super agents. And to him, that equals more freedom
for everyone. I think the bankless journey is all about becoming a more sovereign individual. That's what
David and I have talked about since inception. And it's increasingly hard to imagine being a
sovereign individual without crypto, which we've talked a lot about, but also without AI.
Like, crypto gives you the ability to own things, but AI seems to be the ability to control your
own destiny. And that's why we're doing an AI episode today with Reid Hoffman to help stay ahead of
the AI curve. A few things we discuss: super agency; Doomers, Gloomers, Bloomers, and Zoomers; what could go right; how to use AI; American superintelligence; and finally, we end with the question to Reid: what if this whole AI thing is overhyped? Stay tuned for that answer. Super Agency is the title
of Reid Hoffman's book, which is coming out this week, if you are listening to it at the time of
release. And so this is all on the backs of that. Ryan, of the two co-hosts of Bankless, read the book.
I did not. And so I'm more along for the ride. I'm in listening mode asking a few questions here
or there. But it's really Ryan in the driver's seat for this episode. So I hope you guys enjoy the
episode with Reid Hoffman. But first, before we get there, a moment to talk about some of these
fantastic sponsors that make this show possible. Are you ready to swap smarter? Uniswap apps are
simple, secure and seamless tools that crypto users trust. The Uniswap protocol has processed more than
$2.5 trillion in all-time swap volume, proving it's the go-to liquidity hub for swaps.
With support for a growing number of chains, including Ethereum mainnet, Base, Arbitrum, Polygon, and zkSync,
Uniswap apps are built for a multi-chain world.
Uniswap syncs your transactions across its web interface, mobile apps, and Chrome browser
extensions, so you're never tied to one device.
And with self-custody for your funds and MEV protection, Uniswap keeps your crypto secure while you swap anywhere, anytime.
Connect your wallet and swap smarter today with the Uniswap web app or download the Uniswap wallet,
available now on iOS, Android, and Chrome.
Uniswap, the simple, secure way to swap in a multi-chain world.
With over $1.5 billion in TVL, the mETH Protocol is home to mETH, the fourth-largest ETH liquid staking token, offering one of the highest APRs among the top 10 LSTs.
And now, cmETH takes things even further.
This restaked version captures multiple yields across Karak, EigenLayer, Symbiotic, and many more, making cmETH the most efficient and most composable LRT solution on the market.
Metamorphosis Season 1 dropped $7.7 million in COOK rewards to mETH holders.
Season 2 is currently ongoing, allowing users to earn staking, restaking, and AVS yields, plus rewards in COOK, mETH Protocol's governance token, and more.
Don't miss out on the opportunity to stake, restake, and shape the future of mETH Protocol with COOK.
Participate today at meth.mantle.xyz.
Celo is transitioning from a mobile-first, EVM-compatible Layer 1 blockchain to a high-performance Ethereum Layer 2, built on the OP Stack with EigenDA and one-block finality, all happening soon with a hard fork.
With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users, Celo's meteoric rise would place it among the top Layer 2s, built for the real world and optimized for fast, low-cost global payments.
As the home of stablecoins, Celo hosts 13 native stablecoins across seven different currencies, including native USDT on Opera MiniPay, with over 4 million users in Africa alone.
In November, stablecoin volumes hit $6.8 billion, made for seamless on-chain FX trading.
Plus, users can pay gas with ERC-20 tokens like USDT and USDC and send crypto to phone numbers in seconds.
But why should you care about Celo's transition to a Layer 2?
Layer 2s unify Ethereum; L1s fragment it.
By becoming a Layer 2, Celo leads the way for other EVM-compatible Layer 1s to follow.
Follow Celo on X and witness the great Celo happening, where Celo cuts its inflation in half as it enters its Layer 2 era and continues its environmental leadership.
Bankless Nation, very excited to introduce you to Reid Hoffman. He is a founder-investor. He co-founded LinkedIn,
which I'm sure many of you have used in the past. He's extremely active in Silicon Valley,
particularly over the last couple of decades. And more recently, he's been very close to what we would
call the epicenter of this whole AI thing. So he was serving on the board of OpenAI starting in 2018. Notably, I should mention, because whenever someone talks about the board of OpenAI, a lot of things will come up. But he actually left to go focus on AI investing before the Sam Altman ousting drama, and you guys remember all of that. He's also a gifted writer,
a communicator. I've read several of his books. I think one of the canonical books for tech founders
is this book called Blitzscaling, which is just phenomenal on how to grow an internet-scale business.
And now, all of this preamble to say, now he's written a book on AI called Super Agency.
And I'd pretty much describe this as maybe Reid Hoffman's thesis for artificial intelligence
and how it will impact us in the decades to come. Reid Hoffman,
Welcome to Bankless.
It's great to be here.
And I look forward to not only this conversation, but future ones as well.
Yeah.
I mean, I think we're going to really focus on AI because that's the subject matter of your book.
But maybe in a future episode, we get into crypto because I know you have a lot of thoughts on that.
Yeah, I know.
I actually think I bought my first Bitcoin in 2014.
Congrats.
It's a little late.
But, you know.
It's earlier than most.
Yeah.
That's a good season of a time to buy Bitcoin, for sure.
So let's talk about this book.
Let's talk about your thesis.
for artificial intelligence.
And when I heard that you were writing a book called Super Agency,
my first question, without reading anything further,
was like, okay, super agency, who is Reid talking about?
Like, who are the super agents?
Is this the humans?
Do they become the superagents?
Or is he talking about the AIs themselves?
Do the robots become the superagents?
So maybe you could kind of start there.
Could you define what you even mean by super agency
and, like, who gets it?
Yeah, so let's actually start even a little bit earlier with agency
and then get to your excellent question, which is, what is agency? Agency is the ability to kind of make plans, do things in the world, you know, kind of make parts of the world, you know, according to your intentions and desires, and express yourself in the kind of the ordering of the world around you. And obviously, nobody has perfect agency. You know, that's, you know, kind of for theoretical, deistic-like creatures.
Like God has that, maybe.
Yeah, perhaps. And it depends even on what your particular theology is. So that's the reason I was being a little bit more vague.
Non-denominational today in this podcast.
Yes, exactly.
And so superagency, the precise term is kind of when millions of human beings get access to kind of an elevating technology, a transformative technology.
The superpowers, not only they get as individuals, but society gets transformed.
And so, for example, a canonical example is cars.
So you go, well, it gives me superpowers because I can go far.
I can drive.
I can get to farther distances.
But as other people in society also get cars, you know, like, suddenly, where you used to have to go down to the doctor's office to get an appointment, now the doctor can come to you.
And obviously, later instantiations of this is you can get, you know, Instacart deliveries and, you know, all the rest.
And so superagency is kind of how we all get superpowers.
And so to your opening question about are humans the superagents, or are AIs the superagents, to some degree it's both,
but the important emphasis is that rather than us as human beings and humanity losing our agency,
we are gaining agency.
And by the way, in a very similar pattern to the way that I gain agency when you guys also get cars, right?
It's not just me that gains agency with my own car.
I gain agency when you guys get cars.
And so that's the elevation of agency and hence super agency.
I was almost thinking about your book title.
If you used a different synonym besides super agency,
if you just titled the book Superpowers, right? And like who gets them? That's almost the same
discussion. Like, or maybe when we get into this term agency, is the term superpower and super agency,
are they kind of synonyms? Is the short form version of this to just like we get all of this
additional choice surface area? We have new abilities to do things that previous generations could
not have imagined. That feels like a superpower to me. Is it kind of one and the same in your mind?
Well, superpower, it's deeply related. The Venn diagrams have a high overlap because the kind of the elevation of capabilities are superpowers. And every, you know, kind of new major technology gives us new kinds of superpowers. Now, some of it is, with a superpower, and as lots of people get superpowers, you know, individuals, institutions, societies, governments, etc., your agency changes some. So it isn't, for example, like the agency of people who were
kind of driving horse and buggy carriages, that changed with cars. Because it was like, well,
no longer are the streets set up for you, no longer can you be doing this thing that, you know,
you had been doing and we're planning on doing. You know, no longer, for example, was the
horse's transport industry, you know, kind of central. And by the way, even like earlier technologies
like trains, those changed in the kind of the ways that people would express your agency and be able to work on it.
So superpowers are a way you extend your agency, but when it happens in a super agency context,
it also transforms it and changes it. So that's the reason why it's not 100% the same, but closely related.
Reid, something that we share in common is we actually both have podcasts. You have a podcast called Possible.
And I think Ryan and I stumbled on your podcast, and we noticed that you did an episode, an interview, with yourself. But yourself was what we might call, in the crypto world, an AI agent.
Now, maybe this illustrates what you mean by super agency, and maybe you can take that metaphor all the way home.
But how do we know that we are actually talking to the real Reid Hoffman and not your AI co-host bot that is now with us, actually?
And the real human, Reid Hoffman, is somewhere else doing work in a different direction.
How do we know you're the real Reid Hoffman?
Well, that will get to be a more and more complicated question.
At the moment, the video avatars are not actually, in fact, real time.
So the Reid AI discussion has to be a little bit scripted, even though it looks, on a podcast, like it's a completely real-time thing.
It's actually, in fact, you know, kind of running it through a ChatGPT instance that's trained on 20 years of my writing.
And then more specifically, getting the audio and video produced with the right kind of quality doesn't really, you know, enable that for kind of a full real-time stack today.
But, you know, part of the reason, of course, you know, I did it, put it on Possible, because what could possibly go right was to start getting people familiar with the future universe, just as you guys are doing, you know, in kind of all of the technology broadly, but also around, of course, crypto, and what do sovereignty and identity and all the rest of that mean. It's kind of like: here, here's a lens into the future. And we don't know exactly where the future is going, but we're trying to get everyone, you know, kind of participating, ready, navigating well, etc.
And that was part of the reason why doing Reid AI. But there is, you know, obviously at some point, one could get to that as an interesting question. And, you know, my own, you know, hazard of an answer here is something a little bit more like, you know, well, crypto signatures and identity is surely what's happening. But of course, you know, given that I'll probably have both the crypto signatures for me and for Reid AI, you know, that might still even be a live question.
Yeah, it's really interesting, though. There's something very empowering about the experiment.
that you're running with Reid AI because it leads to a promising future of, like, if individuals are sovereign over their own kind of AI agent twin, maybe that AI agent twin could go do work while they're, like, goofing off. They're going, like, doing something that they enjoy. Maybe they're watching a movie. They're doing art. They're, like, working out, something like that. And then there's Reid AI doing podcasts, like, while all of this goes on. And, you know, the real Reid Hoffman sort of has ownership
over that and somehow like that feels very democratizing. I want to get back to the through line
of this conversation when we talk about super agency, though. So your thesis is we understand
what super agency is and how that's similar versus different to superpowers. And you said very emphatically
that it's not the robots that get it. The AIs do get it, but also the humans get it. Your view here
is that it's going to be humans amplified by AI. That's the real unlock here. But like, I have a
question within that subset of humans who get it. Which humans are we talking about, Reid?
Are we talking about the Silicon Valley elite in your thesis?
Are we talking about, you know, the 1%, those that control most of the capital in society?
Are we talking about governments or are we talking about individuals?
Because the distribution of this seems incredibly relevant to how we actually view whether this is a good thing or not.
So I think the path we're already on, you know, with hundreds of millions of people using ChatGPT and, you know, exposure to, you know, agents in other contexts, whether it's, you know, Anthropic, Gemini, Copilot, etc.
So I think we're already seeing hundreds of millions of what you're referring to as individuals,
but, you know, kind of call it access from, you know, kind of a bulk of at least middle-class
Western folks, although, like, one of the things that I thought was very cool, that I had heard about from a friend who was traveling in Morocco recently, is that the taxi driver was using ChatGPT as the translator for, you know, like, where do you want to go, for the tourists. And so,
you know, it's very broad indeed. Now, that being said, I don't want to paper over the fact that
we live in a human society that has, you know, kind of differences of wealth, differences of power,
differences of position, not just between nations, but within nations. And, you know,
that's not going to go away. And so it wouldn't surprise me, you know, if you said, well, but actually the kind of AI that the wealthy have access to has some improvements in betterness, maybe real-time, you know, responsiveness, maybe, you know, number of GPUs available, et cetera, et cetera, than, you know, kind of a lower-income person's.
Now, that being said, part of the reason I'm really optimistic is a little bit like, you know, kind of smartphones, which is, you know, three quarters of the world today has mobile phones.
But the smartphone that, you know, Tim Cook has or Jeff Bezos has or Sundar Pichai has is the same smartphone that, you know, the Uber driver has.
And so I think that the natural drive in technology, which includes AI, is building it for the very mass market, you know, the billions.
And so I think that I can confidently assert that superpowers will be available very, very broadly.
even if, you know, there's also some differences in superpowers based on, you know, country and wealth and, you know, kind of access.
But I think democratizing will be the name of the game.
So in your world, AI is really a democratizing technology.
It's pretty much like, you know, of course, you know, if you're in the early adopter curve, maybe you get things a little bit sooner.
But generally, it's going to take the form of the way cell phones did, where in the 1980s it was a large, you know, big brick that cost thousands of dollars, until the technology democratized, or the way the internet has
kind of democratized things. I ask because there is this fear out there, Reid, that AI is kind of going to be controlled by superpowers, let's say governments, or a small cabal maybe in Silicon Valley, that they're going to have the technology and kind of the rest of us plebs, like, maybe won't.
But you're saying it'll be more similar to, I guess, the propagation of the internet or the cell phone
in that it will be fairly widely distributed
and actually be like a technology
that's available to the general public?
Yes, in short.
And part of that's also because, you know,
the same called Silicon Valley ecosystem
that built smartphones that built the internet,
you know, and obviously it's not just Silicon Valley,
but there's a lot of Silicon Valley contribution,
is also very similarly building, you know, kind of AI,
both in the hyperscalers and the large models,
but also, you know,
At this point, there's so many thousands of startups that, you know, you could start mapping them against the various cryptocurrencies; there's similar numbers of orders of magnitude.
Let's talk about the AI religions that exist because I think this was a fairly fantastic framing in your book and one of my chief takeaways.
So you talk about, and I'm using the term religion, you could say ideology, you could say philosophy.
But just the point is that each of these categories, I think all of them have a,
an expected outcome or an article of faith because of course the future is unknown. But anyway,
so the four categories in your book of people with thoughts about AI. And it's useful, I think,
to categorize them to sort of understand the worldview a bit better. One is the doomer. Okay.
The second is the gloomer. The third is the bloomer. And the fourth is the zoomer. Okay. Now,
these are four different categories, subsets of groups with different perspectives on AI. Could you
just define those four categories for us? Absolutely. I'll go through them in that order,
which is Doomers basically are like, AI is the destruction of humanity. And, you know, it's very much like the Terminator robot or other kinds of, you know, kind of popular Hollywood themes, argued in a way that's kind of like, well, it'll be more intelligent than us. It'll kind of want to run the earth. You know, it'll look at human beings as either hostile or, you know, kind of, you know, ants or a kind of equivalent. And so AI should just be stopped. Gloomers are essentially, look,
I don't think the AI future is going to be particularly good. I think it'll, you know,
take away a whole bunch of jobs and kind of disorder society. It may lead to much more misinformation
and kind of unbalance democracies. It'll have a whole bunch more of, you know, kind of information surveillance, and so their privacy will be worse. And so, like, I don't think it's stoppable because, you know, multiple countries and multiple, you know, companies around the world are building it. And, you know, that's the way that humanity rolls, and, you know, companies are going to become a lot more productive through this. But I think it'll be an unfortunate outcome.
And it's gloomers, by the way, because they only see the gloomy side if that helps people.
Exactly. And actually, I'll do zoomers before bloomers because I want to spend a little bit more time on bloomers since I self-identify there.
Zoomers are essentially like, no, no, no, this technology is great. It's like the opposite of doomers.
And it's like everything that we're going to build with it is going to be really amazing.
You know, the sky isn't even the limit in terms of what kinds of things could be made.
Or, you know, maybe AI is going to invent fusion rather than us inventing fusion.
And everything that comes out of this is just spectacular.
And Zoomer, Zoom refers to just hitting the gas pedal.
Just go forward. Go fast.
Exactly.
Yeah.
And then Bloomers, which I describe myself as, is kind of a zoomer, but as opposed to just, like, maximally hitting the gas pedal in all circumstances, you go, well, drive intelligently.
Like, avoid the potholes, slow down at the curve, you know, be looking at kind of like,
oh, look, this is a little bit of a dangerous area.
Let's go through with this a little bit more care.
Still accelerationists, the kinds of things that we can build in the future,
whether they're medical outcomes or climate change outcomes or, you know, kind of human
enablement with work and with education, all of that stuff is super important to get to.
But, you know, let's kind of make sure
that we're not enabling rogue states or terrorists or unbalancing crime waves or other kinds of things as ways of doing this.
And let's make sure that we don't, for example, inadvertently create Terminators, you know, because it's a little bit of question of how we drive.
It's not inevitable.
So that's the Bloomer category.
And that's the category I'm in.
And obviously, if you said, well, you can't pick Bloomer, I'd be closer to Zoomer, much closer to Zoomer than Gloomer or Doomer.
But it's also part of the reason why the subtitle of Super Agency, which parallels the podcast, is What Could Possibly Go Right, is because we always, as human beings, encounter new technologies with that, oh, my God, the world's coming to an end.
I mean, remember all those discussions around crypto?
Maybe we're still having them.
And also, you know, by the way, the Internet, and by the way, cars, and by the way, the printing press.
It always starts with, oh, my God, this is the end of society.
And then when we start navigating, we go, oh, wait, if we do that this way, we make society a whole lot better.
And by the way, we have in every technological instance in history so far made that happen and gotten superagency through all of them.
One can argue, with AI technology being new and unique, whether it's new and unique in that characteristic or not.
And that's, of course, why to write the book and go out and talk to people and so forth: to show, actually, in fact, the only way you can create a positive future is by imagining it and steering towards it. And so that's what we should be doing.
Let's make sure we understand these examples of these four categories.
Like maybe by way of example, actually. So somebody on the zoomer side of things, and again, we're not referring to Gen Z here. We're talking about zoomers. I was thinking in my head, another term that bankless listeners might be familiar with is e/acc, if you've heard that term, Reid: effective accelerationists, of which we've had Beff Jezos on the podcast. He's basically like full speed ahead. Like, let's harness energy, let's harness AI and, like, conquer the universe, full speed ahead. Marc Andreessen, you know, put together a techno-optimist manifesto that has some e/acc characteristics. Zoomer is basically the e/acc group. Is that right?
Exactly. Although I think you might say that Zoomers and Bloomers are kind of two variants of the e/acc group. Because I also, by the way, I think, you know, I started using the term techno-optimism some number of years ago. Like, hey, I'm a techno-optimist, not a techno-utopian, which is: you can build great things with technology. It doesn't mean everything you do with technology is great. Right. So,
you know, do it with some care. I'd say the zoomers are, hey, anything that anyone's doing
with this, it'll end up good. And the bloomer is, hey, most of the stuff is going to end up really
good. Let's try to, like, steer a little bit. It's hard for me, too, to actually put people in
boxes. Like somebody like Marc Andreessen, I don't know if he's full, like, kind of everything technological,
technical is good, or how much of this is sort of, you know, a personal choice to just amplify this
extreme position in order to kind of... He might need to plant a flag in order to shift the Overton
window. Move the Overton window, right? And like, I think that's part of the meme games that people
like Beff Jezos and maybe Andreessen are doing. But it's hard to speculate on it. Okay, so that's the
zoomer. Now, the Doomer is pretty easy. I think we've also had guests on Bankless. Eliezer Yudkowsky, he very much clearly thinks that, like, everything that we're doing right now in AI, like, basically, we only have years, maybe decades, to kind of live before AI actually supplants us. Like, he genuinely thinks that. That's the Doomer category. So you don't have to go into more
detail there. But how about the gloomer category a little bit more? It seemed to me that this is
sort of the mainstream media type of take on things. And it might even be the popular narrative
around AI. Like if you ask the average American, what do they think about AI? I think in like the
2020s with the current spirit of the age, I think there'd be some cynicism about AI. There'd be some
pessimism about AI. It would definitely be the glass, you know, half empty type of outlook. And I think
that's the popular idea. But who are some archetypes for this gloomer category? So I do think
that it's kind of, generally speaking, you know, kind of the discourse,
because the discourse now, just like earlier times in history with earlier technologies,
tends to focus around, like, all the things that could possibly go wrong.
And so many journalists, definitely the vast majority of people in Hollywood,
who are like, oh, my God, this is the destruction of the content production industry.
And, you know, when Sora and Veo are going, you know, all of our jobs are going,
a lot of it's focused around job displacement.
So worries and concerns about job displacement.
So, you know, I think it's more or less kind of like if you can't put the person clearly in another bucket, they're probably in the gloomer bucket.
It's probably the, and that's a little bit like mainstream media.
It's the everyone else bucket.
How about from a political landscape perspective, would you look at the axis that way here?
Because I think a lot of people listening would be like, okay, Democrats are a bit more on the gloomer side of things and Republicans are a bit more on the, maybe not the zoomer side of things, but,
the bloomer side of things. Do you think that's an axis at play as well?
Well, I think it depends, right? Because there's also a lot of the modern Republican Party that's kind of anti-Big Tech, you know, thinks that Big Tech is, you know, too big for its britches and should
conform. So, you know, I think that there's kind of, as it were, gloomers in both sides. I think
the Democratic side tends to be a little bit more we should be regulating. And the Republican side
tends to be the, no, no, we should be allowing, you know, industry to do what industry does.
So, Reid, I think for the rest of this podcast, we want you to make the case for bloomerism here.
Like, why is AI going to go really well for humanity? This idea of humans really amplified by artificial intelligence, and it kind of leads to really positive outcomes.
One of the early chapters in your book talks about some history I actually wasn't familiar with. And maybe this is an analog that will be helpful for some; it was helpful for me.
So this is the history of the mainframe computer. And you go back to 1960s. And apparently,
I did not know this. Maybe some bankless listeners also don't know this. During the 1960s, when the
mainframe computer kind of entered the cultural public scene as a new technology, we had computers
that could do incredible things for the time. There was a media hysteria that broke out. Okay.
And there was concerns about this new computer that had the ability to recall in a few seconds,
every pertinent action, including all of your failures, your embarrassments, or incriminating acts
from a lifetime of every citizen. There were many comparisons to 1984, the book, of course, that's in the Western canon, by George Orwell, just like this Orwellian society that would be built
out by these mainframe computers. There were even congressional hearings, guys. So one lawmaker warned of the danger of the computerized man, which is a citizen that would lose all of their individuality, their privacy, basically their agency, and be reduced to magnetic tape.
That, of course, was the technology to program computers at the time. It's like literal magnetic tape.
So give us the history of the mainframe computer in this hysteria.
And why do you think this is analogous to what's happening today?
So, well, you've covered it pretty well. Thank you for actually reading the book.
That doesn't happen that often these days. And so, you know, I think that the question is any time that we encounter a new technology, and in this case the mainframe, they were like looking at like, okay, what could possibly go wrong?
And they think about, well, actually, in fact, this could, you know, track everything, make all the decisions, take away the agency of people by putting it into kind of government centralized control, you know, a little bit of the discussion of what's happening with, you know, AI in some circles today.
And then make, you know, kind of, you know, us as human beings essentially powerless and agencyless.
And that's, you know, of course, you know, part of.
A lot of the, and we talk about this in Super Agency a bunch is, you know, a lot of it was the 1984, George Orwell worries, where that kind of centralizing technology became a control over individuals and individuals through this kind of control of information, control of power, become, you know, almost irrelevant cogs in a machine.
And, you know, if you look at it, same thing, it goes, well, what, you know, what's AI doing with my data?
Oh, am I going to be able to make decisions?
because AI is going to be so persuasive and manipulative and advertising systems and information systems.
You know, am I going to be able to control my life and work?
Or is AI going to be doing all the work?
All of those are very parallel, not just obviously to the mainframe discussions,
which are, you know, relevant and close.
And we'd at least gotten through, I think, punch cards to magnetic tape before we started having all the worries. So it was magnetic tape, not punch cards.
That's meant to be a joke.
And so, anyway, that was essentially.
what the dialogue was going. And people forget it now because it seems absurd looking back on it.
I mean, it's kind of like, well, yeah, I don't know why those people thought that. I mean,
look at all the computers we have now and look at, you know, the smartphone that everyone has in
their pocket is, you know, thousands of times more powerful than those mainframes, right? And, you know,
kind of everyone has one. And it's kind of, you know, working, you know, throughout the entire place.
And by the way, I think everyone's going to have an agent, too, and, you know, with AI. And so I think
that's why the parallel of the discussion to say, we're going through all this energy to imagine
like every possible bad outcome when a lot more of the energy is better put into what are the
good outcomes that we should be steering towards and which specific bad outcomes, you know,
that are not ones that are easily correctable as we get into it. So for example, you can put a
car on the road without bumpers. It's good to build bumpers later. You can put a car on the
road without seatbelts. It's good to put in seatbelts later. But you don't try to imagine all 10,000
things that could go wrong before you put the car on the road. You've got to put the car on the road and start learning as you're going. And that's the thing with AI. And so for most gloomers, to
kind of persuade them to switch from, call it, AI skeptical to AI curious within the kind of the
bloomer category is to say, start using it. And start using it not just for, hey, I have these
ingredients in my refrigerator, what can I cook? Totally good use case. Or my relative is having a birthday
party and I want to create a sonnet for them. Great. But for real things. Like, for example,
I'll give a personal example, because I think this might be useful, in particular, to the bankless
community. So when I first got access to GPT-4, I sat down and said, how would Reid Hoffman make money by investing in AI, you know, as a proxy for, you know, what degree of job replacement do I have with GPT-4. And it gave me back an answer that was powerfully written, compelling, and completely
wrong. Because it gave me back the answer that a business school professor who was very smart,
doesn't understand venture capital, would say. First, you'll analyze which markets have the
largest TAM. Then you'll analyze, you know, kind of what the substitute products might be. Then you'll go find teams that could possibly build those substitute products and line them up in order to invest in them. And you're like, yeah, that's not the way any capable venture.
Any venture capitalist who is successful does not operate that way.
Yeah, it's like business school slop, I guess, right?
Yes, exactly.
And so it's like, okay.
But then you say, well, then is it completely irrelevant to investing?
And the answer is no, no, actually, in fact, one of the things that I, like, figured out with AI by the next day was, hey, I can feed in the PowerPoint deck or feed in the business plan.
And I say, what are the top questions to answer in due diligence?
And while, as an experienced investor, I might have known all those questions and gotten to them all, it helped me go, oh, yeah, question number three, which I would have figured out was the right question to ask three days from now, and it's useful to have it now while kind of composing a
due diligence plan. And so that kind of acceleration or that kind of amplification, you know,
or that kind of agency, super agency is part of the kind of human agency. And so all of this is a
personal story to go back to, you know, the bankless community, to say, well, start using it for
things that matter to you. And even if the first one, like, how do you invest in, you know, cryptocurrency, doesn't give you anything useful.
Keep trying in different things, and you may find something like, oh, this helps me with how I
can operate at speed and with accuracy.
And then that gives you a wedge to start learning, you know, kind of how you can be kind of
superpower enabled.
The Arbitrum portal is your one-stop hub to entering the Ethereum ecosystem.
With over 800 apps, Arbitrum offers something for everyone.
Dive into the epicenter of DeFi, where advanced trading, lending, and staking platforms are redefining how we interact with money.
Explore Arbitrum's rapidly growing gaming hub, from immersive role-playing games and fast-paced fantasy MMOs to casual luck-battle mobile games.
Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and on-ramps.
Step into Arbitrum's flourishing NFT and creator space, where artists, collectors, and socials converge, and support your favorite streamers, all on chain.
Find new and trending apps and learn how to earn rewards,
across the Arbitrum ecosystem with limited time campaigns from your favorite projects.
Empower your future with Arbitrum. Visit portal.arbitrum.io to find out what's next on your Web3 journey. What if the future of Web3 gaming wasn't just a fantasy, but something you could explore today? Ronin, the blockchain already trusted by millions of players and creators,
is opening its doors to a new era of innovation starting February 12th. For players and investors,
Ronin is home to a thriving ecosystem of games, NFTs, and live projects like Axie and Pixels.
With its permissionless expansion, the platform is about to unleash new opportunities in gaming,
DeFi, AI agents, and more.
Sign up for the Ronin wallet now to join 17 million others exploring the ecosystem.
And for developers, Ronin is the platform to build, grow, and scale.
With fast transactions, low fees, and proven infrastructure, it's optimized for creativity at scale.
Start building on the TestNet today and prepare to launch your ideas, whether it's games,
meme coins, or an entirely new Web3 experience.
Ronin's millions of active users and wallets means tapping into a thriving ecosystem, with 3 million monthly active addresses ready to explore your creations. Sign up for Ronin Wallet at wallet.roninchain.com and explore the possibilities. Whether you're a player, investor, or builder, the future of Web3 starts on Ronin.
I completely agree.
And my lived experience of using, like, you know, tools like Chad GPT is that it does
amplify my productivity when I use it in the right way.
And I, like, have to spend, like, time to figure out how exactly to apply this to my own
amplification of, like, what I do.
I guess when I was reading this section about the 1960s, the mainframe computer, I was sort of putting my head in the minds of people at that time. And you could kind of see at the time the way compute was sort of playing out, it was really controlled by a small number of companies and governments. It was sort of like, I mean, the computers were the size of buildings, right? And so
you can sort of take a 1960s mindset and extrapolate that and get very scared. What ended up happening
was, of course, the personal computer revolution, where everybody got those building size computers
in their own home as an amplifier for their own productivity, and society completely forgot the 1960s hysteria around mainframes. But I can't help but, like, also
wonder if some of the criticisms were sort of right. Okay, you go back to the 1960s and they talked
about, you know, surveillance and kind of the lack of privacy. And they weren't completely wrong.
You know, we didn't get the worst case scenario of what they were projecting, but we did get
a lot of good and then some bad outcomes. And this is why I sort of want to ask you about
your framing of like, do you actually think the doomers and the gloomers are completely wrong? Or do you
think that there's some probability of, like, a doomer-style outcome or even a gloomer-style
outcome where AI is like not so sunshine and rainbows that actually is kind of negative for society?
Like what do you think about that from a probability distribution perspective and do they
have a point? So I think smart people always have a point. And so I think the question's good because
it's always to listen to what is the thing that they're thinking.
about. I think the two answers are very different between doomers and gloomers. Let's start with doomers, who, you know, another thing, you know, the bankless community may be familiar with is, you know, x-risk. And so they tend to be existential risk predominantly, you know, especially Yudkowsky and
others. Now, the thinking starts like this. It says, can you guarantee me that killer robots
will never be built either in the hands of humans or autonomously? You say, well, you can't
guarantee that. There's lots of things you can't guarantee.
To say, ah, so then we have an existential risk that's being added, and we should stop that
existential risk, because why should you add any existential risk? QED, my argument's over.
You're like, well, until you consider the fact that existential risk is not one thing,
like the only existential risk for human beings is not killer robots, there's pandemics,
there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on.
And so you have to look at existential risk as a portfolio.
Namely, it's not just one thing.
It's a set of things.
And so when you look at any particular intervention, you say, well, how does this affect the portfolio?
Now, my very vigorous and strong contention is that AI even unmodified, and we'll get to why steering is good.
But unmodified at all is net, I think, very positive on the existential risk portfolio.
Because when you get to, for example, pandemics, one of the things we've experienced in our lifetimes,
and, you know, obviously, if it was a lot more fatal and everything else, it could have been substantially worse than the, you know, many thousands who died.
The question is to say, well, how do you see it, you know, detect it, how do you analyze it, and how do you both do therapeutics and preventive vaccines at speed in order to navigate that?
And AI is the primary answer to that. Like, none of that can work without the speed of AI.
And then you get to, oh, well, how about asteroids?
Well, identifying which asteroids might get to us being able to intervene on them early,
you get to, like, for example, climate change.
You go, well, actually, in fact, whether it's anything from accelerating the invention of fusion to how do we manage our electric grids better,
there's positive contributions across all this.
So you go, okay, given all of that, I think AI, even, like, unmodified, just let the industry do exactly what it's going to do, is going to be strongly positive in the existential
risk bucket. And I'll pause there in case you have a contention on that before I get to the gloomer category. No, I'll just say it in another way, where you're just saying: the most fully zoomer,
the fastest engine going into the AI revolution, it hits every single pothole. It's on two wheels
as it's going around the corners. Even under that situation, the solutions that it provides to
all alternative existential risks is still net positive, in your opinion.
Exactly.
Yeah.
So that's the reason why I'm very far away from the doomers.
Okay.
Well, how about the gloomers?
Do they have a point?
Yeah.
Well, no.
And by the way, I thought the doomers have a point too, which is you say, hey, by the way,
we should try to minimize the killer robot risk.
Yes, that is something we should be doing.
And we can get back to them.
And I guess your answer would be like through use of AI to help us also.
Yes, exactly.
Okay.
Yes, exactly.
That feels a little recursive.
Hey, whenever technology is part of the problem, it's,
almost always the best part of the solution, too.
Okay.
Okay.
That's the optimist.
That's the e/acc kind of talk, I think.
But I think I have history on my side, which is good.
And we can get back to the privacy thread from the mainframe things as well.
How about the gloomers, though?
So on the gloomer side, the primary thing where I think I'm very sympathetic to the gloomers is that if you look at, and we cover this some in Super Agency, as you know, if you look at the transitions for human societies with these technologies, we as human beings adopt and adapt to new technologies very painfully, like the disruption. So you go, ah, the printing press,
we could not have anything of the modern world without the printing press. You can't have science,
scientific method, you can't have literacy, you can't have, you know, kind of a robust middle
class. Yet there was a century of religious war because of the printing press. When we as
human beings come to this, the transition period's almost always very painful. And I think even with
AI, we're going to have pain in the process. I don't think there's any.
way, unfortunately around it, part of the reason I'm writing superagency and doing these conversations
say, well, let's try to be smarter about it than the times we've done it before. Let's try to make
the transition as easy and kind of more graceful, but it will still be painful. Like in terms of,
even if you say, hey, most human jobs will be replaced by humans using AI, that process itself is still painful. People have to learn AI. Maybe it's new humans. Maybe the human who couldn't
learn AI feels out of place, you know, is suffering because of it. And that,
the kind of thing that I think the gloomers are kind of putting, as it were, an intuitive finger
on, which is, hey, look, all this kind of transition, they'll project it to infinity.
But all this kind of transition, boy, this is going to be difficult.
And you're like, yes, it is.
Right?
It's not, no, it's not.
And we're going to try to make it as good as possible.
And that's, again, part of the reason why I'm arguing that we should be intentional here about what could possibly go right. And this gets back to the technology as part of the solution.
You say, well, okay, so we're going to have some job transitions. We're going to have job transformations.
We're going to have information flows and misinformation flow transformations.
We're going to have some expectations of privacy transformations.
What should we do?
And the answer is, well, I actually think AI can be helpful in all of these cases.
And like one of them, part of the reason why, you know, Inflection and Pi was, you know, kind of, you know, something that I helped get going, as an agent for every human being that's on your side, that's for you and by you, is one of the things that can help you then navigate, because it can be like, okay,
how do you help me navigate this new world? And I think it's one of the things that's really
important for us to provision early that goes all the way back to your democratization question.
And one of the reasons why I think that's an important thing to make sure that there's very
broad access to. Okay, let's underscore this point because I think some of the reason why the
gloomers sort of are winning right now in the narrative war is because, like, of course, fear is
a bit more viral and it's easier to imagine. It's much easier to imagine an Orwellian future in the 1960s or the 2020s than it is to imagine a more optimistic future.
And as soon as you start talking about this optimistic future, it sounds like too utopian.
It just doesn't even sound real, right? But we are limited in terms of our imagination.
But that question, that prompt that you just raised is like a chapter in your book is the
question of what could go right. And the gloomers rarely ask what could go right. And I think,
to be fair to them, they have some limitations on their imagination. So I want to ask you as a kind of
the techno visionary, like, how would you answer that question? So if human beings, if every citizen
in the United States had an AI agent that amplified what they do, and we had this across society, this technology was widely deployed, what could go right? Like, what are the benefits for the average American here? So, line of sight, namely, no technological innovation needed, it's just the question of how we get it built and deployed: a medical assistant that's better than your average doctor, available 24 by 7, in every pocket. So you have a health concern.
It's 11 p.m.
You have a health concern for your kid, your parent, your grandparent, your, you know, pet, anything.
You know, you can begin to address it.
And it can help you, including going, oh, for that, you should go to the emergency room right now.
Right.
And so that's buildable.
A tutor on every subject for every age.
Anything from 2-year-old to 82-year-old.
Like, hey, you'd like to learn this.
You'd like to understand more.
By the way, there's obviously economic implications of that.
that's, I think, another thing that's available.
Then, to your democratization point, there's a lot of services, not just medical and access
to doctors.
You know, some people have concierge doctors.
Most people have to go through, you know, kind of their medical plan, and some people don't even have medical insurance.
Even in the U.S., there's a bunch of people who are uninsured.
What other kinds of things could be?
It's like, well, actually, in fact, like, I'm reading the lease for my rental.
Like, how do I understand that?
What's important to know about it?
Well, the agent can help you with that too.
And that's all line of sight today.
That's not even getting to, hey, how can it help you, like, code better?
How could it help you, you know, create marketing plans better?
How could it help you?
Like, all of that stuff is also coming.
But, like, those three basics for everybody is, you know, life transforming.
What about the societal level?
So when those things for individuals kind of aggregate and compound, we have better health care,
we have better kind of like learning capabilities. We have better things in all areas of our life.
Like, what does that amount to for the United States from a societal perspective? Do we have, like, more
free time as a society? Does our happiness increase? Does our GDP like double or triple? Do we
get those things as well? Well, I definitely think the equivalent of what GDP is supposed to be
measuring should be increased. Now, GDP has this challenge that it's measured in kind of an industrial, dollars-for-things way. So, like, for example, all the benefits you get
from Wikipedia are actually
deflationary in GDP,
but the quality of that.
You know, another thing that people worry about with
AIs is, oh, I'm not going to spend time talking
to people, I'm going to spend all my time talking to agents.
And so loneliness will be increased
versus decreased. I think that to some
degree, that's a design, you know, kind of
choice, and I think what we want to both
see and I hope we'll get, and we want
to nudge towards is, you know, like when you ask
Inflection's Pi, hey, you're my best friend. It says, no, no, I'm not your friend. I'm your AI companion. Let's talk
about your friends. Have you seen them recently? How would you like to talk to them?
You know, maybe you could set up a lunch date, you know, that kind of thing. And I think that could
lead to a much greater happiness for this. And I do think that actually, you know, part of what
I love about, you know, the Bhutanese, you know, kind of concept is I actually, I think
measuring, you know, kind of gross national happiness is also a good thing that we should be,
you know, aspiring to as a society. And I think that could be, you know, increased for this.
But I think that the place where we'll see it is in being much more like kind of fulfilling lives.
And the fulfilling life might be, you know, kind of like, hey, I get more time to do my hobby.
You know, I love fishing.
I'm going to have more time to be fishing because I can do my work in a shorter amount of time.
Or for people who, because, you know, American society tends to like to work, is like, oh, I can accomplish a lot more in my work.
I'm maybe still working the same amount.
but as opposed to putting a whole bunch of time into form entry,
I can now do the parts that are not just like form entry
and do the other parts of the work
in much more kind of fulfilling and capable and productive ways.
Yeah, one way I think about it is, like, 100 years from now, will the average person in the United States, or wherever this technology is deployed, like, have a better quality of life?
I think of, you know, the 2020s,
and I compare that to the 1920s.
And I would, like, hands down, prefer to live in the 2020s, for all of its problems, than in the 1920s. Before the advent of antibiotics, like, you know, look at kind of
mortality rates from that time. Look at kind of the amount of society that had to basically, like,
do a grueling agrarian-type farm job just to get by, right? And it's like much better
for most people now than it was previously. And there's lots of stats we could get into on that.
But let's just pause and go back to kind of another gloomer objection. So they would say,
Reid, everything you're saying sounds so amazing. But like, yeah, we've heard it before. This is another bait and switch from Silicon Valley. Okay. They promised, and remember the 2000s and the advent of your Facebook, not to mention LinkedIn, they promised that we would connect the
world. And what ended up happening, Silicon Valley got rich. They extracted our attention.
There's, you know, the term extraction is used a lot about this. They sold us products.
They sold us as products to like the highest bidder. And now I'm thinking about even the time I spent
with ChatGPT. And it feels really good right now. Like, it's amazing. I spend more time with ChatGPT than I do with, like, Google. And, you know, as a result, I think ChatGPT knows me even better than Google. I mean, a lot about me could be revealed by my search history, but like, even more so with ChatGPT. And I'm getting to the point of daily use where it's like, who knows me better than ChatGPT? Like, maybe my wife, maybe a handful of other individuals, but like, it knows me.
And that all feels good because it's amplifying what I do, okay? But what happens if things go dark, if we get this kind of, like, bait and switch, if suddenly OpenAI or whatever,
your Silicon Valley corporation here, starts saying, oh, you know, all this AI stuff is pretty
expensive. We're going to have to start harnessing all this data we know about Ryan to, like,
do something and sell it to the highest bidder.
Cambridge Analytica 2.0. Yeah. Or maybe they sell it out to the government or something,
or they control me in all of these subtle ways by recommending things that aren't in my best interest.
It's in their best interest or some government's best interest.
okay? This is the crux of the bait and switch. And so address that head on, Reid. Like,
how do we know this isn't a Silicon Valley bait and switch? Because it feels like that's happened
previously with social media. Well, I mean, a little bit depends on what you mean by bait and switch,
because I think, for example, let's take your example with Google and AdWords. Yes,
Google gets a bunch of data from you and can advertise to you, you know, better. And, you know,
by the way, hopefully that means that the products that you're seeing, you know, are actually
things that might interest you, which I think is a feature, not a bug, in terms of things
you might want to buy, and actually has so far the best business model that's been invented,
certainly in the media world, maybe in any, you know, part of the world today. And they say,
well, what do you get for your data? Well, you get a panoply of amazing free services, you know,
free search, you know, free email, a bunch of other things. And so it's a voluntary, you know,
something you participate in, you know, because you get a bunch of value, you know, kind of
transaction. And by the way, you'd probably rather be having it figure out how to monetize off your
data than saying, oh, in order to get our ARPU, you've got to pay us, you know, 50 bucks a month,
right, for this. Like, no, no, no, I'd rather, you know, get the advertising, right, and give me
all this stuff for free. And so it's possible that, you know, the kind of AI agents will end up in a
similar kind of thing, where they say, well, hey, look, we could charge you 50 bucks a month like
Google could for a search, but actually, in fact, figuring out a way that it's kind of, you know,
transparent and voluntary and engages with you because it shouldn't be deceptive, it should be
with kind of your awareness in engaging and using it, that this becomes a positive, you know,
kind of economic transaction for you. But it could be other things, too. It could be a subscription
model. It could be integrated into the various productivity apps that you're using. It could
be any number of things for that. I think that the dialogue that's, you know, kind of very well
captured in this compelling slogan, surveillance capitalism, is misleading because it's like,
well, but like, I for one, like surveillance medicine. I like, you know, the fact that my, you know,
watch is tracking my sleep and health things because it's for me and it helps me, and it's part of that
positive thing. And a lot of the uses of these data in these internet systems are as a way of
making it free for you, where they had the economics for expanding and improving the free product.
And so, you know, I think I would challenge the bait and switch kind of methodology.
And, you know, the last thing I guess I would say is, like, for example, you say, well, whether
it's a social network, you know, and I, by the way, obviously think LinkedIn has handled this,
you know, the best of, you know, all of them, whether it's Google, whether it's these things.
these are all voluntary participation questions.
You might say, well, it's very hard for me to participate in modern society without being
informed in the way that I could be informed this way.
And it's like, okay, you know, like, yes, I myself, you know, search a lot, but I think I can do it
in a way that is, you know, maybe you should say, hey, Google should offer a paid alternative.
On the other hand, you know, for that to be economically viable, at least two or three percent
of the audience would have to opt for it, right?
And I'm not even sure two or three percent of the people would opt for it, right?
I mean, you'll get individuals that I would do it, but like that might not be, you know, economically relevant unless at least two or three percent of the people were doing it.
So anyway, so I think it's a challenge to the challenge, as it were.
I think the bloomer take on this was sort of interesting, right, which is, like, acknowledging that there are some potholes and there's some costs, right?
Maybe to society and to individuals, but also saying that the benefit far exceeds the cost.
One way you underscored this was like, you said this, I wanted to get you to justify this because it kind of blew my mind when I was reading it. Even if LLMs get no better, that is no better than today, the consumer surplus to the average 20-year-old living today is millions of dollars over their lifetime.
Yes.
Okay, so what you're effectively saying is, for an actual zoomer, so somebody in Gen Z, you know, somebody in that age demographic, they're going to be able to harness LLMs and it's going to deliver millions of dollars in
value to them. And that's not even talking about the LLMs and AI of the future. That's talking about
the AI of today. How? Some people think about that and they're like, how does that deliver someone
millions of dollars in their lifetime? Well, let's just start with something that's really simple,
which is, you know, legal assistance. So you're going to encounter employment contracts, you're going to encounter rental contracts. You're going to have, you know, products and services you might
be engaging in. And today, your average person just basically can't afford to pay a lawyer, right? Because
the lawyer is hundreds of dollars an hour.
Well, now, even today, with GPT-4,
you can put it in there and get useful analysis,
useful kind of participation.
So if you just take every single contract
that you're potentially engaging in and use that now,
that gets you a lot of dollars towards your millions of dollars.
Then you say, well, what about like medical stuff, right?
Like consulting medical or other kinds of things,
or especially if I, like, you know, in the periods where since, you know, in this country,
we tend to do, you know, insurance in kind of challenging ways, mostly through employers,
you know, like, okay, so getting medical advice.
Well, that's another area where you can get a bunch.
Then you say, okay, well, how about amplifying my ability to find and do economic work?
That's another place.
And so when you add all that up and you add it up for hopefully what is even a longer life,
because if you're getting, you know, kind of call it pre-critical medical advice about how to
preventively stay healthy and preventively avoid certain kinds of, you know, catastrophic health
conditions, or navigate like early signs in ways that you can do it before you're in critical
condition. Not only is that hugely economic, but that should also lead to longer lifespans.
And so all of that is part of how, you know, we get to, hey, today, it's already worth millions
to you. So, Reid, this gets us to regulations. And I think the gloomer camp has one take on how we regulate
AI and the EACs and the bloomers and the zoomers have a different take on this. But generally, what I'm
seeing coming from the establishment government is, like, brakes. It's no gasoline on this thing. It's all brakes. They have this precautionary principle, which is, like, they think about what could go wrong and how to
prevent all of the things that could go wrong. You make a different argument. I think that your argument is
that innovation is actual safety. So you're making the argument, and I want to hear how this makes
sense, but you're making the argument, I think, that actually hitting the accelerator on AI
is how we make this thing safe. And that feels very counterintuitive. What's that claim based on?
Why do you think innovation is safety? So, for example, when you get to, like, how are modern
cars able to go these speeds, and able to go them much more safely than earlier cars?
It's that as you iterated and deployed them, you realized, oh, actually, in fact, we could put in anti-lock brakes.
Oh, we can put in seatbelts. Oh, we can have crumple zones. Oh, we can have bumpers.
And that's an innovative path to making the car safer. And the car can then go faster and navigate circumstances because you've innovated safety into the car as part of the innovation with the car.
And the parallel with that is essentially doing, you know, kind of like, well, what are the future features of AI and what are the things that we could be doing that make them much safer from these kind of aligned circumstances?
You're like, okay, so can we make the AI really enable, you know, people who are trying to figure out stuff with, like, their health and other kinds of things, but also make, you know, any efforts at terrorism, you know, kind of much more difficult and much harder.
And by the way, this is, of course, what, you know,
red teams and safety and alignment groups are already doing at Microsoft,
OpenAI, Anthropic, and others, you know,
because they're aware of these kinds of safety things.
But it's that innovation into the future that is the kind of really important thing.
And the way you discover that is by iterative deployment,
by actually, like, making it live and then seeing what things needed to be modified.
Now, obviously, on really extreme things, like, well, okay, terrorists who are creating weapons of mass destruction,
we want to make sure that's as close to impossible, in any field, as it can absolutely be.
And, you know, for example, safety groups more or less use as their minimum benchmark:
let's make sure that these agents are not any more capable of enabling that than Google searches are today, right?
And obviously, we want to drive both of them to the lowest.
But that's what, you know, kind of innovation to safety means.
And the car is a kind of historical, easy-to-understand example of what it means, in terms of technological features, for building future software.
Last objection here that I think comes up is this idea that AI kind of kills human autonomy.
Like this is a control technology.
It's not a freedom technology, basically.
So like, and in this AI world that we're all moving towards, I mean, where is my agency?
I mean, you title the book, Reid, you know, Super
Agency, right? It's like, but I feel less, like I have less agency if the AI is making all of the
decisions for me. I want you to address that, too, because it kind of ties into this concept of
freedom. And I sort of wonder how much of this is also like boiling the frog. Like, we just kind of
get used to it. And maybe that's okay, but maybe it's not. So if you went back to the 1960s and
polled those same people and you told them, hey, in the 2020s, most adults would actually
meet their future spouse or mate or partner by computer algorithm,
that basically computers decide. That's actually the lived experience of how most people meet and get married
today. It's like they meet via social network of some sort. They meet on like, you know,
Tinder or whatever dating site they subscribe to. It's kind of the algorithms that are almost
matching them. That sounds dystopic in the 1960s. Now it's like, oh, I've kind of gotten used to it.
I've met many couples in healthy relationships, and they sort of met online, by way of computer.
Anyway, back to this, this argument that AI agents making the decisions and outsourcing that part of our intelligence will actually restrict our freedoms, what do you make of this? Or do you think that there's some merit to this argument?
So I think one of the things that I said at the very beginning is agency changes. It isn't just new superpowers, but also as you get to superagency, it changes some things around. And so, for example, think of it as kind of different tactile perceptions of what it means to be human and humanly engage in life.
Like when you first make a technology, it feels kind of alien, you know, fire and agriculture and, you know, glasses and computers and phones. Then, you know, it starts feeling like, you know, kind of everyday life. Like our grandparents use phones now, too, even though at the beginning of the kind of smartphone era, it was like, oh, this is one of those newfangled things. I'd rather just, you know, go get on the hard line and call my, you know, grandchild or whatever.
And so it does make changes,
and part of the iterative deployment
and learning about it is how do you make those changes such that when we get to the future state,
we go, oh, yeah, yeah, this one's better.
And you say, well, is our current state just adapting, and was the previous state's judgment actually correct?
Well, if you kind of look at it, like, take your 1920s and 2020s, like, you know, do you actually understand
what the world past, you know, kind of penicillin and antibiotics and all the rest of the stuff really fully looks like,
and what the consequences of all that are, and why the portfolio of it is so much better?
And so you actually have to take that state that you learn into account.
It's kind of like think about it as, you know, kind of the judgments that you make as a child,
the judgments you make as an adult.
You go, well, look, there's a certain, you know, kind of innocence to the children.
But like, we get wiser as we learn and we get experience, and we use that for the viewpoint
of kind of making good judgments.
And that's part of the reason why I think, yes, you'd say, hey, you're
meeting your life partner now on an internet service. Like, whoa, that seems really alienating.
But actually, in fact, it's, okay, how do we make that a lot better than the lottery of the college or the
workplace, which was very limited. Yeah, and what you had before. And again, it's an iterative process.
It doesn't mean that there aren't still some things that are broken in the internet, you know, kind of
dating things. But it's one of the things where we say,
we know how to continually improve it, and that's one of the things that we continue to work on.
As we begin to close this out, I want to ask a question about the United States and America.
And based on your different religious preferences for AI, you might decide to kind of regulate this thing in one direction or another.
And the question becomes, okay, how do we implement this technology across America, across society?
There are some that get to this stage of the conversation.
And they're like, well, the doomer take, and even the gloomer take, is not sustainable because we live in a multipolar world with many
different actors, and this is kind of an AI race. And so, if not us, then our adversary doubles their
GDP, and we kind of stay stagnant. And that leads to a world that maybe we don't like. So I want to ask you
this question, what do you think America should do here? Like, what should our approach for AI be?
Well, one of the things that I started doing is calling artificial intelligence, American intelligence,
for precisely this reason, which is, it's really important that we embrace this cognitive
Industrial Revolution because the societies that embrace the Industrial Revolution had prosperity
for their communities, their children, their grandchildren, and kind of made, you know,
kind of the modern world. And I think the same thing is true for the cognitive industrial
revolution with artificial intelligence, or amplification intelligence, or American intelligence.
And we want the kind of esprit de corps of American values, the American dream, the empowerment of
individuals, the ability to, you know, kind of do your best work and to, you know, kind of
make progress from wherever you start in the rungs of society to take more economic
control over your destiny. And I think it's one of the reasons why it's particularly important
that American values are deeply embedded in this and that it's an empowerment of American society.
And it's part of the reason why I think that our regulatory stance needs to be much more, you know,
bloomer, zoomer, and
accelerationist, than it does
putting on the brakes, because I think
that's part of the future
of the world as we can help
make it become. As we close this
out, then just a final question.
Is there any chance in your mind that all this
AI stuff is kind of overhyped?
We basically, like, we flatline here
that we have ChatGPT, GPT-4,
and the innovation really slows
to a crawl, that like, none
of this matters that much
because it'll happen very
slowly over time?
I think there's zero chance of that.
So I think that already we see enough in these scaled-compute and learning systems that are just
only beginning to get deployed.
Like part of what 2025 is going to be the year of there, we see the acceleration of what
happens in software coding across the board.
And that software coding is both going to enable a bunch of other things.
Like all of us as professionals are going to have a coding co-pilot that helps us do our
work in various ways.
but it's also a template for how you advance a bunch of other functions of all of this work.
So I think even if you say, hey, GPT-5 is only going to be, like, you know, 10% better or 20% better, though I think it'll be a lot better than GPT-4,
and that the progress of the increased cognitive capabilities slows down,
I think the implications throughout the cognitive industrial revolution, the technology is already visibly present.
It's just a question of how we build it, configure it, deploy it, integrate it.
And I think that's part of the reason why, you know, American intelligence.
There you go, guys, from Reid Hoffman, 0% chance that all of this stuff slows down.
So into the frontier we go.
And we're going with you, Bankless Nation.
Reid Hoffman, thank you so much for joining us here today.
It's been a pleasure.
My pleasure as well.
I look forward to the next.
Yeah, we'll have to talk about crypto in the next conversation.
So everyone listening, the book is called Super Agency.
It is out now.
We'll include a link in the show notes.
Fantastic book with Reid's entire thesis around this distilled.
Gotta let you know, of course, crypto is risky.
So is AI.
You could lose what you put in, but we are headed west.
This is the frontier.
It's not for everyone, but we're glad you're with us on the bankless journey.
Thanks a lot.
