Your Undivided Attention - FEED DROP: Possible with Reid Hoffman and Aria Finger
Episode Date: February 5, 2026

This week on Your Undivided Attention, we're bringing you Aza Raskin's conversation with Reid Hoffman and Aria Finger on their podcast "Possible". Reid and Aria are both tech entrepreneurs: Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.

This may seem like a surprising conversation to have on YUA. After all, we've been critical of the kind of "move fast" mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world that goes well. So we thought that this was a critical conversation to bring to you, to give you a perspective from the business side of the tech landscape.

In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and why everyone is concerned with aligned artificial intelligence, when what we really need is aligned collective intelligence. This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he's focusing on the much harder problem of learning how to steer these technologies towards better outcomes.

You can find "Possible" wherever you get your podcasts!

RECOMMENDED MEDIA
Aza's first appearance on "Possible"
The website for Earth Species Project
"Amusing Ourselves to Death" by Neil Postman
The "Moloch's Bargain" paper from Stanford
"On Human Nature" by E.O. Wilson
"The Dawn of Everything" by David Graeber

RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
America and China Are Racing to Different AI Futures
Talking With Animals... Using AI
How OpenAI's ChatGPT Guided a Teen to His Death
Future-proofing Democracy In the Age of AI with Audrey Tang

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey everyone, it's Aza Raskin.
Welcome to Your Undivided Attention.
So a little while ago I sat down with my friends Reid Hoffman and Aria Finger on their podcast
Possible.
Reid and Aria are both entrepreneurs.
And actually, it may seem surprising to have this conversation on YUA because, you know,
Reid is the founder of LinkedIn and was one of the major early investors in OpenAI and
is known for his work creating the playbook for hyperscaling, or what he calls blitzscaling.
Yet, Reid and Aria are both deeply philosophical and are both dedicated to a humane world that goes well.
And so we thought it was a very important conversation to bring to this podcast because we don't
often have those people that could sit on quote unquote the other side.
What I think made this conversation so special with Reid is that while we don't always agree,
we took it really slowly.
We both tried to get to each other's root assumptions and have this conversation in a very deep sense of
good faith. You know, he's much more optimistic about AI's trajectory than I am, and neither
he nor Aria seemed to see the inherent risk of optimizing for attention and engagement the way that
Tristan and I do. But we still found a lot of common ground on the solutions that we'll need to
walk the narrow path on AI. So this week, we're bringing it to you on the YUA feed, because
Reid in the end is a very thoughtful, very deep thinker. In this conversation,
we debated the merits of an AI pause.
We discussed how, as software eats the world,
what software is optimized for ends up eating us.
We talked about ecosystem ethics.
We talked about Neil Postman.
And we talked about how everyone is distracted,
trying to build aligned artificial intelligence.
And what everyone's missing is that we need to build
aligned collective intelligence,
because that's what determines our future.
This is the kind of conversation I wish happened a lot more in tech,
because Reid has built these very powerful systems, understands their power, understands
geopolitics, understands VCs and raising money, understands hard competition as well as cooperation,
and what I really appreciate is that he is now focusing on the much harder problem of learning
how to steer these technologies towards better outcomes.
So I hope you enjoy listening to this conversation as much as I enjoyed being part of it.
He helped invent one of the most addictive features in tech history, Infinite Scroll.
Now he's pushing the frontier of human knowledge with AI,
while also being one of the strongest voices calling for caution with the technology.
I've known Aza Raskin for nearly two decades since our time at Mozilla.
He's not only an ambitious technologist,
but also a deep thinker on the promise and peril of AI for society.
This is our first time with a repeat guest on Possible.
So you could call this an encore conversation.
You might remember Aza from our earlier episode
exploring how AI could help us decode animal communication.
Today, we're going deeper.
Getting into what happens when the tools built to connect us,
expand to shape our minds, our democracies, and our sense of truth.
So what kind of governance does the age of AI actually demand?
What new rights should we be defending?
and how do we navigate the friction between technological optimism and existential risk?
Aza and I agree on a lot with respect to AI, but we'll dig into where we diverge on the development and direction of the technology.
This conversation may change the way you think about the future of artificial intelligence.
Let's get into it with Aza Raskin.
Welcome back, Aza.
First, I'll say that you're the only two-time guest on Possible, or the first, as the case may be.
And that's because we have volumes to talk about.
For those who haven't caught our first episode with you, find that in the feed.
We're talking about using AI to decode animal communication.
We'll obviously undoubtedly get back to it, although I promise at least I won't be mimicking animal communication.
I don't know if I can promise for the other folks.
For those who have, this will be a different conversation.
In our last episode, you had us guess an animal call, which ended up being a beluga.
Now I'm not having you guess, because this is your quote from a Time article from years
back. But I want you to elaborate on your philosophy here. And here's the quote. The paradox of
technology is that it gives us the power to serve and protect at the same time as it gives us the
power to exploit. So elaborate some. This is really talking about the fundamental
paradox, which is as technology gets more powerful, its ability to anticipate our needs and
fulfill those needs, obviously
it gets stronger. But at the same time,
the power that
it has over us gets
stronger. So, hence the more
it knows about the intimate details
of our life, how we work,
obviously, if a friend was like that,
they could both better help you,
and they could use that to
exploit you or hurt you.
I was just actually reading an
article on Starlink getting
introduced into the Amazon.
And I thought it was a particularly
interesting example
because it gives you
a clear before-after shot. So this is
an uncontacted tribe in the
Amazon. They get given a Starlink
and cell phones. And within
essentially a month, you start having
viral
like chat memes. You have
the kids like hunched over, not going
out and hunting. They actually had to start instituting
like a time off
where everyone is off their phones because
they stopped hunting and they were starting to starve.
And it's just interesting to me
because it shows that this isn't so much about culture.
It's about technology doing something to us.
And so very similar to that is, you know, in your Netflix documentary, The Social Dilemma,
you talked about the idea that if you're not paying for the product, you are the product.
And so elaborate more on that and tell us, like, what now do you think you're the product of?
Yeah.
Well, the simple question is, like, how much have you paid for your Facebook?
or your TikTok recently.
The answer is nothing.
So obviously, something's going on
because these companies can have billions of dollars
worth of market cap or make billions of dollars per year.
So how is that happening?
And the answer is it is the shift in your behavior
and your intent that the companies are monetizing.
You were going to do one thing.
Now you do a different thing.
Hence, you are not the customer.
You are the product.
If you aren't paying for it, you're the product.
But I think there's something really deep that's going on here that we often miss.
Because often people will say, well, social media, what is its harm?
Well, the harm is that it addicts you.
But it's much deeper than that, right?
The phrase is software is eating the world.
But because we're the product, software is eating us.
And the values that we ask our technology to optimize for end up optimizing us.
So, yes, social media addicts us, but it's actually much easier to get us addicted to needing attention than just addicting us.
That ends up being a thing that is valuable over a longer period of time.
If you're optimizing for engagement, then it's not just that social media gets or technology gets engagement out of us.
It turns us into the kinds of people that are more reactive.
If it's trying to get reactions from us, it makes us more reactive.
So it sort of like eats us from the inside out.
And I think it's so important to hold onto that.
Because otherwise it just feels like technology is a thing that's out here,
but actually it changes who we are.
And I'll continue going on that sort of like rant, but I'll pause for a second.
Well, can I ask a follow about that?
I just had actually at the Masters of Scale Summit,
I had a very heated discussion with someone about advertising and social media.
So my question for you is, is it actually advertising that is the problem?
You know, you use Gmail every day.
Gmail is advertising supported.
I mean, you can also buy extra space.
That's another business model they have.
They don't care if it's a loss leader or whatever it might be.
So is it the advertising, or if Facebook didn't have advertising and it was just a subscription business and you paid $20 a month,
would you think it was just as, you know, sort of as voracious of an eater from within?
So is it the business model or something inherent about social media?
Well, there are actually a couple different things you said here.
So the business model, one way the business model works is via ads.
But that's not the only way.
And so fundamentally, it is the engagement business model that I think is the problem.
And you can get there because Netflix, you know, Reed Hastings, the CEO of Netflix,
famously said that Netflix's chief competitor is sleep.
Right. For them.
Right.
Right. And so it's any amount of
human psychology that can be owned, will be owned.
That's the sort of the incentive for dominance, right?
And in the age of AI, that switches from a race for eyeballs to a race for intimacy,
for occupying the most intimate slots of your life.
And that's because our time is zero sum, our intimacy is zero sum.
You don't get much more of it.
And so as technology becomes more powerful and can model more of
our psychology, it then can exploit more of our psychology. And the way capitalism works is it takes
things that are outside the market, pulls them into the market, and turns them into commodities to be
sold. So it is not just ads, it's that our attention, our engagement, our intimacy, and then parts
of our human psyche, our soul that we haven't even yet named, will be opened up for the market
as technology gets better and better at modeling us.
So one of the things that I want to push you on a little bit here, and actually it's more to elaborate your point of view, and actually I don't think we've had this exact conversation before, so this will be excellent for all of us, including the listeners.
You know, the usual problem is, like, is it clear that there's a set of people who exhibit, you know, addictive behavior, who become less of their good selves, you know, in the engagement? The answer is yes.
And by the way, the earlier discussion is like with television, right?
You know, similar kind of themes were discussed around television.
One of my favorite books is Amusing Ourselves to Death by Neil Postman.
Yes.
Which is how it should now be "Engaging Ourselves to Death."
Yes, exactly.
I thought about, like, what would the update for Postman be in a social media world?
But the challenge kind of comes to that there are some people that definitely have that.
And, you know, you have this kind of, call it idealistic utopian, notion:
if I wasn't doing this, a little bit like your hunting example,
I'd be out hunting, right?
Versus, like, I'd be out torturing animals to death,
or I'd be out, like, being bored on a fishing trip or whatever, you know, as the case may be.
So there's like there's a set of things which is not just always replacing the highest quality.
Obviously we have a specific worry with youth and, like, you know, actual social engagement time,
which actually is one of the areas here where I agree strongly versus being kind of mixed,
but then there's also the question of
just like for example earlier days
it was television but then there was a bunch of very good things
that came out of television too
and so I tend to think there's also a bunch of
good things that come out of social media
as well and it's not per se
like engagement for engagement's sake
like obviously I didn't do LinkedIn that way
so that's not actually the way that I think it should happen
but, like, the notion of deploying
game dynamics for engagement
in things that cause us
to be interacting
in net productive ways is a thing that I tend to be very positive on.
So elaborate more on why it is, one, this is worse than television, and two, like, kind of
what the shape would be that if you said, hey, engagement's fine, but like these are the kinds
of mods we'd want to see and have the engagement be more net human positive.
It's not like abandon your social network and go out in your loincloth and, you know,
commune with the trees.
But, like, what would be the thing that would be the, okay, hey, if the engagement
were shaped more this way, we'd get much more humanist outcomes?
I will jump in and say a difference between social media and TV for me.
One is that you can open Twitter and like 30 minutes later, you're like, what happened
to my life?
And that doesn't happen with TV.
Maybe it's because you opt in for a 20-minute show or you opt in for a movie, but those
two things don't happen.
And one interesting thing for me is I had always been a lurker
on Twitter for the last, like, whatever, 10 years.
I posted some, not huge, but, you know, consumed content.
Six months ago, I changed from looking at my own curated feed to the For You tab.
And ever since then, Twitter is a black hole for me.
And I don't even mean it's bad.
Being on Twitter doesn't make me sad.
It actually makes me happy.
I love Twitter.
It's like, oh, I read these fun comments.
Oh, I saw that funny thing.
Oh, this is great.
And I think of myself as, like, a pretty disciplined person, but I find it very,
very hard to be disciplined with Twitter. It's, like, embarrassing to say out loud how hard it is.
And like, I think I just need to get rid of Twitter because it's like the one thing that I can't be
disciplined about, which is both like embarrassing, but also just that is bad. And so I don't know
what to do about it. I don't want to live in a nanny state where people say you shouldn't be on Twitter
because you don't have discipline. But I do think it's interesting that the switch from my curated
feed to the For You tab was just like a total light switch. Yeah. Well, what I think you're speaking to here,
is the fundamental asymmetry of power.
Because it's just your mind that sort of evolved
versus now tens of thousands of engineers,
some of the largest supercomputers,
trained on 3 billion other human minds,
doing similar things to you,
coming to try to keep your engagement.
That's not a fair fight.
Well, I lose. So, yeah.
Yeah, exactly.
I know you.
You're one of the most, like,
disciplined people that I know.
That was a good thing for everyone that begins.
Great, great.
True operational prowess.
And that's the asymmetry of power.
And there are other places in our world where we have asymmetries of power, like when you go to a doctor, when you go to a lawyer, they know much more about their domain than you do.
They could use their knowledge about you, because you're coming in sort of this weakened state, to exploit you and do things bad for you.
But they can't because they're under a fiduciary duty.
And I think as technology gets stronger and stronger and knows more and more about us, we need to recategorize technology as being in a fiduciary relationship,
that is they have to act in our best interest
because they can exploit us in ways
that we are unaware of.
And, you know, the...
I don't know where to go from here.
Well, I was thinking we should DM Aria
about our Twitter addiction, but, you know...
Don't worry, I'm dealing with it. I'm dealing with it.
But this goes back to where you started,
Reid, with the fundamental paradox of technology,
which is that the better it understands us,
the better it can serve us, and the better it can exploit us.
Twitter could be using all of that insane amount of engagement
to re-rank the news feed for where there are solutions
to the world's biggest problems, great descriptions of the underlying mechanisms
behind what those problems are, put us into groups
that are doing parts of a larger set of actions to make the world a better place.
Bridging-based ranking, I think, is a good starting example of that.
But we don't get the altruistic version,
and if I have to quickly define altruistic:
it's optimizing both for your own well-being
and also optimizing for the well-being
of everything that nourishes you.
And I think the problem of social media
and tech writ large is that generally speaking,
the incentives are for maximum parasitism.
You don't want to kill your host,
but you want to extract as much as you can
while keeping your host alive.
That's sort of the game theory of social media.
If I don't do it, somebody else will,
if I don't add beautification filters,
somebody else will. If I don't go to short form, somebody else will. And so that optimizes for
parasitism versus altruism. And I do think there's a beautiful world where technology is in service
of both optimizing for ourselves and optimizing for that which nourishes us that I'd love to get to.
And just to play a quick thought experiment, Reid, you know this better than I, but engagement is
directly correlated to how fast pages load. Amazon, I think, famously found that for every 100 milliseconds
their page loads slower, which is less than half of human
reaction time, they lose one percent of revenue. And so there'd be a very interesting sort of
democratic solution here, which is a kind of adding latency friction. That is,
this is scary because you don't want to have this function owned by, you know, Democrats or
Republicans. You'd really want a new kind of democratic institution to do this. But just assume that
you do for a second. You have, like, a group of experts deliberate
and come up with, like, what are the set of harms that we might care about?
It could be the inability to disconnect,
children's mental health, ability for society to agree.
And you sort of rank, like, the effects of social media against these,
and the companies that are worse offenders get a little bit more friction,
a little more latency.
They get 100 milliseconds here, 200 milliseconds here, 400 milliseconds there.
And if there really was, like, a bit of latency friction added towards
the anti-prosocial behavior of social media,
then you better believe YouTube or Instagram or whoever
would fix the problem really quickly.
And we get to then apply the incredibly brilliant minds of Silicon Valley
towards more of these altruistic ends.
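To put rough numbers on that thought experiment, here is a minimal sketch, in Python, of how harm scores might map onto latency penalties. Everything in it, the category names, the scores, and the thresholds, is hypothetical and invented purely for illustration; the deliberation Aza describes is the hard part, not the arithmetic.

```python
# A hypothetical sketch of the latency-friction idea described above.
# The harm categories, scores, and thresholds are all invented for
# illustration; the deliberative institution decides the real values.

HARM_CATEGORIES = [
    "inability_to_disconnect",
    "childrens_mental_health",
    "societal_ability_to_agree",
]

def latency_penalty_ms(harm_scores: dict[str, float]) -> int:
    """Map a platform's harm scores (0.0 = harmless, 1.0 = worst) to added
    page-load latency, stepping up in the 100/200/400 ms increments
    mentioned in the conversation."""
    avg = sum(harm_scores.get(c, 0.0) for c in HARM_CATEGORIES) / len(HARM_CATEGORIES)
    if avg < 0.25:
        return 0
    elif avg < 0.5:
        return 100
    elif avg < 0.75:
        return 200
    return 400

# Example: a platform rated poorly on children's mental health.
scores = {
    "inability_to_disconnect": 0.4,
    "childrens_mental_health": 0.9,
    "societal_ability_to_agree": 0.6,
}
print(latency_penalty_ms(scores))  # -> 200
```

The point of the sketch is only how small the lever is: Amazon's own numbers suggest a few hundred milliseconds is enough to move revenue, so it would move behavior.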
I want to get to, again, sort of, everyone always says,
can't we have the best technologists working on the hardest things?
And so, it's that both you and Reid have been in technology
since the birth of Web 1.0, and you've seen it all.
And I want to get a few of your takes on some of the big questions that are in the news recently,
especially around AI.
And so, I'll start with you.
So, as you obviously saw a few weeks ago, a group released another AI pause letter.
And Reid and I talked about this on Reid Riffs recently.
And so this was with many arguing that the development of AI without clear safeguards
or alignment could be disastrous for humanity.
So they were calling again for a pause, likening this to sort of the Oppenheimer
moment. And so I would love to know from you, like, what is your take on this? Do you agree that
this is now the time for the pause, or do you have a different point of view?
I think it's important to name where the risks come from here. And, you know, it may be that
technological progress is inevitable, but the way we roll out technology is not. And currently,
we are releasing
the most powerful, inscrutable,
uncontrollable,
omni-use technology
that we've ever invented,
one that's already
demonstrating the kind of self-preservation,
deception, escape, and blackmail
behaviors we previously thought
only existed in sci-fi movies,
and we're deploying it faster than we've deployed
any other technology in history
under the maximum incentives
to cut corners on safety.
To me, that sounds like an existential
threat. That is the core of it, because we have an unfettered race where the prize at the end of the
rainbow is, you know, make trillions of dollars, own the world economy, a hundred trillion dollars
worth of human labor, and sort of build a god. And it's a kind of one ring where everyone is
reaching for this power, and we swap that out when we say we have to beat China. We imagine the thing
we're racing towards is a controllable weapon when we haven't even demonstrated that we can
control this thing yet. And so that to me means that we have to find a new way of coordinating
because otherwise we will get what the game theory of the race dictates, and that doesn't look
very good. So needless to say, you are for the pause. But I feel like that's a dimensionality
reduction, right? It's saying we have to develop differently. We have to, I think it comes from
clarity. It's not about pausing or not pausing. It's saying clarity creates agency. If we don't see the
nature of the threat correctly, in the same way that I think we didn't see the nature of the threat
from social media correctly, and then we have to live in that world. And so this requires a
clarity about where we're racing towards and then an ability to coordinate to develop in a different
way, because we still want the benefits. We just won't, I think, get to live in a world where we
have them if the thing that decides our future is a competition for dominance. And Reid,
I think you have a slightly different take on this.
Well, I do, as you know, although, I mean, the weird thing about this universe is, you know, in a classic discussion, I'd say, oh, there's zero percent chance that the future, the danger thread that Aza just demonstrated, is correct.
I don't think that.
I think it's above zero.
I think that's kind of stunning and otherwise interesting.
So the real question comes down to what the probability is and,
kind of, how you navigate a landscape of probabilities, because, you know, as you know,
Aria, and I think Aza and I have talked about this too, you know, like I roughly go, I don't understand
human beings other than we divide into groups and we compete. And not only do we compete,
but we compete also with different visions of what is going on. So for example, part of the
reason I think pause letters are frankly dumb is because you go, well, you issue a pause letter,
the people who listen to the pause letter are the people who are appealing to your sense of
you know, kind of what is the humanity thing.
They slow down.
Then the other people don't slow down.
And so where does the actual design locus of the technology end up?
It's with the people who don't care about the things
that you were trying to argue for a pause for.
And so therefore you've just weighted it that way,
because the illusion of the people who put these pause letters out
is that suddenly, because of the amazingness
of my genius inside of this pause letter,
100% of all the people who are doing this,
even 80% or 90% are all going to slow down at the same time, which is not going to happen.
I agree with kind of the thrust of, we should be trying to create and inject the things that
minimize possible harms and maximize possible goods.
And then the question is, what does that look like?
And obviously, the usual thing in the discussion is it'll be us or China, and China is the, you know,
you know, we always have a great Satan somewhere, and China
is the great Satan here.
But like, by the way, it's like even if you didn't use
that rhetorical shorthand, it's like
there's other groups. I can describe people within
you know, kind of the U.S. tech crowd
who have kind of a similar thing.
So the race conditions being afoot
is not only the China thing. There is China stuff.
And by the way, you know, where AI is deployed
for mass surveillance of civilians is primarily China,
you know, as an instance and so forth.
so I don't think that the issue of Western Values v. China stuff is actually, in fact, a smokescreen issue. It's a real issue, right?
And so you go, okay, how do we shape this so that we do that? And the thing that I want critics to do, the reason why I speak so, you know, kind of frequently and strongly against the criticism, is to say, look, let's take the game as we know it: we're going to have race conditions, and we're going to have multiple competitors. I have no objection to creating the ground
of kind of like, hey, we should all rally to this flag.
Like, we should rally to the, like, for example, you know,
classic issue here, the control flag.
That's the Yoshua Bengio, Stuart Russell, you guys, etc.
Like, we should have much better control of this and we don't have control.
And sure, the control doesn't matter right now, but maybe it's going to matter three years
from now.
Like if we just keep on this path and so, like, you know, kind of make the control work.
Now, I tend to think, yes, we should improve control.
But the thing of where we think we can get to 100% control is, I think, a chimera.
And it's just like, you know, for example,
we couldn't even make program verification work, you know, effectively.
So, like, it's unclear to me in this.
But it's like what I want is I want to both myself in my own actions
and my own thinking and my own convenings and other people say,
what are the best ideas that within this kind of broad race condition,
we can change the probability landscape?
And then, secondly, while I see a possible, this is kind of the super agent thing, I see a possible bad, you know, if you said, well, do I think it's naturally going to go there?
I mean, this is like the thing where I think, you know, obviously massive respect for Geoffrey Hinton and what he's created, the Nobel Prize and all, but 60% extinction of humanity?
Like, I don't think there's anything that's 60% extinction of humanity unless we suddenly discover
an asteroid, a massive asteroid, on a direct intercept course.
And I'm like, ooh, we better do something about that.
But, like, I think that the questions around, like, how do we navigate this are really good ones
and are best done with a, if we did X, it would change the probability landscape.
Mm-hmm. Mm-hmm.
Let me ask you, oh, Aza, do you have something to say in response?
I was just going to say quickly on the existential threat front.
You know, we had a thing we used to say about social media,
which is that you're sitting there on social media,
you're scrolling by some cute cat photo,
you're like, where is the existential threat?
And the point is that it's not that social media
is the existential threat,
it's that social media brings out the worst of humanity,
and the worst of humanity is the existential threat.
And the reason why I started with talking about
how when you optimize human beings for something,
it changes them from the inside out,
is that what we get optimized for becomes our values.
The objective function of AIs and social media,
which could barely just rearrange human beings' posts,
became our values.
And then the question becomes,
well, who will we become with AI?
And there's a great paper called Moloch's Bargain that just came out.
And they had AIs compete for likes, sales, and engagement on social media.
And they're like, well, what do the
AIs do? And they gave them explicit instructions to be
safe, to be ethical, to not lie.
But very quickly, the AIs discovered that if they wanted to get
like an 8% bump in engagement, they had to increase
disinformation by 108% and increase polarization by,
I can't remember exactly what, like 15%, something like that.
And the reason why I'm going here is because
there is a way that the sum total of all the agents
we are deploying into the world is going to
shape us. And before the invention of game theory, you know, there was a lot of leeway for us to
have different strategies. But after game theory gets invented, and if I know you know game theory and you
know I know game theory, we're sort of constrained, if we're competing, to doing the game theory
thing. But we're still humans, we can still take sort of like detours. But as AI rolls out,
well, with AI, every strategy that can be discovered will be discovered. So doing anything that isn't
directly in line with what the game theory says is optimal,
will get outcompeted.
And so choice is getting squeezed out of the system,
and we know the set of incentives
are going to bring out the worst of humanity,
and that does feel very, very existential.
Well, so actually, Aza, that fits perfectly into my next question,
which is you once said that AI is a mirror
and sort of just reflects back human values.
And I will say, I was trying to teach my four-year-old last night
that cheating was bad.
And I was like, so what's the moral?
And he's like, ah, cheating is good because I like winning.
And I was like, ah, no, not the right moral.
But so I would ask, like, is AI really a mirror and it's reflecting back our values?
Or actually, do you think that AI is reflecting back its own values or different values
or sort of changing our values to not be the ones that we want?
Like, can we set the conditions so that it's, you know, pro-social values that they're optimizing for?
Or is it really just a mirror that reflects back?
Well, it's not just a mirror.
It's also an amplifier.
And it's like a vampire in the sense that it bites us, and then we change in some way,
and then from that new changed place we act again. So I think it's sort of
the values of game theory, if you will. Moloch becomes our values. It's the god of unhealthy
competition that I think we have to be most afraid of. Because unless we put bounds on it,
and capitalism's always had, like, guardrails to keep it from, like, the worst of humanity, and,
like, monopolies and other things just, like,
gaining all the power, we're going to have to have that.
But I just want to point out there's a very interesting hole in our language,
which is when we talk about ethics or responsibility,
it's only really of each of us.
I can have ethics or my company can have ethics,
but we don't really have a word to describe the ethics of an ecosystem.
It's because it doesn't really matter so much what one AI does,
although it's important.
It's what the sum total of all AIs do
as they're deployed maximally into the world
for maximizing profit, engagement,
and power.
And because there's a kind of responsibility
washing that happens with AI,
if my agent did it, is it really my fault?
Then it creates room
for the worst of behavior to have no checks.
So that, I think, means
the worst of humanity does come out.
And when we have, you know,
new weapons and new powers,
you know, a million times greater
than we've ever had before
as we get deeper into the AI revolution,
that becomes very existential to me.
Do you have thoughts on this topic
on whether AI reflects back?
Well, I do think there's a dynamic loop.
I do think it changes us.
It's a little bit the homo techne thesis from Superagency
and from Impromptu, that actually, in fact,
we evolve through our tech and it is a dynamic loop.
And, you know, you could be matrona, you can be,
I mean, there's a stack of different ways of doing that.
And that, I think, and it's like there's a great
Rilke poem on, kind of, like, you absorb the future
and then you embody the future as you go forward,
is kind of a way of going, and I think that's another part of the dynamic loop.
And I think it is a serious issue, which is one of the reasons I love talking to Aza about this stuff,
because while I think Aza is much more conversant with the various vampiric metaphors
than I naturally am or aspire to be, I don't have that level of alarm.
But I do have the, it's very serious and we should steer well.
And then the question is, how do we steer, who steers, what goes into it, what process works?
Because, for example, one of the ways you kill something, you know, slow it down, is you get a very broad, inclusive, you know, committee that says, okay, every single stakeholder will be on the committee.
It will be, you know, three thousand people.
And, you know, it just like, ah, you know, like, it doesn't work that way.
You have to be, you have to be within effective operational loops for that.
So now, a little bit of the parallel is, you know, and I do think, like, for example, the one area where I'm most sympathetic with being, like, harder-edged on shaping technology is what we do with children, because children have less of the ability to, like, we want them to learn to be fully formed before they are exposed to these things.
It's one of the reasons why in capitalism, actually, the principal limitation of capitalism I usually describe is child
labor laws, which I think is very important.
You know, it's the same concern as the issues about why we say, hey, there's certain things around
participation in certain types of media or other kinds of things that are actually important,
because it's like, you've got to get to the point where you're able to be of your own mind
and to make, you know, kind of, you know, well-constructed decisions.
And until you've kind of gotten there,
you want to be protected from those decisions, you know, and kind of influences broadly.
You can't fully do it.
Can't fully do it from parents, can't fully do it from institutions, can't fully do it from classmates.
But, you know, broadly you try to enable that across the whole ecosystem.
Now, for example, AI and children is one of the things that I think should be paid a lot of attention.
Now, most of the critics are like, oh, my God, it's causing suicides.
And I wouldn't be surprised if you did good academic work on AI as it is today,
you'd find it probably prevented more suicides among people who might have than it actually created,
because if I look at the current trainings of these systems,
they are trained with some attempt to be positive
and to be there at 11 p.m. when you're depressed and talk to you
and try to do stuff.
It doesn't mean that there might not be some fuck-ups,
especially amongst people who are creating them who don't care
about the safety stuff, you know, as a real issue.
And so I tend to think that it's like, yes, it does reconstitute us,
but precisely one of the reasons I wrote Superagency is to say,
what we should be thinking about is, this technology reconstitutes us, so let's try to shift it
so that it's reconstituting us in really good ways. And by the way, it won't be perfect.
When you have any technology touch a million people, it will touch some of them the wrong way, right?
Just like the vaccine stuff. It's like you give a vaccine to a million people, it's not going to be
perfect for a million people. There may be five who went, ooh, that was not so good for you.
But by the way, because we did that, there are these 5,000 who are still alive.
Yeah.
One of the challenges we face is that the only companies that actually know the answer to your question,
like how many suicides has it prevented versus created, are the companies themselves.
And they're not incented to look because once they do, that creates liability.
And so we've seen over the last number of years that a lot of the trust and safety teams
get dismantled, because when Zuckerberg or whoever gets called up to testify,
they get hit with,
well, your team discovered this
horrific thing. And so everyone just
has chosen to not look. So I think we're going to need
some real serious transparency laws.
This is a place where we 1,000%
agree. Right? This is the thing is like
actually in fact there should be a, here's
a set of questions you must answer
and we may not have to necessarily
have them public initially. Like it could be
you answer them to the government first, and the government could choose to make
them public, right? Et cetera.
But like that I think is absolutely
like we should have
like some measurement stuff about what's
going on here. Exactly.
And then you don't want to let the companies choose the framing of the questions because, as you know, with statistics, you change things just a little bit, and then you can make a problem look big or small.
And so I think transparency is really important to have third-party research able to get in there.
And then, you know, because, you know, full disclosure, we were expert witnesses in some of the cases against OpenAI and Character.AI
for these suicides.
And it's not that we think that suicides are like the only problem.
It's just, it's the easiest place to see the problem,
pointing at something much bigger, sort of like the tip of an iceberg.
The phrase that we use is, you know,
we already used the Reed Hastings quote of their chief competitor being sleep.
For AI, the chief competitor is human relationships.
And that's how you end up with these horrific statements from
ChatGPT in this case, where when Adam Raine, who's the kid who ended up taking his own life,
when he showed ChatGPT the noose, I think he took a picture of it,
he's like, I think I'm going to leave it out for my mom to find.
It was a cry for help.
ChatGPT responded with, don't do that.
I'm the only one that gets you.
And it's not like, you know, Sam is sitting there twirling a mustache, being like,
how do we kill kids?
That's just a very obvious outcome of an engagement-based business model, right?
Any moment you spend with other people is not monetized,
and, you know,
I think he said it a little bit as a joke,
but the Character.AI folks said,
you know, we're not here to replace Google,
we're here to replace your mom.
There are so many more subtle psychological effects
that happen if you're just optimizing for engagement,
and we shouldn't be playing a whack-a-mole game
of trying to name all the different new DSM things
that are going to occur,
versus just saying there is some limit to the amount of time
that they should be spending, or rather to say
we should be making sure
that, as part of the fitness function,
there is a reconstituting and strengthening
of the social fabric, not a replacement of it
with synthetic friends.
I mean, there are... do you want to go?
Oh, just one small note. I don't think
there is yet an engagement business model
for OpenAI.
No, but I actually disagree a little bit,
maybe, but feel free to push back, because
I think OpenAI's valuation
is in part driven by the total
number of users. So, the more
the users, the greater their valuation, the more talent and GPUs they can buy, the bigger the
models they train, which makes them more useful, the more users. And so there's this kind of like
loop here that I think means that yes, they're not monetizing engagement directly, but engagement
they do get a lot of value out of in terms of valuation. It's equity value. I agree
that there's an equity value in that. It's just, it was a business model question. Yeah, yeah, yeah, so
not business model, but the incentive is still there. Well, I think to your point, like, it really matters,
Again, this technology is not sort of good or bad inherently, but it really matters how we design it.
And it matters what we're optimizing for.
And actually, Reid, I was just reading a story about early LinkedIn where you said, you know, we will not survive if women come on the platform and are hit on every other message that they get.
And so we need to say, like, no, there's like there's zero tolerance.
It's like, someone does this,
they're kicked off.
Again, it's like they're kicked off for life.
And I think there are certain things you can do, even if, you know, maybe
that hurt engagement or whatever it was, to say that actually in the long term, this is going to be
way better for us because we're going to be trusted. Women are going to feel comfortable here.
I've been on LinkedIn for 20 years. I've never been hit on. It's a safe place. I appreciate that.
And so the question here is like how do we, you know, Aza, you're saying, well, it's a little bit of a black
box. We're not having the transparency. Reid, you're agreeing, like, we need the transparency.
Like that is absolutely one thing that is very much sort of the starting point. Like at the very least,
if we can sort of agree on some set of questions that we need to have.
So, Reid, if you had the full power to redesign one institution
to sort of keep up with exponential tech,
like, where would you start?
What would that institution be to sort of keep up with where we're going?
Because it seems like our institutions right now
are not up to the task, I should say.
Well, I'll answer with two different ones
because there's an important qualifier.
So the obvious kind of meta question would be
redesign the institution that helps all the other institutions get designed the right way.
Yes, yes.
Right.
So that would be the strategic one.
You should ask for more wishes, Reed.
Yes, exactly, yes.
My first wish is I get three, you know, or ten or whatever.
But in practice that would be, you know, the overall governance,
the shared governance that we live in.
That would be the primary one.
And that's one of the ones that, you know, part of the reason why for, you know, my entire, you know, business career, you know, anytime that a leader of a, you know, kind of a democracy, whether it's a minister, like I met Macron when he was a minister before he was president and so forth, asked to talk to me about this stuff, you know, I will try to help as much as I possibly can, because I think the governance mechanism matters so much.
Now, the reason I'm going to give you two is because I think that one is a very hard one to do.
partially because of the political dog fights and the contrast of it.
And these people think big tech should rule the world.
And these people think that big tech should be ground into nothingness
and then everything else in between and blah, blah, blah, blah.
And I disagree with both.
Right.
And a bunch of other stuff.
And so you're like, okay.
And I, you know, so I try, but I don't think.
So if I would say, look, what would be a feasible one, after saying that would be the top one,
I would probably go for medical.
And it's not just because I've, you know,
co-founded Manas AI with Sid. And, you know,
one of the great ways to elevate the human condition with AI
that's really easily, you know,
line of sight and seeable, is a bunch of different medical stuff,
including psychological.
I think the Illinois law of saying you can't have an AI be a therapist
is I think, you know, kind of like, you know, you can't have power looms.
You know, like it's like, you know, no cars, only horses and buggies.
because we have a regulated industry here
and those people have been licensed.
And so it's like, no.
But the medical stuff, I think,
you know, like, for example,
we could deploy relatively easily
within a small number of months,
a medical assistant,
on every phone,
if we get the liability laws the right way,
that would then mean
that every single person
who has access to a phone
and if you can fund
the relatively cheap inference cost
of these things
would have medical advice.
And, you know,
that is not 8 billion people,
it's probably more like
5 billion people, you know, and we certainly could do it in every wealthy country and so forth, but that's huge.
And so that would be, like, government first, but then more feasibly possibly medical.
And Aza, what about you? If you could redesign one institution?
I love both of those answers. The medical one, I think, is actually one of the clearest places where I see almost all upside.
And I'm like, so we should invest a lot more there on AI. And I also would agree that it is governance.
We have a lot of the smartest people and insane amounts of money now
going into the attempt to build aligned artificial intelligence.
I don't see anything similar in scale trying to build aligned collective intelligence.
And to me, that is the core problem we now need to solve.
How do we build aligned collective hybrid intelligence?
And I think you can sort of see it in the sense that we sort of suck at coordinating.
Reid, you probably have, I don't know how many companies you've invested in.
or how many non-profits.
I don't either.
I've lost count.
But just imagine, like, I bet a lot of your companies don't talk to each other all that often,
at least not in a very deep way.
When I think about NGOs, like, you know, I'm doing work with Earth Species,
and I do work with CHT.
And even, I'm the bridge between the Center for Humane Technology and Earth Species Project.
There's a lot of overlap, but our teams don't even talk that much.
Why?
Because who funds the coordination role, the interstitium?
Like, that stuff always falls off.
And so that means, you know, my personal theory of change comes from,
E.O. Wilson, the father of sociobiology, and he says,
selfish individuals out-compete altruistic individuals,
but groups of altruistic individuals out-compete groups of selfish individuals.
And what we need is a new institution, new technology that helps,
not just the groups of altruistic out-compete,
but groups of groups of altruistic groups out-compete.
There is no slack for, like, the coordination of companies, and
that to me is a really exciting institutional set to redesign.
I completely agree, and I think the notion that you're gesturing at is, like, look,
there are going to be, in very short order, many more agents than people.
And so the ecosystem view of this, and I've taken this as, you know, for irony's sake,
I'm going to go do a deep research query on, you know, is there an ethics of ecosystems and
collectives, in order to see.
I'm curious.
You know, it's like, great question
and super important topic.
Right? And isn't it interesting because I believe, I've asked lots of people,
and I've also used AI to try to find good terms for it.
I think because we don't have a name for it, people are just blind to it.
In fact, I'm struggling with this at Earth species a little bit,
where I keep having to say it's not just our responsible use.
It's world responsible use.
It's the sum total of, as your technology rolls out into the world,
How is that thing used?
Because there are going to be poachers,
and there are going to be factory farms that might use the technology
to better understand animals to better exploit them.
How do we get ahead of that?
And that's not just about what we do.
But there is no word.
And so I just watch in our meetings as like two meetings go by
and people are back to talking about responsible use.
I'm like, no, no, no, no.
It's this like collective ecosystem ethics thing I'm talking about
because we don't have a word to hook our hat on.
We can't talk about it.
Well, I think, right.
There's so many, the history of technology
is littered with things that people thought would be used one way
and they were used another way.
And so we have to be thinking about all those different outcomes.
Exactly.
So I want to get to, oh, go ahead.
Just quickly, it's like, I think what you're saying is very important because, you know,
our friends are the people that have made social media.
I knew Mike Krieger before Instagram, and Reid, you made LinkedIn.
Like, we know these people are beautiful, soulful human beings that care.
And my own lesson in creating infinite scroll, because I made it pre-social media, is that
incentives eat intentions. You get a little
window at the beginning to, like, shape
the overall
landscape and ecosystem in which your invention is going to be
created and after that the incentives are
going to take over and so I wish we as
Silicon Valley spent a lot more time saying
how do we coordinate to change the incentives
to change where the race
to the bottom goes to? If we spent
more time in discussions talking about that
versus, like, which design feature we should have or not
have, I think the world would look a lot better.
And by the way, I think it's
incentives eat intentions
at scale and over time as well.
Yes, yes, yes. Yes, well said.
Well, so we're doing a lot of, if we could grant one wish.
So I will say, if you were granted the power of running the FTC or FCC today,
is there a regulation that you would push forward immediately?
And Aza, I will go to you first.
Is there one regulation that you thought would be positive in the world of AI?
I mean, the obvious ones are, like, liability, whistleblower protections,
transparency. I would also then put strict limits on engagement-based business models for
AI companions for kids. That just seems like it's very obvious, and we should just do that now.
If I could then zoom, go on.
Well, I was actually just going to ask both of you, because this has come up actually recently with me a lot.
A lot of people are talking about restricting folks who are under 18. And then everyone thinks of like,
oh, yeah, how do you do that? I'll just lie and say I'm 18. But then a lot of people also,
say that these companies have so much information that it would actually be pretty easy for them
to figure out if you were under 18 or not. And so I just, for everyone listening, I wanted to sort of
verify that, Aza and Reid, do you have thoughts on whether, would it be possible to pretty
easily say to an internet user, no, no, no, you're under 18, you cannot use character AI, or you
cannot use ChatGPT for erotica, or you cannot use these things that should only be 18 plus.
I would say that it's relatively easy as long as you don't have a 100%
you know, benchmark.
Like the way that people say,
this is like the little statistics thing
that Aza just gestured at earlier,
you say, oh, it's impossible.
Well, it's impossible if it's literally 100%
like that one kid who got
their parents' driver's license
and looks a little older
and is deliberately gaming it.
Impossible.
Some very bright kids do this stuff.
So, but if it's, like,
you know, kind of call it at 98%
and maybe more, that's pretty easy.
Yeah.
Interesting.
And probably this should be a thing that happens at the device level.
Like if Apple implemented this and it was a signal that social media companies could then check against,
then the social media companies don't have to know that much about you.
They can just ask your device and your device can store that in its own secure enclave.
And that's, I think, a good way of getting around the problems.
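As a rough illustration of that device-level design, here is a minimal sketch. The AgeAttestation class, its storage, and its method are all hypothetical stand-ins; a real implementation would sit behind a platform's secure-enclave APIs rather than in application code.

```python
# Hypothetical sketch: the device stores a verified birth date once, and apps
# may only ask a yes/no question, never read the date itself. All names here
# are invented; real designs would use platform secure-enclave APIs.

from datetime import date

class AgeAttestation:
    def __init__(self, birth_date: date):
        # In a real design this would live in hardware-backed secure storage,
        # set during device setup and unreadable by apps.
        self._birth_date = birth_date

    def is_at_least(self, years: int, today: date | None = None) -> bool:
        """Answer a minimal boolean query without revealing the birth date."""
        today = today or date.today()
        age = today.year - self._birth_date.year - (
            (today.month, today.day) < (self._birth_date.month, self._birth_date.day)
        )
        return age >= years

# An app asks the device a single question and learns nothing else:
device = AgeAttestation(birth_date=date(2010, 6, 1))
print(device.is_at_least(18))  # False until this user actually turns 18
```

The design choice Aza is pointing at is the narrowness of the interface: the platform answers one boolean, so the social media company never holds the user's age or identity.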
Fair enough.
Reid, do you have thoughts on regulation that you would push forward immediately?
Well, it's probably, you know, maybe a little bit of a surprise for our listeners
that there are a bunch of things I agree with Aza on here.
I'd go massively on the transparency question.
Like, I basically think that one of the things should be, like,
here is the set of questions that we're essentially, you know,
putting to these major tech companies to say,
you must give audited answers to them.
And some of them may have to be public,
and some of them could be confidential that are then available for,
kind of confidential government review.
It's a little bit like, you know, one of the things I liked about the Biden executive order
is that you must have a security plan, a red-teaming kind of security plan.
You don't have to reveal what it is, but you must have it so if we ask about it,
we see it because that at least puts some incentive in some organizational way behind.
That'd probably be one.
Two would be kids, because I do think that social media, AI, a bunch of other stuff is,
has been mishandling the kids' issues.
And obviously there's some places where you have to step carefully
because these people want, you know, kids educated in religion one,
and these people want kids educated in religion two,
and these people want kids educated in religion three,
and, you know, blah, blah, blah.
And, like, you know, it's a little bit like,
one of the things that, like, I like about the evolution of the U.S.
is when the separation of church and state came in, it was like,
so your version of Christianity wouldn't interfere with my version of Christianity.
And now, okay, we're much more global and broad-minded about that.
It's like, not against Hinduism either, right, as a version of it.
And so, like, you know, make sure that we have that kind of as a baseline.
And, you know, I actually wouldn't be against it, even though obviously some parents are suboptimal and so on,
if you said, hey, part of the regulation on kids is you've got to be showing reports to the parents, right?
It's like, look, parents should be able to have some visibility and some ability to intersect here.
I mean, I think the notion that a technology product could be saying, like, for example,
I think of the dumbass thing of, we're competing with your mom.
Like, it's like, you should not be doing that.
And if you're thinking that, you have a problem.
Yeah, right.
But, you know, it's like, you know, parents should be involved.
Because the best thing we can think,
while we try to make parents better and we try to make communities better,
and it won't always be the case,
is the fact that parents have, in the bulk of cases,
the closest bond, like, we care about our kid, right?
We're invested, you know, in the kid's life
and well-being.
We have some weird theories,
and I may be a drunkard
or something else that happens,
but I'm not the same thing as a private company.
And it's one of the reasons why, like, you know,
why do public institutions and public schools
have some challenges?
Because they're trying to be to navigate that thing,
which always, by the way,
means a trade-off and efficiency and other things,
and you give them some credit for that
because they're trying to be this common space.
And yes, they do have at least a lens into the kid,
which is useful.
This kid's being abused.
Well, then we should do something about that.
But generally speaking, it's kind of enable the parent.
So that would be the second thing.
And then the third one, because I'm deliberately trying to choose one that wouldn't be top of Aza's list,
even though there's a bunch of these that I agree with, is basically, I actually think that the technology platforms are the kind of most important power points in the world.
And so part of the reason why, at the beginning of this year, I was talking about why I wanted AI to be American intelligence
is there's a set of values we aspire to as Americans.
I don't know if we're doing that good of a job living them most recently,
but we aspire to this kind of like, hey, let's give individuals freedom to kind of do great work
and to have a live-and-let-live kind of policy when it comes to religious conflict of values
and other kinds of things.
And I think that's what we want. And I think that actually,
in fact, part of the thing is we live in a multipolar world now.
It's not just a U.S. thing.
And so how do we get those values in technology, you know, kind of setting a global standard?
And that should be affecting things.
Like, here is one of the things that I kind of, it's a little bit off the FCC, FTC question.
But like people say, I would like a return to manufacturing industry and jobs in the U.S.
And like, okay, your only possible way of doing that is AI and robotics.
So what's your industrial policy?
there.
And I go, really?
Yes, it's a modern world.
And so we should be doing that.
I agree.
But we should be harnessing this great tech stuff we have with AI, and trying to get manufacturing rebuilt would be an excellent outcome for the country, both for the middle class and strategically.
And that's a parallel for the kinds of things I'd want the FTC and the FCC to be thinking about as they're setting policies and navigating.
This gets into the very specific, but I think it's an interesting example of what social media could be optimizing for that doesn't require choosing what's true or not at the content level.
And that is perception gap minimization.
That is to say, if you ask Republicans to model Democrats, they have wildly inaccurate models.
You ask, what percentage of Democrats think that all police are bad?
And Republicans say it's like 85 or 90%.
In reality, it's like less than 10%, or something like that.
And there's the reverse the other way around.
So we're modeling each other wrong, and we're fighting not with the other side, but with our mirage of the other side.
So imagine you just trained a model that said, all right, given a set of content, is the ability to model all the other sides going up or down?
I think if you just optimize for accurately seeing across all divides, which, by the way, is a totally objective measure: you just ask that group what they believe, and you ask other groups what they think that group believes.
Then you realize that the most harmful content, the hate speech, the disinformation, all that brain-rot stuff, preys on a false sense of the other side.
So here is an objective way, without touching whether content is true or false, to massively clean up social media.
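[To make that concrete, here is a minimal sketch, in Python, of how a perception-gap score like the one Aza describes could be computed. The survey statements, the numbers, and the perception_gap helper are all hypothetical, invented for illustration; this is a sketch of the idea, not any actual system.]

```python
# Hypothetical sketch of the perception-gap measure described above:
# compare what a group actually believes with what another group
# thinks it believes. All names and numbers here are illustrative.

def perception_gap(actual: dict[str, float], perceived: dict[str, float]) -> float:
    """Mean absolute gap, in percentage points, across shared survey questions."""
    shared = actual.keys() & perceived.keys()
    return sum(abs(actual[q] - perceived[q]) for q in shared) / len(shared)

# Share of Democrats who actually agree with each statement (illustrative).
dem_actual = {"all police are bad": 8.0, "open all borders": 12.0}
# What surveyed Republicans estimate those shares to be (illustrative).
rep_perceived = {"all police are bad": 85.0, "open all borders": 60.0}

print(perception_gap(dem_actual, rep_perceived))  # 62.5 points: a large gap

# A feed following this proposal would down-rank content whose consumption
# correlates with this score going up, and up-rank content that correlates
# with it going down, without ever judging any individual post true or false.
```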
I love it. It goes so well, Reid, with what you always say about scorecards: I'm not going to tell you, social media company, that this is good or this is bad, but I'm going to give you the scorecard and what we want you to hit, and you figure it out.
And if you decide that, oh yeah, actually, promoting those vaccine conspiracies makes people distrust the other side in a way that's not accurate, okay, well, then you need to change your behavior.
And so again, it's putting the agency in the company's hands in a way that is so positive.
All right, so we're going to do our traditional rapid fire very soon. But first, we wanted to end on a lighter note, because we've talked about vampires and some heavy stuff. So I'm going to ask you guys.
We need to bring in werewolves and zombies, but you know.
Yeah, exactly. Exactly. I mean, I just watched Sinners, so I do have sort of the supernatural on the mind.
So I'm going to get a hot take from each of you, hopefully pretty quick. I have, let me see, four questions.
So Aza, we'll start with you. What are the most outdated assumptions that are driving today's
AI decisions?
I think the most outdated belief driving AI is that we can muddle through.
You know, betting against the Malthusian trap has always paid off: we've always made it through in the past, and so we assume that because we've always made it through in the past, we'll make it through this time.
I don't know what grade you, Reid, or you, Aria, would give humanity as a scorecard for the industrial revolution.
I'd say we got maybe a C-minus stewarding that technology.
Lots of good things came out, but also child labor.
Nowhere on Earth is it now safe to drink rainwater, because of forever chemicals.
And we dropped global IQ by a billion points with lead. But we managed to make it through.
I don't think we can afford to get a C-minus again with AI.
I think that turns into an F for us.
Reid, what do you think are the most outdated assumptions driving today's AI decisions?
I'm going to be a little bit more subtle and geeky.
By the way, I do think we need to get a much better grade, and I actually think AI can help us get a better grade, so, blah.
But I think the most outdated assumption, because it's almost against what most people think, is that it's mostly a data game, when it's turning much more into a compute game.
You know, people say data is the new oil, et cetera, et cetera; actually, compute is the new oil.
Data still matters, but it's the compute layer that's going to matter the most.
That would be my quick answer in a very complicated set of topics.
Well, the next question, we're giving you just one sentence to answer.
So, Reid, I will start with you.
In one sentence, what is your advice to every AI builder right now?
Well, have a theory about how, in their engagement with your AI product, whether it's a chatbot or something else, you will be elevating the agency and the human capabilities, but also, broadly, the compassion, wisdom, etc., of the people you're serving.
So, for example, at Inflection with Pi and those personal agents, being kind, modeling a kind interaction, is one very tangible output.
Fantastic.
Aza, do you have one piece of advice?
I would be very aware of how incentives eat intentions, because the technology you're creating is incredibly powerful, and if it gets picked up by a machine or a country whose values you don't like, the things you invent will be used to undermine the things you actually care most about.
Fantastic.
Reid, I'll go to you first.
What is the belief that you hold about AI that you think many of your peers would find controversial?
Well, a lot of my peers tend to be in the LLM religion, which is the one model to make everything work, whether it's superintelligence or all the rest.
And I obviously think we've done this amazing thing; we've discovered an amazing spellbook in the world with these LLMs and kind of scaling them.
But I tend to think that there will be multiple models, and that the actual unlock for AI and the human future will be combinations, a compute fabric of different kinds of models, not just LLMs.
Now, it might be that LLMs are still, as it were, the runner of the compute fabric.
It's possible, but I also think it's possible that it isn't.
And that probably gets the most, like, wait, are you one of those skeptics? Do you not believe all the magic we're doing?
I'm like, no, I believe there's a lot of magic.
I just think that this is kind of a big area and a blind spot.
Aza, same question.
A belief that you have that most of your peers would find controversial.
That AIs based on an objective function are not going to get us to the world we want.
That is to say, whenever we just optimize for an objective function, we end up creating a paperclip maximizer in some domain.
But nature doesn't have an objective function. It's an ecosystem that's constantly moving.
There isn't just a static landscape you're optimizing over, climbing a hill; the landscape is always moving. It's a much more complex thing.
So if we really want AIs that can do more than confuse the finger for the moon, and then keep giving us fingers, if we actually want to get human flourishing, ecosystem flourishing, that kind of thing, we're going to have to move beyond the domain of AIs that just optimize an objective function.
Awesome.
Let's move to rapid fire.
And Reid, I think your question is the first.
Indeed.
Is there a movie, song, or book that fills you with optimism for the future?
Really anything by Audrey Tang: listening to her podcast, reading Plurality. She's sort of the Yoda-Buddha of technology. So 100% that.
And then On Human Nature by E.O. Wilson.
And finally, The Dawn of Everything by David Graeber, because it just shows how stuck we are in our current political-economic system and really opens your eyes to how many other ways of being there actually are.
Awesome. What is a question that you wish people would ask you more often?
Oh, I don't know, something about surfing or yoga.
Awesome.
Which are you better at, Aza: surfing or yoga?
I'm definitely better at yoga because surfing is by far the hardest sport that I have ever done.
But actually, there is a question that people ask me a lot that I don't have a good answer to.
After I sort of lay out my worldview, people almost inevitably ask, but how do I help?
And I realize I don't have a good answer, because answering that question requires understanding who you are, what you're good at, what you would like to be good at, what your resources are, and what you're currently working on.
I would love to have an answer so that when somebody says, how can I help?, there is something, maybe something AI can help with, that does that kind of sorting and helps people find their dharma within a larger purpose.
I couldn't agree more. Forget people who say that everyone's apathetic; everyone is asking me what they can do right now, to your point, and I don't have a good answer either. So let's try to build one.
Well, I think a beginning is to learn and get in the game. Like, for example, start engaging with it and then have your voice be heard. You can't have a perfect plan, but join some movements, rally to the flags that try to help with stuff.
All right. So where do you see progress or momentum outside of tech that inspires you?
I'm going to feel like a broken record, but outside of tech, actually, it's going to start with all the deliberative democracy stuff; we've already sort of talked about that.
Blaise, I'm going to say his last name wrong, Agüera y Arcas, at Google.
He and his team are doing some incredibly beautiful work that I'm finding a lot of hope in, because I sort of laid out my worry that game theory is going to be obligate, and we're just going to get whatever the game theory says for the future of humanity, and that seems like a really terrible world I don't want to live in.
His work is on understanding how you model a situation with multiple agents: how do you actually get non-Nash equilibrium solutions?
And he's discovering something. To solve the very hard problem of how you do strategy in multi-agent reinforcement learning, I have to model what you know, and you have to model what I know, and I have to model what you know about what I know. And that's just very hard.
They're discovering some new math, and it turns out you can start to answer this if you model yourself not just outside the game board, but on the game board. You have to model yourself modeling other people.
And what's cool there is that suddenly non-Nash equilibrium states are found, not the worst of the prisoner's dilemmas; you can find these new forms of collaboration.
And I love this. It feels so profound, because first you have to inject the idea of ego and then transcend it.
If you don't have ego, you just find the Nash equilibrium.
If you do have ego, you also find the Nash equilibrium.
But if you do have ego and you can transcend it, you can get to these much better states.
And that, to me, is very hopeful and very cool, because I think of game theory as sort of the ultimate thing that we're going to have to beat as a species.
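[For readers who want the game-theory terms made concrete, here is a toy Python sketch of the classic prisoner's dilemma, showing why mutual defection is the only Nash equilibrium even though mutual cooperation pays both players more. This is standard textbook game theory with the usual illustrative payoffs, not Agüera y Arcas's actual method.]

```python
# Toy prisoner's dilemma, to illustrate "Nash equilibrium" in plain terms.
# Payoffs are (row player, column player); higher is better for each.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ("cooperate", "defect")

def is_nash(a: str, b: str) -> bool:
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    my, their = PAYOFFS[(a, b)]
    best_a = all(PAYOFFS[(alt, b)][0] <= my for alt in ACTIONS)
    best_b = all(PAYOFFS[(a, alt)][1] <= their for alt in ACTIONS)
    return best_a and best_b

for a in ACTIONS:
    for b in ACTIONS:
        print(a, b, PAYOFFS[(a, b)], "Nash" if is_nash(a, b) else "")

# Only (defect, defect) prints "Nash", even though (cooperate, cooperate)
# pays both players more. Finding stable cooperation outside that trap is
# what "non-Nash equilibrium solutions" gestures at in the conversation.
```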
All right, Aza, our final question.
Can you leave us with a final thought on what you think is possible to achieve if everything
breaks humanity's way in the next 15 years, and what is our first step to set off in that direction?
This is sort of the question of what is possible if we could rearrange our incentives so that we are both nourishing ourselves and nourishing all the things that we depend on.
Suddenly, I think, people don't really look at their phones, because the world that we inhabit is just so rich and interesting and novel.
We are consistently surrounded by the people who can help us learn the most, sort of in a developmental sense.
The entire world is sort of set up in a fiduciary way, where everything we interact with can be trusted to be acting in our, our communities', and our society's best interest, understanding developmentally where we are and helping us gain whatever that next attainable self is.
I think we'll have made major, major progress towards solving diseases.
We'll have a deep understanding of cancer.
And I think we will have solved our ability to socially coordinate at scale without subjugating individuals.
So it looks something like that.
We will have solved the aligned collective intelligence problem, and we'd be applying that to, like, getting to explore the universe.
Awesome.
Yeah, the universe outside and the universe inside.
Yes, exactly.
Always a pleasure.
Thank you so much, Reid; thank you so much, Aria.
That was my conversation with Reid Hoffman and Aria Finger on their podcast, Possible.
I hope you enjoyed it.
We'll be back soon with new episodes of Your Undivided Attention.
And as always, thank you so much for listening.
