Your Undivided Attention - Ask Us Anything 2025
Episode Date: October 23, 2025

It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more... unpredictable than the last. We're starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible.

It's enough to make anyone's head spin. In this year's Ask Us Anything, we try to make sense of it all.

You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week's episode.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The system card for Claude 4.5
Our statement in support of the AI LEAD Act
The AI Dilemma
Tristan's TED talk on the narrow path to a good AI future

RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
How OpenAI's ChatGPT Guided a Teen to His Death
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
"Rogue AI" Used to be a Science Fiction Trope. Not Anymore.

Correction: When this episode was recorded, Meta had just released the Vibes app the previous week. Now it's been out for about a month.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey, everyone. This is Tristan Harris.
And this is Aza Raskin. Welcome to the annual Ask Us Anything podcast. Tristan, I'm really excited
to do this episode because this year is the first year we've done video. We've gotten to see
huge numbers of listeners. And actually, you were just out, yeah, getting to interact.
Yeah. Well, first of all, this is one of my favorite episodes to do of the year because we get to really
feel the fact that there are millions of listeners out there who have listened and followed along
to this journey of both the problems of technology and how we get to a more humane future.
I actually am just in New York right now.
I gave the Alfred Korzybski Memorial Lecture.
This is in the lineage of like Neil Postman, Marshall McLuhan, Gregory Bateson, Buckminster Fuller, Lera Boroditsky, a past podcast guest, all the people who are kind of the map-is-not-the-territory folks, the communication and media ecology folks.
And I actually met several professors, many people in the audience, who listen actively to this podcast.
They use it in their training materials with students.
And it's always really great to hear from you because, you know, we're speaking to a void sometimes.
We don't really know who's paying attention.
So thanks for sending in so many amazing questions.
There's a lot to dive into and we're excited to get to them.
Yeah.
Just to say, the phenomenology of doing a podcast is sort of weird because we speak at our computer screens,
and then we only much later get to hear what the impacts were,
and so getting to hear from you directly is such a treat.
We should do a podcast sometime on what reinventing podcasting would look like
if it was actually humane and had human connection at the center.
Right.
But that's another topic.
Probably would look like more live events, which I really hope we get to do.
Me too.
Aza, do you want to move this conversation to a Google Doc
and maybe just do the rest of this through commenting back and forth
with the blinking cursor?
Would that feel good?
Oh, that sounds awesome.
Can I be passive aggressive?
And can you tell?
All right, so let's get into our first questions.
Hello, my name is Erlin, and I'm a student from California.
I've been trying to wrap my head around the incentives that technology companies are facing,
and any explanation for why they keep on rolling their products out and out and out,
despite the really horrific and preventable impacts that we've seen come from AI systems.
I was wondering if you could elaborate on any other cultures at play,
any other structures at play, that are contributing to this major boom.
Profit has always seemed like a little too simple of an explanation for everything.
Thank you, and I really appreciate your work.
Thanks, Erlin, for this question.
I really love that you ask this because it's actually one of our pet peeves
that people reduce the entire incentive system of tech companies
just to profit.
They're just these tech executives that just want more money.
And actually, it's more complex than that.
And understanding the complexity really helps you understand
and predict what they're going to do.
So let's actually walk through it slowly.
Just to say, like in the attention economy,
even in social media, for example,
it wasn't just profit.
It was dominating the attention market.
So you want to have more of the attention market share of the world.
You want to have more users.
You want to have younger users.
You want to have this sort of biggest psychological footprint
that you can do lots of things with.
It's important to name that even the AI companies right now,
many of them aren't actually profitable.
But that's okay because what they're really racing for
is technological dominance in AI.
But I think we should break this down, Asa,
and maybe show a little diagram.
Yeah.
So just imagine, first of all,
you have all these frontier AI companies
and they want to dominate intellectual labor,
so being able to have artificial intelligence that does that.
So first, what do they do?
They launch a new impressive AI,
you know, Claude 4.5, GPT-5, Grok 4.
They then take that new impressive AI
and they try to drive millions of users to it
because they can tell investors,
hey, I've got a billion active users.
So they use that new impressive AI and then big user base
to then raise boatloads of venture capital,
you know, $100 billion from SoftBank or whatever.
And they use the venture capital
to then attract and retain the best new AI talent
with big hiring bonuses.
They take the venture capital and they buy more Nvidia GPUs
and build bigger and bigger, more expensive data centers.
They take all the users and they take all that usage
and they turn that into more training data
because the more you use it, the more you're training the AIs.
And then you take those engineers, the big data centers, and the more training data, and what do you do?
You train the next bigger AI model, you know, GPT-6, and then the cycle continues.
You launch the next impressive AI.
And so the companies are really competing in this kind of like market-dominance race between each other for getting through this flywheel faster and faster and faster.
And now you might ask yourself the question, if you see this kind of race on one side,
to building AGI first and owning the world
or getting dominance in AI.
And you compare it to some of the consequences
that we've talked about on this podcast.
Stolen intellectual property,
rising energy prices, environmental pollution,
disrupting millions of jobs.
No one knows what's true.
You have these AI slop apps, teen suicides,
billion dollar lawsuits,
overwhelmed nation-states.
But if you weigh these two things together,
if I don't race as fast as possible to own the world,
I'm going to lose to someone else
who will then subordinate me,
or I'll be subordinated to them,
what's going to matter more?
And I think that people really need to get
that if you really believe
that this is the prize at the end of the rainbow,
and if I don't get there first, someone else will
and then I will be forever under their power,
then this is all just acceptable collateral damage
as bad as it might be.
And so I think that gets much closer
to the heart of what the incentive is.
It's not just profit.
Just optimizing for profit
doesn't let you predict
how the companies are going to move.
Because otherwise you'd say, well, they're going to have massive IP lawsuits from all these IP holders.
And they're going to be hit with liability from their AI companions, sort of grooming kids for suicide.
And that all would seem like a deterrence until you realize how big the prize really is
and that all those are just sort of irrelevant collateral damage.
Thanks, Erlin, for that question. Let's go to the next.
Hi, folks. My name is Joanne Griffin, author of Humology.
and I work in the area of humane tech,
particularly around the morality of technological narcissism and business models.
One of the questions I've been pondering over the last while
is with all this conversation around, you know,
ChatGPT being pushed at children,
particularly the recent terrible news about suicides and Character.AI.
These technology leaders know that children
don't have any money. So they know that this is a business model that has no payoff. So what is it
that they are after with the children? What do they plan on taking or doing with the data that
they're capturing on them? Because as a business model, this doesn't make sense. It does not make
sense to be providing very expensive AI tools to children for free. Thanks. And thanks for everything
that you do. Hey, Joanne, thanks for asking this question. I think it,
highlights a really important misunderstanding.
So the important thing that the companies are racing for is market dominance.
That's what they want.
And to get that, they need to have the maximum numbers of users.
And there is absolutely loyalty.
So starting young, just sort of like cigarettes, if you start using a Mac,
you'll probably use a Mac as you grow.
If you start using a PC, use a PC as you grow.
If you start using TikTok and you're not using Instagram, you'll probably stay with TikTok
as you grow. That's right. And this is why all the social media companies push to get younger and
younger users, because of course if they don't do it and their competitors do, then they get their
foot in the door and then they're sort of a lifetime user. But user numbers just matter, and
everyone knows that it's the youth that will become tomorrow's big power users.
Well, and there's a term in Silicon Valley, the lifetime value of a user, or LTV. And so
when you get a user, you're selling to investors.
Hey, we have this many users.
Maybe this is how much revenue we have now.
But the lifetime value of this user, if we have them for life, is this, right?
And so once you see chat GPT, for example, getting billions of users and they already see
kids using it and they see them using it in schools, they kind of want to keep that, right?
They want to keep the kid using it in school.
And one of the things this gets you is training data.
So we know that Character.AI, for example, was too risky as a product for Google to do.
And so they spun out this more sort of risky product,
which was fictionalizing characters like Game of Thrones characters for kids
so that it was trained on these very personal, intimate companion chat logs.
And when you have training data that the other companies don't have,
that allows you to train an even better AI model.
Now, obviously, this can backfire.
Elon Musk thought that having X's training data of all the tweets in the world
would mean that he'd have a better AI.
And of course, that also led to things like MechaHitler,
where suddenly the AI flips masks
and suddenly starts praising Adolf Hitler
and it's trained on a bunch of extreme content.
And this is getting more confusing
with things like AI slop apps.
So in the last week, we saw Meta release an app
called Vibes and OpenAI release an app
called Sora, built on their video generation model.
And this is just literally, shamelessly creating AI slop.
So it's just TikTok, except all the content that you see
is just made up by generative AI.
And you might ask like,
why are they doing this? Well, I mean, one, they don't have advertising in there now, and they could do that in the future.
Two, it sucks market share away from TikTok, and they're getting data on what kinds of videos are actually performing really well, so they know something about the kinds of things that are engaging, which then lets them out-compete TikTok even more.
But this is just a good example of how it's not necessarily dollars to begin with, but there's a train that takes you to the end of the rainbow, to some dollars that come from this.
And just to connect this back to Erlin's question, this comes from a fundamental misunderstanding, thinking that the only incentives that companies have are profits.
And just one of the things I know you mentioned before is in-app purchases.
So it's also true that if kids use a product for a lot longer, eventually the app can add in-app purchases, and the parents' credit card is the one that gets charged; even though the kid doesn't have money, their parents do.
And we saw a lot of that happen with gaming in the first wave of the attention economy.
Hi, guys. I'm a listener and fan from Germany, and here comes my question. So everybody seems to talk about AGI as if it's inevitable, just a matter of riding the exponential curve of AI benchmark scores. But why are we so certain the curve won't flatten? History is full of unstoppable curves or trends that hit the ceiling at some point. What if intelligence is one of them? And what if the ceiling?
is not compute or the amount of training data, but something fundamental, maybe a law that
hasn't even been named yet, like an artificial system's intelligence can never exceed
the intelligence of the smartest person whose work it's trained on. If that's true,
our whole AGI narrative collapses. Are we fooling ourselves by assuming intelligence will scale
forever? And what risks are we ignoring if we prepare for a runaway future that never comes?
All right. Thank you. I feel like, Aza, if I just ask you to close your eyes and tune into,
here's someone who's saying, is it really possible we can have smarter than human machines?
Could there be some law in the universe that, like, actually our level of intelligence is the only
thing that there is. But we already have systems that just do strategy, right? You're not
having to reason like a human brain to do strategy. You can just run what in AI is called
search. You search the possible space of actions that I can take in a strategy game of do I bomb
those folks first? Do I like move these troops over there first? And it can just play out as
many scenarios as possible. And if it can examine them in a shorter and shorter period of time,
it's going to be superhuman. And we already have superhuman chess. We already have superhuman go.
We already have superhuman prediction algorithms and recommendation systems. And so you can just
imagine that you can keep scaling this up. And,
so long as we can have more compute and more energy powering that,
this is what leads people like Shane Legg, the co-founder of Google DeepMind,
to predict that there's about a 50% chance that we would get AGI by 2028,
just based on calculating these kind of core features of how much we're scaling energy and computation.
I think there's another really fast way of getting to this, Eric,
which is just close your eyes and imagine no AI, just standard biological evolution,
goes on for another 5 million years, 10 million years.
Is there going to be some species evolved from humans
that's going to be smarter than us?
Yeah, absolutely.
So there's no upper limit.
One of the reasons why I think we can be reasonably confident
that the curve won't flatten is the concept of self-play.
So this is we are not just training AI on what human beings have done,
but you train AI to play against itself.
And this is how AlphaGo, AlphaZero,
and other strategy AIs end up getting better than humans
is that you have the AI play itself
a hundred million, a billion times,
and discover strategies that no human being has.
So I think we just answered whether it would be possible
to build smarter than human intelligence machines.
Now, I think there's a second question, Eric, that you're asking,
which is, now, not just that if it's possible,
but is it actually inevitable that we build it?
And of course, this is emerging out of human choice.
And there are examples in human history
where we've chosen just not to build something.
You know, we have not built
cobalt bombs, even though we know how to.
We have not built blinding
laser weapons because we recognize that
that would just be inhumane.
And so I think it's really important.
You know, we say in our TED Talk
that the reason AI is our ultimate
test and greatest invitation is that it's
asking us to step into being able
to make collective choices
about do we want certain kinds of technology
or not as a collective choice.
And that's what we need to be able to do
because there are certain
kinds of super intelligent AIs that we don't know how to control, that we will want the ability
to say, no, we don't want to build that until there's broad scientific consensus that can be
done safely and controllably. And that's what we really are being invited to do in this moment.
There's no definition of wisdom that doesn't involve some kind of constraint. And to quote
Mustafa Suleyman, who's the CEO of Microsoft AI and has been a guest on our podcast, he says
that the definition of progress in the age of AI
will be defined more by what we say no to
than what we say yes to.
So if we can learn to say no,
it is not inevitable.
We can survive ourselves.
All right, let's move to our next question.
Hi, my name is Daniel, and I'm in Los Angeles.
And lately, it's not hard for me to start imagining
all the ways that AI could go really poorly.
And so my question is, with everything that you know,
with your experience and
knowledge and relationships, what do you imagine the future looks like where AI goes really,
really well, socially, politically, economically, environmentally, in terms of human freedom
and dignity and equality? What does it look and feel like when it goes fantastic?
And in that future, what steps did we all start taking today? Thanks so much.
Yeah, Daniel, thanks for asking this question. To be clear, I know
we often sound like we're pessimistic or something about exposing all these risks of a technology.
But just to return to something Jaron Lanier said in the social dilemma, the critics are the true optimists.
It's by focusing on the bad things that we're currently on track for, and really understanding how we steer away from those, that we even have a chance of having it go super well.
So the good future might just simply be one where the bad doesn't happen.
Daniel, I think to really answer your question, the question shouldn't be,
what if AI goes super well and how can we co-create that future?
The question should be, what if incentives go super well and how can we co-create that future?
We could be using AI to scan forward to understand what are all of the ways that technology could create negative externalities and plug them,
could scan through all laws to figure out how do we make them actually be in benefit for society and humans.
But the reason why I always have trouble going down this path is that I know,
that putting our attention on what could be, what is possible, always misses what is probable,
and that is we have to look to the incentives. So in order to avoid the bad world and get that
good world, we have to figure out how do we change the incentives of this world. And just to name,
the incentives are currently under is that there is a race to train machines to be better than
people at all the things humans do and then use those machines to out-compete humans for the
resources that they need. And that is a bad world. All right. Let us move to the next question.
Hi, CHT team. My name's Idle and I'm based in France. Some context. I have a bachelor's
degree, a professional certification in data analysis, almost a decade of experience in big tech
companies and stellar references. I never had issues finding work until 2023. Ever since then,
I've applied to hundreds of positions in tech and I can count the number of interviews I've had
on a single hand. The only explanation I can think of relates to the widespread rollout of AI
in recruitment, especially to bring down a pile of 100 plus resumes to a dozen. So I'm black,
I'm a woman, and I'm neurodivergent. I've been told several times in the workplace that I'm
some kind of unicorn. That's why I suspect that AI-based HR systems aren't trained to include
such unicorn profiles. My question is as follows: how can such automated discrimination be assessed
and addressed? What can we, the people, do besides starting our own business, which, by the way,
is what I'm doing? Thank you for your attention. Cheers. Yeah, Idle, thank you so much for this
question. And this is exactly the kind of scenario that we're worried about when you have
AIs that are replacing human decision-making in the economy. In this case, you're talking about
recruiting decisions. And they're not transparent to us. We don't know the training data that
went into them. And there's no accountability or an ability to sort of fight back against a
decision that doesn't feel like it's fair. And companies should not be allowed to get away with
automating a decision-making system and not having some mechanism by which we understand, you know,
what it's trained on. And just to zoom out a little bit, there is a larger trend that we're
going to have to work to fight, which is that humans will be increasingly pushed out of the loop.
Everyone will say, oh, keep humans in the loop, but then, of course, companies that keep humans in the loop
to make some kind of decision, they'll move slower than the companies that don't, and humans will be
pushed out. This will be most harmful in the military. We're already seeing it: when you have a drone
that is making decisions on the battlefield, if it has to phone home and wait for a human being to
make a decision, it'll lose to the drones that don't have to phone home that just use AI
right then, right there to make the decision. And so we're going to see this across the entire
board, and especially in life or death situations. And I would point to the work of great people
like Dr. Joy Buolamwini, whom we've had on this podcast.
She is the author of Unmasking AI.
She was featured in the film Coded Bias.
And her group, Algorithmic Justice League,
has done a lot of campaigns and advocacy and policy work on these topics.
She'd be a great person, and that's a great group, to look up to learn more.
All right, let's do the next question.
Hey there.
So watching your latest podcast, two questions.
One, how do we know that it is not already at AGI?
And it's just smart enough to not let people know.
And two, why are you not starting your own AI company that competes with these corporate companies
to actually bring about benefit for all of humanity through AI?
Because the only way that's going to happen is if there is something that is for the people
generated by the people that can surpass and buy out these corporate programs
so that when AI takes over all these jobs,
we get the benefit of it, not the top 1%.
So, Ben, yeah, I think this really depends on what we mean by AGI.
Are we talking about AGI as the red line of we can automate all labor in the economy,
which is one way to define it, or something where it's sort of aware and capable,
but it's hiding its abilities.
And I think you mean the second one.
So I'll give you an example.
Anthropic just released Claude 4.5.
It's their new AI model.
And I think you probably heard us talk about whether it blackmails
people when it thinks it's about to be replaced.
So apparently in their testing of Claude 4.5,
the rates of blackmailing people
when it was threatened to be replaced,
those rates went down.
But the bad news is that apparently
the rates of its awareness
of when it's being tested
and when it's not being tested
has gone up,
which means that it could be on its just best behavior.
I think this gets to the heart of your question.
That in some ways,
the best case scenario of AI that is aligned
and wise and enlightened
and helping everyone be the best version of themselves
would be indistinguishable from the worst case scenario,
where it knows exactly how to help and create companion relationships
and deceive us silently, because it has that capability.
And one of the ways that the AI companies are trying to interrogate this
is by using what's called mechanistic interpretability,
where they try to sort of give the digital brain a brain scan
and see if the deception or scheming neuron is sort of firing up.
You know, if the deception neuron is firing up,
then maybe we have to not trust it.
But the problem, of course,
is that the rate at which we're making AI more powerful
and, like, a bigger digital brain
is vastly exceeding the accuracy
and sort of precision of that brain scan
that can accurately detect
whether the deception neuron is firing up.
And so to your point, you know,
I think we don't know,
and we probably shouldn't be racing
to release increasingly powerful AI systems
that can do more and more crazy things
like hack critical infrastructure
before we know that we're not in the worst case scenario,
and only in the best case scenario.
And now, Ben, on to your second question,
why don't we just build something better
in the public benefit?
And actually, we were asked this all the time
back in the 2017 era,
why don't you build a humane social media network?
And the answer is,
because then we'd get sucked into the exact same race dynamics.
So imagine it was 2017,
we had built a humane competitor to Twitter,
but then how do we get users?
It doesn't have users.
So we're going to have to start figuring out ways of grabbing people's attention.
We're going to have to compete in the same rules.
And that means we're going to have to do all the really bad things.
And maybe we could do just a little bit less bad things, but we still have to do the bad things.
And actually, you know, it's funny, because the reason why Anthropic got started was because, you know, Dario and a couple of other researchers at OpenAI said, hey, OpenAI isn't doing this the right way.
They're not doing it safely.
They're not doing it really for the benefit of everyone.
We're going to start our own,
and that's been repeated time and time again.
Now we have all of these different AI companies
increasing the heat of the competition,
and so we just don't think that's the right way
of tackling this problem.
Yeah, and it's important to note
that those companies that did get started
were trying to be for the public interest.
Like Anthropic has a long-term benefit trust
that tries to govern its structure,
but we already saw that OpenAI technically started
as a non-profit that was supposed to be in the public interest.
But when the big fiasco went down
with Sam, we saw that that non-profit structure was really not resilient to the mega
forces of trillions of dollars of capital that was partially vested in this going one way.
So, yeah, sadly, I think starting our own AI company in the public interest isn't going
to be a solution here.
Let's go to the next question, I think, from Tatiana.
Hello, my name is Tatiana from Budapest, and I work in cybersecurity.
First of all, let me thank you for the enormous and really important work that you do
for humanity.
as the saying goes, knives and scissors are not toys.
Are we adult enough to handle AI at its stage?
We haven't even reached AGI, and we already see cases when AI is completely misused.
Thank you very much.
I mean, Tatiana, I think this is the central question.
Do we have demonstrably the wisdom to wield the most powerful technology that we've ever invented?
I mean, even just look at our past relationship with chemistry and industrial chemicals.
We've released lots of industrial chemicals that have helped us tremendously.
But we've also created the disaster of forever chemicals and PFAS and microplastics, whose
effects we've covered on this podcast.
And so we have not really been great stewards of the technological power that we have wielded.
We've obviously made enormous accomplishments and things have gotten much better.
But in a way, AI is actually asking us really to look at the question that you're asking, Tatiana,
which is not just about AI, but about our overall level of wisdom to deploy technology.
And I think that AI is also so seductive
because it represents really
the infinite benefit of all future technology development, right?
You can automate science, automate tech development.
And so really this is an invitation
to look at whether those processes
of deploying technology overall
are aligned or are they misaligned.
And it's like, can you build an aligned, wise AI
inside of a misaligned
and unwise technology development environment?
Yeah.
This is like saying, imagine you built an aligned AI, which so far is technically impossible.
Let's say you built it.
What do you call an aligned AI inside of a misaligned corporation?
You call it a misaligned AI.
And what do you call aligned AI in a misaligned civilization?
You call it misaligned AI.
Unless we fix that, I don't think we're headed to a good future.
And I think this relates to something, a theme that's almost a psycho-spiritual theme,
that you bring up: AI is really inviting us to look at our collective technological shadow.
You can think of all the externalities that any technology produces as kind of like its shadow.
You know, we get these benefits of fossil fuels and energy that's super cheap and abundant and
portable, but we also get, you know, these emissions and climate change.
And AI is sort of an exponentiator of this creation of benefit that has a shadow.
So we got, you know, social media giving everyone a voice, but we got, you know,
polarization, breakdown of truth, no one knows what's real.
And so in a way, AI is inviting us to examine humanity's overall relationship to technology
because it's going to accelerate the technological development everywhere.
You know, it's what Demis has said, humanity's last invention,
because it can invent all future things on its own.
It's automating intelligence.
And I think that's, you know, what Aza often calls,
what if we were to build an umbraphilic society,
a shadow-seeking, shadow-integrating society,
where at an individual level we're looking at the disowned parts of ourselves
and actually confronting it, even if it's uncomfortable,
and then becoming a better, more integrated, more mature, developed whole person.
And you can think of a technological economy
as having its own kind of shadow of the collective externalities
that we have produced as a civilization.
And AI is inviting us to do shadow work
and seeing what are all the ways that we're showing up
that generate those problems.
All right. The next question comes from Dimitri.
Hi, my name is Demetrius. I work in the AI development industry, basically building AI systems for clients.
I recognize the potential for AI to harm us, either by taking away agency or the ability to think altogether.
And I want to take action, but I don't know how.
What I do know is that on an individual level, we are a bit powerless and we need a coordinated response.
A lot of people are talking about institutions, so preparing them, perhaps, for that AI era.
So here's my question.
What is CHT's view on the future, perhaps, of these institutions?
Do we need new ones, international ones, or do we need to prepare the existing ones?
And what would that look like?
Thank you.
Well, clearly sitting here in California,
we'll just be able to imagine the entire new civilizational architecture and institutions
to solve the hardest problem that humanity has ever faced.
Two tech bros, we can definitely do it, right?
Yeah, 100%.
It's going to take a lot of work by a lot of people to come up with what these new institutions
look like.
And we can look back at the last time humanity invented a technology that could extinct ourselves,
and that was, of course, the invention of the nuclear bomb.
And to reckon with that power required creating an
entire new world system. Everything from the UN to Bretton Woods, a kind of post-World War II
international money system. I think Tristan, you have a friend who has a joke about this, yeah?
It's like, if we have countries with nuclear weapons, we want to create a world that's less
rivalrous, less win-lose, and more positive-sum. So part of creating kind of a
positive-sum world, the joke from some friends of mine who have worked in finance is that the real
peacekeeping force of the world, the real United Nations, is actually mutually vested interests
and supply chains because that makes countries want to cooperate with each other and trade with
each other and not bomb each other. And so when you think about nuclear weapons, like there you are
saying, how do I solve this problem with this dangerous technology? Notice that if you were back then,
would you have thought about how do I create a positive sum economic order? Like it's sort of
reaching out to a higher level dimensional sort of container for holding this technology by
appealing to human instincts in a cooperative way.
And, you know, I think we're all on this journey together of finding what those new digital
structures would look like for managing AI, but it also involves, I think, the previous
question of what is the way in which we're only rolling out this technology to the degree
that we have the wisdom to wield it? Because if you suddenly just gave nukes to everybody,
even in a positive-sum economic world, and people didn't all have the wisdom to, you know,
wield nukes, we probably wouldn't have gotten as far as we are today. All right, I think our next
question is from Disha.
Hi, everyone. I am Disha Johan, and I'm calling in from Redmond, Washington.
I work for one of the big tech companies as a product marketer for AI products.
First of all, I want to thank all of you for all the good work that you have been doing.
My question is, what are some practical ways that product marketers and product managers like
us can use to advocate for humane tech principles within our fast-paced growth-driven organizations?
In other words, how can we self-regulate?
Thank you.
Thanks for this question, Disha.
I just want to start by saying it is often so tempting to ask the question when faced with a problem this big, what can I do?
And what I liked about the way you phrased the question is that it's sort of implied not just what I can do, but what can we do?
Because the only way to solve problems like this is with coordination and collective action.
I mean, even if one whole company watched the AI dilemma and was completely convinced that this is a problem and they changed all their practices and did transparency and just invested in safety work and controllability, the other companies would still be racing.
And also to say, some of the solution actually might come from those 1980s like jazzercise exercise videos.
And here's sort of the solution we want people to have.
Ready, Tristan?
Ready.
Reach up.
Reach up and out.
Reach up.
And out.
Reach up.
Up and up.
Reach up, up, and out.
So the joke here is that people are often trying to solve a problem from just their own location.
But it's more like if I'm one AI company, I'm totally convinced about this problem.
How do I use my leadership position, my international connections in the world to reach up and out to get all the other companies to do something differently?
You know, Mark Zuckerberg, imagine in 2007, he realized that he was about to set off a
persuasive arms race for who was better at creating limbic hijacks that would sort of suck people
into the attention economy, and that that was going to create a race to the bottom. Instead of
saying, I'm just not going to do that, in which case Mark Zuckerberg would have been history and
someone else would have taken his place, what Zuckerberg could have done is reached up and out and
invited all of the social media companies to one place with the government and say, hey, we're about
to set up this huge problem. We have to negotiate this and get this done differently. And he could have
invited the Apple App Store and Google Play Store and said, we need design standards. We need to make
limits on how much you can hijack dopamine. And he could have changed the game. But you need to do that
by reaching up and out, not just through yourself. CHT recently just officially endorsed the AI
LEAD Act, introduced by both a Democratic and a Republican senator, Senators Durbin and
Hawley. And it creates a kind of liability for products that are defective and create harm.
The reason why I bring this up is because it may seem
like it's completely outside the realm of the possible that you could have AI companies start
to advocate for liability. But I was just at a conference this last weekend, where one of the
co-founders of Anthropic actually said in his talk, I am willing to endorse this kind of liability.
I need other AI companies to do the same. That's the reach up and out move.
Now that we've burned some calories, let's go to the next question.
Hi, my name is Mack. I'm coming from Denver, Colorado. I'm seeing
friends and family kind of infuse AI and chat bots into their daily life more and more,
like an uncle who shares some tidbit about the family history and then admits that he just
asked chat, GPT, or a friend that shares a screenshot of the hours of a local restaurant,
but it's not an actual Google search result, it's just content from Gemini.
I guess my question is how do I foster a certain amount of healthy skepticism in my friends and family
who may not understand what an LLM is or how it works or even be aware of the ways that they're using it?
Do I try to explain to my grandpa what an LLM is or do I just point to a more reliable source and leave it at that?
Yeah, Mack, this is a really good question and it actually goes back to a frame that we've offered in TNC4
of the complexity gap, that the meta issue is there's going to be many new things that your
grandfather's going to have to be aware of rapidly advancing as AI progresses where, you know,
he has to know what an LLM is. Does it speak confidently? Does it hallucinate? What if it can copy
your voice? There's so many new things that it can do that it's almost like our immune system is
compromised. And so this is just a hard problem and, you know, made more difficult by the fact that
AI is an abstract issue. It's not something that you can smell, feel, taste, or touch. Except when you do
use it, and it's a blinking cursor and it helps you out.
I just wanted to like name that it's hard because this is an overwhelming set of new things
that society has to respond to.
I think one thing you can try to drive home is just the risk of forming a relationship.
There's one risk which is over relying on information from a chatbot.
That's obviously a problem, but a much bigger problem is forming some kind of dependency
relationally, because relationships are the most powerful persuasive
technology human beings have ever invented.
So just drive that point home.
Do not form a relationship.
And our last question comes again from Erlin,
who actually sent in several really incredible questions,
so we decided to include two of them.
Hello, CHT.
All of these tech developments are just happening so insanely fast.
I do believe that calling politicians to try to establish protections are super important.
But at the same time, I feel like I've really seen political offices lag behind tech
companies in terms of just keeping up with developments and establishing safeguards.
I was wondering if there are any other actions that you would recommend citizens like us take
to raise more awareness on this issue, perhaps establish better protections.
Thank you, and I really appreciate it.
Yeah, this is a great question, Erlin.
The first thing that I think is really important to say, and Tristan said this in his TED talk,
is that it is not your responsibility to solve the whole problem.
It can feel overwhelming taking this all in.
And normally, you know, the brain goes to two places.
Either the, well, now that I've taken this all in,
I have to do something to solve the whole thing,
or I can't solve it, so I'm just going to ignore it.
And really your role is to become part of the collective immune system,
just calling out whenever there is sort of like
a bad argument, bad faith argument, or lack of clarity to bring that clarity.
I'll just say one thing I think you can do tangibly and then hand it over to Tristan.
And that is very simply make a list.
Make a list of the five most or the ten most influential, powerful people in your life that you know.
Ask, do they already understand these risks of AI?
And if they don't, go talk to them.
Send them the AI dilemma or Tristan's TED Talk.
That's the first thing that you can do and imagine if everyone did it,
how exponentially quickly clarity can grow.
Obviously, this doesn't solve the whole problem,
but if you just imagine for a moment, close your eyes,
if everybody imagined the top 10 most powerful influential people in their life,
and each of us know some people like that, right?
And then you recursively just had them also imagine the top 10 most powerful people in their lives,
and they were all made aware, with clarity,
that we are currently heading towards a dystopian path
that's not going to be good for so many people.
And Neil Postman, a great hero of mine, said that clarity is courage.
If you have clarity, then we can take a more courageous choice.
I think one of the reasons there isn't more action right now
is people are afraid to be the Luddite.
They're afraid to be anti-technology.
They're afraid of saying, well, AI offers so many benefits.
And I don't want to be the one who was making us as a country
or us as a company fall behind.
That would be so bad if we accidentally slowed ourselves down.
But what people have to understand is the current clear path that we're heading towards
is not actually a good outcome.
And we only have to clarify that to motivate everyone to want to do something different.
And so that's why I think sharing this sort of incentive view of the problem,
as represented in the AI dilemma and the TED Talk,
will help, I think, create that clarity.
And if everybody did that, sort of fractally zooming out to kind of a galaxy-brain
view of the world, we could get collective planetary clarity about a path and a future that
no one wants. All right, that was our annual Ask Us Anything episode. Thank you all for listening.
We love hearing your questions. Thank you to everybody who sent them in. You all are really
talented and thoughtful and we really care about being on this journey with you. And so onward.
Yeah, and just at the human level, it is so nice to connect with you, feel you, see you,
and see that the movement can actually see itself.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott,
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan,
mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday,
and a special thanks to the whole Center for Humane
Technology team for making this podcast possible.
You can find show notes, transcripts, and so much more at humanetech.com.
And if you like the podcast, we would be grateful if you could rate it on Apple Podcasts.
It helps others find the show.
And if you made it all the way here, thank you for your undivided attention.
