Your Undivided Attention - Ask Us Anything 2024
Episode Date: December 19, 2024

2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy, and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Earth Species Project, Aza's organization working on inter-species communication
Further reading on Gryphon Scientific's White House AI Demo
Further reading on the Australian social media ban for children under 16
Further reading on the Sewell Setzer case
Further reading on the Oviedo Convention, the international treaty that restricted germline editing
Video of SpaceX's successful capture of a rocket with "chopsticks"

RECOMMENDED YUA EPISODES
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We're Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Talking With Animals... Using AI
The Three Rules of Humane Tech
Transcript
Hey everyone. Before we get started, we wanted to let you all know that we're hiring
for a new director of philanthropy at CHT. Next year will be an absolutely critical time for us to shape
how AI is going to get rolled out across our society, and our team is working extremely hard
on public awareness, policy, and technology and design interventions. So we're looking for someone
who can help us grow to the scale of this challenge. If you're interested in helping with what I
truly believe is one of the most important missions of our time, please apply.
You can find the job posting in our show notes or at humanetech.com slash careers.
Thanks, and on to the show.
Hey, everyone, this is Aza.
And this is Tristan.
It's the end of 2024.
It's time for our annual Ask Us Anything episode.
Like always, we got so many awesome questions.
I want you all to know we read through every one of them,
but we'll only have time to speak to a few of them.
And before we start, just a request to really support the work that we're doing at the Center for Humane Technology,
we have a lot of momentum going into 2025 in our ability to effect change as an organization.
There are billions of dollars going into advancing AI amorally as fast as possible into every corner of our society.
And, you know, we have a very small budget relative to the amount that needs to be shifted and steered in the world.
By supporting our work, you can help us raise public awareness and drive much
wiser public policy, and the kind of humane technology design interventions that you've heard
us talk about in this show, as well as supporting really important lawsuits like the one filed
against Google and Character AI, which we've also talked about on the show. So with that,
we hope you'll consider making a year-end donation to support the work that we do, and every
contribution, no matter what size, helps ensure that we can keep delivering on our goals to bring about
a more humane future. You can support us at humanetech.com slash donate, and now on
to today's episode.
Let's dive in.
The first question comes from a listener named Mark.
What is the latest status of your research on being able to communicate with animals?
Aza, I'm guessing you want to take this one?
I thought you were going to do this one.
Thank you so much for that question.
So, as some of the listeners may know, I have this other project that I work on called Earth Species
Project, and we're using AI to break the interspecies communication barrier,
see if we can learn how to understand the languages of whales and dolphins and orangutans.
We have just trained our first sort of big foundation model that you can query.
So it starts to act a little bit like an AI research assistant for sociobiologists and ethologists.
One of the pieces of research we've been doing is working with these crows in northern Spain.
And normally crows, they raise their families as like a mother-father pair,
but for whatever reason, these crows are special, and they do communal
child rearing.
It's sort of like a commune or a kibbutz.
They all come together, and weirdly, they have their own unique dialect, their own unique
vocabulary to do this collective behavior, and they'll take outside adults and teach them
the new vocabulary, and they will then start participating in this culture.
And so the researchers we work with have little backpacks on the birds and we can see how they move, what sounds they make.
And just to give an example of how challenging this problem is, it's like, just like human beings, every crow has a different personality.
And for whatever reason, there's this one crow, and the adults don't like her.
And so every other crow, when they come back to the nest, they make this specific kind of call that says to the children, like, get ready to be fed.
And the chicks will come out ready to be fed.
But the other adults don't like this particular crow.
And so she has to start making this call before she lands
to warn the chicks that she's going to land and then give them food.
And she has this drive-by sort of like feeding.
And we're starting to be able, like our models are starting to be able to figure all of this stuff out.
It's really neat.
And last thing I'll just add is we've just released our first big foundation model
that you can use text to ask a question like how many animals are in this sound,
what age are they?
And it's starting to do the crazy emergent things.
it can figure out what an animal is,
figure out its name,
even if it's never heard that animal before,
which is sort of wild.
Do you mean figure out its name in the animal language?
You mean to figure out like its Latin name in English?
It's a great clarification.
No, it figures out its Latin name, the human name for it.
Yeah.
Wow.
Yeah.
So you hear a noise of an animal,
you don't know what it is,
and then Earth Species Project's foundation AI
will name in Latin what that...
Correct, even though that sound is not in the training set.
The model has never heard that sound before.
That's crazy.
Yeah, it's crazy.
One of the things that's fascinating to me about something you've said to me
is how we'll know how to talk to animals
before we actually know what we're saying.
And then that kind of relates to some interesting aspects
of our social media work.
Do you want to talk about that?
Yeah, yeah.
So this is the core plot twist,
is that we'll be able to fluently communicate with animals
before we understand what we're saying to them fully.
Because what is AI? AI is essentially a mimicker
and a translator.
So you give it lots and lots of data of animals speaking, and it learns how to mimic what the animals are saying.
We don't know what it's saying; we just know that it is.
We were observers of this experiment where we were on a boat in Alaska.
The researchers recorded the sound of a whale one day and were playing it back the next day.
And the whale went sort of crazy and it was like responding back.
And what we think was happening is, we believe, we're not 100% certain, that the researchers had recorded the whale's hello and its name,
and we were playing it back to the whale,
and it was like bouncing around the boat
and it was getting confused.
And I think Tristan, where you were getting interested
is like, imagine that accidentally
we had just recorded the sound of whale aggression.
And then we were just playing back whale aggression
to the whales, but not really knowing what we were doing,
we're just seeing the behavior.
And you have this sort of analogy.
Well, yeah, I mean, this is basically what social media is, right?
Social media is an AI pointed at your brain
that's recycling human outrage sounds.
And it takes this outrage sound that tends to work really well on this group of humans who are sitting over here,
clicking on these kinds of things.
And then it recycles those sounds to say, well, this will probably also work on these other humans sitting over here,
even though the AI doesn't know what the outrage is about.
It doesn't know what we're saying.
And you can think of social media en masse as kind of violating this prime directive
of kind of starting to screw with humanity's culture without knowing what it was going to do to humanity's culture.
Yeah, I think it's just so obvious that if we went out
and started playing back whale song to humpbacks,
we might mess up their culture.
Remember, their culture, we believe,
goes back 34 million years.
And their song goes viral.
For whatever reason, like the song sung off the coast of Australia
can be picked up and sung by many of the world's population.
They're sort of like the K-pop singers.
And it's sort of obvious that if you just take an AI
and have it communicate with animals,
it might mess things up.
And somehow that's just not obvious
when we think about it for humans.
I want to do one other quick beat here that listeners might, I think, appreciate. If you remember from our AI Dilemma talk, we have these three laws of technology: when you invent a new technology, you invent a new class of responsibility. So when you invent a technology that can communicate with animals, you have a new responsibility saying, what is the right way that we should be thoughtful and careful about communicating in this new domain? The second law of technology is when that technology that you invented confers power, it will start a race. And it may not be obvious to people how
your Earth Species technology might start a race.
Yeah.
Well, here I think if you think about it a little bit,
you're like, oh, how will people abuse this power?
One, ecotourists will try to get all the animals to come to them
so that they can get tourists to pay.
More worrying than that are poachers.
Poachers may use this to attract animals so that they can kill them.
And even more worrying than that in some sense are like factory farms,
which may use it to placate animals or use it for control.
And especially those second two, there is going to be an economic race to start using the technology.
And so before that begins, before there is a deep incentive, we think we have a responsibility as technologists to figure out how to create the norms, the laws, the international treaties, the Geneva Convention for Cross Species Communication, the prime directives, so that we can block those things before they begin.
And this is really critical because people hear us talking about the three laws of technology.
Imagine if Mark Zuckerberg in 2007 said, oh my God, I just started this arms race, this race to the bottom of the brainstem, I see where this is going.
I have uncovered this new technology that can do brain hacking, social psychological signal hacking, and it can screw with human culture.
And I, Mark Zuckerberg, am going to convene all the social media companies and policymakers and the government together to sign this sort of treaty on brain hacking so we don't get the race to the bottom of the brainstem.
And as impossible as that might have seemed,
you are doing that with Earth Species Project.
You are showing a living example
that you can intervene at those three laws
and create new responsibilities
and create the coordination so the race doesn't end in tragedy.
And I just want people to know that that's possible
and you're a living example of it.
Yeah, thanks so much, Tristan.
It's because of the work that we've done to try to articulate it,
that we even have access to the three laws, so I know what to do.
Okay, perfect.
Let's go to the next question.
Hi, my name is Tamunaj Kareoli, and I'm a journalist based in Georgia, in the Caucasus.
Georgia is a young democracy with an even younger internet history, but a very old and very complex
language.
And generative AI models don't speak Georgian.
So AI is not a big topic here and no one is really concerned about it.
I know that many of my colleagues from other countries with uncommon languages
find themselves in similar situations.
So I'm wondering what are the threats that we can talk about in communities like ours?
Thanks.
Really great question.
So I want to answer in two parts.
The first is like the rock and the hard place that low resource languages are in.
And that is when AI does a bad job of modeling your language and then makes decisions about you,
that's sort of like a Kafkaesque dystopia.
And then on the other side, when it gets really good at modeling your language, you're going to end up in a different kind of dystopia, which is the Orwellian dystopia.
And so it's hard to actually say, like, run out there.
Let's like get all the data for Georgian and make the models better.
So I just wanted to start by naming that.
The second thing to think about is just the amount of AI slop that is going to be your internet in your native language.
It's low-quality content from English, getting translated to be even worse content.
So that sucks.
That's a lot of AI slop.
And of course, it's just going to get worse from here.
And so how do you solve the problem that there's just not that much Georgian content?
Well, the way AI companies are going to solve it is by training models on English and then doing kind of style transfer to translate content into Georgian.
And what that means is it's going to smuggle the entire worldview
of whoever speaks English, Americans, into Georgian.
So you're going to have this kind of invisible dilution
of that which is Georgian culture.
It'll be like the majority of Georgian culture online
will be from translated English stuff,
which means it won't be from any of the original context
or mannerisms or history or subtext or care
that is only arising in Georgia.
And that's just one of the really sad parts
of how people are going to try to fill in the gap
for all these minority languages.
All right, let's go on to the next question.
This one comes from Spencer Goodfellow.
All right, so he asks,
there are a bunch of young and ambitious AI engineers
or researchers in the world,
and I am one, who deeply believe in the perspective
that you, Tristan and Aza, shared
in the original AI Dilemma presentation.
However, right now, it looks like the market is still winning.
It's proving difficult to meaningfully regulate AI.
We've seen departures from the big AI labs
over safety concerns, and OpenAI just announced o1, which is likely not far from public access.
In the language of the AI dilemma, we still feel like we are racing towards the tragedy.
What should these young AI engineers and researchers who care deeply about making AI safe
and are concerned about the direction of travel, the race dynamics, and the economic incentives, do?
Should we still want to work at these big tech companies?
Should we go into government and push for policy and faster regulation?
Should we campaign for safety standards?
Many of us are early in our career
and are concerned that if we speak out,
it will harm our prospects
or that we don't have enough career capital
to be taken seriously.
I feel like the first thing I want to do
is take a deep breath because this is very real.
This is something that we hear a lot
from people who have worked
both in the social media harms
and also in AI where we have a lot of people
and listeners of this podcast should know
that there's so many sympathetic people
inside the tech industry
who sympathize and understand and agree with
that sort of fundamental diagnosis
that we've been offering here for a long time
and they want to do things,
but what often happens is they feel that if they speak out,
they'll just get compromised,
or they'll have to leave the company or they'll be forced out.
And so it is a difficult situation.
I don't think there's a single place or a single way
that you can have the biggest impact,
and we should just talk about a number of things
that could be helpful right now.
Aza, do you want to jump in?
Yeah, so Tristan and I were recently at a conference
and we heard Jack Clark, who's one of the co-founders of Anthropic, speak,
and he was sort of giving his best theory of change
and from all the lessons learned, what's worked, what hasn't,
and what he thought the highest leverage place was,
was making the risks more concrete,
that is show demos,
that often when people are going into Congress,
and when we've gone into Congress,
we're coming in by talking about what the potential harms are, we're making an argument.
Because they haven't happened yet.
Exactly.
In some cases.
Yeah.
But then there are other people who make different arguments.
And so now it gets adjudicated by who's making the better argument.
Or who has the better vibes.
Who do I like better?
Who do I like to listen to?
And the people who talk about how great it's going to be make me feel better.
Yeah.
This is why whenever we talk about the harms, we have a little boombox with some great music.
And so Jack Clark was saying we need to get better at showing evidence of harms,
creating demos of harms. And he gave an example of working with, what was the name of that
company? Gryphon Scientific. Yeah, who, like, showed up to the halls of Congress with test tubes. To the
White House. No, to the White House, with test tubes that the AI had taught them how to make
something very dangerous. And he said that that was very effective. And that's an example not of
demonstrating the risk, but making it more real, making the issues more real. Now, again, if we said
that there's a perfect answer, we would be lying. But one of the things we're thinking a lot about,
and I'd love to ask you to think about, is given your skill set, what kinds of demos can you create
that are visceral, that people can touch and feel? So it goes from something abstract to something
concrete. And if you can't make those demos yourself, you know, help other groups that are
outside make those demos and present those cases on your behalf. I mean, our friend Jeffrey Ladish
at Palisade Research, this is the great work that they're doing.
Gryphon Scientific, METR.
There are several organizations that are trying to basically test and evaluate these models
and then visceralize those examples.
And I think there needs to be more public education so that when a really good paper comes
out showing, you know, an AI can autonomously hack websites, like that should be a little
video that people can easily see and understand.
We need to get that in front of people who have power in society as quickly as possible.
And this is also why things like whistleblower protections or ways in which the people
who are closest to the red lights flashing, closest to the early
signs of dangerous capabilities in the models, why it's so important to protect those channels
of that information getting to the places that it needs to go. Whether it's the whistleblower
protections or nudging outside groups to do certain demos, wouldn't you rather have governance
by simulated train wreck than governance by actual train wreck, which is unfortunately
the history of regulating technology in this country?
We invited Casey Mock, our chief policy and public affairs officer,
and Camille Carlton, our policy director,
to add some of their perspectives on a few of the questions as well.
Hi, guys.
I'm a recent college graduate who's interested in getting involved in AI policy,
but I'm currently at the entry point of work,
and I'm worried that by the time I find a job in the field I want,
it's going to be too late to do anything really impactful about AI,
and that it's going to accelerate so quickly
and be so significant
that it's really impossible for somebody like me
to get involved and regulate it.
What would you say to me?
What advice do you have?
Thank you so much.
Casey, do you want to take that?
Sure, thanks, Tristan.
You know, my experience working in policy,
I found that it's a pretty good idea
when you're starting out to not commit to a domain
and rather work on developing the raw skills.
So understanding how government works,
understanding how the law works,
understanding how to talk to policymakers,
and just understanding the political and media processes
that impact how things go.
But the meta question here is
how fast this is moving
and how it seems to outpace
how quickly government can react.
And I think for a field that's moving this fast,
we're all struggling with this.
The most attractive way forward six months ago
may not necessarily be the most durable
or effective policy response going forward.
Yeah, I mean, just to affirm what you said,
we struggle with this every day.
The world is increasingly being run
by how technology is shaping
the kind of legal, economic, and other forces
downstream from technology.
And so the most important thing that we can have now
is also technologists that are armed
with this bigger understanding of how they're shaping the world.
Now, is that enough? No.
But we need, you know, rather than having technologists
who are just trained narrowly in how technology works,
we need technologists that understand how the world works
and how the decisions we make as technologists
end up shaping the world,
whether that's social media, you know,
affecting the psychological and sociological environment of humanity
or AI reshaping every aspect of human beings.
And so the important thing to take from that is we need people who understand the technology in government.
And so the advice I'd give to you very directly is seek the spaces between, you know, find the intersections.
So understanding both AI, how it works at a deep level so you can work with it, and understanding policy is going to put you in a very privileged position, and we're going to need people like you to lead us into the future.
Just to close my comments with an extra bit of practical professional development advice,
the levers of influence within government are not always the sexy ones.
I worked in budget policy because that's where stuff really happens; it's where the money flows through.
I worked in tax policy for that reason too, and it's a great place to get started.
So don't feel like you need to gravitate directly to the sexiest federal AI policy job in order to make a difference.
There are lots of ways to make a difference throughout all levels of government.
And just to end, I'm going to throw down something, which is thinking
toward the future of governance, which we're absolutely going to need people thinking a lot more
about.
And that is, there's sort of an emerging field of using AI simulation to do better policy.
So there's a whole area of peace tech.
There are now things where you're using AI to simulate how a policy works.
So you can find its externalities before you actually deploy it.
That's not hooked up to any levers of power yet,
but if I'm looking three, four, five years into the future,
I think that's where it's going.
All right, that was great.
Let's go to the next question.
Hi, Tristan.
Hi, Aza.
My name is Mira.
I'm a big fan of your work
and really excited to have the chance to engage with the podcast.
My question is around the path forward from the attention economy.
And the way I see it,
there are really kind of two categories of solutions.
The first is interim solutions that focus on
mitigating kind of immediate risk, like, you know, banning phones in schools or putting age
requirements on digital platforms. And then there's kind of the more long-term solutions of
disrupting the attention economy via what I call sustainable business model innovation.
And I think there's three ways, in my view, that we move towards this with policy solutions.
one is kind of making it
it being the attention economy
less profitable with legislative fines
as the European Commission is kind of doing right now
with the DSA.
The second is ensuring competition policy
protects the opportunity for disruption
and the third is somehow catalyzing
sustainable business model innovation
in consumer and entertainment technology at large.
And so I'm wondering if you see this kind of
I guess ideal path forward similarly or differently to me. And if you kind of do think we're
looking towards this sustainable disruption, how do you imagine in these three ways I've outlined
or in other ways we might achieve that? Yeah, this is a great question, Mira. And I think it makes
me think of something in our work, which are the three timelines of responses. There's triage responses,
which are things that we can do immediately to stop the bleeding, kind of reduce harm. There's
transition, which are things that are transitory measures to try to take us from the toxic world
with the fundamentally misaligned business models, the fundamentally misaligned incentives that
we're currently operating in and nudging them in a better direction, things like liability,
things like changing the business models. And then there's what we call transformation. So
triage, transition, and then transformation, which is actually upgrading the entire institutions
that are governing technology in the first place. So with that kind of at the top, I'll kick
to others to jump in. Yeah, no, I think that's right. And I think one of the things about our work
has always been acknowledging first that there's not a silver bullet, but actually we need a
combination of interventions that approach different aspects of the ecosystem altogether
to really get us to that transformation point. It's not going to be one thing that does it.
It's going to be liability. It's going to be design-based interventions, an upgrading of antitrust
that's grounded in looking at concentrations of power, behavior, network effects, upgrading
the way that we even think about data privacy, you know, for the age of AI, what does it mean
for our cognitive freedom?
What does that mean when we think about data privacy and our right to think freely?
So I think that there is a whole slew of these kind of middle point interventions that we
need to be pushing at altogether to get us into that endpoint of transformation that we really
want. I was going to say it's very much a yes-and, too. I don't think that we intend to imply,
from Camille's silver bullet point, that one thing is maybe more necessary than another. I'm down here
in Australia this week and just last week, Parliament passed a law banning social media for kids
under 16, which seems like a little bit of a simplistic measure. But it's important to know that
policymakers here are trying to be responsive to people's thoughts and feelings on the ground. It's the same
with, you know, schools, new phone policies that are becoming popular in the U.S. as well.
And these measures, I think, can be thought of as, like, a way to buy ourselves some time
to prevent, you know, additional harms from occurring until we can, you know, really sort
out longer-term, effective, durable solutions that change the incentives that make the technology
that we ultimately get.
In the list that you gave, which was wonderful, one of the sort of areas to intervene that I didn't
see was intervening at the level of just raw engagement, because for the attention economy companies,
their bread and butter, how they make money, everything is based on whether they have
users that are engaging. And the problem with just taxing them at the end or creating a fee
is that adds a small sort of friction, but it doesn't change fundamentally their business model.
If we can intervene at the place of engagement where users engage, whether it's with latency
or something else, that starts to hit the companies where it really hurts, and I'm very interested
in exploring what that starts to look like. We have now seen the effect of
what happens when human beings are optimized by AI for attention just by rearranging things
that humans have posted. And that's sort of first contact with AI. We are now in second contact
and quickly moving into the 2.5 contact,
where AI is generating content
that we are going to have AI optimizing us for.
And when we're optimized for attention,
it's not like it just distracts us.
It crawls deep inside of us
and changes fundamentally our values.
Terraforms us from the inside out.
We become the kinds of people
that are addicted to needing attention.
And hence, politicians cease to be people
that just pass laws,
but they become Twitter personalities
that need to be out
in the public sphere; journalists become clickbaity. There's a way that when human beings are
optimized by AI, it doesn't just change a behavior, it changes our identity. And that's about to
happen on many, many more domains. Just one of them is the race to the bottom of the brainstem
becomes the race to intimacy, the race to the bottom of our souls, if you will. And we have
no protections against that. And I just always think back, if we could go to 2012 and we could put
strict limits or ban on AI commoditization of human attention, how different the world would be
today, and how grateful we would be if we could find some way to put strict limitations or ban on
using AI to commoditize human intimacy or any of the other parts of ourselves that we probably
don't even have words for yet. Yeah, Aza, that's right. And I think to just offer an example
of where this attention economy has transitioned into AI is in the case of Sewell Setzer.
So this is a case that we helped support in which a young teen died by suicide after months of
manipulation and prompting via an AI companion bot on Character AI.
And so what we saw in this case was a very similar structure to what we've seen in social media
where it was a goal of maximizing user attention.
But what was different here is that it wasn't for the purpose of selling ads.
It was for the purpose of extracting user data in order to train their LLM
so that they could continue in the race to develop the most powerful LLM across the industry.
And so we're going to continue to see some of these underlying incentives, I think,
cross over into AI.
And we're going to see how the patterns
we've seen in social media are playing out in AI, and new patterns are emerging, too.
Great. Thanks for that question. Thank you again to Casey and Camille.
Thanks, Tristan. Yeah, thank you guys. Thank you for joining us. Let's move on to the next question.
Hi, Tristan. Hi, Aza. I entered design and innovation in 2018, and shortly after co-founded a do-no-harm framework for design and tech.
This was with a senior UX colleague who's of a progressive and ahead-of-the-curve mindset.
Our initial lecture series caught the attention of universities, and it snowballed into a highly
popular methodological framework and a talk. It's been incredibly popular across the creative
industry, design leadership and project management divide. But where we've seen notably less
engagement is from the kind of harder tech side of the design innovation divide. So I'm
talking about the programmers, the coders, the engineers. And I'm wondering, what do you think
makes them less inclined or kind of incentivized to adopt a
shared standard of practice, especially one which assesses and makes risks of harms for individuals
and groups transparent, and where others in the broader shared space have understood how critical
this is, and where I imagine, you know, risk assessing and due diligence processes are quite
familiar. Thanks. I'm going to sum up the answer in two words. Loss aversion. Loss aversion
is that it is more painful to lose something now that you have it than it would feel good
to gain it if you don't.
There's this weird path dependency
with computer engineers
where if you're a civil engineer
or if you do surgery
or if you do medicine
or if you're in the military
or if you or in aviation,
we know that those professions
like people's lives are on the line.
But computer programming,
you know, I learned just sitting at home,
there was nothing on the line.
People's lives weren't on the line.
And so there didn't seem to be
power associated with being
a coder. And every year the power has gone up and the power has gone up, but it's sort of a frog
boiling thing. There was never, like, one moment of, oh, now coding must become a profession like
every other profession. And that's what I mean by loss aversion, because adding it now feels like
we're all losing something, versus if we had just started that way. If we had started that way, then
we wouldn't be upset about that. You know, a second thing this makes me think of is I just got back a couple
weeks ago from speaking at West Point for their ethics and leadership conference. And this is to
several hundred military cadets that are going to be the future of the U.S. military. And one
thing that's fascinating about going to West Point is that part of their whole education curriculum
is on moral and character development. Like it's an official part of their curriculum, you know,
because nation, duty, honor, service are fundamental things that you have to have as part of your
character before we put you in, basically, you know, military
hardware, put you in F-35s, put you in war, put you in battle. There needs to be a level of
responsibility, wisdom, awareness, prudence that has to be in you before we hand you this increasing
power. And this is not uncommon, whether it's, you know, driver's education. In the U.S.,
you show people this video called Red Asphalt of all the things that can go wrong that kind of
scare you into being a more responsible driver. But somehow we haven't had that with technology. And as you
said, Aza, because we would lose something now from being the kind of freewheeling AI GitHub agents
that we are, getting to write all the code that we want as fast as possible.
All right. Let's move on to the next question. This one from Krista Ewing.
All right, Krista asks, do you know if there is a push for high-risk publicly traded businesses
like health care, defense, security, et cetera, to release their AI safety budget numbers?
I think this could really help put pressure on businesses to support these functions
in the way they truly need to be supported, in my experience.
I think these transparency measures are totally necessary, of course not sufficient, but a good start.
And it has to be more than just releasing budget numbers because those things are all pretty fungible.
So you'd want to know how it's enforced and how it's getting spent.
But I'll pass back to you just on.
Yeah, I mean, this is pointing in the right direction in terms of the problem. Stuart Russell,
who's in AI and wrote the textbook on artificial intelligence, and a friend of ours,
you know, estimates that there's currently about a 2,000-to-one gap
in the amount of money going into increasing how powerful AI is and scaling it
versus the amount of money going in to make it safe.
Someone else in our community thought there's about 200 people total in the world who really
are working on AI safety and something like 20,000 that are working on getting to AGI.
And this is definitely not a good situation.
Yeah, one thing to add here is that safety versus increasing AI's power are not cleanly divided.
Like, for example, if I was working on the safety for nuclear weapons, I would be working on
permissive action links, I'd be working on, like, control theory, but I'd be working on things
that aren't actually increasing how explosive the nuclear weapon is. It's clear that
when I work on safety, I'm not also increasing the power of the weapon. But with AI, it's actually
a fuzzier boundary because as you're increasing your understanding of all the dangerous things
it could do, that's part of what makes it safe is by understanding all the dangerous things
it can do, which is also about increasing the power of all the things that it can do.
So it's not as clean of a boundary, and it makes it more confusing to work on AI versus other domains.
But I will say that when we're out at conferences with people who are on the inside,
one of the things they do ask for is greater transparency as a prerequisite for everything else.
In fact, one of the people who's very high up at one of the major AI companies thought
that we should be building a big public coalition and movement around transparency,
as something that could be the basis for getting a foothold for more regulation.
Thanks so much for that question.
Let's go into the next.
Hello.
My name is Jason, and my question is, with everything that's been going on this year and how quickly things are moving,
are there any bright spots, reasons you're feeling hopeful about 2025?
Thank you.
That's interesting you use the word hopeful.
We were with our friend Paul Hawken, who wrote the book Drawdown
along with a large community of folks, and the book Regeneration.
And in a conversation we had with him recently,
he called hope the pretty mask of fear.
That we don't need more hope.
What we need is more courage.
You know, it doesn't look good out there.
You know, I think we can all be honest about that.
The incentives to keep scaling, to keep deploying.
If we don't do it, we'll lose to China.
It's all going forward currently.
However, there's nothing in the laws of physics
that says that it's impossible for us to coordinate
or to do something different than what we're doing.
And we have had cases where a new technology was invented
and conferred power like germline editing.
You could have designer babies
and you could have super soldiers and super smart kids.
And theoretically, the three laws of technology
should have applied there.
People should have started racing
to get super smart babies.
And we ended up not actually going down
that road because there was something sacred about engineering humans, and we collectively
agreed not to do it. And I think that's hopeful in the sense that we should study examples
where this unlikely thing happened. What was it about that example? And we think it's because
there was something really sacred that was being violated or distorted or warped. And there's
a second example that we sometimes cite of the Treaty on Blinding Laser Weapons, I believe,
is a protocol signed in Geneva, and it would clearly give some military an advantage to invent
blinding laser weapons where you just, it's a new weapon, and it's very effective. But it was
so inhumane at some fundamental level, even though it's ironic, because weapons that kill people
are also inhumane. And yet, you know, we were able to sign this treaty. Insofar as I know,
I don't think that they're being actively used on the battlefield, even though they may be
pursued, you know, as backup options in secret. We are not for stopping AI. We are for
steering AI. And I think that there's still steering that can happen. I think the energy that we
need to come with when it comes to steering is not this kind of, you know, safety, slow it all
down, careful, careful energy. It's like steering AI should be an exciting prospect. You know,
we were looking back recently at the video of Elon's rocket, you know, this advanced engineering
feat where he's shooting a rocket all the way up into space and then it's landing back between a
pair of basically metal chopsticks. That is 21st century steering. That is the kind of energy
around steering AI that we want our work to help catalyze in the world. And we want as many
of the smartest minds of our generation to be working on steering AI. And obviously, you know,
it is moving incredibly fast. And so it has to happen immediately. But that's kind of where my mind goes.
It's less about hope and more about how do we put our attention in the agency of what we should be doing.
We were at a conference recently, and someone said, we're not competing for power, we're competing for crazy.
It's crazy what AI can do.
But if we saw the craziness of what we were building towards as more unstable, more volatile, more needing of steering, then that would implicitly allow us to focus more of our energy on the urgent need for developing the steering of technology.
And that's part of what we're doing.
When people think that we're being negative about the risks, it's like we're just trying to highlight the craziness so that we can motivate a collective desire for steering that is bigger than our collective desire for competing for the crazy thing that we can't control.
And in that sense, AI becomes humanity's invitation into finally learning how to have a mature relationship with technology so we can do it in a different way and actually end up in the world we all want.
And that's why we at the Center for Humane Technology, you know, work on these issues every
day, across the aisle, with policymakers on the left, on the right, you know, internationally,
with listeners like you, training humane technologists with our course. This is why we show up
and do the work that we do. All right, everyone, thank you for these incredible questions. We
love being in dialogue with you. We're honored to be able to be on this journey with you.
We're grateful that you listen to these episodes of this podcast and take these insights into the
world. Thank you again to Casey and Camille. Thank you for joining us. This will be our final
episode of the year, but we will see you back in 2025.
I just want to say one thing about this moment in time and what it means to support us.
And that is we've talked a lot about the need to intervene in the direction that AI is going
before entanglement with our society.
We still are in that window, and now is the time to burn that flare bright.
So please support us.
You can support us at humanetech.com slash donate.
All right, everyone.
Take a deep breath.
Have a wonderful end of
the year, and next year will be even faster.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudaken, original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team for making this podcast
possible.
You can find show notes, transcripts, and much more at humanetech.com.
And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show.
And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.