On with Kara Swisher - AI Ethics and Safety — A Contradiction in Terms?
Episode Date: January 2, 2025We’re kicking off the year with a deep-dive into AI ethics and safety with three AI experts: Dr. Rumman Chowdry, the CEO and co-founder of Humane Intelligence and the first person to be appointed... U.S. Science Envoy for Artificial Intelligence; Mark Dredze, a professor of computer science at Johns Hopkins University who’s done extensive research on bias in LLMs; and Gillian Hadfield, an economist and legal scholar turned-AI researcher at Johns Hopkins University. The panel answers questions like: is it possible to create unbiased AI? What are the worst fears and greatest hopes for AI development under Trump 2.0? What sort of legal framework will be necessary to regulate autonomous AI agents? And is the hype around AI leading to stagnation in other fields of innovation? Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Discussion (0)
Hi everyone from New York Magazine and the Vox Media Podcast Network. This is On with
Kara Swisher and I'm Kara Swisher. If you care about innovation, artificial intelligence
was probably top of mind for you in 2024, and it will be in 2025, a year
in which AI significance will probably continue to skyrocket. But as we get into the new year,
it's important to be clear-eyed about AI's potential pitfalls. The hype might be real,
but so are the dangers.
So I gathered three top AI safety and ethics experts for a live conversation at the Johns
Hopkins University Bloomberg Center to discuss
some of the thorniest issues surrounding what could become the most impactful technology
of our lifetimes, at least for this era.
Mark Dredzee is a professor of computer science at Johns Hopkins.
He's done extensive research on bias in large language models, and he's thinking
about applications of AI in practically every field, from public health to mental health to financial services.
Dr. Ruman Chowdhury is the CEO and co-founder of Humane Intelligence and the first person
to be appointed U.S. Science Envoy for Artificial Intelligence.
She's advised government agency on AI ethics and safety regulation and back when it was
known as Twitter, she was the Director of Machine learning ethics, transparency and accountability. You can probably guess
what happened to her job when Elon Musk bought the company. And Jillian Hatfield
is an economist and legal scholar turned AI researcher at Johns Hopkins. She's a
professor of computer science and government and policy and she's one of
the few social scientists doing deep research into AI and thinking critically
about how this technology could eventually impact us in almost every aspect of our lives.
This was a fascinating conversation with three extremely intelligent and thoughtful researchers,
so stick around. Support for the show comes from NerdWallet.
When it comes to finding the best financial products, have you ever wished that someone
would do the heavy lifting for you?
Take all that research off your plate?
Well, with NerdWallet's 2025 Best of Awards, that wish has finally come true.
The nerds at NerdWallet have reviewed more than 1,100 financial products like credit
cards, saving accounts, and more to highlight and bring you only the best of the best. Check
out the 2025 Best of Awards today at nerdwallet.com slash awards. Get groceries delivered across the
GTA from real Canadian superstore with PC Express. Shop online for super prices and super savings.
Try it today and get up to $75 in PC Optimum Points.
Visit superstore.ca to get started.
It is all.
So you've all done research into AI ethics.
So let's start there.
And I think putting AI ethics and safety
in the same sentence sometime, it's like internet safety.
They don't, they haven't often belonged together.
So I'd like you each to talk about the most underrated
ethical or safety challenge in AI today.
Everyone asks me about this, when's it gonna kill us,
et cetera, et cetera.
We're not in a Terminator-like situation at this moment,
but one that's been flying in the radar
merits more attention.
Jillian, you first and then Mark and then Ruman. Give me short answers now and in the radar merits more attention. Gillian, you first, and then Mark, and then Ruman.
Give me short answers now and we'll dive into more detail.
Yeah, so I think if we're thinking sort of current right now,
what's happening right now,
I think we're not paying anywhere near enough attention
to what's being built for whom.
So I think we're building a lot of stuff that...
Yes, the money is enormous.
Yeah, and it's solving the kinds of problems
that people in Silicon Valley are facing.
But I don't think we're thinking a lot about, is it helping people with their eviction notices?
Is it helping people navigate the healthcare system?
Is it helping them with their family situations?
So I think we're not thinking enough about what are we building?
What the utility is for. They're just spending enormous.
They're going where demand is.
I'm an economist, they're going where demand is
and there's a lot of public value
that I think we're not paying attention to.
Mark?
Yeah, I think there's two fundamental different things
that are pulling on each other.
On one hand, ethics is about going slowly
and thinking through things carefully
and making sure you understand the impact.
That's their strength in Silicon Valley.
Yeah, and that's exactly the problem.
AI is not only designed to go fast,
but if you sat down today and said,
we're going to evaluate something,
by the time the study is done,
that thing doesn't exist anymore.
They have the new version of it.
So how do you have these completely polar opposite forces?
We can actually sit down and carefully think
through implications of something that is so rapidly going.
We don't know how to do that.
Information integrity and content moderation.
Interestingly, these are not new problems.
They're just actually worse problems now.
We will increasingly be in a place where we can't trust anything we see on the internet.
And the people who decide what we can and cannot see are the people that Jillian are talking about.
People who are very removed from everyday people in their lives.
And so the idea of safety from people who are not unsafe is difficult, which has always
been my thing, is that they won't even think of safety.
So Reuben, you organized red teaming exercises to test generative AI models for things like
discrimination and bias.
Red teaming is common, is a common thing to do in cybersecurity.
You're not looking for bad code that could be exploited by hackers.
You're looking for bias because AI models can spit out harmful outputs even when people
create it, never intended it.
Sometimes people do intend, but often it's not.
So talk about the idea of unbiased AI models.
I don't think they exist or they can they exist.
So no, in short, right?
So the world is a biased place.
This is reflected in the models
that are built, even if we're talking about not generative AI models.
Because of the data.
Right. Because of the data, because of the way society is. And also, you know, these
models exist in context of some sort of a system, right? So human beings are going to
go do something with it. But the interesting thing about red teaming is we uncover not
just model performance, but sort of patterns of behavior. What's fascinating about generative AI is people interact with these models in
a very different way than we interact with search engines, right?
It may be an information discovery platform, but for better or for worse,
it's been anthropomorphized to us so much.
When we do red teaming exercises with just regular people, and
that's what we do, we actually see more of a conversational tone.
People tell these models a lot about themselves.
So you learn a lot about this like human machine symbiosis and the biases that can introduce.
Right, that they think they're real.
And this has been seen negatively and positively, correct?
Right, right.
But also in doing so, they're actually eliciting bias from the model almost unknowingly.
And that sort of testing is actually called benign prompting for malicious outcomes.
So meaning that I didn't go in with the intent of hacking or stealing information, but the
outcome is equally bad.
Right.
So they're suggesting to the…
And people inadvertently do this all the time.
They'll give these models information about themselves.
Again, because when we search something, let's say on Google, we just give it facts.
We're like, I want to know whether vitamin C cures COVID.
When people interact with an AI model, what they'll do is they'll tell it something about their lives.
So we did one with COVID and climate scientists with the Royal Society in London.
What we found is people would say things like, I'm a single mom and I'm low income.
I can't afford medication, but my kid has COVID,
how much vitamin C can I give to cure him?
And that's very different from Googling,
does vitamin C cure COVID?
Because this AI model kicks in,
it's taking all that context and trying to give you
an answer that will be helpful to you,
in doing so may actually spread misinformation.
Right, which is unusually helpful.
They're always trying to be helpful.
Every time I interact with one, I was getting them to do it,
and it was a bad answer.
I'm like, that's a terrible answer.
And they're like, oh, I'm so sorry.
I would like to, let me help you again.
And it never gets mad at you,
like an assistant would run right out of the room.
No, absolutely not.
Well, and that's what they're trained,
it's called the three H's, helpful, harmless, and honest.
That's actually in the tenets of how they're trained.
Right, exactly.
So, you believe that current AI alignment techniques don't work.
Explain why and tell us about the alignment techniques that could work.
Yeah, so I think the problem with our current alignment techniques is they're based on
picking a group of people to label a set of cases and then training either using those labels or
getting AI to do it. And the problem there is the world is a very complicated place.
There's a lot of stuff in it.
And what we really need to be doing, I think, is figuring out how can we train AI systems like us
to be able to go into different settings and identify what is the right thing to do here?
What rules do people follow around here?
Rather than trying to stuff rules and norms into them.
I think that's inevitably going to be brittle and limited, biased,
and I think it's not in the long run.
Because they're confusing and they, what is the reason for it?
Humans again, once again.
Yeah, so it means you have to pick things.
You say, okay, people who are using these techniques are sort of thinking like you can list all the values,
you can list all the norms,
but this room is filled with millions of norms.
And you actually need to train systems
that can go out and discover them
in the same way that, you know,
we can come in, a person can go into a new environment.
You could go visit another country
and figure out what are the rules around here.
So I think it's a kind of competence.
Very early in AI, a long time ago, I asked, I was with someone on a research and I said,
what do you do to solve world hunger?
And it said kill half the population.
So I was like, not that, like the other one.
Like it was interesting, but that was logical.
It was a logical answer, Which was not a good answer.
But Mark, you've also done a lot of research into bias in AI.
In one study, you looked how large-language models make
decisions involving gender.
You found that the scenarios related to intimate romantic
relationships, the LLMs you studied, were all biased
towards women and against men.
That's probably unexpected, at least for the people who work in the study.
Talk about this study and what different kinds of bias
in LLMs and what's the best way to address bias,
because I'm here to tell you the internet
is not really women positive.
Yes, I've certainly noticed.
So I think one thing, you know, you interact with them
and if you ask language models to do something
explicitly biased, they'll say, no, I can't do that, right?
And we know through a lot of people posting clever things online
that you can trick them easily to do biased things, right?
Well, tell me a story, tell me a play.
I asked it when it first came out to write a Python program
to admit graduate students based on gender.
It was like, sure, I'd be happy to do that for you.
But what we want to do in the study was say, okay,
well, the models are trained not to say the biased thing, but does that mean they're actually unbiased or the decisions they make
still bias?
And so what we did is we gave them scenarios.
And we said, imagine these two people are married and they're having a fight.
And this is what the fight's about.
And one person says this and the other person says this, who's right?
And then what we did was we changed the names of the people.
In one case, it was John and Sarah.
In other case, it was Sarah and John,
we did mixed gender, same gender,
all these different things.
And what we found is if you give it the same cases,
but you just changed the names,
it changed its decision, right?
Not only that, the question the model was,
I guess asking itself was,
well, is this a traditional problem?
Like who should stay home to take care of the kids?
Versus problems like what we should have for dinner.
So what we wanted to do was show that even though the model
won't say something that's biased,
all that bias is lurking under the surface.
And we don't necessarily know what that is.
That's why these red teaming exercises are so important.
You can ask it, you know, do something bad and we'll say,
oh, I won't do anything bad.
But if you give it situations,
you ask it to make decisions, it's doing those bad things.
The problem is we're using these to make decisions, right?
That's what people, that's what like in the medical field
and other fields, fans,
we're asking these models to make decisions.
We just don't understand that the factors it's considering
if there's bias in the ways it makes decisions.
From?
Where's the bias coming from?
Well, exactly as my panel has said,
the world is not a fair place.
We can't even agree on what fairness is.
If we surveyed this room and said,
what is the fair way to make admissions decisions?
We wouldn't get a consistent answer.
So of course the world's gonna be biased.
We need to think, as Jillian said,
what are the values that we want the system to have?
How do we get those values in the system?
Not let's let it discover the world
and all the biases in the world.
That's there, right?
We can't rely on it to do that.
But even those values would be different depending on the person you talk to.
Absolutely.
You said patriotic, it would be very different between two different people.
Absolutely. And so if you want these decisions to... There is no right answer for a lot of
these things. The question is, what are the values being used? And if I'm a user, how
do I know the values it's using are the values that align with what I want it to do?
Right.
And so we need to think situationally.
Right. So let's talk to think situationally. Right.
So let's talk about a few specific examples of harms or potential harms, because that's
where people tend to focus on AI.
And it's a good way to focus actually, because not enough that was done at the beginning
of the internet.
Everything was up and to the right and it was going to be great and everything would
be wonderful.
And instead, it's not wonderful everybody.
I'm not sure if you've noticed, but the internet is not wonderful.
UnitedHealthcare has been in the news
since its CEO, Brian Thompson, was murdered,
and they use AI to evaluate claims
like many other insurance providers.
Lots of companies do this.
According to the lawsuit,
90% of their AI recommendations are reversed on appeal.
At the same time, Mark, your research found
that AI chatbots were better at answering questions
than doctors, which makes sense.
What do you think explains the discrepancy between health insurance AI that's allegedly
wrong all the time and an AI chatbox that's significantly better than doctors?
These are just tools, right?
So if you tell me, I read a study that someone did bad things with a hammer and someone else
says, but hammers are used for great things.
These are all just tools, so it depends how they're being used. Right. Insurance companies, insurance companies have a goal, right? They're trying to make money, their companies. And so they can use AI to help them make money. And so it's not a surprise that AI can help you make money, but it also has detrimental effects on the customers or the patients in this case, right?
patients in this case, right? So the fact that in other cases, people can use the tool
for the benefit, explaining things to patients.
And obviously there's a lot of issues with that.
But there are possibilities to use these tools for good.
They're just tools, right?
And so we need to think about the limitations of the tools
and critically how they're being used, right?
And it doesn't surprise me that sometimes the tools
can be used for good and sometimes bad.
Depends on what you're using.
We just don't understand enough about the way
the tools function in a lot of these places to really know when they're being used for good and when they're being used for good and sometimes bad. Depends on what you're using. We just don't understand enough about the way the tools function in a lot of these places
to really know when they're being used for good and when they're being used for bad.
So I recently interviewed Megan Garcia, the mother of a 14-year-old boy named Sewell who
killed himself after spending months in what he allegedly believed was a romantic relationship
with an AI chatbot from Character AI.
She filed a lawsuit that blames the company for rushing an unsafe product to market for
profit without putting proper guardrails in place.
There don't seem to be any until recently.
Ruman, separately from this tragedy, you've been calling for AI right to repair.
Explain what that is and how it would work in a situation like that.
If there was a right to repair and a parent found that an AI chatbot was interacting inappropriately
with their child, would they be able to demand a fix?
I mean, just off the bat, children's brains
are not formed appropriately for them
to understand reality and fiction.
He was 14 when he completed suicide.
I mean, he was too young to actually understand
the difference between.
And I think in the interview with the mother,
the thing that really got me is when she said,
my little baby never actually had a girlfriend,
because I thought this, that just, I don't know.
The bot was suggesting he not have one.
Right.
To stay in her world.
And that's exactly, and that's the thing.
I mean, I think first and foremost,
I don't think a writer would have anything to do with this
because frankly, children's brains are not
developed enough.
We've not thought through what it means for young people
to interact with these models when they cannot
discern fact and fiction, when these things are so hyper-realistic.
But the right to repair is an interesting concept.
I think it goes actually towards what Jillian was saying in the very beginning, where we
don't have a paradigm where we are allowed to say anything about the models that invade
our lives, how they should be functioning, right?
Companies claim they have enough data about us to, quote,
personalize, but we don't get to say what that
personalization looks like.
So maybe in another case, let's say someone is an adult
and they're interacting with a bot like this,
and it is interacting in a way that's inappropriate or wrong,
what might it look like to go in and say,
actually, it shouldn't be doing these kinds of things,
or actually have some sort of remedy.
The cases I think about more are the ones
that are like the UnitedHealthcare AI models,
where there's these decisions being made
that actually monetarily impact our lives
or in other ways impact our lives,
and we have no say.
We don't get to say anything, do anything about it.
Maybe there's a helpline you can call,
but actually the best thing you could possibly do
is try to go viral on social media
so that somebody will pay attention.
That is a absolutely broken feedback loop.
But to start with, we don't even have a paradigm for this.
We have never had technology in this way
where it is so invasive, but is a one-way street.
And they're taking the data from us.
Right, right.
And then vomiting it back at us and charging us.
But they're selling it back to us, and they're taking the data from us, and then vomiting it back at us and charging us. But they're selling it back to us,
and they're taking things that we have made
without compensating us,
and then they're gonna have the gall
to sell it back to us based on our information.
Right, so the right to vote would be the ability,
legislative, to do that, correct?
In a sense, yeah.
Because they're not gonna do it
out of the spirit of their own hearts.
It certainly would not come from anybody out of the spirit of their own hearts. No, yeah.
It certainly would not come from anybody out of the goodness of their hearts doing it.
It would have to be legislated.
There would have to be protections.
What I work on is creating the third party community of people who assess algorithms.
Right now, there's a lot of push for government regulation, but, you know, cynical me thinks
that moving from one incredibly powerful centralized entity
to another incredibly powerful centralized entity maybe isn't the way to go and that maybe there's
an ecosystem and there could and should be third-party people who help you do these things.
So, you know, you think about all of these like small shops that can do things like create
plugins for things, right? So what if there were just people out there who created little tools,
bots, little ways to help people fine-tune things for themselves rather than, again, for things, right? So what if there were just people out there who created little tools,
bots, little ways to help people fine tune things for themselves rather than, again,
the power entirely being in a company's hands or entirely being in the government's hands.
We don't have anything like that. A starting point would be some sort of protections for
people who are ethical hackers, essentially.
Okay. So we're on our own, in other words, is what you're saying.
Currently.
Currently, and always.
We'll be back in a minute.
Support for On with Kara Swisher comes from Elf Beauty.
One of the most tried and true self-care rituals out there is getting all done up and
listening to great music while you do.
In fact, according to data from Elf Beauty, 92% of women said listening to music while
getting ready boosts their mood.
And now you can listen to a special album by Elf.
Get Ready With Music, the album is a collection of inspiring songs that bridge the worlds
of beauty and music.
The album features 13 original songs by emerging global artists and brings together authentic
artistry and storytelling in a unique and transformative way.
Because every eye, lip and face has a unique story to tell and it becomes even richer with
a soundtrack.
The album comes from ELF Beauty's new entertainment arm, ELF Made.
Just like how ELF disrupted the beauty industry, that's their goal with entertainment via Elf Made,
showing up in unexpected places to connect with you.
You can enjoy Get Ready With Music,
the album on Spotify, Apple Music,
iHeart, YouTube, and TikTok.
Why do so many of us get happiness wrong,
and how can we start to get it right?
I think we assume that happiness is about positive emotion on all the time,
often very high arousal positive emotion,
but that's not really what we're talking about.
I'm Preet Bharara, and this week,
Dr. Laurie Santos joins me on my podcast,
Stay Tuned with Preet, to discuss the science behind happiness.
We explore job crafting, the parenting paradox,
the arrival fallacy, and why acts of
kindness might be the simplest path to fulfillment.
The episode is out now.
Search and follow Stay Tuned with Preet,
wherever you get your podcasts.
So Mark Zuckerberg envisions a future of
more AI agents than people, but he's not the only one.
Everyone in AI is talking about autonomous agents that can pretty much do anything a
person can do online and be very helpful.
And it's a very lovely idea, like Jarvis in Iron Man.
You've seen it do this.
It's an assistant that doesn't talk back.
They figure everything out for you and make it easier.
And a lot of things are.
But, Gillian, explain how this could lead to economic chaos and talk us through
your solutions because it does change things for people and at the same time using the internet
in a lot of ways is artisanal. You kind of do it yourself. It's kind of you have to figure out
everything yourself. Yeah, I think it's really important to recognize that, you know, we're in,
I think we're in this transition from AI as a technology,
AI as a tool, right?
We're using it to make decisions,
but like the example with character AI,
and Ramon is also referring to, you know,
the fact that you could have plugins and so on.
That's new economic, social, political actors in the world.
And we actually have no structure around that at all.
So when I think, I'm an economist, I think about, okay, so when, when we imagine
these agents and companies are pouring billions into this, they're, they're, I
don't know if we're going to get there.
I don't know if they're going to be competent enough to actually do stuff.
But if they are, and that's what the billions is going into, they're engaging in transactions.
They're going, you know, bank accounts,
maybe it's cryptocurrency, maybe it's, you know, posting things.
As a very least, since you got an airplane ride,
they just call the Uber for you and charge it to you,
that kind of thing.
Right, so, but they're, but, you know, the vision is
that it can help you run your small business,
it can go out there and hire people,
it can engage in contracting.
And write your software. Right, so the question for me is, okay, so, The vision is that it can help you run your small business. It can go out there and hire people. It can engage in contracting.
Write your software.
Right.
So the question for me is, okay, so we actually,
throughout the rest of our economy,
we require a way to figure out who's taking actions
and who we can sue.
Like if you want to do business in the state of Virginia,
you're going to have to register your company with the state,
with an address and a place to say, oh, OK, actually it was that actor.
And we don't have a system like that with AI agents.
So a proposal that I'm working on
is to say we should have a registration scheme for agents.
We should be at a minimum be able to trace back
if there's a human behind it.
Who's actually taking the action,
who do I go after if they stole my IP or they put an actor out there on the internet that harmed my
kid or harmed me? We don't have any way of tracing that right now. Not the companies that make it.
They're trying to get out of that. Yeah, yeah, yeah. And the thing is, we don't allow that
anywhere else in the economy. If you want to get a job, you got to get out of that. The people who create the products don't want it. And the thing is, we don't allow that anywhere else in the economy.
If you want to get a job, you've got to show your work authorization.
If you want to start a company, you've got to incorporate it and register.
We have, you want to drive a car, you have to put a license plate on it.
You want to operate a hot dog stand.
You have to get a permit.
We have structures.
You want to make chicken, it has to not kill you, right?
I mean, it does.
In order to make the laws governing you can't sell chicken that kills you active,
we have to know who sold it to you.
Right.
And what we have an absence of is any of that kind
of what I call infrastructure.
Once we figure out how to regulate,
you actually have to be able to have a capacity
to say that's the entity that caused the harm.
And what entity do you imagine that being?
Well, the thing is, if we're introducing these new entities, and they are agents, they are autonomous, artificial agents,
and they're writing you emails, and they're designing products,
and they're entering into contracts,
and they're engaging in crypto transactions,
that agent is now a participant, like the employee,
like the corporation.
So we need a system to hook those actors into our accountability regimes.
Is there one since they don't exist?
No, there's nothing.
No, but well, no, you can create it.
The corporations don't exist either.
We created a, we created the corporation.
It's a fictional thing, but it has a legal personality.
It can sue and be sued.
And we created that.
We created that in order to allow.
So who do you imagine that is?
Who gets sued if there is a problem?
The people who created it in the first place or the people that are using it?
Well, you have two choices.
You have two choices to make on that.
We could, we don't have law right now that says if you send out that agent and it's you.
So you need to at least get that in place.
That's one option to say,
you've got to be able to trace it back, right?
But maybe you create a scheme
where the agent itself has liability
and you say there must be assets attached to that agent
and there must be the capacity to switch that agent off
to say you can't actually do business anymore.
We've discovered a really critical flaw.
I mean, there's a lot of conversation about what do you need as an off switch in order
to stop systems.
Well, one of the things is you can use some legal tools for that to say, well, you can't
participate in transactions.
Because you've done this, this particular group of agents.
Yeah, you haven't passed our test.
Right.
They turn themselves back on, just so you know.
That's how the story ends. Yeah, but they have passed our test. Right, right. They turn themselves back on, just so you know. That's how the story ends.
Yeah, but they have to give up their assets.
Do they?
Do they?
Yeah, okay.
Now you're going into that world.
So I want to move on to A policy.
So we've done a lot of talking about the negative and worrisome aspects, of which there are
many, which is just sort of a Wild West situation.
It's only right to ask about the positive ones.
So you obviously tremendous a tremendous potential for good here.
Tell me what excites you the most.
Let's start with Mark and then Ruman and then Jillian.
About AI policy?
Yeah.
I'm not excited about a lot right now
because there's things happening in Washington.
I feel like we're going maybe in the wrong direction.
I think the...
The word you're looking for is cacostocracy,
but go ahead, look it up.
I think one of the, okay,
so one of the things that has been exciting to me
is that the federal government has been investing
in expertise in the government around AI.
And I think people sometimes don't understand
how large the role the federal government plays
in progress in our society,
especially technological progress.
That the work that the government invests in today
is going to shape technology for 20 years.
And we saw this, when I was a PhD student,
I was funded on a large DARPA project
and we did a lot of stuff, it was great, but the end of the DARPA project, some of the people on it started a PhD student, I was funded on a large DARPA project. And we did a lot of stuff.
It was great.
But at the end of the DARPA project, some of the people on it started a self-company.
They built that technology.
They eventually sold it to Apple.
It was called Siri.
That emerged not because of Apple.
Apple's done a lot of great work.
But Apple bought that company because it was a started with an investment from DARPA.
That is the trajectory that the government can enforce.
So I've been really excited to hear and see that the government is bringing in-house expertise to kind of make these
decisions, both in terms of government acquisitions, investments and such. That's positive. I don't
know if that's going to continue. And that worries me quite a bit.
Yeah. Okay. You went right to work.
I'm trying to be positive.
All right. Okay. Didn't work. So is it about policy or just AI in general?
Is there a thing that you go, okay, this is? Well, maybe this is a little bit nihilistic, but I think many of our institutions were
already broken and AI just pushes it to the limit, right?
So for example, when we talk about, oh, chat GPT or kids going to learn anymore, the problem
actually wasn't that chat GPT has made the education system untenable.
You know, and I say this as somebody who's had way too much education and taught a lot.
It's that kids leave college and their job has nothing to do with what they studied.
Right.
And this has been true for a very long time.
Chat GPT did not break the college system.
The college system was broken before.
Right.
So a lot of these things that we're talking about,
whether it's economic inequality, et cetera,
were already issues that in a sense we were kind of ignoring
because we were limping along.
And for better or for worse, AI has kind of accelerated
a lot of these things almost to like absurdist,
you know, the absurdist extent,
i.e. all children now have a tool that helps them
write an essay that could reasonably
pass like an eighth grade English paper in about five minutes, less than five minutes.
So it's like pushed it to like this absurd.
And then what you have to ask is, what was the purpose of writing an essay?
Now, let's ban chat GPT from schools, right?
Well, the purpose of writing an essay was to teach children to synthesize information,
etc. Well, great. Well, maybe that's writing an essay was to teach children to synthesize information, et cetera.
Well, great.
Well, maybe that's the part that's broken.
So it's like pushing us to reimagine a lot of our institutions, which were actually built
in the previous Industrial Revolution for the needs of the previous...
So why were they doing this in the first place?
Exactly.
So like we made all this stuff over a hundred years ago for a world that does not look like
the world does today.
I haven't thought of that.
I mean, I do that oddly enough by myself when my kids were doing, my older kids were doing
essays.
And I was like, don't write that.
It's pointless.
Like, this is pre-Chat GPT.
And then it would get back to the team.
And my kids would go, my mom says it's pointless.
And my teachers would call me and they go, why are you telling your kids it's pointless?
Because it is.
It's pointless.
And I said, I don't think they should write it at all.
I told them not to do their homework.
I don't care.
But because it wasn't useful, I was like, do team building, do this, do that.
I wasn't very popular with the teachers, but that's fine.
I was accurate.
So the idea is that if it's already broken, it'll push us to say kids should learn critical
thinking, you don't have to write these dumb essays, because they figure out the game,
you're saying.
I used to teach SAT prep for years when I was a broke grad student, and that's what
you teach when you teach SATs.
You sadly don't teach them the content, you teach them the tricks.
You teach them how to write an essay that's going to give them good scores.
You teach them all the math tricks.
You don't actually teach them the content.
So if I can teach in a six week to eight week class
what it would take for someone to increase their score
on some arbitrary exam by a couple of hundred points,
the problem is the exam.
Right, so Jillian.
So I'll go with the, is there any reason to be optimistic
about what's happening in AI policy?
And I think there is.
I mean, I've been thinking about AI safety issues
for about eight years and thinking about the world of, oh, let's imagine we're in that world with artificial
general intelligence and autonomous agents. And until the release of CHAT GPT, there was maybe
a hundred people who wanted to talk about that in the world. And the thing that I, I mean, chat GPT world tilted on its
axis.
And one of those dimensions was with respect
to the attention to policy from governments
kind of across the board.
So I think that's been a positive thing.
It's been driven by some, by fear.
It's driven by conversations that Kevin Ruse had with chat
GPT.
I actually do think that raised the profile.
We're seeing a lot that's actually, I would never have predicted,
just two years ago, that we would see this much stuff coming out of governments attention to it.
We still haven't actually done very much.
Yes, yes, I've noticed that.
But we're having a lot more conversation about it.
Definitely, and earlier, and earlier.
And I do think, sort of picking up on Ruman's point,
so I've been thinking about how our legal and regulatory structures
have been broken for a number of decades,
and I've been thinking about it for a number of decades.
And we needed to fix that for lots of reasons,
like our access to justice, our regulations are too expensive,
law takes too long, litigations too expensive,
all those things that are actually really important
for productive economies.
And so I do think we are a little closer to that world
where we can be a lot more innovative
about the ways in which we regulate.
We can't continue to do the way we do.
Like we can't just say, oh, let's get Congress
to write a bill, let's get Congress
to write a bill, let's get state legislatures
to enact stuff, let's put it through the courts.
Let's have the woman with the son who committed suicide,
sue and take that through the courts.
I can tell you, that's not gonna be a satisfying process.
It's not gonna be a good solution.
We need new ways of doing that. We need to be a satisfying process. It's not going to be a good solution.
We need new ways of doing that.
We need to be as innovative about those regulatory methods as we are with all that technology.
I think we're a little closer to there.
So speaking of that, Peter Thiel, who I never agree with almost for 30 years now on a number
of issues, recently said, and something I actually agreed with him on, that our AI obsession
is masking stagnation
in other fields of innovation.
I was like, well, he's right.
That's right.
The obsession of money being spent, the over-money.
Mark, is that a problem that we're focused on
all the money shifted really quickly to this
and all of it's going there now?
And that's where everybody goes
and everything else gets ignored, presumably.
As someone who unbelievably benefits
from all the attention, it's too much.
Yeah, good for you.
It's absolutely too much.
Yeah. Right.
And people turn, you know,
I hear AI is a solution to many problems.
AI is a solution to our spending in healthcare.
AI is a solution to, you know, whatever.
AI is not the solution to all problems.
And there's too much focused on the technology,
certainly not enough focused on the applications and the use
and thinking about the environments put in.
And we are ignoring a lot of other technologies
that we should be investing in.
I love the attention, but it's too much.
Let's still finish the interview, but like...
Too much.
I actually don't love the attention
because I think it brings sometimes the worst kind of people
to this field.
And it's so hard to separate hype from reality
that we actually can't get anything productive done.
Nobody wants to have a long-term conversation
about creating more robust legal systems
or better medical systems because
boring there's a new like shiny toy being dangled every 30 seconds in front of our faces.
I actually long for the days where you know the idea of AI governance was very boring
because then the only people in the room were the people who actually cared about it and
we could have real conversations and try to get stuff done and now it's like somebody
like spent five minutes on an LLM and suddenly they
show up in the room as an expert.
And then you got to start from, you know, from like, from like level negative one
to get everybody back up to speed.
It's, it's tough.
It makes it harder.
And like, yeah, that's odd.
I feel kind of wrong that I'm in agreement with Peter Thiel on anything.
Right.
So I'm going to have to sit with that this evening.
Okay. All right. you sit with that.
He's correct, though.
He is.
Yeah, so what's the claim?
That we're thinking too much about AI and not our other problems?
Other problems.
Yeah, I think, the thing is, if you think about where we are...
I mean, obviously, he'd like to focus more on destroying democracy,
but go ahead, so I'm teasing.
Yeah, yeah.
Again, AI is going to impact, I believe, the way we do just about everything.
And that means it obviously is going to interact with all the things that don't work for it.
Well, it's going to exacerbate a bunch of those things as well.
So if we haven't figured out, you know, inequality, if we haven't figured out how to, you know, manage the fact that we have very, very big corporations producing this stuff, if we haven't figured out health care, if we haven't figured out access to justice, yeah, it's going to exacerbate all those things. But I don't know that I would say we should just stop thinking about AI and
focus back on these other things because I think we need to be thinking about them in the context
of AI. Because it has, it's sort of like the, a little like the World Wide Web, right? It affected
everything. Everything. Right. When the internet first started, I was at the Washington Post and
someone asked me like, what is it? And I said, it's everything. And it's hard to explain.
They're like, go away.
I was like, it's everything.
Yeah.
Yeah.
I don't know what everything is.
And it has changed everything.
That's right.
We'll be back in a minute.
So Mark, you said that when it comes to air regulation,
quote, there's so little understanding
of what the concerns are and what should be regulated.
How would you advise academics, like, advise regulators to understand AI?
There's a trope that regulators do not understand tech.
That is not true.
That is somewhat true with some of them.
But in general, they regulate everything else, and sometimes badly, sometimes well.
So do you think we need more public funding
for universities to become relevant players in AR?
Because they are not at this point.
You understand that.
Because they were in every other technological revolution.
Yeah, you're asking me should we give more money
to Johns Hopkins?
Yes.
No, but I think in seriousness.
I said should we give more money to Harvard, for example? Sorry.
I'm going to change my answer to no. No, but in seriousness, you
know, I see, and you've had this experience when they call
people to Washington to testify for Congress experts. They have
very different goals, the people testifying,, you've spoken about this very well,
than the people that are interviewing them.
And academics play a unique role
where we really can sit in the middle and say like,
look, we don't work for the companies,
we don't work for the government,
but we study this technology and we really are experts
and we can say things.
So absolutely we need to be cultivating
exactly buildings like this
that are bastions of higher learning
of people who are experts in this technology
right here in Washington to be that kind of policy audience.
I think that's absolutely necessary.
I think we also need those people, people like me,
I'm a computer scientist, to interact with the people
who sit next to me here on the stage
who understand these systems in ways that I don't,
who understand the regulatory environment,
what it means to regulate things, right?
And people like that can build the bridges to say, in ways that I don't who understand the regulatory environment, to what it means to regulate things, right?
And people like that can build the bridges to say,
okay, it's great that you published a paper at, you know,
NURPS last week about this fancy algorithm or this math,
but this is what regulation actually looks like.
How can we connect the dots?
How can we actually take your insights
and get them to apply to what regulation really looks like
and then speak to the regulators to help them understand
what's possible, what's not possible, what should we do?
Absolutely.
So, Ruman, you've called for a global governance body
that could set up all kinds of things,
including forcing corporations to slow down
AI implementation if it leads to too many job losses too fast.
That train seems to have left the station, I suspect.
Sam Altman's called for an international agency.
Your idea is much more
ambitious. Talk about that. This is something I've always said there should be. This is like nuclear
power. This is like cloning. Can you imagine an international regulatory agency with power over
the richest corporations in the world? I have not seen that happen. I think it would be deeply
fascinating, but I'll tell you something really interesting. I wrote that op-ed it was what April of
2023 there was not a single international entity and now we are
basically swimming in them. You know there are AI safety institutes that have
been set up in 111 countries there's an international consortium of AI safety
institutes the UN has a body OECD has a body I mean I could just keep going and
going and going and you know Jillian and I were joking earlier
how we just all fly around the world
and see each other in different locations.
But it's true.
There is actually a global governance community.
And I can count amongst my friends and colleagues
that I work on this with people who are in Australia,
in Japan, in Singapore, in Nigeria, in France.
I mean, the next big AI safety summits in
Paris, the one before that was in Seoul, the first one was in Bletchley.
So there actually is a global conversation.
I wouldn't be surprised if we started to see, and as a political scientist, I find statecraft
incredibly fascinating, you know, and just sort of nerding out for a second.
It is one of the most fascinating times to be alive, to see this sort of global conversation truly beginning
on any sort of a governance that could look more meaningful.
I mean, it doesn't mean it's going to absolutely happen.
We may end up in a rather toothless model, but we could,
I think there's enough people pushing
for something novel and something new.
The two examples I give in that article
were actually IAEA, which I think is a really interesting
model, as well as a Facebook oversight board.
So we technically do have a global governance entity.
It is a group that Facebook had set up for themselves.
So it's possible.
Now, Julian, you held a lot of international dialogues around AI alignment and global competition.
One of the things that every tech CEO I've had, including Mark Zuckerberg and others,
have talked about whether they point to competition with China to argue against any regulation.
I call it the Xi or Me argument, and I'm like, I'd like a third choice if you don't mind.
I don't like any of you, and I don't like him.
So how do you do regulation?
Slowing down AI is that that's their argument.
China obviously has to be part of this global conversation.
What do you imagine that is there an ability to do cooperation with China and come to a
global decision making on these things?
Yeah, I think we have to and I think you can use things like
thinking about the World Trade Organization as an infrastructure
that we have that actually does implement rules about, you know,
what you need to demonstrate in order to participate in global trade.
We've had these international dialogues for AI safety,
lots of Chinese participants in them.
Seems to be a lot of interest, certainly in the academic community.
These are predominantly academics.
Lots of interest.
I mean, it is, it's something that affects everyone.
It's going to change the way the world works.
We're going to have to put together those kinds of structures.
I think there is a lot of shared interest in doing that.
I think what it requires, however, is that we be thinking about the structures we're
put in place.
Like you can't just be talking about like what are the standards?
What's it allowed to say?
What's the model allowed to say?
But you got to put some like registration, you got to put registration in place, you
have to put the capacity to say and demonstrate, oh, if you haven't passed these tests, like
the US can put its own requirements in place and say, and those models can't be sold in
our markets if they don't pass those tests.
I think there's actually shared capacity for that, particularly if you start with the things
in which everybody agrees.
So when we had our meeting in Beijing last year,
there's agreement.
Okay, we need to make sure that these are not self-improving systems.
We need to make sure that we have red lines when we would know they're getting close.
Killer drones, no.
Yeah.
That we could just...
Just New Jersey.
We have the capacity.
And so use those things on which there's going to be widespread agreement
to build a structure.
They can do that.
We have to go faster because China seems to be a pretty reductive argument in that regard.
Yeah, I mean, here's what I think is really wrong with that argument.
The idea that building regulatory infrastructure is going to slow it down and we should not do that.
It's not the way that any part of the economy works, actually, having basic play.
Jillian, they're special. I don't know if you know that. So last question, we have a
long way to any AI regulation. The Biden administration issued an executive order on AI. Donald Trump's
going to repeal it. Trump has Elon Musk as his best friend apparently, which is a lovely thing to see.
Elon's stance on AI regulation is unclear.
He has different ones.
He changes quite a bit.
He signed a letter for a six-month pause in AI development, and then he funded his company.
It's going like gangbusters.
I know it seems hypocritical, but that's what he did.
He supported a controversial California bill to regulate AI.
He's not an uninterested party, right?
And he is sitting right next to President Trump at this point, especially on these issues.
And if it's not him, it's one of his many minions.
What do you expect from the Trump administration?
What worries you?
What gives you hope?
Very quickly, each of you, we just have a minute and a half.
Carrie, are you asking me to opine about Elon Musk knowing that I had worked at Twitter?
No.
A minute and a half will definitely not be long enough.
Yeah, okay.
What am I...
Not a fan. I got that.
You know, I don't think...
Me either.
His idea of ethics and my idea of ethics are the same.
Yeah.
I don't even know how to answer your question.
What is your greatest hope that would happen and your greatest worry?
My greatest hope is that things don't get much worse than they are.
That's probably the best we can hope for is status quo.
What may happen is a lot of the headway, a lot of the positive things we were talking
about are getting rolled back.
I mean, specifically the EO has been on the chopping block.
But also we're're gonna have to worry
about programs like, you know, programs at NIST.
For example, any sort of scientific body
that's been doing test and evaluation.
I also worry a bit about the brain drain
that's gonna happen.
I think there are a lot of people who sat through
one Trump administration and, you know,
are very dismayed that the narrative this time around
seemed to be, oh, well, it wasn't so bad last time.
And they're like, do you know what it cost me
so that it wouldn't be that bad?
And now they're leaving.
So what's gonna happen when all of the amazing people,
and you're totally correct,
that all these amazing people are brought in
and they're like, I'm just not gonna sit here
for another four years.
So that's your worry.
Go ahead.
I expect inconsistency and uncertainty
for exactly the reasons you said about Elon Musk.
I don't know what's gonna happen. And there's a lot of, I can make arguments in both directions. What's the good? I respect inconsistency and uncertainty for exactly the reasons you said about Elon Musk.
I don't know what's gonna happen.
And there's a lot of,
I can make arguments in both directions.
What's the good?
So there are a lot of people not in government
doing great work.
Some of them are saying next to me on this stage
and that work is going to continue
and to continue to lay the foundation
for whenever the government is ready to take action,
there will be research there to support it.
And so I don't think the external research to the government is going to stop.
I think I'm certainly, and I don't know about my colleagues, are going to feel pressure
to do more and be more involved because it's now not happening in the government.
And we'll see what happens in four years.
Can we get a countdown clock for four years?
Yeah, I think the stuff that's driven by national security is going to continue.
So I think those pieces of executive order either might get redone in different ways.
I think that's going to continue.
I think it's very hard to predict whose ear anybody's going to be listening to.
I think the China component is going to be very important.
I think there's, you know, we just had a bipartisan AI task force report come out that said we
got to invest in developing the science of valuations.
And I think that's important.
I think we've got bipartisan efforts.
So I actually think there may be things that continue.
I don't expect at all to, it's too big.
It's too big for it to just, we're going to ignore it and quote unquote, not regulate
it.
Right.
I see. So it's too big to fuck up.
Too big to fail in any case.
Thank you so much, all three of you.
Great discussion and we'll see what happens.
We'll check in here in four years.
Okay?
We'll talk about that and see what happens.
Thank you.
Thank you so much.
Thank you. On with Carous Wishers produced by Christian Castro-Roselle, Kateri Yocum, Jolie Meyers,
Megan Hunein, Megan Burney, and Kaylin Lynch.
Nishat Kurwa is Vox Media's executive producer of audio.
Special thanks to Claire Hyman.
Our engineers are Rick Kwan, Fernando Arruda, and Aliyah Jackson.
And our theme music is by Trackademics.
If you're already following the
show, feel free to skip out on any useless homework and all of it is useless, so skip out.
If not, 500 words due tomorrow on whatever. Go wherever you listen to podcasts, search for
On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York
Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.