TED Talks Daily - Sunday Pick: What really went down at OpenAI and the future of regulation w/ Helen Toner
Episode Date: June 2, 2024. Each Sunday, TED shares an episode of another podcast we think you'll love, handpicked for you… by us. Today we're sharing an episode from our brand new podcast, The TED AI Show. Each week, creative technologist and former TED speaker Bilawal Sidhu sits down with the world's brightest minds to chat about the technology that might change everything -- and the technology that's just hype. If there's one AI company that's made a splash in mainstream vernacular, it's OpenAI, the company behind ChatGPT. Former board member, TED2024 speaker, and AI policy expert Helen Toner joins Bilawal to discuss the existing knowledge gaps and conflicting interests between those who are in charge of making the latest technology – and those who create our policies at the government level. For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts. You can get more of The TED AI Show wherever you're listening to this.
Transcript
TED Audio Collective.
Hey, TED Talks Daily listeners, I'm Elise Hu.
Today we have an episode of another podcast from the TED Audio Collective,
handpicked by us for you.
Last year, OpenAI's firing and rehiring of Sam Altman made waves in the news.
But what does that incident tell us
about the business of artificial intelligence?
This week, we're featuring an episode of the TED AI Show,
TED's newest podcast.
Listen as host Bilawal Sidhu sits down
with former OpenAI board member, AI policy expert,
and recent TED speaker, Helen Toner. They discuss how tech
leaders think and the role government regulations can play in managing this emerging field. If you
want to hear more fascinating AI conversations, you can find the TED AI Show wherever you get
your podcasts. Learn more about the TED Audio Collective at audiocollective.ted.com. And now on to the episode right after a quick break.
Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs when
I travel. They make my family feel most at home when we're away from home. As we settled down at
our Airbnb during a recent vacation to Palm Springs, I pictured my own home sitting empty.
Wouldn't it be smart and better put to use
welcoming a family like mine by hosting it on Airbnb?
It feels like the practical thing to do,
and with the extra income,
I could save up for renovations
to make the space even more inviting
for ourselves and for future guests.
Your home might be worth more than you think.
Find out how much at airbnb.ca slash host.
AI keeping you up at night? Wondering what it means for your business?
Don't miss the latest season of Disruptors, the podcast that takes a closer look at the
innovations reshaping our economy. Join RBC's John Stackhouse and Sonia Sennik from Creative Destruction Lab. Follow Disruptors on Apple Podcasts, Spotify, or your favorite podcast platform. Hey, Bilawal here. In my interview with Helen, she reveals for the first time what really went down at OpenAI late last year when the CEO, Sam Altman, was fired. And she makes some pretty serious criticisms
of him. We've reached out to Sam for comments, and if he responds, we'll include that update
at the end of the episode. But first, let's get to the show.
I'm Bilawal Sidhu, and this is the TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
The OpenAI saga is still unfolding, so let's get up to speed.
In case you missed it, on a Friday in November 2023, the board of directors at OpenAI fired Sam Altman.
This ouster remained a top news item over that weekend, with the board saying that he hadn't
been, quote, consistently candid in his communications, unquote. The Monday after,
Microsoft announced that they had hired Sam to head up their AI department. Many OpenAI employees
rallied behind Sam and threatened to join him. Meanwhile, OpenAI announced an interim CEO.
And then a day later, plot twist, Sam was rehired at OpenAI.
Several of the board members were removed or resigned and replaced.
Since then, there's been a steady fallout.
On May 15th, 2024, just last week as of recording this episode,
OpenAI's chief scientist, Ilya Sutskever,
formally resigned. Not only was Ilya a member of the board that fired Sam, he was also part of the
superalignment team, which focuses on mitigating the long-term risks of AI. With the departure of
another executive, Jan Leike, many of the original safety-conscious folks in leadership positions have either departed OpenAI or moved on to other teams. So, what's going on here? Well, OpenAI started as a
non-profit in 2015, self-described as an artificial intelligence research company. They had one
mission, to create AI for the good of humanity. They wanted to approach AI responsibly, to study the risks up close,
and to figure out how to minimize them.
This was going to be the company that showed us AI done right.
Fast forward to November 17, 2023, the day Sam was fired,
OpenAI looked a bit different.
They'd released DALL-E, and ChatGPT was taking the world by storm.
With hefty investments from Microsoft, it now seemed that OpenAI was in something of a tech
arms race with Google. The release of ChatGPT prompted Google to scramble and release their
own chatbot, Bard. Over time, OpenAI became closed AI. Starting in 2020, with the release of GPT-3, OpenAI stopped sharing their
code. And I'm not saying that was a mistake. There are good reasons for keeping your code private.
But OpenAI somehow changed, drifting away from a mission-minded nonprofit with altruistic goals
to a run-of-the-mill tech company shipping new products at an astronomical pace.
This trajectory shows you just how powerful
economic incentives can be. There is a lot of money to be made in AI right now. But it's also
crucial that profit isn't the only factor driving decision making. Artificial General Intelligence,
or AGI, has the potential to be very, very disruptive. And that's where Helen Toner comes in.
Less than two weeks after OpenAI fired and rehired Sam Altman,
Helen Toner resigned from the board.
She was one of the board members
who had voted to remove him.
And at the time, she couldn't say much.
There was an internal investigation still ongoing
and she was advised to keep mum.
And oh man, she got so much flak for all of this.
Looking at the news coverage and the tweets, I got the impression she was this techno pessimist
who was standing in the way of progress or a kind of maniacal power seeker using safety policy as
her cudgel. But then I met Helen at this year's TED conference, and I got to hear her side of the story.
And it made me think a lot about the difference
between governance and regulation.
To me, the OpenAI saga is all about AI board governance
and incentives being misaligned
among some really smart people.
It also shows us why trusting tech companies
to govern themselves may not always go beautifully,
which is why we need
external rules and regulations. It's a balance. Helen's been thinking and writing about AI policy
for about seven years. She's the director of strategy at CSET, the Center for Security and
Emerging Technology at Georgetown, where she works with policymakers in D.C. on all sorts of AI issues.
Welcome to the show.
Hey, good to be here.
So, Helen, a few weeks back at TED in Vancouver, I got the short version of what happened at
OpenAI last year.
I'm wondering, can you give us the long version?
As a quick refresher on sort of the context here, the OpenAI board was not a normal board.
It's not a normal company.
The board is a
nonprofit board that was set up explicitly for the purpose of making sure that the company's,
you know, public good mission was primary, was coming first over profits, investor interests,
and other things. But for years, Sam had made it really difficult for the board to actually do
that job by, you know, withholding information, misrepresenting things that were happening at the
company, in some cases outright lying to the board. You know, at this point, everyone always says,
like what? Give me some examples. And I can't share all the examples, but to give a sense of
the kind of thing that I'm talking about, it's things like, you know, when ChatGPT came out,
November 2022, the board was not informed in advance about that. We learned about ChatGPT
on Twitter. Sam didn't inform the board that he owned the OpenAI startup fund, even though he,
you know, constantly was claiming to be an independent board member with no financial
interest in the company. On multiple occasions, he gave us inaccurate information about the small
number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.
And then, you know, a last example that I can share because it's been very widely reported relates to this paper that I wrote, which has been, you know, I think way overplayed in the press. For listeners who didn't follow this in the press,
Helen had co-written a research paper last fall intended for policymakers.
I'm not going to get into the details,
but what you need to know is that Sam Altman wasn't happy about it.
It seemed like Helen's paper was critical of OpenAI
and more positive about one of their competitors, Anthropic.
It was also published right when the Federal Trade Commission
was investigating OpenAI about the data used
to build its generative AI products.
Essentially, OpenAI was getting a lot of heat and scrutiny all at once.
The way that played into what happened in November is pretty simple.
It had nothing to do with the substance of this paper.
The problem was that after the paper came out,
Sam started lying to other board members in order to try and push me off the board.
So it was another example that just like really damaged our ability to trust him.
And that actually only happened in late October last year, when we were already talking pretty seriously about whether we needed to fire him.
And so, you know, there's kind of more individual examples. And for any individual case, Sam could always come up with some kind of like innocuous sounding explanation of why it wasn't a big deal or misinterpreted or whatever.
But the, you know, the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. And that's a completely unworkable place to be in
as a board, especially a board that is supposed to be providing independent oversight over the
company, not just like, you know, helping the CEO to raise more money. You know, not trusting the
word of the CEO, who is your main conduit to the company, your main source of information about the
company, it's just totally, totally impossible. So that was kind of the background, the state of affairs, coming into last fall.
And we had been, you know, working at the board level as best we could to set up better structures, processes,
all that kind of thing to try and, you know, improve these issues that we had been having at the board level.
But then, mostly in October of last year, we had this series of conversations with
these executives where the two of them suddenly started telling us about their own experiences
with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't
trust him, about the toxic atmosphere he was creating. They used the phrase psychological
abuse, telling us they
didn't think he was the right person to lead the company to AGI, telling us they had no belief that
he could or would change, no point in giving him feedback, no point in trying to work through these
issues. I mean, you know, they've since tried to kind of minimize what they told us, but these
were not like casual conversations. They were really serious to the
point where they actually sent us screenshots and documentation of some of the instances they were
telling us about of him lying and being manipulative in different situations. So, you know, this was a
huge deal. This was a lot. And we talked it all over very intensively over the course of several weeks and ultimately just came to the conclusion that the best thing for the mission would be to fire Sam. But we knew that if he got any inkling of that, he would do everything in his power to prevent us from, you know, even getting to the point of being able to fire him.
So, you know, we were very careful, very deliberate about who we told, which was essentially almost no one in advance other than, you know, obviously our legal team.
And so that's kind of what took us to November 17th.
Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff
supporting his return. What exactly happened there? Why was there so much pressure to bring him back?
Yeah, this is obviously the elephant in the room. And unfortunately, I think there's been
a lot of misreporting on this. I think there were three big things going on that helped make sense
of kind of what happened here. The first is that really pretty early on, the way the situation was
being portrayed to people inside the company was you have two options. Either Sam comes back
immediately with no accountability, you know, totally new board of his choosing, or the company
will be destroyed. And, you know, those weren't
actually the only two options. And the outcome that we eventually landed on was neither of those
two options. But I get why, you know, not wanting the company to be destroyed,
got a lot of people to fall in line, whether because they were, in some cases, about to make
a lot of money from this upcoming tender offer, or just because they love their
team, they didn't want to lose their job, they cared about the work they were doing. And of
course, a lot of people didn't want the company to fall apart, you know, us included. The second
thing I think it's really important to know that has really gone underreported is how scared people
are to go against Sam. They had experienced him retaliating against people,
retaliating against them for past instances of being critical.
They were really afraid of what might happen to them.
So when some employees started to say,
you know, wait, I don't want the company to fall apart.
Like, let's bring back Sam.
It was very hard for those people who had had terrible experiences
to actually say that for fear that,
you know, if Sam did stay in power as he ultimately did, you know, that would make their lives
miserable. And I guess the last thing I would say about this is that this actually isn't a new
problem for Sam. And if you look at some of the reporting that has come out since November,
it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time.
And then, you know, his job before that, which was his only other job in Silicon Valley, his startup, Loopt.
Apparently, the management team went to the board there twice and asked the board to fire him for what they called, you know, deceptive and chaotic behavior.
If you actually look at his track record, he doesn't exactly have a glowing trail of
references.
This wasn't a problem specific to the personalities on the board as much as he would love to kind
of portray it that way.
So I had to ask you about that, but this actually does tie into what we're going to talk about
today.
OpenAI is an example of a company that started off trying to do good,
but now it's moved on to a for-profit model. And it's really racing to the front of this AI game,
along with all of these ethical issues that are raised in the wake of this progress.
And you could argue that the OpenAI saga shows that trying to do good and regulating yourself isn't enough. So let's talk about why we need regulations. Great. Let's do it. So from my
perspective, AI went from the sci-fi thing that seemed far away to something that's pretty much
everywhere and regulators are suddenly trying to catch up. But I think for some people, it might
not be obvious why exactly we need regulations at all. Like for the average person, it might seem
like, oh, we just have these cool new tools like DALL-E and ChatGPT that do these amazing things.
What exactly are we worried about in concrete terms?
There's very basic stuff for very basic forms of the technology.
Like if people are using it to decide who gets a loan, to decide who gets parole, you know, to decide who gets to buy a house.
Like you need that technology to work well. If that technology is going to be discriminatory, which AI often is, it turns out, you need to make sure that
people have recourse. They can go back and say, hey, why was this decision made? If we're talking
AI being used in the military, that's a whole other kettle of fish. And it's not, I don't know
if we would say, like, regulation for that, but you certainly need to have guidance, rules, processes in place.
And then kind of looking forward and thinking about more advanced AI systems, I think there's a pretty wide range of potential harms that we could well see if AI keeps getting increasingly sophisticated.
You know, letting every little script kiddie in their parents' basement have the hacking capabilities of, you know, a crack NSA cell.
Like, that's a problem. I think something that really makes AI hard for regulators to think about is that it is
so many different things, and plenty of the things don't need regulation. Like, I don't know that
how Spotify decides how to make your playlist, the AI that they use for that. Like, I'm happy
for Spotify to just pick whatever songs they want for me, and if they get it wrong, you know,
who cares? But for many, many other use cases, you want to have at least some kind of
basic common sense guardrails around it. I want to talk about a few specific examples
that we might want to worry about, not in some battle space overseas, but at home in our day-to-day
lives. You know, let's talk about surveillance. AI has gotten really good at perception,
essentially understanding the contents of images, video and audio.
And we've got a growing number of surveillance cameras in public and private spaces. And now companies are infusing AI into this fleet, essentially breathing intelligence
into these otherwise dumb sensors that are almost everywhere.
Madison Square Garden in New York City is an example.
They've been using facial recognition technology to bar lawyers involved in lawsuits against their parent company, MSG Entertainment, from attending
events at their venue. This controversial practice obviously raised concerns about privacy,
due process, and potential for abuse of this technology. Can we talk about why this is
problematic? Yeah, I mean, I think this is a pretty common thing that comes up in the history
of technology is you have some, you know, some existing thing in society and then technology makes it much faster and much cheaper and much more widely available.
Like surveillance, where it goes from like, oh, it used to be the case that your neighbor could see you doing something bad and go talk to the police about it.
You know, it's one step up to go to, well, there's a camera, a CCTV camera, and the police can go back and check at any time.
And then another step up to like, oh, actually, it's just running all the time. And there's an AI facial recognition detector on there.
And maybe, you know, maybe in the future, an AI like activity detector that's also flagging,
this looks suspicious. In some ways, there's no like qualitative change in what's happened. It's
just like you could be seen doing something. But I think you do also need to grapple with the fact
that if it's much more ubiquitous, much cheaper, then the situation is different. I mean, I think with surveillance,
people immediately go to the kind of law enforcement use cases. And I think it is
really important to figure out what the right tradeoffs are between achieving sort of law
enforcement objectives and being able to catch criminals and, you know, prevent bad things from
happening, while also recognizing,
you know, the huge issues that you can get if this technology is used with overreach.
For example, you know, facial recognition works better and worse on different demographic groups.
And so if police are, as they have been in some parts of the country, going and arresting
people purely on a facial recognition match and on no other evidence, there's a story
about a woman who was eight months pregnant having contractions in a jail cell after having done absolutely nothing wrong and being arrested only on the
basis of a bad facial recognition match. So I personally don't go for, you know, this needs
to be totally banned and no one should ever use it in any way for anything. But I think you really
need to be looking at how are people using it? What happens when it goes wrong? What recourse
do people have? What kind of access to due process do they have? And then when it comes to private use, I really think we should
probably be a bit more, you know, restrictive. Like, I don't know, it just seems pretty clearly
against, I don't know, freedom of expression, freedom of movement for somewhere like Madison Square Garden to be kicking these lawyers out. I don't know. I'm not a lawyer myself,
so I don't know what exactly the state of the law around that is. But I think the sort of civil liberties and
privacy concerns there are pretty clear. I think the problem with sort of an existing
set of technology getting infused with more advanced capabilities, sort of unbeknownst to
the common population at large, is certainly a trend. And one example that shook me up is a
video went viral recently
of a security camera from a coffee shop, which showed a view of a cafe full of people and
baristas. And basically over the heads of the customers, it showed the amount of time they'd spent at the cafe. And then over the baristas, it showed how many drinks they'd made. And then, you know, so what does this mean? Like, ostensibly the business can, one, track who is staying on their
premises for how long, learn a lot about customer behavior without the customer's knowledge or consent.
And then, number two, the businesses can track how productive their workers are and could potentially fire, let's say, less productive baristas.
Let's talk about the problems and the risk here.
And, like, how is this legal?
I mean, the short version is, and this comes up again and again and again if you're doing AI policy, the U.S. has no federal privacy laws.
There are no rules on the books for how companies can use data.
The U.S. is pretty unique in terms of how few protections there are of what kinds of personal data are protected in what ways.
Efforts to make laws have just failed over and over and over again.
But there's now this sudden, stealthy new effort that people think might actually have a chance.
So who knows? Maybe this problem is on the way
to getting solved. But at the moment, it's a big, big hole for sure. And I think step one is making
people aware of this, right? Because people have, to your point, heard about online tracking. But
having that same set of analytics in, like, the physical space, in reality, it just feels like
the Rubicon has been crossed and we don't really even know that's what's happening when we walk
into whatever grocery store.
I mean, again, yeah. And again, it's about sort of the scale and the ubiquity of this.
Because, again, your favorite barista knowing that you always come in and sit there for a few hours on your laptop, because they've seen you do that a few weeks in a row, that's very different from this data being collected systematically and then sold to,
you know, data vendors all around the country and used for all kinds of other things or
outside the country. So again, I think we have these sort of intuitions based on our real world
person to person interactions that really just break down when it comes to sort of the size of
data that we're talking about here. Support for this show comes from Airbnb.
If you know me, you know I love staying in Airbnbs when I travel.
They make my family feel most at home when we're away from home.
As we settled down at our Airbnb during a recent vacation to Palm Springs,
I pictured my own home sitting empty.
Wouldn't it be smart and better put to use welcoming a family like mine by hosting it on Airbnb?
It feels like the practical thing to do.
And with the extra income, I could save up for renovations
to make the space even more inviting for ourselves and for future guests.
Your home might be worth more than you think.
Find out how much at Airbnb.ca slash host.
I also want to talk about scams.
So folks are being targeted by phone scams. They get a call from their loved ones. It sounds like their family members have been kidnapped and are being held for ransom. In reality, some bad actor just used off-the-shelf AI to scrub their social media feeds for these folks' voices. And scammers can then use this to make these very believable hoax calls where people sound like they're in distress and being held captive somewhere. So we have reporting on this particular hoax now, but what's on the
horizon? What's like keeping you up at night? I mean, I think that the obvious next step would
be with video as well. I mean, definitely if you haven't already gone and talked to your parents,
your grandparents, anyone in your life who is not super tech savvy and told them like,
you need to be on the lookout
for this, you should go do that. I talk a lot about kind of policy and what kind of government
involvement or regulation we might need for AI. I do think a lot of things we can just adapt to,
and we don't necessarily need new rules for. So I think, you know, we've been through a lot of
different waves of online scams, and I think this is the newest one, and it really sucks for the
people who get targeted by it. But I also expect that, you know, five years from now it will be something that people are pretty familiar with, and it will be a pretty small number of people who are still vulnerable to it. So I think the main thing is, yeah, be super suspicious of any voice. Definitely don't use voice recognition for, like, your bank accounts or things like that. I'm pretty sure some banks will offer that; you know, ditch that. Definitely use something more secure. And yeah,
be on the lookout for video scamming as well. And for people, you know, on video calls who look
real. I think there was recently, just the other day, a case of a guy who was on a whole conference
call where there were a bunch of different AI generated people all on the call. And he was the
only real person, got scammed out of a bunch of money. So that's coming. Totally. Content-based
authentication is on its last legs, it seems.
Definitely. It's always worth checking in with what is the baseline that we're starting with.
And I mean, so for instance, a lot of things are already public and they don't seem to get
misused. So I think a lot of people's addresses are listed publicly. We used to have little
white pages where you can look up someone's address. And that mostly didn't result in
terrible things happening. Or I even think of silly examples, like, I think it's really nice that delivery drivers know your address, or when you go to a restaurant to pick up food that you ordered, it's just there. All right. So let's talk about what we can
actually do. It's one thing to regulate businesses like cafes and restaurants.
It's another thing to rein in all the bad actors that could abuse this technology.
Can laws and regulations actually protect us?
Yeah, they definitely can.
I mean, and they already are.
Again, AI is so many different things that there's no one set of AI regulations.
There's plenty of laws and regulations that already apply to AI.
So there's a lot of concern about AI, you know, algorithmic discrimination with good reason.
But in a lot of cases, there are already laws on the
books saying you can't discriminate on the basis of race or gender or sexuality or whatever it might
be. And so in those cases, you don't even need to pass new laws or make new regulations. You just
need to make sure that the agencies in question have the staffing they need. Maybe they need to
have the exact authorities they have tweaked in terms of who are they allowed to investigate or who are they allowed to penalize or things like that.
There are already rules for things like self-driving cars.
You know, the Department of Transportation is handling that.
It makes sense for them to handle that.
For AI in banking, there's a bunch of existing systems that we have in place that are doing an okay job at handling that, but they may need, again,
more staff or slight changes to what they can do. And I think there are a few different places where
there are kind of new challenges emerging at sort of the cutting edge of AI, where you have systems
that can really do things that computers have never been able to do before, and questions about whether there should be rules around making sure that those systems are being kind of developed and deployed
responsibly. I'm particularly curious if there's something that you've come across that's really
clever or like a model for what good regulation looks like. I think this is mostly still a work
in progress. So I don't know that I've seen anything that I think really absolutely nails it.
I think a lot of the challenge that we have with AI right now
relates to how much uncertainty there is about what the technology can do, what it's going to
be able to do in five years. You know, experts disagree enormously about those questions,
which makes it really hard to make policy. So a lot of the policies that I'm most excited about
are about shedding light on those kinds of questions, giving us a better understanding of where the technology is. So some examples of that are things like the big executive order President Biden created last October, which had all kinds of things in there. One example was a requirement that
companies that are training especially advanced systems have to report certain information about
those systems to the government. And so that's a requirement where you're not saying you can't build that model, can't train that model.
You're not saying the government has to approve something.
You're really just sharing information and creating kind of more awareness and more ability to respond as the technology changes over time,
which is such a challenge for government keeping up with this fast-moving technology.
There's also been a lot of good movement towards funding, like,
the science of measuring and evaluating AI. A huge part of the challenge with figuring out what's
happening with AI is that we're really bad at actually just measuring how good is this AI
system? How, you know, how do these two AI systems compare to each other? Is one of them
sort of quote-unquote smarter? So I think there's been a lot of attention over the last year or two into funding and establishing within government better capabilities on that front. I think that's
really productive. Okay. So policymakers are definitely aware of AI if they weren't before,
and plenty of people are worried about it. They want to make sure it's safe, right?
But that's not necessarily easy to do. And you've talked about this, how it's hard to regulate AI.
So why is that? What makes it so hard? Yeah, I think there's at least three things that make
it very hard. One thing is AI is so many different things, like we've talked about.
It cuts across sectors. It has so many different use cases. It's really hard to get your arms around
what it is, what it can do, what impacts it will have. A second thing is it's a moving target. So what the technology can do is different now than
it was even two years ago, let alone five years ago, 10 years ago. And, you know, policymakers
are not good at sort of agile policymaking. They're not like software developers. And then
the third thing is no one can agree on how they're changing or how they're going to change in the
future. If you ask five experts, you know, where the technology is going, you'll get five completely different answers, often five very confident, completely different answers.
So that makes it really difficult for policymakers as well, because they can't just get a scientific consensus and, like, take that and run with it. So I think maybe this kind of third
factor is the one that I think is the biggest challenge for making policy for AI, which is that
for policymakers, it's very hard for them to tell who should they listen to, what problems should
they be worried about, and how is that going to change over time? Speaking of who you should
listen to, obviously, you know, the very large companies in this space have an incentive and
there's been a lot of talk about regulatory capture. When you ask for transparency, why would companies give a peek
under the hood of what they're building? They'll just cite it as proprietary. On the other hand,
you know, these companies might want to set up a policy and institutional framework
that is actually beneficial for them and sort of prevents any future competition.
How do you get these powerful companies to like participate and play nice?
Yeah, it's definitely very challenging for policymakers to figure out how to interact with those companies, again, because, you know, in part because they're lacking the
expertise and the time to really dig into things in depth themselves.
Like a typical Senate staffer might cover like, you know, technology issues and trade issues and veterans affairs and agriculture and education, you know,
and that's like their portfolio. So they are scrambling, like they have to, they need outside
help. So I think it's very natural that the companies do come in and play a role. And I
also think there are plenty of ways that policymakers can really mess things up if they
don't, you know, know how the technology works and they're not talking to the companies they're regulating about what's going to happen.
The challenge, of course, is how do you balance that with external voices who are going to
point out the places where the companies are actually being self-serving?
And so I think that's where it's really important that civil society has resources to also be in
these conversations. Certainly what we try to do at CSET, the organization I work at, we're
totally independent and, you know, really just trying to work in the best interest of, you know,
making good policy. The big companies obviously do need to have a seat at the table, but you would
hope that they have, you know, a seat at the table and not 99 seats out of 100 in terms of
who policymakers are talking to and listening to. There also seems to be a challenge with
enforcement, right?
You've got all these AI models already out there. A lot of them are open source. You can't really put that genie back in the bottle, nor can you really start moderating how this technology is
used without, I don't know, like going full 1984 and having a process on every single computer
monitoring what they're doing. So how do we deal with this landscape where you do have, you know, closed source and open source, like various
ways to access and build upon this technology? Yeah, I mean, I think there are a lot of
intermediate things between just total anarchy and full 1984. There's things like, you know,
Hugging Face, for example, is a very popular platform for open source AI models. So
Hugging Face in the past has delisted models that are, you know, considered to be offensive or
dangerous or whatever it might be. And that actually does meaningfully reduce kind of the
usage of those models because Hugging Face's whole deal is to make them more accessible,
easier to use, easier to find. You know, depending on the specific problem we're talking about,
there are things that, for example, you know, social media platforms can do. So if we're talking about, as you said,
child pornography or also, you know, political disinformation, things like that,
maybe you can't control that at the point of creation. But if you have the Facebooks,
the Instagrams of the world, you know, working on it, they already have methods in place for how to kind of detect
that material, suppress it, report it. And so that, you know, there are other mechanisms that
you can use. And then of course, specifically on the kind of image and audio generation side,
there are some really interesting initiatives underway, mostly being led by industry around
what gets called content provenance or content authentication, which is basically, how do you know where this piece of content came from? How do you know if it's real?
And that's a very rapidly evolving space and a lot of interesting stuff happening there.
I think there's a good amount of promise, not for perfect solutions, where we'll always know,
is this real or is it fake? But for making it easier for individuals and platforms to recognize,
okay, this is fake. It was AI generated by this particular model,
or this is real.
It was taken on this kind of camera
and we have the cryptographic signature for that.
I don't think we'll ever have perfect solutions.
And again, I think societal adaptation
is just gonna be a big part of the story.
But I do think there's pretty interesting
technical and policy options that can make a difference.
Definitely.
And even if you can't completely
control, you know, the generation of this material, there are ways to drastically cap
the distribution of it. And so like, I think that reduces some of the harms there. Yeah. At the same
time, labeling content that is synthetically generated, a bunch of platforms have started
doing that. That's exciting because like, I don't think the average consumer should be a deep fake
detection expert. Right. But really like if there could be a technology solution to this, that feels
a lot more exciting, which brings me to the future. I'm kind of curious in your mind, what's
like the dystopian scenario and the utopian scenario in all of this. Let's start with a
dystopian one. What does a world look like with inadequate or bad regulations? Paint a picture for us.
So many possibilities.
I mean, I think there are worlds that are not that different from now where you just have automated systems doing a lot of things, playing a lot of important roles in society, in some cases doing them badly, and people not having the ability to go in and question those decisions.
There's obviously this whole discourse around existential risk from AI, et cetera, et cetera. Kamala Harris had a whole speech about like, you know, if someone's, I forget the exact examples, but if
someone loses access to Medicare because of an algorithmic issue, like, is that not existential
for that, you know, an elderly person? You know, so there are already people who are being directly
impacted by algorithmic systems and AI in really serious ways. Even, you know, some of the reporting
we've seen over the last couple months of how AI is being used in warfare,
like, you know, videos of a drone
chasing a Russian soldier around a tank
and then shooting him.
Like, I don't think we're full dystopia,
but there's sort of plenty of things
to be worried about already.
Something I think I worry about quite a bit
or that feels intuitively to me
to be a particularly plausible way things could go
is sort of what I think of as the, um, the WALL-E future. I don't know if you remember that movie
with the little robot. And the piece that I'm talking about is not the, like, junk Earth and whatever. The piece I'm talking about is the people in that movie. They just sit in their soft, roll-around wheelchairs all day and, you know, have content and food and whatever
to keep them happy. And I think what worries me about that is I do think there's a really natural
gradient to go towards what people want in the moment and will, you know, will choose in the
moment, which is different from what they, you know, will really find fulfilling or what will
build kind of a meaningful life.
And I think there's just really natural commercial incentives to build things that people sort of superficially want, but then end up with this really kind of meaningless, shallow, superficial world.
And potentially one where kind of most of the consequential decisions are being made by machines that have no concept of what it means to lead a
meaningful life. And, you know, because how would we program that into them? Because we have no,
we struggle to kind of put our finger on it ourselves. So I think those kinds of futures
not where there's some, you know, dramatic big event, but just where we kind of gradually hand
over more and more control of the future to computers that are more and more
sophisticated, but that don't really have any concept of meaning or beauty or joy or fulfillment
or, you know, flourishing or whatever it might be. I hope we don't go down those paths, but it
definitely seems possible that we will. They can play to our hopes, wishes, anxieties, worries,
all of that. Just give us like the junk food all the time, whether that's like in terms of nutrition or in terms of just like audiovisual content.
And that could certainly end badly.
Let's talk about the opposite of that, the utopian scenario.
What does a world look like where we've got this perfect balance of innovation and regulation and society is thriving?
I mean, I think a very basic place to start is can we solve some of the
big problems in the world? And I do think that AI could help with those. So can we have a world
without climate change, a world with much more abundant energy that is much more, you know,
cheaper and therefore more people can have more access to it, where, you know, we have better
agriculture, so there's greater access to food. And beyond that, you know, I think what I'm more interested in is setting, you know, our kids and our grandkids and our great grandkids up to be deciding for themselves what they want the future to look some of the biggest problems that we kind of face as a civilization.
It's hard to say that sentence without sounding kind of grandiose and, you know, trite.
But I think it's true.
So maybe to close things out, just like what can we do?
You mentioned some examples of being aware of synthetically generated content.
What can we as individuals do when we encounter or use this technology? If you're worried about it, feel free to be worried. Like, you know, I think the main thing is just feeling like you have a right to your own take on what you want to happen with the technology. And no regulator, no, you know, CEO is ever going to have full visibility into all of the different ways that it's affecting, you know, millions and billions of people around the world. And so kind of, I don't know,
trusting your own experience and exploring for yourself and seeing what you think is,
I think the main suggestion I would have. It was a pleasure having you on, Helen.
Thank you for coming on the show. Thanks so much. This is fun.
So maybe I bought into the story that played out on the news and on X, but I went into that
interview expecting Helen Toner to be more of an AI policy maximalist. You know, the more laws,
the better, which wasn't at all the person I found her to be. Helen sees a place for rules,
a place for techno-optimism, and a place for society to just roll with it,
adapting to the changes as they come. It's about balance.
Policy doesn't have to mean being heavy-handed
and hamstringing innovation.
It can just be a check against perverse economic incentives
that are really not good for society.
And I think you'll agree.
But how do you get good rules?
A lot of people in tech are gonna say,
you don't know shit.
They know the
technology the best, the pitfalls, not the lawmakers. And Helen talked about the average
Washington staffer who isn't an expert, doesn't even have the time to become an expert. And yet
it's on them to craft regulations that govern AI for the benefit of all of us. Technologists have
the expertise, but they've also got that profit motive.
Their interests aren't always gonna be the same
as the rest of ours.
You know, in tech, you'll hear a lot of regulation bad,
don't engage with regulators.
And I get the distrust.
Sometimes regulators do not know what they're doing.
India recently put out an advisory saying
every AI model deployed in India
first had to be approved by regulators.
Totally unrealistic.
There was a huge backlash there, and they've since reversed that decision.
But not engaging with government is only going to give us more bad laws.
So we got to start talking, if only to avoid that WALL-E dystopia.
Okay, before we sign off for today,
I want to turn your attention back to the top of our episode.
I told you we were going to reach out to Sam Altman for comments.
So, a couple of hours ago,
we shared a transcript of this recording with Sam
and invited him to respond.
We've just received a response from Bret Taylor,
chair of the OpenAI board,
and here's the statement in full.
Quote, We are disappointed that Ms. Toner continues to revisit these issues. An independent
committee of the board worked with the law firm WilmerHale to conduct an extensive review of
the events of November. The review concluded that the prior board's decision was not based
on concerns regarding product safety or security, the pace of development, OpenAI's finances,
or its statements to investors, customers, or business partners. Additionally, over 95% of
employees, including senior leadership, asked for Sam's reinstatement as CEO and the resignation of
the prior board. Our focus remains on moving forward and pursuing OpenAI's mission to ensure
AGI benefits all of humanity, end quote.
We'll keep you posted if anything unfolds.
The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ella Fetter and Sarah McRae. Our editors are Ben Benshang and Alejandra
Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. And I'm your host, Bilawal Sidhu.
See y'all in the next one.
Looking for a fun challenge to share with your friends and family?
TED now has games designed to keep your mind sharp while having fun.
Visit TED.com slash games to explore the joy and wonder of TED Games.