Your Undivided Attention - Inside the First AI Insight Forum in Washington
Episode Date: September 19, 2023

Last week, Senator Chuck Schumer brought together Congress and many of the biggest names in AI for the first closed-door AI Insight Forum in Washington, D.C. Tristan and Aza were invited speakers at the event, along with Elon Musk, Satya Nadella, Sam Altman, and other leaders. In this update on Your Undivided Attention, Tristan and Aza recount how they felt the meeting went, what they communicated in their statements, and what it felt like to critique Meta’s LLM in front of Mark Zuckerberg.

Correction: In this episode, Tristan says GPT-3 couldn’t find vulnerabilities in code. GPT-3 could find security vulnerabilities, but GPT-4 is exponentially better at it.

RECOMMENDED MEDIA

In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.: Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai and others discussed artificial intelligence with lawmakers, as tech companies strive to influence potential regulations.

Majority Leader Schumer Opening Remarks For The Senate’s Inaugural AI Insight Forum: Senate Majority Leader Chuck Schumer (D-NY) opened the Senate’s inaugural AI Insight Forum.

The Wisdom Gap: As seen in Tristan’s talk on this subject in 2022, the scope and speed of our world’s issues are accelerating and growing more complex. And yet, our ability to comprehend those challenges and respond accordingly is not matching pace.

RECOMMENDED YUA EPISODES

Spotlight On AI: What Would It Take For This to Go Well?
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Spotlight: Elon, Twitter and the Gladiator Arena

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey, everyone, this is Tristan.
And this is Aza.
Your Undivided Attention is about to have its second-ever Ask Us Anything episode.
So are there questions that you'd love to ask me or Aza about the show?
Or more broadly about our work at the Center for Humane Technology
and how we want to tackle these questions of AI?
So here's your opportunity.
Go to HumaneTech.com forward slash Ask Us.
Or, and we love this option, record your question on your phone
and send us a voice memo at undivided at humanetech.com.
So that's humanetech.com forward slash ask us or undivided at humanetech.com.
We hope to hear from you, and we are really looking forward to continuing this dialogue.
So in this episode, we wanted to share with you some insights from the AI Insight Forum,
which was held this past Wednesday, September 13th in Washington, D.C.
This was a totally unique thing that has happened in history, so far as I know, in the Congress, in the U.S., in our democracy, which is that Senator Chuck Schumer, Senator Rounds, Senator Young, and Senator Heinrich all hosted this unique forum in which all the senators that came in were listening.
Normally, when Congress needs to learn about something new, they hold a hearing. And a hearing
looks like a couple experts sitting, then the senators will have five minutes each. They'll ask
questions, but they're not really asking questions to learn. They're asking questions to get
a 10-second soundbite on Fox News or CNN.
And so Democratic Majority Leader Chuck Schumer had, I think, a really important insight,
which is that's not the way to learn.
They're going to have to do something new.
So what he did, along with Senators Rounds, Young, and Heinrich, was innovate.
They had a set of experts, and we'll talk about who those people were,
sit and sort of have a structured dialogue where 50, 100 senators, Congress folk, just sat and watched and listened from 10 a.m. until 5 p.m.
Tristan, do you want to talk about what that felt like and what happened?
I almost want to tell a joke: Elon Musk, Satya Nadella, Sundar Pichai, the head of Google, Mark Zuckerberg, Bill Gates, and Jensen Huang, who runs NVIDIA, all walk into a bar.
Well, it's actually more like they all walked into a Senate Congress room.
So there we are.
We're in the halls of the Senate, and we're, you know, I remember we're walking,
we're just talking to our chiefs of staff, we've got a coffee in our hands,
and we turn the corner, and suddenly there's just this flash, flash, flash, flash of cameras
and people shouting and saying, what are you going to tell Mark Zuckerberg?
How are you going to, is AI being done safely?
And people are yelling these questions at us.
And I remember how surprised I felt turning that corner because it suddenly hit me what we were about to walk into.
Then I turn my gaze and I see, there's Elon Musk, there's Bill Gates,
there's Satya Nadella, the CEO of Microsoft, there's Sundar, the CEO of Google,
there's Mark Zuckerberg, there's Eric Schmidt, there's Sam Altman who runs OpenAI,
and then there's all these leaders of major civil society organizations.
This has never happened before.
This was, it felt like a movie.
Like, this is bizarre.
And to be honest, I felt quite nervous and anxious kind of walking into that room because it suddenly...
I think we both did.
Yeah, because we, you know, it's one thing to be working on these issues and talking about it for many years.
It's another thing to suddenly be directly across from the CEOs that are presiding
over it and both having to face them and them having to face us. Not that it's oppositional,
just that we have something very serious to work out here. And I actually opened my Senate remarks this way, and Elon Musk hinted at it during his opening remarks too: that this feels unprecedented. Like it feels like history in a weird way. And this, to Senator Schumer and Young and Rounds and Heinrich's credit, was kind of actually making that historic meeting happen,
which is the thing that we've wanted. Just like earlier, when we started working on The AI Dilemma, we wanted the White House to invite the CEOs. We said, that's never going to happen.
And then it happened. This is another thing that's kind of out of a movie. And it needed to be
this because that's what's actually at stake. And that's really what this represents is,
to me, there's many ways to see this hearing. There are many ways to be disappointed about what more it could have been. There's a way to feel like, you know, it was just a photo opportunity. I view it, if I want to view it as optimistically as possible, as a forum that treated this moment in history with the level of attention and platform that it deserves. And so, you know, 100 senators are not sitting around asking questions of the CEOs. No, they sat down quietly in these chairs in front of the 20 or so of us, the experts that
were brought in.
You know, I sat next to Bill Gates on my right; staring from all the way across the room was Mark Zuckerberg, and there's Sam Altman, and then we got into it.
I'm curious what you felt like walking into the room.
Yeah, well, I mean, honestly, I felt a little intimidated, a little less by the people that were going to be there and more by the moment, knowing that this is the point in the movie where things have to turn. And if the turn doesn't happen, that's it; this is going to be a tragic movie. There are going to be around 10 Insight Forums. So if you imagine the music for the movie, it hasn't gone to the hopeful music yet. This is sort of like the strings doing their back and forth, building energy, like it could be the turn. And I think after a few of the CEOs spoke, after Jensen of NVIDIA spoke, Sam Altman spoke, Elon spoke, a lot of that nervousness actually faded away for me because I realized nobody else was speaking about the set of
incentives driving the race. And if you know the race, you know the result. We'll get into that when I ask you in a second about your opening statement, Tristan. But the things that we talk about on this podcast really are not being represented in the most important rooms, and I felt very emboldened that when we speak, everyone in the room's head turns and listens, because it's representing something that's incredibly important for AI to go well. I've been saying the Insight Forum went better than I could have hoped, but not as far as we need. And your retort is that it was maybe 1% of everything we need, but it went 30% better than we hoped. And I think that's roughly right.
There was a moment when Majority Leader Schumer asked, after all the opening statements: all right, raise your hand if you believe that the federal government is going to have to regulate for this to go well. And every single person raised their hand. All the CEOs, every single CEO. And that was a really important moment of consensus. Of course, if we put on our cynical hat, the way this is going to play out is they'll say, yes, regulate us. Yes, we need regulation, but not that regulation.
I think in terms of vibe in the room, it definitely was more open, more civil than I was expecting.
But when the CEOs did their opening statements, they were all canned statements. And it honestly
felt like many of them were written by their PR department. This was not a from-the-heart, here I am, showing up at this moment in history, and I am grappling with the problem. A lot of them felt sort of canned. Now, that wasn't everyone. Like Sam Altman and Elon Musk, surprisingly,
and Jack Clark from Anthropic,
like, there was a little more grappling, I thought,
than, say, some of the others.
One of my big takeaways was, you know,
you don't often see senators sitting hour after hour after hour. Normally you see them when they're giving a hearing and they're grilling people, and you see them in their power. And here you saw them just as human beings, sitting with hunched shoulders, trying to take in an insane amount of information. So Tristan, how did you lay it out when you were approached for this moment? What do you say to the titans of this era? What do you say to them and to Congress? What's the most important thing to be said now?
I mean, what was crazy is there we are, and on the other side of the tables, there's more than $6 trillion worth of tech advocating for accelerating the deployment of AI.
And so the stakes really feel very, very high.
Okay, so my opening statement, what I focused on was, why are we all in this room?
We're in this room because we want this to go well, and we all want a future that's going to go
well. The problem is this belief that the future is uncertain, that we don't know which way AI will
go. And if we don't know which way AI will go, we don't want to regulate because what if we
regulate too early and we lose out on the promises of AI? And the strong claim I made in my
opening statement is that to a degree we can predict the future, which is a bold claim to make
in front of that room. And I said the reason that we know that we can predict the future, and this is
borrowing from Charlie Munger, who was Warren Buffett's business partner,
if you show me the incentive, I will show you the outcome.
Show me the incentives at play, and I will show you the outcome you're going to get.
And in social media, I said that's exactly what we were able to do.
I said, I'm going to make a strong claim that I think we can predict the future,
because what is the incentive of the current AI companies that are building AI?
We all know that Facebook can connect people with cancer to cancer support groups,
and it can help long-lost loved ones find their romantic sweethearts from high school.
But is that Facebook's incentive?
And so there's a difference between the positive benefits
that a technology can have.
Like AI obviously can do material science engineering
and help us solve climate change,
but is that the incentive of those AI companies?
And the answer is no.
I'm not saying that the AI companies
are not going to do those things.
We all want them to do those things.
That's why we're all here
because we want that future.
The point is to say,
what are the current incentives
that if left unchecked, where is that pulling us towards?
And with AI companies, the actual race that's pulling them
is to scale and deploy these new intelligent capabilities
to society as fast as possible without appropriate safety.
GPT-4 can pass the MCAT and the bar exam.
A few years ago, you couldn't take three seconds of someone's voice
and clone them, and now you can.
GPT-2 couldn't give you accurate answers
about how to make biological weapons,
but current models can.
GPT-3 couldn't find vulnerabilities in code to build hacking exploits, and GPT-4 can do that.
Everyone's trying to one-up each other
to scale and deploy more and more capabilities.
And that is the race that predicts what will happen
as this is going to go on.
But as those harms accumulate,
they will overwhelm the institutions that we have.
So one of the most dramatic points in the hearing, Tristan,
was actually with you and your old sparring partner, Mark Zuckerberg. And in particular, it was about open source models, Llama 2. And you had a surprise ally in Bill Gates.
And I would love for you to just walk me through what Mark's position was, what you said, what happened.
Well, one of the things I talked about in the solution section of my opening statement is I highlighted that we are going to need to put certain restrictions on releasing open source AI models with dangerous capabilities.
And I used the example of Meta, and it was awkward doing this because there's Mark Zuckerberg sitting across from me. And I said publicly in front of everyone in the room, including 100 senators, that Meta's Llama 2 model, which they claim was safe, if you ask it how to make a biological weapon, its safety controls will have it deny responding to that. But I said to the room that we were able, with a single person on our team and $800, to remove Llama 2's safety controls. So Jeffrey on our team specifically created something called BadLlama
to ask it, how do I make a biological weapon?
And it answered how to do that.
I just remember how it felt in the room when I said it,
which was there was sort of this hush and this quiet and this kind of gasp.
You were saying, like, look, the way Facebook is using open source smuggles in a whole bunch of assumptions, because open source used to mean safer, because it was more transparent and there were more eyes on it.
That is no longer the case.
You're now hiding behind open source.
And what open source means with AI and large language models is it is less safe because once you put it out,
no one knows what capabilities it has.
It's now out forever, and anyone can fine-tune it to elicit new, specific, dangerous capabilities.
You, Mark Zuckerberg, are endangering us by just rushing to release these models open source.
That was sort of the implication of what you were saying.
And Zuckerberg then, he actually didn't get defensive exactly,
but his defense was, hey, look, those biological weapons that now BadLlama is telling you about, well, actually, you can just find those with a Google search. And that's when Bill Gates...
I saw him, like, get physically animated.
I did too.
He was sitting right next to me.
And he turned his placard up to be called on.
He immediately gets called on.
And he's like, that is incorrect.
You cannot just do a Google search
to find that kind of information.
Yeah, so a really important question, of course, is like, okay, well, then why would Facebook
release anything open source?
It seems like maybe that's not in their business interest; put it behind an API and charge for it. And the point being, this is a result of the race, right?
And actually, Facebook has no longer released the biggest and most dangerous open source model; that's now the United Arab Emirates, which released Falcon 2, which has leapfrogged Meta, and so Meta is racing to leapfrog them again. But the reason why Meta is doing this is that they are not competing as well on the largest frontier models. So for them, they need to find a niche in the ecosystem. They need their area to be able to dominate. So they've been pushing on open source so they can get developers on their side,
so they can have people work on their models,
so they get a whole bunch of mindshare
so that their stock price goes up
because they are a leader in one of the areas of AI.
That's their incentive.
There's actually other angles here, too,
which is it's a race to get the best talent.
And the more you release these cool advanced models
that show your company has the coolest,
most advanced open source stuff,
the more of the engineers who are advanced in machine learning
and the PhDs, they want to work at your company.
There's another reason as well,
which is that when they release an open source model,
people don't need to pay for GPT-4, because now maybe I can use the free open source model that Facebook built that's roughly equivalent to GPT-3.5, and maybe in the future I can run it on my own laptop. And so there are good incentives for them to do this. But the question is where those incentives run up against safety. And you know, we have a history of Facebook unilaterally deciding for the whole world what is safe. If I wanted to really twist the knife, I mean, ask the people of Myanmar or Ethiopia whether Facebook has had a good track record in setting the line for what is safe for the rest of the world.
Just to be clear, those are places where, I think, the United Nations was basically saying that Facebook helped enable a genocide.
The reason you were bringing this all up is because this is the thing that Congress has to step in to regulate, to create rules of the road, to have a referee, because otherwise the companies, and the ecosystem as a whole, are going to fall into the race to deploy.
I want to name that you used the word referee.
Elon Musk even was actually advocating for the need for regulation.
And he said, you know, even though I'm connected to a whole bunch of people who want to delete the FDA and remove these things,
he says, I agree with the FAA 99.99% of the time.
And I'm glad that there's an FDA.
And he says, I think we need a government regulator for AI.
And I think that was really important, because Elon is followed by a lot of folks who are more the libertarians of the world, who are right to be very skeptical of government regulation, and even he is saying we need some kind of referee. We need rules of the road. And limits on open source is something that I think people agree on. In fact, to Mark's credit, he was actually quite respectful in that dynamic, by the way. I think people wanted to make it out as a dramatic back and forth between him and me, in retrospect. But he actually said, I think we agree, Tristan, that there need to be future limits on what open source models we release.
There were a number of actors and companies in the room, I'm thinking of Hugging Face, Palantir, Eric Schmidt, that were really harping on this idea of the race with China. They were actually using the UAE's release of Falcon 2 to say the U.S. risks falling behind, losing in open source.
Other countries are leapfrogging us.
And they were using this to say, don't regulate us.
Well, we need regulation, but still don't really make it real.
We need to go as quickly as possible.
In my closing remarks, I think I got to use a reframe that, Tristan, you and I have been using a whole bunch,
which is that we cannot let our rivals define what the terms of the race are.
The U.S. beat China to deploying social media as fast as possible, and what happened?
We beat China to creating a mental health crisis for our youth.
We beat China to creating polarization of our citizens. We beat China at enabling outrage-engagement algorithms to drive an incoherent
unraveling of shared reality. And this actually got a whole bunch of the senators, both Republican
and Democratic, to nod along, that we do not want to beat China to AI in the same way. And I think
this is a critical reframe, because for so long as we have to beat China is the drumbeat,
then we will be moving at a speed, you know, to use Satya Nadella's term for how fast they were moving when they released GPT-4, which is frantic.
As long as we're moving at a frantic speed, then we will weaken America.
And what we all agree on is that we need to strengthen democratic, open societies with AI.
You know, again, to applaud the format of this hearing: imagine if we went back to the Industrial Revolution, and instead of just racing directly into the Industrial Revolution and just going through all the disruption, you actually consciously had a conversation about how do we want to do this. If we had
that conversation about how do we want to do this, maybe we could have avoided 100 years of child
labor. And that was actually kind of a direction that in talking to Satya Nadella, the CEO of
Microsoft, you know, I think he was very pleased at a genuine human level to see that we were
having a conscious conversation about how do we want this revolution to take place. Now, at the
same time, of course, Satya is racing and self-describing the pace that they're releasing AI at
using the word frantic.
I think one of the biggest things I learned is that there is still a fear among both politicians and the AI companies to really go there when they're talking about sort of what we've been calling third contact harms.
That is, when AI becomes recursively self-improving, when it starts to automate science, when you get an intelligence explosion. And there's a lot of sort of pussyfooting around it. They just sort of intimated it.
And Elon said we need to take civilizational risks seriously.
Sam Altman actually said something I thought
was one of the most insightful things of the forum,
which was to point out how bad our intuition is
about where things will be in the future.
He said, imagine rewinding the clock to 2020.
And you were asked to give a prediction
of where AI would be in 2023,
would you have gotten it right?
Would you have said AI would be able to take a sketch of a website on a napkin and turn it into a fully working web page?
Do you think that AI would be able to pass the MCAT?
Do you think that AI would be able to draw
photorealistic images that you can't tell
whether they're real or not?
And the answer is no.
None of us had that kind of intuition.
And then he asked, okay, now sitting in 2023,
if you project your mind forward to 2026 or 2029,
do you think your intuitions of how far AI will be are right?
And let that sink into your nervous system for a little bit,
because the answer is, again, no. We are almost certainly underestimating the kind of progress that will happen when you're on a double exponential.
One of the frames I shared,
and this is originally due to Ajeya Cotra from Open Philanthropy,
is that it's like 24th century technology,
crashing down on the 21st century. And just imagine if 21st century technology came crashing down
on the 16th century. So suddenly, imagine, like, the king is sitting around with, you know,
their advisors, and suddenly they have to deal with cell phones and radio and television and the
internet, all at the same time. Do you think that their kingdom would have held? Do you think
that governance would have worked? And the answer is obviously not. So why should we think that our current form of 21st century governance, our democracies, will be able to hold? And unless we do something unprecedented, they won't. And I think that line, that
frame of 24th century technology crashing on the 21st century, that certainly got picked up by a number
of different senators. Another thing that stuck out for me was Sam Altman, Eric Schmidt, and
Elon, and Jack Clark from Anthropic. They were really focused on what some people consider to be the sci-fi risks, but really the incredible dangers of how fast this stuff scales and where we're going. And I think Eric Schmidt said something that really caught the room: you know,
he's a PhD in computer science and ran Google, and he doesn't know how the latest AI systems
are working. And that's because, again, these systems have emergent capabilities where
the engineers themselves can't predict it. And I think that was really helpful for many of the
senators here because they think of Eric Schmidt as the brilliant PhD who was CEO of Google for so many
years, and if he doesn't understand how it works, that says a lot because the field is going so
fast. And those of you who remember our AI Dilemma presentation know that's one of the reasons we're so concerned: can you effectively govern something when it is moving at a faster rate than you are currently able to apprehend? It's like every time you try to turn the steering wheel of the car you're trying to manage, it's moving at a faster rate than your eyes are even currently picking up. Do you think that where you nudge the steering wheel is going to be accurate if the car is moving faster than your eyes are currently appraising reality? And that's one of the real conundrums with AI. And just to link this back for listeners: in our work more than a year ago, we talked on this podcast about the complexity gap. The issue here is that the complexity, speed, and power of technology is scaling way faster than the level of complexity of our governance. And that's the meta issue, no pun intended to Meta, that we have to solve. And that's actually one of the questions a senator asked.
I think he actually said that almost directly,
that if Eric Schmidt doesn't even understand this,
how can we possibly regulate it?
It's a great question.
We've already talked about how,
if you know the race, you know the result,
you don't have to understand all of the internals
to understand how it's going to impact the world.
We actually just had a meeting with folks at the White House
where we laid out everything we've been learning about
what are possible immediate and long-term ways
of binding AI and making it go at a pace that lets us get it right.
There was this interesting thing that I'll say,
which is so much of the head nodding in the room to the comments that we made
was based on, we've already seen this movie before.
We saw it with AI and social media.
And it was interesting that after lunch in the second part,
a lot of the people came back from the companies' side
and they were kind of pushing back on the fact that social media had been this big problem
because I think they saw that it was actually getting a lot of
head nods from the room that we got that wrong, including for, say, liability.
The writer Upton Sinclair said, you can't get someone to question something that their salary depends on them not seeing. And the sort of quote that Aza and I were kind of batting around, the phrase was, people who've been 'Sinclaired,' where their beliefs, their epistemology, are basically just predictive of the incentives that they operate with. And how many people
in a room, when we're actually trying to govern a technology,
are doing the thinking that is independent of their incentives.
And politicians have their incentives,
and CEOs have their incentives.
And if you think about what would actually entail good governance,
like answers to how we govern a technology
that are based on the truth value of whether open source is in fact safe,
should be based on a clean epistemology,
a clean sense of knowing what is true
that is unencumbered by incentives.
Either from the political side,
politicians who have to get re-elected or stick with their tribe, and decoupled from the CEO incentive side.
And what we radically need in rooms full of governance is clean thinking.
I just wanted to thank every one of you who's out there listening to Your Undivided Attention.
Thank you so much, and we will see you, or rather, hear you next time.
Your Undivided Attention is produced by the Center for Humane Technology,
a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Kirsten McMurray and Sarah McRae are our associate producers.
Sasha Fegan is our managing editor.
Mixing on this episode by Jeff Sudaken.
Original music and sound design by Ryan and Hayes Holiday.
A very special thanks to our generous supporters who make this entire podcast possible.
And if you would like to join them, you can visit humanetech.com.
You can find show notes, transcripts, and much more at humanetech.com.
And if you made it all the way here, let me give one more thank you to you, for giving us your undivided attention.
