Offline with Jon Favreau - Is 2024 the AI Election? Plus, an OG Twitter Exec on Elon, Threads, and Tech Bros
Episode Date: July 16, 2023. Jason Goldman, former Chief Digital Officer in the Obama White House, joins Offline to break down “the AI election.” He and Jon talk through their fears for AI in politics, the ways they wish they could have used AI during their stints in the White House, and Jon asks Jason, a former VP at Twitter, his thoughts on Elon Musk’s leadership at the app he helped build. Plus: Max and Jon talk about Sarah Silverman’s lawsuit against ChatGPT and watch some very, very weird TikTok lives. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
Transcript
Big news, everyone. You can now listen to Pod Save America ad-free by subscribing to Friends
of the Pod. When you join Friends of the Pod, add the Pod Save America subscriber feed to your
podcast app of choice. You'll be provided instructions after you subscribe. And that's it.
Your fast-forwarding days are over. To subscribe and start listening to Pod Save America without
ads right now, head to crooked.com slash friends. It's not one piece of content that we
look at and say, oh, that really convinced a bunch of people of something that wasn't true.
It's that it contributes to a general feeling that nothing can be trusted and that it's all
kind of bullshit and that like all politics is just whatever lie you can get away with.
And that it's not really about discussing real issues, but is instead just like, you know, how do you sling mud most effectively?
And therefore, I'm just not interested.
And it causes that cohort that you're not tuned in to grow because they just view this as some kind of dirty game.
I'm Jon Favreau. Welcome to Offline.
Hey, everyone.
My guest this week is Jason Goldman, former chief digital officer in the Obama White House, who was at Twitter and Google before that.
The 2024 campaign is here, and with it has come a technological breakthrough that everyone seems to be obsessing over, artificial intelligence.
Axios has already dubbed 2024 the AI election. Following President Biden's re-election announcement,
the RNC ran an ad of an apocalyptic future made entirely of AI images. Ron DeSantis is running
ads showing AI images of Donald Trump kissing Dr. Fauci. And that's just the stuff we know about,
with over a year left until election day. Concerning? Yes. Convincing? To be determined.
The truth is, deepfake images and audio
are only a small fraction
of what AI could mean for politics.
It also offers new ways for campaigns
to contact voters, message test, and fundraise.
Expensive, labor-intensive pursuits
that make up the backbone of every campaign.
So AI may pose some benefits
for campaigns and voters, but it could also supercharge the misinformation tsunami we've
been drowning in for several years now. Jason is the perfect guy to help us sort through all the
ways that AI might shape campaigns and politics, not in some distant future, but right now. He's
got a well-deserved reputation as one of the smartest people in the space
where politics and technology meet.
In 2015, after stints at Twitter and Google,
Jason became the first ever White House chief digital officer,
a role created to help guide a post-healthcare.gov Obama administration
into the digital first era.
Since then, he's served as a guide for countless others
as they
navigate our ever-changing technology ecosystem. We talked about our fears for AI in politics,
ways we wish we could have used AI during our stints in the White House, and the digital
strategies beyond AI that we think the Biden campaign should explore to ensure victory in 2024.
Of course, I also had to get his thoughts on what Elon has done to the app
that Jason helped build, what he thinks about threads, and the unfortunate interest that so
many tech bros have taken in politics lately. As always, if you have comments, questions,
or episode ideas, please email us at offline at crooked.com. And stick around after the interview,
Max and I talk about Sarah Silverman's AI lawsuit and watch some very strange TikTok lives.
Here's Jason Goldman.
Jason Goldman, welcome to Offline.
Thanks so much for having me.
We got a lot to talk about.
Yeah, it's been a busy couple of weeks.
You and I didn't overlap in the Obama White House, but we've known each other for a while.
You have a well-deserved reputation as one of the smartest people out there on how the tech world shapes the political world.
So I was hoping you could help us sort through all the fuss around artificial intelligence as it relates to politics, campaigns, regulations, all that good stuff. Axios had a very Axios headline the other day
that read the 2024 presidential race is the AI election. Do you think that's right?
Yeah, it's going to be the AI election because, like, instead of NASCAR moms, it's going to
be large language model moms or something. That'll be the key.
You have to have your constituency outreach for bots.
How much of an impact do you think AI will have on 2024?
I do actually think it will have a significant impact.
I think that there will both be a number of legitimate, significant uses of AI, particularly
generative AI and large language models by the campaigns.
I think there'll be some breakthrough moments in which you see some surprising or
concerning events as it relates to deepfakes. We've already had some kind of signal on that.
And I think there'll be a bunch of stuff kind of bubbling under the surface that we will
have to unpack in time. Can you talk about the different ways campaigns are already using
AI right now? Yeah. So I think the ones that have gotten the most attention thus far
are the generative AI as it relates to images and voice, because you can do deep fakes as
pretending to be the candidate or create a rapid response ad as the RNC did when Biden did his
announcement and showed San Francisco being barricaded and all of this stuff. And so that's going to get a lot of attention right out of the gate. You know,
DeSantis ran this ad of Trump kissing Fauci, which I think only annoyed MAGA voters. And,
you know, you're going to see a lot of examples of those types of things of fake generative content
from the campaigns. And that's what we've seen so far.
Let's talk about some of the positive use cases. It feels like using AI to make fundraising, voter contact more efficient are examples of good use cases. Are there others?
And are there dangers I'm missing even within those good use case examples of voter contact and fundraising?
Yeah. Look, campaigns are very expensive. They require, particularly presidentials, a tremendous amount of people
to run. And you can streamline these, particularly with respect to generative AI being used to create
fundraising emails, generative AI being used to train volunteers. There's a lot of ways in which
these technologies could make the campaigns more efficient. And I think that could be virtuous if
done responsibly. I think there's a whole aspect of AI as well,
that's about using the data that campaigns have to create better messaging and to enable better
testing in terms of how the campaign engages with the world. And look, there's a lot of parts of
campaigning that are just frankly broken. I mean, you guys have talked about a lot, polling and the
problems with polling. It's not as though that couldn't be improved with better technology if
we had a better sense. So there's a lot that could be done here. On using AI to make
messaging and fundraising appeals more effective: I think for most people listening, their
experience with AI, if they have any at all, is something like ChatGPT.
I have been playing around with that. And I was like, write a speech in the style of Barack Obama.
And it's like, fine. It's not super creative. How would AI go about helping sort of improve
a fundraising pitch, a fundraising email, a message or something like that?
Yeah, I think this is a really important point because I think a lot of folks think of these
tools like ChatGPT, which is incredibly powerful and interesting. ChatGPT's own terms of use
prohibit use by campaigns. So it's not actually the most relevant tool in the political
context and political campaign context. What is interesting is the idea of creating a trained AI model that is trained on the data that campaigns
have on likely voters, on what works for their messaging. You can imagine campaigns that have
a large number of data points in terms of what they think is persuasive, what has moved people
in the past to actually show up, what has moved people in the past to donate, and feeding that
into a custom-created large language model that's built on an open-source base, and using that to do things like message testing to say, does this make someone more likely to show up for us on election day?
Is this going to make them more likely to donate for us?
And so I think that's like a really specific campaign idea that is going to get developed and made into businesses in this cycle.
Well, and it also sounds like if you, you know, you're talking about the problems with polling,
but usually what happens in a campaign is people do a bunch of focus groups,
they do a bunch of polling and they say, okay, this demographic group tends to respond well
to this policy idea, this value message, whatever else. And then
they'll tell that to us speechwriters and we'll try to write something for the candidate.
Are you saying that AI could potentially sort of replace the consultant part of it?
So this is the part that I think is interesting. You know,
I haven't worked on campaigns, but I did work in the White House in communications. And one of the
challenges you always have in that is like, what's the ROI on this? Like we had this tweet and it had
this engagement, but what did people take away from that? You know, what did we win? And I think
what you could do is if you had a robust model that you could feed this content into that allowed
you to say for a given phrase or for a given policy definition, how does this move a likely voter? You'd have a much more
specific way of answering that that wasn't reliant on survey paneling and it wasn't reliant on doing
focus groups, both of which take time to do after the fact. You could just run this into the model
before you push it out the door. And I think to me, that's really interesting because it kind of
gets to one of the core uncertainties in politics, which is, you know, people have strong opinions about this is the way you talk about this to these voters. And
it's all based on some kind of science, but this would be another tool that you could use to augment
that understanding of the electorate. One of the things I'm wondering about now that I hear this
is, I mean, maybe the biggest problem with polling today is that you have low social trust voters who don't participate in polls, don't participate in focus groups, and then show up and surprise us with who they vote for. And a model is only as good as the information and data you put into it. I wonder how you sort of capture
these low information, low engagement, low social trust voters into a data set that would then
sort of dictate a campaign's message and strategy via AI.
Yeah, I think the voter file ends up becoming a bigger source of data, as well as
survey data, because you'd get people that responded to things online and you would have some sense of what they actually had said they'd done.
And that would become a more meaningful input.
Now, I mean, look, I don't want to get too far down the this is a miraculous technology that will solve all problems.
And we should talk about the ways it should be used responsibly.
But I do think there's a part of this conversation with respect to AI and large language models,
where the tools are very interesting. And there are things that can be done with them that are new and potentially useful. And we should find the right ways to harness those things.
And to the point you made, I've heard a bunch of people saying that this could sort of democratize the process of running a campaign. Do you think we'll get to a point where that happens?
Look, we know that writing content in the voice of a principal is both a challenging
and time-consuming task.
And if we can make that more accessible in a way that was still authentic to the campaign,
to me, that seems like a continuation of what campaigns already do.
Not everything a candidate says and not every statement that's put out by a candidate is
actually written by the candidate.
Like, you know, spoilers for people who had that fantasy.
Otherwise, I would not be here. Right. So instead, we're just going to have another tool that helps with that. And I think that can
be virtuous in the hands of people who use it ethically. Yeah. Look, from a speech writing
perspective, I am currently not worried that AI is going to take speech writers jobs, but the way
that I would use it if I was speechwriting right now
is like spit out a first draft
that has the right message, the right policy,
whatever else, and then I can then edit it
and put it in the voice of the candidate.
That would save me a bunch of time and energy.
That's right.
All right, let's talk about some of the more nefarious uses
for campaigns.
You mentioned the deep fakes, videos, voices,
you know, the RNC ad that you referenced.
You weren't fooled. You didn't believe that San Francisco had gotten taken over.
Well, it's not even that I wasn't fooled. I was sort of like, what's the point of this, other than getting attention for creating an ad
using only AI? Which I think was the point, right? Because it didn't seem like it was. And
I also thought that the Trump kissing Fauci ad seemed sort of silly. What do you think the dangers are when it comes to more sophisticated uses of AI in campaigns?
So staying on the deepfakes one, for example, ones that are a little more complicated,
there was a Turkish candidate for president who dropped out after there was an alleged sex tape
leaked about him that he claims was a deepfake. And that's like an interesting case. And I think
in general, Turkey actually has, you know, a relatively sophisticated information
environment, but there's plenty of places in which there isn't as robust a national
press or as robust an information environment.
And you could get something out there that demonstrably changed the way in which a candidate
or an issue was viewed in a given market.
And that could be a problem.
And, you know, in this case, this guy claims that it was a deepfake. I don't know. But if it was, you know, this
forced a candidate out of a campaign.
Well, the example that I keep hearing is people saying that if the Access Hollywood tape dropped right now, Trump would say
it was a deepfake.
Yeah. And I think that is the bigger threat, which is that it's not one piece of content that we look at and say, oh, that really convinced a
bunch of people of something that wasn't true. It's that it contributes to a general feeling
that nothing can be trusted and that it's all kind of bullshit. And that like all politics is just
whatever lie you can get away with. And that it's not really about discussing real issues,
but is instead just like, you know, how do you sling mode most effectively? And therefore,
I'm just not interested. And it causes that cohort that you talked about previously,
you're not tuned in to grow because they just view this as some kind of dirty game.
Yeah. I mean, look, we don't have a shortage of people who've been fooled by some
fairly low-tech disinformation. So there's part of me,
when I hear about the AI stuff, that's like, well, I don't know that we need more sophisticated
disinformation to fool people, because we're not doing so well right now.
Right. But it does seem like this would just push on an open door and sort of deepen the distrust people have towards
the institution of politics, media, etc.
I think that's right. And I think the big thing that I,
in addition to that happening,
I think what's also going to be the biggest harm
in this cycle is not going to be the deep fake Daisy ad.
Like it's not gonna be the one thing that we all see,
you know, the Pope puffer jacket political ad
that we all see and get briefly fooled by
and becomes a big story.
It's going to be something that works
for some small number of people, but in a key area. I mean, presidential campaigns are decided by
a relatively small number of people in very well-known places. And so all you need to do is try to figure
out some kind of synthetic media that kind of continually pushes that.
And that's like a net positive.
And it's very cheap to do that.
And you can do that in a non-overt way.
You can just find the right, you know, WhatsApp group, you can find the right, you know,
Facebook group, and you can just kind of create a trickle of content before it's
even detected as being fake.
You know, there was this fake Twitter account, Erica Marsh.
Oh yeah.
That was this fake leftist that, you know, the Washington Post kind of exposed. She just announced today that
she's taking a break.
I thought her account was suspended.
She needs to get it back. I don't know, she needs a hiatus from all this attention. But that was a very public account, right? I
mean, the Washington Post reported on it, everybody saw it, it became a whole thing.
Imagine someone like that, who's just, you know, Aunt Sally's
friend or whatever, who gets invited to the group. To me, that seems very possible and tractable.
Yeah. And I do think, I mean, I think that is dangerous for a few reasons. One, like you said,
if it's going to a specific group, it can be under the radar, harder to detect from the media.
Also, the media is not, you know, well, we can talk in a second about sort of
the struggles that local media is having.
You go to some community and there's a robocall, right? And the robocall is from Joe Biden. And, you
know, we've done plenty of AI Joe Bidens, they're very good, on this show, thanks to Tommy Vietor.
Those are sort of, you obviously know it's a joke because Joe Biden would never
say shit like that. But I feel like so far the
AI we've seen has almost been too exaggerated, or the fake stuff has been too exaggerated, like the
Trump kissing Fauci, the San Francisco ad, and stuff like that. It really just could be subtle.
Exactly. And it could be something that would be believable for Joe Biden to say, but he says it and it's damaging in a certain community.
And then no one knows about it until it's too late.
Yeah.
Hank Green had this really interesting take on the Pope in a puffer AI thing, which is, like, the reason that worked.
He's like, you know, all these people are saying you could just zoom in and see that he's got like six fingers,
like, why are you so stupid, you just need to look at the image. That's not the point.
The point is that the reason that worked is because it didn't run afoul of any of our previous
conceptions of the Pope. We don't think about the Pope that much. And if we did, we'd be like, yeah,
maybe he's got a funny jacket on. Like, it was believable enough. It conformed to our existing
biases of what we thought was possible for the Pope. And so similarly, something that's
mainly in line with what Joe Biden might say, and particularly for an audience who might be conditioned to believe it, that's going to work.
And it could just travel as a voice memo in a WhatsApp chain for weeks before anyone knows that it happened.
Well, so this is sort of my larger question, like AI or no AI.
I feel like we've all been fighting a mostly losing battle against the spread of
disinformation over the last couple of years. We got right wing propaganda outlets, social media
platforms are a mess. We now have Republicans in Congress suing and investigating disinformation
researchers. Like what's your latest thinking on how campaigns, governments can deliver
trusted sources of information that voters will actually believe?
Yeah. So I think it's important to start with the context that you laid out, which is that we've
actually backslid, you know, from where we were in 2020 to where we are now.
I think January 6th and the pandemic both forced social media companies writ
large to kind of take more seriously their obligations to not, you know, play a role in fomenting a rebellion and also to not spread information that would potentially kill
people. And I think that the other side has done such a good job working the refs on that,
particularly with some of the investigations where you talk about where they're so committed
to free speech that they're going to haul in academics to Congress to force them to testify
about their work. You know, I think they've done such a good job politicizing any of those ideas, whether it's, you know,
election denialism, anti-vax, or even the notion of whether you should be
able to include deepfakes in a political ad. You know, the FEC deadlocked on the question of whether
or not deepfakes should be allowed in a political ad. And, you know,
there's a bill in Congress that Klobuchar introduced to disallow deepfakes
in political ads, which seems straightforward and will certainly not become law. And so, you know,
I think we've backslid because they've done such a good job working the refs. That still
does mean that there's a big obligation on political campaigns from progressives and progressive
activists to kind of to continue to push this argument and say that platforms have a responsibility
and to try to get them back to some of the positions they had previously.
Well, so there's pressure on the platforms, public pressure on the platforms. There's,
you know, government regulation, which we can get to in a second. There is, you know,
reporters sort of trying to call people on their
bullshit more. We can, I'll yell at them some more too. But if you're in a campaign, right,
or you're in the White House, you're in the Biden administration, and you are trying to
fight disinformation, what are some of the best ways to actually communicate with the groups that you're trying to communicate with in a way that sort of builds trust?
Like, is it more in-person campaigning?
Is this more like spreading messages to your social networks of people who trust you?
Yeah. So I have a theory about this that I think is something the Biden administration has actually done really well in their digital strategy.
And it's what I would call, generally, third-party channels.
Like, you know, in a traditional political communications shop, you've got the folks who talk to the press corps and you've got, you know, specialty media, and digital kind of became this new creature. And during the time that I was in the White House, we got a lot of benefit from operating the White House digital channels because they were novel, whether that's, you know, Twitter
or Instagram, anytime we launched something, it was, we got a lot of earned media from just doing
that. But now, you know, eight years later, that's old business. Everyone has one of those. You don't
get, you know, a lot of attention from joining Threads, right? You know, it's just,
that's expected. But what you can do is go to third-party channels,
people who have an audience that you want to reach
and find a way to engage with them on topics
that you care about.
So it's, you know, it's the,
what's been called the White House's influencer strategy
where they've done a good job of like, you know,
bringing TikTok folks in to like,
to hear national security briefings.
I think this is a really key part
of a modern campaign infrastructure. The issue is that it's actually fairly expensive to
run because you have to, in terms of time, you have to go find these people, you have to cultivate
them, figure out what issues they care about, bring them in, find the right way to engage them.
But I think this is the way to both effectively campaign and reach people who you otherwise
wouldn't be able to talk to, and also inoculates you from a
lot of disinformation because you have a trusted person who for a given audience is trusted on a
topic. Maybe it's sports, maybe it's a murder podcast, whatever it is, but they're trusted
for that audience. And if you can find a way to engage with them, you then have a degree of trust
that you're not going to earn by having your op-ed in the New York Times or giving
a statement from the podium even. You're just not going to reach or be
trusted in the same way by doing that. That's interesting. So, you know, if there's a bunch
of people listening to RFK Jr. on Joe Rogan, spout a bunch of crazy conspiracies, instead of maybe
just going on Joe Rogan yourself, if you're the candidate to
do another round with him or RFK Jr., maybe you find out who else is listening to Joe Rogan that
might be listening to another podcast or another piece of media that maybe the candidate could go
on and reach those people that way. Yeah. This connects to the AI conversation because
the world that we exist in right now is this increasingly fractured online media environment with the collapse of Twitter as
like a central, you know, news media hub and the rise of all these other places. People are going
to find their own niches, multiple places in which they hang out. And some of those are going to be
the Discord for a podcast. Some of those are going to be, you know, a WhatsApp group
with all of their high school buddies. There's going to be different
places that are the most trusted media source for those folks. And campaigns are going to have to
figure out that it's just not sufficient to just put the tweet out and earn the headline that you
hoped you'd get, which worked eight years ago. That same tactic is not going to work anymore.
And the way that AI plays into it is that you should be able to operationalize some of this a bit easier. If you have some
more assistive technology that can, you know, help you write the right message for the right place,
identify what those places are. You know, there are some things there that we could hope
um, the technology could help with. And particularly I'd imagine it'd be helpful in
finding audiences, segmenting audiences,
and figuring out where those audiences are getting their information from.
Which is the biggest challenge.
I mean, we had this idea when I was in the White House, and it was very labor intensive to find those folks.
And it also classically just pisses off the traditional political press.
The famous Obama-doing-an-interview-with-GloZell thing remains a sore point.
You still see it mentioned, even though it was, you know, nine years ago or whatever.
It's a real sore point. Yeah. And so that does require a lot of
effort, though, to find those folks. And so, yeah, you can imagine the technology being helpful.
Just on the topic of media and reporters and journalism, what do you think AI does to
journalism, which is an industry that's already struggling, especially, you know, it's the decimation of local media.
Right.
You know, cable news ratings dropping, as you said.
You know, Twitter is sort of falling apart.
Like, if media outlets start using AI to sort of replace writing and even reporting, what does that look like?
I mean, probably not great to start. It's not great. I think particularly, and this goes back
to your conversation about campaigns on a local level, I think this is a real concern. Because
with the decimation of local media and trusted sources that are impartial, how are you going to
know in a school board election that some claim is completely fabricated if you're not in a major city with a robust media environment?
And like AI is not going to solve that.
There's going to be-
In fact, it may contribute to the problem.
Exactly.
Like, you know, you're going to need to actually, we need to rebuild some of this infrastructure.
And, you know, there's a lot of people who are trying
to approach this through new models for local journalism, whether that's through philanthropically aided models or
through new bottom-up models of restarting local journalism, in places
like Chicago. There's ideas for how to do this, but we really need to turbocharge
that.
I mean, there's no kind of substituting a fundamental pillar of the democratic process
with the bots.
So on Pod Save America a few weeks ago,
Dan Pfeiffer asked White House Chief of Staff Jeff Zients about AI and regulation.
And Jeff actually said he was like,
it's one of the top three issues that we
care about and are working on. And he said that we're like right in the middle of
figuring out what we can do on the regulation side. What do you think they should be looking at?
Well, look, first of all, I think I've been favorably impressed from the outside
by just how the White House has thought about this issue. Like, you know, they did the
convening with the CEOs. I live in San Francisco,
and President Biden was out there meeting with, you know, some of the top researchers in the world,
hearing from them. The Vice President is hearing from civil rights leaders. So they're
really doing a robust effort. The other thing I'll say is there's tremendous in-house talent. As you know, at the White House, you get like the best people
in the world to work on these questions. That's true in technology as well. Like the Office of Science and Technology Policy
previously was headed by Alondra Nelson, and she wrote the blueprint on an AI Bill of Rights,
which is a really interesting document. And then OSTP is now headed by Arati Prabhakar,
who also is an expert in this field, and they're doing really interesting work on risk mitigation.
So they have tremendous talent and are focused on this problem in an interesting way.
In terms of what should be done, I'd like to take a step back on how we think about
regulation with respect to tech. I was one of the nerds that Obama brought in to the White House
from Silicon Valley. There's a bunch of us who came in in the wake of, I don't know if you know
this, but there was this website called healthcare.gov. It didn't work well. And he brought in a bunch of people who had worked on websites to, you know, kind of restart how the government approaches technology. And we were in this meeting once with Dennis McDonough, the chief of staff, and we were talking about technology policy, and everyone was sort of voicing the kind of standard Silicon Valley
spiel on regulation, which is like, look, we've been able to grow into this dominant,
you know, quadrant of the American economy because we've been relatively lightly regulated,
you know, and that's what's led to innovation that's led the world and produced all this
miraculous stuff. And Denis McDonough paused and said, that's great.
We're the federal government.
We regulate everyone.
You don't get to come in here and say, we're an oil company.
We've powered the world's cars and boats.
And there's just no oversight needed.
And to me, I think about this moment all the time, because if you just view this industry as analogous to the other types of
technological change that have happened in America over time, we have continued to be a great
innovator, not in spite of, but because we've had a good regulatory regime that's made technology
safer, has inspired trust in those technologies, and has allowed civil society writ
large to have input on how that technology should evolve. So that should just be true for anything
else up to and including AI. I'll tell you why I don't think that happened with tech. You know, as Denis said, we're the federal government, we regulate everything. But if it's a Democratic administration, well, you have one party that doesn't want to regulate anything anyway, and Democrats do want to regulate most things. But tech had a brand early on, of course, as you know, that was innovation, progress, inclusion, connection, and it just seemed very much like, oh, this is in line with liberal values. That's right. And so they sort of got
away with a bunch of shit until we all realized that social media was making us all crazy. We all make mistakes, you know. I was there. I was like, yeah, it's great. I went with him on the Facebook trip. I was there, yeah. But so I think that's been part of the problem with Democratic administrations, or Washington in general, regulating tech. And technology companies have obviously taken advantage of that.
And clearly, Washington missed the opportunity to do anything about social media.
Right.
Do you think AI will be any different, number one? And do you think DC has learned any lessons from its past failures in this area?
Yeah. So, I mean, I think the other reason why
tech, particularly social media, was different as well is because there's real actual First
Amendment questions that are not true for a car and are not true for airplanes. So, like,
there is a substantive distinction there as well. However, I think that that is not as true for some
of the things that we're talking about here. And I do think that there was a lesson learned
that we're just not going to let this play out in the same way.
For what it's worth, I think the technology companies
believe that as well at this point.
I don't think the tech industry exists in that world
and has that same viewpoint that I was talking about
that we were saying in 2015.
I think the industry now welcomes active engagement
with government and civil society
because they want to know where the lines are and they know that they need to engage because
there was so much bad blood built up during the social media era. And that, frankly, the technology
evolves better when there are clear lines. With respect to AI, it's the same thing. I think there
are real risks, both the types of things that we're talking about and more serious risks of
cybersecurity and bio threats that need to be looked at and are the appropriate venue for
government to think about. But the same kind of calculus applies as with any other area of
regulation, which is that if you can draw lines
where you say above this threshold, we need to apply some increased level of scrutiny and here's
how standards are set and here's how input is solicited and incorporated into that process,
then you create a wide area for innovation to happen and for there to be some of the exciting
things that we talked about already that can happen. So I don't want to ask you to go into the weeds or give me the entire summary of the AI Bill of
Rights, but what do you think are top principles for regulation of this very powerful new technology?
Look, some of this stuff should just be covered. The Klobuchar bill, that there just shouldn't be deepfakes in ads, the fact that it's not going to become law because of weird Congress stuff notwithstanding, that is the kind of thing that can happen. For me, some of these are less about the specific things that are in and out, and more about: what are
the layers at which these questions get asked and tackled? Like who, you know, what are the norms that industry is developing for itself to govern AI safety
and AI ethics? You know,
what are the ways in which activists and advocacy groups get input into the
process to make sure that questions of bias and discrimination are argued for?
There's really hard questions like, you know,
job displacement where we're not going to know what happens until the
technology is out there for longer. But that's something where there's going to need to be a government and society response. It's going to have to say, what are the right ways to insure against job loss? How do we
strengthen unions? All of those questions are the appropriate domain. And some of those things are
being worked on. I mean, the Screenwriters Guild strike is not wholly about this, but it is one of the issues is to figure out how unions maintain a seat at the table in the face of advancing generative AI.
And I think that's the appropriate course for some of these questions to get worked out.
So, you know, it doesn't seem like we'll get a bunch of regulations between now and, well, the campaign's already started.
So we're going to go through this campaign probably in a regulation-free environment around AI. We talked a little
bit about how campaigns can navigate around this. Like what should voters be looking for?
So I do go back to that comment about, you know, if something conforms too closely to your kind of preexisting biases of what should happen, it should raise a flag to you. If you see something and you're like, oh, that is so true, it's a good opportunity to ask, how true is that?
And, you know, I think also if you're someone who's a progressive and you're looking to help out during the course of campaigns, you should ask yourself, too: for which audiences are you a trusted voice? Who is someone you can help by sort of saying, actually, that was not real, and here's the real thing on that? Some of that
is just being a good active citizen and being willing to participate with your, you know,
friends and neighbors. But some of it is also just realizing that we all exist in different media
environments in which we are either, you know, audience or trusted source. And how do the
issues that you care about end up playing in that context? So I know from your White House days,
you thought a lot about ways to reach voters in a very fractured media environment. You sort of
talked about the Biden administration's influencer strategy. Are there other strategies you think campaigns should be thinking about, even aside from AI, as we head into '24?
Yeah. And I tend to end up seeing a lot of problems as organizational and structural problems
with regards to entities, as opposed to being like tactics based. And that the organization
ends up informing the strategy. And by that, I mean,
with the Obama White House, the context that allowed me to come into the job of chief digital officer was that there was a new media team, like a digital team, but they were literally in the
basement of the EEOB and no one had talked to them for a while. And they sort of had a bunch of mini fridges that were full of questionable chow mein or whatever. And it really
was that there was just no one at the senior staff level advocating for, actually, this doesn't need to be a speech, we could just do this as whatever. And, you know, Pfeiffer was really... I would have loved that, yeah. Could have used that when I was there. Well, you know, Pfeiffer really led a lot of this, before he helped bring me into this job. Like, you know, they were releasing the State of the Union on Medium, and the, you know, Between Two Ferns thing. And this was the start of that, but it took someone at Pfeiffer's level to be able to advocate for that.
Those same issues exist on campaigns, and in other organizations. Too often it's the case that the digital team is seen as sort of the internet magic tricks department that's supposed to make it cool or, you know, do something that works for kids or whatever. And that's just wild, that that's still how they're seen. That was the case in 2008 on the Obama campaign. I can't believe this many years later it's still... It's gotten better. I think it's gotten better. There's always a new generation of kids that no one understands, which is part of the issue. And, that is true. And so there's that challenge as well.
But the way in which this gets solved
is by having, you know,
digital be an equal part of the tripod
to, you know,
how you engage with the political press corps,
how you engage with earned media generally,
you know, magazine articles,
you know, TV pieces, whatever.
Digital should be a third part of the tripod on that.
You know, I just want to shout out my successor,
Rob Flaherty,
who just left the White House. And this is a very wonky point, but listeners might appreciate it.
Like Rob in his time there managed to like upgrade the Office of Digital Strategy to be an assistant
to the president office, which was the highest like staff level, which wasn't true before.
And got a West Wing office for that office. You know, these things sound ceremonial, but as you know, this actually matters in terms of, are you in the conversation
when they're putting together the thing about what the president's going to do on Tuesday?
And that's true in a campaign as well. And Rob was able to do that. And I think that bodes well for,
you know, the Biden campaign's approach to these questions.
So before the White House, you were VP of product at Twitter.
Yeah.
I know you aren't exactly a fan of what Elon Musk has done with the platform.
It's not going great.
I mean, I don't think we can say it's going great.
It's not.
I feel like we've all been on like Twitter death watch for a while now.
What do you think happens?
What's your sense of how this is?
So look, I'm biased against Twitter dying because I worked there, and we actively tried to kill it for like four years through incompetence and infighting. And, you know, we were unable to bring the bird down despite the fact that we switched CEOs like every 18 months and the site didn't work. And so, you know, it survived that time.
And so I continue to believe that Twitter will not die. It will just become more broken and
more stupid. And that I think is kind of the trajectory that it's on. I think during the
2024 cycle, it's likely that Twitter ends up still owning a particularly right-leaning aspect of the political conversation that's happening. And, you know, there's this argument that, well, without any libs to fight against, will it still be interesting? But there's enough libs to fight against. It's still happening. We're still fighting today. So I think that part will continue.
What I do think is that we're at a very strange moment for social media generally, because,
and I think you have to really temper your predictions because
we've never quite seen this before, where one dominant platform in six months has essentially
just shot itself in all feet available, grew new feet, shot those. The rate limiting and the
blocking off of tweets is just truly crazy behavior if you're running that business.
And we've seen, you know, dominant new entrants like Threads, as well as a lot of experimental new types of platforms like Bluesky and the Mastodon instances and Spill.
And I think those are going to work.
I just think they're all going to work.
I think they're all going to find audiences for specific things.
And those niches will matter for the people that
are on them. And I think that we've not yet seen, either with Bluesky or with Threads,
what happens when this federated play kind of comes out because both of them are meant to be
federated protocols that allow anyone to kind of, you know, interact with them off of the main app.
Neither one of them have actually built that yet. And so maybe that produces a whole, I think that will just continue the trend
of people finding their own niches that really work for them with their own rules, their own
content rules that make sense. And I think it makes it a lot harder going back to the political
conversation of finding the right places to play in. Because it may well be that there's some
conversation in some federated instance that's really relevant to an audience that you care about and you're
going to have to go find that and figure out a way to play with that. Well, it's tough if you're a political professional or in the media. Yeah. I found it very annoying to, like, now go on Twitter, check out Threads. Yeah. And neither of which is giving me all the news, right, in the way I want it, all the curated news that Twitter used to give me when it was working. Right. But, like, from a media consumer perspective or a voter perspective, I don't know how you find some space where all the relevant news is in one place.
Yeah, that's right.
And I think that is the new reality is that you're not going to see everything in one
place.
You're going to see some slice of things.
And I've heard, I'm sure you've had this experience as well, where like you catch up
at some point later in the day and you're like, wait, that's a whole thing. Like some general got assassinated, or whatever it is, you know, someone's group sex thing, and you're like, oh, what about this crazy thing? You're like, I'm supposed to be the one that knows about this. I'm the one who texts the group thread about this. I'm the addict. Yeah, exactly. And it's this kind of free fall experience.
If you're a sicko like us. Although I heard you had gotten better. I'm sad that it didn't last. Yeah, I got a little better. And now, with Threads coming on, I'm like, now there's two apps that I'm checking. Now there's two of them.
Yeah, no, it's really complicated.
And I think that will be the reality is that you see different stories pop off in different places. And there's a whole culture that happens around it that's distinct from another one.
And you're going to have to choose where you spend the majority of your time. And if you're in media, if you're in politics, if you're one of these places
that wants to be in the water cooler conversation, now there's an infinity of water coolers. I think
that's just, I don't think anyone's going to win. Um, and I don't think Twitter's going to die.
So I think there's just going to be a lot of stuff around that. Although I feel less confident about my Twitter-not-dying prediction than I did, like, six months ago, because it's gotten so very broken. What do you think Elon's biggest, most impactful mistake or mistakes have been
during his tenure? Definitely the Twitter Blue thing, in my view. And it goes back to the
disinformation AI question. You know, someone posted this image of something crashing into the Pentagon, and there was a flash crash on Wall Street because, you know, it looked like a terrorist attack. It was like four weeks ago or something like that.
And the relevant part of that is not the AI. Like, the image did not require some quantum computer to generate, you know, this amazing facsimile of a terrorist attack.
The issue was that the image was distributed by a blue check account on Twitter.
And so it seemed credible.
And so Elon, essentially out of pique, decided to punish the lib blue checks who had been verified, you know, in order to overthrow the lords and peasants system and instead allow, you know, the Catturd2s of the world to become the dominant source of news media. And apologies to people who know what that is, or don't know what that is.
But yeah, I mean, that ends up becoming this seminal moment because I think it does two
things.
One is he foregrounds all of those weirdos who are like
right wing lunatics. And it's this moment of audience capture for him because he wants to
be seen and liked by your Catturd2s and your Andrew Tates and, like, you know, all of these
people who are really poisonous to a mainstream advertiser business, which is what he bought. And then two, he punishes the
people who are actually producing the content that people wanted to see, you know, your celebrities, music stars, film stars, media personalities. And he views it as, like, I'm leveling the playing field. And if you want to be here, you should pay me $8. And they view
it as I've been giving you free labor for years to make this network
valuable.
This blue check is more valuable to you and your network than it is to me.
If you think I'm going to pay you anything, you can go pound sand.
And that's what happened.
So I think that inversion both poisoned the network and created a lot of these information
integrity problems.
One of the reasons I asked that question is you mentioned the time that you were at Twitter and I don't want to make you have to
relive every moment. But like, why do you think Twitter couldn't be as successful as maybe you
guys wanted it to be, even before Elon? Is there a version of Twitter in your mind that could have worked, that could solve some of the biggest problems without solving all the problems?
Yeah.
So I tend to be very fatalistic just in general, and particularly with respect to technology.
And so my read on what happened with Twitter is that we had a good idea that fortuitously caught this amazing wave in terms of a paradigm shift in society and technology, which was the mobile
revolution. Twitter comes out shortly before the launch of the iPhone. And all of a sudden,
we've got a short form messaging service, which is built for people to check all the time on
their phones. And we ride that wave very successfully. However, it just wasn't as
successful as Facebook was able to ride that wave. Facebook, it turned out, had a more valuable piece of real estate in this revolution, which
was the friend graph.
And they were able to build more interesting properties on top of it.
Also, Facebook is just a better executor, we should confess.
Like Zuckerberg was a much better executor than any of the people who ran Twitter and had a better way of operating
that company. Do you think the ad business was easier because of the structure, the format of
Facebook than Twitter? Yes, 100%. Yeah. Both the structure and the data that they had on the users.
Now, I mean, part of this is, it was not just Facebook was really good at executing. They were
also very aggressive in terms of how they built that ad business, in terms
of what data and signal they mined.
And we did not have that, nor did we engage in some of the same tactics.
And as a result, when Twitter was moving towards becoming a public company, there were two
issues.
One was a relentless focus on increasing top of funnel growth, making more users come in
the door.
And very little focus on preventing churn of people who actually
got what the product was. And the second mistake was comparing continually the business of Twitter
to Facebook and saying to the street, this is going to be Facebook, but better, which was just
never going to be possible from an ads business standpoint. It just didn't have as good a piece
of real estate as Facebook had for a variety of reasons, including the ones you mentioned. And so it kind of doomed Twitter as a public company
in some ways, because it was just going to always be continually compared to Facebook.
So I think those two decisions, if we could go back and change, maybe would have had some
effect. But I think it's probable that we just slightly had a less valuable piece of turf in
this revolution that happened,
which, going back to your question of what's going to happen next, is why now is a very difficult time to predict, because we're on the precipice of another one of those revolutions.
Like, you know, we tried to force two other technological paradigm revolutions to happen: crypto, which it turned out was only good for scams, and goggles, which it turns out no one wants to wear. AI, it turns out, is actually useful for things, as we've talked about, and many other use cases besides.
And you can feel the urgency by which this wants to be the next paradigm revolution.
And that's going to provoke a new type of growth and a new type of social experience as well.
We don't know what that's going to look like, but it's in the cards.
Well, last question, I'll just go back to,
you know, who's leading this revolution.
Like, why do you think so many of these
tech and VC characters, mostly dudes,
seem to have political opinions
that range from obtuse to awful?
Right.
Elon, David Sacks, the All-In podcast dudes,
all these people. Like, you've worked with some of these types of people. What is going on there? Yeah, I think there's two things. One is, I think a lot is written about the sort of libertarian mindset in tech, and I kind of alluded to that myself around the regulation conversation. I think that is true. Like, there is kind of a, you know, I'm socially liberal but fiscally conservative, or, you know, I don't want this stuff, I want my nice... I don't want any people building houses near me and stuff like that. Right. Which is what a lot of this comes down to. Like, you know, Marc Andreessen, who wrote that It's Time to Build post, but also, you know, wrote an op-ed like, please don't increase zoning in Menlo or whatever. And so I think there's that aspect of it. I think a more important factor for people to be aware of, one that's less considered, is that it is often the case that the right thing to do with
technology, particularly the internet, is to invent from first principles. To, like, consider, oh, how would a restaurant work if you could just do anything in code?
How would transportation work?
There's terrible downsides to this, right?
I don't want to overhype that as a great way of doing things.
There's terrible downside to that.
But in terms of building businesses,
it is often the right mentality of,
what if we were unconstrained,
and we just discovered it from first principles?
As a result, you see this in both Elon, the All-In guys, the tech industry writ large. There's this assumption you saw a lot during the pandemic, which is that we can just
apply from first principles to any problem. Like during the pandemic, you could just say like,
all right, I'm just going to start doing my own research on sort of where this virus
came from and what can be done. I had a lot of friends who were like, you know, funding research,
like, you know, building kits, sharing evidence, sometimes well-intentioned, and sometimes funding really important projects. I'll figure out the vaccines, the government doesn't need to do that, I'm just gonna Google it. Yeah. And as a result, a surprisingly large number of very successful tech people ended up taking the horse paste. Because, like, it turns out that anyone can go down the rabbit hole. The rabbit hole does not care if you have, you know, a house in the Hamptons and a private jet. The rabbit hole is there for you too. And you will find your own way in. Elon is the best example. Exactly. Richest man in the world has been completely captured by Catturd2. It's wild. It's absolutely wild. That is my fundamental take on what happened to Elon. Right. It's like, he can build these cars, he can build these rocket ships, and then he just became too online. He went down a rabbit hole and his brain got fried just like anyone else. And the attention he gets back from these people is tremendously meaningful to him, and as a result, he's turned Twitter into a product that amplifies all of that.
it's become the subreddit for,
you know, it's r slash Elon Musk at this point. Like it's the things that he wants to see
that centers his own point of view. So that need to invent things from first principles is interesting in some business contexts and super dangerous and weird in politics,
international affairs, health, like the idea that you're just
gonna, like, you know, look at some tweets, read the Wikipedia page for, you know, the Russian Empire, and suddenly David Sacks is a general. Let me tell you something. I read a whole bunch of tweets about Ukraine, and these fucking experts, they're not really... It is this bias against expertise, against other people's expertise.
And it's like, I am very successful and I was really smart and I innovated and I built this company.
And because I built this company that was so successful, therefore, I can solve any other problem.
I can go to other industries.
I can go to other areas and I can be an expert on everything.
And I think that's why RFK Jr. is actually a compelling candidate to these people. Now, I think there's a political nihilism reason as well, which is that they're super down to just
like do something that, you know, messes up the election, and that nothing really matters, so let's just endorse a candidate we'd never vote for, just because. Because also it's never going to really affect them personally. Yeah, exactly. But there is an
aspect and you see this from others of like, yeah, like, you know, the vaccine is a hoax and,
you know, vaccines generally, and the CIA did kill, you know, JFK, and RFK is going to be the one to expose this. Probably they're hiding aliens. No, you can just see this activating all of those flavor centers for someone who believes that actually the world
is bullshit. And I'm smart enough to figure out what's real if I just look at the problem with the appropriate lens. And also, if you look at the texts that Elon got,
everyone in my phone is telling me that I am the solution to everything. Right. I mean,
like literally, you know, the world's most famous, richest, smartest people are constantly being like, we need you to work on this problem. No one
else is going to get us to Mars, to AI, to, you know, a fossil fuel-less future.
You're the only solution for this. Yeah. And everyone who works for me thought that my kitchen sink tweet was really funny. It was amazing. They all laughed in the meeting.
This is a piece of advice I have for anyone who's listening. It's a niche piece of advice, but you never know. If you're in the text thread with, like, a billionaire, if you happen to be in the group thread, you have a disproportionate obligation to speak up when a dumb idea comes your way and just be like, nah, chief, not this one.
Let's pass on this one.
I'm telling you that you can make a big difference in the world just by like hitting the thumbs down react on some of these ideas.
We will leave it at that piece of fantastic advice. If you happen to
be in a tech thread with a VC guy, a billionaire, someone like that, just you know what you have to
do. Jason Goldman, thank you so much. Okay, I'm here with my friend Max.
Another fascinating week in tech.
I know.
There's so much going on.
We're being blessed by the news gods, if not the tech gods.
The AI news gods.
The run determines all the news for us.
So Jason and I talked a bit about new laws and regulations to deal with AI, but we didn't cover the various attempts to hold the AI industry accountable with laws that are currently on the books. We found out this week that comedian and author Sarah Silverman joined a class action lawsuit against OpenAI, which runs ChatGPT, and Meta, which has a large language model of its own, for copyright infringement.
And basically, the suit says that these companies have, quote, copied and ingested the protected
work of Silverman and other authors in order to train their chatbots.
Rolling Stone wrote the other week also about two other class action lawsuits filed recently
that, quote, call into question whether AI violates privacy rights, scrapes intellectual
property without consent, and negatively affects the public at large.
What's your take on dealing with potential AI abuses, dangers through a legal strategy?
It's interesting because there have been so many rounds in the past of like people developing bots that scrape text or that scrape images. And it seems like the fact that these lawsuits are coming so quickly
really reflects an understanding that these aren't just university lab experiments anymore,
that they are both derivative works of enormous commercial value on their own
and also potential competitors.
I do think it's striking that the Silverman lawsuit that she's joining has a pretty narrow claim, which is they say that the text database that the large language
models are scraping includes another database that was partially pirated. So it's this kind
of like specific, like we're not objecting to the act of scraping, but rather we are kind of
using this copyright infringement claim based on pirated text to like get our foot in the door to try to get the courts to litigate this before it advances
so far that we can't roll it back, which I think is a lesson that people learned from
a lawsuit that the publishers had against Google in 2005. Do you remember this?
Yeah.
Over Google mass scanning, like, every book in existence to put in their library. And publishers tried suing. The lawsuit took 10 years and then was dismissed by a judge at the end of it, because the judge said they didn't have standing, because it was fair use.
Fair use. Exactly. And that's what some people are worried about with this one.
Right. And I think there's a fear that if it takes too long, then the AIs will create
facts on the ground. It'll be too late to roll this back?
The AI will become sentient.
Based off of reading
Sarah Silverman's book about being herself.
That'll be the thing that
pushes it into the super intelligence.
From what I've read about this, the legal bar is high on copyright infringement. You have to prove that, like, it's the full book, you know. Right, because fair use otherwise is the argument that usually wins this for the people doing the taking. The U.S. has pretty strong fair use protections because we have such a large entertainment industry. But it feels like this is a good way to slow things down. But if there is a concern that the existing
laws do not offer sufficient protections, given the size, the potential size of the like AI-based
scraping-based industry, it seems like it would have to be legislative ultimately. There would
have to be changes to laws, I feel like. I also thought it was interesting that one of the
changes that sort of led to these lawsuits is the fact that OpenAI started off as a
non-profit and then became a for-profit entity. It's pretty telling. Yeah, right. And so now that
all of these AI companies are companies trying to make money, then it does open them to a lot
of legal challenges for copyright infringement. You know, that's one argument. There's also the
privacy argument.
Yeah. And then a broader argument that, uh, one group made in their lawsuit, that, um, AI could cause civilizational collapse. Uh, which one do you think is most persuasive?
What legal statute forbids civilizational collapse? That feels like kind of an Air Bud thing. Like, there's nothing in the rule book that says you can't collapse civilization. It starts with, like, a Stephen Hawking quote about AI, how it could, like, destroy...
It's so funny reading these legal filings. This is, like, as a professional writer, seeing the lawyers get so excited about, like, getting their quotes together, getting some good images and metaphors. And it's like, okay, all right.
A couple, yeah. A couple of associates are pretty excited about what they've written.
What do you think? I think the privacy argument is interesting.
I mean, I agree. And it makes sense as a way to try to have another angle at, like, who owns these works? And I feel like it's a good way to attack the entire idea of scraping as something that is, like, in itself a violation. Something I always found really telling is that companies like Google, which are now doing a lot of this scraping themselves, have very strict rules against scraping their own platforms. Like, if you read the YouTube terms of service, and I assume Facebook has the same thing, and this is something I ran into when I was doing reporting on these platforms, they have very strict rules against scraping their platforms, because they know that data at this size, even if the individual bits of data are free to access, data at that size has enormous commercial value.
Yeah.
One of the arguments in one of these lawsuits on the privacy stuff is that they are scraping like, you know, all of our data or a lot of our data, too much of
our data is on the internet. And if these large language models or other AI programs are scraping
like the entire internet, then it's going to be able to compile a ton of data on each of us without
our consent. Which, I mean, the social media companies are already doing that. That's the basis of their business model: compiling enormous amounts of data about us to figure out what is the ad that's going to get us to... or that they can sell against our eyeballs.
Yeah. And with them, you know, they're ostensibly, uh, doing it for selling advertising purposes, right? But for the AI stuff, it's like, and also, with that level of computing power, could the data be more predictive about who we are, what we do, behaviors? Like, we don't know that yet. But I thought it was interesting that in one of the lawsuits, it basically says, you know, Microsoft and Google have admitted they don't know where this technology is going or what its full power is, and yet they are scraping away. These are for-profit companies. I mean, like, what are we doing here?
They've always had an attitude of, like, scrape first and litigate later, which works well for them because the courts move slowly enough that they can get all the data, develop the technology.
And then I was reading about the, like, 2005 publisher's lawsuit against Google, and they offered a settlement equivalent to, I think, $200 million, which is
just like, that sounds like a big number, but when you're capturing the entire publishing industry,
that's like basically flipping the bird. Yeah. So there's the publishing industry that has to
contend with this. And also as we saw from Sarah Silverman, there's the entertainment industry.
We're looking to do an episode on this next week or soon, but the use of AI does seem
like it went from a fringe issue to something that's now at the center of the writers' strike and now the actors' strike. You have any thoughts on that as you've been watching it all unfold this
week? So as you know, from when we have talked before, initially my attitude was like, everyone's
overstating the reach of this,
what it's actually going to do in the very near term.
And we're like getting ourselves bent out of shape.
And then I saw this news story
about the movie production companies
offering this deal to SAG as part of their talks
that was just, like, the most dystopian thing I'd ever seen: that they would pay actors...
Very Black Mirror.
Very Black Mirror, yeah. Um, one day's salary, and then they would get to use their likeness in perpetuity. And this is, my understanding is, this is for background actors. So, like, when you see a big crowd, a lot of the faces in the background. And they're already using, um, CGI, like, when you see a thousand people, ten thousand people in the frame, that's actually 30 people and they've just copy-pasted them. So a lot of this is already happening. But seeing it in black and white, one day's salary and then we own your likeness in perpetuity and can do whatever the hell we want with it, it's really dystopian.
Extremely dystopian. I... we gotta see, we gotta figure out what the real proposal is here.
Yeah, because there's all this Reuters story about it, and yesterday it was going around, and you're like, what the... This is, you know...
Right. And now, of course, this is the studios, right? But the studios are saying that it's false, that the claim that they can use them in perpetuity is false. And they said the current proposal would restrict the use of the digital replica to the motion picture for which the background actor is employed. Any other use would require that actor's consent and bargaining for the use, subject to a minimum payment, the studios said.
But, like, let's see the actual proposal.
Like, I think this is going to be a challenge going forward, right? In any event, yeah, the idea that, um, these studios could, uh, just take your likeness and just use it anywhere, or your voice. And then I'm sure with the writers, it's, again, it's scripts, it's premises.
It's... yeah, the speed with which this is displacing jobs like background actor is pretty striking.
Yeah. Um, one other, I thought, very striking show of the kind of anxiety around AI in Hollywood: Mission: Impossible - Dead Reckoning.
Okay. What, what a picture. So you saw it. Tell us.
I, uh, I saw it. Um, I don't know if you've seen any of the Christopher McQuarrie Mission Impossible movies.
I think I saw the last one.
Or maybe the one before that.
I saw one of them recently.
Amazing action and the dumbest possible plot exposition you could ever imagine.
That's what I remember, yeah.
But the reason I bring it up is that the plot, while very stupid, I thought was a really striking reflection of where we are as a culture and society and how we feel about AI. So, I won't spoil anything that doesn't come in the first, like, 20 or 30 minutes, but the, like, big villain in it is an AI that has become sentient, that wants to destroy humanity, or is an enemy of humanity for some reason, which we have heard before, right?
Yeah.
But what was new about this is, instead of what typically happens, where it's like the AI is going to launch all the nukes or it's going to, like, send our armies against the other armies, this AI manipulates the way everyone around the world sees and processes information and gets the news, subtly, in a way that will make us all a little bit more radicalized and a little bit more extreme. And I was like, am I watching a fucking Offline episode on the big screen here? So the AI is just, like, a supercharged social media. They were literally, it was describing the Facebook algorithm. And there was this line that one of them had, that was like, it can, without us even realizing it, manipulate what we'll see, and it'll turn friends into enemies and enemies into aggressors.
And I was like, yeah, I've been on Twitter.
I get it.
I have been thinking that on one end of the should-we-be-freaked-out-by-AI debate, there's, like, ChatGPT isn't that exciting? Who cares? And then the other end's like, uh, the robots are coming to kill us all, it's only a matter of seconds, right?
I do... that's the, that's the range. It does feel like that's where we are, yeah. But I do think that there are dangers, much like, um, our friends at Mission Impossible laid out, right, uh, that are somewhere in the middle, that are still pretty significant, that we have not thought about yet, grappled with, imagined in many cases. And it could end up being quite bad for humanity in the way that social media has been quite bad.
And it also could have benefits
like social media originally had,
which then are overshadowed by sort of the darker stuff.
And with the computing power for AI being so much more advanced and powerful than anything we've seen, that could happen more quickly and it could be a little bit more dangerous than we thought.
Short of the like, you know, the AI nukes the world or there's a bioweapon or something like that.
Right. And it did. I mean, like I remember
like around the time you started the show,
around the time I started reporting on social media,
the idea that algorithms and AI
could influence what we all think
without us realizing it
and steer our politics in dangerous directions
was like kind of controversial
and people would roll their eyes at it.
So I thought it was amazing
that this like low to middle brow,
big budget studio movie just assumes that like the audience will not blink
at the idea that like, boy, Facebook sure is bad.
Yeah.
It's pretty ingrained in the culture at this point.
I was thinking a lot about the portrayal of AI in movies
and how it's changed over the years.
And I like went back and was looking through
all of the movies that had AIs in it.
And it was really striking
because there was this long period
from like the 60s through the 80s and 90s
where it's like the AI is gonna raise up and destroy us.
And they're like gonna be the enemy of humanity.
Terminator.
Right, yeah, Terminator. 2001 is the first one, which is based on a short story from, like, 1951.
So like a really old idea that like Hal 9000 is going to kill us.
Yeah.
But then there's this weird interregnum in the 2000s and 2010s where like the robots
become really chill and become our friends.
Like Her.
Like Her.
Yeah.
I'm going to go back and watch Her.
Iron Man.
Interstellar.
Yeah.
I don't know if you ever watched Star Trek The Next Generation,
but there's like Data is just like a chill robot who is like learning how to be a person.
And it was this funny glimpse back into like Obama era optimism about tech
that is like maybe the robots will be cool
and maybe, like in Her, we'll get to fuck the robots.
And now we still might get to fuck the robots, but then they might kill us. But then they will also destroy it all together.
Yeah, yeah, yeah. That's unsettling.
Well, before we go, there's another very scary thing out there.
Oh my God.
Maybe scarier than AI. And it is TikTok Live. Um, this is a feature on the app where you can do live streams and get paid for it. A few really weird ones went viral this week. One particular TikTok Live chilled me to my core. Can we play it?
Thank you, K. It was you. Yes, yes, yes. Thank you, 408. Thank you, 408. Thank you, King. Thank you, Olivia. Yes, yes, yes. Thank you, Yari. Thank you, Abby. Oh, what is that? Gang gang gang gang. Yes, popcorn. Yes, popcorn. Yes, popcorn. Yes, popcorn. Oh, what is that? Thank you, Sophie. Thank you, emoji. Thank you, emoji balloon.
So this is, um, her name is, this is Pinkydoll.
Sure. I, I don't even know what to say
about this.
So, as a recovering TikTok addict, thank you to the Offline Challenge for breaking me away from this, I would see a lot of these. You're scrolling through TikTok, you're looking at your, like, 20-second, like, little skits, whatever videos, and then these will pop up, where it's someone broadcasting live.
Oh, it just pops up into your...
They just push it into your feed, yeah, because they want you to, you know...
TikTok. They're so nice.
Yeah, I guess. And it is always the most insane, weird thing that you've ever seen. But something that is really striking about it is it's always someone who looks like they're about to do something that will be, like, really satisfying to watch. Like, in this video, she's about to pop popcorn in a curling iron, and you're, like, kind of curious, like, what's that going to look like? Ryan Broderick, who wrote about this, said that it's, like, an equivalent of, like, someone about to finish a house of cards, about to put, like, the last card up. I've seen, like, Rube Goldberg machines; they look like they're about to set off, but they just never do. And they will go for, like, 10, 20 minutes. And the idea is that you're scrolling through it and it's like, oh, I just want to see them do this thing. So I'll watch for a second. And they keep doing, every, like...
It's so dark.
Every like half a second, they will do something crazy.
Just like make a weird noise
or like do something weird with their face
just to keep you watching.
And the whole time they are soliciting donations
from the people who are watching the live stream
and people will do it
People will give them, like, 5, 10, 20 bucks, and they can make a lot of money this way.
Why? Why is anyone paying the money to watch the fucking popcorn pop? I, like, I don't under...
So there's a lot of, uh, comments floating around the internet, like tweets about this, because it's one of those TikTok videos that, like, hop to Twitter, where everyone's...
Sure, right.
And then everybody was like shocked
to see this like weird underbelly of the internet.
A couple of the funny tweets
were TikTok lives are proof
that society is crumbling right in front of our eyes.
What part of the human brain
is this meant to light up?
I don't know.
I mean, I think this is the CCP right here.
This is the Chinese government.
This is their op against us.
We just have like a whole generation of kids who are just going to be fucking zombies watching this woman pop popcorn and say weird shit and then just giving her money.
It's weird to see.
It's just like jangling keys, but like amped up by a thousand.
But it actually did not.
This did not start on TikTok.
This is something that actually started with, like, Twitch streamers. It would be, like, a lot of YouTubers or, like, video game live streamers would start these channels, and they would just do anything they could to hold people's attention. And it's this, like, weird participatory thing. And a lot of the, especially, like, life live streamers, the parasocial people who are walking around, like, recording themselves all the time, it's like, give me five dollars and I'll react. Give me $5 and I will do something, and you will get to feel like you're participating in this broadcast. And this is what the TikTok live streamers do too: they will react every time they get a donation, or it pops up on the screen. And it's something that comes out of the two pillars of the internet, which is video game live streaming and porn.
Just... it's where it all starts.
The foundations of our new digital civilization.
Well, it's this, I mean, this need for connection and to be seen.
Right. And, right, like, oh, I gave a donation and that... 'cause she's, like, thanking people. She's thanking different users. And it's a little bit of the, yeah, it's also a little bit of the Chris Hayes interview I did a long time ago on this. And he's writing a book about this, like, everyone's famous now on the internet, and this, like, desire for fame, right? And that's not fame, like, on a large scale, but if there's someone with all of these followers and suddenly they're acknowledging you, right, that's, like, cool for people.
Well, and if you know hundreds of people
or thousands of people are watching,
that's, I mean, this is something we've talked about a lot
is our brains are evolved for communities
of no more than 150 people.
So if a thousand people react to something you do,
that's a really powerful jolt to your brain.
I don't know that there's anything like inherently harmful
about the TikTok live streamers,
but it is very weird to see
what the internet is
becoming, laid bare like this, and have it be so, like, creepy and disturbing.
Yeah. It's just funny, like, reading the way different outlets have described this little phenomenon. Like, I was reading this piece in The Daily Dot, and they were like, this clip shows a woman robotically licking the air and repeating phrases like "yes, yes, yes" and "ice cream so good."
But we're talking about it, right?
Because it isn't just like anything you could do to hold people's attention for 0.3 seconds.
What do you think we should do for our-
Touch fucking grass, people.
Get out.
Put the phone down.
I'm ready to launch our TikTok live stream.
Yeah, okay.
What do you want me to do?
I'm just going to start acting out.
Give me $5.
I'll tell you Twitter is bad.
Put your phone down for $10.
Get some legal strike takes.
Yeah.
Oh, terrible.
Anyway, between AI and that, we're fucked.
Well, you've got some fun stuff to watch this weekend.
Mission Impossible and TikTok live streams.
Yeah, I'm going to...
I'm excited for your journey.
Coming in on Monday, I'm just gonna be, like, broken-ass brain.
Good stuff, good stuff. Um, thank you to Jason Goldman for joining us today. Thanks, Max. And, uh, everyone, uh, have a good week. See you next week.
Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau.
It's produced by Austin Fisher.
Emma Illick-Frank is our associate producer.
Andrew Chadwick is our sound editor.
Kyle Seglin, Charlotte Landis, and Vassilis Fotopoulos sound engineered the show.
Jordan Katz and Kenny Siegel take care of our music.
Thanks to Michael Martinez, Ari Schwartz, Amelia Montooth, and Sandy Gerard for production support.
And to our digital team, Elijah Cohn and Rachel Gajewski,
who film and share our episodes as videos every week.