Your Undivided Attention - Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
Episode Date: March 20, 2025

One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.

Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against these campaigns. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
"Merchants of Doubt" by Naomi Oreskes and Erik Conway
"The Big Myth" by Naomi Oreskes and Erik Conway
"Silent Spring" by Rachel Carson
"The Jungle" by Upton Sinclair
Further reading on the clash between Galileo and the Pope
Further reading on the Montreal Protocol

RECOMMENDED YUA EPISODES
Laughing at Power: A Troublemaker's Guide to Changing Tech
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS: Naomi incorrectly referenced the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program. Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be "destroyed" by seatbelt regulation. We couldn't verify this specific language, but it is consistent with that industry's anti-regulatory stance toward seatbelt laws.
Transcript
Hey, everyone, it's Tristan.
It's Daniel.
Welcome to Your Undivided Attention.
So Daniel, something I think about often is how throughout history society takes a lot of time to confront the harms caused by certain industries.
I think about Upton Sinclair writing about the meatpacking industry in the early 20th century.
I think about Rachel Carson talking about Silent Spring in the 1960s and the problems of pesticides or tobacco in the 1990s.
And with social media, we're seeing it happen again.
The can just keeps getting kicked down the road.
And with AI moving so fast,
it feels like the normal time that it takes us to react
isn't compatible with doing something soon enough.
You know, we can become aware of serious problems,
but if it takes too long to respond,
meaningful action won't follow.
Totally.
And I think this has to do with the way that we manage uncertainty in our society.
You know, with any new thing, with any industry,
it's important that we sit with the uncertainty as we discover what's happening.
But also, uncertainty is scary.
And it's really easy for us to react to that fear that we experience
sitting with uncertainty by avoiding thinking or speaking about topics when we feel uncertain.
And then, you know, as a society, I often think about when we're uncertain about what's true
or who to trust. We struggle to make collective informed decisions. And when we watch experts
battling it out in public, when we hear conflicting narratives and strong emotions, it's easy to
start to doubt what we think we know. And it's important to recognize that that's not by accident.
You know, it's because companies and individuals with a lot of money and a lot of power
want to hide the growing evidence of harm, and they do so with sophisticated and well-funded
campaigns that are specifically designed to create doubt and uncertainty.
So how do we sit with this?
Our guest today, historian Naomi Oreskes, knows this better than anyone.
Her book, Merchants of Doubt, reveals how this playbook has been used repeatedly across
different industries and time periods.
And Naomi's most recent book, The Big Myth, just came out in paperback.
So how do we make bold decisions with the information that we have right now
while being open to changing our minds as new information comes?
How should we sit with uncertainty, which is everywhere and unavoidable,
while inoculating ourselves from weaponized doubt?
We discuss all of these themes and more.
This is such an important conversation, and we hope you enjoy it.
Naomi, thank you for coming on Your Undivided Attention.
Thank you for having me on the show, and thanks for doing this podcast.
So, Naomi, 15 years ago, you and your co-author Erik Conway wrote this book, Merchants of Doubt,
which really started this conversation about the ways that uncertainty can be manipulated.
Let's start with the simple question.
Who were the Merchants of Doubt?
The original Merchants of Doubt were a group of physicists.
So they were scientists, but they were not climate scientists.
They were Cold War physicists who were quite prominent.
They had come to positions of power and influence and even fame to some degree during the Cold War
for work they had done on U.S.
and rocketry programs. So they had, they had been in positions of advising governments.
They were quite close to seats of power. These four physicists, who had been very active in
attacking climate science, it turned out, had also been active in attacking the science related to
the harms of tobacco. And that was the first indication for us that something fishy was going
on because that's not normal science. Normally physicists wouldn't get involved in a debate about
public health. They probably wouldn't even get involved in debate about chemistry. I mean,
maybe if it overlapped their work. But these guys were so outside their wheelhouse that was pretty
obvious that something fishy was going on. The other real tell was that the strategies they were
using were sort of taking legitimate scientific questions, but framing them in a way that wasn't
really legitimate. So it's normal in science to ask questions. How big is your sample size?
How robust is your model? How did you come to that finding?
Those are all legitimate questions,
but they began to pursue them with a kind of aggression
that wasn't really normal,
and the real tell was to do it in places that weren't scientific.
So we expect scientists to pose questions
at scientific conferences, at workshops,
in the pages of peer-reviewed journals,
but that's not what these guys were doing.
They were raising questions in the pages of the Wall Street Journal,
Fortune, and Forbes.
So they were raising what on the surface
looked to be scientific questions,
but they were doing it in an unscientific way
and in unscientific locations.
So as historians of science,
it was very obvious to us that something was wrong,
and that's what we began to investigate.
Right, but if I'm naive to that story,
I might come to that and think,
you know, here are people who might be curmudgeons,
here are people who might be fed up,
here are people who might be angry. But that's
not the claim, right? The claim is deeper than that,
that these were people who are actually deeply incentivized.
Curmudgeons are normal in science,
and they're not necessarily bad. I mean,
they can be a pain in the ass. There's nothing
per se wrong, particularly
there's nothing epistemologically wrong
with being a curmudgeon, but there is something
pretty weird when you start questioning
climate science in Women's Wear Daily,
right? And so
we started looking into it. And then that's
when we discovered this connection to the
tobacco industry. And so then we
thought, well, why the heck would anyone, anyone at all, much less a famous, prominent scientist,
make common cause with the tobacco industry? And one of the key players here was a man named
Frederick Seitz, who was an extremely famous physicist, someone who was very close to people who
had won Nobel Prizes. He had been the president of the U.S. National Academy of Sciences, so the
highest level of science in America and the president of the Rockefeller University, one of America's
most prestigious scientific research institutes.
So why would this man, famous, respected, successful, make common cause with the tobacco industry?
And this is where being a historian is a good thing, because you can go into dusty archives
and you can find the papers where people answer these questions in their own words.
And what we found was that all four of these men did this for what was essentially ideological reasons.
That is to say, it had nothing to do with the science.
In private, they're not having robust conversations about, you know, how good the evidence is that smoking causes cancer.
No, that's not what they're saying.
What they're saying is this threatens freedom.
They're saying if we let the government regulate the economy, if we let them ban smoking or even regulate it strictly, like banning advertising.
And this is a really important point
we'll come back to, about free speech issues.
If we let them ban tobacco advertising, then we're going to lose the First Amendment.
If we let them ban smoking in public places, the next thing you know, they'll be telling us where we can live, what jobs we can have, what cars we can drive, and we'll be on the slippery slope to totalitarianism.
And so for them, it's deeply connected with their work in the Cold War.
So the Cold War part of the story is not just incidental.
It's actually central.
They feared that if the government became more involved in regulating the economy through environmental or public health regulation, it would be a backdoor to communism.
So there's this sort of slippery slope in their own argument.
They're accusing their enemies of being on a slippery slope, but they themselves go on the slippery slope of going from climate scientists doing the work that shows why we might want to regulate fossil fuels to accusing them essentially of being communists
and wanting to see a kind of communist government in the United States.
Sure, and honestly, this is one of the oldest debates in science.
The whole enlightenment story that really stuck was the story of Galileo versus the Pope,
and the Pope saying, you know, you basically can't say this
because it would erode a lot of things about the world.
And so there's always been this thing with science of how do we tell the truth
separate from values we may care about.
If I can just say on that, one of the ironies of this, though,
and we see this throughout this story,
these guys like to present themselves as if they are Galileo,
that they're the ones who are standing up for truth. But of course, it's the opposite.
They're on the side of very, very powerful corporations like the fossil fuel industry, but they try
to flip that script and claim that they are the martyrs. They're the oppressed ones. And we see
that going on even today. And that's one of the reasons we wanted to have you on the podcast,
because it's actually a really confusing time to be a person today in our news environment,
to figure out who is being suppressed, what opinions are real, what opinions are manufactured.
And so we really want to come back to that theme again and again as we talk about this.
because it has such relevance to where we are today.
But before we do that, I want to go back and talk about some of the mechanics of how doubt is seeded.
Can you talk a little bit about the way that they did this?
Absolutely. Thank you.
So, well, the name of the book really tries to convey the key thing.
The idea is that they're selling doubt.
They're trying to make us think that we don't really know the answer, that the science is unsettled,
that it's too uncertain, that the uncertainties are too great to justify action.
And it's a super clever strategy.
These people are very smart, right?
They're not dumb, because they realize what would happen if they tried to claim the opposite of what climate scientists are saying.
So if climate scientists are saying the earth is heating up, it's caused by human activities,
if they were to try to say, no, it's not heating up, they would lose that debate.
They have already lost that debate because the scientific evidence is overwhelming.
But if they say, well, we don't really know, you know, we need more data, we should do more research,
and there's a lot of uncertainty.
The uncertainty is a key part of this story.
That's a much harder thing for scientists to argue against
because if I say there's a ton of uncertainty
and you say, well, I mean, yeah, there is uncertainty.
Of course, there's always uncertainty in science,
but it's not that bad.
You know, the scientist is now on his back foot or her back foot, right?
The scientist is now put in a defensive position
because they cannot deny categorically
that there are uncertainties.
So the scientists are placed in this kind of defensive position.
And the other reason why this strategy is so clever
is because they're saying it's uncertain,
the science isn't settled, there's a big debate.
And then they say, in fact, I will invite you to debate me
on my podcast, on Fox News, in the pages of the Wall Street Journal.
Now, the scientist often agrees,
because the scientist believes in free and open conversation,
the scientist thinks I have nothing to hide,
why wouldn't I debate?
But the fact is, by agreeing to debate, the scientist loses
before he or she has even opened their mouth
because the purpose of this argument
is to make it seem that there's a debate.
Right. They win as soon as there is a debate.
Then the audience says, oh, there is a debate.
Bingo, the merchants of doubt have won.
That's right. So it's like people's minds are left with the idea
that there is a controversy. We still don't really know.
And, you know, there are so many other strategies that
I'd love you to sort of talk about, you know:
keeping the controversy alive, you know, delaying, let's commission an NIH study or a study to figure out what the true effects are, astroturfing, these fake organizations that get sort of spun up.
You talk about Citizens for Fire Safety or the Tobacco Institute.
You just give us more of a tour like basically, how is our information landscape getting weaponized so that it's harder to see the truth?
Because basically, unless we have antibodies for understanding these different strategies, we're vulnerable to them.
So essentially, you are kind of a little vaccine here to help us have the antibodies
to understand this. Yeah, and it's interesting because some of my colleagues have now started to talk
about inoculation in the context of bad information. But of course, that's a really tricky
metaphor, given that we have lots of fellow Americans who are suspicious now about vaccination,
and so about inoculation. So it's a tricky landscape, but it's all the things you just said. So one of the
strategies kind of involves buying out scientists. I hate to say this, but it's true. One of the
strategies is to say we need more research. It's too soon to tell. And it sadly is relatively
easy to get scientists to agree to that because the reality is, well, you know, scientists
love to do research. And there always are more questions that can be asked. And as I've already
said, there are always some legitimate uncertainties that we would do well to look more closely
at. So it's proved very easy to get scientists to buy in sort of inadvertently by just saying,
oh, let's have a big research program. So for example, back in the first Bush administration,
President Bush established the Global Climate Research Program. Now,
back in 1990, that wasn't necessarily a bad or malicious thing to do,
but it contributed to this narrative that it was too soon to tell
that we needed to do a lot more research,
even though in 1992, President Bush signed the United Nations Framework Convention on Climate Change,
which committed the United States to acting on the available knowledge,
which was already quite robust at that time.
Another thing you mentioned was the astroturf organizations.
So now we're going from less dishonest to more
dishonest. So there's a whole range of activities, some of which are catastrophically dishonest and
deceitful and really appalling and maybe even illegal to others that are more manipulative.
So AstroTurf organizations involve creating organizations that purport to be citizens groups
or purport to be representing important stakeholders like firefighters and getting them to do
the dirty work of the industry. So you mentioned the Citizens for Fire Safety. This was an organization
that was created and wholly funded by the tobacco industry
to fight tobacco regulation by fighting back
against the overwhelming evidence that many house fires were caused
by smoking, particularly smoking in bed.
And so there were all kinds of campaigns
that pointed this out to try to discourage people from smoking,
particularly from smoking in bed.
The tobacco industry made the claim
that the real culprit wasn't the cigarette,
that it was the sheets and the pillowcases,
and that these things needed to be fireproofed.
And so they persuaded people across the country, states,
the federal government, to pass regulations requiring flame retardants in pajamas.
And I remember when I was a parent,
it was incredibly hard to find comfortable cotton pajamas for my children
because they were all made out of these disgusting synthetic fabrics
filled with flame retardants.
That was pushed heavily by this group called the Citizens for Fire Safety,
represented by firefighters who were in the pay of the industry.
These were true industry shills.
People should just stop here for a moment
and recognize just how diabolical this is.
It's very diabolical.
You've got a product that is literally
causing houses to burn down,
and instead of actually fixing that product,
because they don't want to change it,
they can't really change it,
it's not really changeable.
And so they want to externalize the source of this harm,
this thing that's happening in the world,
saying, well, there's another place that it's coming from.
It's coming from the flammable materials,
let alone the fact that that probably gave us
more PFAS and forever chemicals
in all of our furniture
and bed sheets.
We now know that for sure it did, right.
Right.
And the idea, though, that I think most people don't know,
there's sort of this asymmetry,
just how much effort would a, you know,
incentivized actor go through to spin up, you know,
lots and lots or dozens of fake organizations,
fake institutions,
in order to sow doubt about this thing.
And so that's why I was so excited to have you on,
because I just don't think people understand.
So in the case of social media, you know,
they might say, well, we need to do research,
or let's fund parent education programs
so that parents are better educated
about how to manage their kids' use of screen time,
which is, of course, not an actual solution
to the fact that they've created network effects
and lock-in, hyper-addictive products
that continue to manipulate people
much more powerfully than their parents
could ever be educated to counteract.
And so there's this sort of common strategy
of distracting people from the true source of the problem.
Exactly. And, you know, the word diabolical
was so apt here because, you know,
this is also happening now with this issue
of hyper-palatable foods,
you know, hyper-processed, ultra-processed foods
that are really hard to stop eating.
And you might be too young to remember this.
But when I was young, there was an advertising campaign on television for Lay's potato chips.
And it featured a young girl, a blonde, very pretty young girl.
And she's talking to the devil.
And the devil hands her a potato chip and says, I bet you can't eat just one.
And I look back on that ad now, and my mind is blown because in a way, they're admitting what they were doing.
It turned out they were doing research to figure out how to manufacture a potato chip
that you couldn't eat just one of, or five or ten,
that you would eat, you know, the whole bag.
And it was deliberate and it was knowing.
And they even weirdly tipped their hand in the ad,
except none of us realized that that's what they were doing.
Well, this seems like also just to do a couple more here,
there's another strategy which is emphasizing personal agency,
saying, well, it's up to you to have personal responsibility
with how many Doritos, you know, you have.
It's up to the person who's addicted to cigarettes to choose,
do they really want to be addicted or not.
They can still choose that.
Social media, it's up to you to manage that.
Or saying, here's your personal carbon calculator
where you can calculate your own personal carbon footprint
which distracts attention from the sort of systemic issue
which would threaten trillions of dollars of value
if they had to change in any way.
Yes, well, the agency one is crucial,
and it relates to the sort of bigger framework,
which is the framework of freedom.
So as you pointed out,
there are many ad campaigns both on social media
and in legacy media,
basically trying to shift the burden away
from the producer of the damaging product
to the consumer,
and to say, well, this is our fault because we drive too much.
And so BP ran a big ad campaign that many of us have seen,
and it was super successful: calculate your own carbon footprint.
And how many of us even now think about that?
They'll say, oh, I'm traveling less because I'm trying to reduce my carbon footprint.
Right.
And, of course, reducing your carbon footprint isn't a bad thing.
If you can do it, it's a good thing.
But the net result of this is to shift agency,
to shift it away from the producer that is knowingly making a harmful
product and saying, no, it's my fault because I made that choice. But it wasn't entirely a choice
because at the same time, the industry is fighting regulations that would restrict fossil fuels.
They're fighting tax credits for electric cars. So, you know, I'm not really making a free choice.
I'm making a choice that is heavily affected by what the industry has done. This is another
strategy that we can track back to the tobacco industry. Early on, the tobacco industry realized,
and again, this is in the documents, we can find them saying it in their own words,
that they would not succeed if they said to the American people,
yeah, we know cigarettes will kill you, but, oh, well, you know, enjoy it while it lasts.
No, that was not a message that would work.
Lots and lots of people would say, oh, I should try to quit.
But if they said, this is about freedom, this is about your right to decide for yourself,
how you want to live your life, do you want the government telling you whether or not you can smoke?
And that was a very powerful message.
I think for two reasons.
One is because none of us do want the government telling us what to do.
I think most of us feel like, yeah, I want to decide for myself where I live, where I work, whether I smoke or not,
but also because it's tied into this bigger ideal of America as a beacon of freedom,
that what makes America America is that this is a country of freedom.
And so the industry ran all kinds of campaigns with American flags, with the Statue of Liberty.
And we talk about this in our new book, The Big Myth.
We can track this back actually into the 1920s and
30s, with newsreels and documentaries evoking all these icons of American freedom.
And this was a very powerful argument because it meant that you weren't fighting for a deadly product.
You were fighting for freedom. And who was going to argue against that?
Yeah. So it occurs to me that when we talk about this, what we're really talking about is not doubt itself.
What we're talking about is sort of unfair conversational moves, right?
It's unfair to turn a fact conversation into a values conversation. It's unfair to
pretend that everyone is just saying this when you're bankrolling it. And so I kind of want to
come back because I have to admit I bristle slightly about just focusing on doubt because science
and the process of honest inquiry demands that we sit with uncertainty. And it's part of our
ability to act in this world. We don't know things. Sometimes longitudinal studies do take 20, 30,
40 years. What is the difference between manufactured doubt, this deeply unfair
conversational move that destroys our ability to be together, versus a more wise sitting
with doubt?
Yeah, that's a great question.
And it's one of the things we talked about in the book originally, that the doubt strategy
is very clever because it's a kind of jujitsu move.
It's taking what should be a strength of science.
The fact that scientists are motivated by doubt, which in a different context we call curiosity,
scientists do spend a lot of time worrying about uncertainties and how to characterize them
accurately, fairly, and honestly.
And without some degree of doubt, there wouldn't be progress in science.
So that's a good thing.
But the merchants of doubt take that good thing and they turn it into a liability.
And they want to make us think that unless the science is absolutely positively 100% certain,
that therefore we don't know anything and can't act.
And so it's really about exactly what you said, that we as citizens have to understand
that we have to live with uncertainty.
I wrote a paper once; it was called Living with Uncertainty.
And the reality is we do that in our ordinary lives all the time.
We get married, we buy a house, we buy a car, we invest for retirement, even though we might die beforehand.
So we live with uncertainty in our daily lives all the time.
And we trust ourselves to make judgments about uncertainty in our daily lives because we think we have the information we need to make those choices.
And so this leads to another strategy we haven't talked about, which is the direct attacks on scientists.
Part of the way this works also is to try to undermine our trust in science generally,
to say that scientists are crooked, they're dishonest, they're in it for the money,
which is again pretty ironic coming from the tobacco industry.
Very common.
And this is one of the things that we've tracked in our work that's particularly distressing
about what's going on right now.
Many of the things we studied began as attacks on particular sciences
that seem to show the need for regulation,
like science related to tobacco, the ozone hole, climate change, also pesticides.
But then it's spread.
And what we've seen in the last 10 years, really since we published the book,
is this broader expansion to trying to cast doubt on science more generally.
So this broad attack on science and scientists in order to make us think we can't trust scientists,
but then who should we trust?
So as you say, now we're in this saturated media landscape
with information coming at us from all directions.
and it's really, really hard for anyone to know who they should be trusting.
I feel like there's a distinction between reflexive mistrust, which is a problem,
and then reflexive trusting, which is also a problem,
and what we're looking for is warranted trustworthiness.
And one of the things I'm worried about the most in this space is that I've seen
the response of scientists, even friends and colleagues,
is to try to push for more certainty.
And they'll say, no, no, we know this.
We're more certain.
And I have to admit, I sort of doubt that that's the right response.
I kind of think we all need to sit with more uncertainty.
I mean, if anything, I blame the marketing teams.
In the tobacco example, I blame the "cigarettes are safe,
eight of ten doctors agree" campaigns
pulling us to a place where we believed they were safe.
And so how do we counteract that?
Because I'm a little worried that science will be a race to the bottom of people shouting
and claiming what we know as sort of a false certainty in reaction to this very combative environment.
Yes, I agree.
I think you're absolutely right. I think it's a big mistake for a scientist to say,
oh, we know this absolutely. I think it's much better to say, of course, there's uncertainty
in any live science. The whole point of science is to learn, right? It's a process of discovery and learning.
And this is, of course, where history of science is so helpful because, of course, we learn new things,
and that's good. But we have a issue right now. We have to make decisions that in some cases are literally life and death.
And in a case like that, it does not make any sense to say, oh, well, I need to wait another 10 years till we better understand this virus.
Or I have to wait until sea level is on my window sill, because then it's too late to act.
We make decisions based on the best available information we have right now, but we also prepare to change in the future if we need to.
And we have a term for that in science.
It's called adaptive management,
and it was used very, very successfully in the ozone hole case.
The International Convention, the Montreal Protocol that was signed to deal with the ozone hole,
had a feature in it for adaptive management
because scientists knew that there were still things they didn't understand about ozone depletion.
And so the politicians put in a feature that as they learn more information,
the regulations could be made more strict or they could be made less strict.
And we could do the same thing for climate change.
I mean, it's what we should do.
We should always start with the least regulation
that we think will get the job done,
but be prepared to tighten the regulations
if more science tells us we need to
or to lessen them, as the case may be.
What I love about the example you're giving
with the Montreal Protocol Agreement
is that it's a law that recognizes its own humility,
that it's not always going to be accurate,
that the letter of the law and the spirit of the law
are going to diverge,
and we need to be able to update the assumptions
of the law as fast as the sort of situation requires it.
And that's building in kind of the right level of uncertainty.
Yeah, and if I could jump in on that, you know, a lot of people have criticized the IPCC for,
you know, a variety of different reasons.
But I think it's really important for people to understand that the UN Framework Convention
on Climate Change was modeled on the ozone case.
Because the ozone case was such an effective integration of science and policy, and it has
proved effective and has done the job it was intended to do, the UN Framework
Convention was modeled on that. Now, it hasn't worked, but I think the main reason it hasn't
worked is because of the resistance of the fossil fuel industry. And, you know, we've now been
witnessed to 30 years of organized disinformation and campaigns to prevent, really to prevent
governments from doing what they promised to do back in 1992. So, Naomi, one of the things you
write about in your new book, The Big Myth, is how those who are advocating for the maximum
unregulated sort of free-market approach have a selective reading
of history, and you have this great example of Adam Smith. Could you speak to that? Yeah. So one of the things
we talk about in the book is how the Chicago School of Economics really misrepresented Adam Smith
and how many of us have this view of Adam Smith, the father of capitalism, as an advocate of
unregulated markets that business people should just pursue their self-interest and all good
will come from people pursuing their self-interest. That is not what Adam Smith wrote in the
wealth of nations. In fact, he has an extensive discussion of the absolute essential
nature of banking regulation. He says if you leave banks to bankers, they will pursue their own
self-industry and they will destroy the economy or at least put the economy at risk.
You can't let factory owners just pursue their self-interest or they'll pay their workers' starvation
wages. And he has multiple examples of this, which he goes on to describe at quite great length.
Yet all of this has been removed from the way Adam Smith has been presented in American culture since
1945. And in fact, you know, I teach agnotology, the production of ignorance, the
study of ignorance, and it's really interesting to see how this is a beautiful example of it, because
in the 1920s and 30s there were people, even at the University of Chicago, saying, no, that's not
what Adam Smith said. But by the 1950s that had all been erased. It had been expunged, and they
were producing edited volumes of Adam Smith that left out all of his discussion
of the rights of workers, the need for regulation, etc.
So I want to take us a little bit to a different direction,
which is there's another way that science can get weaponized.
So one of the other areas of our work, Naomi, is around AI risk.
And artificial intelligence is the most transformative technology in human history.
Intelligence is what birthed all of our inventions and all of our science.
And if you suddenly have artificial intelligence, you can birth an infinite amount of new science.
It is so profound and so paradigmatic,
I think it's hard for people to get their minds around it.
There's obviously a lot of risk involved in AI,
and one of the things that I've noticed:
some of the major frontier AI labs like OpenAI,
they came out after these whistleblowers left OpenAI
saying, hey, we have safety concerns,
and what they said in response was,
we believe in a science-based approach
to studying AI risk,
which basically meant they were pre-framing
all of the people who are safety-concerned as sci-fi oriented,
that they were not actually grounded in real risks here on Earth,
but they were living in sort of the Terminator scenarios of loss of control and sci-fi.
And that's one of the reasons I wanted to have you on:
I want to think about
how can our collective antibodies detect when this kind of thing is going on?
Because that sounds like quite a reasonable thing to say.
We want a science-based approach to AI risk,
and we don't want to be manufacturing doubts
or thinking hypothetically about scenarios.
Just curious your reaction to that.
I have to say I do sometimes get a little nervous
when I hear people say we want a scientific approach
because I want to know who are those people
and what do they mean by a scientific approach
because I could show you people in the chemical industry
saying that, the tobacco industry saying that
and using it as an excuse to push off regulation.
So I would need to learn more about who those people are
and what they mean by a science-based approach.
But I guess what I would say
you know, it's interesting as a historian thinking about how radical this is and how serious the risks are, because I agree with you.
I think it is radical, and I think both the risks and the potential rewards are huge.
But it does remind me a little bit of the chemical revolution, because many of the same things were said about chemicals, particularly plastics, but also pharmaceuticals, other chemicals in the early to mid-20th century.
And chemicals did revolutionize industry.
They revolutionized textiles, plastics was huge, you know, all kinds of things.
And similarly, there were many aspects of the chemical industry that were very helpful to modern life,
and there were some aspects that were really bad.
And so how do we make sense of that?
And I think one thing we know from history is it gets back to my favorite subject that people in Silicon Valley love to hate,
which is regulation, that part of the role of government
is to play this balancing act between competing interests. In fact, you could argue the whole
role of government is to deal with competing interests, that we live in a complex society. What I want
isn't necessarily the same as what you want. And in a perfect world, we'd all get what we want. In a perfect
world, we could be libertarians. We all just decide for ourselves. But it doesn't work because
what I do affects you and vice versa. And so that's where governance has to come in. And it doesn't
have to be the federal government. It could be corporate governance. It could be watchdogs. But I do
think that the way in which some elements of the AI industry are pushing back against regulation
is really scary and really bad. Because if we don't have some kind of set of reasonable
regulations of this technology as it develops, ideally with adaptive management, we could find
ourselves in a really bad place. And one of the things we know from the history of the chemical
industry is that I think it's fair to say that many chemicals were underregulated. You mentioned
PFAS a few minutes ago. Again, DuPont knew a long time ago that these chemicals were potentially
harmful and were getting everywhere. So the industry knew that this was happening and pushed
hard against revealing the information they had, pushed hard against regulation. And we now live in a
sea, a chemical soup where it's become almost impossible to figure out what particular chemicals
are doing what to us because it's not a controlled experiment anymore.
Well, I think that points at one of the core problems here, which is that, you know, as much as you want good science, good science takes time and the technology moves faster than the science. And so the question is, what do you do with that when the technology is moving and rolling out much faster than the science? So what does it mean to regulate this wisely? You talked about one thing, which is adaptive management. Are there other tactics to make sure that as you begin to figure out how to roll this out, the regulation actually helps us adapt and helps us stay with the science?
Yeah, that's a great question.
And again, so good news here is that we do have the ozone examples.
We have at least one example where it was done right, and we can look to that example.
And I think one thing that we learned from that case has to do with the importance of having science, industry, and stakeholder voices involved.
Because I thought one of the really terrible things that someone said recently about AI, I think it was Eric Schmidt, correct me if I'm wrong.
But he said something like, well, no one can regulate this besides industry because we're the only ones who understand it.
Do you remember that?
And I thought that was a very shocking and horrible thing for an otherwise intelligent person to say, because first of all, I don't think it's true.
I mean, I could say the same thing about chemicals.
I could say the same thing about climate change.
But intelligent people, you know, who are willing to work and learn can come to understand what these risks are.
And you talk about this in your book, right, as epistemic privilege.
And one of the challenges that's sort of fundamental to all industries is the people inside of the plastics industry or inside of the chemicals industry, they do have more technical knowledge than a policymaker and their
policy team is going to have. That doesn't mean you should trust them because their incentives are
completely off to give them the maximum agency and freedom. We've covered that on some of our
previous episodes. But that's actually one of the sort of questions we have to balance is, okay,
well, we want the regulation to be wisely informed, we want it to be adaptive and never fixed.
We want to leverage the insights from the people who know most about it, but we don't want
to have those insights be funneled through these bad incentives that then end up where we don't
actually get a result that has the best interest of the public in mind. And I feel like that's
sort of the eye of the needle that we're trying to thread together here.
Yeah, exactly.
And so I think that really feeds into the point I want to make here,
which is absolutely the technologists know the most about the technology,
and so they have to be at the table, and they definitely have to be involved.
But they don't necessarily know the most about how these things will influence the users.
They don't necessarily know the most about how you craft a good policy.
And so for that, you might want, you know, in this case,
you might want people who are involved in the ozone regulation,
who knows something about how you craft good policy,
or stakeholders, or what about labor historians who have looked at automation in other contexts?
I mean, one of the big worries about AI is that a lot of us will be put out of work,
and that can be really socially destabilizing.
Well, there are people who are experts on that.
And so you could imagine bringing to the table some kind of commission
that would bring the technologists, policy experts,
and people who could represent, you know, the risk to stakeholders,
maybe even some psychologists who study children.
I mean, the point is there's
more than one kind of expertise that's needed here.
And the technical expertise is absolutely essential,
but it's necessary but not sufficient.
Yeah, and I certainly agree with you in that
we need all of society to come together
to figure out how to do this well.
But, you know, having lived through the early Internet
and the Ted Stevens "the Internet is a series of tubes" era,
and the inability of Congress to understand
what they were dealing with,
I have a certain amount of sympathy
for this learning curve that we're all on together.
I mean, Tristan and I can't even keep up with the news
and this is our full-time job.
And so I'm curious because not only will people say
that certain people outside of industry don't understand,
but people say that our society has become over-regulated
or the regulatory apparatus is too slow,
not just from the right, but from the left.
People will say that building new housing is too onerous
because of environmental regulations, for example.
And I'm curious how you respond to that
because you want to pull all of society,
you want to build committees, you want to do this.
And I think I agree with you from a values perspective
that we need more of society in this conversation.
But I'm not sure how good we are at doing that.
Yeah, no, you're absolutely right.
And I'm not, you know, I don't want to come across sounding like a Pollyanna,
although I should always point out, you know,
the moral of the Pollyanna story is that the world becomes a better place
because of her optimism.
And I think we often forget that.
We think calling someone a Pollyanna is a criticism.
But I think, I guess I would say two things about that.
First, I'd want to slightly push back on the idea that we have people on the left as well as the right
who are anti-regulation.
I mean, yeah, but, like, I mean, I've just written a 500-page book about the history of business opposition to regulation in this country, and it's almost all from the right.
There are some examples, but even the housing stuff, I mean, I was just talking to an urban historian the other day about how the real estate industry is really behind a lot of this pushback against housing regulation, not communities.
I mean, there are some exceptions, particularly in California.
but, you know, there's been a hundred-year history.
I mean, this is the story we tell in the big myth
of the business communities insisting that they are over-regulated
and they've used it to fight back against regulation of child labor,
protections of workers' safety, tobacco, plastics, you know, pesticides, DDT,
and also saying that if, you know, if the government passes this regulation,
our industry will be destroyed.
The automobile industry claimed that if we had seatbelt laws,
the U.S. auto industry would be destroyed,
and none of that was true.
And every time a regulation was passed, industry adapted
and typically passed the cost onto consumers,
which maybe wasn't always great.
Maybe sometimes we paid for regulations we didn't really need.
But in general, the opposition to regulation
generally comes from the business community
who wants to do what they want to do
and they want to make as much money as they want to make
and make it as fast as possible.
So it gets back to what Tristan said about the incentives.
I understand that.
If I were a business person, I would probably want to run my business the way I want to run it as well.
But in a democratic society, we have to weigh that against the potential harms to other people,
to the environment, to biodiversity, to children.
And so this gets back to another thing that's really important, especially in Silicon Valley,
which is the romance of speed.
We live in a society that has – American society has always had a romance with speed,
railroads, automobiles, space travel.
We love speed.
We love novelty.
and we like the idea that we are a fast-paced, fast-moving society.
But on the other hand, sometimes moving too fast is bad.
Sometimes when we move fast and break things, we break things we shouldn't have broken.
And I think we are witnessing that in spades right now.
I mean, we have a broken democracy in part because we move too fast,
in my opinion, with telecommunications deregulation.
Something that was supposed to be democratizing and give consumers more choice
has ended up giving us less choice,
paying huge bills for our streaming services,
and really contributing to political polarization
because of how fragmented media has become.
I have an idea.
Let's go even faster with AI.
Yeah, exactly.
So, you know, this is a really good moment
to be having this conversation
because one of the things we're seeing now
is exactly what we wrote about in our last book,
The Big Myth, which is the business attempt
to dismantle the federal government
because they resent the role
that the federal government has played in regulating
business in this country. And this is a story that has been going on for 100 years, but is
suddenly unfolding in real time incredibly rapidly in front of us. And part of this argument has to do
with this idea that government regulation is a threat to freedom and that any restriction on
business puts us on this slippery slope to loss of freedom. But of course it's not true because we
make choices all the time. And so one of the examples I like to cite, which was actually from a debate
among neoliberals in the 1930s about what it meant to be a neoliberal.
And one of them said, look, being against regulation because you think it eliminates freedom,
is like saying that a stoplight or a stop sign or a red light is a slippery slope on the road
to eliminating driving.
No one who thinks we should have stop signs on roads is trying to eliminate driving.
We're trying to make driving safe.
And most regulations that exist in the world, or, I don't know, many, but
probably most, have to do with safety, have to do with protecting workers, children,
the environment, biodiversity, against, you know, other interests.
And so it's always a balancing act.
It's about, of course, we want economic activity, and of course we want jobs.
And of course we know that business plays an essential role in doing those things.
But we also don't want business to kill people with dangerous products.
And we don't want business to trample the rights of working people.
We don't want business to exploit children.
Absolutely.
You know, as we talk about the urgency that we're all feeling and the urgency of these problems
and how AI even makes that worse, I want to fold in that everything feels so urgent.
And some of that urgency is real in that we're hitting these really real limits
and we're undermining parts of our society.
And other parts of it seem like a hall of mirrors that the Internet has created
where everyone can't slow down to even think about a problem
because it's all so urgent that we just have to act now.
So I can't even sit with my uncertainty about something.
How do you think that this conversation space,
or this compression that we're all feeling
around conversations that may take a decade to settle the science,
how do you think that plays into the problem?
And what would you do?
Yeah, I think that's a great question.
And I feel like in a way it's one of the paradoxes
of the present moment.
We are facing urgent problems.
Climate change is irreversible.
So the longer we wait to act,
the worse it gets and the less we're able to fix it.
So there should be some sense of urgency about it.
And the same with AI, right?
I mean, as we've been talking about this whole hour,
this technology is moving very quickly.
It's already impacting our lives in ways we wouldn't have even imagined five or ten years ago.
But at the same time, I think it would be really bad to panic.
Panic is never a good basis for decision making.
And there's a way in which the very urgency of it really requires us to stop and to think and to listen.
And especially if we think about adaptive management, adaptive management is all about
not overreacting in the moment, making the decision that makes the most sense based on the
information you have, but being prepared to adjust in the future. And one of the ways that
the Montreal Protocol worked was by setting specific deadlines, dates at which the people
involved would review the evidence and decide whether an adjustment was needed. And I think that's
a beautiful model because it incorporates both acting on what we know now, not delaying, not
making excuses to delay, but also recognizing human frailty, recognizing the benefits of learning
more information and being able to work in that benefit and making it structured. So it wasn't just
a sort of promise, oh yeah, we'll look at that again next week, but it was actually structured
into the law. That feels like something that all laws should be doing, actually, especially all laws
that have to do with emerging science or technology. Is this a common practice or is this a one-off
that Montreal did? Yeah, that's a great question. It would be a good thing to study. I don't really
know the answer to that. I certainly know that some agencies have been created with sunset clauses,
although mostly not. So I do think, you know, the conservatives are right about that, that we should
have better mechanisms for if we set up a government agency to think about, you know, how long do we
want this agency to operate? And should there be some mechanism for, you know, after 10 years,
deciding if you want to renew it, almost like when you take out a library book, you know,
you could renew it. I think that would be a useful thing to do. And certainly, you know,
One of the things that Erik Conway and I write about in our new book is that in the 1970s, it was
absolutely the case that there were regulations from the 20s and 30s that needed to be revisited.
I mean, there was a whole world of trucking regulation that made no sense, given that we now
had airlines.
Telecommunications, it was absolutely right in the Clinton era that we revisited telecommunications regulation
that was based on radio now that we had the internet.
But again, there wasn't a good mechanism for doing that.
And I think the Clinton administration moved too quickly and made some really big mistakes and broke some really serious things.
So I think that Montreal is a good model for thinking about how could we do something like that, you know, maybe for AI.
Maybe we should have some kind of commission on AI safety that has a 10-year term, but that is renewable if Congress or whoever votes to renew it at that time, otherwise it's sunsets.
You know, that's really striking because this is a new thought for me, which is you either hear people saying,
look, there are too many regulations, or people saying,
well, it's not regulated enough. But what you're saying
is it's both at the same time: we always
have old regulations that we need to pull off,
and we have new ones that aren't protecting us
in the ways we need that we need to put on,
and that we should expect that we should always be doing that
continuously. I like that way of putting it.
And it's kind of like, you know, there's that saying about generals
always fighting the last war.
I mean, one of the problems of history,
and as a historian I believe absolutely
in the value of history and all the lessons we can learn,
but sometimes people learn the wrong lessons,
or they carry forward experiences from the past
that maybe aren't necessarily relevant now.
And so we need some balance between creating a thing that we think we need now,
but also creating a mechanism to revisit it
and to learn from our mistakes.
There's also a way that AI can play a role
in helping to rapidly accelerate our ability to find those laws
that need updating or are no longer relevant,
to help craft what those updates would be,
and to find laws that are in conflict with each other.
I'm not trying to be a techno-solutionist
or say that AI can fix everything,
but I think to the degree that law is actually part of how we solve
some of these multipolar traps,
the "if I don't do it, I lose to the guy that will" dynamic.
Law is the solution, but the problem is that people have seen so many examples,
rightly so, of bad laws, bad regulation.
And so this is about how do we get more adaptive, more reflective,
ways of doing this, and AI can be actually a part of that solution
when I think about a digital democracy.
So we've talked a lot in this podcast about how hard it is to make
sense of the world right now, these competing doubts and over-certainties and these different
cultic takes that social media has riven our world into. What are ways that individuals can
actually stay grounded and understand when something is distorted? What are the antibodies that
prevent people from being so susceptible to disinformation right now? Well, I think, you know,
this is a really tricky question. And if I had a simple answer, that would be my next book,
right? Ten ways not to be fooled by nonsense or something like that. And maybe I'll write that book.
I think an important thing to realize is that, you know, we all have brains and we all have the capacity to use our brains.
So I really encourage people to kind of embrace their own intelligence and then to ask questions.
So if someone is telling you something, the most obvious question to say is, okay, well, who is this person and who benefits from what they're saying?
And what is their interest?
And, you know, that can be used in a hostile, skeptical way and it sometimes has been.
But in general, it's always legitimate to say, well, what does this person get out of it? So I admit freely,
I want you to read my books. I get some money from my books, but not a lot. It's like a buck a book,
you know, I can't quit my day job. As opposed to the fossil fuel industry that is looking at trillions
of dollars in profits. So if you ask who you are going to trust about the climate: climate scientists,
most of whom get paid good middle to upper-middle-class salaries, but they don't get paid any more
if they say climate change is serious
than if they say it's not serious,
or the fossil fuel industry that stands
to earn trillions of dollars more
if they get to continue doing
what they're doing? So the vested
interests there are pretty lopsided,
and you don't have to be a brainiac
or a Harvard professor to see that
difference.
did our AI dilemma talk about AI
risk and people said but these guys
profit from speaking about risk and
doomerism and here's all the problems of technology
as if that's what is motivated
motivating our concerns. And to the degree that we profit in any way from talking about those concerns, how does that compare relative to the trillions of dollars that the guys on the other side of the table can make? And I think how does one demonstrate that they are a trustworthy actor, that they are coming from a place of care about the common good? And that's built over time. And I think it's becoming, especially in the age of AI, when you can basically so doubt about everything and people don't know what's true, the actors that are consistently showing up with the deepest care and trustworthiness will sort of
win in that world as we erode that trust. Yeah, I think that's right. And that's one area where I think
scientists could do a better job. A lot of scientists, we've been trained to be brainiacs, to use
technical knowledge, to use mathematics. And in our science, those tools are important and good,
but we also have to recognize that when you talk to the broader public, those tools are not necessarily
the best ones. And then you have to relate to people on a human level. One thing I've been thinking
a lot about in recent years:
I feel that in academia, we are taught to talk, right?
We're taught to get our ideas out, to write books.
And it's all about, you know, I'm getting my ideas out there.
And we aren't really taught to listen.
And so I really think that it's important for anyone who's in any controversial space,
whether they're coming at it as a scientist, a journalist,
a technologist, whatever, to recognize the importance of listening
and to try to understand people's concerns.
because, you know, I spent some time in Nebraska some years ago
talking with farmers, and one of the farmers said to me,
I just don't want the price of my fuel to go up.
I thought, well, that's totally legitimate.
If I were a farmer, I wouldn't either.
So it means if we think about climate solutions,
we have to think about solutions that don't hurt farmers.
Tax credits, you know, people have talked about fee and dividend systems
for carbon pricing, but to be mindful of how this is affecting people
and how we can structure solutions that take those
considerations into account.
Naomi, thank you so much for coming on Your Undivided Attention.
Your work on The Merchants of Doubt and the Big Myth is really fundamental and deeply appreciate what you're putting out in the world.
Yeah, thanks, Naomi.
Thank you. It's been a great conversation.
Your Undivided Attention is produced by the Center for Humane Technology, a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer, and our executive
producer is Sasha Fegan. Mixing on this episode by Jeff Sudaken, original music by Ryan
and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making
this podcast possible. You can find show notes, transcripts, and much more at HumaneTech.com.
And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps
other people find the show. And if you made it all the way here, let me give one more thank
you to you for giving us your undivided attention.
Thank you.