Benjamen Walker's Theory of Everything - Second time as fake (False Alarm! part x)
Episode Date: August 23, 2018

The Nazis believed the secret to turning a lie into a truth was repetition; for the Spiritualists it was denial. Have computers come up with something new? Phase two of our mega-mini series... concludes, and we return from our tour of the 1930s and 1880s. ToE's Special correspondent Chris wraps up his liar's guide to American history and your host is forced to deal with the Backfire effect! 2018 is not the first time truth, fiction and lies have merged together. In the 1850s people turned to the dead for answers. In the 1930s, Hitler and the Nazis tried to remake the world using magic and pseudoscience. In phase two of False Alarm! we're going to bounce between the second half of the 19th century, the interwar years and the present to find out if we are doomed for a repeat.
Transcript
You are listening to Benjamin Walker's Theory of Everything.
At Radiotopia, we now have a select group of amazing supporters that help us make all our shows possible.
If you would like to have your company or product sponsor this podcast, then get in touch.
Drop a line to sponsor at radiotopia.fm. Thanks.

Why is there something called influencer voice? What's the deal with the TikTok shop?
What is posting disease and do you have it? Why can it be so scary and yet feel so great to block
someone on social media? The Never Post team wonders why the internet, and the world because of the internet, is the way it is. They talk to artists, lawyers, linguists, content creators, sociologists, historians, and more about our current tech and media moment.
From PRX's Radiotopia, Never Post, a podcast for and about the Internet.
Episodes every other week at neverpo.st and wherever you find pods.
This installment is called Second Time as Fake.
Even though Donald Trump is proving himself to be such a master of lies, he still has nothing on W.
George W. Bush is the greatest liar in recent American history.
You mean because of the lie that Iraq had weapons of mass destruction?
No, the WMD story, that was journalists like Judith Miller putting it on the front page of the New York Times.
Amateur stuff, really.
George W. Bush, he figured out how to use torture to turn lies into truths.
W. authorized the invasion of Afghanistan,
or Operation Enduring Freedom,
on October 7th, 2001.
And in less than a week, he had on his desk CIA reports
detailing the results of enhanced interrogations
that were taking place on the battlefield.
And?
They were all over the place.
One guy started the whole Tora Bora story about the secret caves.
Another guy said bin Laden had escaped to Pakistan.
Another guy said China.
So what's the reason for all the different stories?
Well, as one CIA analyst boldly put it,
if you beat someone hard enough,
you can make them say anything.
So you're saying there were warnings
about how torture could lead to faulty intelligence
like at the beginning of the war?
Yep.
But W, he doesn't see a warning sign.
He sees an opportunity.
In the wake of 9-11,
he knew he had a shot at getting the country on board
with the idea of invading Iraq.
But to do this,
he needed to connect Osama bin Laden and Saddam Hussein.
But there was no connection.
Al-Qaeda and Saddam hated each other.
Exactly.
They needed to create a connection.
And what W recognized pretty much immediately
from these CIA reports from Afghanistan
was that if we use torture,
we could get one of these Al-Qaeda dudes to come out and say
that Saddam Hussein was training al-Qaeda members in bomb making and poisons and gases.
And on November 11th, 2001, they found their guy, Ibn al-Sheikh al-Libi.
He's the guy Colin Powell cited in his UN speech.
Yep.
What exactly did he confess to again?
He claimed that Saddam had invited two al-Qaeda associates
to come to Iraq in December 2000.
But this is actually irrelevant.
It's when and why he says this that's important.
When al-Libi was captured, he ended up under the jurisdiction of the FBI.
And al-Libi, who could speak English, cooperates.
He even provides some intelligence on Richard Reid,
the British moron who later
tries to fucking take out a plane with his shoe.
But in January of 2002,
on W's orders,
he's
taken from the FBI
and given to the CIA,
who then render him to Egypt.
Why Egypt?
Well, the CIA was confident it could get al-Libi to say what they wanted him to say,
but they thought it would take a few months.
W demanded a faster timeline.
And, well, the Egyptians are really good at torture.
They take al-Libi off the plane, cram him into a 20-centimeter-wide box,
and leave him like that, all bent up and twisted overnight.
And then in the morning, five guys pull him out, start beating the shit out of him,
and just like that, he confesses.
But come on.
This is like the definition of an unreliable confession, though.
But reliability wasn't the objective.
The objective was getting this guy to say something people wanted him to say.
So it's like the Saddam Hussein version of crying uncle.
Yes.
They beat him until he repeated back
that Al-Qaeda's uncle was Saddam Hussein.
If your objective is to get people to say
what you want them to say, to lie,
torture is very effective.

On the evening of October 21, 1888,
Maggie Fox, now in her mid-50s,
stepped out onto the large stage of the Opera House on East 14th Street to face 4,000 people.
She had been sleepless for days,
pacing her apartment in a manic state, playing
the piano, talking excitedly to visiting friends about the blow she was about to deliver, and
of course, drinking. The audience whispered to each other, wondering what the legendary
Maggie Fox had to say. They called out taunts and cries of support. Maggie didn't react to either her fans or detractors.
By this point, she had been famous for 40 years.
She surveyed the room, put on her glasses, curtsied,
and with her words, sent a shockwave through the auditorium.
My sister Katie and I were very young children
when this horrible deception began, she said.
We were very mischievous children and sought merely to terrify our dear mother, who was a very good woman and very easily frightened.
It took the crowd a minute to realize what was happening. Maggie Fox, star of the most famous medium family in the world,
was saying that her career, and therefore the religion of spiritualism,
by then some eight million strong, was built on a childhood prank.
She and Kate had made up the ghost as a joke.
The girls had noticed how scared the rapping made their mother,
and so they egged each other on to knock ever louder on their bed frame.
After those first few days of rapping in Hydesville, Maggie explained,
the sisters had begun to add props, tying lines around objects and furniture,
so that they could cause things to fall, making ever louder noises in the night.
They took apples from the cellar and tied strings around them.
Then they would throw the apples from their beds
and yank them back under their covers,
making a bumping sound along the dirt floor through the room.
When their mother ran into their bedroom,
they would look at her startled and wide-eyed.
As time went on, the girls also cultivated a special skill.
They found they could loudly crack their toe knuckles and ankle bones.
They practiced throughout the day.
When they did this against their bed frame at night, the wood would even produce the vibration.
Like most perplexing things, when made clear, it is astonishing how easily it is done, Maggie said from on stage.
The rappings are simply the result of a perfect control of the
muscles of the leg below the knee, which govern the tendons of the foot and allow action of the
toe and ankle bones that is not commonly known. Such perfect control is only possible when a child
is taken at an early age and carefully and continually taught to practice the muscles, which grows stiff in later years.
A child at 12 is almost too old.
With control of the muscles of the foot,
the toes may be brought down to the floor
without any movement that is perceptible to the eye.
The whole foot, in fact, can be made to give rappings
by the use only of the muscles below the knee.
In a Chicago Tribune article called Mrs. Fox Kane's Big Toe,
a reporter describing the event said,
one moment it was ludicrous, the next moment it was weird.
According to the article, the spiritualists in the audience
almost frothed at the mouth with rage
and muttered furious threats against their foes.
With Kate looking on from a box and applauding, Maggie even offered a demonstration,
taking off her shoes and tights to show in bare feet
how she could strike her joint against wood to make a loud rapping sound.

Maggie was paid $1,500 for that performance,
and her confession was published in the New York World.
Those proceeds only lasted so long,
especially because the sisters seemed fully committed to drinking themselves to death.
All three sisters died within just a few years of Maggie's confession.
Leah in 1890, Kate in 1892, and Maggie in 1893.

That's writer Ada Calhoun giving us the final chapter
in our story about the life and death of the Fox sisters.
But let's return to that night of October 21st, 1888,
because Maggie's confession and its repercussions
are relevant to our situation today.
You see, Maggie's confession that she and her
sisters had been faking it from the very beginning made headlines, international front page headlines.
At this point, there were millions of confirmed spiritualists living all over the world. A
journalist even published an account of the confession and his interviews with Maggie and Katie in a pamphlet that he called The Death Blow to Spiritualism.
But the only deaths were Maggie's, Katie's, and Leah's. Spiritualism suffered not a single blow.
In fact, its greatest days were still ahead. For spiritualists, the only thing that was fake was Maggie's confession. Some went as far
as to claim that the $1,500 that she'd been paid for her performance was somehow irrefutable proof
of the fakery. After Maggie's death, spiritualists retook control of her story, and she regained her footing in the movement.
It was as if her confession and demonstrations as to how she and her sisters had used the bones in their feet to create rapping noises had simply never happened.
In his 1926 two-volume History of Spiritualism, Arthur Conan Doyle does devote a few paragraphs to the incident, but only to express his doubts as to what actually took place that night at the New York Academy of Music.
The rappings, he wrote, might be discounted upon the grounds that in so large a hall, any prearranged sound might be attributed to the medium.

The backfire effect is a term that my co-author Jason Reifler and I used to describe a finding
in one of our early pieces of research on the difficulties
of correcting political misinformation.
This is Brendan Nyhan, a political scientist and a professor at Dartmouth College.
About a decade ago, with his co-author Jason Reifler, he conducted some groundbreaking
scientific research into how people who are committed to believing things that are not
true deal with
corrective facts. The first study we conducted, which launched us into this research, concerned
the belief that Iraq had weapons of mass destruction immediately before the U.S. invasion
in 2003. We presented people with a realistic mock news article quoting President Bush using
real language that the president used at the time in which he described the reasons for the war in
Iraq in a way that suggested there had been weapons. There had been a threat from weapons
of mass destruction that the invasion had avoided or preempted. And what we wanted to see was whether giving people that corrective information,
saying actually Iraq was found according to the official government investigation
conducted after the war to have not had weapons of mass destruction
or an active weapons of mass destruction program,
we wanted to see if giving people that corrective information
would make them less likely to believe in this myth, which persisted for years.
And when we gave conservatives that corrective information, at least in this study, they
reported believing in the myth more than those who hadn't seen the corrective information
at all.
And we found this to be a quite striking finding and a discouraging one, because the hope
would be that if we gave people factual information, that people would update their factual
beliefs in response to receiving this new information. Well, I thought there were weapons of mass destruction. Now I've found out that doesn't seem to be the case, and I'll update my views. We didn't find that.
In two of the five studies we conducted, we found what we considered to be a backfire effect.
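To make that design concrete, here is a toy simulation, purely illustrative and not Nyhan and Reifler's actual data or code: randomly assign people to see a correction or not, then compare average belief in the false claim across the two groups. The numbers below are invented so that the corrected group ends up believing the myth slightly more, which is the "backfire" pattern.

```python
# A toy sketch of a randomized correction experiment (illustration only).
# Assumes only NumPy; all effect sizes here are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 500
correction = rng.integers(0, 2, n)            # 1 = saw the corrective info

# Belief in the myth on a 1-5 scale. Here the correction *raises* belief
# slightly, mimicking a backfire effect among a committed subgroup.
belief = rng.normal(3.0 + 0.3 * correction, 1.0, n).clip(1, 5)

effect = belief[correction == 1].mean() - belief[correction == 0].mean()
print(f"correction effect on belief: {effect:+.2f}")  # positive = backfire
```

A negative difference would mean the correction worked as hoped; the positive gap simulated here is the counterproductive pattern the original paper warned about.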
Now, I saw Brendan Nyhan speak at a conference around the time his study came out.
And his ideas totally resonated with me.
It was like I finally understood how climate deniers and anti-vaxxers' brains worked and why, when presented with facts, they doubled down on their misconceptions of reality.
When we showed people corrective information, it was often ineffective or counterproductive
with the group whose beliefs we might be most concerned are
distorted or unsupported by credible evidence.

Around 2013, two young scholars, Ethan Porter and Tom Wood, tried to replicate the study's original findings, and they couldn't.
They started doing a series of experiments, building on and modeled on our original research, and they found no evidence of backfire in these experiments they conducted.
So frequently, they found differentials in the extent to which people updated their beliefs that correlated with people's political viewpoint.
So it might be that one side updates more in response to corrective information than another.
But in general, the finding was
backfire effects seem to be relatively rare.
And they conducted a large number of studies.
It's very good research.
When Brendan Nyhan and his co-author
learned about the new research,
they reached out and proposed a collaboration
to observe how people deal with corrective information
in a supercharged,
politicized context.
Like 2016.
So we conducted an experiment during the general election campaign in 2016, correcting President
Trump's suggestion in his convention speech that crime was rising.
And we actually gave people corrective information saying, actually, in the long run, crime is down. You know, America's become a much
safer place in the long term, right? This idea that we're in a, you know, this crime-ridden
hellscape is not supported by the data. And we wanted to see how effective that would be at
changing Trump supporters' minds, given the way he had portrayed levels of crime in this country.
And encouragingly, we found that people did, in fact, update their beliefs.
They came to have more accurate beliefs about changes in crime
in the United States over the long term.
And that included both Trump supporters and Hillary supporters.
So that was the good news.
The bad news, if you want to think about it that way,
is that learning that a candidate had presented you with misleading information had no effect on
how people felt about the speaker. So in this case, Trump supporters updated their factual
beliefs about changes in crime, but learning that the way crime had been portrayed was misleading had no effect on how they felt about Trump.
And that seemed, we thought, to provide some important insight into how the 2016 campaign
played out.
So if I'm following this correctly, instead of something called the backfire effect, we
now have something that political scientists like you call
motivated directional reasoning. But I'm having a hard time understanding how the two are actually
different. Aren't the results in the end the same? So in our original article, we argued the
following. When people get corrective information that's unwelcome, that's in some way inconsistent with their existing attitudes or beliefs or political
viewpoints, they're going to often be motivated to resist that information. It calls into question
something they believe or would like to believe. It undermines a statement by a politician they
admire and so forth. And so they might counter argue that information. They might think of
reasons to disbelieve it, to distrust the source that provided it and so forth. And so they might counter argue that information. They might think of reasons to disbelieve it, to distrust the source that provided it, and so forth. We argued in the
process of counter arguing that information, they might actually come to convince themselves even
more strongly of that view that was being called into question by the corrective information.
What we found in this more recent study is a little bit more subtle. People said, okay, fine, crime is down in the long run, but they expressed higher levels of skepticism about the accuracy of the data and more concern about bias in how the information was being described.

Okay, things have actually accelerated even more in the two years since you did this work, which is maybe what I mean when I say I'm having a hard time understanding the subtlety, because now it just feels like subterfuge.
To put it bluntly, could you explain the difference between someone rejecting a fact because they don't want to accept factual reality and someone rejecting a fact because they think it's fake news?
Yeah, so what you're describing gets into this question about how we react to unwelcome
information. And the reasons we originally thought a backfire effect was a plausible
finding in the first place were these processes of directionally motivated reasoning.
And it's hard to observe American politics today
and not think about directionally motivated reasoning, right?
This idea that people aren't simply motivated
by accuracy goals,
but they have directional preferences
about who they would like to believe to be true
and what they'd like to believe about the world,
right? Those seem to play a really important role in how people understand what's going on.
And the question is, what can break through those filters, right? What kinds of information
can break through them?

Yeah, I just want to say that I am totally aware of how ridiculous I must sound, and that I'm, you know, trying to argue with you over why we should keep the backfire effect when factual science is showing us that it might not be real. But that said,
I am having a hard time imagining how personally I'm going to make sense of the world because this theory
has actually become, you know, almost like a security blanket for me. It's become, you know,
like something deeply important to me and helped me make sense of this world we live in, especially,
you know, this current crazy moment. I mean, I've talked to a lot of people for this series, Brendan, and
I don't think I've been as nervous for any of the interviews as I am now.
Yeah. You know, there's some irony in a study of the difficulty of correcting misperceptions,
generating misperceptions about what the study found. But some version of that is what we've experienced. The original study simply said there is a threat that corrections could not only be ineffective, but potentially counterproductive. And the world ultimately took that, as it was paraphrased and re-paraphrased, to mean every effort to correct misinformation will produce a backfire effect, which is neither what we found nor what we argued in the article. I think people latched onto that idea because it seems to help them make sense of why it was so challenging to correct the misperceptions that were out there, whether those were about Iraq or climate change or something else.

Well, now you're being generous and kind,
and I'm feeling even more embarrassed. And I should point out that I do also recognize
that one of the problems with clinging to something like the backfire effect
is that it definitely makes it easier to dismiss people
who you believe can't deal with factual reality,
which obviously is only going to deepen this partisan divide that we're in right now.
Yeah, no, and that's part of why I worried about how the backfire effect has been understood,
is that it would cause people to give up on engaging with the other side or to think that
there's no point in trying to fact check misleading claims. And I think that would, you know, do us a terrible disservice.
We can't allow the backfire effect to be an excuse for passivity or a reason to stop trying.
We should all keep trying as best we can.
And, you know, this intensely partisan age we live in is going to present really difficult challenges because
people's partisan commitments are just at some level more about their identity than anything
connected to evidence or facts, right? And the sooner we recognize that and start to understand partisanship more as a form of identity for many people than as a kind of rational calculation, the better off we'll be at figuring out what to do next.

I've mostly been hanging out with artists for this False Alarm series,
but after my interview with Brendan Nyhan,
I realized I need to talk with some more scientists.
You could just call me Blaise Agüera y Arcas.
I am a distinguished scientist at Google AI.
Okay. Blaise Agüera y Arcas isn't just one of the main scientific brains at Google AI.
He's also the founder of the Artist and Machine Intelligence program at Google,
which is how I came to cross paths with him.
And what really blew my mind was when he told me what his fears were about AI.
He's not frightened of sentient AI blowing up the planet or AI robots taking all our jobs.
He's scared about fake news.
It's been really frustrating to see fake news reappropriated as a way to insult journalists, when we're actually at this extraordinary moment
when the synthesis of media by AI systems starts to be completely convincing, which is so,
so different from when we started the artists and machine intelligence program
two, three years ago. At that point, when we were using neural nets to do something like, you know, hallucinate a face, those faces were really trippy looking.
We were using these DeepDream-type techniques that Alexander Mordvintsev had invented. And, uh, you
know, all of that was, was really interesting artistically, but there was no way that you could
think about any of that as photography, you know, as something that would actually convince somebody that it's real. And honestly, we thought that was many years away, and we were wrong. It's now.

Yeah, so I'm doing this series on the real and the fake and what's going on now,
and one of my biggest challenges is communicating that some of the problems we have have these deep historical legacies
while others are new, like brand new,
and require more of a new kind of thinking.
And one of these problems, these new problems,
seems to be what's going on with machine learning and neural nets.
Can you break this one down for us?
Like, what exactly is new?
Well, up until about 2006, the huge majority of machine learning was done using feature engineering techniques.
And what this means is that it happens in two stages.
First, you take your data and you have these hand-coded, hand-engineered feature detectors that run on it. If it's faces, it might be detecting
where the corners of the eyes are and the corners of the mouth and the nose and so on.
You just get a handful of numbers out of that. Then once you've reduced this complicated image
into a handful of numbers, then you can do the machine learning part,
because we could only have very simple models, basically. The big change is that
starting in 2006 and then accelerating dramatically up until today, we've started to have these deep
neural nets that allow you to skip the feature engineering stage altogether and go straight from
pixels to meaning or straight from audio samples to what was said for the speech-to-text problem.
And it sort of took everybody a little while to realize
that since you're going straight from the pixels, for example, to meaning,
you could invert those systems.
You could go from meaning to pixels as well.
And that just wasn't possible with the old systems
because you can't invert through that feature engineering layer.
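Here is a minimal sketch of the old two-stage pipeline Blaise describes. This is my illustration, not Google's code: the features, data, and labels are invented, and the point is only the shape of the workflow, a hand-coded feature stage followed by a very simple model.

```python
# Pre-2006 style machine learning: hand-engineered features, then a simple
# model. Assumes NumPy and scikit-learn; everything here is toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((200, 28, 28))                      # stand-in "pixels"
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)   # toy target

# Stage 1: hand-coded feature detectors reduce each image to a handful
# of numbers (e.g. overall brightness, a crude horizontal-edge score).
def hand_features(img):
    return [img.mean(), np.abs(np.diff(img, axis=0)).mean()]

X = np.array([hand_features(img) for img in images])

# Stage 2: a very simple model on that handful of numbers.
clf = LogisticRegression().fit(X, labels)
print("accuracy on hand-coded features:", clf.score(X, labels))

# A deep net replaces hand_features() entirely, learning its own features
# straight from the pixels ("pixels to meaning") -- and because there is
# no hand-coded stage in the way, that mapping can also be inverted,
# going from meaning back to pixels.
```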
On Google's AMI site, you can see some of this early work that Blaise is referring to: artists using technology to experiment with inverting and reversing these machine learning processes. Well, early as in, like, a few years ago. Trippy hallucinations of computers imagining and dreaming.
It's GANs, or Generative Adversarial Networks, that now make it possible for computers to learn and create reality on their own.
What you do with a GAN is you train two neural networks simultaneously.
One of them is the artist
and the other one is the critic.
And the critic's job is to distinguish real from fake.
So it just has this one bit output,
you know, is what I'm looking at
drawn from the real world or is it fake?
And the artist's job is to fool the critic.
And you sort of train these two up together.
They're sort of duking it out.
And it's hard.
It's hard to train these adversarial systems. But if you do it right, then they kind of ladder up together. It's actually not unlike the way AlphaGo Zero is trained by playing against itself at Go and getting better and better.
So when you do that, of course, neural nets are really, really good at detecting patterns. So
anytime the artist is making something unrealistic, it gets corrected
by a critic and you end up with these extraordinarily realistic images. And that's
really brand new. That's just in the last year. So the reason that that matters with respect to
the whole fake news problem is because this is no longer about an artist either airbrushing things out or about a computer scientist sort of acting like an artist
doing detailed sorts of fussing around and engineering for hours, days, weeks in order
to make something convincing, it's now something that you can literally just have a neural net
do on its own based on very high level directions.
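Here is a minimal sketch of that artist-versus-critic loop in PyTorch. The architectures, data, and hyperparameters are invented for illustration, nothing like a production image GAN, but the adversarial structure is the one Blaise describes: a critic trained to output one bit, real or fake, and a generator trained to fool it.

```python
# A toy GAN on 2-D points (illustration only). Assumes PyTorch.
import torch
import torch.nn as nn

# The "artist" (generator): turns random noise into fake samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# The "critic" (discriminator): one logit out -- real or fake?
critic = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for "the real world": points from a shifted Gaussian.
    return torch.randn(n, 2) + torch.tensor([4.0, 4.0])

for step in range(2000):
    # 1) Train the critic to tell real from fake.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    c_loss = (loss_fn(critic(real), torch.ones(64, 1)) +
              loss_fn(critic(fake), torch.zeros(64, 1)))
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # 2) Train the artist to fool the critic (wants a "real" verdict).
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two losses pull against each other: every pattern the critic learns to flag as unrealistic becomes exactly the thing the generator is pushed to correct, which is the "laddering up" dynamic described above.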
Politics and technology have always been intertwined, especially in the 20th century.
But these recent developments in machine learning have totally changed the nature of this relationship.
More and more information is now coming out about the way social media hacking had a bigger role in Brexit than we thought, and it may have had a significant role in the election of Trump.
If you look at things that way, then maybe the political moment is also partly a function
of the fact that the technology is getting to this point. We're now not necessarily talking
about social media being sort of photographically
synthesized the way the neural nets make possible now, but just the fact that we're living in this
kind of umwelt now in which the majority of our sensory input is coming from our phones.
And that is a very new condition, not quite as new as convincing neural synthesis,
but still only
a few years old, and a radical departure from what we had before.
So another more traditional warning about AI is the idea that these systems could one day
trap us in simulations, or we ourselves might end up as simulations in the future.
Is this where our fake news generating AI systems are taking us?
We are kind of living in simulations already.
The moment that we went from experiencing most of our umwelt,
most of our environment, in face-to-face interaction,
in real-world interaction,
or maybe just getting our dose of nightly news from TV, which was this shared medium that everybody was experiencing, or the
same thing with newspapers. The moment we shifted to experiencing the majority of what we experienced
through the phone, the fact that our brains are not being simulated yet is almost beside the point.
Our umwelt is already simulated.
So it's sort of the perfect moment for this technology to arrive on the scene that allows that world to be manipulated to arbitrary ends
at very, very large scale, at industrial scale,
at a scale that wouldn't be possible for any human to enact.
In just the same way that a very, very large-scale AI used in an analytic way
can create a surveillance state, as we see happening in China,
at a scale that is absolutely unprecedented and would never have been possible in the days of the DDR,
the synthesis side of that, being able to make arbitrary media using computer
algorithms at that kind of scale also creates a simulation for each of us individually,
potentially, that then becomes just sort of impossibly corrosive to the whole idea of democracy.
I want to finish this off by coming back to art. Maybe a weird question, but you did
found a program for artists and AI. Yeah, what do you think we can learn from artists right now? Or
what can artists do with this technology right now?

For sure, we have a lot to learn from art.
And what we need to learn maybe most of all
is when we are looking at art
and when we're looking at reality.
Obviously, I would have liked to have shown Blaise my AI plant.
If you recall, this is the plant I got from that weird startup CEO a few months ago. The plant I took with me on the Radiotopia Live East Coast tour.
On stage, this plant and I performed our own version of deep learning.
I couldn't afford a GAN, nor did I have the know-how to build one, but I did design a series
of flashcards based on some of the stories I've been featuring on the podcast. A series of flashcards
that enabled me to train my plant to understand the relationship between the real and the fake.
Yeah, we're talking plant-based machine learning.
I figured if I could teach this plant how to tell the difference between what is real and what is fake,
then I could reverse the process.
And then this plant could teach me how to do something.
Something I've never figured out how to do.
And that is fake it till you make it.
And you know what, dear listener?
I was this close to finally learning the secret from this plant that I had spent weeks and weeks painstakingly machine training by hand.
Someone stole it.
It was the last night of the tour in Boston, at the after party.
I was showing it to these guys from a robotics startup,
and then there was some drama.
Some angry husband.
Nothing my fault, definitely nothing interesting,
but when it all died down,
my plant was gone. So that's one thread I'm just gonna have to leave hanging for now.
But don't worry, there's still another five episodes in the series.
I'm sure it'll turn up before the grand finale. But this is definitely a wrap of Phase 2 of our mega-mini-series.
And so, a quick round of thanks to a few people.
Like TOE's special correspondent Chris,
who gave us that five-part liar's guide to American history.
And Ada Calhoun, who gave us her multi-part telling of the Fox Sisters,
and Eric Kurlander, our guide to the Nazi supernatural imaginary.
His book, Hitler's Monsters, by the way, was actually what inspired this entire series.
When I read it in the library, all the alarms went off.
And so, before we say goodbye to him, let's find out what he makes of our present moment.
The scary thing about today is that it's not that crisis-ridden a time. You have 4% and 5% unemployment, stock markets at near-record highs, or at least they were when the last elections took place.
And yet people are still leaning what I would call alt-right or fascist.
When it comes to why, here's my thinking.
Even though things aren't horrible statistically, the level of uncertainty, frustration, anomie,
as Durkheim would put it, is greater than it's ever been.
And the inequality is greater.
And because people haven't rationally accepted that that's the problem, I know this sounds somewhat left-leaning, the statistics all show that I'm right, instead of blaming the
fact that capitalism's no longer moderated or controlled by the people in a way where
there's a compromise, right, between labor and capital.
Since we have these people who just can't accept that as a solution because they've been told how evil the state is and how evil welfare is, but they need that, they don't know where to go.
If we simply had a distribution of resources and a meeting of labor and capital the way we
used to have with all the wealth we
have now, people wouldn't be flocking to Bernie Sanders or Trump. And that means that if we do
hit a crisis like the one we had in the 20s and 30s, when most Western states formed coalitions between the left and the center. So liberal capitalists and socialists basically said, we don't want to go fascist. Let's figure out how to distribute income in a more equitable way.
Well, now I don't see how that happens
in a period of crisis.
When the center and the left are so weak,
how can we not go fascist?

You have been listening to Benjamin Walker's Theory of Everything.
This installment is called Second Time as Fake.
This episode was produced by me, Benjamin Walker, and Andrew Calloway.
It featured Brendan Nyhan, Blaise Agüera y Arcas, Ada Calhoun, Eric Kurlander, and TOE's special correspondent, Chris.
This episode was supported in part by the Alfred P. Sloan Foundation, enhancing public
understanding of science,
technology, and economic performance. More information on Sloan at sloan.org.
Special thanks to everyone at PRX for all the support we've been getting for this mini-series,
especially Kerri Hoffman, who deserves a big congrats as she is now the CEO of the newly merged PRI and PRX.
The Theory of Everything is a proud founding member
of Radiotopia from PRX,
home to some of the world's greatest podcasts.
Find them all at radiotopia.fm.
Radiotopia from PRX.