The Checkup with Doctor Mike - Why Stanford Dismantled Her Research Program | Renee DiResta
Episode Date: March 31, 2026

I'll teach you how to become the media's go-to expert in your field. Enroll in The Professional's Media Academy now: https://www.professionalsmediaacademy.com/

00:00 Intro
1:44 Pro-Vaccine Parents
11:00 Bot Accounts
34:00 Getting More Experts Online
45:20 Parasocial Relationships
57:01 Jubilee's Surrounded
1:04:30 My Videos vs. Public Health Orgs
1:22:49 Joe Rogan / Election Interference
1:34:29 Fired From Stanford
1:44:52 Out of Touch Institutions

Help us continue the fight against medical misinformation and change the world through charity by becoming a Doctor Mike Resident on Patreon, where every month I donate 100% of the proceeds to the charity, organization, or cause of your choice! Residents get access to bonus content and many other perks for just $10 a month. Become a Resident today: https://www.patreon.com/doctormike

Let's connect:
IG: https://go.doctormikemedia.com/instagram/DMinstagram
Twitter: https://go.doctormikemedia.com/twitter/DMTwitter
FB: https://go.doctormikemedia.com/facebook/DMFacebook
TikTok: https://go.doctormikemedia.com/tiktok/DMTikTok
Reddit: https://go.doctormikemedia.com/reddit/DMReddit

Contact Email: DoctorMikeMedia@Gmail.com

Executive Producer: Doctor Mike
Production Director and Editor: Dan Owens
Managing Editor and Producer: Sam Bowers
Editor and Designer: Caroline Weigum
Editor: Juan Carlos Zuniga

* Select photos/videos provided by Getty Images *

** The information in this video is not intended nor implied to be a substitute for professional medical advice, diagnosis, or treatment. All content, including text, graphics, images, and information, contained in this video is for general information purposes only and does not replace a consultation with your own doctor/health professional. **
Transcript
This episode is brought to you by FedEx.
These days, the Power Move isn't having a big metallic credit card to drop on the check at a corporate lunch.
The real Power Move is leveling up your business with FedEx intelligence,
and accessing one of the biggest data networks powered by one of the biggest delivery networks.
Level up your business with FedEx, the new Power Move.
After 19 years, they're back.
Frankie Muniz, Bryan Cranston, and the rest of the family reunite in Malcolm in the Middle: Life's Still Unfair.
After 10 years of avoiding them, Hal and Lois demand Malcolm be at their anniversary party, pulling him straight back into their chaos.
Malcolm in the Middle: Life's Still Unfair.
A special four-part event, streaming April 10th on Hulu on Disney Plus.
Do you remember "eating the pets"?
A lot of towns don't want to talk about it because they're so
embarrassed by it. In Springfield, they're eating the dogs; the people that came in, they're eating the
cats. It was a rumor, right, that started on Facebook, that made its way to Twitter. One of the reasons
that it blew up in the way it did was that it had that blend of, like, weirdness, right, that just
lent itself to content creation. The power of it gets missed, I think, a lot of the time. People think of it
as just something that happens online and stays online, but it doesn't, you know?
It becomes a lot bigger.
Welcome back to The Checkup podcast.
Today's guest is Renée DiResta, a professor at Georgetown University who specializes in
adversarial abuses online or ways that people attempt to manipulate, harass, and target others
on digital platforms.
She got on my radar last year after I learned her contract was not renewed as the managing
researcher for the Stanford Internet Observatory.
The department had been specifically studying U.S.
election interference and ways to mitigate the spread of misinformation on social media.
Stanford has now fully dismantled the observatory.
It's not up to me to say why Renee was let go or why Stanford halted this work,
but when a major university disbands a department investigating claims made by the president of
the United States, alarm bells start going off. To get to the bottom of it, I invited Renee to the
studio to talk about what tactics foreign governments may be using on the internet. What happened
during her 2019 appearance on Joe Rogan.
And if there's anything we can do to bring misinformation online under control,
please join me in welcoming Renée DiResta to The Checkup podcast.
You use the term accidental academic.
I love the term.
And I'll tell you why I'm really excited for our conversation.
I view you as perhaps not an academic,
but a researcher in this space of social media communication,
mass communication,
whatever title you want to give it.
And I view myself as a practitioner.
So when I have conversations on this podcast with people who are health researchers
and me being a family practice doctor who sees patients on a day-to-day basis,
it makes for an interesting discussion.
I feel like we're doing the same today, but not from the healthcare space,
from the social media space.
And I actually get very passionate about it because I don't get a chance often to discuss it
with people who truly understand the nuance of,
of what happens on social media
and what it's like to communicate
with millions of people at once,
because from a human standpoint, that's very unnatural.
So tell me about how you've fallen into this accidental academic position,
as you call it.
Actually, it was as a pro-vaccine activist,
so that's the kind of funny thing, right?
So healthcare has gotten you here as well.
Yes, almost exactly 10 years ago,
we had the measles outbreak in Disneyland, right?
I had just moved to California.
I had a baby who was too young to be fully vaccinated, my first child.
He was born in December 2013.
I was doing this thing that you have to do in San Francisco, where I lived, where you put your
child on these waiting lists for preschools and daycares and things like that.
There's a lot of demand, not a lot of supply.
And, you know, I had read these articles about like the crunchy Californians because remember,
it was like a left-wing thing 10 years ago, right?
And I, candidly, you know, I grew up in New York here in New York.
And I just didn't want my kid in an anti-vax preschool.
That was my personal values.
I just wasn't cool with it.
So I pulled down the California Department of Public Health Data, as one does when you're
a data science, you know, computer science person, right?
I was working at a startup.
I was in San Francisco for tech.
That was my field.
That was my job.
And I pulled down actually 10 years of public health data, because you could actually see
it by kindergarten, right?
You could see the vaccination rates. And when I had pulled down that first year, there were some schools
that were at, like, 35% MMR rates. And I thought, this is insane. And so I wanted to pull
down the ten-year data. And so I did that. And I decided I was going to make a data visualization. And I was
going to show, I was going to animate it actually. So you could see over time those numbers changing.
And I decided I was going to pull down the census data, right? Because you can actually cross-reference some
of this stuff with, like, zip codes, with socioeconomic data, with a whole bunch of different
ways that you can make this into a visualization. And I wrote a blog post. I had a Tumblr at
the time, right? Throwback. So I wrote a Tumblr post, and I called it something like,
you know, "California's vaccination policy really hurts its kids," because I thought this was insane.
Because in New York, you had to have, like, a religious exemption, which was
signed by a religious figure, right, sort of saying that you had a sincerely held
belief. Whereas in California, we had these things called personal belief exemptions, where you could
just write, like, "I'm not vaccinating." And, like, that's it. That's my choice and I'm not doing it.
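For what it's worth, the cross-referencing she describes can be sketched in a few lines. This is a toy illustration, not the real CDPH data or schema; every school name, rate, and income figure below is invented:

```python
# Hypothetical mini-version of the analysis described: per-school MMR rates
# cross-referenced with ZIP-level census income, flagging schools below the
# ~95% coverage commonly cited for measles herd immunity. All names, rates,
# and incomes are invented; the real CDPH files have their own schema.
schools = [
    {"school": "A", "zip": "94110", "mmr_rate": 0.35},
    {"school": "B", "zip": "94110", "mmr_rate": 0.92},
    {"school": "C", "zip": "94117", "mmr_rate": 0.88},
    {"school": "D", "zip": "94117", "mmr_rate": 0.97},
]
census_income = {"94110": 76000, "94117": 112000}  # median income by ZIP

report = []
for s in schools:
    report.append({
        **s,
        "median_income": census_income.get(s["zip"]),
        "below_herd_immunity": s["mmr_rate"] < 0.95,
    })

# The schools that would stand out in the visualization.
flagged = [r["school"] for r in report if r["below_herd_immunity"]]
print(flagged)
```

Animating the ten-year trend she mentions is then just a matter of repeating this per year and handing the result to a plotting library.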
And then, so I called my local representative, my state, you know, state rep. And I was like,
is there anything that we can do about this? And he was like, no. You know, the anti-vaccine movement
is a force, and we're just not willing to do anything about it. But then the Disneyland measles outbreak
happened. And I called again. And they said, actually, yeah, we're going to do something about it.
There's a state senator in Sacramento, Dr. Richard Pan. He's a pediatrician also, actually.
And he's introducing a bill, and why don't you get in touch with his office? So I called, and I said,
look, you know, I'm just, like, a mom. We did this data science work. I can actually make you
some charts and stuff for your senators, if that's useful. I can, like, show you, you know, these graphs
that can help people see, like, in your district, here are your schools, here are your trends. You know, is
that useful? Can I help in some way? And I'd never done anything like this before. And his staffers
put me in touch with a couple of other moms who, it turned out, had also called, kind of upset about
the measles situation in Disneyland, including one mom whose kid had gotten measles at Disneyland.
And we decided that we would start this group called Vaccinate California. And so we
decided that we would just try to be a pro-vaccine parent group.
And then we had to, like, be a social media pro-vaccine parent group.
And this was when I realized there was really just not very much out there that did that,
particularly not with, like, a political valence, right?
Where like we wanted to pass a bill to, you know,
to require these vaccines with, you know, for kindergarten in California.
And to eliminate the personal belief exemption. To say: medical exemptions are great.
You know, some people need medical exemptions;
they have contraindications, absolutely. But the personal belief exemption just shouldn't make it this
easy to just opt your child out and leave other kids vulnerable to preventable diseases.
And so we made a Twitter account, we made a Facebook page.
And this is when I realized that it was actually very difficult to reach pro-vaccine people.
Even though 85% of the public was vaccinating their kids, there was no such thing as, like,
a pro-vaccine ad-targeting category.
Anytime you would type in "vaccine" on Facebook,
the categories that the ad-targeting tool would serve you
were anti-vaccine.
It would automatically return anti-vaccine keywords.
You know, "vaccine controversies" was a category
that Facebook would proactively suggest to you if you started to type in the word "vaccine,"
but nothing on the pro-vaccine front.
So I said, okay, well, I guess we're going to be targeting,
like doctors because you can target medical professionals.
So how do we grow a movement of ordinary people when we have no way to reach them?
Because the ad targeting tool is keying off of things that people are typing into their profiles.
And anti-vaccine activists are self-identifying, people who are hesitant already,
people who are skeptical, people who have made this their identity.
There's a movement on this platform already.
But there is no movement of pro-vaccine people because most people vaccinate their kids.
and go on about their day.
It doesn't become an identity for them.
So realizing that there was this asymmetry
was kind of what got me to
the recognition that, all of a sudden,
social media was where public opinion shaping
was going to happen.
You had to be there for political campaigning.
You had to be there for public opinion shaping.
And yet, what was ostensibly a majority position
was not an identifiable, visible community,
but the minority looked like the majority because they had invested the time to make themselves a movement.
And that was really what got me into understanding like how information moved.
I did a lot of network analysis.
Again, you know, data science, right?
And went to Twitter and, with another data scientist named Gilad Lotan, started doing network maps of Twitter, right?
Who were the different communities, as we moved this bill through the California legislature?
It got referred to three committees in the House, three committees in the Senate.
It was going to be a very, very vicious fight.
Who is in this conversation?
Who are the influencers?
Who are the influential figures?
Realizing they were 100% anti-vaccine activists, right?
100% of the key pivotal nodes in the conversation, the large follower accounts, 100% anti-vaccine.
Robert F. Kennedy, Jr., huge in the conversation.
10 years ago, here we are today, right?
But the public health community was entirely outside of the conversation.
There were a handful of doctors who were kind of in it, but they were very small accounts.
They weren't really integrated into the conversation.
They were kind of talking to themselves sort of over on the side.
So when you look at how information moves just structurally, like who's communicating with whom,
it was very clear that the anti-vaccine movement was networking very heavily into conservative spaces.
and the pro-vaccine, very, very small communities just weren't.
So you could see it happening, right?
In California, 10 years ago, as this fight was happening,
you could see the evolution of the movement structurally.
You could see the messages that were working.
You could see how influencers were involved.
And I found it absolutely fascinating.
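The structural analysis she describes, mapping who talks to whom and finding the pivotal nodes, can be sketched with a toy interaction graph. Account names are invented, and this crude degree count stands in for the proper centrality measures a real network analysis would use:

```python
from collections import Counter

# Toy version of the network mapping described: an edge list of who
# interacted with whom (mentions/retweets), then a crude centrality measure.
# Account names are invented; real analyses use full interaction graphs and
# proper centrality metrics, not this simple degree count.
edges = [
    ("parent1", "antivax_hub"), ("parent2", "antivax_hub"),
    ("parent3", "antivax_hub"), ("antivax_hub", "rfk_jr"),
    ("parent1", "rfk_jr"), ("parent4", "rfk_jr"),
    ("doc1", "doc2"), ("doc2", "doc1"),  # doctors talking among themselves
]

# Degree: how many interactions each account is involved in.
degree = Counter()
for src, dst in edges:
    degree[src] += 1
    degree[dst] += 1

# The pivotal nodes in this toy graph.
top = degree.most_common(2)
print(top)
```

Even in this tiny example the structure she observed shows up: the high-degree hubs all sit in one cluster, while the doctors form a small component talking only to each other.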
So I didn't really stay involved in the pro-vaccine fight as an activist.
Instead, I got very involved in the question of: when
the means of shaping public opinion have changed, what does that do, writ large?
And that was how, when I say accidental academic, I went from that to looking at ISIS,
actually.
So you followed all the good actors.
I know.
So, yeah.
So I wound up looking at what came to be called kind of adversarial abuse.
Like how do small groups of people make themselves look much larger?
How do you use things like automated accounts to actively manipulate an online space?
How do you use kind of new and novel tactics?
Anytime you have a new platform emerge, you kind of change the terrain, right?
I did a lot of work on Russia.
So I led the outside investigation for the Senate Intelligence Committee on the Russia interference in 2016.
And it actually was a lot more than just 2016.
So I wound up looking quite a bit at state actors and propaganda campaigns writ large.
And then in 2019, somebody I knew from, you know, actually from fighting with him on the internet,
was starting the Stanford Internet Observatory.
And he asked if I'd be interested in coming to Stanford and, you know, doing these sorts of investigations
as, you know, kind of an academic discipline.
And that was how I became an accidental academic.
And then you're at Stanford.
A few months to a year before the pandemic hits.
Yeah, so summer of 2019, I started at Stanford.
What was the initial goal of the work and how abruptly did that goal change when
the pandemic started?
Well, you know, you'd be surprised.
It actually didn't change that abruptly because we set up the center to, you know, to study
adversarial abuse online, right?
And how would you define that?
Yeah.
So let me... So if you see social media platforms and technologies as a playing field, right,
where there are many, many wonderful use cases, connecting people, ordinary people
using them in perfectly legitimate ways, there are always going to be some actors who are
using them in explicitly manipulative and harmful ways.
So, for example, spammers, right?
Everybody knows that normal people send millions of emails.
Spammers also use email, right?
You can think about generative AI.
People use it for many legitimate purposes.
You also have people who use it to create fake persona accounts and be manipulative.
When you're creating social media accounts and talking to people on Twitter,
you also have state actors pretending to be things that they're not.
Some of the work focuses on what we call
trust and safety. That's the sort of discipline, or business line, that technology platforms will put
it under. So that's child safety. We look a lot at how children are sometimes targeted online,
child exploitation content. We did a lot of work on child exploitation content, actually.
That's work that is unfortunately perpetual. There is no period in which that stops; it is
always happening. So a lot of work on child safety. A lot of work on, again, spam and scams, which don't move
seasonally. There are certain topical areas where the focus may change because there is some
major event happening in the world. So obviously the pandemic is one such thing. Occasionally every two
years, there's an election. There's always an election somewhere. Yeah. Right. So, you know, one of the
first elections that we looked at when I was at SIO was actually the Taiwan election.
And that's because if you're looking at adversarial abuse, particularly for me, a big focus was state actor influence operations.
China is absolutely influencing the Taiwanese election.
They are working very, very hard to do that.
And so studying and understanding what Chinese influence operations are looking like in the Taiwanese election shows you how China's tactics have evolved in that particular moment in time as, again, a new technology platform might have emerged.
new use cases have evolved, generative AI has, you know, new capacity has come about.
Whenever you have a new technology or a new platform, the playing field changes, and so the
adversarial behavior changes. So we think about it almost like, almost like being akin to
cybersecurity, where you always have some sort of adversarial actor doing something
manipulative. And so we tend to, you know, look at it as this hybrid of studying tactics and
social behavior. How aware do you think the average social media user is of this adversarial abuse?
Sometimes they are hyper-aware, meaning sometimes people think that there's a Russian bot
anytime they see a social media post that they disagree with, right? So that's the downside.
You want a good degree of awareness, but not an unhealthy degree of skepticism.
That sometimes is a hard balance to strike. I know that
when we started, you know, again, when I was doing the research for the Senate
Intelligence Committee on Russia, it was very important to try to get people to hold two ideas
in their heads at once, which is: this is real and this is happening, and this is not the be-all,
end-all, right? This didn't necessarily swing the election. This doesn't mean that every time
you see, for example, a right-wing account that you disagree with, that it's a Russian
troll. You know, making people realize that, but at the same time, sometimes you are,
in fact, engaging with an account that's there solely to get a rise out of you, solely to manipulate you,
and these things wax and wane over time. For a while, automated accounts, bots, were useful in part
because they helped just draw attention to a topic, right? Making something trend on Twitter was a big prize
because that would draw real people's attention to it and then real people would start to talk about it.
And so even if that was started under false or manipulative pretenses, real people would eventually start to talk about it.
And so it wasn't that it was completely manufactured.
It was that there was a degree of like throwing gasoline on the fire.
And so this was another thing that was hard to convey sometimes, which is that real people hold these opinions, right?
When the Russians are pretending to be Texas secessionists, there are real Texas secessionists.
These are not opinions that Russia simply made up and implanted into the American mind.
These are real opinions that real people hold.
And so we're not going to content moderate our way out of this by taking bad ideas off the internet.
So a lot of what we look at is more things like: are there technological or design solutions that make some of these behaviors less impactful, as opposed to overfocusing on the content or the message itself?
Have you been able to gather any tips that you could give to an individual who's trying to be mindful without becoming a cynic?
It's really not the content. I mean, I think that's the thing. We've hit a point now where, you know, you used to be able to tell people. There was a type of AI that was used to generate a lot of fake profile pictures, generative adversarial networks, where
one AI would kind of work against another AI to produce a realistic-ish
human face.
Now they don't look very realistic anymore, several years out,
but two years ago, they looked fairly realistic for the time.
But you could tell people like it gets the ears wrong often.
The earrings are often mismatched.
The teeth are not numerically accurate.
They're not symmetrical.
The collar tends to blend into the hair.
So you could give people tips.
And I remember working with NPR because we found a network of a couple,
maybe a couple thousand of these things on LinkedIn.
And I actually didn't think it was a particularly interesting find.
I thought of it as like kind of a spam network.
But it was like these spammers who were pretending to be lead, you know,
lead gen salesmen.
And what they wanted to do was connect because once they connected with you,
they looked more real, because you could see, like, oh, my friend is connected to this
person also. And so they were just trying to network their way into communities. And I thought it was a
fairly boring thing, actually, but it was one of the things I got the most inbound emails about. People
were like, thank you so much, I see these accounts, people reach out to me on LinkedIn, I never
know whether to accept them or not; these tips that NPR put out, I found them so helpful.
So I think that people really do feel acutely that there's so much manipulation on the internet, and they
are, you know, they're very worried about scams. And the sort of sad reality is that those
tips from two years ago are useless now, right? Like, that form of
AI, generative adversarial networks, is just not used at all anymore. Diffusion models produce much more realistic
faces. The very plastic-looking
faces generated with diffusion models like Midjourney and things two years ago, you know, we've moved
two years out now, and they're increasingly indistinguishable. You can actually use
what are called LoRAs, sort of add-ons that will actually add, like, blemishes to the
skin now, right? So they don't have that doll-like, kind of hyper-realist look anymore. They look indistinguishable.
And so, you know, we've hit a point now where, like, I don't trust my eyes when I look at these things. And it's kind of hard to say to somebody, like, you know: look, when we're looking at a network of accounts that we think are inauthentic, the way that we identify them is through a series of analyses, you know, using AI models to try to detect whether the image is AI, right?
Can it see itself in itself?
And sometimes they return false negatives.
I don't know if you noticed this, but Grok was asked
to determine whether or not an image of soldiers sleeping in California
that Gavin Newsom posted to Twitter was AI-generated.
And it said that it was, and it was not.
There was a moment around October 7th where the Israeli government posted some images of bodies, right,
that were upsetting and they were partially blurred.
And Ben Shapiro posted this.
And somebody ran it through one of these AI identifiers, and it returned a false negative.
It said it was AI-generated because of the pixelated blur in one portion.
And it was not.
It was a real image of a war, you know, of a conflict moment.
And when you have the false negatives or the false positives, it just creates this sort of fear,
uncertainty, and doubt.
And people don't trust what they see with their eyes.
And this, unfortunately, is the world that we are moving into.
Even for those of us who do this professionally, whenever I'm looking at something and trying to do an authentication, I'm usually in touch with multiple people in a back channel, right?
Hey, guys, have you seen this thing? What do you think it is? There's like four or five of us talking. There are some people who are really experts in authentication.
Hany Farid at Berkeley is one person who comes to mind, where, like, that is kind of the thing he does.
There are people at social media platforms
who have additional visibility into the accounts
that are disseminating the content,
because that's also a signal.
Did these accounts just spring up out of nowhere?
Do they appear to have changed hands?
Did they suddenly move from posting in one country
to posting in another country?
Only the platforms have visibility into that.
You or I don't, even as a researcher on the outside,
I don't have visibility into that.
So it's really a very
collaborative, investigative process when people are looking at these things and trying to sort them out.
That's the unfortunate reality of it. I wish I could say that there are good tips.
Are the platforms themselves aware of this issue?
Absolutely, yes.
And what are they doing to combat that, if anything?
Because I remember there were conversations about Elon Musk saying we need to get rid of all these fake accounts on Twitter.
And that didn't seem to really go anywhere.
Well, I mean, so that didn't seem to go anywhere.
A lot of things didn't seem to go the way Elon said they were going to go on that platform, did they?
So there's a couple ways to answer that.
So different platforms have different incentives and different business incentives.
One of the things that was interesting with X was that it created some perverse incentives for more bots
and more types of accounts to come into the conversation:
spammers and scammers who want to manipulate people.
And I think it's important to note that financial motivations
are the number one motivation for a lot of this stuff, right?
The accounts on LinkedIn, sometimes you'll see accounts
creating LinkedIn profiles for political reasons.
Oftentimes it's for financial motivation.
Same thing with Twitter.
A lot of the time they're there because they want to run a crypto scam
or something like that.
So you'll see accounts that go to Twitter because they want to impersonate people.
And because you can kind of buy a blue check, you know, and spend a certain amount of dollars
per month to get your content boosted in replies to increase your visibility by paying for
it now, it kind of creates a perverse incentive.
So if you can run a scam that is more lucrative than the cost to run the scam, then you're
going to see more people try to use that platform in kind of new and novel ways.
One of the things that we were doing at Stanford right before I left, or right before
the center ended, was looking at what we call AI slips.
There are certain error states or messages that an AI will return when it can't do something.
And occasionally you will see accounts on Twitter spit out that language.
And that's how you can tell that certain clusters of accounts are not
human-operated in that moment. Sometimes accounts are operated at times by a human and at times by a
machine. But when you see some of these slips, you can then look at what the account posts about
when it is not slipping. So a slip might be, "As a large language model, I cannot..." right? That was a common
phrase that ChatGPT, or OpenAI's API, would return. That actually became kind of a meme,
so that one became useless, because other people started saying it; "as a large language model,
I cannot" became something that...
Which makes it harder to study.
But this is what I mean, right?
When you're studying like, this is why you can't just rely on the content because then
you'd be like, oh, well, anybody who says as a large language model, I cannot is a bot.
And then you're going to have a ton of false positives because anybody who uses it as a meme,
jokingly, you're going to misidentify as a bot.
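A naive version of the slip heuristic being described could look like the sketch below. The patterns and sample posts are invented, and, exactly as the conversation notes, the meme post triggers a false positive, which is why a match can only ever be a lead for investigation, not proof of automation:

```python
import re

# Naive sketch of the "AI slip" heuristic described: flag posts containing
# canonical LLM refusal text. The patterns and sample posts are invented.
# Because the phrase became a meme, a match is a lead for investigation,
# never proof of automation on its own.
SLIP_PATTERNS = [
    re.compile(r"as an? (ai|large) language model,? i (cannot|can't)", re.I),
    re.compile(r"i'?m sorry, but i (cannot|can't) (assist|help)", re.I),
]

def looks_like_slip(post: str) -> bool:
    return any(p.search(post) for p in SLIP_PATTERNS)

posts = [
    "As a large language model, I cannot fulfill this request.",  # real slip
    "lol 'as a large language model I cannot' is my whole mood",  # meme: false positive
    "Great weather in Austin today!",
]
flags = [looks_like_slip(p) for p in posts]
print(flags)
```

This is why, as she says, content alone is never enough; real investigations combine signals like this with account-level behavior only the platforms can see.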
Which happens frequently in the health space. I remember there were deaths from people
overconsuming ivermectin,
and YouTube tried to course-correct as quickly as possible by demoting or blocking information surrounding
ivermectin. It hurt legitimate medical providers who were putting out accurate information containing the word
ivermectin.
Yeah, no, I was in a pro-vaccine Facebook group, and it was demoting and penalizing accounts
that were sharing anti-vaccine links, but sometimes it was doctors in the group who were
throwing in the links saying, like, oh, somebody should write a response to this.
Right.
Yeah.
And so you had to come up with new and
novel ways to get around the auto-mods to make sure your group didn't get penalized.
Yeah, I mean, these are the, again, this is the sort of like the cat and mouse game of,
you know, you would see, I remember during COVID, people would write the word vaccine
using like two apples for the seas, like emoji apples or like two skulls, whatever.
You know, the kind of futility of automods.
And I mean, this is where, you know, you saw Zuckerberg
say this in, I think it was January, where he said there are too many false positives.
You know, precision versus recall is how it's described in machine learning when you're
training a system. You know, what is the threshold that you want to set your system to? Do you
care more about false positives? Sensitivity versus specificity. Yeah. So is it a worse harm to
leave it up or to take it down? And that trade-off might be dependent on the type of content
it is. If you're building a classifier that looks at nudity, and maybe certain types of nudity are
illegal, right, or in a particular locale, it might be illegal, for certain ages it is explicitly
illegal, right? So you want to have certain types of detection that are going to err on the side of
absolutely not. This is going to come down and we're going to take this very, very seriously.
whereas for certain types of speech or policies that relate to, you know, as you mentioned, like
ivermectin and things like this, you might want to err on the side of leaving it up.
Yeah.
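The precision-versus-recall tradeoff they're describing can be made concrete with a toy classifier-threshold sketch. The scores and ground-truth labels below are invented; the point is only how the same model behaves under two different takedown thresholds:

```python
# Toy illustration of the precision-vs-recall (sensitivity-vs-specificity)
# threshold tradeoff described. Scores and ground-truth labels are invented.
scores = [0.2, 0.4, 0.55, 0.7, 0.9, 0.95]  # model's "violating" probability
labels = [0,   0,   1,    0,   1,   1]     # 1 = actually violating

def confusion(threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0  # of takedowns, how many correct
    recall = tp / (tp + fn) if tp + fn else 1.0     # of violations, how many caught
    return precision, recall

# Illegal content: low threshold, catch everything, accept false positives.
print(confusion(0.5))
# Contested speech (like the ivermectin posts): high threshold, err toward leaving up.
print(confusion(0.9))
```

The low threshold catches every violation but sweeps in an innocent post; the high threshold makes no wrongful takedowns but lets a violation through. That is the policy knob being discussed.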
Do you think it's inevitable that we're going to get to a place where your blue checkmark is
basically you sending in your ID and proving you're a real person?
So I worked on a paper on that, like, in July of last year. There's a really interesting
question that goes along with that.
So there's a type of AI called agentic AI, which is an AI that acts as an agent on a human's behalf.
And not all of these, again, are bad.
Some of them, you might want to allow an agent to act as you, right?
You might want to have your chatbot, for example, which operates as you.
And you're going to disclose, because you're an ethical person, that this is your chatbot.
But other people, you know, might not want to disclose, right?
There are different ways in which people are going to engage with agentic AI.
I think that there's going to be, as this becomes more prevalent, you're going to see a need to declare kind of two different thresholds.
One will be just humanness, right, bot versus not bot, bot versus human.
And then there's going to be in certain types of cases, particularly areas related to maybe finance or health, specific humans, right?
"I am Renée" versus "I am human."
On social media, I think you might just want to verify humanness in some capacities.
I think that you are going to start to see, maybe not every platform, but you'll start to see some
platforms that might want to differentiate around that, maybe new social media platforms.
There are some, like Reddit, that have persistent pseudonymity, right, where you can tell
that an account's been around for a while.
You don't know the person's name.
You don't know who they are, but you know that they're not a brand new account because they have
like, you know, kind of like a cred number.
You can kind of see some information.
So the...
Which I'm sure people will start selling at some point.
This happened already.
No, this already happens.
But there are ways that you can do sort of self-sovereign ID, where you reveal certain bits
of information in privacy-protecting ways through certain types of cryptographic proofs
that don't provide the information to the end user, like the social media network,
but that verify the thing.
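The privacy-preserving verification she's describing can be sketched in a toy model: a trusted issuer checks the raw data privately and signs only the predicate ("this holder is over 18"), and the platform verifies that signature without ever seeing the birthdate. Real deployments use zero-knowledge proofs or anonymous credentials rather than the shared-secret HMAC below; every name and key here is an illustrative assumption, not a real system.

```python
import hmac
import hashlib
import json

# Toy sketch only: real self-sovereign ID uses asymmetric signatures or
# zero-knowledge proofs, not a shared-secret HMAC.
ISSUER_KEY = b"issuer-demo-key"  # hypothetical issuer signing key

def issue_attestation(birth_year: int, current_year: int) -> dict:
    """Issuer sees the raw birthdate privately, then signs only the predicate."""
    claim = {"over_18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # the birthdate never leaves the issuer

def platform_verifies(attestation: dict) -> bool:
    """Platform learns only the boolean predicate, never the birthdate."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["sig"]) and attestation["claim"]["over_18"]

att = issue_attestation(birth_year=2000, current_year=2025)
print(platform_verifies(att))  # True, with no birthdate in the attestation
```

The design point is the one she makes: the relying party gets "the thing is verified" without getting the underlying personal data.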
So as people want to have more age verification capacity in certain areas,
you're seeing that demand. And, you know,
I think that as you have more and more agentic agents,
as you have more and more sophisticated AI,
this question of what does identity,
what does the identity layer of the internet look like,
becomes more and more important.
Again, as you have more and more sophisticated fraud situations emerging,
you're starting to see that demand happening.
So I think the question becomes,
how do you do it in a way that's privacy protecting
that allows people to remain anonymous or pseudonymous?
I never can say that word properly.
And how do you, you know, what does that look like?
But I do think it's not going to be every platform,
and not everybody is going to want to use it.
Some people might say like the tradeoff is just not worth it for me
and I don't want to do it.
But I do think that you'll see something start to emerge
in the next couple of years where
some, you know, some platforms are going to want to differentiate.
Well, YouTube does this underneath.
If you're watching this on YouTube, you'll see underneath it says from a licensed
health care provider.
And they do verify that I have an active medical license, that I'm a board certified
physician through a partnership with a third party.
Obviously, they're not doing it themselves.
But it's interesting that they've taken that step because no other platform has taken
the step.
Perhaps they'll include a Wikipedia link, too, if there's a discussion of vaccines.
Right.
Right. They were one of the first to do that too, actually. I remember that was their, and you know, they didn't tell Wikipedia when they did that. I don't know if you know that. Yeah, that was, I remember that that was a big deal. They announced it like on stage. It was, oh, we're going to use Wikipedia to help shed some kind of crowdsourced, or, you know, Wikipedia is like negotiated facts, right? We're going to include Wikipedia, a link to the moon landing page or the, you know, flat earth page or whatever, um, in that little bar on the bottom there. Yeah, they were one of the first to try to incorporate that in. Yeah, it's a, it's a scary place to be because for the
average person that's spending time online, how does one distinguish just truth? So during COVID,
I don't know if you were part of this, but Twitter did try to blue check a bunch of doctors really
quickly. That was under like the pre-Elon, you know, regime, if you will. Now it has really
splintered into what platform do you trust, what accounts on the platform do you trust.
I think that that trust piece is becoming more and more fragmented and truth is increasingly
tied to trust and not so much to people going and relying on expertise, right?
I think expertise is being decoupled from influence in a lot of different areas.
And that question of trust and expertise in some communities, that has also decoupled quite a bit, right?
The active distrust of experts, which, you know, there are varying opinions on to what extent
that's caused by failures of experts. I think it's a combination of experts not communicating
in quite this, you know, in ways that people like you do, right? This, the dynamic of, are you
putting yourself into the public, you know, sphere where the public is having conversations versus,
or like, you know, the example I gave from 10 years ago where, you know,
the CDC was absent from that conversation.
They actually had a stance that they did not talk about anti-vaccine misinformation.
No, they didn't even talk about conspiracy theories about themselves, actually.
I remember the CDC whistleblower conspiracy theory that they were literally burning the proof
that vaccines caused autism, which they do not, in the basement of the CDC.
And this was a conspiracy theory that was trending on Twitter daily,
because again, you could use bots and automated accounts,
and they were, the anti-vaccine movement was.
And also, beyond the bots, they were very passionate about making it trend every day.
Donald Trump was involved, funny enough, at the time he would occasionally tweet about it.
Robert F. Kennedy Jr. would, they would get these celebrities,
and they would try to tap them in.
And the CDC had a policy of just not responding.
And unfortunately, this is a very 1990s style of crisis communications, which is that, oh, you just don't give it oxygen.
You just say nothing.
And now that just means you've created a void wherein the only narrative in the space is the narrative about you.
So someone else is telling the story about you, which means that when somebody sees that keyword and goes and searches for it, the only thing they're going to see is the content that that side has put out about you.
And no counter message, no counter content,
unless somebody else takes it upon themselves
to go and write it on your behalf.
So it's a, it's kind of a mess.
I feel like I just went down a bunny trail, though.
No, it's very messy because what ends up happening is
in every topic, whether scientific or otherwise,
sometimes even pop culture wise,
you're having point and counterpoint
and the weight between them is shaped by public opinion and or success on social media.
Yeah, it's visibility.
Well, because it's an attention game.
It's how do you capture attention?
How do you get your content in front of people?
How do you break through?
I mean, you, I think, have spoken about how long it takes to build up an audience,
how, you know, you work at it for years until you hit a point where you've amassed that trust.
you've amassed that reach.
I write about it, you know, in the book I wrote,
I wanted to point to influencers.
Academia didn't really take influencers seriously.
It was, again, these are some people online.
And, I mean, they did to some extent, like network science.
You know, people who study, like, influential nodes do.
But this question of influencers as political communicators,
as culture driving politics, culture really,
online culture shaping politics in a profound way, I think, is still something. I still see people
who I consider to be incredibly astute, you know, political science professors having these
moments of realization where they realize that a meme that came from some random wellness influencer
has just, you know, is shaping CDC policy. And, you know, and I, I feel a little bit sad,
because I feel like it means that nothing I've said over the last 10 years is like, has resonated.
Yeah, where you just think, how many more times can I write that same article? Or, you know,
clearly maybe writing it isn't the way anymore. Maybe I need to be saying it in video clips at this
point, right? Yeah, I know, I know. But I think there's a, that's actually, though, I've talked
about this with Destin Sandlin, who I think is the one who originally connected us like years and years ago.
How do you, if you're not innately, like, a video person? I think it takes a while to feel comfortable
doing that. Like, I feel very comfortable on podcasts, having conversations. This is a thing that I've
done for years, and that doesn't feel weird to me anymore. But, um, picking up a camera
and doing it myself, by myself, is weird. And yet I
see kids, like, my oldest is now 11, and he's just, like, you know, he's like dying to do it. And I'm like,
absolutely not, you will not post anything to social media, you know, there are weirdos out there.
Um, but, uh, you know, but it's, it's like just generational, because I grew up in an age when, like,
text social media was the thing, whereas for them it's all video native. Um,
you know, I had a student show up to my office hours.
I'm at Georgetown.
I'm a professor there.
And I had one student show up to office hours.
And he came in and he was like, why don't professors, like, why don't you do TikTok?
Why aren't you on Instagram?
Like, you don't, I see you never use your Instagram.
And I was like, I don't know.
It just doesn't feel natural.
It feels weird.
Actually, it feels like a thing I have to really prepare for.
And he just, you know, he, like, wanted to make it like his mission for this semester
to get us doing it, which is endearing.
How's that going?
I mean, I do it a little bit more, but, you know, I sort of, what we're in, we're almost
in July, and back in January, I was like, okay, I'll make a New Year's resolution to do it,
and I've done it like twice.
Okay.
So I'm clearly not sticking to it.
So that's not going very well.
Two steps forward, one step back.
Still a good strategy.
It's hard.
It's not natural.
It goes against a lot of what we're taught in academic circles.
Also, when we think about research and we think about understanding,
we come from a place of humility.
Yeah.
I think that's how you have to approach learning.
That there's so much you don't know.
You're hungry for more.
You constantly want to ask better questions instead of saying you have all the knowledge.
Right.
But on social media, you kind of have to do the opposite.
Yeah.
Or I have like an intern who I've been working with on this process.
And one thing that I always find so, I feel so bad for him in how difficult I am actually,
in that, you know, he speaks, he gives me things where he's like, you're not using declarative sentences.
And I'm like, no, I'm hedging everything, right, which is how you would do it in a paper or something.
And it is so counter to, like, to how you would write something where, you know, because in academia,
you do your, like, your 10 paragraphs of throat clearing before you get to the point.
Of course.
It's so bad.
And I don't do it when I'm talking to people.
I don't do it when I'm lecturing.
I definitely don't do it in public speaking.
But there's something about, you know, the one-minute reel where you just look at it and
you're like, what the hell can I possibly say in one minute?
Without making sure all my bases are covered and someone doesn't use something out of it.
And, you know, if you say something absolute, God forbid, some other person is going to be like,
well, you know, you didn't consider this case.
And this is why my book is 200 pages long, you know.
I think those opportunities that sound really fearful,
what I've learned is that you have to lean into those opportunities
as learning moments where you can actually create some line of dialogue
with those who disagree with you if you've been wrong.
And I think they'll celebrate the victory
and you can give them the victory if they're right in a certain sense.
But at the same time, it can allow you the platform to stand up
and double down and say why you're right.
Yeah.
And that sort of conflict is very watchable.
and perhaps can make good content,
which is why we see a lot of the videos
that do well in social media
are conflict-driven.
And it's tempting to make all your content conflict-driven.
And some podcasters have made a career out of that.
Yes.
We won't because I feel like then you're destroying
the purpose of the message.
Yeah.
Which is we shouldn't always be fighting.
This is the incentives I was talking about, right?
This is one of the things that I think is also challenging.
Even on the text place platforms that I do spend a lot of time on, you're always incentivized to pick fights with somebody.
You're always, the quote tweet is going to get you much more reach than writing the original post in most cases, right?
And that is, you know, the morally strident language is going to get the lift.
Using words like they, right?
They don't want you to know they didn't do this.
They didn't do that.
The oppositional framing.
Like you're fighting in the gladiatorial arena.
The common enemy.
Exactly.
And this is why, and more than that, it's fun for the crowd.
And this is the thing where when I say like accidental academic, besides the fact that, you know, I used to feel like you could always tell who had actually run a Facebook ad campaign and who hadn't just in terms of because you make decisions, right?
Again, where I'm like, okay, I got to reach people. I've got like $2,000. What am I going to, I'm going to make choices there, right? And I'm
going to make choices that are going to, or, you know, they're like almost Machiavellian, right?
What is the, what is the strategy that I'm going to use to absolutely maximize the revenue,
the drive, the clicks, the, you know, this is what I am going to do in order to like win that
fight. And so there's that component. And the other thing, I think, was the dynamic of,
when you're doing it for political purposes specifically, I think sometimes what gets lost is how much
fun it is for the people in the faction. And this is where, unfortunately, the incentive that
I think the piece that gets missed is a lot of times you'll see media write about the influencer
or about the algorithm, right? So the algorithm makes people do a thing.
It removes the agency from the people clicking on it.
Yeah, it went viral.
It's like the passive voice.
No, it went viral because people clicked.
That's what happened.
Everybody clicked at the button.
The human psychology is the problem.
The algorithm is just the mirror.
I mean, the algorithms can do better.
We can talk about how.
Yeah, of course.
But it removes the agency from the people.
The influencer is responding bi-directionally to the feedback they're getting from the audience.
I talk about this with friends who are, you know, they study their, you know, their YouTube, like, oh, people drop off here, they drop off
there, this title performs better, that thumbnail performs better. But what the crowd is getting out
of it also is like, this person is speaking for me, like in that political realm in particular,
this person is advocating for me, they're fighting for me, and they're really in it. They're
also very excited when they create a meme or they create content. Like, there's a very active
participation there where they feel like I made the meme of Donald Trump with the kitten
that J.D. Vance retweeted, right? And like that is such a huge moment for them. So that that act
of political participation is so much of what is like drives the political dynamics of what happens
on social media. And that incentivizes the politicians in turn to engage in that way as well.
And this is where you start to see politics responding to the incentives,
you know, to the mutual incentive of the influencer, algorithm, and crowd,
like, operating as a unit.
And that's what's happening there at all times.
Yeah, that's always scary for me, the idea that you can manipulate your viewers and followers
in such an easy way, something even as simple as if someone reposts one of my
Instagram posts and I reshare it, I know I'm going to be influencing others to do the same, because they saw that they might have a chance,
like a lottery chance that I'll reshare their content next.
And I don't like that.
It feels disingenuous.
I've tried to stay away as much as humanly possible from parasocial relationships as
someone who has a lot of followers.
Because most people who have above 10 million followers on YouTube, above 10 million across
other platforms, they form parasocial relationships.
They're always there for their audience.
And I post often, if you think about all the content that we post on Snapchat, Facebook, TikTok, YouTube, podcasts.
But I don't make that relationship.
And that is harmful to my bottom line.
Because if I did have parasocial relationships, I could be a lot more financially successful.
I could grow the audience better.
I could create an us versus them mentality.
And intrinsically, this wasn't a really thoughtful decision.
I didn't like it.
I didn't like creating that relationship.
I saw a lot of harm from it. And I always viewed it with an air of disingenuousness, where you
aren't really talking about what you stand for. You're just weaponizing a military online.
Do you ever sense that from certain creators that they're doing that?
Yeah, I think I've noticed it. I've noticed it in the, again, it's where you see people
move into the political realm often. It's very common there.
What you start to see happen is, people who, there's an essay that I absolutely love called
the Internet of Beefs, and it has really stood the test of time. It's by a friend of mine,
Venkatesh Rao. We were writing around the same time for this kind of like content community
called Ribbon Farm, this essay community. And we were writing in kind of the 2015 to 2018 range,
So really early on about the rise of the gladiatorial arena, right?
Like what happens with online crowds specifically?
And the idea of the Internet of Beefs is that the point is to fight,
that the fight draws the bystander in.
The point is not to win, just to be clear, you don't want to win.
Because if you win, then the content ends and the crowds disperse, right?
The point is to keep the fight going.
And so it's actually, like, fantastic for creators to be beefing with each other
because people are like, whoa, it's like a soap opera.
The parisocial fans are there.
And like, is my guy beating your guy? Whose dunk is better, right?
And you start to see these like the crowds assemble and participate.
And he describes it as like, you know, they're like the knights and like the mooks is the term he uses.
Like the mooks kind of assemble under the banner of the knights and like they fight.
And then the, you know, the sort of followers really want to be recognized by the knight.
And one of the things I would see early on is people would put in their Twitter bio, like, you know, retweeted by Dan Bongino three times.
Oh yeah.
Yeah. And it would be like a point of pride that they had done something or said something. And it was like clout for them. Like they had risen slightly higher in the, you know, hierarchy of the fandom. Right. And so when you watch the, like, the fandom dynamics that transpire there, it is something where, like, just the beefing and the, you know, it kind of, like, it situates people. Like they know where they stand. They know their social hierarchy.
they have a purpose. It also gives them something to do. There's an activity. There's a mission.
Like, I'm coming onto social media to do this thing. And it's exciting. It's entertaining.
There's a point to it kind of. It's gamified to some degree. Yeah, 100%. And that is,
I think when you then add a political layer, not only are you owning your enemy, but you're getting
your point out there. Maybe you're getting your guy elected. Maybe you're expanding.
the Overton window for your weird niche political view, you know.
So that, that dynamic, maybe you come up with the meme that all of a sudden everybody uses, right?
Do you remember eating the pets?
I mean, no one will ever forget that.
No.
Oh, my gosh, really.
Sam is the meme person.
Oh, they're eating the dogs and the cats from Trump?
Yeah, eating the cats.
Okay.
Eating the pets was our...
The immigrants from Haiti.
Yes.
Yes.
It was a, I think it was, it was like the canonical example of something that was, it was a rumor, right, that started on Facebook that made its way to Twitter
and, or X, one of the reasons that it blew up in the way it did was that it had that blend of like
weirdness, right, that just lent itself to content creation. Now, it was like a horribly
racist thing, right? And it really negatively impacted people in some pretty terrible ways
because these things don't stay online. And that's one of the real problems here. A lot of the time
we talk about the meme, we talk about the jokes.
And in this particular case, people made memes of Donald Trump
going and pulling ducks out of ponds,
going and running with kittens.
But it kind of gave people a thing to do
for this period and time.
And it originally, I think, felt kind of fun
for the Trump supporting audience
who kind of picked this up and ran with it.
But then what you see happen is like,
it goes from being something where people are creating content, it's fun, it's funny,
and then you see the political elites realize that this is something that the online base is
so amped up about, so into, and then they go and they take it and they really make it
something that is like central to the campaign. All of the online people are talking about it.
This isn't something that comes like from the top down the way you would think political
communication does or used to. Instead it kind of comes, you know, this groundswell from the
bottom up, J.D. Vance picks it up. Donald Trump talks about it on the debate stage. It becomes
something that then all of a sudden is about immigration. It becomes a propaganda slogan that is
really about immigration. They're using it to talk about immigration. There's no reality to it.
There's no truth to it. But it's something that the base rallies around. And then what you see
happen is as it gets mocked on the left, as Republican leaders in Ohio say there's no truth to this,
nothing is actually happening here.
As it ceases to be funny, it becomes much more serious.
It stops being fun.
Then they really begin to double down.
And it becomes something where you start to see bomb threats called in.
You start to see people who, you know, media influencers actually on the right kind of descend
on the community trying to find evidence of it after the fact.
These are the moments where the things that start out as these like little moments of online
weirdness that people feel like they're participating in again because it's entertaining for them,
wind up snowballing and becoming these huge things. And it's I think the, it's the power of it that
gets missed, I think a lot of the time. People think of it as just something that happens online and
stays online, but it doesn't, no. It becomes a lot bigger. Maybe I'm asking you to play psychologist
and that's unfair. Why do you think that this whole parasocial thing and creating teams feels so
uncomfortable to a physician? Well, I think you want to operate in the realm of facts and truth
and expertise. And so the idea that there are, you know, very large groups of people that are
simply resistant to that, where that just isn't the thing that matters anymore, where there is an
entire alternate constellation of what they would consider to be experts and, you know, telling them
what they would consider to be truths is something that is incredibly frustrating to many people
in the scientific community. One of the things that I hear a lot when I talk to physicians,
I mean, I was on this thing called the Lancet Commission. It had maybe a, there were many
Lancet Commissions. I don't remember what the full name of this one was. But it was operating during
COVID, and one of the things, there were a lot of physicians on it and then me. One of the things,
one of the papers we wrote was on, you know, sort of social media. And I think there is a sense
from people outside of the social media field where the solution to what I've just described,
eating the pets or health misinformation is like, well, obviously the platform should just be
moderating it. They should just be dealing with it. That's a social media problem. And there's a
desire for some sort of like a neat solution, right? Well, you should just put a fact check on that,
obviously, because there is a fact. And we know the fact. And when nobody is, when we can show that
there's nobody eating the pets, people should stop believing that they're eating the pets,
because this is an irrational belief and they shouldn't believe it anymore.
When you tell people that vaccines don't cause autism, they shouldn't believe it anymore.
When you tell them that COVID vaccines aren't killing people, they should stop believing it.
And when you realize that that just isn't what is driving the beliefs, that's not what's driving the messages,
that there is an entire snowball effect that is happening, that information is moving from the bottom up,
and that the sort of old top-down mechanisms of communication and trust and authority and influence
don't work in the same way that they used to.
It requires a foundationally different approach to communication,
to thinking about authority.
And I think that the desire to just say,
like, well, the social media company should do something about it
is where a lot of people go.
And it takes some convincing to say, like,
that's just not going to work.
That's not going to do the thing you want it to do.
It's not very realistic.
I even think about, I did, I guess you could call it a debate.
I sat surrounded by 17.
Yeah, the Jubilee thing.
Yeah, the Jubilee thing.
Yeah, vaccine skeptical, perhaps anti-vax.
And I find these Jubilee things interesting from a practical standpoint as such.
I went against people who were skeptical or didn't believe in vaccines.
And I didn't make it a me versus them.
I tried to bring them in.
I tried to create empathy with what they were going through,
explain to them where my knowledge comes from,
understand where their knowledge comes from,
and find some sort of common ground.
And it worked in a lot of cases.
Some people changed their minds during the conversation.
I think in general, no one was angry.
Maybe there was one or two people
who disagreed so heavily that they were.
The video got 10 million views, I think the original.
So it did really well in that sense.
A lot of the comments,
In fact, the majority, as I was looking at them, were very pro-vaccine.
But if you look at the rhetoric online, like the Facebook targeting, et cetera,
it looks like everyone is questioning vaccines.
Yeah.
And then, on the flip side, Jubilee had an episode where I believe there was someone
discussing in the center of Palestinian resistance,
and the other people were trying to be pro-Gaza.
And I think that person was anti-Gaza,
some kind of discrepancy in the Gaza belief system
of which side is wrong.
And that episode garnered less than a million views
in two weeks or something.
Yet if you look at social media,
it seems like the conversation is only around Gaza,
but yet people aren't watching that.
So the conversation seems very anti-vaccine,
yet all the comments are very pro-vaccine.
There are some that are anti.
The conversation around Gaza seems to be
at the forefront of social media,
yet the viewership isn't there.
And I know the viewership number is the most accurate view
because that's the monetized view.
So they're going to make sure that there's no bots in those views.
But they don't really care about the comments
or what's going on on Twitter discourse.
So does that show that perhaps our view
of what the discussions actually are,
our sample size perhaps is wrong,
that there's some kind of discrepancy
between what the actual belief system
is versus what is shown on social media?
Yeah, I think in a lot of issues, yes.
And I think that this is something that is very, very difficult.
I think one thing I hear from people who are pollsters is that
polling is also significantly broken right now.
The question of how do you reach different demographics when, you know, people under 40
don't pick up their phones, right?
How often do you pick up your phone when you see a number you don't know?
Never. How do you, how do you understand, this is why you see these constant articles every time
there's an election like, wow, how surprised we all are, right, about whatever, you know,
I think even the New York primary last night, you're seeing some articles about people being
surprised about different pockets of the city, different demographics of the city, things like this.
So there's that question of how do we know what public opinion is? Do we know what public opinion
is? How do you gauge it? Then there's questions about authenticity of comments.
online. Again, how many of the commenters are real versus not real? Do, you know, do certain
governments flood comment zones about certain topics at certain times? Yes, they do. Is that a
common thing? Really depends on how well certain platforms police it. God knows what's happening
on X right now. YouTube, I really don't know. Then there's questions about what comments are
upranked, which also creates different types of perceptions, right? Because there is no such thing as a
neutral feed. And what information is curated for you is very personalized. So there is what's
called a majority illusion, right, where depending on who you talk to, what content you see, what
comments you read, you're going to form a perception of what a majority opinion is or where people
are, based on what is around you. This is where, again, you see people who are very, very surprised
to learn that there are entire, you know, pockets of opinion that they never
encounter.
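The majority illusion she mentions can be sketched in a toy network: if the few highly connected accounts share an opinion, most users' local view over-represents it. The star topology and numbers below are illustrative assumptions, not data from the episode.

```python
from collections import Counter

# Toy star network: node 0 is a high-degree hub holding opinion "A";
# nodes 1-9 hold "B" and are each connected only to the hub.
opinions = {0: "A", **{i: "B" for i in range(1, 10)}}
edges = [(0, i) for i in range(1, 10)]

neighbors = {n: set() for n in opinions}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

# Globally, "A" is a small minority.
global_share_A = Counter(opinions.values())["A"] / len(opinions)

# But count how many users see "A" as the majority among their own neighbors.
fooled = sum(
    1 for n in opinions
    if sum(opinions[m] == "A" for m in neighbors[n]) > len(neighbors[n]) / 2
)
print(f"Globally, A is held by {global_share_A:.0%} of accounts")        # 10%
print(f"But {fooled} of {len(opinions)} users see A as the local majority")  # 9 of 10
```

One well-connected account is enough to make its opinion look dominant from almost every seat in the network, which is her point about perception versus reality.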
This actually was very common during COVID, depending on like what bubbles you found yourself
in.
It's very common and highly divisive political topics where an algorithm decides that you're
going to want to see something because you're a particular type of person and you start getting
a whole lot of content curated for you.
TikTok is notorious for this, because it doesn't even really matter
who you're following. It's just, you know, so much of it is what's called unconnected content,
the algorithm just deciding that this is something serendipitously you might like to see.
And your experience is so fundamentally different than somebody who's sitting right next to you.
So this, it can be quite different on certain types of issues.
Again, even 10 years ago, when we were doing this vaccine bill in California, I wrote up,
I presented this at the CDC at a conference at the CDC when they were like,
how did you guys get this bill passed in California?
And we did a little, you know, we gathered a bunch of Twitter data from the hashtag for
the bill, which was SB277, the hashtag for the bill.
And, you know, the hashtag was overwhelmingly anti-vaccine.
And yet there were things like, I don't remember the stats off the top of my head now, because it
was a decade ago, but, you know, some very, very, very small number of accounts produced,
you know, 25% of the tweets, right? Just that there's this, like, 90-9-1 rule in social media:
90% of the accounts do nothing, nine percent produce some of the content, one percent
produce the majority. So that question of, like, who were the highly vocal accounts,
what are they talking about? That really shapes your sense of where opinions are,
what is normal, what is appropriate.
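The 90-9-1 participation rule she describes can be shown with a quick back-of-the-envelope simulation. The posting rates below are assumed for illustration, not measured values from any platform.

```python
import random

# Illustrative assumption: 1% of accounts post heavily, 9% post occasionally,
# 90% post nothing. Rates are made up to show the shape of the distribution.
random.seed(0)

N = 10_000
posts = []
for account in range(N):
    r = account / N
    if r < 0.01:       # the hyperactive 1%
        count = random.randint(80, 120)
    elif r < 0.10:     # the occasional 9%
        count = random.randint(1, 5)
    else:              # the silent 90%
        count = 0
    posts.append(count)

total = sum(posts)
top_1pct = sum(posts[: N // 100])
print(f"The top 1% of accounts wrote {top_1pct / total:.0%} of all posts")
```

With these assumed rates, the most active 1% ends up writing well over half of everything visible, which is exactly why a hashtag's tone can be so unrepresentative of actual opinion.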
So there's just a lot of different ways in which you can have a, you know, a perception
that is not in line with reality when small numbers of accounts shape what you see, algorithms
curate and rank it for you.
And then large numbers of people either don't talk or, you know, quietly go about their day
and don't weigh in.
Just acknowledge that it happened on their own and don't flag it.
And you see 85% of people vaccinate their kids.
And in California, I think the senators at the time were doing polling for the bill,
and it was polling quite high at the time in California.
And social media conversation was overwhelmingly negative.
Because again, anti-vaxxers from all over the country were participating in the SB277 hashtag.
So Texas, you know, all of the, because they're very, very good at,
galvanizing, yeah, mobilizing. And so you had a lot of people who were not Californians
weighing in on a California bill, and that also tilts the perception of the conversation.
Yeah, I think public health experts have really done a disservice by shunning social media.
And I understood, around the timeframe that you were talking about, this anti-vaccine bill
in California, I wrote an article for the American Academy of Family Physicians saying it's the absence of us
online that's going to be our downfall. And it really turned out to be that way because when the
pandemic hit, the video trending on YouTube was me. It shouldn't have been me. It really,
it shouldn't have been me giving updates. I think I wrote about that video, your debunk of
Plandemic. Yeah. Well, that and I mean, we were putting up so much content that we were getting
COVID fatigued from putting up content all the time. But it shouldn't have been me. It should have
been the CDC. Well, let me push back on that because I can give you my perspective on that.
So we did this thing called the Virality Project during COVID where we were studying rumors that went
viral about the vaccine. And I think just prior to that starting, prior to the vaccine, right,
because Plandemic was prior to the vaccine rollout. Plandemic was when the lockdowns were starting.
It was like the beaches were closed in California. I remember this because she talks about the
healing microbes in the sand. Yes.
Ah, her. Okay, so when we started looking at the COVID conversation, we actually started looking at it internationally. We were very curious about how different governments were talking about it because we were looking at, me specifically, I was looking at how China was messaging it because remember I mentioned I do a lot of work on state actors. China and the U.S. were having this kind of propaganda war about whether it originated in Fort Detrick.
That was actually a whole thing, the wolf warriors, which are these accounts that the foreign ministry of China runs, these social media, these Twitter accounts that are their diplomats actually.
But they tweet like shit posters.
And it is really something to see because American government accounts, prior to this administration, really didn't communicate like that.
So they're out there like, you know, why is the American government lying?
You know, COVID originated in Fort Detrick.
And what they would do is they would go
and they would grab these random American conspiracy theorists
who were like, COVID is a bio-weapon
that originated in Fort Detrick.
And they're like, look, some Americans are saying it.
And so it'd be these like Chinese government accounts
that were doing this.
And so I was kind of following this kind of propaganda war
that was happening.
So we're studying these state actor communications
and all of a sudden we start to see
Judy Mikovits, who is the
Plandemic lady, just starting to pop up in a whole lot of these conspiratorial communities that I was
paying attention to again.
Well, the world of Del Bigtree.
Yeah, yeah, yeah.
But she just started popping up because I happened to be looking at conspiracy theories that
were feeding into these like state communications.
And all of a sudden, you know, it became very clear, you know, she was being kind of promoted.
She had this Twitter account, new Twitter account.
It was growing by like leaps and bounds, didn't look particularly authentic, and was just clearly being kind of like boosted.
And there was like, you know, there was like a GoFundMe to sort of support somebody who was kind of doing some boosting.
I wound up writing about this.
And the thing that was so interesting was like, Plandemic drops.
And I wrote about this in the book.
And Mikki Willis, who was the director, you know, he made something that like checked all the boxes.
It was compelling. It had cinematography. It had like the right hooks from a storytelling standpoint.
And he also did the thing where he's like, they are going to take this down. So you need to
download it and you need to re-upload it and you need to get in on this because like it's going to
be you who breaks through the censorship, which is how you enlist your army to do the thing.
Right. And so I thought, okay, this is going to be everywhere.
This is going everywhere, you know.
And so we gathered data, right?
Because we're the Stanford Internet Observatory.
So we start gathering data.
We only gather open source data, but we could pull, at the time through CrowdTangle,
every mention of Plandemic, every share of it, every reshare of the video.
And again, I don't remember, I didn't prepare to talk about all the stats on it.
But this video, because then they do this ham-handed thing where YouTube takes it down
after it gets like 8 million views.
Right.
Like, okay, guys.
You know, because that's exactly what they want to happen, right?
Because then they get the secondary boost of the censorship story.
Terrible call.
So they take it down after 8 million views.
And then by this point, though, it's like up on BitChute.
And so we're pulling the BitChute links, looking for the URLs.
You can basically look at dispersion by looking at URLs or by looking at, you know,
kind of like hashes of screenshots and stuff.
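The URL side of that dispersion tracking amounts to normalizing links (dropping query-string share parameters) and counting where each canonical URL turns up. A minimal sketch with invented example domains and posts, not real data:

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical post texts mentioning re-uploads (illustrative only).
posts = [
    "watch before it's gone https://bitchute.example/video/abc123",
    "mirror here: https://bitchute.example/video/abc123?ref=share",
    "re-uploaded https://video.example/watch?v=xyz789",
]

def canonical(url: str) -> str:
    """Strip the query string so shares of the same upload group together."""
    p = urlparse(url)
    return f"{p.netloc}{p.path}"

# Pull every URL out of each post and normalize it.
found = [canonical(tok) for post in posts for tok in post.split() if tok.startswith("http")]
dispersion = Counter(found)
print(dispersion.most_common())
```

Counting canonical URLs this way lets the same video be recognized across mirrors and share links; frame or screenshot hashing extends the idea to re-uploads that don't share a URL at all.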
YouTube is trying to play whack-a-mole with re-uploads of the video and stuff at this point.
I don't off the top of my head remember what Facebook did.
Maybe they took it down too.
But this video is making the rounds, all these re-uploads of it everywhere on the internet.
Because not everybody is going to take it down.
This is the thing.
You cannot take something off the internet.
It's going to be up on some BitChute account somewhere.
So the BitChute uploads are going back up.
And it's making its way into like we can classify, roughly speaking,
what a group focuses on.
So it's normal to see anti-vaccine content in anti-vaccine groups or in QAnon groups.
Totally normal.
It happens all the time.
This is what they talk about.
But when you see Plandemic land in like the Corgi Club group or the like local neighborhood
group or the mom group or the other group with like, hey, have you guys seen this video?
What do you think of what she's saying?
That's when you realize that like this is something that needs a response.
right? This is something that really just resonated with people. And when you see how fast it's
happening, when you just watch the like the firestorm and, you know, I thought like, all right,
who is going to, who's going to do it? And the problem with the video was that it was about 30 some odd
minutes long, somewhere 25 to 30 minutes long. And it's a Gish gallop. She just goes, right? And for those
who don't know, I imagine most of your audience is familiar with the term, but like, it's just a
series of false claims one after another after another, but there's so much in there that,
because she starts with her own personal aggrievement story with Anthony Fauci, which means that
anybody doing a debunking is going to have to figure out the he-said-she-said of like the first
five minutes of like he censored my research and got me fired and destroyed my livelihood.
And so she seems like a, you know, a victim.
A wronged woman, yeah.
And she has credentials, you know, she was an NIH scientist, so she seems authoritative.
There's so much that is layered into that video.
And so I said, okay, we're going to, we're going to data gather, we're going to URL gather on all the fact checks too, right?
And so they take about 36 hours to come out, right?
So we start gathering, like, NBC, science outlets, you know, all of them, PBS, PolitiFact.
And there's like, but they're written, which means that you have to read these like multi-thousand-word takes on it.
And then in this minute she says this, and then in this minute she says this, and in this minute.
So you were one of the only people who made a video response, right?
And when you look at the shares of the URLs, of the written fact checks, one of the things that's very interesting.
So not only do we gather where do the shares go,
so we can see again the types of groups,
public groups only,
that the shares wind up in.
When they wind up in the anti-vaccine public groups
or the QAnon public groups,
they're being hate-shared, right?
The comments under them are mostly like,
okay, guys, someone needs to go over to this page
and, like, flood the comments with the truth.
So it's not like there's a whole lot of receptivity
to the information.
Yeah.
So there's, it's not really.
It's like the pollsters that are calling people who aren't picking up their phones.
Yeah.
So there's so much that you can get from even these small case studies when you just look at,
does it land in the same places?
Honestly, no, it doesn't, because the other thing is like the fact check isn't making
its way back into the same Corgi Club, right?
The Corgi Club was captivated by the 30-minute cinematography of the rumor or false claim
or whatever,
not by the thousand-word, like, response.
And so you have a real challenge where just the structure of where the information winds up
becomes a very persistent issue: the format of the correction is usually uncompelling.
There's a huge time delay.
People have moved on to something else.
They're not going to see the correction.
And then the correction is going to be very boring.
So even when the CDC does come out with something,
unfortunately a lot of the time, and this was something that Facebook would occasionally say,
like in communication, because they occasionally consult with academics, like people like, you know,
SIO and others, what should our policy be? What should we be doing? You know, we talk to them.
They solicit opinions from a lot of people. And during, you know, these questions about,
how do you counter this kind of, you know, anti-vaccine content? I was always a big proponent of like,
you can't take it down, right, because you have this disastrous backfire effect.
What do you serve up in its place?
How do you reach people, the same people who've seen it,
who need to see the counter content?
And, you know, the honest thing that they would say a lot of the time is, like,
the CDC content is boring.
Nobody shares it.
It goes nowhere.
We can put it in a carousel at the top of the page,
but nobody engages with it.
And that's the honest-to-God truth.
Because when you're presenting stats versus a story of a wronged woman
and a government conspiracy to prevent you from the healing microbes on the beach,
it's just not, you know, you're not reaching people.
You've got like a spy story.
You've got like a novel over here.
And then you've got some boring over there.
Yeah.
That was a scary time because I was trying to do that with no script.
Right.
So I had like bullet points on a yellow pad
of just sections I wanted to cover.
And I remember we filmed it.
And I was just saying it to a camera, and it's hard talking to a camera.
So we messed it up.
I remember Dan.
I think we killed the first day shooting because I just said I'm so bad at it.
My armpits were soaked.
If you watch that video progressively, I get bigger, bigger pit stains.
And then when it came out, I remember there's a famous podcaster, Patrick Bet-David,
who is very, he loves
that style that I mentioned earlier of pitting people against each other. And it does really well for him
viewership-wise, like the us versus them mentality, no matter the topic. He called, he called me on
my cell one day. He said, do you want to debate Judy Mikovits? And I'm like, what? Like, who are you?
How are you calling me? And he's like, I represent this show. Would you want to debate her,
talk to her? And I'm like, maybe this could be interesting. Send me some information. You're calling
me in the middle of the hospital day. And they sent the information.
And I realized, like, there's no debate to be had.
She's talking about her personal story, which I have nothing to say about.
I wasn't there.
And then the facts are in her imagination.
Like, me arguing against someone's imagination wasn't going to be helpful.
Plus, I would have to take time off work during the pandemic, when we're already
short-staffed.
So I said time-wise, we couldn't do it.
And then all this hate came my way.
You're scared to face the truth.
And I'm like, you can keep making all this content.
I'm not scared to face the truth.
It's just there's no benefit to doing
this. And if you were being honest with yourself, you'd see that, but you just want to make
content, which is fine. Go make content. But that's not the truth. And I think the more we air that
out to people and allow them to see the transparency behind it, I think people have a good
internal BS detector if they get all the facts. If you hide the facts or you only allow them
to see one side of the facts because it's way more compelling than the other, that's where
you run into trouble. I think that it is the importance of, like, getting the facts out
there and making them discoverable, and that the two-day gap is one of the real hard parts about this.
This is where I do wonder, like, with AI-generated video, you know, does that time gap close?
I mean, there's going to be a lot of like really disastrous ways in which that is used to generate the crap.
Sure.
But this is also the question of, you know, how do you close that
gap faster? Because right now the void is the problem, because when people are
searching for Plandemic, when people are searching for microbes in the
sand or whatever, that's, you know, the data void. The term in, you know, social
media research just refers to a keyword where the, you know, the search
engine can only return what it has available to it, right? And now this is
actually becoming increasingly challenging because as you have AI answer
engines up top,
they are trying to synthesize something.
This is something that I know Google is trying to figure out what to do because you don't
want the AI answer engine to hallucinate some BS.
It used to show you occasionally a warning.
They call them topicality and freshness voids, right?
Where it would say like there are not very many results that meet the search term you're
looking for.
That was how it would let people know, kind of like, okay, you're looking for microbes in the sand,
but that's not really a thing that we have a lot of.
It would let people know that, you know, sometimes during a breaking event when a lot of people
are searching for somebody's name or something like that, it's just like, sorry, we don't have
anything to give you yet. This is trying to let people, help people realize something that I think
is very challenging in the age of social media, which is it takes time to get information.
It takes time to gather facts. It takes time to know truth. And we expect that to be available to
us the minute we want it, right? The minute we search for it. And there is a delay there. And this was
something that was, when we were doing the Virality Project during COVID, you know, I don't really
use the term misinformation very often anymore, except in occasional cases where there is a clear
fact there, right? Vaccines don't cause autism. But when we talk about, like when we were doing the
work on the COVID vaccine, rumor is much more the appropriate term because we didn't know,
with a side effect, whether this was a true or false claim, right? We had no answer. And what I saw
our role as was like, there are people concerned about a thing. Everybody is searching for it
on social media. I'm certainly not equipped to answer this, right? So what we're going to do is we're
going to, meanwhile, there are people like you, right, or scientists, public health officials,
whose job is not to study social network narrative trends or spread, right? Who, unfortunately, like,
if you were to just pull out your phone or look at your social media account, you're going to have
maybe an incorrect perception of what the public cares about, again, because you're going to see
what is curated for you as opposed to more of a holistic, you know, informed view of like the
entirety of the landscape. So if I can say this is something that has broken out and is in the
Corgi Club site, right? This is what actually needs to be addressed. If we can surface that for public
health communicators. Like Dr. Mike, if you can fact-check Plandemic while using corgis as an example.
Yeah, or just like this is what people care about. Like, this is where you guys need to be.
Also, it would be great if you could be there within like, you know, 12 hours, six hours maybe,
too. It would be great. That's where, you know, we worked with this group called This Is Our
Shot, which was just a group of physicians that the California Medical Association
and some folks had pulled together. And just, you know, just
doctors out in California, and I think that they grew, they expanded from there.
They were just doctors who wanted to use their own social media presence to put out
counter content, basically, like, hey, you guys are concerned about magnetism, right?
Let us talk about this.
Whereas the CDC is going to go through 10 layers of comms.
It's going to take, you know, two days to get anything out the door.
It's going to come out in some boring PDF or some lame meme, whereas these doctors can be on it.
And that was a much more effective way of doing it.
So rather than looking at misinformation, we were like, okay, these things are rumors.
This is people who have a concern, they're afraid.
They want to know the truth.
The truth is unknowable in this moment.
But somebody should be responding and telling them that the truth is unknowable in this
moment and at least giving them something where they're like, look, we actually can't know.
Right? We can't. But here's our best understanding based on the information available to us,
whether that's based on some past disease, some, you know, understanding of other symptoms,
whatever it is that you guys have available to you. But for the love of God, like, be in the
conversation. And that was really the, that was the kind of main takeaway from the project as we
wrote it up was just, we need to close that time gap. You got to be there. Yeah.
Speaking of controversial podcast hosts,
what are your feelings on Joe Rogan? I know you've appeared on the program. I did. Yeah, I did it in
2019. So I went on Joe's podcast. How did that happen? Through Sam Harris. I think he heard me on
Sam. So when I did the Senate Russia work, so I was asked to do that work as an outside researcher
and I was asked by both Senator Warner and Senator Burr. This is important because in right-wing
media, they wrote that I was like some Democrat-Clintonite appointee tapped by the Dems
to do this. No. I had a bipartisan appointee role there. It was not partisan work at all, actually.
I was among a bunch of outside academics asked to look at this data set.
And I wrote what I think to this day is a very dry and boring report analyzing this massive
data set that the platforms identified and turned over to the Senate of evidence of the Russian
Internet Research Agency creating propaganda and pretending to be Americans on their platforms
from 2015 to mid-2017 or so.
So the election happened to be in that time, but the operation was bigger than that.
And so I did, you know, your basic sort of statistical analyses, descriptive analyses,
like here's what happened.
The report was released to the public
and then the Senate used it to inform
their very, very large reports
which intersected
with Mueller's investigations and a bunch of other things, right?
So I was not asked to look at,
did it swing the election?
I didn't have data that equipped me to do that.
It was really, and I don't think it did for the record,
but that was just the work that I was asked to do.
So I talked about the report on Sam Harris
and then Joe reached out.
And he said, you know, will you come on?
And, you know, I was a little bit wary.
I wanted to reach his audience, though,
because, like, it had become so partisan by that point
because there was the collusion investigation.
Nothing I did had anything to do
with the collusion investigation.
Because, again, we should be able to hold two ideas
in our heads at once,
which is the interference happened.
it is real. It is a thing that we should want to understand, right? Foreign propagandists
interfering in the American discourse is a thing that will continue to happen forever. And is it the
most serious thing? It is not the most serious thing. Is it serious? Yes. If we don't treat it
seriously, they're only going to get better at it. So why wouldn't we want to understand what it is,
how it works, how to be aware, and how to disrupt it? This is not novel, right? Reagan in the 80s
pulled together the active measures working group to study Russian interference in the discourse
during the Cold War, right? So I said, okay, I will come on. And he was very candid and open and,
you know, quite nice. I mean, he's very gracious and friendly. And he was like, half my audience is
going to hate you. I was like, yeah, you know, like we knew what it was going to be going into it.
You could still see downvotes on YouTube at the time. And I was totally 50% downvoted, right?
You know, I thought, though, this is important to do, because, like, what good does it do me to only tell
Democrats or left-wing audiences what happened here? I think it's very important that the right hear
this also. If they trust this person, and this person is asking me questions, and I didn't, like,
there's nothing that's off the table, I'm going to sit here for two hours, you can ask me whatever
you want, then why would I, why would I turn that
down. I wound up getting more flack from, well, okay, I got a lot of flack from his audience for doing it
because they wanted to scream at me about Russiagate. Russiagate is a term that means nothing at this
point. Like everything is Russiagate. Russiagate just means, like, things I don't like. This podcast is
like a gate because I'm Russian. So, you know, so I got, like, Russiagate, whatever,
whatever that word means at this point. But then I also got like crap from the left for going on.
and that I found annoying because, you know, like, you got to, what, again, people need to hear the message.
And the fact that somebody hosts an unpalatable guest at a different time, like, that doesn't mean that I have magically absorbed the opinions of the unpalatable guest.
I find that that kind of like that sort of purity testing very frustrating, actually.
I wish that we didn't do it.
I think that all it does is prevent us from reaching people with messages that are important.
And so I really do make a concerted effort to engage.
And I actually do, you know, I've done Megyn Kelly, you know, also.
And I try when I'm invited if I feel like I'll have a good faith conversation.
I don't do the gotcha ones.
Like I'm not interested, per your point, in something where it's just going to be like a miserable experience of, like, fighting BS facts.
But I've done debates where I feel like if I can guarantee that the moderator is informed and will do the fact check,
then I actually find them generally quite enjoyable, actually.
And per the point, the clips go viral.
People want to see that stuff.
They like to see the, you know, they like to see the fight.
Although traditional media is kind of weird about that.
Case in point, the Jubilee episode I mentioned with the vaccine conversation I had, it did really well.
The episode before it with Sam Seder and the episode after it, I forgot which one followed it,
all got press because they were kind of politicized topics, I guess, to some degree.
But they did less viewership.
Ours got zero.
Not one rewrite, not one article.
not one, like look at doctors.
And it was so foreign to me, and I realized why.
I didn't start any arguments.
Yeah.
There were no tense exchanges.
It was peaceful.
It was empathetic.
It was kind.
No one wanted kindness.
Yeah.
I think I have done some where it's gone well for me and not the brawler.
But because I don't clip and
do like, you know, Renee owns so and so. It's useless to me. You mentioned, like we were talking a
little bit earlier about people who form the parasocial relationships. The one area where I think
the parasocial relationship really comes in handy, though, is when you do get attacked and
that fandom is there. Yeah. And if you don't have that, you do notice it when the mob comes
for you. Yes. Because that's where it matters, when you do find yourself in the situation where somebody
clips the, like, she got owned, or, like, you know, um, the political BS that I wound up having to go through
with, like, you know, my work getting investigated by Congress, or, I, you know, I got sued by Stephen
Miller, and I found out on Twitter that I got sued by Stephen Miller when Breitbart tweeted it at me,
right? I was like, is this even real? You know, am I actually sued? Um, but when you find out you're getting
sued on Twitter, it turns out an entire mob of people is there to gloat about it. And these are the
moments when you're like, where is my parasocial relationship now? No, I feel that, where that's coming from.
So since you did your Joe Rogan appearance in 2019, a lot happened after that with the pandemic.
Did you reach out to him during any of that time? Were there any conversations about correcting
some misinformation? I reached out to him after Michael Schellenberger went on his show.
and said that I was like, I had like, so for those who don't know, there was the Twitter
Files, right? And Michael Schellenberger was a Twitter Files writer. And in the Twitter Files,
which was Elon kind of opening up Twitter's internal, you know, files, emails and everything,
there was this crazy conspiracy theory that was not supported by the Twitter Files, but this other
person, this other kind of right-wing crank, candidly, told Michael Schellenberger and Matt Taibbi,
another Twitter Files writer, that my team had somehow censored 22 million tweets during the 2020
election. What had actually happened was we had, after the election, written a report, and in our
report we described data gathering the most viral stories of the 2020, the 2020 kind of rumor mill,
and we described that when we added up the number of tweets, there were 22 million of them.
He reframed that deliberately and intentionally lied and said that we had censored 22 million of them,
and this lie was entered into the congressional record by Matt Taibbi and Michael Schellenberger.
And Schellenberger went on Rogan to say this.
And to say that I, because I had interned for the CIA when I was a kid during college,
went on and said that I was like, I had risen to the highest levels of the CIA.
They had gotten me my job at Stanford.
and I had been like...
This is Putin-esque Russia gate level.
It was surreal.
It was surreal.
And I was like, he goes on about me for like three minutes there.
And I was just like, did this man just tell 12 million people that I rose to the highest levels of the CIA and all this other stuff.
And then he just, like, goes down this, like, personal-stuff bunny trail.
Like, well, I'm sure she's a good mom, but.
And I was like, whoa.
So I sent a note.
And I was like, Jamie, just for the, just for the record, like, none of that is, I mean, aside from the good mom, like, that's true.
But the rest of that was all bullshit.
And I, you know, I got a note back, but like, he didn't invite me on to, like, give a counterpoint.
And honestly, by that point, this was around the time that, per Breitbart, Stephen Miller sued me.
And one of the things that happens, it turns out, when you are sued is that your lawyers
tell you, shut up and say nothing.
And so since Stephen Miller's
America First Legal
was suing me personally by name, in addition to
Stanford and a couple of my other colleagues,
alleging that we had censored 20 million
tweets as agents of the government and all this other
insane stuff.
Like,
I couldn't, I can't sit
for two hours
and talk about
an open, pending
court case, which, by the way, two years later, is still going. Because this is a really great way,
it turns out, to shut somebody up, which is to file a vexatious lawsuit, you know, in a jurisdiction
with a friendly judge that doesn't immediately toss it and to just, like, bury you in paper and
proceedings and tie you up for two years. So I cannot talk to you about the, sorry, I cannot talk
about the specifics of the pending litigation is the sentence that you are taught to memorize.
Wow. So messy. And then what happened?
with the Stanford Internet Observatory? What happened there? So the university decided that
doing election research was a liability. That studying American elections was too much of a liability
for American researchers and that we were no longer going to do it. And so, you know, the,
so our contracts were not renewed. And that was one year ago this month. It was kind of like
the canary in the coal mine. I wrote a New York Times op-ed about it. Jim Jordan had,
you know, after these allegations were made that we had censored 20 million tweets, we got a letter
from Jim Jordan the next day saying he was opening an investigation into the vast censorship
cabal, you know, the Bidenite censorship cabal. By the way, the Trump administration ran the government
in 2020, but nonetheless, the Biden censorship cabal that had interfered and somehow stolen the
2020 election. Because that was like where they took it. It went from like they censored tweets in the
2020 election to, you know, these people at Stanford colluded with the Department of Homeland
Security and the FBI and all these other entities to steal the election from Trump. One of the
things that they do is like they reframe. In this case, it was people who had participated in claiming
that the election was stolen, like Jim Jordan, ran these investigations into those of us who
had investigated and very carefully and meticulously documented the rumor mill and the propaganda
machine around the 2020 election.
Again, in these reports that were done publicly at the time, not government-funded research,
you know, all of the, we had a Twitter account live at the time, a blog, a 220-page report
that sat on the internet for two years.
All of this stuff was, we thought, quite transparent.
Nonetheless, it turned into subpoenas and lawsuits and two years of investigations.
Stanford paid, you know, I can say this because it came out in congressional testimony,
four million or so in legal fees.
That's not a lot for Stanford, but, you know, it's a lot in general.
And ultimately decided that it, that this was, you know, not worth it maybe.
Even though you were looking at more than just politics, right?
because you were looking at health care.
This is the problem.
This is the thing, right?
So the election work, like I mentioned at the start of our chat,
we were doing child safety work, spam, scam, state actors.
I mean, AI research.
Yeah, like I had the FBI sitting here not too long ago from Minneapolis
talking about cyber ransom for children.
We did a ransomware project.
I mean, so much child safety and CSAM research, just to, you know,
I mean, we did research into CSAM and AI training data sets.
that led to major training data sets coming down and being cleaned.
I mean, the work that we did at SIO was so important,
and because the people who worked to claim that the 2020 election was stolen,
needed a scapegoat and wanted to tear down entities that they saw
as presenting an obstacle to this sort of continued, you know,
to continued dominance, right?
They came for us and they did it quite successfully and effectively.
And unfortunately, that was the way that it went.
And you can now see, I think, in what has happened,
what has transpired in the first six months of this year,
ways in which academia is vulnerable to attacks on, you know,
First Amendment-protected research if it becomes politically inexpedient.
So when Stanford said that it was not going to do this work,
it said it would reorient the observatory,
it said that it was not going to seek funding for that kind of work anymore.
You know, it didn't renew my contract,
a couple of others of us didn't have our contracts renewed.
Again, that was in June of last year.
Ultimately, the Internet Observatory doesn't exist anymore now.
And I think, unfortunately, again, the sacrifice of all of the other areas,
because the election work was politically inexpedient for powerful people,
meant that they, you know, they backed away from it.
They are continuing to fight the legal fight, and I'm very grateful for that.
So the court cases, as I mentioned, continue.
And I'm happy that they continue to stand up for us on that front.
This is not the first time that political machines have come for academics.
I know Michael Mann has written about this and how it came for climate scientists.
And certainly, you know, vaccine researchers, Dr. Hotez has spoken about this.
It's not the first time, unfortunately, it's that these types of attacks are effective,
and they create a chilling effect.
And after Stanford backed out, Jim Jordan took a victory lap and said that he had exercised
robust oversight over Stanford University.
This is kind of a wild claim for an American congressman to make over the, again,
not government-funded, First Amendment-protected research of an American research institution,
but here we are.
Do you think any institution is going to pick up the cause?
I think that a lot of the chilling effect was caused by the fact that we spoke with state and local election officials during our work.
So the same way that we spoke with public health officials during the vaccine work to say, hey, here's a viral rumor,
it'd be great, you know, if you guys would respond to this.
We spoke with election officials.
Hey, Sharpie markers in Arizona.
This is a very big deal, right?
Somebody should be responding to that.
When you try to interrupt the rumor mill,
not by arguing that the stuff should come down,
not by censoring it, ironically, in fact,
but by saying somebody really needs to be labeling this,
somebody needs to be getting the facts out,
somebody needs to be doing that faster,
those communication channels were reframed
as some kind of collusion.
And so it created a chilling effect.
So, I certainly wasn't working in the space in 2024 because I had just been not renewed.
But, you know, I think that there are still people who are doing pieces of the research.
You know, I know Kate Starbird's team at University of Washington continued to track and report on viral rumors in 2024.
But that question of whether they're communicating them to the state and local election officials who then
respond quickly, it's those connections, right? Or how quickly are platforms seeing that and
deciding to respond? You have to realize also that platforms were subpoenaed as well. Platforms'
moderation efforts were also reframed as censorship. Fact-checking and labeling, too,
were reframed as censorship. So when you reframe fact-checking as censorship,
and then the platforms say, like, okay, we'll be totally hands-off, we'll do nothing, then you have
created an environment that advantages liars. You've created an environment that advantages people
who are willing to do, you know, where the end justifies the means, meaning, you know, we are going
to essentially make it such that anything, any response at all is seen as egregious tyranny
and censorship, and so we want to make it so that there is no response. There is nothing that
happens here, and that is where I think we are at this point. You've seen the government
dismantle its efforts to track foreign interference. A lot of the task forces within the FBI,
DHS, State Department and the intelligence community that handled this kind of stuff within
the government that were started in the first Trump administration,
have now been dismantled under the second Trump administration.
So it's not entirely clear.
You know, we just engaged in a conflict with Iran, right?
We just bombed Iranian reactors.
I'm kind of curious where the capacity is to look for Iranian influence in cyber operations now.
Because we've dismantled a lot of it in response to the allegation that all of the
actions to look for those accounts were secretly biased against conservatives or secretly trying to
stifle Trump's speech. It's really, I know it's a very complicated thing for people to follow.
And this is where anytime I try to explain it or get into it, even in conversations like
this, you feel vaguely like, you know, you sound like you're explaining a conspiracy theory
because it is such a complicated set of pieces of a machine
that really came together to make it a liability for different types of stakeholders in ensuring free and fair elections to talk to each other.
or for different types of stakeholders ahead of a next pandemic, what social media entity is going to want to communicate with the Surgeon General's Office?
Nobody. What academic researcher is going to want to communicate with the Surgeon General's Office,
knowing that that communication is going to be reframed as some sort of weird secret collusion cabal.
Sure.
This is the chilling effect, I think, that's going to happen as a result of some of what happened
over the last two years.
For Invisible Rulers, what person should read that book?
What would they get from reading the book?
So I wrote it for general audiences in part because it's the agency of the individual, I think,
that really matters a lot, right?
It's making ordinary people aware of like, you know,
how is the game played?
How does the system work?
Why are influencers so persuasive and so interesting?
How does the algorithm curate things for you?
What are the ways in which it operates,
but not in complicated technical terms,
and in, I think, interesting ways of framing it?
How does this then get picked up by political machines? How did it work or not work during COVID? Why are institutions
so bad at it? So anybody who works at an institution should read it. Anybody who wants to
understand, I think, how public opinion is shaped and, like, how the sausage is made.
You know, one of the things that I got very excited about was when the Blake Lively story broke
because I thought like, oh my God, people are talking about it with celebrities.
Now people will understand how like very subtle things that are planted or that
influencers say or ways that information trades up the chain about specific people really can
just twist perceptions.
So that makes sense.
Do you have any questions for me as an influencer?
I'm curious.
Do you hate that term?
I used to.
And then I came to the realization that, like, everyone's an influencer.
If you're a professor, you're influencing your students.
If you're a parent, you're influencing your kids.
So it's just on a different scale.
What do you think people in academia or institutions should be doing?
I think they should be forming collaborations with people like myself.
Like you were saying during the pandemic issue, the CDC would have put out some boring report.
But had the CDC hired us, hired me,
they could have had all of that
and I would have happily given it to them
because I would want to do it with a reputable institution.
How many times have I reached out?
I have a course on educating experts
on how to succeed in social media.
It's everything I've learned in my playbook.
And I've reached out to institutions
and said, hey, I'll give you this course.
I don't even need money.
Can you just call me a professor,
an adjunct professor, something in your institution?
No one is willing to take on the liability of that.
Because the liability is the scariest part for everyone working with an influencer.
Well, that, that I think is an interesting point, right?
So let me tell you what happened after the Rogan appearance.
Because my response to the Schellenberger Rogan appearance was, okay, this is a huge void.
Stanford's comms team, they were like, don't worry, it's not the Washington Post.
and I just about died.
Yeah, there you go.
Because I thought, like, no one sentence has ever been so evocative of the difference
between institutional comms and, like, internet-native comms, right?
In that there is such a difference in what they see as, like, legitimate or meaningful or
having reach or carrying impact. And I said, you know, I know Michael Schellenberger. I've
been emailing with him for months, actually, and I'm going to drop all of my communications with
him on Substack, all my emails, everything. I want people to see what I actually said, what I
actually talked to him about, and then they can compare that with what he just said on Rogan,
and they can make up their own mind. I'm not going to go back and forth. There's no reason to do
that, but I want there to not be like a complete void response for me. And that was what I did.
And that got play on X and also when you search, that comes up, right?
And it was important to me to have my side out there in some way.
But this was an example, I think, of the complete, you know, the old comms playbook for an institution is you say nothing, you let the media cycle pass, you
bear more liability by talking. You can only make things worse if you speak. And it is the complete
opposite. They really believe there is a press cycle that goes by. And when you try to explain that
there is no such thing as a press cycle, right? You have been made into a character in a cinematic universe
and they will keep returning to you over and over and over again every time you are useful.
You're typecast.
Yeah. And you just became, like, the CIA plant at Stanford University.
Like, my God, that's a gold mine.
Like, I'm not going away.
And trying to, but again, per your point, the liability, right?
Like, this is something where I don't know how to break through sometimes with institutions on this front, even as a person within one.
And so bringing in somebody from outside, I imagine that is also seen as, you know, unpredictable.
because then their brand becomes associated with somebody else's brand.
And then per the point about, you know, what third-party person did you have on who said an unsavory thing once 10 years ago on somebody else's podcast?
And that winds up being, I think, that fear of unmanageability, right?
They can't protect the brand in a tight way and it paralyzes them.
And that paralysis, which is actually worse, and they don't realize it, but that paralysis
is what prevents them from moving forward.
And until we can have people in those comms roles
that understand that the paralysis is worse,
I feel like the time, unfortunately,
and I understand that the court cases and the lawsuits
create significant challenges,
and I understand that that is a tactic
that is used to put people in that jeopardy. But when these moments happen and these very, very high-stakes rumors and political
propaganda campaigns are happening, if you are not in the conversation, if you are not defending
yourself, institutions collapse as a result. They just can't, they're not in the fight.
Yeah. We were ragging on academic institutions here, but it's also old-school streamers.
NBC, when we're pitching them shows, they don't get what social media is.
Advertisers will come in and want you to repeat a slogan.
I'm like, that's not what the audience wants.
You're creating a fake interaction in a place where usually transparent interaction is happening.
So I think a lot of people are just behind the times in these positions.
And I think you're going to see a swap.
I think things have to break usually to see a swap.
And we're starting to see that in the healthcare segment
with vaccinations, clinics closing, research shutting down.
And when other countries are going to be having the breakthroughs and we're not,
I think people will start waking up and realizing, oh, man, we're losing revenue share.
And when revenue shares dip, that's when you see action and change.
I've been really encouraged by how many more physicians and public health folks
and epidemiologists I've seen participating in the conversation on social.
that has been huge.
I mean, it was so different during the Texas measles outbreak
and what is happening now.
I think also with RFK Jr. being in the government, right,
which is very weird to say,
it creates an opportunity to have that counterpoint.
One thing that I have noticed is, I think, there's still a sense of waiting for permission a little bit.
I really like seeing, for example, the members of the former ACIP, right, the vaccine board that was kind of fired, speaking up a little bit, even as, you know, like a shadow ACIP almost, just speaking out about what is about to happen or what they've heard.
They recently put out a little stance on what they think is about to happen.
But I want to see more.
Yeah, I mean, you could even do it on a Substack, right?
Like, there's ways to create this content, to continue to get messages out,
to continue to be that voice, to speak and to reach the public without necessarily the formal
institutional imprimatur, but to have that influence.
Cool.
Well, where can people follow along on your journey?
I guess Substack, the newsletter, Bluesky. X, but not so much.
I don't know.
I go back and forth about being on there again.
I think reneediresta.com has my various socials, the book, and all that good stuff.
Cool.
Well, look, thank you so much for your work.
And I'm excited to see where the future goes.
I'm an optimistic one.
Thank you.
Me too.
If you like this episode, you might also like my interview with Dr. Sanjay Gupta about how he's managed to balance
medicine and journalism in his decades working at CNN.
Scroll on back to check out that episode.
Also, please, if you enjoyed, give us a five-star review.
leave us a comment telling us what you enjoyed,
as it's the best way to help bring in new listeners.
And as always, stay happy and healthy.
