The Highwire with Del Bigtree - SPECIAL REPORT | HEARING ON “PRESERVING FREEDOM AND REINING IN BIG TECH CENSORSHIP”

Episode Date: April 10, 2023

SPECIAL REPORT: HOUSE ENERGY AND COMMERCE COMMITTEE’S COMMUNICATIONS & TECH SUBCOMMITTEE HOLDS HEARING TITLED, “PRESERVING FREE SPEECH AND REINING IN BIG TECH CENSORSHIP”

WITNESSES:
Mr. Seth Dillon, CEO, The Babylon Bee
Dr. Jay Bhattacharya, M.D., Ph.D., Professor of Health Policy, Stanford University
Mr. Michael Shellenberger, Founder and President of Environmental Progress
Spencer Overton, Patricia Roberts Harris Research Professor, George Washington University Law School; President, The Joint Center for Political and Economic Studies

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-highwire-with-del-bigtree--3620606/support

Transcript
Starting point is 00:01:41 Come to order. The chair recognizes himself for an opening statement. Again, good morning and welcome to today's hearing on preserving free speech and reining in big tech censorship. I'd like to begin this hearing with a simple statement: free speech is the cornerstone of democracy. In fact, it's free speech that separates the United States from the monarchies of yesterday and the authoritarian governments of today. When we discuss the importance of free speech in the 21st century, it's impossible to ignore the large-scale online platforms on which our ideas are shared and heard most frequently: social media. For better or worse, social media has fundamentally changed the way we communicate.
Starting point is 00:02:22 It has allowed us to connect with people all over the world and express our thoughts to a wider audience than ever before. Its vast online reach extends from coast to coast and across almost all nations. But as social media companies have grown over the years, so has the influence of big tech. It's a scary truth, but these companies have become increasingly emboldened in their power to influence public debate. In fact, big tech companies have the ability to influence almost every part of our lives. They can determine what a user sees, hears, or learns, and can even target what they purchase online. Now, more than ever, we see online platforms engaging in the wrong types of content moderation. This includes removing
Starting point is 00:03:10 content of opposing viewpoints that aids in important public discourse and amplifying content that enables drug trafficking, promotes self-harm, and endangers children. In recent years, online platforms have had the capability to remove duly elected officials and block trusted news stories from emerging. When this type of censorship is used to silence dissenting voices, it can have a damaging effect on democracy and public discourse. At the dawn of the Internet, Section 230 of the Communications Decency Act provided vital protections for Internet startups to engage in content moderation and removal without fear of being sued
Starting point is 00:03:50 for content posted by their users. Section 230 has been the foundation of the modern Internet, allowing the Internet economy to bloom into what it has become today. However, Section 230 is outdated. The law was enacted in 1996, when print newspapers were delivered to nearly every household and before the creation of social media and the explosion of online content. It has been interpreted by the courts to provide a blanket liability shield to online platforms. As a result, it lacks the nuance needed to hold today's digital world accountable, especially
Starting point is 00:04:28 as the power of AI-backed algorithms continue to evolve. Big Tech's role in directing and amplifying the type of content that has served to users becoming increasingly apparent. While all tech companies should strive to uphold American values in their content moderation practices, not all tech companies face the same challenges. For instance, small businesses still need the protection of Section 230 to grow into vibrant members of the e-commerce community and to compete with the big tech community companies like Google and Facebook.
Starting point is 00:05:01 Small online businesses deserve the same benefit of protection that big tech companies received when they first started out. But as they grow, so does their responsibility to protect our kids and all their users across America. As this subcommittee continues to consider Section 230 reform legislation, we must strike a delicate balance. For too long, Big Tech platforms have acted like publishers instead of platforms for free speech and open dialogue, so they must be treated as such. I look forward to hearing from our witnesses and working with our colleagues to reform Section 230 so we can hold big tech accountable and preserve Americans' freedom of speech.
Starting point is 00:05:43 I thank you all for being here today, and at this time I yield five minutes to the ranking member of the subcommittee, the gentlelady from California, for five minutes. Thank you very much, Mr. Chairman. At last week's TikTok hearing, there was bipartisan concern about the rise in harmful content on the platform. While some of the examples highlighted by members were jarring, TikTok is by no means unique. This hearing provides another chance to explore those same concerns across the wider Internet ecosystem. The spread of misinformation, hate speech, and political extremism online has been meteoric. During the early days of the pandemic, hate speech targeting Chinese and other Asian Americans boomed.
Starting point is 00:06:25 One study from the AI company L1ght documented a 900% increase in hate speech targeting Chinese people and China. That same study showed that the amount of traffic going to specific posts and hate sites targeting Asians increased three-fold over the same period. But this increase wasn't limited to just racial motivations. Young people of all backgrounds have been subjected to some of the most appalling examples of cyberbullying and hate speech. There was also a 70 percent increase in the number of instances of hate speech between teens and children during the initial months of quarantine. But that's not all. Political extremism and dangerous conspiracy theories are also on the rise. A study by DoubleVerify, a digital media analytics company, found that inflammatory and misleading news increased 83% year over year during the 2020 U.S. presidential
Starting point is 00:07:23 election. And perhaps most disturbingly, hate speech tripled in the 10 days following the Capitol insurrection compared with the 10 days preceding that violence. The week after the Capitol insurrection, the volume of inflammatory politics and news content increased more than 20% week over week. So across all sectors, the amount of online speech related to political extremism, race-based violence, and the targeting of other protected classes is growing. The reason this increase is so concerning to me is because it rarely stays online only. A 2019 study by New York University analyzed more than 530 million tweets published between
Starting point is 00:08:09 2011 and 2016 to investigate the connection between online hate speech and real-world violence. Unsurprisingly, the study found that more targeted discriminatory tweets posted in a city correlated with a higher number of hate crimes. This backs similar findings from studies in the UK and Europe. This trend is backed up by the FBI's own real-world data on hate crimes, which show that the number has only increased. This escalation isn't a one-way problem. Social media platforms are taking daily steps to foment it and see that it reaches as many people as possible. The algorithms that promote harmful content to the users with whom it will resonate most have benefited from massive
Starting point is 00:08:59 investments in R&D and personnel. In many ways, these platforms are competing over the effectiveness of their respective algorithms. They represent a conscious choice by online platforms, and one that I believe means they must assume more responsibility and accountability for the content they're actively choosing to promote. In a 2020 academic article describing racial bias online, Professor Overton notes that through data collection and algorithms that identify which users see suppressive ads, social media companies make a material contribution to the illegal racial targeting.
Starting point is 00:09:41 This point is an important one. Online platforms are making regular and conscious contributions to the spread of harmful content. This isn't about ideological preferences, it's about profit. Simply put, online platforms amplify hateful and misleading content, because it makes them more money. And without a meaningful reorganization of their priorities, their behavior won't change. And that's where this subcommittee must step in. On a bipartisan basis, there is widespread agreement
Starting point is 00:10:13 that the protections outlined in Section 230 of the Communications Decency Act need to be modernized, because continuing to accept the status quo just isn't an option. Without bipartisan updates to Section 230, it is naive to think large online platforms will change their behavior. Their profit motive is too great and the structural oversight too weak. The discussion we will have at today's hearing is an important one, and one that I hope serves as a precursor to substantive bipartisan legislation. Section 230 needs to be reformed, and I'm ready to get to work. With that, I yield the remainder of my time. Thank you. The gentlelady yields back. The chair now recognizes the chair of the full committee, the gentlelady from Washington, for five minutes for an opening statement. Good morning and thank you, Mr. Chairman.
Starting point is 00:11:02 I want to begin today by celebrating why Americans cherish our most fundamental right of free speech. It is how we the people innovate, create new things, make our own arguments stronger, and engage in the battle of ideas to make our communities better. Perhaps most importantly, it is the strongest tool people have to hold the politically powerful accountable. It is why regimes across the world shut down free speech, arrest journalists, and limit people's rights to question authority. Free speech is foundational to democracy. It's foundational to America. Big Tech is shutting down free speech. Its authoritarian actions violate Americans' most fundamental rights to engage in the battle of ideas and hold the politically powerful accountable.
Starting point is 00:11:55 For the crime of posting content that doesn't fit the narrative they want people to see, hear, or believe, Big Tech is flagging, suppressing, and outright banning users from its platforms. Today we are joined by several of these people who have been silenced by Big Tech. They will have their voice before this subcommittee. Big Tech proactively amplifies its allies on the left while weakening any dissent, creating a silo, an echo chamber, a place where the only right ideas are those determined by the faceless algorithm or a few
Starting point is 00:12:34 corporate leaders. House Energy and Commerce Republicans have repeatedly condemned these censorship actions, even when challenges to mainstream media narratives turned out to be correct, as was the case with the Hunter Biden laptop story. What's worse is the government collusion with big tech companies to censor disfavored views and be the gatekeepers of truth. Who deserves to be the arbiters of truth? Big tech companies and government officials? That sounds like the actions taken by the Chinese Communist Party. We had the CEO of TikTok before this committee last week, where we exposed them for their ties to the Chinese Communist Party and the censorship TikTok does on its behalf. Let me be clear. Government
Starting point is 00:13:22 sponsored censorship has no place in our country. It never will. A healthy marketplace of ideas is integral to everyday American life and a healthy democracy. Social media is a place for us to connect with friends and a place where we should be able to share our views and learn from one another. Big tech companies in America have benefited from the liability protections given to them by Congress in 1996 under Section 230 of the Communications Decency Act. As a result, they should be a forum for public discourse and a place for people to openly debate all ideas. But instead, censorship on their platforms shuts down these debates and risks a long-lasting stain
Starting point is 00:14:10 on our society by undermining the spirit of our First Amendment. At the same time this censorship is happening, Big Tech is failing to invest in tools to protect our kids. Snapchat, TikTok, Instagram: their platforms are riddled with predators seeking to sell illicit drugs laced with fentanyl and exploit our innocent children. Over and over, I hear from parents who've lost a child due to targeted content by a social media platform. And yet instead of addressing this, Big Tech chooses to focus on shutting down certain speech. As I've said before, and I'll say it again, Big Tech remains my biggest fear as a parent, and they need to be held accountable for their actions. President Joe Biden and his
Starting point is 00:14:58 administration are on a dangerous authoritarian mission to institutionalize censorship of American voices and control the narrative to benefit their political agenda. They've admitted to flagging problematic content for big tech companies to censor: the CDC, the Surgeon General, the Department of Homeland Security, and... Are any of them working? Mine's not. Well, we know that they sought to establish a disinformation governance board within the Department of Homeland Security to monitor and censor Americans online.
Starting point is 00:16:05 This hearing provides us an opportunity to hear from those that have been silenced by Big Tech censorship. Americans must have their voices heard, and I look forward to hearing from our witnesses. Thank you, and I yield back. Well, thank you very much. The gentlelady yields back, and again, this is the Communications and Technology Subcommittee, and we can't get our mics to work. The chair now recognizes the ranking member of the full committee, the gentleman from New Jersey, for five minutes.
Starting point is 00:16:33 Thank you, Chairman Latta. I have to say that I am deeply disappointed with this hearing today. We could be having a serious discussion about the need to reform Section 230 of the Communications Decency Act. But instead, Republicans have chosen to focus on so-called big tech censorship. This hearing is nothing more than red meat for the extreme conservative press, who will certainly eat it up. They'll share it on social media, where studies show conservative voices are dominant. The voices of the Republican witnesses have been far from silenced. They're incredibly popular on big tech platforms.
Starting point is 00:17:10 They're featured in countless videos on YouTube and TikTok. They have books for sale on Amazon, websites, and email newsletters with paid subscribers. They're guests on popular podcasts and regularly appear on right-wing cable and streaming channels. Say what you want about them, but they certainly aren't censored. The Republican witnesses have engaged in pseudoscience to minimize the worsening climate crisis and seed dangerous ideas about COVID-19 and vaccines. One bankrolls another social media personality that he's called heroic for spewing vile, anti-
Starting point is 00:17:45 LGBTIQ hate, resulting in harassment, threats of violence, and intimidation across the country. And like the big tech platforms themselves, I'm sure they profit handsomely from the controversy. Now, that's not to say there isn't real censorship happening across the country, but it's not the Democrats or the tech platforms that are responsible. It's the Republicans. In fact, the Republican Party is responsible for some of the most egregious First Amendment violations and censorship that we've witnessed. witnessed in years. Republican-led states across the nation have considered bills that promote
Starting point is 00:18:20 censorship and threaten free speech, giving a vocal minority the power to impose their extreme beliefs on everyone else in their community. They've banned books about African American history, suppressed information about safe abortions, and demanded teachers don't say gay. Now, that's real censorship, in my opinion. What Republicans are trying to do here today is to force private companies to carry content that is misinformation or disinformation, dangerous or harmful. Companies have been moderating content since the beginning of the Internet, and research has repeatedly refuted Republican claims of an anti-conservative bias in that moderation. As I said, it's disappointing that we could not have had a serious discussion about Section
Starting point is 00:19:04 230 reform. We all seem to agree there's harmful content on these platforms that should be taken down. Last week at the TikTok hearing, we were deeply troubled when we saw an implied threat against the committee with the imagery of a gun. We also saw examples of disturbing videos, glorifying suicide and eating disorders, dangerous challenges leading to death, merciless bullying and harassment, graphic violence, and drug sales. And this terrible content is harmful to all of us, but particularly our kids. There's no doubt that Republicans and Democrats want social media platforms to better protect users from harmful content. We want to hold platforms accountable and bring about more transparency about how algorithms and content moderation processes work.
Starting point is 00:19:47 And of course, the details matter tremendously here. And that is why our inability to have a serious conversation today is so frustrating to me. Every day we allow courts to interpret Section 230 to indiscriminately shield platforms from liability for real-world harm. Every day like that is a day that further endangers our young people, our democracy, and our society as a whole. Now, Democrats today are going to try to have a productive conversation about these issues with our expert witnesses. But it's a shame that, in my opinion, our colleagues on the other side of the aisle are not going to join us in this endeavor. And with that, I yield back, Mr. Chairman. Thank you very much.
Starting point is 00:20:27 The gentleman yields back the balance of his time. The Chair reminds members that, pursuant to the committee rules, all members' opening statements will be made part of the record. Are there any members wishing to make an opening statement? Seeing none, I now would like to note for the witnesses that the timer light in front of you will turn yellow when you have one minute remaining of your five minutes, and it will turn red when your time has expired. We'll go down to our list of witnesses. Our first witness today is Seth Dillon, the CEO of the Babylon Bee, and I'm going to turn to the gentlelady from California's 16th District for an introduction.
Starting point is 00:21:15 Thank you, Mr. Chairman. Let me get to, well, I'm not going to go to my notes. My constituent, Dr. Bhattacharya, how do you pronounce your name? Bhattacharya, is a professor at Stanford, a critic of mine, which is very fair, and I've never attempted to censor anything he's had to say about me. But I want to welcome you and thank you for traveling across the country to be with us. So thank you, Mr. Chairman. Thank you very much.
Starting point is 00:21:51 The gentlelady yields back. Our next witness is Spencer Overton, who is the President of the Joint Center for Political and Economic Studies and Research Professor at George Washington Law School. Thank you for being with us. And our final witness is Michael Shellenberger, the founder and president of Environmental Progress, and we appreciate you being here. And Mr. Dillon, you will start for our witnesses today,
Starting point is 00:22:15 and you have five minutes. So thank you very much for being with us today. Thank you. Hopefully the mic's working there on your end. Do I have to turn? There we go, got it. I'm being censored. I want to start by thanking this committee for giving me the opportunity to speak today and for the willingness
Starting point is 00:22:35 of its members to address this important issue of censorship. My name is Seth Dillon. I'm the CEO of the Babylon Bee, a popular humor site that satirizes real-world events and public figures. Our experience with big tech censorship dates back to 2018, when Facebook started working with fact-checkers to crack down on the spread of misinformation. We published a headline that read, CNN purchases industrial-sized washing machine to spin the news before publication.
Starting point is 00:23:01 Snopes rated that story false, prompting Facebook to threaten us with a permanent ban. Since then, our jokes have been repeatedly fact-checked, flagged for hate speech, and removed for incitement to violence, resulting in a string of warnings and a drastic reduction in our reach. Even our email service has suspended us for spreading harmful misinformation. We found ourselves taking breaks from writing jokes to go on TV and defend our right to tell them in the first place. That's an awkward position to be in as humorists in a free society. Last year, we made a joke about Rachel Levine, a transgender health admiral in the Biden administration. USA Today had named Levine Woman of the Year, so we fired back in defense of women and sanity with this satirical headline.
Starting point is 00:23:39 The Babylon Bee's Man of the Year is Rachel Levine. Twitter was not amused. They locked our account for hateful conduct, and we spent the next eight months in Twitter jail. We learned the hard way that censorship guards the narrative, not the truth. In fact, it guards the narrative at the expense of the truth. All the more outrageous was Twitter's lip service commitment to free expression. Twitter's mission, they write,
Starting point is 00:24:03 is to give everyone the power to create and share ideas and information and to express their opinions and beliefs without barriers. As promising as that sounds, it rings hollow when you consider all the barriers that we and so many others have encountered. The comedian's job is to poke holes in the popular narrative. If the popular narrative is off limits, then comedy itself is off limits. And that's basically where we find ourselves today. Our speech is restricted to the point where we can't even joke about the insane ideas that are being imposed on us from the top down. The only reason Twitter is now an exception is because the world's richest man took matters into his own hands and declared comedy legal again.
Starting point is 00:24:37 We should all be thankful that he did. The most offensive comedy is harmless when compared with even the most well-intentioned censorship. I hope we can all agree that we shouldn't have to depend on benevolent billionaires to safeguard speech. That's a function of the law. But the law only protects against government censorship. It hasn't caught up to the fact that the vast majority of public discourse now takes place on privately owned platforms. So where is the law that protects us from them? The lovers of censorship will tell us that there can be no such law.
Starting point is 00:25:06 The Constitution won't allow it, but they're wrong and their arguments fail. I only have time to deal with a few of them very briefly. One, they say private companies are free to do whatever they want. That's nonsense, especially when applied to companies that serve a critical public function. A transportation service can't ban passengers based on their viewpoints, nor can telecom providers. Under common carrier doctrine, they're required to treat everyone equally. That precedent applies comfortably to big tech. The argument that only the government can be guilty of censorship
Starting point is 00:25:36 falls short because it fails to make a distinction between the way things are and the way they should be. If these platforms are the modern public square as the Supreme Court has described them, then speech rights should be protected there, even if they presently are not. The current state of affairs being what they are is not a good argument for failing to take action to improve them. But beyond that, these platforms have explicitly promised us free expression without barriers. To give us anything less than that is fraud. Two, they say these platforms have a First Amendment right to censor as if censorship were a form of protected speech, but it isn't. Censorship is a form of conduct.
Starting point is 00:26:11 The state has always been able to regulate conduct. The idea that censorship is speech was forcefully rejected by the Fifth Circuit Court of Appeals in their recent decision to uphold an anti-discrimination law in Texas. The court mocked the idea that buried somewhere in the enumerated right to free speech lies a corporation's unenumerated right to muzzle speech. No such right exists. And how could it? The claim that censorship is speech is as nonsensical as saying war is peace or freedom is slavery.
Starting point is 00:26:39 Three, they say these platforms are like newspapers. They can't be forced to print anything they don't want to. But they aren't like newspapers. They aren't curating every piece of content they host and they aren't expressing themselves when they host it. They're merely conduits for the speech of others. That's how they've repeatedly described themselves, including in court proceedings,
Starting point is 00:26:56 and that's how Section 230 defines them. As a final point, I think it's important to acknowledge that the call for an end to censorship is not a call for an end to content moderation. Some will try to make that claim. But Section 230 gives these platforms clearance to moderate lewd, obscene, and unlawful speech, and anti-discrimination legislation would respect that. The only thing it would prevent is viewpoint discrimination. And such prevention would not be unconstitutional because it would only regulate the platform's
Starting point is 00:27:24 conduct. It would neither compel nor curb their speech. Thank you. Thank you very much. Dr. Bhattacharya, you are recognized for five minutes. Thank you. And thank you for the opportunity to present to this committee. I'm a professor of health policy at Stanford University School of Medicine. I hold an MD and a PhD from Stanford University, and I've been a professor for 20-some years. Because of my views on the COVID-19 restrictions, I have been specifically targeted for censorship by federal government officials. On October 4th, 2020, I and two colleagues, Dr. Martin Kulldorff, a professor of medicine,
Starting point is 00:27:59 on leave now at Harvard University, and Dr. Sunetra Gupta, an epidemiologist at the University of Oxford, published the Great Barrington Declaration. The declaration called for an end to economic lockdowns, school shutdowns, and similar restrictive policies on the grounds that they disproportionately harm the young and economically disadvantaged while conferring limited benefits.
Starting point is 00:28:21 We know that the vulnerability to death from COVID-19 is more than a thousandfold higher in the old and infirm than in the young. The declaration endorsed a policy of focused protection that called for strong measures to protect high-risk populations while allowing lower-risk individuals to return to normal life, including specifically opening schools, with reasonable precautions. Tens of thousands of doctors and scientists signed on to the Declaration. Because it contradicted the government's preferred narrative on COVID-19,
Starting point is 00:28:51 the Great Barrington Declaration was immediately targeted for suppression by federal officials. Four days after we wrote the declaration, the then director of the National Institutes of Health, Dr. Francis Collins, emailed Dr. Tony Fauci about the declaration. I have an email that I found via FOIA, which I can enter for the record. The email stated: Hi Tony and Cliff, this proposal from the three fringe epidemiologists, that's me, Martin Kulldorff of Harvard and Sunetra Gupta of Oxford, who met with the secretary, seems to be getting a lot of attention, and even a co-signature from Nobel Prize winner Mike Levitt at Stanford.
Starting point is 00:29:31 There needs to be a quick and devastating published take down of its premises. I don't see anything like that online yet. Is it underway? Francis. This email is produced over a year later in response to FOIA requests. It is possible to surmise from this email that Collins viewed the Great Barrington Declaration as a threat to the illusion that there was a consensus, a scientific consensus, of people who agreed with them about the necessity of lockdown.
Starting point is 00:29:53 In the following days, I was subjected to what I can only describe as a propaganda attack. Though the Great Barrington Declaration called on public health authorities to think more creatively about how to protect vulnerable older people, reporters accused me of wanting to let the virus rip. Another FOIA email, which I also have available and would like to introduce for the record, showed Tony Fauci forwarding a Wired magazine article saying something along those lines to Francis Collins only a couple of days after Collins' call for a devastating takedown. A key part of the government's propaganda campaign supporting lockdowns and other pandemic strategies has been censorship of discourse by scientists and regular people.
Starting point is 00:30:26 I'm party to a case brought by the Missouri and Louisiana Attorneys General against the Biden administration. Through this case, lawyers have had the opportunity to depose under oath representatives from many federal agencies involved in the censorship efforts, including representatives of the Biden administration and Tony Fauci himself. What this case has revealed is that nearly a dozen federal agencies, including the CDC, the Office of the Surgeon General, and the White House, pressured social media companies like Google, Facebook, and Twitter to censor and de-boost even true speech that contradicted federal pandemic priorities, especially inconvenient facts about COVID vaccines, such as their inefficacy against COVID disease transmission. I know for a fact that the Great Barrington Declaration suffered censorship by many media companies, including Google, Reddit, and Twitter, where I was placed on a trends blacklist the moment I joined in August of 2021. In March 2021, I was part of a roundtable with Governor DeSantis that was filmed, where we discussed masking children. That video of the governor of the state of Florida talking to his scientific advisors was censored off of YouTube.
Starting point is 00:31:40 The suppression of scientific discussion online clearly violates the U.S. First Amendment, but perhaps even more importantly, the censorship of scientific discussion permitted a policy environment where clear scientific truths were muddled and as a result destructive and ineffective policies persisted much longer they would have otherwise government censorship permitted ideas false ideas for instance that the that the risk of COVID is not steep steeply a stratified or that the recovery from COVID does not provide substantial immunity against against a future infection or severe disease on future infection that the COVID vaccines do stop disease transition all these the school ideas school closures were warranted all of these
Starting point is 00:32:20 destructive ideas harm the health and well-being of the American people. And many people that are dead today would be alive had those ideas been countered. Government censorship, if there's anything we've learned from the pandemic, it should be that the First Amendment is more important during a pandemic, not less. Well, thank you very much. And Mr. Overton, you're recognized for five minutes for your statement. Thank you. Chair's, ranking members and members of the committee. Thanks for inviting me to testify. My name is Spencer Overton. I'm the president of the Joint Center for Political and Economic Studies. We research the impact of tech platforms on black communities.
Starting point is 00:32:57 I'm also a professor at GW and focus on democracy and tech platform accountability. Now, while I favor tech platform accountability, this hearing's framing, preserving free speech and reining in big tech censorship, is inaccurate. This framing suggests that the way that government preserves free speech is to prevent tech companies from engaging in content moderation. In fact, the First Amendment protects private sector tech companies in their right to determine what to leave up and what to take down on their platforms. That's part of freedom of association, freedom of speech.
Starting point is 00:33:41 The censorship the First Amendment prohibits is government attempting to restrict or compel private actors to speak in a particular way. Congress shall make no law that abridges the freedom of speech. Now, if we were to accept this characterization that tech platforms censor every time they remove a post, then that's going to mean that Fox News censors every time it selects hosts to lead its prime time lineup. It means that the Wall Street Journal censors every time it declines an op-ed. Now, some partisans may want to tell Fox News and the Wall Street Journal how to moderate their conduct. They may want government to silence those institutions, but that's not in line with the First Amendment. Because of the freedom of speech that private platforms enjoy in terms of content moderation,
Starting point is 00:34:40 TripAdvisor has the right to take down comments that have nothing to do with travel. Truth Social enjoys the right to take down posts from users about the January 6th committee hearings or from those who express pro-choice opinions. These institutions are not common carriers. I'll discuss that, maybe, if we have time, in our discussion period; the 11th Circuit explained it in detail in 2022. Now, while existing research suggests that large platforms like Facebook, Instagram, and YouTube do not disfavor or target conservatives for removal,
Starting point is 00:35:31 you know, they in fact go out of their way to favor conservatives, for fear of accusations of political bias and because these folks are an important and valuable advertising base. But that's really beside the point, right? The real point is that private companies have this First Amendment right to engage in content moderation. Now, also, if we were to treat these tech platforms as state actors and require that they keep up all constitutionally protected speech,
Starting point is 00:36:07 the Internet would be even worse, particularly for teenagers and young children. We'd see more violence, more pornography, more graphic content. We'd see more instructions on self-mutilation and suicide, more swastikas, more Holocaust denial, more white supremacist organizing. All of this is constitutionally protected speech, right? But right now, platforms can take it down
Starting point is 00:36:36 because they are not state actors. We'd see more deepfakes, more political disinformation, more spam. Now, even though the First Amendment protects private tech platforms, it doesn't demand that they bear no responsibility for what they choose to amplify and the harms that they create. That is not a part of the First Amendment. That's a part of courts' over-interpretation of Section 230 of the Communications Decency
Starting point is 00:37:10 Act. I think Republicans and Democrats can agree on several issues, including the fact, as you said, Mr. Chair, that this isn't 1996. The world has changed since 1996, when 230 was enacted. Democrats and Republicans can act in a bipartisan way to ensure that tech companies don't impose harms on others through their algorithms and the other activities they use to profit. Thank you so much. Thank you. And Mr. Schellenberger, you are recognized for five minutes for your statement. Thank you, Chairman Latta, Ranking Member Matsui, and members of the subcommittee, for inviting me to testify today. Here are events that actually happened. Twitter suspended a woman for saying, quote, women aren't men. Facebook censored accurate information about COVID vaccine side effects. Twitter censored a Harvard professor of epidemiology for expressing his opinion that children did not need the COVID vaccine. Facebook censored speculation that the coronavirus came from a lab.
Starting point is 00:38:17 Facebook censored a journalist for saying, accurately, that natural disasters were getting better, not worse. Twitter permanently suspended a sitting president of the United States, even though Twitter's censors themselves had decided he had not violated its terms of service. Now, maybe that kind of censorship doesn't bother you, because people were doing their best to prevent real-world harm with the knowledge they had at the time. But what if the shoe were on the other foot? Consider how you would feel if the following occurred. Twitter suspended a woman for saying trans women are women.
Starting point is 00:38:51 Facebook censored accurate information about COVID vaccine benefits. Twitter censored a Harvard professor for saying children needed to be COVID-vaxed annually. Facebook censored speculation that the coronavirus came from nature. Facebook censored a member of Congress for saying the world is going to end in 12 years because of climate change. Twitter permanently suspended President Biden, even though, according to Twitter's top censor, he had not violated its terms of service. Now, it's true that private media companies are allowed by law to censor whoever they want, and it would violate the First Amendment of the United States for the government to try to prevent them from doing so. But internet platforms including Twitter, Facebook, and Google only exist thanks to Section 230 of the Communications Decency Act, which exempts them from legal liabilities that burden traditional media companies.
Starting point is 00:39:47 If Congress simply eliminated Section 230, Internet search and social media platforms would no longer exist. And maybe that's what Congress should do. These platforms are obviously far too powerful. They are making the American people, all of us, dogmatic and intolerant. And the evidence is now overwhelming that they have been a primary cause, if not the primary cause, of America's worsening mental health crisis. We might be a healthier nation if we simply reverted to the good old days of websites that have the same liability as newspapers. But doing so would reduce rather than increase freedom of speech, and may not be necessary to protect American citizens. As such, I would propose an immediate and partial remedy,
Starting point is 00:40:36 which would also allow us to understand what else, if anything, is needed to protect the free speech of citizens. And that would be true transparency. By transparency, I do not mean that which is being proposed by a Senate transparency bill, which would only allow National Science Foundation-certified researchers access to content moderation
Starting point is 00:41:02 decisions. That bill would increase the power of the censorship-industrial complex, which is actively undermining our free speech. Rather, I mean immediate public transparency into all content moderation decisions relating to matters of social and political importance. We do not need to know how the platforms, for example, are removing pornography or criminal activities. Those things should be cracked down upon immediately. But when Twitter, Facebook, and Google censor people for expressing disfavored views on transgenderism, climate change, energy, vaccines, and other plainly social and political issues, they must immediately announce those content moderation decisions publicly and give the
Starting point is 00:41:46 censored individuals the right to respond. And to protect free speech from government, Congress could require government contractors and government employees to immediately report any content-related communications they make to internet platforms. What I'm proposing is rather simple. If the White House is going to demand that Facebook censor accurate information about COVID vaccine side effects, which it did do, then it would need to immediately send an email to be posted on a website, to be tweeted out, to be put on Facebook, that that's what they did. And if Facebook is going to take down accurate information about side effects of COVID vaccines, it should be required to explain that it did that.
Starting point is 00:42:31 If it's going to censor Dr. Bhattacharya or Mr. Dillon, then it should be required to explain why and how it did that, and it should be required to give them a chance to respond. Such a solution would not eliminate unfair censorship and content moderation, since those things are always subjective, but it would bring it out into the open. It would restore the right of free citizens to have a voice, and it would open the possibility for better, fairer content moderation in the future.
Starting point is 00:43:01 Thank you very much. Well, thank you very much to all of our witnesses. That will conclude our five-minute openings with our witnesses, and I'll now recognize myself for five minutes for questioning. My first question is to all of our witnesses, and hopefully a simple yes or no answer will suffice. This subcommittee has sole jurisdiction over legislation that would amend Section 230 of the Communications Decency Act.
Starting point is 00:43:30 Given the proven censorship actions taken by Big Tech, not limited to satirical, scientific, and political viewpoints, do you agree that Section 230 must be reformed? Mr. Dillon, would you like to start with a yes or no? Is it a simple yes or no? I think reform would be helpful, yes. I do think there's also room for legislation that would address the issue of viewpoint discrimination outside of reform to Section 230. Thank you.
Starting point is 00:43:56 Dr. Bhattacharya? Yes, and I think that there should be restrictions on the ability of government officials to use Section 230 and other mechanisms to try to censor scientific debate online. Thank you. Mr. Overton? I do think reform to 230 is in order. I think it's a question of what kind of reform. Thank you. Mr. Schellenberger? Thank you. Thank you very much. Dr. Bhattacharya, in early 2021, you published a scientific article that discussed age-based mortality risk and natural immunity to COVID. Is that correct?
Starting point is 00:44:33 Yeah, I've published several articles on this. At the time it was published, were the findings in your article consistent with public health authorities' view on your topic? So I think that the main finding of the article I think you're thinking of was that the lockdown restrictions that ignored age-based risk from COVID had not been successful in actually restricting the spread of COVID. And the other thing, from other people's findings, very clear in the scientific literature,
Starting point is 00:45:11 is that those kinds of restrictions were very damaging, especially to young people. Let me follow up, then, on what you were saying about findings. As a professor of medicine at Stanford University, over the course of your career, how often is it that researchers disagree through the scientific process?
Starting point is 00:45:29 It happens all the time. Thank you. You know, after you were banned on Twitter, you were unable to have an open discussion and provide medical research data to the most consequential public health decisions made in generations. How do you believe censoring that scientific content
Starting point is 00:45:46 impacted the ability of Americans, parents, small business owners, and others to make educated decisions related to COVID-19 during the pandemic? I think that the government's actions to create an illusion of scientific consensus on those topics harmed the health and well-being of every single American. I think it closed small businesses. It meant that little children couldn't go to school. Minority kids specifically were harmed more, because minority kids' schools were closed more.
Starting point is 00:46:18 And many people who were under the impression that the vaccine would stop transmission, when it didn't, were also harmed, because they were refused the ability to get the full set of facts about the vaccines when they were making those decisions about whether to take them. And what recourse did you have with Twitter? None, until Elon Musk bought Twitter. After he did buy Twitter, he invited me to come visit Twitter headquarters, and I found that I was placed on a blacklist the day that I joined Twitter.
Starting point is 00:46:51 Thank you. Mr. Schellenberger, according to the information recently uncovered through the Twitter Files, we know that Twitter censors specific conservative users through its visibility tools and has used this tool by tagging the accounts of conservative activists as do not amplify. This was after assurances from Twitter's head of legal, policy, and trust that Twitter does not shadow ban. Based on your reporting, what other tools have you uncovered that are used by Twitter or other platforms to censor conservative voices? Thank you for the question.
Starting point is 00:47:31 I would just say we described the censorship that occurred as occurring against disfavored voices, because I don't think, and this is why I'm very skeptical of any of these studies which claim to measure bias as more liberal or conservative, because we can't agree on what's liberal or conservative. The concerns that Dr. Bhattacharya just raised about the disproportionate impact of school lockdowns
Starting point is 00:47:51 on students of color, I don't think those are necessarily conservative or liberal. I think those are just human rights concerns. But we saw a range of tools that were used, both do-not-amplify restrictions on voices and censored tweets. The Harvard professor Martin Kulldorff was censored by having a warning placed on one of his tweets where he said that kids don't need to be given COVID vaccines. And I think it's important to point out that this is a particular form of censorship that's also humiliating and discrediting. I mean, here we have the most powerful mass media communications in human history basically accusing people of being liars or misleading or deniers, really toxic kinds of labeling.
Starting point is 00:48:39 So it's occurring both through removing tweets and de-platforming people, and also through attempting to do it. Well, thank you very much. My time's expired. These examples of censorship by big tech companies underscore the need to reform Section 230; they are acting as bad Samaritans on their platforms and don't deserve that blanket liability. So I yield back, and at this time I recognize the ranking member of the subcommittee for five minutes.
Starting point is 00:49:10 Thank you very much, Mr. Chairman. I want to focus on the algorithms. Section 230 protections were initially conceived to protect neutral platforms that passively host information from third parties. While this approach allowed the Internet ecosystem to flourish, I believe the central tenet is flawed. Modern platforms consciously promote some speech over others through sophisticated algorithms and data collection practices. Professor Overton, can you describe how algorithms and data collection practices materially contribute
Starting point is 00:49:46 to discrimination online? Yes, thank you so much, Ranking Member. Essentially, platforms like a Facebook or a Twitter make money off of ad revenue and views and this type of thing. And so what they do is they try to use these algorithms to deliver content, ads, et cetera, to make money and to profit. Facebook, what they've done is a couple of things. One, they've had drop-downs, ethnic affinity drop-downs, that basically allowed advertisers to target particular racial groups in the past. And as a result, advertisers have been able to, for example, target employment or housing ads away from African American communities or Latino communities. But then also the algorithms, as you've talked about, are also problematic.
Starting point is 00:50:50 The advertiser may not even know, and then the algorithms steer the ads away from black and Latino people. Okay, so could I ask this: do you believe the use of algorithms to target the distribution of certain content should alter our understanding of the 230 framework? I do think, yes, absolutely. Okay. Now, in Gonzalez versus Google, a court found that Google did not act as an information content provider when using algorithms to recommend terrorist content, because Google used a neutral algorithm that did not treat ISIS-created content differently than any other third-party created content, and Google provided a neutral platform that did not encourage the posting of unlawful material.
Starting point is 00:51:38 So, Professor, I often see the phrase neutral used to describe social media algorithms. However, I have concerns that that phrase glosses over the inherent biases in certain algorithms' construction and effect. Do you believe algorithms can ever be truly neutral? And if not, how should that fact inform our understanding of Section 230? Yeah. I think it's wrong to have a broad rule here that all algorithms are neutral and mechanical.
Starting point is 00:52:08 Certainly they have harms in terms of particular communities. Okay. Social media and online platforms have shown consistent success in preventing many forms of objectionable content, like obscenity and nudity. They've also moved quickly in some cases to identify and label misinformation around COVID-19 and vaccines. However, this same efficiency does not extend to racial equity and voting rights. Professor Overton, why do you believe online platforms haven't had commensurate success in preventing harms to racial equity and voting rights? Yeah, I think that there are some steps that have been made by some companies, but it's not enough,
Starting point is 00:52:47 And part of it is that profit is a big motive in terms of companies, so that's what they're focused on in terms of the advertising dollars or whatever is going to drive the bottom line. Okay. Well, Section 230 establishes broad protections for online platforms. It doesn't extend to an information content provider, which Section 230 defines as any person responsible, in whole or in part, for the creation or development of information.
Starting point is 00:53:17 Courts have generally understood development in this context to mean making information usable, available, or visible. Professor Overton, how has our understanding of this phrase changed as technology has evolved? And where does it fit in the broader Section 230 discussion? Certainly. Roommates, a Ninth Circuit case, you know, introduced the idea that there may be some material contributions for which platforms don't enjoy the protection, and the problem is that it has not been clear. The difficulty with broad rules in this space is that on one hand, algorithms are troubling
Starting point is 00:53:56 and can be discriminatory. On the other hand, they can be used for content moderation in cleaning up the Internet. So we want to be careful in terms of flat, broad, straight rules here and be just very thoughtful about this space. Okay, well, thank you very much. I realize we have a lot of work to do to help reform this. So thank you.
Starting point is 00:54:16 I yield back. Thank you. The gentlelady yields back, and at this time the chair recognizes the gentleman from Florida for five minutes. Thank you, Mr. Chair. I appreciate it very much. I want to thank the witnesses for their testimony. Two years ago, I put out a survey to my constituents on Big Tech. I asked the citizens of my district the following question: do you trust Big Tech to be fair and responsible stewards of their platforms? Over 2,700 constituents responded, with 82% of them saying no. That's a terrible performance, in my opinion, for Big Tech. A year and a half later, I asked the same question to my constituents.
Starting point is 00:54:58 Maybe Big Tech got the hint and had worked to gain public trust. This time, we had even more constituents respond to the survey, over 3,200 participants in my district. Same question: do you trust Big Tech to be fair and responsible stewards of their platforms? Once again, 82% of them said no. Zero improvement whatsoever. In 2020, the documentary The Social Dilemma
Starting point is 00:55:26 brought to light how social media platforms moderate their content. It showed the power that social media platforms have to polarize the views of their users, based on the algorithms they use to promote certain content and the incentives to do so to keep us on their platforms longer. Mr. Schellenberger, to what extent is big tech to blame for the political polarization in America today?
Starting point is 00:55:58 Obviously, there was trends of polarization occurring before the rise of social media, but we know that social media reinforces people's existing beliefs. It creates a sense of certainty where there should be more open. openness and uncertainty. I think it's clearly contributed to a rising amount of intolerance and dogmatism that we've seen in the survey research. So unfortunately, it has not played the role of opening people up to wider perspectives that we had hoped. Mr. Dellen, thank you. Has the censorship you experienced by social media impacted your livelihoods?
Starting point is 00:56:33 If so, can you explain how that has impacted your family or business relationships, please? That was directed at me, right? Yeah, to you, Mr. Dillon. Yeah. Yeah, well, I mean, we were knocked off of Twitter for eight months, which is one of our primary traffic sources, so it impacted the business performance in terms of how much traffic and revenue we were driving from that, yes.
Starting point is 00:56:58 Very good. Question for you and Dr. Bhattacharya, I'm sorry, I'm butchering the name. Are there opinions or ideas that you have wanted to post on social media which you ultimately chose not to because of fear of retaliation by the platforms? You can start with Mr. Dillon and then Dr. Bhattacharya. I did a little better that time. Please. Can you repeat the question?
Starting point is 00:57:25 Yeah. Are there opinions or ideas that you have wanted to post on social media which you ultimately chose not to because of a fear of retaliation by the platform? In my view, self-censorship is doing the tyrant's work for him. And so I refuse to censor myself, and I say what I think, come what may. And that's why, when we got locked out of Twitter, we were asked to delete that tweet, and we could get our account back if we deleted the tweet and admitted that we engaged in hateful conduct. And I refused to do that, too.
Starting point is 00:57:58 So, no, I don't censor myself, and I refuse to delete tweets that they want me to delete for hateful conduct when I don't think they're hateful. Okay, well, I commend you for that. Dr. Bhattacharya? I have self-censored, because I didn't want to get booted off of Twitter or other social media. I tried to figure out where the line was, and I think as a result the public didn't hear everything I wanted to say. I'll also say that there are a lot of younger faculty members and professors who've reached out to me and told me that they also self-censor by not going on social media at all, by not making their views public at all,
Starting point is 00:58:37 because of the environment created around the censorship. I understand that as well. Mr. Schellenberger, in your experience, are there some platforms that have a better track record of maintaining free speech principles than others, or have any improved over time? If not, why do these companies continue to engage in biased content moderation decisions?
Starting point is 00:59:00 And what can Congress do to better enable constitutionally protected speech? I'm not sure of the answer to the first question. I will say that we have seen some differences, particularly Twitter censoring some things that Facebook does not censor, and Facebook censoring some things that Twitter does not. So I think some of it just depends. But I think the most important thing, and I'm really trying to propose something here that I think both parties can agree to, is transparency.
Starting point is 00:59:28 I don't think that we're ever going to, we can't agree on what a woman is as a society. So there's this famous saying, people say sometimes, you're entitled to your own opinions, but not your own facts. We're entitled to our own facts, too, under the First Amendment, and that's just how we are. So I think, given that, we're not going to be able to legislate particular content moderation, and so we need to just move forward with transparency. Well, I appreciate it very much. Very informative. Thank you for your answers. Thank you.
Starting point is 00:59:56 The gentleman's time has expired. The chair now recognizes the gentlelady from New York for five minutes. Good morning, everyone, and let me start by thanking our panelists for joining us today, as well as our chairman, Chairman Latta, and ranking member, Matsui, for convening this hearing. I'm extremely proud of much of the work this committee has done in this space. While content moderation policies and reining in the ever-increasing power of big tech are certainly topics worth exploring in this venue, I'm concerned about the potential for this hearing to devolve into another airing of partisan grievances and personal anecdotes cherry-picked to spark outrage and push forth certain narratives for personal or political gain. It is widely understood that both online and here in the real world, topics that spark controversy, outrage, fear, and anger are highly effective tools for attracting attention. So I urge my colleagues to be careful not to fall into that trap.
Starting point is 01:01:02 We have an opportunity to discuss substantive issues impacting all Americans, and we must take care not to let those issues take a backseat to the performative politics of outrage and fearmongering. Our current content moderation regulatory framework is a product of decades-old legislation passed when the Internet was in its infancy, as well as the courts' overly broad interpretation of Section 230 in the years that followed. What began with the intent to incentivize the removal of certain harmful, objectionable, or obscene content has seemingly transformed into an all-encompassing shield protecting big tech firms from accountability for the unintended harms caused by their platforms and moderation policies. There's certainly no shortage of issues big tech can and should be taking a more aggressive stance on.
Starting point is 01:02:03 Harassment, hate speech, white supremacist radicalization, deepfakes, organized disinformation campaigns, sexually explicit material of children, and the list is almost endless. While imperfect, Section 230 as it is currently understood, along with the First Amendment, does appear to provide big tech with the legal protections to tackle these issues. And yet, this harmful content remains all too prevalent online. Unfortunately, the original intent of Section 230 has been lost as technology has developed, and all too often vulnerable communities are paying the price. So my first question is for Mr. Overton. In your testimony, you noted that content moderation regulations for major tech platforms differ from those of common carriers. Can you expound on why?
Starting point is 01:02:56 From a legal perspective, why was that distinction made, and what does it mean for users of the platforms? Sure. Thank you so much. The 11th Circuit just laid this out last year. When people sign up, they sign user agreements, which say that they'll comply with community standards. Also, it's not like broadcast, where there's scarcity in terms of airwaves. It's more like cable, and the court has found that cable is not a common carrier. Also, the Telecommunications Act of 1996 explicitly says, hey, these aren't common carriers. So, you know, a variety of reasons.
Starting point is 01:03:41 I really encourage folks to take a look at that 11th Circuit opinion. Thank you. Studies have shown that not only are Black Americans subject to a disproportionate amount of online harassment due to their race, but they have been purposefully excluded from receiving certain online advertisements related to housing, education, and vocational opportunities. So, Mr. Overton, can you explain for us the role, intended or not, that algorithms can play in this kind of online discrimination? Thank you.
Starting point is 01:04:14 And also thank you for the Civil Rights Modernization Act that you introduced, which addresses some of these issues. In short, Facebook, with its algorithms and drop-downs, was steering housing and employment ads away from black and Latino folks and toward white folks, and users didn't even know about it. It was a problem. Facebook said they don't have to comply
Starting point is 01:04:38 with federal civil rights laws because of 230. Clearly, if the New York Times ran a housing ad aimed at white folks only, there'd be a civil rights problem. That's not a scenario where Facebook should get a pass. It's not just there, though. Entities like Airbnb and Vrbo account for about 20% of lodging in the United States in terms of revenues. Hilton and Hyatt have to comply with public accommodations laws,
Starting point is 01:05:08 but Airbnb and Vrbo basically claim they don't have to comply. Thank you. The gentlelady yields back. The chair now recognizes the gentleman from Michigan for five minutes. Thank you, Mr. Chairman, and thanks to the panel for being here. And Mr. Dillon, thank you for not self-censoring in your line of work. I don't self-censor either. I set priorities. I try to be sensitive. I try to be proper, and my staff worries about me all the time. But I believe in truth. And truth can be
Starting point is 01:05:43 put out in various ways without offense, except for those who want to be offended. Dr. Bhattacharya, in October 2020, you and two colleagues from Stanford University published the Great Barrington Declaration. It was a document outlining the need to implement focused protection, your terminology, i.e., eliminating COVID lockdowns and school closures for everyone except the elderly and high-risk, which has proven to be right. The document had a simple message, but it was immediately targeted by Biden administration officials, and subsequently social media companies, as misinformation and downgraded across the platforms. Dr. Bhattacharya, how has the suppression of concerns about school closures by Big Tech and the Biden administration impacted our nation's children?
Starting point is 01:06:39 And secondly, can you speak to both the effects on their well-being and their educational success? So there's a very simple data point to look at as far as what the impact of school closures is, and that is that children in Sweden suffered zero learning loss through the pandemic. In the United States, we have created a generational divide in terms of the educational impact from these school closures and lockdowns. In California, where I live, schools were closed for in-person instruction for almost a year and a half. And it's minority kids in particular that have been harmed by these school closures. We've created a huge deficit in learning, and that will have consequences for the entire lives of these children.
Starting point is 01:07:28 The literature on human capital investments suggests that investments in schools are the best investment we make as a society, and the school closures as a result will lead to children leading shorter, less healthy lives. I appreciate that information being laid out. I mean, we've been always told to follow the science, and we didn't follow the science. And now we're starting to grudgingly accept the science. And in Michigan, our governor, Governor Gretchen Whitmer, closed all in-person learning starting in March of 2020, and it took until January 2021 for Governor Whitmer to agree to a plan to fully reopen schools in March of that year.
Starting point is 01:08:12 The consequences in Michigan: like you've said, Michigan's average math score dropped four points for fourth graders and eight points for eighth graders since 2019. In reading, they dropped seven and four points respectively. Dr. Bhattacharya, how did the prevailing narrative imposed by social media companies bolster efforts to keep schools closed? I think the social media companies promulgated voices that were panicking people regarding the danger of COVID to children, far outside what the scientific evidence
Starting point is 01:08:47 was saying at the time. And as a result, spread panic in school board meetings and elsewhere that allowed schools to stay closed far past the time when they should have been opened. From very early in the pandemic, there was evidence from Sweden and from Iceland and other places in Europe that school openings were safe, that closures were unnecessary to protect people from COVID, and that there were alternate policies possible that would have protected older people better than school closures and caused much less harm to our children. And yet we didn't follow those policies, and the voices that pushed the panic that led
Starting point is 01:09:26 to school closures were amplified in social media settings. I appreciate that. A constituent from Carleton, Michigan in my district wrote to me about his attempts to post an article from the NIH on his Facebook page. Facebook blocked the article from being shared because it violated their policy against misinformation, their policy. As a reminder, the article, entitled Facemasks in the COVID-19 Era: A Health Hypothesis, was published by the NIH itself.
Starting point is 01:09:56 But six months after its publication, the NIH retracted the article, and I assume because it didn't align with their ongoing efforts to keep people wearing masks. Mr. Shellenberger, it can't be a coincidence that an article that the NIH retracted was also deemed misinformation by Facebook. How did the Biden administration and the big tech companies work together to downgrade or suppress information that did not support their COVID goals? Emails released by the attorneys general of Louisiana
Starting point is 01:10:27 and Missouri show the Biden administration repeatedly haranguing Facebook executives. And we also saw the president threatening their Section 230 status, demanding that they censor information that they felt would contribute to vaccine hesitancy. And Facebook went back to the White House and said that they'd been taking down accurate information about vaccine side effects.
Starting point is 01:10:49 We also now know that the White House was demanding censorship of private messaging through... I'm sorry to interrupt, the gentleman's time has expired. Thank you. Thank you. The chair now recognizes the gentleman from Texas for five minutes.
Starting point is 01:11:05 Mr. Chairman, thank you very much. Before I get into my remarks, I do also want to remind Dr. Bhattacharya and Mr. Shellenberger in particular that something else that hurts black children too is when there's misinformation put on social media about their parents and their grandparents stuffing ballot boxes, cheating, and elections being stolen in places like Atlanta and Milwaukee. And people know that that is specifically meant to be targeted at black people, that that hurts black children too. And when misinformation like that is allowed to stay on, which it
Starting point is 01:11:41 routinely is, that is bad for black children also. And so when it's on a social media platform like that, there needs to be some sort of way to deal with that. I hope that no one is self-censoring. But like I tell my 16-year-old every day when he goes off to school, inappropriate things can sometimes come out of his mouth, like anybody in here that has had a teenager, Democrat or Republican, knows. What I do tell him is, dude, use a filter. Use a filter, dude. You can say that, but should you really say that? And so, don't self-censor, but use the filter, dude.
Starting point is 01:12:20 Use the filter. At a time when public trust in government remains low, as it has for much of the 21st century, I think that it is disingenuous for the other side of the aisle to politicize free speech in the digital age. There's stuff on Facebook right now that I saw on Hannity that's fake, and you can go on any of these social conservative sites on Facebook right now and see tons of information. It's on my personal Facebook page. You can see all of this. And the truth is that free speech in the digital age will continue to dominate headlines because the Internet, as it operates today, really does afford Americans all of the opportunity to freely express themselves
Starting point is 01:13:00 in ways that were literally unimaginable 20 years ago. I don't believe anyone in this room can deny that digital communication isn't going anywhere in the foreseeable future. Instead, we need to focus on a bipartisan basis to find a path forward so we can have common-sense policy reforms as it relates to Section 230. We all know that the Internet is not the same phenomenon it was when Section 230 was enacted back in 1996. And so let's just take a quick step back and think about the actual censorship that is going on today as it relates to something like voting.
Starting point is 01:13:42 Right now in Texas, they're trying to make it harder for people to vote on college campuses. And to me, that's the ultimate censorship, and that's bad. And so I would hope that we can seriously, again, have a real discussion about how we can make some reforms in Section 230 and come up with some just common-sense language on some filters. Professor Overton, I want to thank you for being here today and testifying once again before this subcommittee about how disinformation is dangerous. In your 2020 testimony in front of the subcommittee, you talked about how disinformation on social media presents a real danger to racial equity, voting rights, and democracy. And I want to ask you: are social media platforms doing a better job now than they were three years ago to curtail the spread of general disinformation that you previously discussed in front of this subcommittee?
Starting point is 01:14:38 They are better in some ways, and in other areas they've fallen back. 2022 wasn't as bad as 2020 in terms of the aftermath with disinformation about so-called stolen elections. We've got some new factors in terms of Elon Musk buying Twitter and laying off the content moderation staff. So things are different. I also want to talk about bad for black children: the fact that death rates are higher in black communities is also bad for black children. The fact that kids don't
Starting point is 01:15:13 have access to internet and as a result have more learning loss during a pandemic as opposed to other communities is also bad for black children. Thank you very much. And as we continue to talk through these things, I hope particularly when it comes to public health, we can try to find the consensus. I knew five people in one house, and they were dead in a month, dead in a month over COVID. We need to try to find some consensus on these things and stop making them so divisive when people in our community had so many stories that we knew like that. Of course, we wanted our kids in school. We know that it was not good for our kids to be out of school, but we also had bodies in places like Detroit that were so stacked up that the morgue couldn't even
Starting point is 01:16:02 handle them. And that's the reality in Black America also. Thank you very much, Mr. Chairman. I yield back. Thank you very much. The gentleman yields back. And the chair now recognizes the vice chair of the subcommittee, the gentleman from Georgia, for five minutes. Thank you, Mr. Chairman. And thank each of you for being here. This is extremely important. Let me begin by saying I agree with my colleague from Texas, who just made the comment that trust in the federal government is at a historic low. It's also low with the social media companies. So when the two of these combine or collide, then Americans are worried and concerned, and I think we're all concerned here. You know, we had the former CEO of Twitter, Jack Dorsey,
Starting point is 01:16:42 who testified before this committee and made the statement that Twitter does not use political ideology to make any decisions. Well, we know that wasn't true, and it's clear that the big tech platforms are no longer providing an open forum for all points of view, and that's extremely important. We want that. Mr. Shellenberger, I know that you've testified before Congress a number of times. Thank you for being here again.
Starting point is 01:17:08 I appreciate it. It's good to see you. But two weeks before the 2020 election, there was damning information about the president's son, Hunter Biden, that was suppressed, but then later authenticated once President Biden was in office. You were covering, as I understand, the Twitter Files. What was your takeaway from how Twitter had made the decision to suppress news articles related to the Hunter Biden laptop story?
Starting point is 01:17:35 Yeah, thank you for the question. So it's important to understand that on October 14th, the New York Post published this article about emails from the Hunter Biden laptop. Everything in the article was accurate, despite some people claiming it's not. It was an accurate article. Twitter's internal staff evaluated it and found that it did not violate their own policies. Then the argument was made strenuously within Twitter by the former chief counsel of the FBI, Jim Baker, that they should reverse that decision and censor that
Starting point is 01:18:06 New York Post article on Twitter anyway. That appears to be part of a broader influence operation, most famously including former intelligence officials and others, to claim that this was somehow a result of a Russian hack and leak operation. There was zero evidence that this was hacked and leaked. They had the FBI subpoena of the laptop published in the New York Post. The FBI took the laptop in December 2019. So it appears to me like that was some sort of coordinated influence operation to discredit what was absolutely accurate information. Well, let me ask you: the administration had proposed to establish a disinformation
Starting point is 01:18:47 governance board within the Department of Homeland Security. Thank goodness they didn't go through with that. But what kind of danger do you think there would have been with a disinformation governance board? Well, unfortunately, that disinformation governance board was just the tip of the iceberg of the censorship industrial complex that my colleagues and I discovered. That includes an agency at the Department of Homeland Security. It includes various entities, including the National Science Foundation, which is now funding 11 universities to create censorship predicates and tools. That includes DARPA funding. That all needs to be defunded and dismantled, and an investigation needs to be done to figure out.
Starting point is 01:19:26 All right, I need to get on. Thank you for those answers. Mr. Dillon, you've been before Congress before as well, and thank you again for being here. The advanced algorithms that the big tech companies use give them inordinate power to amplify or suppress certain posts, and we all know that happens. If these companies were determined to be publishers of content when they amplify or suppress using an algorithm, what do you think the impact would be on content moderation practices? Would it be better,
Starting point is 01:19:56 more, worse, or what? Saying if they were treated as publishers, would they moderate more aggressively? Exactly. Or less. Yeah. Well, under Section 230, even publisher activity is not treated as publisher activity, right? They're not treated as the speakers. They're treated as conduits for the speech of others. But if they were to be treated as publishers, then I imagine they would be much more mindful of what they allow to be amplified and what they don't. Okay, thank you. Thank you very much.
Starting point is 01:20:28 Dr. Bhattacharya, I'm sorry, but anyway, look, I'm a health care professional. I'm a pharmacist. And when the vaccine first came out, I wanted to set a good example, both as a health care professional and as a member of Congress. So I volunteered for the clinical trials, and I did that. However, I believe very strongly that people should have the choice, whether they want to do that or not. I encouraged them to. I thought it was safe, but that ought to be a personal decision, in my opinion. What are the consequences of suppressing legitimate scientific and medical studies that don't fit the mainstream narrative? People no longer trust public health. People no
Starting point is 01:21:04 longer trust doctors. And as a consequence, people won't follow even true, good advice. I argued for older people to be vaccinated because that's what the evidence said. And I was really relieved when my mom was vaccinated in April 2021. What I've seen now is a huge uptick in vaccine hesitancy for really essential vaccines like measles, mumps, rubella, as a consequence of that distrust. And it's a real disservice to the American people that we allowed this to happen. Great. Thank you all very much for being here.
Starting point is 01:21:35 And thank you, Mr. Chairman, and I'll yield back. Thank you. The gentleman yields back. The chair now recognizes the gentleman from California's 29th District for five minutes. Thank you very much, Mr. Chairman. And there are real abuses right now on the part of social media companies, not only in America, but around the world. We talked about a lot of them last week when the CEO of TikTok was before us.
Starting point is 01:21:59 There's a real need for accountability here, and reforming Section 230 in a targeted and thoughtful way is going to be a big part of what we should be doing in Congress, and hopefully we'll get around to doing that. Many bills have been introduced, but we haven't been able to pass the legislation. Hopefully we'll have success this time. But the conversation that the majority seems to be having back and forth with some of the witnesses today is a bit bizarre to me. Conservative censorship seems to be what a lot of my colleagues
Starting point is 01:22:27 are focusing on, but there's a lot more going on, especially when it comes to life-and-death issues for the American people, especially American children. The idea that the big fix we need to Section 230 is that we should be preventing social media companies from taking down harmful content? Like I said, we should definitely make sure that they're taking down content that is harming, especially, our children. That's not what I've been hearing from my colleagues last week, and I'm not shocked that we're hearing the same thing today. So I'm going to use my time to talk about very real mis- and disinformation that targets vulnerable communities like the predominantly Latino community I represent in the San Fernando Valley. I'm glad we have an actual expert here,
Starting point is 01:23:14 Mr. Overton, to explore this. I've seen firsthand how powerful social media misinformation and disinformation created vaccine hesitancy, which has actually cost human lives. I've told the story of how my mother-in-law, whose primary language is Spanish, asked me if it was true that there were microchips in vaccines. That came from her Spanish-speaking colleagues who spend way too much time on social media, who, by the way, are all in their 60s and 70s, these are not children, who actually were convinced or led to believe that there are microchips in the vaccines. Other Spanish-language misinformation said that vaccines would lead to sterilization or alter your DNA, et cetera, et cetera, et cetera.
Starting point is 01:24:00 We know the companies do a terrible job taking down Spanish language misinformation and also don't do a very good job of pulling down misinformation and disinformation. English and we know that this lack of content moderation doesn't make social media better like some of the witnesses today suggests it makes it dangerous so my question is first question is to you professor Overton if we follow some of the proposals here today and alter section 230 in a way that would limit the ability of platforms to moderate content like miss and disinformation what could be the potential consequences for communities like the ones that I would just mention
Starting point is 01:24:42 mentioned a minute ago. Thank you very much, Congressman. Things could be worse. Things could be worse in terms of medical misinformation, political disinformation, scams in terms of economic. And you focused on it in terms of content moderation being key. That was the original point here in terms of prodigy and a concern about platforms not taking down bad stuff because they were afraid of being sued.
Starting point is 01:25:10 That's the whole point of it. Thank you. We also know that election miss and disinformation is a huge problem, and another one that often spreads unchecked on platforms when it's in Spanish. We saw in the run-up of the 2022 midterms that election misinformation in Spanish was widespread on YouTube and other platforms. Professor Overton, I know this is one of special interests to you. Can you talk a bit about the special harms associated with spreading? information that misleads voters and why it's important that social media platforms have the ability to remove such content.
Starting point is 01:25:49 Well, this is incredibly important because voting is preservative of all other rights. And we have seen polarization in terms of us being pulled apart. We've seen foreign interference in terms of Russia, Iran, other entities dividing us. We've also seen voter suppression in terms of targeting, for example, in terms of 2016 particular communities targeted. So there have been some studies that found that this work is still happening, these activities in terms of operatives financed by Russia and Iran, but folks who are in places like Ghana and Nigeria, scamming and basically changing our political debate. It's a real danger.
Starting point is 01:26:37 One of the things that people don't realize just because they see it in print doesn't mean it's news. It's just opinion. And so thank you so much. My time's expired. I yield back. Thank you. The gentleman's time has expired, and the chair now recognizes the gentleman from Utah for five minutes. Thank you, Mr. Chairman.
Starting point is 01:26:55 Before I begin, I'd like to give my home state a shout-out. Just last week, they passed a law prohibiting social media companies from allowing people under 18 to open an account. And I'd like to quote from the podcast. the Daily from New York Times, it was as if the governor of Utah was saying to Congress, you folks, while you're blathering away about the harms of TikTok, here in Utah, we're actually going to do something about it. We're taking action while you're having a hearing. Pivoting a little bit. Mr. Schallenberger, I don't know about you, but I'm having a little bit of a deja vu moment. Yesterday I boarded an airplane in California, and you were sitting to my right, and the great
Starting point is 01:27:33 congresswoman from California was sitting to my left. Unlike yesterday, I only have five minutes, not five hours to question you, so I'm going to push you to go a little bit quick. But I'd like to just explore this idea of, are we missing the mark here? And let me tell you what I mean by that. Somehow we're having this conversation about human beings, deciding what is acceptable for us to hear and see, imperfect human beings. I don't know about you, but I have spent my life in the pursuit of truth. and I don't know anybody that can define it.
Starting point is 01:28:08 If you go back to COVID, we've had a couple of examples that were obviously problematic. But if you go back to COVID, the science said no masks. Then the science said masks. Then it said double masks. It said kids shouldn't play on playgrounds because it was spread by surfaces. It got it wrong. And so how is it that we're supposed to objectively decide what people can see and what they can't see? and I know from your testimony, at least your written testimony,
Starting point is 01:28:37 this concept of objectionable, can you just take a second and describe how maybe we're off track on this? Sure, thank you for the question. I mean, I think it's important to remind ourselves just how radical the First Amendment is and how the people that created this country were very clear that it wasn't a piece of paper that gave us the right, the freedom of speech.
Starting point is 01:29:00 It was an unalienable right. It was something that we were born with. It's a human right. It's a right to be able to express yourself to make these noises, to make these scribbles. That's fundamental to us. I'm just going to so much I want to ask you, so I'm going to shortchange you there a little bit. So like, do you think the founders perceived a situation where there'd be a little bit of a jury appointed by Facebook that would make these decisions? Absolutely not.
Starting point is 01:29:29 Is there any way, even with good intent, they can do that right? Absolutely not. I mean, it's actually, we think that we are so much more advanced than we were 2050 years ago, but 2050 years ago, there was a very strong understanding that you needed people to be wrong. You needed people to build. So yesterday on the plane, I pointed out how in my district, Native Americans actually wrote on rocks. And some people, quite frankly, would find some of the things they put up their offensive. I'm not sure I'd want my kids to fully see them. Yet, they put them up there, and that's the way it is. Okay, very quickly because I'm out of time. A couple of my colleagues have poo-pawed this hearing and this concept that it's not that big a deal.
Starting point is 01:30:08 Can you explain as an individual what it feels like to be censored? It's absolutely horrible. It's one of the worst experiences you'll ever have. It's humiliating. It's being told by one of the most powerful corporations in the world that you're wrong. And not in my case, it wasn't that the facts were wrong. It was that the concern that it would be misleading, that people would get the wrong idea from it. It's dehumanizing.
Starting point is 01:30:33 It's not what this country is about. It's grossly inappropriate. There's no appeals process. There's no your voice is denied. Yeah, no accountability. It's a star chamber effectively. Thank you. Mr. Dillon, let me pivot to you for just a second.
Starting point is 01:30:49 Let me talk about Section 230 and algorithms. To try to put us simply, there's, I think, 230 sees two buckets. One bucket is a published content. John Curtis can publish content on there. The other bucket is kind of a distributor of that content. And I think Section 230 tries to protect the distributor of that contact. But this assumes that distributors of social media platforms are nothing more than a large bulletin board. You use the words conduit for others.
Starting point is 01:31:18 Where we can all place content for the world to see. And with the exception of some predefined bad behavior, we don't hold them liable for that. But is it possible that instead of black or white, there's actually a gray area between hosting that post and making decisions that hide or amplify that post, where somebody actually shifts from a bulletin board to actually ownership of that content and shouldn't be protected from 230. Yes, I definitely think so. When they get two hands-on with how the content is displayed, yes.
Starting point is 01:31:49 And when they're also deciding who can speak. I think the main issue is the viewpoint discrimination when they start deciding who can speak and what they can say. They go far beyond what Section 230 had in mind, which was objectively, you know, unacceptable speech, like unlawful speech. Clearly to find. I wonder today if my favorite sitcom, Signfield, would be taken down from some of these social media platforms. I really regret I'm at a time. Mr. Chairman, I yield back. The Chairman yields back.
Starting point is 01:32:18 The Chair now recognizes the gentleman from Florida's 9th District for five minutes. Thank you, Chairman. And hailing from the great state of Florida, we see book banning, eliminating AP African-American studies, silencing the LGBTQ community and voices, downplaying or denying the Holocaust, slavery, or genocide of Native Americans. As someone of Puerto Rican descent, I would find a particularly offensive that they're censoring Roberto Clemente's own biography and books about them. a amazing Puerto Rican and amazing baseball player and one who contributed so much to helping out children and families. And then I think about the Burricaneers, Puerto Ricans who were discriminated against
Starting point is 01:33:06 and fought for our country nonetheless in World War II and before, who were honored in a bipartisan fashion. These stories need to be told. We even see a new bill that's attempting to allow politicians to sue news media easier because they use anonymous sources or not. I mean, if it's too hot, get out of the kitchen, right? This is part of our First Amendment rights. These are all censorship efforts happening in Florida under the grip of Governor DeSantis. The Republican maturity last week got on the censorship crusade by continuing the book-banning efforts.
Starting point is 01:33:48 So I think we all agree there is some need for censorship discussion. And so I appreciate us having that here today. In the context of social media, the question is what to do about it. And Professor Overton, I appreciate you being here today. I want to talk briefly about 230 reform, since that's really a lot of what we're talking about. I am empathetic to the discussions that other witnesses have said here today about being silenced. I think, Professor, first of all, it's great to have a GW Law professor here as being an alum. being an alumni and Dr. Dunn is also an alumni of the med school.
Starting point is 01:34:25 We'll give them credit for purposes of this hearing. I want to focus on two common ground issues. Federal civil rights violations that happen over social media and then protecting our kids. So let's first start out with efforts that would be clear violations if someone did it outside of social media. What are some ideas and what we could do to draft legislation to ensure that civil rights are protected? within the social media sphere? You know, one idea is a carve-out for civil rights violations. So we have a carve-out for IP, for federal criminal law,
Starting point is 01:35:01 and for a few other categories, and one would be this carve-out for federal civil rights violations. Can you expound on that a little bit? Sure. How do you think we should put it together? So Airbnb, for example, they design a platform that shows somebody's face and, you know, their name, and there's discrimination happening on their platforms,
Starting point is 01:35:22 but right now they're saying they're not liable because of 230. Facebook, basically, their algorithms steer housing and employment ads to white folks away from black and Latinos, and they've got drop downs that allow for folks to target on those bases, but they say, hey, we're not liable because of Section 230. So this carve-out would basically say, hey, 230 applies generally, but not for federal civil rights violations, just like it does with IP,
Starting point is 01:35:53 just like it does with federal criminal law. And about protecting our kids, you know, we have disagreements over books and things like that, but where there's common ground, yes, we all believe parents should be able to have a strong say in which books their kids are reading. They should just not be able to ban what other kids and their parents decide as best for their kids.
Starting point is 01:36:14 And in the case of the Utah law, They're empowering parents to make decisions about access to social media pre-18, which I think there's definitely some positivity there as far as where we could go with something like this. What would you say we could be doing to protect our kids better vis-a-vis 2.30 reforms? Yeah, I certainly think this concept of requirements in terms of Utah is not a bad, a bad thing. I think the big thing, though, is if we're chilling people from moderating and platforms from moderating, we're going to see more pornography, we're going to see more obscenity, sexual solicitation, all of that comes with restraints on moderation.
Starting point is 01:36:59 So that's a big concern I have. But if we established having some requirement pre-18 for parental consent, do you think that would have a substantial effect based upon your research and help and protect our kids from some of those things. I think that that could, and I certainly would love to talk to you more about it and study it in more detail. Thanks, and I yield back. Thank you.
Starting point is 01:37:22 The gentleman yields back, and the chair now recognizes the gentleman from Georgia's 12th district. Oh, I'm sorry, Mr. Joyce came in. I didn't see you. I'm sorry. The chair recognizes the gentleman from Pennsylvania for five minutes. Thank you, Chairman Lada and Ranking Member Matsui for holding today's hearing. And for you, the witnesses, for your time and your testimony.
Starting point is 01:37:42 Last week, we held a hearing regarding data privacy and the pervasive manner in which nefarious actors can manipulate content and exploit user data for financial gain. Today, we are faced with another issue. Big tech companies that have inconsistently applied content moderation policies, manipulated content on their platforms, and even gone so far as to ban or blacklist users for exercising their right to free. speech. These companies claim to operate as politically neutral public forums where speech, ideas, and thoughts are supposed to be shared equally and unabridged. Unfortunately, this has not been the case as evidenced by the witnesses here today and your testimony. These companies often silence opposing ideas that do not align with their platforms' ideologies, all the while
Starting point is 01:38:38 unabashedly using Section 230 as a vehicle to indemnify themselves. It goes without saying that the Internet, our use of the Internet, and how we communicate, exchange ideas, and interact across the Internet has evolved, and Section 230 is long overdue to evolve, and reflect the reality of what we are facing today. Dr. Batachara, thank you for being here today. As a physician myself, I understand that robust scientific discussion and discourse, especially amidst an unprecedented public health emergency, is critical to a healthy medical community. But in your case, Twitter, and I'm quoting here, trend blacklisted, unquote, and stifled scientific discussion.
Starting point is 01:39:30 Is that correct? Yes, I was on a trend blacklist. And can you please briefly describe what a trend blacklist means? It limits the visibility of my tweets so that only my followers can see them, so that they have no chance of going outside of the set of people who happen to follow me. And you find that by limiting your ability to communicate, that that is a healthy medical community?
Starting point is 01:39:51 No. I think it is a terrible thing to limit the ability of scientists to discuss openly with one another in public our disagreements. Thank you, Dr. Bhattacharya. The Great Barrington Declaration offered a sensible alternative approach to handling COVID-19, emphasizing more focused protection of the elderly and other higher-risk groups. Tragically, this approach was not followed in my home state of Pennsylvania, where our former governor ordered nursing homes to receive COVID-19 positive patients,
Starting point is 01:40:26 and that was to a devastating effect. Dr. Bhattacharya, can you briefly describe what the reaction was to the Declaration by public health officers, and how our own government tried to suppress that free flow of ideas? So Francis Collins, then head of the NIH, labeled me a fringe epidemiologist. Then I started getting death threats. I started getting essentially questions from reporters accusing me of wanting to let the virus rip when I was calling for better protection of elderly people. What happened in nursing homes in Pennsylvania and New York,
Starting point is 01:40:59 where COVID-infected patients were sent back, was a violation of that principle of focused protection. Had we had that debate openly, maybe that might have been avoided. So the medical community at large was restricted from your ideas. Is that correct? The social media companies and also the government, the federal government in the form of the head of the National Institutes of Health, worked to essentially create a propaganda campaign to create this illusion of consensus: that their ideas, Francis Collins' ideas, Tony Fauci's ideas, were a consensus of scientists,
Starting point is 01:41:34 when in fact it wasn't factually true. There were tens of thousands of scientists signed on who opposed the lockdowns that were in favor of focus protection of vulnerable older people. That debate should have happened without suppression but didn't. Do you feel that this silencing of speech, and particularly individuals like you from the medical community, do you feel that this has damaged the trust in public health apparatus? It's as low as I've ever seen in my career.
Starting point is 01:42:02 And it's tragic, because public health is very important. It's very important that Americans trust public health. And when public health doesn't earn that trust, very bad things happen to the health of the American public. So take us to the next step. How do we earn back that public trust? Public health needs to apologize for the errors that it made, honestly embrace them, list them out, and say: we were wrong about the ability of the vaccine to stop transmission. We were wrong about school closures. We were wrong to suppress the idea of focused protection. And then put in place reforms so that people can trust that when public health says something, it's actually the truth, and allow dissenting voices to be heard all the time. Dr. Bhattacharya, thank you for your candor and thank you for your expertise. Mr. Chairman, I yield back.
Starting point is 01:42:53 Thank you. The gentleman yields back. The chair now recognizes the gentlelady from California's 16th District for five minutes. Thank you, Mr. Chairman, and thank you to each one of the witnesses. This is a very important discussion today. You know, I've always thought of the American flag as the symbol of our country, but the Constitution is the soul of our nation. And so the discussion about the First Amendment is a very, very important one.
Starting point is 01:43:26 It's a sacred one, in my view. in listening to each one of the witnesses I think that my sensibilities move from one they swing from one direction to another are these sensibilities of individuals professionals who have a great deal of pride
Starting point is 01:43:53 about their profession, what they write, what they say? You know, in politics we say throw a punch, take a punch. Is it someone's ego that is offended by the reaction to what they have written? Dr. Bhattacharya, you wrote the Great Barrington Declaration.
Starting point is 01:44:15 I think context is very important in this as well. That was in October of 2020. There were 249,030 deaths due to COVID as of October 2020, an average of 787 precious souls that were dying every day. We didn't have the vaccine yet. Now, your complaint about being censored is with the platform. And I think that you also have a beef with who was the head
Starting point is 01:44:57 of NIH and Dr. Fauci, because they didn't agree with you and there was fierce opposition to what you put out. That's all part of the enormously important debate that takes place in academia and in the medical community. That's vibrant. It's a reflection of our democracy. But what I would like to get to is what the definition of censorship is. Mr. Overton, would an accurate definition of censorship be the suppression or prohibition of speech by the government? It's this concept of the government, and the government is key in terms of censorship. The courts have come up with a test of whether government is being coercive in terms of social media.
Starting point is 01:45:54 So are they being coercive? Are they going to punish social media for keeping things up? That's the state action. That's the problem. One other quick note here, on some of this reality that, frankly, I just think we have missed here.
Starting point is 01:46:09 In Q4, 2021 alone, YouTube removed 1.26 billion comments. 1.26 billion comments. So if we think that they have got to go through and give an explanation for every comment that they have removed, that's not going to happen. Basically, what's going to happen is they're going to say, we're going to get out of this business, we'll just leave up the smut, the obscenity, the hate speech, that is your internet, if that's
Starting point is 01:46:42 what we have to do. Well, Congress has a major responsibility in all of these areas, whether it's the reforming of Section 2.30, let's see what the Supreme Court does. My sense is they're going to kick it back to Congress again. A national privacy law. I don't take a backseat to anyone on that issue. Congresswoman Lofgren and myself wrote what academician said was the most comprehensive privacy legislation in the Congress. So we have a lot on our plate and a lot of responsibilities to meet. Would a state law that prohibits private sector employers or public university professors or students from discussing diversity, racial equity, systemic racism or sexual identity be considered censorship? It would be, and in fact it was a couple of courts last year.
Starting point is 01:47:42 Let me get to another question, because you said yes. Would a state law preventing public school teachers from discussing their own sexual identity and requiring them to hide it from their students be considered censorship? With the new Florida law banning books, we do consider that censorship. Certainly as applied to universities and private sector employers, yes, and courts have agreed with me. Well, it seems to me that some of us speak out on what we consider censorship. There's a convenience in this, that what we don't like, we consider censorship.
Starting point is 01:48:18 but I think it's very broad under the First Amendment, and I think the steps that Congress needs to take is to certainly address the reforms in 230 and a very strong national privacy law. I wish I had more time, but I thank you again for your testimony and for your answers. Thank you, the General Lady Yields back. The chair now recognize as a gentleman
Starting point is 01:48:43 from Florida's second district for five minutes. Yeah, thank you very much, Mr. Chairman. I have a few questions for the panel, but I noticed that we ran out of time as Dr. Bottateri was trying to respond to Madam Eschew. I thought I would give you a brief moment first to do that. Thank you, Congressman. I won't take very short. A couple of things. One is that in October of 2020, when we wrote the Great Brampton Declaration, it was already clear from the scientific evidence that school closures were a tremendous mistake. It was
Starting point is 01:49:11 already clear that there was this huge age gradient, that it was really older people that were really high risk. And so a call for protecting vulnerable people was not a controversial thing. It should not have been a controversial thing, and yet it was censored and suppressed by social media. Second thing, it was not simply a problem of ego. It's fine to have scientific debate. In fact, I like scientific debate. The problem here was that you had federal authorities with the ability to fund scientists
Starting point is 01:49:44 saying, putting their thumb on the scale, and then the federal government using its power to suppress that scientific discussion online in other. I do agree with you, Dr. Bottacharian, and I'm going to get back to you with a question here in a minute, but thank you for that. I think there's a clear pattern of censorship, and it reveals the political leanings of those who are censored versus those doing the censoring. I think that's evident, and I think it's self-evident that the arbitrary censorship role of big tech has led to partisan outcomes. And the same holds true with fact checkers when they collude with other interests.
Starting point is 01:50:18 For instance, the company NewsGuard defines itself as a journalism and technology tool that rates the credibility of news information and tracks online misinformation. However, they are partnered with big tech, big pharma, the National Teachers Union, and even government agencies. In fact, $750,000 went from the Department of Defense to NewsGuard in a government contract. Mr. Schellenberger, do you find this pattern of censorship and political bias to be real? To be real? Yes. Yes, sir. I do, too.
Starting point is 01:50:52 It's also my understanding that the vast majority of outlets targeted by NewsGuard specifically are conservative-leaning outlets. Do you think that's true? I think that NewsGuard, I mean, we know that NewsGuard rated discussions of COVID's origins as coming from a lab as disinformation. Yes, I remember that. One big example. Thank you. I agree with you. I think fact-checkers need to be fact-checked and removed from the government payroll.
Starting point is 01:51:17 As a medical professor, I find extremely disturbing. You see medicine become partisan, enabling global institutions, big farm and government to have the power to make sweeping mandates and censor personal health freedoms. This is an unequivocal departure from the same platforms that we saw what we saw with those platforms back in the days when Twitter was claiming. that they're the free speech wing of the Free Speech Party. A lot's changed in the 10 years since they made that claim. Dr. Bottagero, in your testimony, you mentioned the mass censorship of the Great Barrington Declaration.
Starting point is 01:51:55 And that was a declaration where tens of thousands of doctors and public health scientists signed on to a very straightforward declaration. In fact, I am one of those doctors, so thank you very much for that. And as a medical doctor, do you consider the opinions of tens of thousands of doctors endorsing a single medical opinion as a sort of consensus of sorts? I mean, I don't think that there was a consensus, but I also don't think we were a fringe position. I think that there was a legitimate discussion to be had, and had we had it openly, we would have won the debate.
Starting point is 01:52:28 I think that's true, too, and I was going through that in real time with my colleagues here and elsewhere. Last year's Twitter files revealed that Dr. Bonachery, you were placed on their trends blacklist, which prevented your tweets from trending on the site. Were you ever contacted by Twitter regarding your placement on that blacklist, or did you have any idea that they were targeting your account? No, not until Elon Musk took over. That's excellent. So I have to say thank you very much, Dr. Barclay. We have to do more about transparency in medicine.
Starting point is 01:53:02 We have to do more about censorship. We need to get back to the times, and I know you remember, I recall the times, when we had free and open debate. In fact, it was demanded of us, if you will, postoperative, you know, more M&M conferences and whatnot, that we actually review the truth, that we face our faults, our flaws, our mistakes. I hope that we can get back to that in the future. Thank you very much for coming. Mr. Chairman, I yield back. Thank you. The gentleman yields back. The chair recognizes the gentlelady from New Hampshire for five minutes. Great.
Starting point is 01:53:36 Thank you very much, Mr. Chair. I want to spend my time focusing what I believe are real victims of online harms and examine how Section 230 plays a role in those harms. As the founder and co-chair of the Bipartisan Task Force to end sexual violence, I'm particularly concerned about reports of online dating apps being used to commit sexual assaults and how Section 230 has prevented the survivors from seeking justice. I recognize that Section 230 is the bedrock of our modern-day Internet, but Congress has a responsibility to ensure that these legal protections are functioning as intended.
Starting point is 01:54:16 The protections that Section 230 provide online platform should not extend to bad actors and online predators. Dating platform companies have defeated numerous lawsuits regarding egregious and repeated cases of sexual assaults on the grounds of Section 230. And I think this committee can agree that Section 230 was not intended to protect dating apps when they failed to address known flaws that facilitate sexual violence. Mr. Overton, if you could, Congress has previously examined and enacted changes to Section 230 to strengthen protections. Can you speak to how additional reforms to Section 230 could better protect the American public?
Starting point is 01:55:01 Thank you so much, Congresswoman. And just this notion that platforms, you know, if you're a company and you engage in the activity, you can be sued. But if you basically take the activity and get paid for it, hey, you're fine. You hide behind Section 230. And this is the true problem. So this notion of requiring that entities act in good faith and to take reasonable sense. steps in order to enjoy the immunity is one reform that has been, you know, held up that's sufficiently flexible to deal with different contexts.
Starting point is 01:55:43 That is a possibility in terms of dealing with this. Professor Daniel Citron has put forward this proposal. She's kind of tweaking it now. But certainly these folks who know that there's a problem and they're profiting off of these platforms, effectively profiting off of Section 230, which was designed. to make it easy for folks to take down this type of activity and has been twisted by courts to basically allow for a free-for-all. So I agree with you. Exploitation, particularly of minors, is a major issue that hopefully there's some bipartisan agreement on addressing.
Starting point is 01:56:24 And based upon your expertise, do you believe that Congress should look to reform Section 230 in this way? I definitely think that we need to think about it in a nuanced way. We definitely need to reform. I think that is one of the leading proposals, and I'm very open and supportive of it. There may be some other proposals. The Shield Act, there are few others. There are few that are out there that are important. Well, thank you for sharing your expertise.
Starting point is 01:56:57 It remains clear to me that there are real opportunities to make the Internet a safer, place for the American people. Section 230 was enacted almost 30 years ago, and it's past time for Congress to take a closer look at these legal protections. I ask that this committee refocus its effort on Section 230 on preventing real online harms and sexual violence in our communities, and I yield back. Thank you. The gentlelady yields back. The chair recognizes the gentleman from Georgia's 12th district for five minutes. Thank you, Chair Lada.
Starting point is 01:57:33 convening this hearing, and I want to thank our witnesses for being here. This is a very important discussion we're having today. Big Tech currently has unilateral control over the majority of public debate in our culture, and it's concerning to most Americans. What is even more concerning is that as a result of the Twitter files, it has been made clear that Big Tech is also working in direct coordination with government officials to silence specific individuals. whom unelected bureaucrats disagree with.
Starting point is 01:58:07 This Orwellian scenario is un-American and House Republicans will not stand for it. Last year, the Portner Institute, a self-appointed clearinghouse for fact-checkers made news when one of its fact-checkers, PolitiFact, it incorrectly labeled third-party content that challenged the Biden administration's definition of a recession as false information. It is clear that political fact was biased in the content it was flagging as misinformation or false information to fit the narrative it preferred, rather than reflecting the known facts. Mr. Dillon, in your experience, are these fact checkers apolitical neutral fact-based researchers? No, that's a pretty good joke. They are not.
Starting point is 01:58:54 in the whole fact-checking apparatus, there's unbelievable hubris in the whole project. This idea, especially when we're talking about medical information too, I often hear people going back to say that, it was based on what we knew at the time that we were saying that this was true or that this was false. All that is is an admission that our knowledge changes over time. It's a knock-down argument against censorship.
Starting point is 01:59:19 If knowledge changes over time, you should never try to say that these are the facts, these are the only things that you can say. Everyone who says something opposing to that should be silenced. It's a knockdown argument against censorship in favor of open debate, which is the fastest and best way to get to the truth. Dr. Botchara, give me your experience with these fact checkers. They have been tremendously inadequate during the COVID debate and the pandemic
Starting point is 01:59:49 to police scientific debate. They can't tell the difference between some. true scientific facts and false scientific facts. They serve as narrative enforcers more than as true referees of scientific debate, which takes lots of years of experience that the fact-dectors don't have. Mr. Chelenberger, do you know who funds these fact-checkers? As far as, well, obviously, you know, somebody's paying them to do this information. Can I respond to that really quickly?
Starting point is 02:00:23 Yes. We were fact-checked. We made a joke about how the Ninth Circuit. at court had overruled the death of Ruth Bader Ginsburg, and USA Today fact-checked it, and that fact-check was paid for by grants from Facebook. And then Facebook threatened to demonetize us in response to the false rating on that joke. Okay. Well, great. Thank you, Mr. Dillon. As a follow-up, did the Twitter files or any research that you have done to expose
Starting point is 02:00:46 Mr. Dillon. I'm sorry, can you repeat? As far as the Twitter files, is there any research that you have done to expose the practices of big tech that shows if fact-checkers coordinated with federal agencies when they flag information? The Twitter files, I think, exposed a breadth of coordination with state actors to control the flow of information. There was ongoing discussion there between the two. Dr. Bhattacharya, what do you have?
Starting point is 02:01:26 The federal government financed, funded projects at universities that then reached out to social media companies, then told social media companies how to censor and who to censor during COVID. Mr. Dillon real quickly, what did Twitter's censor of your company do your revenue? Well, initially we did see a spike because we had a lot of people sign up and support of us, but being off of Twitter for eight months took its toll. Currently, it's where we generate the most impressions and the most traffic. I just posted the other day that we generated more impressions on Twitter in the last week than we have on Facebook, Instagram, and YouTube
Starting point is 02:02:09 combined, partly because Facebook has been throttling us so much. We get more views on a post if we stuck it on a telephone pole in a small town than we're going lately. Yeah, and so much of that is just to hide the truth, to be honest with you. I met with the Dr. Cocker, Caldwell, who is Associate Professor at the Medical College of Georgia, which is Augusta University, and she gave me a page here, Protecting Young People Online, and I'd like to submit this for the record. Without objection. Okay. Thank you very much. All of you, and I yield back. Thank you very much to the chair now. Recognize the gentlelady from Tennessee for five minutes.
Starting point is 02:02:51 Thank you, Mr. Chairman. And thank you to the witnesses for being. here today and Mr. Dillon, I'll start with you. Would you agree that social media can't take a joke and that they can't handle the truth? Yes or no? Yes, I think that there is actually an ongoing outright war on the truth and reality and a lot of the reason why some of our jokes have been censored are because they carry the truth, you know, to every joke there's a grain of truth. Absolutely. And the joke that we were censored for and locked out for, the thing that I say about it most frequently is that the truth isn't hate speech. It included truth. And so they were actually moderating. This is where the, you know, the bias and censorship comes into play in a lot of
Starting point is 02:03:34 different areas. In their terms of service, they've baked radical gender ideology into them so that you must either affirm it or remain silent. If you say anything to criticize it or even joke about it, you can get kicked off the platform. So the bias is in the terms of service. In the terms of service. You know, in your statement, you said censorship guards the narrative, not the truth. It guards the narrative at the expense of the truth. And you went on to say about Twitter, now this is pre-Eleon Musk and pre-we, we know that freedom of speech costs $44 billion, but instead of moving our joke themselves, they required us to delete it and admit that we'd engage in hateful conduct. And, you know, it sounds to me like they forced you to make a plea deal,
Starting point is 02:04:17 basically, and say you committed fraud and all that kind of stuff. Just respond to that, please, sir. Yeah, my reaction to that when I first saw that they were requiring that we delete the joke. You know, censorship would be them deleting the joke. That would be them taking it down and saying that we don't want this on our platform. It went beyond censorship to what I would refer to as subjugation by telling us that we must delete it ourselves and admit in the process there was red font over that delete button that said we admitted that we engaged in hateful conduct. So that's why we refused to delete the joke is because we did not engage in hateful conduct. The truth is not hate speech.
Starting point is 02:04:51 No, and you know what makes me want to picture fulfilled prophecies from the Babylon B and interim into the congressional record just for posterity's sake, honestly, just to show that truth is stranger than fiction and it seems that satire can be a predictor of the truth, honestly.
