The Philip DeFranco Show - PDS 6.27 The Future Of Deepfakes Is Here, Google Censorship & Manipulation Allegations, & More
Episode Date: June 27, 2019
You better love this huge show or something, something throat punch! Go build a website and use code “PHIL” to get 10% off with Squarespace!: http://Squarespace.com/Phil
Check Out the Rogue Rocket Channel Trailer: https://www.youtube.com/roguerocket?sub_confirmation=1
SUBSCRIBE to DeFrancoDoes: https://www.youtube.com/defrancodoes?sub_confirmation=1
Follow me for the personal stuff: https://www.instagram.com/phillydefranco/
Need more news? Find more stories here: http://roguerocket.com
Watch the previous PDS: https://youtu.be/5itzzS9xxfM
Watch the latest Deep Dive: https://youtu.be/Aq6ZDWHhBk8
Support this content w/ a Paid subscription @ http://DeFrancoElite.com
————————————
Follow Me On:
————————————
TWITTER: http://Twitter.com/PhillyD
FACEBOOK: http://on.fb.me/mqpRW7
INSTAGRAM: https://instagram.com/phillydefranco/
————————————
Today in Awesome:
————————————
Check out https://phil.chrono.gg/ for 25% off “Project Winter” only available until 9 AM
Check Out the Rogue Rocket Trailer: https://youtu.be/gF2Vb5Q_KGc
Brownies - Basics with Babish: https://youtu.be/kDdUdvNQndo
Worst Punishments In The History of Mankind: https://youtu.be/Zi6k21kM0UA
Idris Elba Takes a Lie Detector Test: https://youtu.be/kiKPwHm7NjU
Schoolboy Q Learns to Respect Spicy Wings: https://youtu.be/ifD3-niHucE
Billie Eilish Surprises Her Fans: https://youtu.be/9QrlDWKP6lg
Charlie’s Angels - Official Trailer: https://youtu.be/RSUq4VfWfjE
Tom Holland & Jake Gyllenhaal Unpopular Opinion: https://youtu.be/H3FcCz2y2mM
Four More Days to Get “We'll All Be Skeletons” Tees! http://www.ShopDeFranco.com
Secret Link: https://twitter.com/EmmaKerwin2/status/1142901343516069890?s=20
————————————
Today’s Stories:
————————————
New Controversial App: https://roguerocket.com/?p=11887
Google Accused of Bias: https://roguerocket.com/?p=11889
Woman Charged in Alabama: While we used multiple sources to compile the information for today’s coverage, due to YouTube’s demonetization issue we will not include them here.
————————————
More News Not Included In Show Today:
————————————
Wayfair Donates $100,000 to Red Cross in Response to Employee Walkout: https://roguerocket.com/2019/06/27/wayfair-donates-100000-to-red-cross-in-response-to-employee-walkout/
Sony, Microsoft, & Nintendo Say Chinese Tariffs Will Hurt the Gaming Industry: https://roguerocket.com/?p=11892
Nike Pulls New Shoe Line in China Over Designer’s Support for Hong Kong Protests: https://roguerocket.com/2019/06/26/nike-pulls-new-shoe-line-in-china-over-designers-support-for-hong-kong-protests/
7-Year-Old Avengers Actress Speaks Out Against Bullies: https://roguerocket.com/2019/06/26/7-year-old-avengers-actress-speaks-out-against-bullies/
Fans Respond to Comments on Billie Eilish Photo: https://twitter.com/TheRogueRocket/status/1143609948007309312?s=20
Why Twitter Tried to Cancel Lil Nas X: https://roguerocket.com/?p=11746
Trump Ordered Strikes on Iran, Then Called Them Off: https://twitter.com/TheRogueRocket/status/1142160311090212864
————————————
Edited by: James Girardier, Julie Goldberg
Produced by: Amanda Morones, Brian Espinoza
Art Director: Brian Borst
Writing/Research: Philip DeFranco, Maddie Crichton, Lili Stenn, Sami Sherwyn
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Sup you beautiful bastards, hope you're having a fantastic Thursday.
Welcome back to the Philip DeFranco Show, and if any of you were freaking out for a second
because you heard those sounds that I just had the editor put in, I see you.
That said, let's jump into it.
And by it, I mean me threatening you with physical violence if you do not hit the like button on this god-awfully long video.
You say you like 'em big? Prove it. That's what she said, but I'm talking about the like button.
That said, let's jump into the Thursday
Philip DeFranco show.
And the first thing we're gonna talk about today
is some news that we said was going to get worse
and more widespread.
What we're talking about is big news around deep fakes.
We've talked about this topic a number of times in the past
and we're talking about it today because yesterday,
Vice's tech publication, Motherboard,
they reported that they had found a deep fake app
called Deep Nude.
And unlike deep fake apps that we had seen in the past
where you take someone's face,
you put it on someone else's body,
this app takes the photos of women
and it removes the clothing
so it looks like they are very realistically naked.
Motherboard claimed that they downloaded and tested the app
on more than a dozen pictures of both men and women.
And there they said that they found
that while the app does work on women who are fully clothed,
it works best on images
where people are already showing some skin,
adding the results vary dramatically.
But when fed a well-lit high resolution image
of a woman in a bikini facing the camera directly,
the fake nude images are passably realistic.
Motherboard even included several of the images they tested
in the article, using photos of Taylor Swift,
Tyra Banks, Natalie Portman, Gal Gadot, and Kim Kardashian.
And while some of the pictures have a few errors,
like the bikini strings in Kim and Tyra's pictures,
they are still very, very realistic.
You may have also noted that the pictures
Motherboard included are only of women.
That's because as of right now,
the app explicitly only works on women.
When you test it with a man, it, quote,
"replaced his pants with a vulva."
Although of note there on the Deep Nude website,
they also say that there is a male version in the works.
But of course, one of the biggest things around deepfakes
isn't that the technology exists.
When you go to the theater and see a show,
the stuff you see is incredible.
You have movies where they're making digital recreations
of people that are no longer alive.
And so with this, one of the biggest key things
is just how accessible it's becoming.
According to Motherboard, anyone can get the app for free
or they can purchase a premium version.
Although on that note, Motherboard reported
that the premium version costs $50,
but there's a screenshot that's published on The Verge
that indicated that it was $99.
In the free version, the output image is partly covered
by a watermark, and in the paid version,
the watermark from before is removed,
but there's a stamp that says fake
in the upper left-hand corner.
But, of course, as even Motherboard notes,
it would be extremely easy to crop out the fake stamp
or just remove it with Photoshop.
And as far as, you know, where did this come from,
what's happening now, it appears that DeepNude
launched downloadable software for Windows and Linux
on June 23rd, although right now you can't actually get
the software from their website.
And this according to both DeepNude's Twitter and website
is because they've just been receiving too much traffic.
Saying, we did not expect these visits
and our servers need reinforcement.
We are a small team, we need to fix some bugs
and catch our breath.
We are working to make DeepNude stable and working.
We will be back online soon in a few days.
As of right now it is unclear who these developers are
or where they're from.
Their Twitter account lists their location as Estonia
but doesn't provide more information.
Although, Motherboard was able to reach
the anonymous creator by email,
who requested to go by the name Alberto.
And Alberto told them that the app's software
is based on an open source algorithm called Pix2Pix,
which was actually developed by researchers
at UC Berkeley back in 2017.
And reportedly, that algorithm is similar
to the ones being used for deep fake videos,
and weirdly enough, also similar to the technology
that self-driving cars use to formulate driving scenarios. With Alberto telling Motherboard that the algorithm actually only works
on women because images of nude women are easier to find online. And according to the report,
Alberto also told Motherboard that during the development process, he asked himself if it
was morally questionable to make this app. And it appeared that he argued that he basically believed
that the invention of the app was inevitable. Saying, I also said to myself, the technology
is ready within everyone's reach. So if someone has bad intentions, having deep nude doesn't change much.
If I don't do it, someone else will do it in a year.
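For context on what Pix2Pix actually is: it's a conditional GAN, where a generator G maps an input image x (plus noise z) to an output image, and a discriminator D tries to tell real image pairs from generated ones. The objective below is from the published UC Berkeley paper (Isola et al., 2017), i.e. the public algorithm, not DeepNude's unreleased code:

```latex
% Adversarial objective: D learns to spot fakes, G learns to fool D.
\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)]
                        + \mathbb{E}_{x,z}[\log(1 - D(x,G(x,z)))]
% Full objective: adversarial loss plus an L1 term that keeps the
% output close to the ground-truth paired image.
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G,D)
      + \lambda\, \mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_{1}\big]
```

That L1 term is why training data matters so much here: the generator can only learn mappings it has seen paired examples of, which lines up with the creator's claim about why the app only works on women.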
Right, and that inevitability argument is something that we've talked about on the show before.
It goes along with the idea that even if these deep fakes are banned by Pornhub and Reddit, like we've seen in the past, they're just going to pop up in other places.
This is part of an inevitable reality.
And that's why a really important part of the discussion around deepfakes is how to detect and regulate deepfakes.
When you develop technology that can alter reality so realistically that people see something and think it might be real,
it is also going to be incredibly important to train people to identify fakes and to build software that makes them easy to detect.
And this is actually something that came up when Motherboard showed the DeepNude app to Hany Farid, a computer science professor at UC Berkeley.
And Farid, who is described as an expert on deepfakes,
said that he was shocked by how easily
the app created the fakes.
Right, usually in the past when we talked about
deepfake videos or anything that was remotely realistic,
we were talking about hours.
Deep Nude reportedly only takes around 30 seconds
to render these images.
And that's why Farid said,
"'We are going to have to get better
"'at detecting deepfakes.
"'In addition, social media platforms are going to have
"'to think more carefully about how to define
and enforce rules surrounding this content.
And adding, our legislators are going to have to think
about how to thoughtfully regulate in this space.
And let's actually talk about those last two points,
the need for social media platforms and politicians
to regulate this kind of content.
Over the last few years,
deepfakes have become widespread internationally,
but any kind of laws or regulations
have been unable to keep up with the tech.
I mean, just yesterday you had Mark Zuckerberg saying
that Facebook is looking into ways to deal with deepfakes
during a conversation he had at the Aspen Ideas Festival.
And there, he didn't say exactly how Facebook is doing this,
but he did say that the problem from his perspective
was how deepfakes are defined, saying,
"'Is it AI-manipulated media or manipulated media using AI
"'that makes someone say something they didn't say?'
I think that's probably a pretty reasonable definition.
But as others have pointed out,
that definition is very, very narrow.
Right, I mean, to look at a recent event,
something where Facebook got a lot of backlash,
as you may or may not have seen,
they decided not to take down a controversial video
of Nancy Pelosi that had been slowed down,
making her seem drunk or impaired.
And Zuckerberg argued
that the video should be left up
because it's better to show people fake content
than to hide it.
Although I do wanna note there, there is a difference.
That would not be described as a deep fake,
that would be more a manipulated video.
And then you ultimately end up getting into a debate
of the intent, right?
Was the intent to mislead,
or was the intent commentary or comedy?
But still you have experts worrying
that Zuckerberg's kind of thinking here
could set a dangerous precedent for deep fakes.
And actually on that note, on Monday,
you had lawmakers in California proposing a bill
that would ban deep fakes in the state.
And the assembly member that introduced the bill
said that he did it because of the Pelosi video.
Which for me makes this an interesting thing to watch
to see how that bill progresses
and what they define as a deepfake.
And also of course, like we've seen on other issues,
if this influences other policies
at the state level elsewhere.
And I say on the state level there,
because right now on the federal level,
those efforts have been stalled.
Separate bills have been introduced in both the House
and the Senate to criminalize deepfakes.
But both of those bills have only been referred to committees.
And right now it's unclear whether or not
they've actually been discussed by lawmakers.
But even if those bills did move forward,
there's still a ton of legal hurdles
that they'd have to go through.
And around this, you had an attorney
by the name of Kerry Goldberg,
whose law firm specializes in revenge porn,
saying, it's a real bind.
Deepfakes defy most state revenge porn laws
because it's not the victim's own nudity depicted.
But also our federal laws protect the companies
and social media platforms where it proliferates.
But ultimately that is where we are
with this story right now.
It's going to be interesting to see what happens
in this space moving forward.
When it comes to deep fakes, I think a lot of people
are like, oh, we'll be able to fake celebrity nudes.
But I think the more dangerous and damaging potential here
is actually for people without large audiences.
Right, when it comes to famous people,
you expect a lot of fake stuff.
But when you have someone where, I mean,
we live in an age where Joe Blow is sharing
the same amount, if not more photos
than famous person A or B.
And I mention that because there'll be plenty of photos
for the deep fake app to use.
And you're talking about a person
that won't have the bullhorn to say that this is fake.
Because people, unfortunately,
and I think this is a human flaw
that we have to actively combat,
might see something that looks realistic
and because it in no way affects our lives,
just assume, yeah, of course, that's probably real. It doesn't pertain to my life in any way,
why would I deep-research it? That said, I agree with the Albertos of the world that these fakes are an inevitability.
But I also think that, because of that, the main focus needs to be on how to identify them and how to react accordingly.
Actually, a last-second, unexpected update to this story: the DeepNude Twitter account posted a statement saying that despite the safety measures adopted
(watermarks), if 500,000 people use it, the probability that people will misuse it is too high.
We don't want to make money this way.
Surely some copies of Deep Nude will be shared on the web,
but we don't want to be the ones who sell it.
And concluding, the world is not yet ready for Deep Nude.
But with that said, of course,
I pass the question off to you.
What are your thoughts on this whole story?
Do you think that we're heading into scary times or no?
Do you think this is overblown?
Any and all thoughts I'd love to see
in those comments down below.
And then let's talk about the story that has just blown up
over the past 24 hours coming from Alabama.
So this is 27-year-old Marshae Jones.
Back in December, she was shot
by 23-year-old Ebony Jemison.
And notably here, Jones was five months pregnant
at the time, was shot in the stomach and lost the fetus.
Now that story as described blows up on its own.
But the reason this story hit a whole different level
is not only did the manslaughter charge
against Ebony get dismissed,
this after a grand jury failed to indict her,
but now Jones, who was the pregnant woman who was shot,
she has been charged with manslaughter.
And as far as the reasoning behind the manslaughter charge,
we heard from Pleasant Grove Police Lieutenant Danny Reed,
who said that the investigation showed
that the only true victim in this was the unborn baby,
and continuing, it was the mother of the child who initiated and
continued the fight which resulted in the death of her own unborn baby. According to a report on this, Reed said that the fight stemmed
over the unborn baby's father, and that the investigation showed, he said, that it was Jones who initiated and pressed the fight,
which ultimately caused Jemison to defend herself and unfortunately caused the death of the baby. With Reed saying, let's not lose sight that the unborn
baby is the victim here.
She had no choice in being brought unnecessarily
into a fight where she was relying
on her mother for protection.
And as far as the negative reaction to all of this,
you had a lot of people saying this seems crazy.
For example, you had Amanda Reyes,
who directs the Yellowhammer Fund saying,
the state of Alabama has proven yet again
that the moment a person becomes pregnant,
their sole responsibility is to produce a live,
healthy baby and that it considers any action
a pregnant person takes that might impede in that live birth to be a criminal act.
There was Fred Guttenberg on Twitter saying, this is fucked up.
Every woman in America should be screaming about this.
Josh Gad tweeting,
Alabama, you have basically become The Handmaid's Tale.
This is so freaking disturbing and backwards.
But ultimately, that is where we are with this story right now,
and I'm very interested to know what your thoughts on it are.
Do you think the reasoning used to justify the charges in this situation makes sense,
or is it batshit crazy?
Are you somewhere in the middle
where you think maybe it's a slippery slope?
Any and all thoughts, I'd love to hear from you
in those comments down below.
And then the last thing we're gonna talk about today
is easily the most requested story over the past 24 hours,
and that is this Google YouTube Project Veritas situation.
This is a story that we've been looking into this week,
trying to get responses from YouTube and Google on several things,
since the situation keeps developing. We've also actually, as of this morning, gotten some responses pertaining to this.
But to start things off a lot of this started with a Project Veritas video that came out on Monday
If you don't know they're a conservative nonprofit that uses undercover journalists to try to expose politicians and organizations
It's run by conservative activist James O'Keefe. They're well known for secretly recording audio and video encounters
Although on that note they have also long been criticized by some for what people have called deceptive editing.
And I wanted to mention that at the beginning, not to try and make you think a certain thing,
but because we're going to be showcasing a lot of this edited video before we get to the part where someone says that it was deceptively edited.
So the video in question includes footage of a Google employee being secretly recorded, seemingly speaking about company policies,
as well as an anonymous source
who said they were a Google employee,
who also had documents they claimed
showed that the company had a liberal bias.
So we're gonna start with the video,
but a quick note there,
the video in question has been removed from YouTube.
I also saw an update that it's now been removed from Vimeo.
And one of the updates around this story,
because a lot of people have been saying,
"Why did you remove this video?"
They told us that it violated their privacy guidelines,
as someone was filmed without their consent.
And telling us that if that Googler's face and their name were removed from the video,
right, it was blacked out, it was blurred out, the video would actually be fine.
So moving forward, I'm not actually going to be including the Googler's face or that Googler's name.
They identified the Googler in the video as the head of a department at Google. The clips in this video show her saying various things
about Trump and the election: ...since 2016 to make sure we're ready for 2020. So training our algorithms, like, if 2016 happens again, would we have...
We also hear the Googler talking about politicians who want to intervene with Google,
saying they've been called before Congress but don't want to be attacked by them for
practices they don't intend to change, and adding that she doesn't believe that breaking
up Google will be effective.
Elizabeth Warren is saying that we should break up Google.
I love her, but she's very misguided. Like that will not make it better, it will make it worse because now all these smaller companies who don't have the same resources that we do will be charged with preventing
the next Trump situation.
It's like a small company could not.
We also hear her speaking about political bias when it comes to what Google considers
to be a credible news source.
The accusations on our end of fairness is that we're unfair to conservatives
because we are choosing what we define
as credible news sources,
and those news sources don't necessarily
overlap with conservative sources.
And that's not true.
And also, as I mentioned in the beginning of this story,
it was not just this Googler
who didn't know they were being filmed.
It was an anonymous source that sat down
with Project Veritas that claimed
that they worked at Google.
That source saying that Google was highly biased and wanted to prevent a Trump re-election in 2020.
They also brought forward documents about Google's practices.
One of which looked like an internal document that shows that as far as Google is concerned,
Google's goal is to, quote,
establish a single point of truth for definition of news across Google products.
Another document is explaining a concept that they call algorithmic unfairness and how they're trying to address this.
And according to the documents that the source brought forward, algorithmic unfairness means unjust
or prejudicial treatment of people
that is related to sensitive characteristics,
such as race, income, sexual orientation, or gender,
through algorithmic systems
or algorithmically aided decision making.
And it gives an example saying that if you search CEOs
into Google Images, you will see mainly men.
And even though this would be factually accurate,
it would be algorithmic unfairness
because of what it reinforces about men and women's roles in the workplace.
But it also says that in some cases,
it may be quote, appropriate to take no action
if the system accurately affects current reality.
While in other cases, they could consider how to quote,
help society reach a more fair and equitable state
via either product intervention
or broader corporate social responsibility efforts.
So this part was odd and kind of interesting to us.
And so, to see what would happen in our case, right,
this isn't scientific, we wanted to see if Google was doing anything to make the results less algorithmically unfair for that CEO example.
So we simply typed CEOs into Google Images and as you can see you do get images mainly of men, though
it does suggest woman as a suggested term.
But still, the source also says that Google is furthering an agenda in its search suggestions. When typing "women can," they got suggestions like "vote,"
"do anything," and "fly." When typing "men can," they got "have babies,"
"cook," and "get pregnant."
And so they say that this is furthering
a progressive agenda.
We also typed those phrases into Google
to see if those results were universal.
And what we found is that there was actually
a fair amount of overlap,
but we also saw other results as well.
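Incidentally, the kind of overlap check described here amounts to simple set comparisons. As a minimal sketch in Python, with hypothetical suggestion lists standing in for real results (actual autocomplete output varies by user, location, and time):

```python
# Compare two autocomplete tests for the same query.
# Both lists below are hypothetical placeholders, not real Google results.
veritas_suggestions = ["vote", "do anything", "fly", "be pastors"]
our_suggestions = ["vote", "fly", "do anything", "be drafted"]

# Suggestions both tests saw (the "fair amount of overlap").
overlap = set(veritas_suggestions) & set(our_suggestions)

# Suggestions unique to each test (the "other results as well").
only_theirs = set(veritas_suggestions) - set(our_suggestions)
only_ours = set(our_suggestions) - set(veritas_suggestions)

print(sorted(overlap))      # ['do anything', 'fly', 'vote']
print(sorted(only_theirs))  # ['be pastors']
print(sorted(only_ours))    # ['be drafted']
```

The point of checking both directions is that suggestions not being universal cuts both ways: a partial overlap is what you'd expect from personalized, time-varying results, whether or not any agenda is involved.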
Another search example that they used
was typing Hillary Clinton's emails are
versus Donald Trump's emails.
And for Clinton, it gives no suggestions,
but for Trump, there are some suggestions.
According to this anonymous source,
they say that there is a reason for this.
Well, according to them,
"Hillary Clinton's emails" is a conspiracy theory
and it's unfair to return results based on her emails.
And that source goes on to say
that they are training AI to not return results like this.
So we also tested this as well.
We got no suggestions for Hillary.
We did get suggestions for Trump.
And while it's being mentioned here, right,
as the auto-complete, the suggestion feature,
I think something that's also important to note here
is what results you actually get with a search.
Because when we searched Hillary Clinton's email,
number one at the top was WikiLeaks,
followed by NPR, Politico, Wikipedia,
and a recent Fox News article.
Right, so if the argument is that they're trying
to hide bad information about Hillary Clinton,
it would be weird for Google to go,
"'Hey, here's the number one result,
"'a searchable archive of Hillary Clinton emails from WikiLeaks.
Also, to jump back, the Veritas video brings up Section 230 of the Communications Decency Act, which allows companies like Google to not be held accountable for the content that users post.
This because they are a platform, not a publisher.
But this anonymous source claims that they have become a publisher and should be held accountable.
That source also goes on to talk about YouTube, which is a Google owned company, saying that YouTube is demonetizing conservative voices,
that they're using AI to suppress their videos.
And they say that because since a conference in May,
many have seen their view counts go down.
And one of the examples they used in the video
was Tim Pool.
And so we looked at Tim Pool's view counts over the past two months.
Before May, his range was 110,000 to 250,000 views.
And between now and then, there were some dips,
with some videos hitting below 100,000.
But also, at the same time,
looking at his most recent videos,
the views range from 150,000 to 636,000.
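The before-and-after comparison described here is just a minimum/maximum over two lists of per-video view counts. A minimal sketch with hypothetical numbers (these are illustrative, not Tim Pool's actual analytics):

```python
# Hypothetical per-video view counts, before and after the May conference.
views_before = [110_000, 180_000, 250_000, 140_000, 200_000]
views_after = [150_000, 95_000, 636_000, 120_000, 300_000]

def view_range(views):
    """Return the (lowest, highest) view count across a list of videos."""
    return min(views), max(views)

print(view_range(views_before))  # (110000, 250000)
print(view_range(views_after))   # (95000, 636000)

# Videos that fell below the previous floor. A dip like this is consistent
# with suppression, but also with ordinary variance -- it proves nothing alone.
dipped = [v for v in views_after if v < min(views_before)]
print(dipped)  # [95000]
```

As noted in the show, a public-facing check like this can only flag dips; without the channel's own analytics you can't distinguish suppression from traffic that simply came from elsewhere.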
Although a thing I do want to point out there,
just because some of those recent videos
have had high views,
that doesn't mean that those videos were not suppressed.
It just means that they were either not suppressed
or they gained so much traction on other social platforms,
like maybe the video was shared on Twitter and Facebook
or wherever else.
Although, as outsiders, there's no way to know for sure
unless Tim Pool shares his analytics,
which would help us have a clearer view,
though I don't think it would be right
to pressure him to do that.
So we had all of that in the initial report.
And then later on Monday, we saw the Googler
who was filmed without their knowledge,
responding to the video,
saying that among other things,
she has been receiving threats,
then explaining how this meeting and video came to be,
writing, in late May, I accepted an invitation
to meet with a few people who claim to be
from Two Step Tech Solutions. They said they wanted to chat to me about a mentoring program for young women of color in tech. Writing that Project Veritas then filmed the meeting and published it widely online. Then answering the self-imposed question of, why did they do this to me? Where she says, it seems they found
that I had spoken publicly at Google IO on ethics
and they wanted someone who could give them juicy soundbites
about tech's alleged bias against conservatives.
Over the course of a two-hour dinner,
I guess they think I delivered.
She then goes on to claim,
Project Veritas has edited the video
to make it seem that I am a powerful executive
who is confirming that Google is working
to alter the 2020 election.
On both counts, this is absolute,
unadulterated nonsense, of course.
In a casual restaurant setting,
I was explaining how Google's trust and safety team,
a team I used to work on,
is working to help prevent the types
of online foreign interference that happened in 2016.
And adding, Google has been very public about the work
that our teams have done since 2016 on this,
so it's hardly a revelation.
Then going on to say that the video goes on
to stitch together a series of debunked conspiracy theories
about our search results and our other products.
Going on to say,
but despite what the video may have you believe,
I'm not involved in any of these products.
Just like I'm not involved in any of the other topics
Project Veritas baited me into discussing.
Whether it's antitrust, Congress,
or the dozens of other topics that didn't appear
in the video on which I presumably didn't say anything
that could be twisted to their advantage.
I was having a casual chat with someone at a restaurant
and used some imprecise language. Project Veritas got me, well done.
And then near the end saying I don't expect this post will do anything to deter or convince the people who are sending me abusive
messages. In fact, it will probably encourage them, give them oxygen and amplify their theories.
But maybe a few people will read it and realize that I'm not the cartoon cutout villain that Project Veritas would have you believe.
But that is also not where this story ends.
On Tuesday, Project Veritas released another report.
And this time they share what they claim
are emails from a Google employee.
And in those emails, someone refers to PragerU,
Jordan Peterson, and Ben Shapiro as Nazis.
And then says,
I don't think correctly identifying far-right content
is beyond our capability.
But if it is, why not go with another Googler's suggestion
of disabling the suggestion feature?
And according to their report,
they say that this implies they should be removed
from suggested content.
We then saw PragerU launch a petition
to stop big tech bias saying
that conservative ideas are under attack.
Ben Shapiro also tweeted at YouTube CEO Susan Wojcicki
saying, hey Susan Wojcicki,
would love to discuss this with you.
Do you think your employees should be cavalierly labeling
those who militantly hate white supremacy Nazis
and then shaping algorithms on the basis of such lies?
Also regarding these reports,
we saw YouTube insider tweet,
we've had a lot of questions today,
clarifying, we apply our policies fairly
and without political bias.
All creators are held to the same standard.
Also regarding this,
when we spoke with a YouTube representative this morning,
they said that the company,
whether it be Google or YouTube,
has a very open culture.
With that, there are a great number of people
and a great number of groups
where people are sharing very strong opinions
on everything from pets to politics.
And so essentially saying they want their employees to feel comfortable sharing their opinions,
but also at the same time claiming that the email that was leaked was not from a YouTube employee, and also saying that that person
does not speak on behalf of the company and that it is not an official company position. Although, and this is my personal opinion here
just to create some separation, I will say this part probably concerns me the most. Just to provide one example,
I have a number of disagreements when it comes to the likes of Ben Shapiro.
And I find it concerning that allegedly,
according to this email,
there are people equating him to Nazis.
An argument like that, and a person making it,
I think creates a scary situation.
Obviously, as of right now,
this is just kind of one person,
a snippet of an email where we don't see anything else.
But still there, I see there to be a reason for concern.
Also regarding this story, we've seen political reactions.
On Tuesday, the US Senate Committee
on Commerce, Science, and Transportation had a hearing.
And at that hearing, Maggie Stanphill, Google's director of user experience, spoke as a witness.
And there, Senator Ted Cruz brought up the Project Veritas report, and notably the claim that Google wants Trump out of power.
Do you think it's Google's job to make sure, quote, somebody like Donald Trump never comes to power again?
No, sir, I don't think that is Google's job.
And we build for everyone, including every single religious belief,
every single demographic, every single region,
and certainly every political affiliation.
At a separate House hearing, we also saw Texas Congressman Dan Crenshaw mention the email.
The recent leaked emails from Google,
they show that labeling mainstream conservative media
as Nazis is a premise upon which you operate.
We also saw Trump speak about Google
on a Fox Business phone interview.
He said,
Look, we should be suing Google and Facebook and all that,
which perhaps we will, okay?
Also claiming that Google was trying
to rig the 2020 election.
But ultimately, that's where we're gonna end
with this story today.
There's still stuff coming out now.
I mean, even as I'm finishing this video,
I think they did another release.
And ultimately what I would say,
as far as my personal takeaway,
given the allegations that the video of the Googler
who didn't know they were being filmed
was selectively edited,
it seems like the easy way to combat that
would be to release the full video.
Unless they release the video,
I think a fair assessment of it is impossible.
You know, I make a daily show.
When you're trying to put out a piece,
you know, sometimes you just make cuts
to get to the main points.
But given the criticisms and allegations, which aren't limited to one event, that this group selectively and deceptively edits videos,
it seems like the easy way to fight that is to just release the full bit. And at that point, I would feel way
more comfortable with the point it appears to be trying to make. Otherwise,
I'm incredibly skeptical of a video where the context is given to me
and it is obviously edited, because at times, when done by someone trying to push a narrative, you can mislead a lot of people.
I've been burned in the past.
Yeah, I think it's gonna be interesting to see
what else comes out, if it's confirmed or denied,
what is real, where things are coming from.
Because of course, I think liberal and conservative voices
should not be suppressed.
Yeah, that's where we're gonna end
what feels like an incredibly large show, we'll see.
Of course, with this one and anything else
I talked about today, I'd love to hear your thoughts
in those comments down below.
Also, while you're at it, if you liked today's video,
I would love if you took a second to hit that like button.
Also, if you're new here,
you want more of my dumb face in your life every single day,
hit that subscribe button,
ring that bell to turn on notifications.
Also, if you're not 100% filled in,
maybe you missed one of the last two Philip DeFranco shows,
you can click or tap right there to catch up.
But with that said, of course, as always,
my name is Philip DeFranco.
You've just been filled in.
I love yo faces and I'll see you tomorrow.