Big Technology Podcast - How Ad Dollars, And Some AI, Might Restore Our Shared Truth — With Vanessa Otero
Episode Date: September 27, 2023. Vanessa Otero is the CEO of Ad Fontes Media. The company rates publications based on their biases, and allows advertisers to concentrate their spending in news media that may disagree, but isn't so wildly biased it loses rooting in reality. Listen for a conversation about the ad industry's broad defunding of news, what it would take to return that money, and why artificial intelligence might help scale the efforts of Ad Fontes' human news raters. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
An advertising CEO who wants to restore our shared truth by directing ad dollars to publications with a reasonable amount of bias joins us right after this.
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
We're joined by a great guest today.
Vanessa Otero is the CEO of Ad Fontes Media, which rates publications on their bias
and lets advertisers buy ads based on those ratings.
It's one of the more fascinating companies
I've come across in my years as a reporter,
and I'm so excited to bring you this conversation today.
Vanessa, welcome to the show.
Hey, thanks for having me, Alex.
Thanks for being here.
Let's start here.
It's clear that our shared truth has kind of dissolved, right?
There are people in the U.S. and all over the world
who can look at the same news story
and either see it completely differently,
with a different set of facts,
or maybe even some folks on one side of the political
aisle won't even know that a story is happening, while another side is going to be fully
tuned into it. So I'm curious from your perspective, how did you initially realize that this
was a problem because you're trying to solve it? And what can we do about it? Well, this all started
for me back in the 2016 election. And the way that people would fight about news and politics
on Facebook in particular, back when there was lots of news shared on the news feed, people would
share really biased and unreliable sources with each other. And I noticed that people couldn't
convince each other of their points, right? Like, people would just completely discard the media
from the other side. And it appeared to be this function of the fact that we had so many more
news and information sources than we ever had before. So it was really easy for people to
get into their own little filter bubbles. I mean, in the years right before that, that's when
that term was really coined. And like you said, there's no shared truth. I noticed that
folks would have divisions because of both polarization, but also because of, you know,
what people broadly call misinformation. Like, it makes sense that people on the left and right of
political divides don't see eye to eye, because they don't read and hear the same
things. But misinformation is like that too: if you have one set of folks that believes a certain
set of facts and others just don't believe those facts, that creates division in that way as well.
So, you know, what this all started out as was, you know, an infographic, just to plot out a few
dozen of the news sources that are out there to show that there's a difference between stuff
that's highly reliable, stuff that's okay, stuff that's really problematic, things that are
left and right, and things that are even more so. So I am kind of curious, like, okay, these are
big things that you're going to tackle, bias and misinformation. Like, how do you even begin to
consider, like, doing this in a way that's not politically motivated? The answer is you have to get
super granular.
So for us, that means human analysts from left, right, and center looking at individual articles, individual episodes, individual TV and podcast content, in order to look at sentences, headlines, graphics, all the things that make it up, to come to a rating on that.
You know, maybe the idea is, and I think this is what your idea is, right?
You know, don't eliminate the idea of biased publications or publications that reflect one
political standing or another, but sort of concentrate the ad spend into those that still,
you know, believe in the shared truth, versus are extremely skewed. Is that what
you're trying to do? And how does that happen? So when we first approached ad tech companies
and brands and publishers,
the whole idea was, you know,
we want to help it change the incentives
in the media ecosystem.
And we thought that the brands' biggest use
for our data would be to help them avoid
extremely polarizing and misleading
and inaccurate content because it's bad for society
and also it's probably bad for the brand, right?
But when we started approaching more and more brands,
we found so many of them
would tell us like, well, that's not really a problem for us because we just don't advertise in
news anymore at all. And to me, that seemed even worse, right? Because you have this
worst of all possible worlds, where the misleading, highly polarizing content is fairly cheap to
produce, right? You can have a person or five with a website and you can
churn out just rage, clickbait material pretty easily and get enough like programmatic ad dollars
to stay afloat and you're great. But then the high quality investigative reporting,
the editorial newsroom stuff, that's expensive. So if you have enough brand pulling away from
that kind of content, then you see this degradation in our media landscape where good journalism
suffers and the sort of garbage out there thrives. So that's what we're really trying to
change in the ecosystem. Right. And you wrote an op-ed recently in Poynter. And you said this one sentence
that to me really stuck out, which is that brand safety, this rush toward brand safety is misguided.
I mean, it seems to me logical that brands would want to put their ads on content that is
not controversial. So why do you think it's misguided? Yeah. So brand safety as an industry
only really got started, you know, within the last 10 or 12 years.
I think the first big moments in this brand safety movement were around YouTube videos
where there were terrorists beheading people and ads were showing up next to that kind of content,
like this user-generated content.
So, you know, companies rushed in to provide solutions around, oh, let's not show up next to things
that are violent.
Let's not show up next to things that are sexually explicit.
Let's try to screen out, you know, bad words,
and do this at scale.
And it sort of grew and grew and grew to this place
where it's like anything that's, you know,
kind of yucky at all, brands are like,
okay, well, that, let's stay away from that.
Let's not put our advertising dollars next to it.
And that came to encompass news.
However, there's not great data.
There's not really any data that shows that brands suffer any sort
of, you know, backlash from consumers if they're advertising in high-quality news.
An example I give: you know, the Wall Street Journal, a few months ago, they had an
exposé about Instagram and how, you know, sexually explicit content with minors was showing up
on Instagram. And it was like this investigative reporting that was, you know, pretty important.
And in the Wall Street Journal, you know, they have embedded ads
from lots of big brands, right?
And there was no backlash to those brands for appearing adjacent to content that
was reporting about this unpleasant subject, right?
No one said, oh, that advertiser, they support, you know, child porn on Instagram.
No one says that, right?
But the brand safety tools that exist to keep folks away from news, they're so
broad. So many brands use keyword blocking lists and category exclusions in order to stay away
from anything that's, quote, controversial, or quote, negative. And it's just swung too much in that
direction that it's really hurting news publications. And you say that more than 80% of advertising
has evaporated from the news business since 2005 as print has collapsed. So how much of this
money moving away from the news business is due to the collapse of
print and how much of it is due to the fact that advertisers just don't want to be on news
stories anymore?
Well, it's a combination of a lot of things, right?
A lot of things have changed in the news and technology landscape since 2005.
I mean, you have the rise of, you know, Google and Facebook.
I mean, so much spend that was going towards news moved over to search and these walled gardens,
right?
So it's not just that brands have moved away from news, but the brand safety movement
has certainly not helped.
And it's ironic, because on, you know, social media platforms, brand safety is very opaque.
There are really no guarantees on it.
So it's ironic that brands would say, well, we don't want to advertise next to news because
it's not safe,
yet they'll just put that money into Facebook, X, YouTube, where there's really not a lot of insight
for those brands into the brand safety on those platforms.
Okay, so let's dig in a little bit into how your team rates.
You know, it's kind of interesting.
You brought it up that they look at sentences and they rate based off of factual integrity
and there's someone from the right, left, and the center there.
but what is the actual process?
Like, how are you really able to tell if a story is,
like, right-, left-, or center-biased?
For instance, the New York Times skews a little left
and the Wall Street Journal skews a little right in your chart.
They're both, like, within the respect-of-fact range, obviously.
Well, I guess maybe not obviously.
People might have a bone to pick on that, in the respect-of-fact category.
So how do you, like, rate each story?
Well, the overall news source ratings, like you mentioned,
New York Times, Wall Street Journal, being a little to the left and to the right, respectively.
Their ratings are based on a representative sample of content. And for each one of them, we rated
hundreds of articles manually over the last several years. And so for each article we rate,
it's a panel of three folks. So, I mean, it's as work-intensive as it sounds.
All day, every day, in shifts on Zoom, you have panels of three analysts at a time.
just going through article after article after article, episode after episode. And they're looking at
certain factors for reliability and certain factors for bias. And it's akin to, like, a grading
rubric for a paper, like an AP test or something, or if you're, like, trying to grade
something that's inherently a bit subjective like gymnastics or figure skating. We're looking for
certain things. And once folks are trained in our content analysis methodology, they can
systematically look for those things and arrive at pretty consistent ratings. What are you looking
for? So the subfactors for reliability are, you know, one, the headlines and images, like how
well those match the story. Two, expression is a really big one. So how it's expressed:
as fact, as analysis, or opinion, or worse. And then the main one,
the one that folks think of immediately, is veracity, like likelihood of veracity. And this is the one
that trips people up philosophically because they're like, well, you know, how do you tell what's
true? Look, there are things that are truer than others, right? And there are things that you
can be 99% sure. There are things that you can be like 60% sure. So we're looking for the
likelihood of veracity of underlying claims to the best of our ability by looking at other
primary sources and other reporting on the internet.
Like, we can do lateral reading.
That's like the primary way that folks who teach media literacy teach people how to evaluate
whether something's true or not.
And for bias, we're looking at different sub factors too.
Like there's the language that people use to characterize their issues or opponents,
right? Like, you can use adjectives to describe, you know, a politician as, like,
senile or decrepit or cunning, right? All those matter. There's the terminology that you use
for the positions, like, you know, illegal aliens versus undocumented persons, right? We're looking for
the advocacy of political positions. Some have it, some don't. We're looking at the comparison
between, you know, different articles about the same topic.
So each one of those subfactors is one that we can use to,
you know, triangulate and measure, like, an actual score for bias in a rubric.
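The rubric Vanessa walks through, a left/center/right panel of three analysts each scoring reliability subfactors (headline match, expression, veracity) and bias subfactors (language, terminology, advocacy), could be sketched roughly like this. This is a hypothetical illustration only: the field names, score scales, and simple averaging are assumptions for the sketch, not Ad Fontes' published methodology.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ArticleScores:
    """One analyst's rubric scores for a single article (hypothetical scales)."""
    headline_match: float  # reliability: how well headline/images match the story
    expression: float      # reliability: expressed as fact, analysis, or opinion
    veracity: float        # reliability: likelihood underlying claims are true
    language: float        # bias: loaded adjectives ("senile", "cunning", ...)
    terminology: float     # bias: e.g. "illegal aliens" vs. "undocumented persons"
    advocacy: float        # bias: degree of advocacy for political positions

    def reliability(self) -> float:
        return mean([self.headline_match, self.expression, self.veracity])

    def bias(self) -> float:
        # Signed: negative = left-leaning, positive = right-leaning, 0 = center.
        return mean([self.language, self.terminology, self.advocacy])

def panel_rating(analysts: list[ArticleScores]) -> tuple[float, float]:
    """Average a left/center/right panel of three analysts into one rating."""
    assert len(analysts) == 3, "articles are rated by panels of three"
    return (mean(a.reliability() for a in analysts),
            mean(a.bias() for a in analysts))
```

In this sketch, disagreement between the three analysts simply averages out; a real methodology could weight subfactors differently or flag panels that diverge too far.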
And then you apply those ratings to different tranches of media
that an ad buyer can go in and basically be like, all right, like, give me the Ad Fontes
group that is, you know, let me, I'm going to just pull up your chart.
Yeah.
The ones that, you know, might be middle, skews left, and skews right,
but aren't hyperpartisan right or left.
And maybe within, like, the realm of, you know, opinion or wide variation in reliability, or just, like, you know, mostly analysis, or a mix of fact reporting and analysis, or fact-based reporting.
So is that they can sort of pick like the buckets that they want and then go to an advertising technology platform and say, give me these?
Yeah, exactly.
I mean, generally, for most brands that want to advertise
in news or, you know, do advertise in news, they want to stay towards, like, the top
middle. Not, you know, exactly right in the middle, like centrist. Like I said, there's nothing
wrong with like left and right bias. And there's a certain, you know, there's certain analysis and
opinion content that's very palatable to advertisers and has great audiences. So yeah, we
basically allow advertisers to select and get back into news based on, you know, being on the
higher reliability side and the minimally biased side.
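The bucket-picking flow described here can be sketched as a simple filter over ratings data. The threshold numbers and the (reliability, bias) tuple shape below are invented for illustration; they are not Ad Fontes' actual cutoffs or any real DSP's API.

```python
def build_inclusion_list(sources, min_reliability=40.0, max_abs_bias=18.0):
    """Return source names in the 'top middle' of the chart: reliable enough,
    and not hyperpartisan in either direction.

    `sources` maps a source name to a (reliability, bias) tuple, where bias is
    signed (negative = left, positive = right). Thresholds are illustrative.
    """
    return sorted(
        name
        for name, (reliability, bias) in sources.items()
        if reliability >= min_reliability and abs(bias) <= max_abs_bias
    )

# Hypothetical ratings: skews-left and skews-right sources survive the filter,
# hyperpartisan and low-reliability sources do not.
sources = {
    "reliable_center": (48.0, 2.0),
    "reliable_lean_left": (44.0, -10.0),
    "reliable_lean_right": (43.0, 11.0),
    "hyperpartisan_right": (22.0, 30.0),
    "low_reliability": (18.0, -4.0),
}
```

A buyer would then hand the resulting list (or its complement) to whatever targeting mechanism their DSP exposes.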
Interesting. So talk a little bit about how long the company has existed, how long this
product has existed, and what tangible results have you shown so far from your efforts?
Yeah. I mean, we have been around as a company for a little over five years.
We're a public benefit corporation. So, you know, we're a for-profit company with a stated
public mission. You know, this all started because we wanted to help folks navigate the news
landscape. And that's all the stakeholders in news media, whether it's individual consumers,
you know, just figuring out for themselves what's reliable and biased, whether it's educators,
teaching media literacy. But, you know, advertisers, publishers and social media companies
all have such a huge role to play in this. And our data has really, you know, come into its own
in the last, you know, two, two and a half years. That's the time period during which we've had
dozens of analysts. We have now rated, you know, nearly 10,000 different individual news
sources. So it's a lot of data out there, right? And since it's now available
in DSPs and is selectable, it's... Which is basically, for those listening, a DSP is a place you buy
online ads.
Yes. I'm sorry.
Right. I just jumped right into the ad tech terminology. I know.
It's selectable, you know, in the places where you can buy media. So we're really,
we're really optimistic. We're really hopeful with what we've seen so far with, you know,
advertisers, you know, major advertisers in all the different verticals using our data to
get back into news responsibly, because it's been encouraging, right? Our message has
been, like, well received. Like, people know that they should advertise in good journalism and
avoid the misleading, polarizing stuff. It's just they haven't had good ways to do it before.
So the advertisers aren't coming to you and saying, we want the hyperpartisan, not-true stuff?
It's mostly advertisers saying, we want some, you know, reasonably biased
publications that respect the facts? Yeah, exactly. Like, no one, like, wants to
advertise on the hyper-partisan, like, misleading content on purpose.
Like, I really haven't had any advertisers that are like, yes, our audience is there.
Let's go, let's go after that.
Like, you know, advertisers have like this.
You could.
You could.
If you're MyPillow, you might want to use this stuff.
But anyway, I digress.
People do bring that up.
And sure, they're like, advertisers, for the most part, try to block this stuff.
Like, there's enough energy around.
Enough, like, self-awareness to know that they shouldn't advertise on misleading, extremely polarizing content.
A lot of it ends up there by accident, right?
The ad tech ecosystem is opaque; sometimes it's not that easy to control exactly where your ads run, especially on the internet.
So, yeah, no one's come to us saying, like, yes, we want to target the most extreme stuff.
Do you have a sense as to how much money you think you've brought back into news?
How much total spend has gone through with your data?
And can you name some names of advertisers that are using this stuff?
You know, it's hard to say dollar-wise, but we just launched this
billion-dollar challenge to get advertisers back to news, especially in this
2024 election cycle.
So we're still pretty early. That's a challenge that we issued just within the last
month or so. And it's funny that you ask about particular brands. I'm not going to mention
any names without, you know, prior express permission because when it comes to things like,
you know, political, you know, how biased stuff is and, you know, how brands make decisions
about where they advertise, as you can imagine, some are like a little sensitive. So, you know,
hopefully within the next few weeks for our billion dollar challenge,
we'll have some brands that we can talk about, that will endorse it.
Like, that's what we're actually looking for.
We're looking for brands to step up to the plate and say, yes,
like this is something that not only we do in practice,
but we believe in and want to put our name on it.
So, you know, I want to be, you know,
fully, like, frank about the challenge that this is, right?
So many big brands are afraid of news and of putting themselves out there and saying, like, yes, we're going to draw this line in the sand about reliability and bias.
That's why newsrooms continue to suffer because brands have been so hesitant, especially in recent years.
So then what? I mean, I guess, like, from my standpoint as a journalist trying to see how real this is, like, you know, what should convince us that this is something that's going to actually
take off and lead to a spend that helps restore, you know, helps invigorate publications
that respect the truth and aren't skewed hyperpartisan, helps reinvigorate them financially,
and that this thing can actually accomplish its mission versus something that's, you know,
a cool idea in principle, but might just never take off because of these hesitancies.
We can't, we don't know that for sure. That's the thing. I mean, that's the unknown part.
You know, we're a startup.
We have not yet proven that we can successfully bring back so many brands to advertising in news that it's going to make a difference in the success and monetization of good publishers.
And that's a scary place to be in, I think, for our democracy, right?
Publishers have been struggling with this for a while.
I mean, we talked to major publishers, like some of the biggest names out there, who during
2020 had the coverage of COVID, coverage of, like, the protests after George Floyd's murder, right?
The negativity and news, like everything in the news was negative, right?
And publishers would say to me, you know, our readership went up, up, up, up, up.
Our monetization just went down, down, down.
Because brands were just like, oh, no, we don't want to be around COVID coverage.
We don't want to be around coverage about Black Lives Matter.
We don't want to be around this stuff.
And so I don't have a guarantee for you, for anybody, that my company, as much as people say that they love this idea,
and they say, like, oh, yes, we definitely want to support journalism, and brands support
democracy.
No guarantees, right?
So that's why we're issuing this call to action, this billion dollar challenge.
It's going to take some changes.
Like, the trend has not been good for publishers.
And, you know, the responsibility of brands, it hasn't been there; you know, they have not been
stepping up to the plate, right?
Right.
So I'm an optimist, and I think it can work, but we need people to jump in the boat with us.
Vanessa Otero is here with us.
She's the CEO of Ad Fontes Media.
We'll be back in the second half with some more questions about the business model and maybe what we're going to see heading into the 2024 election.
Back right after this.
Hey, everyone.
Let me tell you about the Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology
Podcast with Vanessa Otero, the CEO of Ad Fontes Media.
Vanessa, you've also written in depth about what you're planning to do here.
So a couple more questions for you about what happens, the impact, you know, should this succeed?
So first of all, you write as you veer off to the left or the right, quality and trust drop precipitously.
So I'm curious, when you write something like that, like, how do you answer, like, the both-sides argument? That people might say, like, well, okay, you know, you might have
some of this stuff on the left and some of this on the right, but the magnitude isn't the same.
So you putting it in the same sentence, some people might not be fans of it.
I'm curious what, I mean, I personally think that, like, yes, there are problems on both sides of the political spectrum here.
But how do you answer that argument?
Well, just because there are two sides to an issue doesn't mean that they're equal and opposite.
Like, if you think about, you know, a case in court, right?
There's always two sides.
And what we're trying to do is, you know, be, you know, judges of the content.
And a judge, you're not asking a judge to be neutral.
You're asking a judge to be fair, right?
What I mean by that is, at the end of the day, the judge doesn't say like, oh, well,
you both have a point, you know, good luck.
You ask the judge to, like, make a call, you know, which side has a better argument.
And the, you know, side with a better argument on a particular issue
tends to be the one that has better facts, more facts tied to their analysis and
their conclusions. You're right that folks on the left and the right will point to the other
side and say, well, they're worse because of X, Y, and Z. And the nature of what's like far left
and what's far right on our chart are different, right? You'll notice, if you look really closely,
especially at our interactive media bias chart and at our web, our web chart versus our podcast chart,
there are more right-leaning websites. There's just literally a larger number,
objectively, of sites that have misleading inaccurate information. It's just like a bigger ecosystem
over there. So folks on the right will point to that and say, well, that just means your chart
is biased because there's more stuff on the right. I mean, I don't think that's what it means.
It's not that there's no misleading content on the left. There's just a lot
less website content that's specifically tailored towards left-leaning, misleading, inaccurate
information. But you go to podcasts, though, and there's a, like, little universe
that's long-form. You see this in video, too.
You can find a lot of YouTube channels that are left-leaning that are conspiratorial, right?
So it's just not the exact same content universe on each side.
So, you know, just because two sides exist doesn't mean they're doing exactly the same thing.
On the right, we do see, while there is more misleading and inaccurate content,
you'll rarely see, like, bad words, for example.
It's something that on the right folks tend to refrain from.
However, on the left, like, really partisan, you know, left-leaning opinion, propaganda stuff,
tons of curse words, tons of, you know,
really vile, personal insults.
So that's not exactly the same thing.
Those are just the things that move things towards the bottom of the chart on each side.
Another argument that folks might make is, well, what happens if this company ends up skewing, you know, some of these big brand dollars, like, to one political side or the other?
So is your goal to proportionally direct ad dollars to, like, sort of reasonably biased right and left sites?
Or like, what happens, for instance, if a lot of the money that ends up coming and, you know, using your data ends up going to left-leaning sites or right-leaning sites?
Is that still a win for you because they're going to more reasonable news or is that something that you want to stay away from?
We want to direct ad dollars towards more highly reliable sites.
And there are highly reliable sites on the left and the right.
So, yeah, and I don't want to, you know, duck this question.
I talked about how, like, the right-leaning sites, how there's more, like, misleading
and inaccurate ones, like, below a low threshold on our
chart. Well, so folks on the left look at that and they're like, ha ha, see, I was
correct. But the chart also shows data that vindicates folks on the right. A lot of folks
on the right say that the mainstream media, like some of the biggest publications,
skew left. And if you look at the concentration of some of the biggest news properties
in our country, they do skew left. They're not sitting, like, right there in the middle
of the chart. And there's not an equal distribution above this, you know, high reliability
threshold. And if you think about, like, how our country is and how content is and how there's just
tens of thousands of news sources out there, it's almost an impossible expectation to think that
there would be equal and opposite news sites across every possible spot
in the chart, right?
There are so many different factors at play.
So ultimately, we just want folks to support,
brands to support the most reliable content,
whether it's left-, center-, or right-leaning.
And I say center because center is a bias too.
Vanessa, is there a risk that this just ends up
directing money to the mainstream sites
versus like the upstarts?
I mean, because it does seem like, all right,
like part of like a proxy of what we've been talking about
is like, you know,
these are the establishment sites.
So talk about your rebuttal to that,
what people would say on that front.
So somebody can score high reliability on our chart if they're like brand new.
You know, they do not have to be around for a long time.
And there are a bunch of newer sites just in the last, in the last few years.
Like, The Messenger is a new site,
you know, that popped onto the scene, and we rated it with a high reliability score.
It doesn't have to be just, like, a big site that happens to be one that's, like, well funded and has some, like, media industry veterans associated with it. Being new doesn't preclude them from being high reliability, but also being small doesn't preclude you from being high reliability. So podcasts, like, we scoured, like, the media landscape, for new Substacks. Like, it can be just a one-person shop, because it's about the content.
So we actually are helping advertisers discover not just like the good stuff they know.
You know, do I need to tell people like, oh, yeah, the AP and Wall Street Journal and like New York Times, like those are good.
People know those ones, right?
We're talking about the ones that people don't know, the new stuff, the small stuff, the local stuff.
You know, there's so much good local content.
There are so many good trade publications that absolutely deserve this kind of spend,
these ad dollars. And we have, like, thousands and thousands of them that we've identified.
Okay. Can this extend to social media at some point? I mean, is this something that, like, a buyer who
wants to buy... let's go full circle. We talked about Facebook. We talked, you know,
a little bit about X. And a media buyer... Like, there's been so much discussion right now in
terms of like, you know, Elon Musk and his, you know, anger towards advertisers for pulling out
of Twitter completely, like, could this be something that an advertiser decides, hey, I'd like
to go to Twitter, but I want to buy mostly, like, the discussion that falls into the, you know,
less extreme areas of conversation, or is that too complicated to work through?
It's actually not. So we actually have data on X, Twitter, handles and publications, because
everything that we've rated on our chart, where they have,
like, Twitter handles and other social media handles, like YouTube, we match it up and tag it,
because the content that somebody produces outside of social media is often very similar,
if not the same, as the content that they put on social media. And, you know, X has the adjacency
controls. You know, they have a tool, for those who aren't familiar, where advertisers,
the advertisers on X, can say,
I do not want my ads showing up next to, you know,
these particular handles or these particular keywords.
We have data that advertisers could use to make those decisions on platforms like that.
And ideally, we'd like to see that,
and we have that kind of data for X.
We have it for YouTube.
We even have it for Threads, even though they don't have an ad product yet.
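A rough sketch of how that might look: ratings matched to handles, then turned into the kind of exclusion list an adjacency-control tool accepts. Every name, threshold, and data shape here is hypothetical; this is not X's actual API or Ad Fontes' data.

```python
def handle_exclusion_list(publication_ratings, handle_map,
                          min_reliability=30.0, max_abs_bias=24.0):
    """Build a handle blocklist from per-publication ratings.

    `publication_ratings`: publication name -> (reliability, signed bias)
    `handle_map`: publication name -> that publication's X handle
    A publication is excluded if it is too unreliable or too biased in either
    direction; the threshold values are invented for illustration.
    """
    return sorted(
        handle_map[name]
        for name, (reliability, bias) in publication_ratings.items()
        if name in handle_map
        and (reliability < min_reliability or abs(bias) > max_abs_bias)
    )
```

The advertiser would paste the resulting handles into the platform's own "don't show my ads next to these accounts" control.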
But yes. Has any advertiser used that yet? Not yet. It's a new
offering, so, you know, we're hopeful that folks will. Because social,
like, as much as, you know, social media is a really fraught place, with, you know, lots and lots
of real, valid concerns about the content that's on it broadly, advertisers sort of can't help
wanting to reach their audiences that are there, and, you know, different people see different
stuff across those platforms.
Let's end with, you know, a small discussion on AI.
Yeah.
So you have dozens of people in your employ that are rating these stories.
You mentioned it's like a very painstaking process.
Can artificial intelligence be something that we could reliably use to rate stories?
I mean, AI comes with its own bias.
So it's kind of interesting.
How do you see AI factoring into the future of your work?
Well, we've been working on.
this problem for a really long time. And to be honest, a lot of folks have for a long time
tried to. This holy grail is like, oh, what if we could just identify misinformation with
AI? Wouldn't that be so great? Like, we would have none anymore. And the reason that that
hasn't happened is because it's very difficult for a machine to tell the nuance of how
true something is and how left and right it is to degrees.
So, you know, that's why we started with manual ratings.
But, you know, over the years, we've developed a dataset of over 70,000 individual pieces of content that we've hand-labeled.
I'm a patent attorney by background.
I, you know, did software patents in my career.
And I knew that, you know, manually labeled training data is what you need in order to create AI for pretty much anything.
So we have this, the largest in the world, we believe, set of labeled data for this.
So we've developed our own AI models.
So we can actually now score articles at scale for reliability and bias.
And it's quite accurate as compared to our own human ratings.
I say this is something we've been working towards for a long time because, you know,
folks look at the painstaking work that we've done to rate articles and episodes.
And they're like, wow, that's great, but can you do this at scale?
Because in advertising, in social media, and content on the internet in general, there's just so much of it.
So humans aren't scalable enough.
AI is not accurate enough.
So for us, it, you know, I'm really, I'm pleased that we've been able to get to this point where we rate the top news articles with humans every day.
And we rate the rest of the news landscape with AI.
So we're rating now tens of thousands of articles per day to help advertisers, like, reach that scale that they want.
Okay.
I just got to ask you.
There's just one more thing. You know, I already said we're going to sign off.
But, you know, it just popped in my head.
Like, what do you think about the whole argument, like the free speech argument?
And does this sort of infringe on free speech?
It wouldn't surprise you to know that I've thought a lot about this.
So especially, like, when it comes to content moderation
and, you know, Section 230. For those not familiar, Section 230 is a law that basically
allows social media companies to escape liability for either moderating content or not
moderating content. It lets social media companies sort of off the hook. And it seems
that the only real workable way to do anything about misleading or extremely polarizing
content on the internet while still
respecting not only the laws of free speech in this country, but the ethic we have around
free speech in this country.
The only way to respect that is to label content and provide users more of a choice.
So, you know, no one in this country, I mean, this is a broadly shared left and right
sentiment, wants information to be suppressed. People don't want information to be censored.
They want to be able to have choice about information, even if it's abhorrent or even if it's
false. But the reality of like this very overwhelming information landscape is that people
need more information about the information that they're going to consume.
So I view labeling stuff as a solution, like a content rating for movies.
Like, this is G, PG, PG-13, R.
Like, you just have a general idea when you're walking into it, like, what you're getting
into.
You can have the choice whether to do it or not.
So, like, a label of, you know, left and right, and here's how far.
This is fact, analysis, or opinion, or it has some other problems.
Here's some more information about it, and you can make that choice about it.
To me, that's a version of more speech being the solution to your free speech problems.
Right.
Okay, really the last question.
Any 2024 campaigns coming to you to say,
hey, I want to advertise to like this segment of the population?
And if not, do you expect them to come through?
Yeah, we have had some agencies for political advertisements express interest in our ratings.
And we find that really fascinating.
So just to be clear, like, the terms of our data are really explicit that you cannot use our data to
target the most extreme or misleading content, right? It's really fascinating to be able to use our
information about content and how left and right it is because we believe that there's a
very high correlation between like the content people read and their political views,
right? Usually center people read center stuff and center left people will read central left
stuff and center right people will read center right stuff and on and on, right? And it's hard for
political advertisers to target voters at that level of granularity.
And in elections that are decided by tiny vote counts, tiny percentages, it's really
important to reach persuadable folks. So our data is quite useful for identifying who would
be persuadable. And hint, it's folks that read more reasonable content. Yeah. Like if I'm in a
Republican primary campaign, I'm working on that staff, I'm going to come to you and say, get me the,
you know, people that are reading, you know, the based-in-fact, center-right publications, and give
them to me in Iowa. And, you know, then you're really talking about a group that could have
weight politically. Yeah, absolutely. I mean, folks that are like reading very, very strong left
and right, they tend to not be persuadable. It's just the nature of our, you know, very
polarized society right now. But what gives me hope is there are a lot of reasonable folks
reading high quality, high reliability news. And yeah, we want to elevate the folks who are
putting out that kind of work, that kind of journalism. Vanessa Otero, thanks so much for joining.
My pleasure. Thank you, Alex. Thanks, everybody, for listening. Thank you, Nate Guadne, for handling
the audio. Thank you, LinkedIn for having me as part of your podcast network.
We have a great set of shows coming up for you over the next couple weeks.
We do hope you stay tuned and we'll see you next time on Big Technology Podcast.