Big Technology Podcast - Reddit Revolts, Media Armageddon, AI Fakes In Politics — With Sara Fischer
Episode Date: June 16, 2023
Sara Fischer is the senior media reporter at Axios and media analyst at CNN. She joins Big Technology Podcast for our weekly discussion of the week's tech news. We cover: 1) The rebellion at Reddit ... 2) Rampant media job losses 3) The push of ad money to retail 4) Whether we should share our personal information inside ChatGPT and other LLMs 5) Political misinformation via AI-generated text and images 6) The cost of always trying to distinguish real from fake 7) The White House's war on junk fees.
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
We have a great guest for you today.
Sara Fischer is here.
She is a senior media reporter at Axios and a media analyst at CNN.
We have so many stories to get into the big Reddit strike,
what's happening with Google, the latest on AI.
And of course, one of my favorites, are we going to end junk fees in the U.S.?
I think the time has come.
Sara, it's so great to have you on.
Welcome to the show.
Thanks for having me.
Great to see you.
Okay, why don't we start with the story of Reddit?
So the company has been dealing with a large strike from many of the moderators of its subreddits.
People are saying, okay, you know, maybe this is the end of Reddit
as we know it. I know you've been following the story closely.
What's your read on what's happening and where it goes from here?
Yeah, it's a good question.
Reddit is so important to culture in the U.S.
Oftentimes, when I'm Googling things
and I can't find an answer to something specific,
a Reddit thread is coming up in my search engine.
So it's a huge part of how we get and access information
and how we develop communities online.
If you're somebody listening to this and you're like,
well, I'm not on Reddit,
chances are somebody in your life is.
They have 57 million people that use that site daily.
They have over 1.2 billion subreddits, or, like, communities per topic.
And at any given time, you can have over 100,000 of them that are active.
So the fact that there's a huge blackout on Reddit is a massive story for the internet
and for big tech.
My read on it, Alex, is that this is going to be a pretty important cultural issue for the
company.
Essentially what happened is Reddit CEO, Steve Huffman, said that they would start charging
for back-end access to its data.
And while a lot of people use that backend access for fun, silly things, like creating an app for reading Reddit threads on your phone or things like that, why he did it is because there's a lot of other big tech rival firms that use tools to scrape Reddit's data.
And then they're using that to train their AI algorithms.
And so he essentially said, look, we're going to get paid if you're going to do that.
Unfortunately, that sparked a huge uproar in the Reddit community.
We now have thousands of Reddit subreddit communities that
have made their channels private.
So essentially they are in a blackout period
until they can see if the Reddit CEO will change his mind.
But it does not seem like he will, Alex.
And I think that's where this cultural tension
is going to come about.
For a long time, Reddit was not seen as this big commercial player
like Meta and Google.
It was seen as a community that really cared
about the people who used it.
And so now we're seeing this company, as it approaches
a potential IPO, starting to really side with its business
and prioritize that, potentially, over some concerns from its community.
Now, here's the thing I don't get.
It seems like both sides, and this is such a typical fight in 2023,
where you have both sides.
They're basically incinerating each other, right?
The moderators are destroying this platform that they know and love,
and the owners of the company are trying to make a profit,
but driving away so many users.
I just don't really see this conflict as a very productive one.
Maybe I'm seeing it wrong.
Maybe there is, you know, reason for, you know, the moderators in particular to want to go dark.
Obviously, Reddit's a site that's gone dark in the past to try to get what it wants in terms of lobbying for specific tech regulation or against it.
But this one just seems to me like it's just going to end up destroying something the moderators love and also blowing up the site for Reddit.
Where am I wrong there?
I don't disagree with you.
And I think that the CEO's new announcement this week that basically members of the community can vote out.
their moderators, you know, kind of, again, creates that cultural tension that I think could be
damaging to the business. However, one of the things that Reddit has not provided is they must
have done some sort of internal audit trying to figure out what money they could make by charging
big companies in particular to be accessing this data. And it must have figured that that's a
pretty big number. Remember, Alex, Reddit is an ad-supported platform, and we are in one of the
slowest growth years for digital advertising that we've seen in history. So I can understand why a
platform like Reddit, especially because again, they are looking to go public via an IPO. They filed
confidentially a year and a half ago at this point, would want to increase revenue.
I don't know what the calculation is. They haven't provided those details, but I think certainly that
has something to do with it. You know, you have to be making some money if you're trying to file for an
IPO. Of course, the advertising slowdown is real. I've definitely
experienced that in the newsletter. Like, last year, forget about it, I was selling every ad.
This year, I'm, like, begging and pleading with advertisers to come aboard with the newsletter.
There has been some discussion of like the thing that the moderators here are most angry about
is the fact that client apps like Apollo, which is one of them, have been basically forced to
shut down because operating on top of Reddit is going to be too expensive. Then you have the other
uses, the crawling for AI training. It makes sense to charge for that. Maybe not for this tool
that's made your platform so valuable, like an Apollo, right, which users want to use and actually
contribute to the forum. Now, it obviously seems like Reddit still wants money from the client
apps and wants to force people to its app if the client apps can't
make it. But why not carve out an exception, right? Carve out an exception for these client
apps, the apps that make Reddit more usable, so you're actually going to be able to thrive,
even though you're dealing with, you know, trying to find a way to make money.
It's a great question. And it again goes back to, I don't think we have all the information.
For all we know, Reddit's trying to build its own, you know, sort of more readable apps.
And that could be part of why they don't want to give access to these clients.
They have, I want to note, given some carveouts. They've said folks like researchers will still
be able to get API access to their data. And researchers, by the way,
the reason they would want to get access to Reddit's data is to monitor things like
misinformation, cultural trends, etc. But they have introduced this policy, which is very blunt
on the commercial side, you know, basically grouping in all of these use cases together. And to
your point, Alex, we don't quite understand why. If they were more, you know, if they have information
about why they're doing it, I think that would be critical. In the past few days,
they've started to put out more info. And they're talking a lot, I know, to their community
moderators about why they're making these decisions and what sort of factors go into it.
But what I haven't seen yet is like the raw business case.
Again, they're a private company.
It's not like they need to disclose it.
But I think it would help folks like you and me who are analyzing the situation and people
in their community understand the rationale a little bit more.
Yeah, it kind of strikes me that the CEO of Reddit, Steve Huffman, you know, I've heard
about him, but you're really starting to see how this guy runs the company.
And the people who are closest to it just have this visceral,
real hate for him. And I think that's probably, like, not something you're going to want to
foster if you're running a forum company. Yeah. And I'll just say he's a co-founder. He's been there
for so many years. This has been a breaking point. I don't think the community viscerally disliked
Steve beforehand. I know as a reporter, I found him to be a very compelling, honest, and open
executive to work with. But I think this is just a huge milestone in Reddit's growth,
again, ahead of a potential IPO, that's rubbing a lot of folks the wrong way. Well, one guy that
definitely does not like Steve Huffman is Ryan Broderick, who writes this great Substack
and is one of my former colleagues at BuzzFeed. His Substack is called Garbage Day.
And he wrote a piece commenting on this that I thought was really interesting.
It's called Platforms Don't Really Make Sense Anymore.
And he's talking about this idea, and I think there's some truth to this, that, like, the big, broad
public forums, the things like the Reddits, the things like the Facebooks and
Twitters, have actually declined in relevance as people have gone on and started to move into
group chats,
for instance. So let me read you a piece of it, and I'd like to have your perspective on it.
So he says over time as more platforms appeared and consolidated actual internet real estate,
we came to rely on them working together in some capacity. And though they never want to acknowledge it,
these platforms have also been relying on us to give them content. Now users are revolting.
Ad revenue is beginning to lag. Younger users are moving away from social networking
and gravitating toward Netflix-style entertainment hubs like TikTok.
and dark social and AI are replacing the feed interface with the chat window. So platforms are panicking,
because they were always middlemen. They were always marketplaces, and without a market, they aren't
really anything. What do you make of that? I think to an extent he's right that the social marketplace
has died down, and so they all need to pivot out of being just social companies. But they've found other
marketplaces that are very successful. So let's look at Meta, for example. I think where you're seeing
Meta lean in is that they are a marketplace that connects consumers to businesses globally, and
at an incredible scale. And as a result, their business opportunity, being this sort of
closed, walled marketplace for commerce, for business-to-consumer communication, they're leaning
into that. Their business opportunities are increasing there. What are they doing? They are
basically displacing call centers, you know, and making it so that you can do a lot of back-end
messaging relatively quickly with a lot of these businesses. So I think for them,
It's not that they don't have a role to play.
It's that the role that they have to play
is gonna have to shift into marketplace businesses
that are not social, but they're more commerce
and selling of goods, or, to the point that you just made
about Google, it needs to be the exchange of ideas
and helping to better clarify inputs and outputs through AI.
I think for a company like Reddit,
they're interesting because they are inherently conversations.
So how do you pivot out of a closed walled-garden
platform that's just conversations into other things?
They're experimenting.
You know, they bought a video platform.
They bought Dubsmash.
They kind of folded it into their infrastructure.
And I think companies like Twitter are trying to figure out the same thing, right?
Twitter's experimenting much more right now with video, live streaming.
How can they be more of an entertainment hub akin to a TikTok?
So I agree with Ryan's premise.
I definitely think the social walled-garden play is probably over.
You need to now become the walled garden for a marketplace of, you know,
entertainment content, commerce content, et cetera. And I do think there's still a place for it there.
Right. It's just that you can't have so many platforms do that. Like this whole idea of Twitter
adding more video, like is it trying to become some sort of hybrid between Twitter, YouTube and
TikTok? You eventually just kind of lose focus, don't you? You can and you do. I think where Twitter
has a very interesting value proposition is it's the place you go for real-time events and conversations
around them. So when they're leaning into video, they're leaning into announcements,
They're leaning into live feeds of sports and entertainment and award shows.
They're going, I think, to lean into a lot more in this election cycle of like live
announcements and things that are happening there and then all the conversation around it.
So for them, you're right, you can risk becoming distracted.
I see where it works for them.
But a great example of becoming too distracted is meta.
Like they recognize that the social marketplace was not going to work and they leaned into
way too many things.
I mean, they launched a newsletter platform for journalists.
They launched a podcasting and live audio app.
All that's now been shut down because now they recognize that they really need a hyper focus
on what is actually working, which I think for them, they're hoping is going to be this, again,
the connection between consumers and business as it pertains to commerce.
Yep, year of efficiency.
And by the way, with the Twitter stuff, like, the fact that they're going to, you know,
it's a great point that you make, that they're going to focus more on announcements and
live stuff, right? Live video, if you do it right, has been something that's compelling,
although you really need to be the best at it.
But the fact that they have Linda Yaccarino,
who'd come from NBC
and is used to selling TV,
that might help bolster it.
By the way, so speaking of advertising downtrends
and trying to be meaningful
conduits of information,
I was going to say this for later,
but it's a great moment to bring it up.
You tweeted an astonishing stat this week
looking at how many jobs have been lost
in the news business this year.
I mean, so many.
Right. So you say the media industry has announced at least 17,436 job cuts so far this year.
And that's the highest year-to-date level of cuts on record, worse than the outset of 2020.
I mean, we've definitely seen the fact that there's been cuts, you know, across my old employer, BuzzFeed News, inside Vice, inside, you know, Insider, all over the place.
CNN, right? Why is this year so bad?
The ad market is brutal.
That's what I would mostly attribute this to.
And in addition to that, you had a lot of consolidation
that drove debt.
And I think a lot of companies are kind of looking to cuts
to manage some of that.
Obviously, you don't typically cut people to manage big swaths of
debt. You, like, divest platforms and things and big businesses.
But I do think some of these cuts are tied to that,
particularly when you look at some of the big newspaper mergers.
We've had a lot of cuts coming out of Gannett,
and that company was merged with GateHouse,
same thing with Lee Enterprises. I've been writing a lot about their cuts. Those were
mostly, I think, earlier this year. So that's a big part of it. The ad market sort of
consolidation and paying down debt related to it. I also think there's a little bit of
subscription fatigue. In the Trump era, a lot of these news companies made a lot of money on
subscriptions, and now subscription revenue is down. You know, the Washington
Post had 3 million subscribers in the Trump era. A source tells me that's now down to 2.5 million.
The publisher is out this week.
Yeah, exactly.
The publisher and CEO will be stepping down in August.
So there's a lot of turbulence that's leading to cuts.
But what is astonishing is what you just mentioned at the top.
This is more than what we had at the outset of the pandemic.
And we put so much time and attention towards how do we provide relief for the news industry.
I mean, we put up PPP loans.
We made news companies eligible for them.
And they took them.
I mean, it was extraordinary how much we paid attention.
And the reason I wrote that story this week
is, like, I don't think we've collectively been paying attention to how much the fourth
estate has been crushed. I saw a few people tweeting, like, if this was the mining
industry or if this was some sort of, you know, industrial industry, we would be panicked. But right
now, with the news industry, you don't see that same kind of moral panic on the regulatory
and, sort of, lawmaker side, at least. Yeah, it's pretty wild. I mean, one of the things I wrote in
my notes is, like, maybe it's the design. Now, I know that's kind of facetious, but, like, you know,
you go to these websites, you take off your ad blocker, then they, like, reward you by papering over
all the content with, like, just so many ads. It makes it impossible to read. And maybe that's
an old-man curmudgeonly take on this thing. But it is interesting how the experience has degraded.
You know, they've given more real estate to advertising, even as they're making less money
from advertising. But, because we both come from the business side, like, you were on the
business side of a publisher, I was buying ads in ad tech, so we know how this thing works.
But yeah, go ahead. I would just say, economically, just to be clear, the rate of growth
in advertising is slowing, but it's still growing. It's decelerating, not declining. And so I think one of
the challenges is not necessarily that publishers are like rapidly, you know, losing all their
revenue. It's that it's not growing at the clip that they expected it to grow. And as a result,
they have to manage their expenses. So that's why I think you're seeing a lot of these cuts.
To your point about a lot of publishers just jamming ads on the page, we did have a big turnaround in this industry around 2018, when Google said, I'm not sending any of my referral traffic,
and Safari said this too,
if your ad specs don't adhere to this coalition standard.
So that's why you don't have, like, pop-ups and stuff anymore.
I think for most premium web publishers now, the advertising experience is better.
However, I think where it's starting to get convoluted is, we now need to inject ads into everything we do.
We have ads within the middle of our events.
We have ads within our live streams.
We have ads within our podcasts.
And so even though the experience, I think,
and the web front has gotten better in the past few years,
I do think consumers still feel like inundated with commercialization.
Right.
And by the way, like even though the ad pie is growing,
it's not going to publishers, right?
Like the one thriving ad business that's growing at a percentage
that's, you know, reminiscent of what it was like pre-pandemic or even mid-pandemic,
is Amazon. So where is that growth in ad money going? It's going to three places, and that's such a good
point. Mostly retail. Retail is so interesting, right? Yeah. You know, you have a bunch of retailers
who are creating digital networks to sell ads. Name a few of them: the Walmarts and
the Krogers, you know, Kroger and Albertsons, et cetera, Amazon. CTV, connected television,
there's a lot of momentum there. So as eyeballs moved from linear television to streaming, you know,
more digital streaming television ads.
And then, you know, there's still obviously growth and momentum in search and social.
Social and search, actually, are growing a little bit slower than they have traditionally.
And that's just due to the maturation of platforms like Google and Meta.
One area where I'm looking for a lot of growth is any app that has consumer attention,
meaning you're opening the app for a long period of time; suddenly, that's now a place where we put ads.
So outside of retail, delivery services, Uber has a huge ads business,
Instacart has a huge ads business.
I think you should expect to see a lot more growth there.
On that note, let's take a break.
We're here with Sara Fischer.
She is the senior media reporter at Axios and CNN's media analyst.
We're talking about Reddit.
We're talking about the decline in media.
But when we come back on the break, we're going to cover a few themes that we keep coming back to on this podcast.
The latest in AI, Google's warning to its employees not to
put any confidential information into Bard and other large language models.
And then also we'll talk about the end of junk fees, potentially, back right after this.
And we're back here on Big Technology Podcast with Sara Fischer.
She's a senior media reporter at Axios and a CNN media analyst.
First half, we talked a lot about current events with Reddit.
There's something else that's going on, which is that we are having this massive rollout of large language model-based
applications, and the security side is something that doesn't get discussed too often, but certainly is
starting to get way more attention, and it's time to talk about it now. I mean, Google made this very
interesting warning to its employees. By the way, it didn't broadcast this out. This was something that
ended up being scooped by the media. And it said, be careful, pretty much, about what you put inside
these large language models, things like Bard, because it's not exactly safe, right? So I'm
curious what you think this says, first of all, about our willingness
to kind of run and put all of our information inside these models.
I mean, more than traditional search if we're starting to talk to its technology like a friend.
Yeah, that's a huge problem here, right?
Like, when we used to go search for things, we'd ask a question, typically.
Right now, when we're using ChatGPT, we are inputting more than just questions,
but we are inputting the way we want these questions to be answered.
And oftentimes to do that, we're inputting personal information.
So for example, you know, can you write me a college application essay?
I am a girl who is 16, who has a GPA of 3.2, and I'm from suburban, you know, Wisconsin or whatever.
Like now suddenly you're giving them all this personal information about you that previously in search,
the platform could likely infer, but now you're willingly giving it over.
And so what Google is essentially saying to its employees is, like, please be careful when you're giving your personal information, because we quite frankly don't know how these large language models, with AI, how they're processing it, how they're saving it to be able to tap into it later to create even more sophisticated queries. Now, the challenge is, okay, if you're Google, why would you ever admit something like that, right? You're rolling this out on a commercial scale. Why would you ever concede that? And I think the answer is, you know, and
Sundar Pichai has said this, they moved very quickly on this because of the consumer interest.
And now they're trying to balance things like safety and reliability with the commercial
opportunity that's presented itself in rolling it out quickly.
I think a lot of people look at this and they're alarmed and they would have said,
why didn't you move slower?
But the cat's already out of the bag.
That's the problem, right?
Like, is it Google's fault that OpenAI has already introduced this technology and it would
do the same things anyway?
No.
So I think that that's where they're kind of stuck right now.
How careful are you about the stuff that you put in to these LLMs?
I don't put in any personal information.
And I also don't put in any information regarding my reporting.
So, you know, I will use it, I've experimented with Bard and OpenAI, and I'm trying to figure out ways to be smarter about it.
And I'll use it for small day-to-day things.
If I have to write, you know, my mom a quick note about what time I'm getting to the airport or something like that.
Like, pretty, you know, innocuous stuff that I don't care if somebody out there knows.
Sara's mom, if you're listening, she's talking about you.
I'm landing in, you know, Terminal C at Newark Airport. Like, it's fine. I think I'm not going to put in any sensitive information that would relate to my work or my personal life or identity, really.
You won't, like, write, hey, I'm going to write this story, you know, can you write it up in Axios style? Here's, like, a couple of bullet points.
No, because, like, one, if it's a scoop, it's my proprietary information. I don't know.
Well, look, I mean, I've definitely been putting the first few sentences of my stories
in there. And it's been interesting to watch the way that it's evolved. I still think it's a
terrible writer, both Bing and Bard and ChatGPT. I mean, that writing is, like, totally
unusable, but maybe I should be a little bit more careful. And it is interesting. I mean, you know,
the thing is, you could definitely have like people sharing like some like personal information,
like I'm planning a vacation or something in this area and Google getting a chance to see it.
But I think what they're really worried about, and I'm curious what you think about this,
is that people are, this stuff is amazing for code.
And I'm sure right now across Silicon Valley, engineers are just dumping in like trade secrets
and the entire code base of really important apps and products into these large language models
and asking them to refine or change that.
And that's really where you can get into trouble, don't you think?
Yeah, I definitely think there's a commercial risk for sure.
But I think the personal identity risk is even more present because Google as an employer has a right to protect all of its employees' information, et cetera.
And like even more damning than some of your code getting out is some of your employees' proprietary information being abused by your own technology.
Like I can see class action lawsuits out the wazoo as a result of that.
And I think that's a huge driving factor as well.
Yeah.
Did you see this week?
So, speaking of AI, did you see this week there was a photo, put out by the DeSantis campaign,
of Donald Trump kissing Dr. Fauci on the nose?
I did. And we're following AI and politics really closely at Axios. We've
been in touch with the FEC, the Federal Election Commission, the FCC, the FTC, about how we
regulate AI in political ads, how we regulate AI in political discourse. And it's really uncharted
territory. I think it's going to be very hard to figure out where we set boundaries ahead of the
'24 election. We've been talking about the threat of things like deepfakes for years. This feels
like the election cycle where it finally becomes an issue. And Alex, I will tell you, we don't
really have any parameters set, especially, and this is the one I'm the most worried
about, the local broadcast level. Because of the way our regulations are designed, local broadcasters
cannot edit, they cannot fact-check, they cannot deny political ads from candidates, and obviously
they have to give equal airtime. So now what happens when you have an ad that comes in using
AI? As a local broadcaster, again, you're not really technically supposed to fact-check
or edit these ads, but you also have a duty in terms of adhering to other advertising
regulation writ large, like false commercialization, which is when you put out an ad that's
demonstrably false and would cause you to buy something under a false
understanding. I think they have no clue what to do. And I don't think their policies
have caught up. And I'm hearing from campaigns who are annoyed that they're submitting ads and
they're getting rejected and they don't know why.
And so this is going to be a huge storyline for the 2024 election.
Or will it?
Because, you know, another thing that I saw in response to this. First of all, like, in the run-up
to the 2016 election, something like this would have been, you know, front-page news,
right?
Everybody would have been covering it.
I don't think anyone really cared yesterday.
And in fact, when I shared it, I know that someone who probably saw my tweet, you know,
tweeted in response, breaking: politician lies
in ads, like, facetiously being like, we don't care anymore. And it is interesting. It
sort of seems like we might have gotten to this point, and maybe I'm wrong, but there's been so
much misinformation, and so much talk about misinformation, that, you know, people don't really seem
to care anymore. It just doesn't seem to be an issue that's animating people's interest.
Maybe I'm wrong. What do you think? I think there's going to be a select few areas of our world
and life that we're going to deeply care about. And Google has kind of suggested and hinted at this
in its latest ads policy update, or actually this might have been a few updates ago,
they said very explicitly, like, please use extra caution when you're looking at results
around personal finance and health, because those are the types of areas where consumers
will have zero tolerance for misinformation.
If your kid swallows a bottle of nail polish and you go to Google, what do I do, what is
the poison control, local poison control number, and you're getting misinformation, the consumer
will revolt against those platforms. So they're going to have incentive to definitely figure out
what's accurate and what's not. Same thing for personal finance information and health.
When it comes to politics and news and information, you're right. I think a lot of the consumer
base doesn't much care. They assume so much of this is lies anyway. But where they will care
is if we have an instance, like what happened a few weeks ago, where there was a deep fake,
an AI-generated image of the Pentagon, basically, being on fire, that caused the stock market
to drop. That's when consumers are going to care. If it's national security
related, if it's related to health and safety, if there's a deep fake image on the news that's
being circulated of a tornado about to hit your neighborhood, I mean, you might make decisions
around that. You might go run and try to pick up your kid from school. So consumers are going
to demand accurate information and seek it. It just has to be kind of life or death,
really. That's a great point. And, like you say, it could also impact people's
pocketbooks. So, like, I feel like every day there's a new story of another financial bot that's
going to give you investment advice. And my perspective on this is just, like, okay, if anyone releases
these bots, whether it's J.P. Morgan or whoever, if that bot says anything other than
invest in diversified index funds and wait, there's going to be serious lawsuits.
Where do you think this is going to go?
I agree. I think a lot of AI regulation will go down and be settled in courts because of lawsuits,
especially when people ask me about copyright law. Current copyright law protects human works.
And so it's unclear as to how it protects works that are developed through a combination of human
intelligence and AI. And I think a lot of those test cases, and we're starting to see this in the
music industry a lot, are going to be litigated in courts that will set new precedents and help
shape the law as we understand it and interpret it. To your point about the personal finances,
there was a story a few weeks ago about, I think it was a National Eating Disorders Association,
using a chat pot to give people information. The information was widely wrong. And so they had to
retract. Yeah, that strikes me. It's so crazy. Yeah, but it gives you a good example and a good
sense of, you know, a lot of entities are feeling pressure from an highly inflated economy,
relying on AI to help outsource and automate human intelligence.
And I think they are very quickly realizing that it's not that simple.
And so I think you're right.
You're going to have major institutions proceed with caution, especially, like, again,
I cover the news and media industry.
You're seeing news companies use AI in the back end to help with, you know, billing and ad
optimization.
But a lot of newsrooms are proceeding with caution because they don't want to get caught up in
copyright issues and stuff like that.
So, you know, I think it's just going to be experimentation.
but big institutions and big brands that have a lot of brand equity to risk will move even
slower than upstarts.
Did you see Nick Carlson and his return, well, not return, but at the end of this Insider
strike, he sent a memo out to the company talking about how they're going to start relying
more on AI at Insider.
And he mentioned straight up, AI can make you a better writer.
What do you think about that?
So I covered Insider's first announcement around AI, how they were creating sort of like a special
AI council internally to experiment. You know, I don't want to dispute or sort of put down his
perspective, because for all I know, AI, when leveraged correctly and when you're trained to use
it correctly, can make you a better writer. But I do think, and the Digital Publishers Association
put out guidelines to its members the other day about this, I do think, even if it makes you
a better writer, the risk is that your work is not protected by copyright law the way it used
to be, which becomes very challenging when you want to monetize the IP to create books or movies
or plays or anything like that. And so, sure, go ahead and use it to help improve your writing
all you want, but you need to do that at your own risk to IP protection. And I think that,
depending on your business model, can be a big deal. If you think about a company like Vox Media,
which makes a large chunk of its revenue from licensing its IP for movies and TV shows and
podcasts, like I would proceed with a lot more caution than if you're just like a clickbait
website that's just making programmatic digital ad revenue and you don't really care about
your IP.
Most definitely.
And I agree that AI can make you a better writer.
I've been using Grammarly for years, and Grammarly has taught me things about the active and
the passive voice that I could never learn from a book, just because
it highlights the text and says, that's passive voice.
And after it did it like a hundred times, I was like, oh, I understand this now.
So it'll be interesting to see how that plays out.
Speaking of AI misinformation, there's a story that you wrote that I want to get to, about AI's hidden toll on our brains. You have this at Axios with a colleague: experts are raising alarms about the mental health risks and the emotional burden of navigating an information ecosystem driven by AI that's likely to feature even more misinformation, identity theft, and fraud. So basically, there could be mental health costs to people who are trying to see all this
content and discern what's real or not. Can you expand a little bit about, you know,
what the study was that you reported on? And how valid do you find it? I mean, just gut-check their
findings. Yeah, no, I wrote that story with Axios's science editor, Alison Snyder. And really
what the experts that we spoke to and quoted are trying to explain
here is that there is definitely going to be an opportunity cost in our efficiency
and in our mental health, in having to allocate so much more mental bandwidth and emotional
bandwidth to figuring out what is real and what is not. Now, if you think about it from the misinformation
perspective, like, I don't think that people are going to be up in a tizzy trying to figure out
whether or not the news article that they read is real. I think what people are going to be
in a tizzy about is someone made a deep fake of them doing something inappropriate or sexual
and they sent it around to a group chat with all of their friends.
Or somebody made a false video of someone pretending to cheat on a test
and emailed it to their teacher.
That is going to cause a huge undue burden of people trying to prove what's real and what's not
in ways that have real implications on their lives.
And there's a huge opportunity cost in our lives.
Like, we're going to have to take time out of our day to disprove these types of images
and this type of content.
And so that's basically what they're saying.
And the other thing, too, is that the more this proliferates, especially without regulation,
the more consumers are going to be at risk of things like identity theft and fraud, like you mentioned.
People are already very stressed about things like digital security and password management.
I mean, how stressed are you every time you have to create a new password for something, write it down,
and then forget about it?
So this is going to put that into overdrive, trying to manage every weird scam text that you get.
Is this actually my delivery package from Amazon?
Is it not?
Every, you know, fake image.
That is exhausting, for sure.
It's exhausting.
And the main point that I wanted to make with that story is there is an opportunity cost for society in having to deal with these problems without having the tools to deal with them.
You become so mentally and emotionally exhausted managing it that in some ways you either retreat from using the tech, which could otherwise make us better and help us advance, or you become distrusting of the world around you.
And that has huge implications for, like, society, democracy, all of those things.
Yeah.
It is amazing that high schools haven't already been filled with people Midjourney-ing their friends and teachers and putting them
in weird positions. I remember when I was in high school, we would use Microsoft Paint to
put the teachers' faces on Disney characters and then hang them up in the hall, and we thought
we were hilarious. I mean, with today's technology, the pranks can really get a lot better.
Yeah, I'm very worried, especially for kids. Imagine if two kids get in a fight, and one kid uses
AI to create a video of the other pretending, again, to admit to cheating on a test. I mean,
that could impact your ability to get into college. So it's very scary.
It is bad for sure. Have you tried Midjourney-ing your face onto different people or into different
scenes? I have not, in part because I don't want my personal data out there. Like, you would have to
input some of your own photos, and I'm just not comfortable with that yet.
Yeah. You're appropriately cautious about this technology.
and I'm needlessly reckless and definitely try to get Midjourney to take my face.
You can upload a photo.
You know how this works, right?
And then say, with this person's face, you know, do this.
And I did try to get it to put me in an extremely puffy coat.
And I think the good news is that you can still tell the fakes from the real, but it doesn't
really seem that far away.
No. What makes you feel so comfortable inputting that information? Is it
just that you think your stuff's already out there, or that the benefit from experimentation outweighs
the risk?
Both, but I think I've really been lulled into this sense of complacency handing my data over
to, you know, the digital world.
And, you know, I think that eventually that will probably bite me.
But up until then, you know, this is probably a ridiculous stance to take.
so I'm even embarrassed to admit it on the podcast.
But I'm just willing to, like, throw as much data out there.
Obviously, like, I have boundaries.
I try to keep my personal life as much as I can off of social media and off of the
internet.
And my banking information certainly will not be entered into chat GPT anytime soon.
But I'm waiting to get hit and it hasn't happened yet.
So that's sort of where I stand.
Am I crazy?
No, you're not crazy.
I think that's calculated risk.
So all good.
Yeah.
I'm just...
Okay, the last question.
I just saw that my computer's out of battery, and I'm like, ugh.
All right.
So let's just go.
Last question.
I want to hear your perspective on the fact that the White House is making companies like Ticketmaster disclose the junk fees up front.
So you're going to be able to see Ticketmaster or StubHub fees all together when you look.
It doesn't help that Ticketmaster and Live Nation have had so much regulatory scrutiny over their merger and antitrust dominance in the wake of the Taylor Swift
fiasco. But I think this is bigger than just those ticketing companies. Airbnb has been the subject of
a lot of complaints here. The White House is listening to consumers getting frustrated, especially
in this highly inflationary environment, about what they're actually paying. And they don't
like deception. I will say, as a country, consumer deception, which is mostly under the FTC's purview, is
something we take pretty seriously. And so I think it makes sense that they're tackling this.
In terms of like why they're doing it now, like I said, I think it's the regulatory environment around Ticketmaster and Live Nation.
I also think in the post-pandemic world, a lot of people are starting to actually go out more.
They're buying tickets.
They're buying movies.
They're buying vacations.
And in the pandemic, we weren't so focused on hidden fees because what were we buying?
Now that's like bounced back in the other direction.
And so consumers want a little more protection.
Yeah.
You have any fun concerts you're looking to hit up this summer?
No, none, I feel like.
Are you not a music person?
No, I like music.
I actually just said this on a podcast.
I like music, but I'm trying to be smarter about my budget.
Anytime I can go to a free concert, I'm there.
And so you and I go to all these work conferences with concerts.
Like, I'm going to Cannes next week and, like, you know, all of these different musicians are going to be playing.
And I'm like, I'll get to see them for free.
It's fine.
Yeah, I'll never forget.
Sorry?
This summer, you know, I might go to the Dead & Company show at Citi Field next week.
But I have a busy week, so we'll see what happens.
But my last word is, I'll never forget seeing LCD Soundsystem,
one of my favorite bands, play at Google I/O a few years ago.
And they spent the whole time talking to the crowd about how they were grateful that they were there,
even though it's not their thing.
And everybody seemed pretty receptive, except for the engineer sitting in front of me,
not paying attention to the show, and coding the entire time from the third row.
Sara Fischer, thank you so much for joining.
This was so much fun.
Great having you on the show.
Good to see you.
Thank you.
Thank you.
Please come back again.
All right, everybody,
that'll do it for us here.
Thank you to Sara.
Thank you, Doug Gorman,
for doing the titles here on LinkedIn.
And thank you to LinkedIn
for having me as part of the podcast network.
We'll see you all next Wednesday.
An interview with ServiceNow CEO, Bill McDermott is coming up.
So hope to see you then,
and we'll see you next time on Big Technology Podcast.