A Bit of Optimism - Trust with Jimmy Wales
Episode Date: June 20, 2023
Information is power. But how do we adapt now that most of the information we consume comes with a heavy bias? In an ironic twist of fate, it’s Wikipedia — the encyclopedia that anyone can edit — that has done one of the best jobs maintaining public trust in our polarized era. I sat down with the founder of Wikipedia, Jimmy Wales, in front of a live studio audience to explore how we got here and how Wikipedia’s transparency and vulnerability may be a model for a better future. This…is A Bit of Optimism.
For more on Jimmy, his work, and the Founders Forum summit that hosted our conversation, check out:
https://en.wikipedia.org/wiki/Jimmy_Wales
https://twitter.com/jimmy_wales
https://ff.co/
Transcript
The internet has a problem.
This holder of once trustworthy information has become anything but.
We don't trust social media or even our news if it's on the internet.
But there's one glaring exception to this trend.
Wikipedia.
Wikipedia didn't use to be a trusted source of information,
and these days it's one of the most trusted sources of information. So I sat down with Jimmy Wales,
the founder of Wikipedia, to talk to him about how Wikipedia has been able to increase its
trustworthiness and how we can bring a little more trust back to the internet.
It's a particularly special episode because we recorded it in front of a live audience.
This is A Bit of Optimism.
So this is really fun for me because we're doing a live recording of A Bit of Optimism
at Soho Farmhouse at Founders Forum, part of London Tech Week, which is really fun.
And I get to sit here with the indefatigable
Jimmy Wales, founder of Wikipedia.
The thing that I am fascinated about
is that the rest of the internet,
the big tech companies,
are losing trust.
You know, we don't trust the pictures we see on Instagram.
We definitely don't trust Twitter anymore.
Big media is losing credibility in the stuff that they're writing.
But Wikipedia, though it has its issues and its ups and downs,
has for the most part been able to maintain the public trust since you showed up.
Yeah.
Why? How has that been possible? Yeah, well, I mean, this is
very much on my mind these days. I'm writing a book on trust and thinking about this decline
in trust that we've seen across all kinds of sectors of society, trust in politicians,
journalism, et cetera, et cetera. And really thinking about, okay, what do you
need to do to maintain trust? What have we done that has allowed us to be trusted?
And it's really interesting. We're not trusted because Wikipedia is perfect. There are errors
in Wikipedia. You can definitely find biased articles or things that have been vandalized
and so forth. But I actually think it's in large part the fact that we are so trusting.
We make ourselves vulnerable to the world. Anyone can go and edit Wikipedia. You know,
99.9% of the pages are completely unlocked. You can go and make a change. It goes live instantly.
But there's a lot behind the scenes. A community that's monitoring and checking. And if you do
naughty things, you get banned very
quickly and so forth. And there's a level of transparency as well. So, you know, everybody's
stumbled across a Wikipedia entry that says, you know, the neutrality of this article has been
disputed. The following section doesn't cite any sources. So we give you those warnings right up
front, which you think, well, that surely reduces trust, but actually it increases trust. People are like,
oh, okay. So they're being open about how this works. I always joke, I wish sometimes the New
York Times would put a banner at the top saying, we had a big fight in the newsroom as to whether
to run this, but we think it's strong enough, but just be aware there's some dispute about the
neutrality of this. That would be useful. That would actually be a great idea because then you would go into it knowing that there's
disagreement before you start reading it. And you get to be a part of that rather than just
have it thrust upon you. But Twitter, in some way, shape, or form, has that openness where
anybody can contribute. And I know it's slightly different now because they have vastly fewer
people monitoring. How many people do you have monitoring
at Wikipedia versus what Twitter had? It's hard to make a comparison because it is a bit apples
and oranges. Although, as someone pointed out quite amusingly, you can compare apples and oranges.
So in a sense, if you look at the staff at the Wikimedia Foundation who are tasked with monitoring all the content as it comes in, that is zero.
No one is doing that.
We do have trust and safety people, but they really only take on issues that get escalated to them that are beyond the capacity of the community to deal with.
So, for example, a persistent vandal across many, many languages
is kind of a pain in the neck.
Someone who's posted a suicide threat on a talk page in Wikipedia,
the community's not really in a position to call the authorities
and deal with that.
So we do have a small staff.
Those kinds of things are quite rare.
One of the big differences is, like, for social media,
and I include in this Twitter, Instagram, Facebook,
YouTube, you know, there's sort of a box where it says upload your creativity, or what are you
thinking, and so forth. So it's very much, by default, a wide-open free-speech forum, which is
going to lead to people uploading horrific, offensive content. There'll be some of that
no matter what.
Whereas Wikipedia, there is no place on Wikipedia where you're supposed to just
upload whatever is on your mind. Even the talk pages of every entry are about
improving the article. So if you go to the Donald Trump talk page and you start, you know, ranting
about how great he is or how horrible he is, the people there are going to say, hold on, what does this have to do with the article?
We're here to talk about improving the article.
And that actually cuts out a huge swath of community management issues and problems.
It's just like, we don't have to worry if you have a horrific opinion.
You're not really supposed to come and post your random opinions at all.
But another answer to your question is, how many people do we have monitoring?
Exactly the same number of people as we have editing at all.
Because everybody's involved in an open community process.
So can any of those formerly trusted places, all the social media, and even the big news media,
can they become trusted, in your opinion? Or will they
actually have to die off, become something else, and be replaced? Well, I mean, if
we're talking about the news media, I think they have to become trusted again, yeah,
because without a trusted... But how? Well, I mean, if they bring you in, yeah, and say, okay,
there's a job available,
you can actually run CNN if you like, how do you fix that? Yeah, well, I think
there's different answers to this. So I think one of the things to look at is that the business models
of journalism have been broken for some time, and we're starting to see some resurgence of,
for example, subscription models, which I think is very healthy, very good. Because part of the
problem is, if your business model is showing as many ads as possible, getting as many clicks to
your site as possible, it does drive you in a certain tabloidy direction of inflammatory content.
You want it to go viral, with
everybody all upset and debating about what you've written. And that moves you away from
a kind of Walter Cronkite-esque, like serious, like we're just going to tell you the facts and
we're going to be very clear about it and we're going to acknowledge where we're not sure and so
forth. And I do think you have to get back to that. I mean, I remember during the Trump administration, I read an article in the Washington Post.
And the Washington Post was doing some fantastic journalism during that time.
It was not an op-ed piece.
It was a straight news piece.
And I loved it.
But I loved it because it was really a completely biased rant against Donald Trump, which I agreed with.
But I didn't feel great when I finished it.
I was like, that's not really proper journalism.
I loved it.
They were pandering to me; they got me to click on an inflammatory headline that basically suggested he was about to go to jail.
I'm like, yes, let's see about that.
But then at the end, I'm like, you know what?
That was really one-sided and it didn't really advance my understanding of anything.
And that's a problem.
And it's a problem when you don't have subscribers who are like, I really want to pay for this.
I don't think people realize how we got here, too. You know, when the public airwaves, the television airwaves, were first made available to commerce, the government made a deal with the broadcasters
and said, look, you can make money on the public airwaves,
but here's the deal.
You can offer the news as a public service.
And everybody agreed to it.
And that's how you could have people like Walter Cronkite
because the news was never viewed as money-making.
It was always viewed as just a public service.
And it was in 1979 on Nightline
during the Iran hostage crisis
that the ratings on the news skyrocketed for the first time ever.
And all of a sudden, all the executives leaned in and were like, what happened?
And they saw the sensationalism, although it was newsworthy, spike.
And all of a sudden, they started becoming more interested in the ratings of the news. Flash forward a bunch of years, and now they're selling advertising on the news, where the goal is to maximize your ad dollars from the news.
And to your point where you just said, and the only way you can do that is if you make news sensational or incredibly biased because it's not about actually providing the news.
It's no longer a public good.
It's now just a business. Yeah. To an extent though, because I do think there is a business
model around being trusted and being serious about journalism. But is it an ad-based need to click,
need as many eyeballs as possible model? Maybe not. This is why I'm very bullish on subscriber
revenue at the major newspapers,
because I do think then you sort of take a longer view because, you know, if all you're offering is
the same clickbait sort of inflammatory stuff you get everywhere, I'm not going to pay for that. I
can get that for free. But I might pay for, you know, if I feel like, oh, actually now I'm
understanding the world, I'm getting fed not just things I agree with, but things that are
helping me, then I'll pay for it. So I think that's a piece of it that's really important.
But also, I think when we look at, that's just talking about the news media, but we also look at
the ecosystem of social media, where you take all of the problems we just discussed, it just
explodes. The kinds of things that we've seen so many times,
we're here on the YouTube stage. I think YouTube has done a better job than most at recognizing,
gee, if we just optimize like in a very simplistic way for time on site, how many minutes of video
you're watching, we will prioritize things that are not really healthy because people, you know, we're human beings.
And so we kind of follow; everybody turns their head to see what's happened
at the car crash. It's digital rubbernecking. Yeah. Digital rubbernecking. And you know,
that's problematic. You end up prioritizing extremist content, things that are outrageous.
I mean, I've watched a couple, at least, of flat earth videos, mainly just because
I'm like, really? Like, come on, let me see. This can't be real that somebody, and you know,
that's just sort of human. And so YouTube, I think, has done a good job of saying, okay,
we need to think a little bit more in a more subtle way about what are we optimizing for?
But they have the same problem. Like, that's always going to be a difficult problem
in social media that is advertising based. So we're talking about the companies and we're
talking about business models, but then what about us, the consumers? You know, I think,
to some degree, we're all kind of full of it, right? Which is, we say we want unbiased news.
We say we want just the facts, but we don't really, because when we read something that we
disagree with, we dismiss it and call it biased.
If you ask folks on the left to read Fox, even if there's some good journalism in there that
skews one way, they'll dismiss all of it as biased. And conversely, if you ask people on the
right, too. So how do we actually get... We want to read things that we agree with. You know, are we as a society,
or at least an entire generation, so far gone
that, despite what we say,
we don't actually want unbiased news anymore?
So how do we get us back to neutral?
Yeah, so my counter to that is that Wikipedia
is more popular than the top 50 newspapers
in the world combined.
Yeah.
And people do like Wikipedia quite a lot.
They love it.
And I just think humans are multilayered and multifaceted.
We are both.
We both want a bit of sensationalism.
But we also have a more Aristotelian brain as well, where we say, no, actually, I want
to be well-informed.
I want to understand what people on the opposite side are saying.
I want to get their arguments.
And I want to grapple with ideas.
And this is, of course, not everybody.
But I think most people can see both of those things in themselves.
And where we can get better as consumers is to really be very conscious about this, and also to recognize the addictive nature of some of this stuff.
You know, there's a very famous xkcd comic: somebody saying, come on, honey,
it's time to go to bed.
And the partner says, I can't, someone is wrong on the internet. And I was like, well, there's Twitter for you.
It's like, I can't sleep right now.
There's this idiot here and somebody's got to straighten this out.
But a big part of that is the design of social media, the way it's managed and so forth.
So I say this about Twitter in particular. It's like, okay, if you see somebody doing something bad on Twitter, whatever that might be, being abusive to another person, spreading misinformation.
You basically have three choices. So one, you can block them. That helps you. It doesn't help
anybody else. And sometimes it doesn't even help you: if they're saying something horrific about you
and you block them, that's not stopping everybody else from seeing it, which is actually what bothers
you. You can report them, which, if you've ever reported anything on Twitter, even pre-Elon, is not a
very useful avenue. Or you can yell at them, which is the most popular choice. So people yell at each
other. And that's kind of the design of the platform. And there is really no community norms,
community standards where people are debating and discussing and thinking
about how do we do this, which is really like, how do we handle that kind of thing in our
real life social networks? Well, it's complicated and nuanced, right? You've got this one friend
who's unfortunately kind of a jerk to this other person. And so you think about, oh,
do we invite them back again? And, you know, we really don't like having them over at dinner because
they get drunk and they get annoying, et cetera, et cetera.
And we do manage these kinds of conflicts.
And in social media, what I would say is we have come to this point where the model doesn't really have any kind of genuine community controls.
It's a model of, I call it a feudal model.
You know, everybody using the site is a serf on the master's estate,
and the master sets the rules and enforces the rules,
and there's not really much you can do.
You're very powerless.
Whereas at Wikipedia, although I do acknowledge
the problem is vastly simpler,
writing an encyclopedia,
the community manages all of that.
How many people are,
I don't know what you call them, super users,
but the people who are mostly maintaining Wikipedia.
What's that number?
So it depends on how you count it.
So if you look at the number of people who edit at least five times a month,
that's like 70,000 to 80,000.
That's pretty active, but five times a month,
you're not really daily engaged.
And the people who are editing at least 100 times a month is 3,000 to 5,000.
So that 3,000 to 5,000 is the core who really manage things.
And I think that's the big difference, right?
Even if you take the bigger number,
the number of people who engage with Wikipedia,
which is colossal, it's a minuscule number
who are actually maintaining it.
Whereas for all of social media,
everybody, that colossal number, is the group maintaining it.
And I think that's...
It's one of the challenges.
It's kind of like democracy, right?
We don't really want a democracy.
We like representatives.
It actually works better.
Yeah, yeah.
And so Wikipedia is a representative government rather than a complete democracy.
That's right-ish, yeah.
It's an analogy, but yeah, definitely, yeah.
So, yeah, I mean, one of the things, one of my side projects is WikiTribune Social, WT Social.
So it's a pilot project, social network, where the users can trust each other.
And it's really focused on gaining trust from other users and so forth.
Will it work? I don't know.
But the concept is to say, how can we democratize?
How can we broaden out? So rather than simply having a model
where Twitter hires 30,000 people
into very horrible jobs in low-wage countries
to look at horrific content and deal with it,
can we actually devolve that power into the community
in a completely new way?
Where, you know, so like,
here's one of my canonical examples, on Facebook.
So Facebook's done a pretty good job with, you know, these disinformation accounts that have millions of followers, and they deal with that.
I mean, only once it gets to millions of followers, so you could complain about that.
But actually, I think there's a deeper problem, which is everybody's got the stereotype of the racist uncle.
And I always say, oh, my uncles are wonderful.
I don't have a racist uncle.
But we know what that is.
The blowhard uncle who, you know, at Thanksgiving dinner, Christmas dinner, everybody's
like, oh God, you know, Uncle Ralph again, here we go, right? Every family's got one.
Every family's got one, and you just sort of manage it and deal with it.
What if you're that one? Maybe, because I don't know if we've got one, it must be you. Exactly. But he's always banging on about social media and knowledge, you know, what an annoying guy.
But anyway, and then another stereotype is the very sweet grandma, right, who just wants the grandkids to be happy and wants to congratulate and love everybody.
So on Facebook, your annoying uncle posts his sort of bigoted opinions.
And the algorithm notices people jumping in to say, Uncle Ralph, shut up.
And it's like, ooh, engagement, right?
So suddenly, you know, Uncle Ralph has 900 followers on Facebook.
And grandma, who just posts heart emojis when there's a kid picture and says sweet things, doesn't get a rise out of anything.
It's less fun.
Well, it's less engaging. I don't know if it's less fun, but it's less engaging.
What's the rubbernecking?
It's less ad revenue.
Yeah, yeah.
And so then you just sort of get this deep problem where it's like, oh, actually the
people in real life who we all just roll our eyes and don't listen to have an audience.
And the people who are really sweet who we think, oh, yeah, great role model.
I love that she's so supportive to the kids and so on, gets nothing. And so that deeper problem is really a structural design problem
that comes directly from an advertising-based business model, where you just say, oh, well, how
do we get more people looking at more ads? Turns out Uncle Ralph is a gold mine, because he stirs
up controversy and gets people
back to the post. Someone is wrong on the internet. It's Uncle Ralph. And so that deeper problem,
I think, has to be solved by moving to platforms that have a different model.
So the big difference here is we're asking a community to manage an online encyclopedia.
Yeah.
Right? Versus asking a community to manage people's opinions.
It's hard.
It's hard.
Absolutely.
It's a much more complicated question.
Here, Wikipedia requires, you know, citation.
Like, where's the evidence, if you're going to change somebody else's post?
Yeah.
Versus if it's an opinion economy.
Yeah.
Like, I just disagree and think you're wrong.
I'm going to change your post.
So if the algorithm is based on trust, so you've got whatever signals you've got and you're saying, actually, we're going to optimize to show people more trusted content rather than more engaging content.
Well, it's not going to work very well for an advertising-based business model because you're probably going to get fewer page views total and so forth.
But it may create a space that people feel is more authentically enriching to their lives.
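To make the two objectives Jimmy contrasts here concrete, here is a minimal, hypothetical sketch of ranking a feed by engagement versus by trust. The signal names and weights are illustrative assumptions, not any platform's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    minutes_viewed: float   # attention captured, outrage included
    replies: int            # arguments drive this up
    trust_rating: float     # 0.0-1.0, averaged from user "trustworthy?" votes

def engagement_score(post: Post) -> float:
    # The ad-funded objective: maximize attention, however it's provoked.
    return post.minutes_viewed + 2.0 * post.replies

def trust_score(post: Post) -> float:
    # The alternative objective: surface what users judged trustworthy,
    # accepting fewer total page views.
    return post.trust_rating

def rank_feed(posts: list[Post], optimize_for_trust: bool = False) -> list[Post]:
    key = trust_score if optimize_for_trust else engagement_score
    return sorted(posts, key=key, reverse=True)
```

The point of the sketch is only that the objective function, not the content, decides which Uncle Ralph rises to the top of the feed.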
So the business model of Wikipedia is just purely donations.
And we've done loads of A-B testing over the years on different kinds of messages.
I mean, in the early days, I remember one of our fundraisers,
I wrote a letter specifically talking about,
because I didn't know any better, but I thought,
oh, we're a charity and we want, you know, what do charities do?
Okay.
I wrote a long letter about what we want to accomplish
in sub-Saharan Africa to bring free education and so forth.
It performed okay, but it wasn't really great.
It turns out what really performs the best in terms of messaging
is kind of an equity argument. Basically, you use it all the time. You should
probably chip in. You love Wikipedia. Why don't you chip in and help out? And that performs really
well. People are like, you know what? I do love Wikipedia, and I really should give them some
money because I've benefited so much in my life. And so how do you get to a place where people are
like, yeah, am I actually willing to send in my 20 quid? And it's because you have to meaningfully impact their life in a positive
way. I don't think anybody's ever thought, gosh, I mean, Twitter has changed my life for the better
so much. I just want to send them some money. And hence why Twitter Blue has had a relatively
low uptake. I don't know exactly, but it seems to be relatively low.
And they keep changing to make it more transactional.
In other words, if you pay, you get promoted up in the algorithm,
and you show up first and so on,
which has empowered a lot of really dumb people, as it turns out.
I want to go deeper on the trust.
Because you said one thing that I sort of recoiled from a little bit.
You said the algorithm for trust.
Yes.
Right?
Trust is a feeling.
Yeah.
Right?
Yeah.
People can say things to me, but I might get a bad feeling, or conversely, you know, I get a good feeling about something. And we're highly attuned social animals who need to have this mechanism work for us
because our survival depends on our ability to go to sleep at night and rely on somebody else to
watch for danger. So we've learned to feel trust. And so how can we develop an algorithm for a human
emotion? Yeah, well, we clearly have algorithms for human emotions already. Sentiment analysis
and things like that and looking at even engagement is largely about certain emotional reactions.
Fear.
Fear, yes.
Paranoia.
And hope.
Yeah.
So, you know, I think it really is about... there's some interesting research that's
been coming out. In a mock social network type of thing, these experimenters had people rate whether or not news postings
were trustworthy or not trustworthy,
rather than like and dislike, and it
performed quite well compared to like and dislike
in terms of elevating higher-quality content,
in terms of how people behaved.
So if you go on Twitter
and you want to get a big following,
don't behave like the sweet grandma who's just lovely to everybody.
Behave like the bigoted uncle because that gets you followers.
It gets you traction and so forth.
Whereas if, instead of like and dislike on your Facebook post or the information you're sharing, people could say, oh, this was trustworthy or not trustworthy, then you get different signals that will tend to be, I
think, higher quality.
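As an aside on how such ratings might be aggregated: a small sketch, under my own assumptions rather than anything from the research Jimmy cites, of turning "trustworthy / not trustworthy" votes into a score, using a Wilson lower bound so a handful of early votes can't dominate.

```python
import math

def trustworthiness(trusted: int, untrusted: int, z: float = 1.96) -> float:
    """Wilson lower bound on the fraction of 'trustworthy' votes.

    A post rated trustworthy 2 times out of 2 scores below one rated
    trustworthy 95 times out of 100, so thin evidence ranks cautiously.
    """
    n = trusted + untrusted
    if n == 0:
        return 0.0
    phat = trusted / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

# e.g. trustworthiness(2, 0) ≈ 0.34, while trustworthiness(95, 5) ≈ 0.89
```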
The right would say that Fox is more trustworthy and the left would say that MSNBC is more
trustworthy.
Yeah, right.
No, exactly.
But I think both, like I personally believe, and this is my experience with the Wikipedia
community, people of different political viewpoints can distinguish between higher and lower quality
output. So for example, you know, in this country, I would say, you know, compare the
Telegraph and the Guardian. So you've got a left-leaning and a right-leaning paper.
They're both quality newspapers, and you can critique them on this and that.
And then you've got very low quality media. I'm not going to mention
the Mail Online, but if I were,
you know, and actually there was a great survey that came out just a couple days ago. I saw like
the most trusted, I think it was The Economist and the BBC scored very high, FT, very high.
Whereas, you know, the Mail scored quite low in people's estimate of trust, and yet it's incredibly
popular. And so I think we can make those distinctions. I mean, I can say, I can talk
about various commentators on the internet, and there are those who I disagree with, but who I
rate, who I say, okay, you have to grapple with their ideas. They're serious thinkers putting
forward arguments that have to be grappled with. And then there's just blowhards who have no idea what they're talking about.
Like one of the things I've always said is like, if I went on Facebook one morning and they said,
oh, we're trying a new feature. Instead of showing you the things we think are going to engage you
the most or so forth, we're going to show you things that we have signals that are of quality
that we think you might disagree with. Oh, that sounds interesting. Like show me something quality
that I don't agree with. That's actually very interesting to me. It might change my mind,
or I might improve my own thinking and so forth. And if you only show me things that just flatter
my own preconceptions, I kind of like that too, of course. I'm a human being, but I really want,
in a deeper sense, something more interesting, more engaging, more thoughtful, which is what I hope to get. You know, when I go to Wikipedia on a complex topic, I hope to actually learn about what
are the various sides saying, how can I grapple with those ideas.
Wasn't that the goal of some of the aggregators, though?
Some of the news aggregators, it's kind of their goal, which is like, we'll show you
everything.
Yeah.
No, I think there have been a lot of people who've experimented with different kinds of ideas of, like, you know, what sorts of things.
So I don't know how they do it, but I use Google News quite a lot.
They tend to show you a variety of different things.
Isn't the algorithm of Google designed to show you the news that it thinks you want?
It does.
So I often surf it logged out, to restore sort of a general view of the world, not just things I like.
There was an experiment a bunch of years ago that wanted to prove the bias of the Google algorithm when Benghazi was a thing.
A thing, yeah.
And well-meaning citizens who wanted to really do their own research, if you were right-leaning and typed in Benghazi into Google, it showed you right-leaning news.
Ah, yeah.
And if you were left-leaning and trying to do your own research and ignore big media,
you put in Benghazi and you got left-leaning.
And people that had no political sort of interest typed in Benghazi and they got tourist advice.
So here we are.
We looked to the search engine, again, well-meaning citizens who are like, I reject
the bias.
And we don't even know where to go anymore to get
legitimate news if we're looking for it. Yeah. So if you think about that,
like what would you really want? I think that's one of the reasons Wikipedia is really popular,
at least hopefully. You go to Wikipedia on Benghazi and you do get sort of an understanding
of what happened, what's the issue, what's going on in a way that shouldn't sort of lean left or
right. Obviously, we're human beings, so you can grapple with anything in Wikipedia, but
people do want that. They have a passion for that. They don't want to be fed one line.
They want to understand the world. Again, not everybody. Some people are perfectly content
to live in a bubble. But I actually think even a lot of the people who are perfectly content to
live in a bubble are victims of the bubble they're living in.
They haven't actually sort of stepped outside or they haven't seen anything that's challenged them to go, oh, hold on, maybe I need to have a more nuanced view of this.
But I think people, you know, Aristotelian thinking, you know, sort of people want to know.
They love to understand the world rather than just be fed an illusion. Why aren't there more
Jimmy Waleses? It's a
serious question. There are plenty of people who claim. That's why they start an aggregator
or something and they go out and get VC funding for their aggregator.
That's a completely different problem. But why aren't there more of you
who are out there trying to rebuild or offer alternatives to news or social media?
You know, we've got the encyclopedia down.
Yeah, yeah.
You know, like, where's the competition to get this right?
Well, I mean, I think there is a fair amount of competition, but nothing's gotten real traction yet in many cases.
And why not?
Like, why is it not working?
Well, I mean, I think a piece of it is with the social pilot, I've decided, well, we're
not going to have any advertising.
Okay.
And part of the reason for that is, so when I first started the WikiTribune project, we
thought, okay, we're going to do a hybrid,
hire 10 journalists, and we're going to do a hybrid with journalists working with community
members to write the news.
And there's a lot of reasons why that didn't work.
It's complicated.
But actually, one of the things that I realized is like, we ran a story.
It was a pretty good story.
It was a crypto story, a scoop we had about one of the big thefts or something like that. It got a huge amount of traffic, but I
looked at the headline, and the headline was quite inflammatory. And I was like, oh,
we're embedded in an ecosystem that rewards that, you know, and that's
problematic. And so it's really that broader ecosystem that rewards
inflammatory things that has to change. And so going back to the business model, it's like, okay,
can I create a social network that people say,
actually, it's not that I'm addicted and go on there all the time,
but when I do go there, it's such a satisfying experience.
I've learned something. I'm challenged.
I always find quality things there that interest me
that I'm willing to say, you know what, I should chip in for this.
And that's going to take some time. Will it even work? I don't know, but I'm trying.
And I think the big thing there is also paying the journalists, you know, which is...
Yeah, yeah, yeah. So paying the journalists... well, we're not doing journalism now.
Journalists can't be incentivized based on how many clicks they get.
Yeah, yeah, yeah. No, that's exactly right. No, I actually met this really interesting guy in
Canada who was doing online local news. And he was very adamant, to the surprise of many at this news conference, that he doesn't even let the journalists see their traffic numbers.
He said, look, if I do, they're just going to write more car crash stories.
And he's like, what we're really trying to do is be the heart and soul of each local community.
They were in small towns far enough away from a big city that there was a need for actual local news. And he said, we do that by
embedding ourselves for the long term: we have the obituaries, always popular, and we have
really local zoning issues and really local stories. Even though they may not get the most
clicks in the short run, it's the only way we can really embed ourselves in people's lives and be a part of the fabric of the community the way a local newspaper was back in the olden days.
And I thought that was really interesting.
And he said, it's poison.
We tried showing the journalists their numbers and rewarding them based on it.
And he's like, they just write car crash stories.
And it's not healthy.
Completely different topic.
But one of the things that I lament about the
internet, you know, Ted Koppel had said, we used to give people the news they need, whether they
want it or not. Now we give people the news they want, whether they need it or not. And one of the
things that I remember when I used to read an actual newspaper or an actual magazine is I'd
sit there with the newspaper, I'd read the front page, obviously, and then I'd start going through the newspaper.
And invariably, I would find something
that would spark my curiosity,
and I would learn something new that I wasn't looking for.
And now, sort of the whole promise of the internet online
is we'll give you the stuff that you're looking for,
we'll give you the stuff that you want.
Everything is curated for you.
Yes.
But serendipity has been lost,
because I don't ever do that anymore.
So I have a very good online news experience,
but I'm not learning things.
Yeah.
And I'm not going to go to another section
that I'm not interested in
because that's not what I do.
Well, I mean, we can talk about, you know...
Where's serendipity?
Take Reddit, for example.
So Reddit is huge.
It's very popular.
It's probably not in the first tier, but second tier.
And it is full of serendipity.
You know, you go to the front page of Reddit and you'll see all kinds of random things.
You know, you can join various subreddits on topics that you're interested in.
But I find on Reddit, you know, I end up reading some stories about astronomy. I
don't really have a particular interest in it, but I find it interesting. And I'm like,
oh, I didn't set out today to learn about this planet that possibly has an Earth-like
atmosphere. I find that interesting. But it's not what I thought, oh, I'm going to go and
research planets today.
Right.
And so I do think there are places where that serendipity can happen.
But you have to go there with the mentality of, today I'm going to go do discovery and serendipity,
versus what real serendipity is: I set out for this and I accidentally discovered
that.
Yeah, but I mean, I think it's sort of the same thing.
And the happy accidents have been sort of algorithmed down.
I go to Reddit for certain things and then I end up somewhere else. And, you know, I've already quoted one xkcd comic, so I might as well go for two.
But, you know, there's one that's sort of like you started at Wikipedia for some serious topic.
I think the example was the Verrazzano Bridge, you know, whatever.
And, you know, two hours later you're reading about the Brady Bunch.
And that is a common thing at Wikipedia.
People come for one thing, and they just end up down a rabbit hole and finding interesting things.
Because people do love that serendipity and so forth.
Serendipity in my social media life comes from other people.
Right, yes.
Where other people discovered things, and then I go.
And share something.
And that's how I'm learning, from somebody else's discovery.
Yeah, exactly. But I'm certainly not doing it as much myself, and like I said, I do
like pulling out a newspaper or magazine, just because I miss it.
Yeah, definitely. No, if you ask me my favorite magazine...
What's your favorite magazine?
Oh, my favorite magazine is The Week Junior. So this is a magazine for kids.
It's a news magazine.
And I actually think because of the business model, you subscribe to it for your kids.
They want to reach all parents anywhere on the political spectrum.
So therefore, they don't lean left, lean right.
They're quite straight down the middle.
They cover the news of the day in a gentle way for children because sometimes there's upsetting things in the news, but they do cover it. Lots of cute animals.
But I can flip through The Week Junior, and I'm like, this is a great magazine. It's not aimed at my
age level, but I wish there were a grown-up version. Well, have you tried The Week?
You know, I probably should. It might be good. Yeah.
Jimmy, I mean, I'm hopeful that we can actually figure out this internet thing.
We don't have time, unfortunately, to unpack AI, you know, because we blew it. You know, when the internet and news media showed up, we sort of just said, here it is.
And now we find ourselves trying to pick up the pieces and clean up the mess.
And AI is moving even quicker in its rate of adoption and just expansion.
And my fear is that, you know, where internet adoption was slower,
and we find ourselves with messes we're trying to clean up,
what happens with AI?
Just your quick, you know, very, very quick sort of point of view on that.
Yeah.
So one of the things I always say about myself is I'm a pathological optimist.
Yeah. I'm a cynical optimist. Cynical optimist. Yeah. I'm not even cynical. I'm just optimistic.
And so, you know, I've been playing around with the large language model tools and ChatGPT and
so on. One of the use cases that I'm exploring for the social experiment is... so normally you paste your link into Facebook or whatever, and it goes and fetches the headline in the card and shows it to you.
And that's written by the publisher.
And it has all of the positives and negatives of that, including clickbait, slightly over the top headlines for quite a normal story.
So what I've been testing is saying,
go read this story and write a neutral headline for it and a neutral two-sentence summary,
and then you grab the picture to see, like,
what does that look like?
And so I've done it maybe 10 or 20 times.
It's not a full scientific experiment.
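For readers who want to try the same test, here is a minimal sketch of that kind of experiment. The OpenAI client, the model name, and the prompt wording are my assumptions about one way to wire it up, not how Jimmy's version was actually built.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

def neutral_preview(article_text: str) -> str:
    """Replace the publisher's clickbait card with a calm headline
    and a two-sentence summary, as described above."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You rewrite news previews in a strictly neutral, calm tone."},
            {"role": "user",
             "content": ("Read this story and write a neutral headline "
                         "and a neutral two-sentence summary:\n\n" + article_text)},
        ],
    )
    return response.choices[0].message.content
```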
I find initially it's like, oh, that's actually really good.
So you get quite an inflammatory headline from whatever news source. And then instead,
you get a really calm headline that actually explains what's in the story. I think that's
the kind of tools and the kind of things that we may be able to say, oh, actually, we can
start to have better ways of engaging with information
and knowledge where we specifically ask for a healthier version. I don't know if that's going
to work or not. It makes me optimistic. I love that because the accountability is now me,
where before I can blame the media for showing me things, whether I agreed with them or disagreed
with them. But here, I have to say,
show me. Unlike Google,
where the algorithm skews things,
I can say, show me something biased toward my opinions.
Go read everything and show me something biased about Benghazi.
Or show me something pretty neutral based on what you can see.
And one of the things that's interesting too
is some of the best clickbait headlines raise a question in your mind, make you think it's going to go one way, and then you read the story and it's like, oh, it's not that exciting at all.
It's like, finally, so-and-so has admitted the truth about whatever.
I'm like, oh, I've got to see.
They've admitted the truth about nothing.
It's sort of like what they've been saying for 20 years.
So I think finding ways of smoothing out
some of that crap can be very useful.
As always happens in these conversations,
it really starts to get good at the end.
Unfortunately, we're out of time.
Jimmy, thank you so much.
I really wish we could clone you
because the world would be a better place
with many more of you.
Oh, that's very sweet of you to say.
Thanks for doing this.
Really appreciate it.
Thanks very much.
Great.
If you enjoyed this podcast and would like to hear more,
please subscribe wherever you like to listen to podcasts.
And if you'd like even more optimism,
check out my website, simonsinek.com,
for classes, videos, and more.
Until then, take care of yourself,
take care of each other.