Motley Fool Money - AI & You, AI & Reddit
Episode Date: December 20, 2024

For our annual best-of interviews show, we look at the topic of the year – artificial intelligence – through two different lenses: how individuals can put the technology to use, and how companies are using it to make their products better and fuel growth. (00:33) Wharton Professor Ethan Mollick walks through his four rules for using AI, how he pushes students to use the technology, and the research from his book Co-Intelligence: Living and Working with AI. (19:03) Reddit CEO Steve Huffman explains how the company is harnessing AI to localize content to reach new markets internationally, open up new revenue opportunities, and improve the user experience. Stocks discussed: RDDT, GOOG, GOOGL. Host: Dylan Lewis. Guests: Ethan Mollick, Steve Huffman. Engineer: Rick Engdahl.
Transcript
This episode is brought to you by Indeed.
Stop waiting around for the perfect candidate.
Instead, use Indeed sponsored jobs to find the right people with the right skills fast.
It's a simple way to make sure your listing is the first candidates see.
According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs.
So go build your dream team today with Indeed.
Get a $75 sponsored job credit at Indeed.com/podcast.
Terms and conditions apply.
We're looking back on some of our favorite interviews from 2024.
This week's Motley Fool Money Radio Show starts now.
That's why they call it money.
From Fool Global Headquarters, this is Motley Fool Money.
It's the Motley Fool Money Radio Show.
I'm your host, Dylan Lewis.
And listeners, today we are coming to you with our special episode as we head into the holidays.
Each week on Motley Fool Money, we air an interview segment.
It's our chance to go outside the Fool and get some perspectives on where the world is heading, straight from the people that are helping shape it. Today we are going back to two of my favorite interviews from the past year.
Both of them touch on the undeniable topic of 2024, artificial intelligence. It's new, it's interesting,
it's a little bit scary, but it isn't going away. And the best time to dig into the technology,
other than yesterday, is now. So today we are going to do that. Our first guest, Ethan Mollick,
talked to me about how folks like you and I can use AI as a companion for everyday tasks, work, and more.
He's a professor at Wharton focused on entrepreneurship and innovation who has brought AI into his classroom with his students.
When we spoke earlier this year, he walked me through his four rules for using AI and how he put those rules into practice in his classroom and to help him write his book, Co-Intelligence: Living and Working with AI.
As I noted in your intro, you're a professor at Wharton, and I know you've been focused on innovation, entrepreneurship, a lot of the major themes of the business world over the last decade plus,
which had you focusing on things like crowdfunding. When did AI begin to come more into focus for you?
So I've always been AI adjacent. At the MIT Media Lab, I worked with Marvin Minsky, who was one of
the founding fathers of AI, but I was always sort of the non-technical person in the room,
this sort of entrepreneur and connection maker. But I've long been interested in the idea of how do we
use education at scale? How do we teach lots of people, especially entrepreneurship and business lessons?
I'm a business school professor; I teach entrepreneurship. And so I've been using AI for doing that
kind of work for a long time. And then when ChatGPT came out, I just happened to be ready for
that world. I was already using those tools and knew what was happening a little before
other people. So I like to think I'm a couple months ahead. I think you're probably more than a
couple months ahead of a lot of people. What I liked about the book reading through it is it was a great
exploration of the space and kind of a foundation, but also in a lot of ways, a very practical
user guide for getting up to speed very quickly and kind of going from, all right, I don't know
anything, to beginner, to intermediate.
And I think that it's really useful for people in that sense.
Knowing how quickly the AI landscape has changed, what was the process like for writing the
book?
And was it an accelerated timeline?
So I wrote the book and edited it through the end of December.
I wrote it knowing GPT-5 is coming. It's not here yet, but it will be.
And all of these other tools were coming one way or another.
I did write it pretty quickly.
This is my third book.
But I couldn't have written it without AI.
Actually, there's almost no AI writing in the book.
It's not AI writing.
Like, it's, you know, there's little AI segments, but they're clearly marked.
The interesting thing is, it did all the other stuff that made writing books horrible
for me, on my behalf.
So, like, if I got stuck on a paragraph, right, sometimes you work on that sentence for a long time,
I'm like, give me 30 versions of this sentence.
I use that inspiration.
There's a lot of work showing that AI works well as a marketer and as a persona to market
to.
So I asked it to read my book in various personas to give me feedback on what I was doing and, you know,
and advice.
And I asked it to summarize research papers that I turned
into part of the book. So it was very, very helpful in accelerating this process. It's sort of what AI does,
the co-intelligence idea, it's an accelerator. I think that that tees up nicely for some of the
rules of using AI that you talk about in the book. And I want to run through them because I think
there are probably some listeners out there that are avid users of ChatGPT and probably some other
folks who maybe aren't as familiar or have never interacted with an LLM before. So how do you
structure how people should be using AI?
I recommend four rules to get started with.
And the first rule is invite AI to everything you do.
Basically nobody knows how AI is most useful for your field.
Nobody does.
I think people are waiting for instructions.
I talk to OpenAI all the time.
I talk to Microsoft.
I talk to Google.
There is no instruction manual.
Nobody has, like, a secret book that they haven't shown you yet.
There's no consultant who knows anything.
Nobody knows anything.
So the way to figure out what this does is just to use it a lot and see what it does.
And I strongly recommend just trying to use it for everything you
legally and ethically can.
Then the second piece of advice is that you should learn to be the human in the loop.
So the AI is really good at a lot of things.
We could talk about the studies and results on this.
But it's really good at innovation.
It's really good at analysis.
It out-invents most people.
You want to think about what you're actually really good at,
because whatever you're best at, you're definitely better than the AI.
And I think that there's going to be a real benefit to think about what you want to do
and what you want to delegate.
The third principle in the book is one where I say, you know, you should treat the AI like a person.
And this is kind of considered a sin in the world of AI.
You're not supposed to anthropomorphize it.
But the fact is, it's trained on human language and human interactions.
So it works best when you work with it like a human.
In fact, one of the mistakes people make is they assume that software developers
are the people who should be using AI, but it's actually not.
It's really managers, writers, journalists, teachers, often do a much better job using AI
because they can take the perspective of it as a person, even though it isn't.
That helps you do great work.
And the fourth is that this is the worst AI you're ever going to use.
And we are in the early days of this kind of revolution.
So I know you really have your students focus on using AI.
I believe it's a requirement of your classes.
What do those rules that you laid out there look like in practice for them using it as part of the classroom experience?
I initially went viral a while ago for my AI rules for the class, right?
That was right after ChatGPT came out, where I required use and made people accountable for the outcomes.
None of that works anymore.
That was great for GPT-3.5, the free version of ChatGPT.
GPT-4 writes better than most of my students.
I teach at an Ivy League school.
My students are amazing, but it writes better than most of them at homework assignments.
It makes fewer errors than an average student.
So how do you deal with the fact that that's the case?
You can't just say, use it and you're responsible for the outcome, because I can't find the errors as easily anymore.
So instead I've adapted how we use AI in the class. AI helps me co-teach classes.
It helps me provide assignments.
One of my assignments a couple of weeks ago was that people had to replace themselves at their own jobs.
So you're going for a job interview.
You have to use the AI to do your job for you and hand a GPT that does this to your employer and say,
I'm ready for a raise now.
And I had students that were everywhere from Navy pilots to financial analysts to hip-hop promoters,
and they all found ways of automating their jobs.
Three of them got jobs that week, by the way, as a result of this.
So it was a successful trial.
I think so, yes.
It's interesting to hear you say that.
And I think where most people have lived with AI usage, certainly for me as someone who works in content,
is it is a kind of co-pilot for brainstorming.
It is something that can be very helpful to kind of get the ball rolling as a creative process.
I know with education, you've really focused on the way that can help simulate experiences for people
and the way that you're able to kind of mimic real-world situations that people might be in.
Yeah, I mean, that's one of many uses, right?
So building simulators, I've been building simulators for a decade.
They're very expensive and hard to build.
I built realistic sims where we built fake Gmail, fake chat, fake Dropbox, fake Zoom,
and you literally run a fake startup in real time over the course of six weeks.
And those took a big team of people, a lot of money, a lot of resources,
and I could get almost the same effect from two paragraphs, not quite as good, but pretty good.
So simulation is one of the areas that AI is really good at.
Another one is that it's very effective as a tutor under certain circumstances.
Actually, the default way of using it as a tutor doesn't work very well,
which is asking the AI to explain something like you're 10.
That's great for getting an explanation,
but we don't remember that.
A real tutor asks you questions.
It interrogates you, and we can make the AI do that.
It works really well as, you know,
there's a whole bunch of assignments we have around this
for integrating knowledge, for helping to test you.
So there's a lot of uses.
You mentioned the asking it to be a tutor,
and kind of that leads us into some of the ideas around prompting
and the way to set AI up well to give you what you're looking for
and maybe what's most helpful for you.
For some folks who maybe haven't spent as much time prompting,
what would be some of your tips for interacting with an LLM?
The mental model you want to have for LLM is that it knows a lot of things about the world.
It has a huge web of connections,
but it's going to kind of give you the average median answer all the time.
Your job as a prompter is to knock it away from that average answer to something more interesting.
And you do that by providing it with context that gives it a different place to start from than just its default.
So easiest way to give it context is a persona.
You are blank.
You're a very good marketer.
you are a marketer for consumer products.
That's an easy way to provide context.
And then there's more advanced ways of doing that too,
like asking it to think step by step, or providing examples.
But your goal is to provide additional context and information.
It's similar in a lot of ways to the way that people use Google.
I mean, early on with search,
the more specific you could be the tools of using things like quotations
to have specific text rather than just kind of a general query
helped you get closer to what you were looking for.
Over time, search engine results have gotten better and better because they've learned more and more about what we're looking for when we provide specific queries.
Does that feel like a good parallel for how people should be thinking about prompting and interacting with AI?
Well, Google is sort of headed the way that AI did.
I mean, you can't do all the things you used to do with Google.
They removed a lot of their specialized controls that used to make you good at Googling.
It used to be called Google Fu.
You were good at Googling.
Right now, you can be good at prompting.
I actually don't think it's going to be that important in the long term because I think the AI already knows your intent.
Like, if you want to write a novel, ask the AI, help me write a novel.
And you'll get a surprisingly large part of the way there from just that.
So I know we saw a lot of companies putting out these prompting roles, hiring for prompt engineers.
Do you think that that's a short-lived career?
So I think prompt engineering will be useful if you're building prompts for other people.
But I think for most of us using these systems, these systems are getting smart enough.
And I don't know a single person who's an insider at one of these organizations who doesn't think that the AI
will itself be able to be self-prompting.
Most of the people I talk to at OpenAI or Google or Anthropic don't think prompt engineering
is a long-term thing to learn for most people.
So does that just become a skill that most people are generally bringing into their work
rather than it being a specialized trade that we're hiring out for?
No, I just think that the AI is smart enough to do the prompting itself.
We already know that the AI can already figure out intent better than most of us can.
And so you'll just say what you want, and you're not going to need to prompt it because
it'll know. As you've been bringing students in, I imagine some folks have some familiarity with
AI. Others maybe don't. Have you gotten pushback on bringing it into the classroom?
I think people expect a lot more pushback in the world than there is. I think there's a lot
of theoretical pushback, but people want to learn how to use these systems. So there's a lot to discuss
about ethics and privacy and other sets of concerns, but I think people also want to figure out how
to make these things useful. They're here. There isn't really a choice anymore. Occasionally,
I'll have, like, a Google DeepMind person in one of my talks.
They'll raise their hand and say, what do you think about the ethics of releasing AI?
And I always say, look, you made the decision to release large language models.
This is not a conversation we get to have anymore.
You made this choice.
We should know what their limitations are and what their ethical compromises are.
But like it's out there in the world.
So we better figure out how to use it.
Listeners, more from Professor Ethan Mollick on how individuals and companies are using AI after the break.
We'll be back in a minute.
You're listening to Motley Fool Money.
Welcome back to Motley Fool Money.
I'm Dylan Lewis.
and this is our annual best-of-interview show for 2024.
We are zooming in on the most zeitgeisty of zeitgeisty things this week, artificial intelligence.
Earlier this year, I caught up with Ethan Mollick.
He is a Wharton professor and author of the AI handbook Co-Intelligence: Living and Working with AI.
And what I loved about our conversation is that it brought AI down to the everyday person
and the implications of things like AI at work and how companies might use it well and also misread its impact.
Let's dive in.
I want to dig into AI at work in particular.
Studies in the book that you mentioned show, I think, that something like 95% of job categories
have some overlap with AI, including professor.
You know yourself.
You're right up there at the top of jobs with AI crossover.
How do you personally feel and think about that shape in your work?
The thing to think about with jobs is jobs are not just jobs.
Jobs are bundles of tasks.
We do many things, right?
So, you know, you do podcast interviews.
I'm sure you do five other things.
plus you have to fill out an expense report.
You have to email me about this, have a pre-conversation, do research, all this kind of stuff.
Some of those tasks you probably really enjoy and you're really good at.
Some of those tasks you're probably mediocre with, but they just got bundled into your job.
So the first place you want to start with AI is thinking about what parts of my job bundle do I want to hand off to the AI.
It's sort of like accountants used to spend almost all of their time doing math by hand.
And then spreadsheets came along.
And now accountants don't do that anymore.
There's still room for accountants.
Their job has shifted and moved up market.
If you're conscious about how you bundle it, you can shift and move up market as well.
So AI is putting you in a spot where you can maybe focus on more of the things that you'd like to be focused on
and do fewer of the rote things that you aren't as interested in.
Right.
I mean, so expense reports are something I'm happy to hand over to AI.
I would love to hand over grading, but I don't do it because ethically I don't feel comfortable doing that.
But, like, there's a lot of tasks that the AI can do that are things people don't want to do.
When we survey people who use AI, we get the same two answers, which is they are both, you know, a little nervous about the future and also really happy,
because the AI is taking the worst parts of their job.
You talk about research showing that AI can really help close performance gaps when it's provided to people.
What does that look like and what is some of the research showing in that?
There's a universal result from all cases where AI does work with you, which is it improves performance of the lowest performers more than the highest performers.
Now, there are a couple of caveats that are really important.
One of those is that's sort of naive AI use that everyone goes through initially;
we don't know in the long term whether the top performers also get a 10 times boost.
So it starts off as a leveler, an elevator for the lower, but it may not work that way in the long term.
Little bits of evidence around that: there's a really cool controlled study in Kenya
that looked at small business owners who got access to the AI.
If you were a top small business owner, you got a 20% profit improvement by getting AI advice,
not help, but actual advice.
While the lowest performers weren't able to implement the advice of the AI.
So I think it depends on circumstances, but we're definitely seeing that leveling, upskilling effect
almost everywhere.
Are you seeing that in the classroom?
Yes, of course.
There's no bad writers anymore.
Knowing that everyone's writing gets better,
how do you start looking for top performers
and weaker students?
Welcome to everyone's giant problem.
I mean, classes are easier, right?
Like, we have short-term disruption,
but we can test.
Like, we've got lots of options.
But if you think about middle management
in most companies,
what most managers produce is words, right?
They do a whole bunch of tasks,
but they produce words.
So they write reports,
they write documents,
they do presentations, and the number of words they write is an indicator of effort.
Big document, lots more effort.
The quality of the words they produce is their intelligence or ability.
The lack of errors is their conscientiousness.
All that just broke.
I can produce an infinite number of words that all seem high quality, good enough, at scale.
What does that mean for organizations?
I think one of my favorite sections in the book is where you talk about this notion of the button
existing in basically any productivity software application,
anything people would use for Slacking or emailing people,
this button that kind of automagically drafts responses for you.
And it's a very visible, very clear extension of the technology,
and I think one that's easy for people to wrap their heads around.
But when we start drafting a lot of things out with AI
and then maybe tinkering a little bit,
what do you think that does to the value of communication and work?
How do you wrap your head around that?
That's the problem.
I mean, we're about to just break how all of work and communication operates, right?
I mean, you're kind of an idiot to not use AI for stuff.
The clearest example to me is, as a professor, I'm supposed to write letters of recommendation
for people.
And the whole point of a letter of recommendation is not the letter.
It's the fact that I'm setting my time on fire as a signal to people that I care about
this student, right?
So they send me the resume, the job they're applying for, and I spend a good 45 minutes
working on a letter for them.
But if I just give the AI the resume, the job they're applying for, and a thumbs up or
thumbs down and say, I'm Ethan Mollick,
I will get a much better letter in 35 seconds, especially if I go a round or two with
the AI, maybe two minutes. Do I send the ethical letter that's less likely to get them the job,
or do I send them the unethical letter that's more likely to get them the job?
Maybe you split the difference? I don't know. I mean, it's an open question, right?
Although I did just have a student send me the prompt they want me to use to write their letter.
Wow. So this is a first. I want to extend that line of thinking a little bit. I mean,
the Motley Fool is a business of intellectual property. That's what we do. We provide
premium stock newsletters, we provide model portfolios, we have coverage, we have this very podcast,
as well as a lot of articles. As you see people that are in content making investments in AI,
what are some of the things you start to wonder about when it comes to information,
when it comes to the way we consume things? So right now, AI is at the 80th percentile of many kinds
of human performance. There's no way you're working within the Motley Fool and you're not in the top 1%
or 0.1% of ability level in whatever you're doing,
because you wouldn't be there otherwise.
So to me, the biggest danger is organizations,
especially content organizations,
thinking that this is fungible and replaceable.
I think the idea that this is going to do automated content
and that's the value is not really the point, right?
The point is, how do I get my writers,
my IP creators to do the stuff that uses their 0.1% ability,
their high-end ability,
and that lets the AI help with the other stuff.
That means the AI isn't doing that kind of
interesting, high-end task. You can get the AI to write reasonably good portfolio articles,
I'm sure, and with a little bit of tuning, it would do a reasonably good job. But, like,
you're not going for a reasonably good job. That's not why people are signing up for your organization.
So I think that there's a danger of a race to the bottom of like not realizing the main advantages.
And this is true with every company. The most dangerous thing you can do is view AI as a productivity
tool for cost cutting. So the idea is like, okay, it increases performance by three times.
That's great. I can fire two-thirds of my staff. In a moment where we're actually going through
transformation change, that's a really dangerous viewpoint.
That's a wrap on my conversation with Ethan Mollick, but we've got plenty more AI wisdom ahead
from the CEO of one of the market's hottest stocks of 2024.
Stay right here.
You're listening to Motley Fool Money.
In a world full of noise, long-term thinking stands out.
On the Capital Ideas podcast, Capital Group leaders explore the decisions that matter most in
investing, leadership, and life.
It's a rare look inside a firm that's been helping people pursue their financial goals for
more than 90 years. Listen to the Capital Ideas podcast from Capital Group, published by Capital
Client Group, Inc. Welcome back to the Motley Fool Money Radio Show. I'm Dylan Lewis, and this is our
year-end holiday special, where we bring forward some of our favorite conversations from the past
year. Earlier in the show, we focused on how individuals can interact with artificial intelligence
and how the technology is shaping the workplace. Now we're going to turn our gaze over to how
companies are using the technology in tangible ways to make their products better for end users.
If you spend a lot of time online, you probably know Reddit as the front page of the internet.
If you don't, maybe you know the online community company as one of the best performing IPOs of
2024. Shares have tripled since the business came public this past March, and its growing
user base and monetization efforts are a big part of the reason why. Back in September, Reddit CEO Steve Huffman
walked me through how a 20-year-old business continues to find new users and how the company is harnessing AI to localize content and reach new markets internationally.
It's a treat to talk to you because I'm a longtime user of Reddit. I first started using Reddit in college, and, not to date myself too much, that was over 10 years ago.
And when I started using it, it was kind of a mix. It was for the memes, but also, you know, I was kind of learning how to dress myself. And so I was going to r/malefashionadvice
and trying to pick up some tips there.
I was going to school in Boston,
and so I was trying to figure out
what was going on in the city
and what I needed to know about events
and so I was going to r/Boston.
I'm guessing that some of our listeners of the show
are also longtime Reddit users like me.
There are probably also some folks
who are part of the 300 million-plus folks
who come to you weekly.
For folks that do not know Reddit very well,
how would you describe it to them?
Oh, starting with the hard questions.
Okay.
Look, first, thanks for being a user for so long.
It sounds like, yeah,
you've been on this journey with us a little bit over the last while.
So kind of depending on what I sense their context is, I explain Reddit in a couple of ways.
If I was explaining it from the ground up, I'd say Reddit is communities.
And so those communities can be about anything and everything.
Every interest, passion, hobby, whatever you're into, whatever you're going through,
it's on Reddit somewhere.
And then what I would say is, look, if you're pretty much between the age of like 17 and 70, you know, whether you're a nerd or normie, man or woman, you have a home on Reddit. There's something there for literally everybody.
Other times I explain it in contrast to social media. Social media is powered by algorithms. Reddit's powered by people. So every piece of content that becomes popular on Reddit is made popular by people voting,
and voting in the context of a community.
And so, you know, up or down.
And so users can make things popular, but they can also disappear things.
And so by definition, polarizing content doesn't do as well on Reddit.
And so what you get as a result is Reddit is the most human place on the Internet because
it's powered by people.
And if you look at the conversations on Reddit, right, everybody has comments.
But if you look at the comments on Reddit, if you look at like the object of the sentences,
you'll see that they're talking to each other about
whatever it is. As opposed to social media where they're often talking kind of at, but past the
poster, right? They're either super effusive or maybe the opposite. But there's kind of a lack of
connection there. On Reddit, it's people organized around things they love, talking about those things
like real human beings. Where do you guys think you are in the grand scheme of Reddit's potential?
I guess you can take that in the platform direction, you can take that in the business direction,
wherever you want to go with that.
It's such an interesting idea to contemplate,
because on one hand, Reddit has been bigger
than I ever thought it would be since August 2005.
And look, by some measure, we're big now, right?
We have about 90 million people visit Reddit every day,
360 million people visit Reddit every week.
So that's big in terms of absolute numbers.
But social media, you know, the biggest platforms there
have a billion, two billion users every day. So there is a, I think, huge opportunity there.
Reddit, we're about 50-50 US versus non-US. I'd say other major platforms are more like 80 to 90%
non-US. So I think a lot of opportunity to grow more users. And then on the business side,
I think we've gotten out of the beginning phase. We're in the ads business. That's our primary
business model. Though we license data and we do some other stuff as well, we're primarily
in ads. It's growing. It grew, in the last quarter we reported, 50%, a little more than 50%.
So that's great. Our ads are working. Our customers are happy. We're continuing to deepen relationships
there. But on one hand, we IPOed in March, and it feels
like, okay, we've gotten to a certain level of stability and scale where this feels real and it's
working. On the other hand, it almost feels like we're at the very beginning.
I have a lot of the same feelings today as I did almost 20 years ago, which is, gosh, we've
barely scratched the surface of this thing.
And it can be so special.
And I think really, really great on the platform side and the business side.
And so I'm really two minds about it.
But the Jeff Bezos idea of day one is really something I feel like we're living right now.
It feels like the beginning.
I do want to dig into some of the company numbers a little bit and talk through some of
of that. I was impressed as a long-time follower of the business to see the growth that you guys
put up in 2024, because platforms have been around for a very long time. The revenue growth of 50%,
to me, not crazy surprising for where you guys are at in the monetization story. The user growth
story of 50% was a surprise. What was behind that growth? There's a couple specific things,
which I'll get to.
But the big picture idea is,
this goes back to what I said
when I'm describing Reddit.
Everybody's got a home on Reddit.
Okay, so then that raises the question,
if everybody has a home on Reddit,
and, you know, Steve,
if you say the content's so great and it's so unique
and it's such a great experience,
then why isn't everybody on Reddit already?
And so I think there's two possible answers for that.
Well, actually three.
The first, they haven't heard of Reddit.
Well, certainly in the US, that's increasingly less likely.
Okay, so number two, they tried Reddit and it didn't work for them.
That's the group we've really been focused on, making it so the new users who are coming to our front page or opening the app for the first time,
are either primed to, like, experience Reddit, as opposed to coming from search or something like that,
and making sure they find a community that speaks to them.
And so we made sign-up much, much easier, and the community onboarding, like helping you find your home on Reddit, much more effective.
We made both the website and the app much faster.
We just redesigned it in a lot of little ways, so it's easier on the eyes, fewer bugs.
And our home feed has gotten much better at making recommendations of communities that you might like.
And so really getting people into their home on Reddit and then finding all their interests much more effectively.
And so that's been working very well.
And then at the same time, we made our website substantially faster, two to five times faster.
We launched this in May of 2023.
Googlebot likes speed.
And so faster pages rank higher, and faster pages also get indexed faster.
And so Google search works in mysterious ways. As close a partner as we are with Google, we have no idea how search works.
Nobody does, right?
Right, nobody does.
But speed matters.
And so when our website got a lot faster, we started ranking higher.
And then combine that with the product improvements, users are having better experience on Reddit.
And so now it creates this flywheel that we're really benefiting from.
We see a lot of new and core users coming from search, and we're much more effective
at kind of getting them into their home on Reddit.
And so I said there's two things, right?
So either you haven't heard of Reddit or it didn't work for you.
That number two is the one we've really been focused on.
There's a third one, which is you don't speak English.
And so that's kind of the next frontier of Reddit.
Reddit's corpus today is still mostly English.
But growing outside of the United States, outside of English,
that's a part of the next chapter of Reddit, unlocking that.
What does tackling that look like?
What are some of the challenges and things that you guys are working through to make Reddit
localized to some of the other big international markets?
Sure.
So all the things I just mentioned around speed,
performance, all that, all that matters.
Right?
So that's the foundation.
There's other parts of the foundation.
Like, safety is a big part of it as well.
So the foundation helps everybody.
But on top of the foundation, there's a chicken and egg problem.
You need content to attract users, and you need users to create content.
And so we kind of come at this from two ways.
One is just program work.
So we'll target a market.
And then we, you know, there are users in every market.
We're not starting from zero anywhere.
So we go to the communities there.
We reach out to the mods.
We figure out what communities probably should exist that don't, like cities, sports teams,
local passions, things like that.
And we work with mods that kind of try to bring those communities to life,
make sure they're in discovery, make sure everything's kind of humming there.
The second thing we're doing, which is working very well, or at least off to a great start,
is machine translation.
So new technology here with large language models, we can actually translate the existing
Reddit corpus into other languages at human quality.
Now, not all the content is relevant, but a lot of it is.
So we've been testing this in France, in French, in the first half of this year, and it's gone
very, very well.
And so now we're adding on more languages.
We're doing German, Portuguese, and Spanish.
And so that will get us just a bigger content foundation.
And then from there, right, we need to see the next step, which is kind of organic growth,
or call it kind of native organic growth on top of that.
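The localization pipeline Huffman describes can be sketched in a few lines. This is a hypothetical illustration, not Reddit's implementation: `llm_translate` is a placeholder for a real model call, and `LOCAL_ONLY_TOPICS` is an assumed stand-in for the "not all the content is relevant" filter he mentions.

```python
# Hypothetical sketch of LLM-backed corpus translation: translate the
# existing English corpus post by post into a target language, skipping
# content that isn't relevant outside its home market. `llm_translate`
# is a stand-in; a real pipeline would call a large language model.

LOCAL_ONLY_TOPICS = {"us-politics", "local-deals"}  # assumed relevance filter

def llm_translate(text: str, target_lang: str) -> str:
    # Placeholder for a model call ("translate to French at human quality").
    return f"[{target_lang}] {text}"

def localize_corpus(posts: list[dict], target_lang: str) -> list[dict]:
    """Translate relevant posts, keeping the original text alongside."""
    localized = []
    for post in posts:
        if post["topic"] in LOCAL_ONLY_TOPICS:
            continue  # not relevant to readers in other markets
        localized.append({
            "original": post["text"],
            "translated": llm_translate(post["text"], target_lang),
            "lang": target_lang,
        })
    return localized

posts = [
    {"topic": "cooking", "text": "Best way to proof sourdough?"},
    {"topic": "us-politics", "text": "Senate vote megathread"},
]
french = localize_corpus(posts, "fr")  # only the cooking post survives the filter
```

The key design point Huffman raises survives even in this toy version: translation expands the content foundation mechanically, but the relevance filter and the later "native organic growth" are separate problems the model alone doesn't solve.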
So international is, it's real work.
You know, one of the differences between Reddit and social media is, you know, Reddit is communities.
People don't just join communities overnight, let alone create them.
And we can't force it.
So what we try to do is really create the conditions for growth, but we can't actually force anything.
So we're getting a little bit better at that, but that is, you know, there's a lot of, I think, finesse required to get that right.
Folks, we've got more from Reddit CEO Steve Huffman ahead, including how the company is fueling LLM efforts by licensing its corpus of data to companies like OpenAI, and also how artificial intelligence is improving the site and leading to a better, safer internet.
That's next here on Motley Fool Money.
The Motley Fool only picks products it would personally recommend
to friends like you.
This is the Motley Fool Money Radio Show.
I'm your host, Dylan Lewis,
and today we are tackling the topic of 2024,
artificial intelligence, from all angles,
how people can put it to use,
how companies are thinking about it,
and on that note,
that is where we will pick up my conversation
with Reddit CEO Steve Huffman,
diving into how the company is looking at AI
to open up some revenue opportunities
in data licensing with AI leaders like OpenAI,
and also some of the ways they are using the tech to improve the site and user experience.
Outside of the ad business, I know it's a small piece of the pie for you guys right now,
but you do have a data licensing business.
I think it was about $28 million in the recent quarter.
That is, as I understand it, allowing other companies to use platform data for LLM training,
for AI applications.
What was the decision there and what are you guys seeing there?
Yeah, so Reddit is one of the largest corpora of human conversation on the internet.
For better or for worse, but I think overwhelmingly for better,
Reddit has been an open platform.
Reddit's content was used for training these AIs.
Now, our terms of service are, like: Reddit's open.
You can use Reddit's content for non-commercial use.
But for commercial use, you need a license.
And so I want Reddit to stay open.
I also want to be practical.
Like, Reddit's content is useful for these things.
And look, these AIs, these large language models, you know, they help our business.
I think they advance, you know, humanity.
It's one of the most important technologies, you know, of the last generation.
They help us make Reddit safer.
They help make the whole internet safer.
And so we like these technologies existing.
And we're proud that Reddit's content can be used to create these technologies or advance them.
But I think, just as a matter of, like, business practicality,
commercial use of Reddit's content, Reddit's public content, requires a commercial agreement.
And so that's what we've been working on. We still do non-commercial agreements. So we'll give
Reddit content away to researchers or other nonprofits like the Internet Archive. Now, there's
terms on all of these things. And so we created a public content policy. We released this
earlier this year. So every platform more or less has a privacy policy. And that basically says,
this is what we do with your private information.
So we have one of those too.
Now, we don't have a whole lot of private information,
but what we do have, we don't share.
It doesn't leave Reddit.
The public content policy basically says,
this is the content that you put on Reddit in a public community.
It's on the public internet.
Like, you should know that.
If you don't want it on the public internet or in search indexes
or showing up in research potentially
or potentially being used for training,
don't put it on Reddit.
So we want the terms of engagement to be clear there. And then we said, look, if you want to use this content,
you have to have an agreement with us. And to use it, you can't do certain things, like reverse
engineer the identity of our users, or use it to target, you know, ads against users,
things like that. And so I think under those terms and under those policies, we've been able to
strike a few deals that I think are important. So Google and OpenAI are the two biggest
ones on the training side. And then we've done others, like with Cision and Sprinklr, kind of on the
social listening, like what are people saying about these brands, that sort of thing.
So, yeah, it's a new business for us. It's off to a good start. But I'd say we're still kind of
in early days in what is, quite frankly, a developing market. I'm curious how AI fits into the
picture for Reddit itself and the product, the app, the site experience that users interact with.
Oh, I mean, it's very exciting. So like, there's so much hype around large language models.
But one thing that is undeniable is they are very, very good at jobs involving text and words.
And Reddit has a lot of text and words.
So I think some things I'm most excited about.
We've been playing around with this idea of post guidance.
So if you're a new user, you've grown up on social media, you've come to Reddit and you're submitting your first post.
But you've never submitted to Reddit before.
So maybe you don't understand that this is a community space, and this community has, like, very specific rules. What used to happen is you'd submit this
post and then a moderator would be like, oh, this violates a rule, right? In the science community,
no joking allowed, for example. And then you get banned. That's not a good user experience.
And so now we can use things like LLMs to detect, like, hey, this is a joke. And you can tell the user when they click submit: hey, too funny. Funny is not allowed here. We know you mean well, but, you know,
Try again. Much, much better experience. The user can adapt their post to something that should be a better
fit for the community. The community gets a new user, Reddit gets a happy customer or happy user.
So I think that sort of thing is really, really powerful. Obviously for safety, things like harassment and bullying
or whatever idiosyncratic rules subreddits have, like, you know, the example I was just giving you.
You know, LLMs can help detect. I think that's really powerful. I was a moderator for a little bit
last year. We have a program called Adopt an Admin, where our employees kind of guest
moderate subreddits. And so I did it with Am I the A-hole. So, you know, it's a large community
on Reddit. Can you explain the community? You submit a post as a user: I don't know, I wore,
you know, a cream-colored dress to my, you know, sister's wedding and everybody got mad at me,
like, am I the a-hole? To pick a real example from my life.
And then the community debates and gives you feedback.
Yes, you're the a-hole.
No, you're not.
And so it's a really interesting community of people basically kind of debating these social situations.
But they have a rule.
You can't use the word Karen.
And you can't use the word manchild.
Now, I've been thinking about rules on the Internet for a long time.
I don't like word rules.
Like, you can't say this word because I think they're too brittle, right?
Because there's always this context.
And indeed, on that subreddit, we'd spend a lot of time adjudicating uses of the word Karen.
Like, were they saying it meanly, or are they correcting somebody else?
Or is this the story literally about a person named Karen?
And so it's just like so much time spent on that rule.
Now, they eventually convinced me it's an important rule because it sets a tone for the conversation
and a tone for that community.
So I came around to their viewpoint.
It's like, this is important and it's had a good effect on this community.
But I'm looking forward to when an LLM can do that work so that the human moderators can do something else.
Because some of the rules are really complex.
I think LLMs will make Reddit safer.
Honestly, for that matter, they'll make the whole internet safer.
I think that's very exciting.
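The brittleness of word rules that Huffman describes can be sketched in a few lines. This is a toy illustration, not Reddit's system: the "LLM judgment" here is faked with a simple phrase check, standing in for what a real system would do by asking a model whether the word is used as an insult.

```python
# Sketch of the "brittle word rule" problem. A plain word filter flags
# every use of "Karen"; a context-aware check (here a toy heuristic
# standing in for an LLM judgment) exempts literal uses, such as a story
# about a person actually named Karen.

BANNED_WORDS = {"karen", "manchild"}

def word_filter(comment: str) -> bool:
    """Naive rule: flag any comment containing a banned word."""
    lowered = comment.lower()
    return any(word in lowered for word in BANNED_WORDS)

def context_aware_filter(comment: str) -> bool:
    """Stand-in for an LLM call asking: is the word used as an insult?"""
    if not word_filter(comment):
        return False
    lowered = comment.lower()
    literal_uses = ("named karen", "her name is karen")
    return not any(phrase in lowered for phrase in literal_uses)

print(word_filter("My aunt is literally named Karen"))           # True
print(context_aware_filter("My aunt is literally named Karen"))  # False
print(context_aware_filter("You are being such a Karen"))        # True
```

The toy version makes the trade-off concrete: the naive rule is cheap but generates exactly the adjudication work Huffman describes, while the context-aware check moves that judgment into code (or, in a real system, a model call).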
Listeners, that's a wrap on our annual best-of interview show,
but it's not a wrap on 2024 for Motley Fool Money.
Next week, Asit Sharma and Ron Gross will be on with me
to preview what's ahead for investors in the new year
and the corners of the market they are paying attention to,
in particular. We'll be back with that next week. And if you don't want to wait, check out our daily show wherever you listen to your podcasts. Special shout-out to Rick Engdahl for all his magic behind the glass this week. I'm Dylan Lewis. Thank you for listening. We will see you next time.
