Hard Fork - Our 2025 Tech Predictions and Resolutions + We Answer Your Questions
Episode Date: January 3, 2025. This week, it’s our yearly tech predictions. We’ll review what we got right and wrong about 2024, and tell you what we think is going to happen in 2025. Then we’ll discuss how we want to interact with tech in the new year. Plus, we’ll answer some of your listener questions. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
I'm not sure. Have we ever had a guest who's under active federal investigation?
We've had many who have since been under federal investigation.
I want to get one while it's still going on.
You know? In fact, that's one of my New Year's resolutions.
I want to get somebody who's in severe legal jeopardy to come on the hard fork and just air it out.
Let's get their side of the story.
Yeah, let's get the FBI to send us their pipeline of upcoming investigations so we could do a little advanced planning. Is Elizabeth Holmes still in trouble? I honestly, I would have her on.
Oh, I mean, completely. I need to ask her what happened with the dog. What happened with the
dog? Because she has a wolf dog. She had a wolf dog. Yeah. It died under mysterious circumstances.
Who killed the wolf dog? Honestly, Serial Season 6.
Who killed Elizabeth Holmes' dog?
Honestly, that podcast will get a lot of downloads.
It's true.
A lot of downloads.
That's led to Balto's revenge.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week on the show, it's our predictions.
We'll tell you what we got right and wrong about 2024
and tell you what we think is going to happen in 2025.
Plus, we'll take some of your questions.
Make them good.
All right.
Well, Casey, Happy New Year.
Happy New Year to you, Kevin.
I know last year was very dramatic and stressful.
There were some terrible things that happened.
There were some great things that happened.
But what I can tell you is 2025 is going to be different.
Different how? Better or worse?
It's going to be good and bad, but just different.
Okay. Wow. A bold prediction going into the new year.
Yes. I have a lot of hot predictions this year. And speaking of predictions, as has become our annual Hard Fork tradition,
it is time to check in on our predictions from last year and lay out some new predictions for
this year. Yeah. And I'm really glad we're doing this because, you know, I think both of us
identify as reporters, but I do think that we stray into punditry from time to time. And a
criticism I have of punditry is they don't check in enough on the things that they said were going
to happen and say, hey, did that actually happen? So this week on Hard Fork, we are going to check
in on our predictions and see what we got wrong and if we got anything right. Yeah, it's time for some damn
accountability on this show. Yeah. It's about time. So last year, we broke down our predictions
into confidence intervals. We had high confidence, medium confidence, and low confidence predictions.
So what was your high confidence prediction for 2024? And how did that pan out?
All right. So my first prediction for 2024 was that Threads would overtake X, the former Twitter,
in daily active users. And this one is a bit mixed, Kevin, but I feel pretty good about the
prediction. So yeah, let's talk about this because you made the case to me that this had actually happened or may have happened.
What are the numbers that you're looking at there?
Yeah, so this one is hard to determine with total accuracy because X is now a private company.
They do not publish audited user numbers, so it's sort of very hard to compare.
But what we know is that Threads recently reported that it has 275 million monthly users.
Today, as we record this, it is actually the number two app in the App Store in the United
States, and it has been at or near the top of that chart for about the past month.
And there was some reporting in Business Insider that if you just looked at the U.S. daily
active users, people who are using X versus Threads every day in the United
States, that Threads actually had overtaken X. Now, again, these are estimates. We cannot say
that these are completely true. But I said with pretty high confidence, look, I think Threads is
going to have a big 2024. And I do think you have to hand it to me on that one because Threads did
have a big 2024. Yeah, I don't think I have to hand it to you on this one, actually. I am dubious of the numbers that both X and threads are putting out there.
And maybe we should do like a teeny little dive into these metrics that we're talking about. So
daily active users, the metric that you predicted Threads would overtake X in last year, is basically
the term in the industry for people who log in every day.
Monthly active users, as you might expect, is people who log in at least once a month. Now,
those could be sessions that are very long. People could be, you know, spending a lot of
time on the apps. Or in the case of Threads, which is very tied to Instagram, and often
Instagram is trying to sort of kick you over to Threads with these little, like,
links in your feed. It could be someone who, like, accidentally clicked on a thread, went over to Threads,
it counts that as a session, and then they go back to Instagram where they meant to be.
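To make those metrics concrete, here is a minimal, hypothetical sketch of how daily and monthly active users are typically counted from login events. The data and function names are illustrative only and are not based on anything X or Meta has reported.

```python
from datetime import date

# Hypothetical login events: (user_id, login_date). Not real platform data.
events = [
    ("alice", date(2024, 12, 1)),
    ("alice", date(2024, 12, 1)),   # repeat logins on the same day count once
    ("bob",   date(2024, 12, 1)),
    ("carol", date(2024, 12, 15)),
]

def dau(events, day):
    """Daily active users: unique users who logged in on a given day."""
    return len({user for user, d in events if d == day})

def mau(events, year, month):
    """Monthly active users: unique users who logged in at least once that month."""
    return len({user for user, d in events if d.year == year and d.month == month})

print(dau(events, date(2024, 12, 1)))  # -> 2
print(mau(events, 2024, 12))           # -> 3
```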
Yeah. So what you're saying is these numbers seem like they have probably been juiced a little bit.
And I'm willing to accept that. But I think there's juicing on both sides. And again,
to me, the larger question is like, is X going down and another platform coming
up? And I think the answer is basically yes. And like, to the extent we have numbers, the numbers
are on my side. Yeah, I would sort of agree with that. I think that it has been a big year for
Threads and Bluesky, another, you know, X competitor. But I think that X is still very
relevant. You know, I was thinking about this prediction as we were getting ready to tape the show,
and I was trying to think of anything in culture this year that had sort of originated on Threads
besides the viral post of you at Gay E8 of the San Francisco airport.
And Casey, I literally couldn't think of a single news story or trend in culture that originated on Threads.
So while I do accept that it is probably
growing in part because it's being thrust in front of Instagram users,
I don't think it's nearly as relevant as X even today.
All right. So what I'm hearing is that you're going to be incredibly hard on me during this
recording. So let's hear your-
All I'm saying is, look, I think it's important to have clearly resolvable predictions because
one thing that we did
last year when we made these predictions is we also set up prediction markets on them
on Manifold.
And these are play money prediction markets.
No one's getting rich or losing money as a result of our predictions.
But one thing that I've been yelled at by people over there on Manifold for doing is
not having clear enough resolution criteria.
So I think we should just say for this prediction, we just don't have enough good, reliable data
about the popularity of these platforms.
All right, fine.
What was your high confidence prediction?
So my high confidence prediction from last year
was that a lawless LLM, large language model,
would get to 10 million daily active users.
And I would say for this one,
that it has the same problem
as your high confidence prediction,
which is just that it is very hard to know
given the available data that is out there,
which LLMs are getting how many daily active users.
Yeah, so I accept that.
But to me, the spirit of this prediction
was essentially we may start to see
a bit of a migration away from the big mainstream chatbots like ChatGPT and towards something that was a little edgier and less likely to refuse your request.
And I'm just not sure we've actually seen that.
No, I don't think we have.
I think this one was more wrong than right.
You know, there are some large language models that are less restricted than,
say, ChatGPT, that have become quite popular. Llama 3, the open source AI model from Meta,
it's not lawless, but because it's open source, it's not hard to kind of make your own version
of it, to fine-tune it in a way that makes it much less restricted. And Meta has said that
its AI assistant, which is based on Llama, now has nearly 600 million monthly active users.
All right, so we're trusting Meta's numbers for Llama, but not for Threads. Do we have that right?
I don't trust them for either. But I would say like Grok, the language model from X,
is also pretty uncensored, but we also don't have good usage numbers for that either. So I would say in the absence of better data, this one was also a bust.
All right. Let's go to our medium confidence predictions, Kevin. My medium confidence
prediction was that Google would mostly catch up to OpenAI in the quality of its large language
model, neutralizing ChatGPT's lead. This is part of a bigger
trend: that quality differences will matter less and less, and distribution will matter more.
So this one is a bit of a mixed bag. On the Google mostly catches up to open AI and LLM quality front,
I think the answer is basically yes, they did. I see you have a note here that if you look at Chatbot Arena,
which is where the chatbots compete in various benchmarking challenges, that as we record this
today, Gemini is on top. But did it neutralize ChatGPT's lead? On that front, I think it's much
more mixed, and I sort of think it was wrong. OpenAI said recently that ChatGPT has 300 million weekly
users. If Google was doing any kind of numbers like that, we definitely would have heard about
it by now. So this was actually quite surprising to me that in the end, ChatGPT's lack of a giant
distribution channel like the Google search bar, right, didn't actually matter that much. And this
was a year where ChatGPT's reputation
as like the chatbot that everyone uses just grew and grew.
Yeah, I think that part of it is totally right.
Google may have caught up to OpenAI
in some of the benchmarks
that are used to measure these language models,
but ChatGPT is still the industry leader
when it comes to just how widely referenced it is
in the culture.
I think a lot of people don't even really know about Gemini or if they've encountered it, it's because, you know, it's
been shoved into Google Docs or Gmail or some other Google product that they use. So it does
not seem like Gemini has reached the level of mainstream awareness or usage that ChatGPT has.
And I wonder how much of that has just been that while it does seem to be performing well on these benchmarks, it also had some really bad launches, right?
There was the famous case of search results telling people to put glue on their pizza or eat rocks.
There was the case where it would not generate historically accurate-looking founding fathers.
It kept spitting out racially diverse founding fathers.
So they actually stopped it from generating all images of people for a while.
So in that sense,
I don't actually think that this is a year
where Google caught up.
And I'll be quite interested to see
whether these new benchmarks
that they've been hitting recently in the Chatbot Arena
translate into the product.
Because I have to say, as a Gemini user,
I pay 20 bucks a month to use Gemini.
And I think it is the worst of the three that I pay for, for what it's worth.
Yeah. I mean, I definitely use Gemini less than other models. I do like some stuff that's been
built with Gemini, like Notebook LM. But in general, I've been pretty disappointed by just
the actual quality of the Gemini model. So I'm curious to see whether model quality ends up being
a differentiating factor or whether the models have all kind of gotten good enough for most people to do most of what they want to do.
And so it really will come down to, like, how is the app designed?
What is the distribution strategy?
Those kind of non-technical factors.
All right.
What's your medium confidence prediction?
Well, my medium confidence prediction last year was that white-collar workers would start unionizing to fight AI-related job loss. And this one was a total dud. This did
not happen at all in any of the industries like finance or law or tech that I thought it might.
We did not see substantial union activity related to fears of job loss. Now, we did have the port
strike, which was partly about automation and workers' fears of being replaced, but it wasn't
really about AI, and that's not a white-collar industry anyway. Well, Kevin, I don't think it
was a bad prediction. I just think you might have been a little bit early on that one.
Well, early is just as good as late in this business.
You're a really tough customer today.
Look, I hold myself to a high
standard, as I think we all should.
So I really blew that one.
You're doing great, buddy. Okay.
Final prediction from last year. These were our
low-confidence predictions, so
stuff that we thought there was a remote chance
might happen. And my low-confidence
prediction was that the Apple Vision Pro would succeed enough
to revive interest in mixed reality and the metaverse.
So I'm going to say this was mostly wrong, but I have a couple things I would say in
my favor.
One is that The Information estimated that Apple would wind up selling about 420,000
units of the Vision Pro this year. That is a
rounding error when you compare it to something like the iPhone. But it is enough for about $1.5
billion in revenue or just under that. And if you were in any other company and you had a new
product launch that generated $1.5 billion of revenue for the first product,
you would say,
well, we should at least make another one.
You know?
Yeah, but this is Apple
and everything that Apple does
gets more promotion and marketing
and also has higher expectations attached to it.
So I was also open to the possibility
that the Apple Vision Pro would be a huge success.
And despite the steep price tag, people, you know, you'd be walking around and just you'd see, you know, tons and tons of people with their Vision Pro strapped to their heads.
And like, I barely ever see anyone with one in public anymore.
Yeah, it's not a hit.
But, you know, I do think that interest in mixed reality was revived anyway.
And it wasn't because of the Vision Pro.
It was because of the Meta Ray-Ban glasses.
It got the interest of some of the other big tech platforms,
which are now working on glasses of their own
that are quite similar.
So the metaverse is definitely on hiatus right now.
But mixed reality, I do think,
is poised to continue kind of creeping into our
lives. And I can see a world where it's going to be, you know, maybe a bit unusual to buy a
pair of sunglasses that doesn't have some sort of computer inside. Yeah, I agree with that.
All right. My low confidence prediction from last year was that Elon Musk would get his own
Hunter Biden laptop scandal on X during the 2024 election cycle. And this one, I got to say,
I really nailed it. Yeah, this was a banger, Kevin.
So what I meant by Hunter Biden laptop scandal, if your memory doesn't extend as far back as 2020,
is basically a politically motivated act of censorship taking place on X. Elon Musk and
other conservatives have been worked up for
years about Twitter's decision back in that election to suppress the reach and block links
to a New York Post story about the Hunter Biden laptop because there was a belief that it might
have been part of a Russian intelligence operation and a hack and leak. That kind of thing happened again in the 2024 election. There was a
document, some called it a dossier, about J.D. Vance that was hacked from the Trump campaign.
It's believed to have been linked to Iranian hackers. And the journalist Ken Klippenstein,
who runs a substack, was banned from X for posting links to this dossier. X said that
he had violated the rules about posting unredacted personal information to the platform. But many,
many people saw through that and said this was just because Elon Musk didn't want people reading
this thing. Yes. It also goes against everything he said about how he was going to run this
platform, which was with complete neutrality, is what he promised. He promised he was going to run
it with complete neutrality. And then that was just never true. I mean, the thing that gets me
the most about the story is that, yes, you are absolutely right. And I have not heard one peep
about it since the day after it happened. I still hear people talking about the Hunter Biden laptop
story, have not heard one person talking about the J.D. Vance dossier. Yeah. So that was our roundup of last year's predictions, but we also
have some predictions for this year. We do. For 2025. So Casey, what is your high confidence
prediction about technology in the year 2025? Okay. Now this is a big one. Are you sitting
down? I am. Okay. You know Apple Computer? Yeah. I'm predicting.
People are going to be mad.
I'm predicting that they're going to release the iPhone 17.
I don't buy it.
No, I've crunched the numbers.
Let me walk you through this.
This year, they released the iPhone 16.
Yeah?
That really only leaves one option for them for next year.
What did they do the year before?
I believe it was the iPhone 15.
Yeah.
No.
Okay.
Here's a second one.
I'll give you a second one.
You don't like that one? I'll give you a second one. You don't like that one?
I'll give you a second one, Kevin.
I think that this year,
the AI culture war is going to begin.
What do I mean by that?
The last Trump administration,
we got a real social media culture war.
And the nature of that war had a lot to do with,
are these systems biased against conservatives in particular? Are they
privileging one set of politics over another? Are the employees woke? I think over the next year,
as chatbots sort of enter more and more facets of Americans' lives, we're going to start to see
the rumblings of a backlash here. I can imagine there being congressional hearings about
the way that ChatGPT responds to certain questions, for example. I can imagine frustrated
conservatives using something like Llama to build a right-leaning chatbot that maybe actually starts
to get some traction. I can imagine a big national conversation about the fact that so many
people are now in these somewhat intimate relationships with chatbots, including both
adults and children that we know are doing this. So there's a lot of sort of dry tinder there,
and I don't know what the spark is going to be, but I'm telling you everything is in place
for this to have a moment in 2025. Totally agree. I like this prediction a lot. I
think that we are going to have many flare-ups in an AI culture war in 2025. What would you say is
the one that you want to use as your resolution criteria here? What would cause you to think,
okay, we've had an AI culture war? I would say if there's a congressional hearing about the
response that a chatbot gives. I agree with that. Can't you just picture like a
big poster printed out with like a ChatGPT transcript in the halls of Congress and like
Jim Jordan like yelling about it? Yeah. Yeah. Ted Cruz will say like, I asked
ChatGPT to criticize me and it did. Explain that, Sam Altman.
Oh God. It's like. I can picture it now.
Yeah. Yeah. This is a good high-confidence prediction.
All right, Kevin, give us one of your own high-confidence predictions.
So my high-confidence prediction for 2025 is that a newly released crypto meme coin
will briefly reach $100 billion in market cap before crashing.
Now, is this inspired by the recent success of the Hawk Tuah girl's meme coin?
It sure is, Casey. So I was looking at the news recently, and I saw that Haliey Welch,
who is known as the star of the viral Hawk Tuah meme...
By the way, can you explain that to me? I've always wanted to know what it was about.
Nope. People can go on the internet and look that one up,
but I will not be doing the explaining there.
But she had a digital crypto meme coin called Hawk
that launched in December
and briefly hit a market cap of almost $500 million.
Again, this did not do anything.
There was no value attached to this thing.
It was purely a kind of pump and dump operation.
And it crashed within hours, losing more than 95% of its value.
But I think that we are headed into a second golden age of speculation, of gambling.
The Trump administration is going to be very crypto friendly.
And I think that people are going to take that as a signal to
try everything they can to cash in. What do you think? I mean, when it comes to crypto and meme
coin market caps, I basically believe anything is possible. I also think and you sort of highlighted
this, but one of the big themes that is unfolding in American life right now is the rise of gambling
in more and more places. I think it's quite harmful and destructive. But when the Trump
administration comes in, I do think it is going to be all bets are off on this gambling stuff.
And so, yes, we're going to see many more speculators. And I do think that that is
going to at least briefly juice a lot of market caps. So, yeah, good prediction.
Okay, Casey, what is
your medium confidence prediction for 2025? Okay, so my medium confidence prediction is that 2025
is the year that Waymo goes mainstream. So this is something you and I have been talking about
a fair bit recently. I think I've said to you that to me, when you step into a Waymo,
that might be the first moment that you actually understand how AI is going to transform everything. There's something about a car driving itself that will sort of cause things to fall into place for you. And until now, Waymo has been extremely limited. You can use it in San Francisco, you can use it in Phoenix, and now LA. But pretty soon, you'll be able to use it in Atlanta and Austin. And they just announced
that they're coming to Miami as well. And so you're going to see more of these cars in more
big urban centers. And I think as that continues, it's going to become a pop culture phenomenon.
I think we're going to see memes. There are going to be so many viral clips everywhere.
And if you're looking for a resolution criteria, maybe it's that there is
a Waymo sketch on SNL, right? Like to me, that will sort of be a moment where you think, okay,
there's something happening here. Yeah. I'm surprised there hasn't already been a Waymo
sketch on SNL, but that goes to my feeling about this, which is that the real mainstream spur for
Waymo will be when it goes to New York City, because most of the media still exists in New
York City. And a lot of people, you know, I was just in New York, and people there just genuinely
do not understand how many Waymos there are on the streets of San Francisco, or how unremarkable
it has become to walk around the streets of San Francisco and see just dozens of cars driving
themselves. Yeah, I do agree. And that probably is like the number one reason why SNL would not do a sketch about this.
But I don't know.
I'm just going to say, keep your eyes on this, right?
I mean, like I can imagine Waymo showing up
in rap lyrics next year.
Just like, it's going to start to feel
like it's a little bit more of the culture in 2025.
I agree with that.
All right, give us a medium confidence prediction, Kevin.
My medium confidence prediction for 2025
is that Apple will acquire Snap.
Okay.
Now, this is something that people have been talking about
for years.
Snap is, of course, the company that makes Snapchat.
And it has been, I would say, sort of, you know,
chugging along for a couple of years now.
It's not really growing much.
The stock price is down about 23% year to date as of today.
They did some layoffs earlier this year. And I would say this is a company that has always had
really good product ideas and really creative use of technology, but that has never really
managed to build it into an amazing business. And I think that pressure from investors,
from employees could force them to look for a buyer. I also think that it's going to be much easier to do tech
deals and acquisitions during the Trump administration than it was during the Biden
administration with Lina Khan at the FTC. And I think Apple and Snap are sort of culturally,
they share some DNA, right? Evan Spiegel, the CEO of Snap, is a big acolyte of Steve Jobs.
I would say they have similar design philosophies.
And Apple is also reportedly interested in developing smart glasses to compete with the smart glasses being made by Meta and Snap itself with its spectacles.
So I think this would make a lot of sense for both Snap and for
Apple, and I would not be surprised to see it happen in 2025. Yeah, I mean, this is one that
has made sense for a few years, at least in some ways. Snap has struggled a lot as a standalone
company. It's been a while since they had a true big hit project. They do continue to be one of the default modes of communication
for American teenagers, and that is an enduring source of strength for them, but it's been pretty
hard to build a big business around it. If I'm Apple, my number one question is, do I want to
be the default way that a bunch of teenagers communicate? Because it truly introduces so many
annoying questions around privacy, security, safety, CSAM, right?
All sorts of really tough stuff that all of a sudden Apple is going to have to operate and
manage and answer for. So I think there's a reason that Apple sort of likes keeping these social
products at arm's length where it can continue patting itself on the back for being privacy
warriors that does nothing but keep everyone safe all day while allowing all of these apps to, you know, roam free in its app store. But all that said, can I see it happening?
Sure. All right, Casey, what is your low confidence prediction? All right. So my low
confidence prediction is that X, the former Twitter, will be merged into XAI. So XAI is
Elon Musk's AI company. He currently plans to expand his giant supercomputer,
which he uses to train Grok, to something like a million GPUs. And already, Kevin,
XAI has been valued at $50 billion. You may remember that Twitter, when he acquired it,
was only valued at $44 billion. That's wild to me because XAI does not really have a product yet.
No. So how is it valued at $50 billion? It is a wish and a dream. And people look at Tesla's valuation and they
think, well, if he can do that for cars, surely he can do that for AI. And I'm going to confess
my ignorance here, but I did not actually know that XAI was a standalone company. I thought it
was part of Tesla. Well, it is a standalone company. Now, I understand your confusion,
though, because Elon Musk treats all of his companies
as if they are all related already.
So, for example, this year,
when he was building this big supercomputer,
he had a bunch of GPUs that had been reserved from NVIDIA
that were supposed to go to Tesla
to help Tesla work on self-driving.
And Elon Musk said,
no, actually, NVIDIA, just send all those over to XAI, right?
And this is the sort of thing that like in
normal times and normal circumstances would cause shareholders to revolt and say, Elon Musk, like
Tesla and XAI are not the same company. You can't just buy a bunch of GPUs for one company and give
them to another company. But it's Elon Musk, so there are no rules. So anyways.
So why would they, why would he merge X into XAI?
Because I think the primary value for X going forward is
just going to be to generate training data for XAI, that this is just going to be the sort of
subsidiary that exists to help XAI grow bigger. I think the total value of something that, like, a
truly powerful AI model could provide is just much greater than what a diminished social network like
X could provide. So might as well just bring them all in-house. Now, why wouldn't he do this? Well, as I said, he already treats his companies like
they're related already, and he probably just won't see the point in merging the two. But,
you know, if X continues to decline in some ways, and it's feeling like a hassle in some ways,
I can see him just saying, you know what, from now on, this is just a subsidiary of XAI.
And what would the actual
ramifications of this merger be? Like, if he already treats all his companies as if they're
one big company, what would be meaningfully different if he did merge X, the social network,
into XAI, the AI company? I think that that is the right question. And the practical answer might be
not very much in the short term. In the long term, though, it would signal to me that he had finally
decided to get more serious about the AI stuff and was going to stop wasting quite as much time
posting on social networks. Yeah, or like create an AI agent to do that for him. Yeah, exactly.
All right. My low confidence prediction for 2025 is that at some point during the year,
OpenAI will officially declare that they have achieved AGI, or artificial general intelligence. And there are a few reasons I think
that this might happen in 2025. For starters, they want to get there first, right? This is a company
that is very competitive, that is very motivated by wanting to reach these big milestones in AI
ahead of their competitors. And Sam Altman, the CEO, has said basically that they think they are
getting quite close to AGI. He said that superintelligence, which is sort of the step beyond AGI, may not be far off. So I could see them, at some point in 2025, just coming out and saying, we've done it, we've built AGI. And the benefit
of that for OpenAI would be that it would release them from their current deal with Microsoft,
because under the terms of that deal, once OpenAI reaches AGI, as defined by its nonprofit board,
Microsoft effectively loses access to any of its future models. It doesn't have to
share them with Microsoft, which would effectively mean that OpenAI gets out of this deal altogether.
And why do you think they want that?
Well, I think that they are eager to, you know, reduce their dependency on Microsoft. There's
been some reporting that there's been some tension between Microsoft and OpenAI about
things like compute allocation. But the real reason that
I think this could happen in 2025 is that OpenAI is undergoing this restructuring process. And
there's been some reporting recently in the Financial Times that OpenAI was weighing whether
to basically get rid of this clause in this deal with Microsoft that would close off Microsoft's access to its models once
it achieves AGI. So this could go a couple ways. The most likely way that it might go is that
Microsoft wants to strike this clause entirely so that they can keep using OpenAI stuff even after
they say it's AGI. But I think my low confidence prediction is that that will sort of blow up
somewhere in the negotiation, and then instead, Sam Altman will just come out one day and say, we've done it. We no longer have to give you
our models. It's interesting. I mean, you know, so I also read the reporting that said that OpenAI
was thinking about getting rid of this clause. And to me, the fact that that's under consideration
suggests that OpenAI still needs Microsoft. They need access to Azure and the data centers and
everything else. So I'm less inclined to believe that this is going to happen because I think that OpenAI and Microsoft still need each other.
But Microsoft is also developing its own proprietary models.
And I just don't know how that sort of works in the long term.
If you've got Microsoft that's developing its own models, but also giving compute and sort of, you know, data centers to OpenAI. And OpenAI is giving
all of its models to Microsoft, which is then using it to improve it. So it just feels very
messy and like it may explode at some point. Here's how it works. You know, Microsoft also
makes its own proprietary web browser called Edge, and no one uses it. So that's how that's
going to work. Okay. All right. Fair point. And our standard
disclosure, as always, the New York Times has sued OpenAI and Microsoft for copyright infringement.
So those are our predictions for 2025. And if you want to kind of play along with these,
you can log on to Manifold Markets. I will go in and create markets for each of these predictions.
And you can bet with play money
on whether you think they will come true or not.
When we come back,
I resolve to share my New Year's resolution
with you, Kevin.
Me too. So, Casey, in addition to making predictions at the end of the year,
we also do our resolutions for the new year.
Because we're always striving to better ourselves.
That's true.
So let's just quickly recap our resolutions from last year
and see how those went.
And then we can make some new ones for 2025.
All right, Kevin, remind me what you resolved to do this year.
So my resolution for 2024 was more delight, less fright.
Basically, I wanted to stop doom scrolling
and warring against my phone and
trying to like get it out of my hands as much as possible. And I wanted to make it into a more
delightful experience. Yeah. And for context, if you're a newer listener, one time Kevin just put
his phone in a box in an effort to stop using it. So that's the kind of person Kevin is. So this has
been a sort of podcast long journey for him. How did it go for you this year? It went great, honestly.
I have much less guilt about my phone use this time this year
than I did this time last year.
I've got, my phone is now presenting me on my home screen
with my delights folder of photos
of things that make me happy.
And my screen time has stayed about the same.
It has not gone way up or way down as a result, but I do just have a much better feeling about my phone.
And I think that's good.
And did you have to do anything special
to make this happen?
No, basically, I mean, I did rearrange my phone.
So I put some apps that make me happy
in this photos widget on my home screen.
I remember you said you had put my face in your delights folder.
It is actually.
I have a photo of you.
Yeah.
I mean, it's one of like 500, but every couple of weeks it shows up and I, you know, I quickly
scroll away, but yeah, you're a delight.
So Casey, what was your resolution last year?
So my resolution was when you're watching YouTube, watch YouTube.
And here's what I-
Don't just like leave it on in the background.
That was what you had been doing before.
Yes, because, you know, I grew up in this house where like whenever there were like
ads on TV, we would always mute the television.
Like, you know, we would only turn on the TV when we were watching TV.
And I always thought this was the right way to do it.
And people that just sort of, you know, let the TV go on all day, you know,
were doing something wrong.
And then I woke up like halfway through last year
and I realized that I was doing this with YouTube.
I would just be at my desk and I would open one video.
I would immediately stop listening to whatever it was,
even though it was a video I had chosen to watch.
And I would play a video game.
I would read a browser tab.
And I thought, I am truly just
destroying my own attention and this has to stop. So I did pretty good about this. I would give
myself like an 85 out of a hundred for the most part. I really did. Now I do think I started to
slip a little bit toward the end of the year. I think I was stronger through, let's say the first
three quarters of the year than I was this last quarter. A big thing I did was I stopped playing video games on my laptop, basically.
Like, I used to have these, like, very simple games that I would do just to waste a little bit of time.
Marvel Snap, one we talked about a few times.
I stopped doing that.
So now it's like if I'm going to watch video, I watch video and I try not to change my attention too much.
Now, do I look at my phone while I watch TV?
That's a different story
and maybe a resolution for another year.
All right, well, I'm glad you got that under control,
at least for most of the year.
I'm curious if you think that being more intentional
about YouTube has made you also more intentional
about other things that you do on your phone
or your laptop.
Do you feel like you kind of command your own focus more?
I think that this year was pretty good for me
in terms of doing more single tasking and less multitasking.
Where I feel like I succeeded was in moving away
from that place of,
I am just going to let technology mindlessly steer me around, right?
I think the big exception is
anytime you're looking at a feed-based social network,
you are letting an algorithm drag you around.
But-
TikTok, you mean?
For example, or Threads or Bluesky,
which I also spend probably even more time on.
But when I'm not doing that,
like I'm relatively locked in, you know?
My biggest attention-related challenges are
I find it pretty difficult to get through a lot of books.
I find it difficult to read academic research papers.
I just feel like the attention leeching out of my brain when I try to do that.
Other stuff I feel okay about.
Yeah.
Well, that is a good way to segue into our New Year's resolutions for 2025.
You know, I enjoy the process of making New Year's resolutions.
I don't hold myself to some impossible standard.
I'm not one of these people who, like, you know,
sort of needs to accomplish it or I feel like a failure.
But these are, I would say, they're more intentions than resolutions.
But did you make any resolutions about your tech use for next year?
Yeah, so I have one. So I would like to get medium good at meditation using AI.
And here's why.
This year, more than others, I struggled with feelings of burnout,
which was really surprising and challenging for me because I truly love what I do.
I do not want to do less of what I do.
But there were moments during the year where I was like, oh gosh, I feel so tired.
And so I thought, I'm going to do what people have been telling me to do for years,
which I have just avoided, which was meditate. And when I started to do this, instead of reading a
book, I went and I used a chatbot, in this case, a Claude from Anthropic. And I said, hey, I want
to get started with meditating. What should I do? And it gave me a bunch of instructions. And I went and I tried it. And then I came back and I talked
to it again. And I said, hey, I tried that. Here's what I noticed. And then it helped me refine and
iterate and say, hey, why don't you try this differently? Or you might want to try this different
kind of meditation. I really enjoyed that feedback loop. So what I would like to do next year is to
continue doing this because while the AI piece of it is interesting and makes
it a little bit techie, meditation, of course, is the least techie thing in the entire world.
Right. It's one of the oldest hobbies in existence.
Exactly. And like one of the most time-tested methods for just sort of like improving your
mental health and your well-being. So to me, this feels like a good marriage of like a true goal
that I have in my life, which is to like manage those feelings of feeling burnt out and give it just enough of a tech twist so that I, as a tech reporter, think,
aha, I'm doing something very cool and futuristic. Now, can I ask you something about your AI
meditation practice? So I have also struggled to meditate. I've never really successfully
had a consistent meditation practice. And I was very optimistic when ChatGPT's advanced voice mode
came out that I would be able to have it basically be my meditation teacher. And so instead of just
typing to it, I could actually say like, could you lead me on like a 15 minute guided meditation
about this thing that I've been stressing out about? But it can't really do it because it doesn't sort of know how to insert all the right
pauses. Like it wants to talk to sort of like fill the space. And so it's not actually built in a way
that has made it a good meditation partner for me. I know some startups are trying to do more AI
meditation coaches, but do you ever use it for that, for literally like leading you on your
meditation? Or is it just sort of talking to you about an experience that you've had on your own?
It's the latter. I think about it as a journal that talks back to you, right? Which is like
kind of what being coached in anything feels like, right? You think about you're learning
an athletic skill, something that I haven't done in years and might never do again, but you have
a coach who's standing there with you and says, hey, go try this thing. And you do it and you come back and the coach says, next time do it this way. That is
like essentially what the AI is doing. And because it is this general purpose technology, it can coach
you pretty well in a lot of things. And one of the things I like about this is it gets around a common
and true criticism of these chatbots, which is that they make a lot of mistakes or they hallucinate, right?
All of that is true.
But if you just want to like become a novice at meditating,
it can handle that.
And actually, it's like really good at it.
And so I think it's important to find
those chatbot use cases where it's not mission critical,
no one's life or career is at stake,
and yet it can provide you this meaningful help
because I think that actually is the truest story of AI that is unfolding right now is this
expanding set of a positive and helpful and increasingly more powerful things. And if you're
not sort of encountering that, I do think you're missing a big part of what's happening in Silicon
Valley right now. Yeah, I agree. And I've been using AI to teach myself stuff this year a lot with pretty good success.
I was trying to get really good at poker this year.
That was like a hobby that I picked up.
And I found that sort of like what you said, like is very good for getting you from basically
knowing nothing to sort of having a sort of beginner's understanding of a topic.
If that thing is sort of widely represented on the internet in the training data.
But I've found that there's a sort of limit to it, right?
Where if I get good enough, if I want like, you know,
more advanced strategy advice for poker, it can't actually help me with that.
So are you worried that with meditation,
you're going to kind of reach the limit of what Claude can do for you?
Yeah, well, first of all, it sounds like you should try a no-limit poker bot.
That's a poker joke. But yeah, I absolutely will. And, you know, let me anticipate another
criticism that I may get for this suggestion, which is, Casey, why don't you read a dang book?
That's a good point. I can and should read a book. In fact, you know, my boyfriend recommended me
some good meditation books to read, And I probably will read them next
year, honestly. But the thing about a book is that it can't talk back to you. You cannot ask
questions of a book, right? You can't do that with an AI. So I love books. I'll continue to read
books. But like this is something different. And it's really engaging. It brings you in because
you are having a conversation. And that's just a powerful thing. So will I hit a limit? Yes. But
like, that's okay. It's okay to hit those limits. That's just when you go deeper and you know what to do when you
want to go deeper. Well, I hope that resolution succeeds. I don't like you feeling burned out.
We need you strong and kicking for all of 2025. Thank you, Kevin. My 2025 resolution is to be
the poster I wish to see in the world. All right, I'm excited to hear about this
because I feel like you have had
a somewhat distant relationship with posting this year.
Yeah, so I was once a very active user
of many social media platforms.
I posted all the time.
I was constantly on there,
arguing, posting jokes,
putting links to my stories and other people's stories up there.
Spreading vaccine misinformation.
Yes.
Yes.
Snuff films.
Disinformation campaigns.
And then I can't exactly tell when it happened, but maybe a year or two ago, I just kind of ran out of posts.
And I felt like, you know what, I can promote our podcast, I can promote stories that I'm
working on once in a while, I can go on and, you know, spend a few minutes doing a back and forth
with someone. But I just kind of got tired. And I stopped really posting. And, you know,
I think there are good reasons for that. I think I'm not the only person who's had this experience. But at some point recently, I began to feel like a hypocrite,
because I spend a lot of time complaining about social media and how all these platforms have
their problems, and the people who are active on them are terrible. And, you know, they're spreading
all this garbage. And at a certain point, I started to feel like, you know what, it is my job,
if I want social media to be better,
to roll up my sleeves and get in there
and start posting what I want on social media.
And what do you want on social media?
So it's a mix of things.
Like I think part of what I want
is just more casual engagement that is not self-promotion.
Like I wanna go,
I wanna promote stuff that I like on the internet.
I want to, you know,
do the kind of thing that was much more common
on Twitter a decade ago,
which was just like,
here's some interesting stuff that I'm reading.
Or here's a news story that just happened
and maybe a comment that I have about it.
Like that feels almost archaic in this day and age
for people to do.
But I think that is one of the best ways
to use social media
is to tell the people in
your network, like what you are paying attention to and give them some sense of what you're
thinking about it. So that is something that I have not done in a while, but I am going to get
back into it. Not because I think I want my brain to be more hooked into social media, but I just,
I feel like I can't complain about it unless I'm prepared to do something to fix it. You know, my case for doing this is I just think it's the fastest way to get your finger on the
pulse of the conversation. How are people understanding certain subjects? What are
the sort of third rails that no one ever touches? And what are the things that people can't stop
talking about? The only way to really get a handle on that is to get in there and post.
And, you know,
I mean, it can be hard. I've been obviously canceled twice on Bluesky this year, once in
Portuguese, and it takes a toll. But I think there's a way to do it that's really enjoyable.
Yeah. And a way to do that, that takes for granted the fact that if you're an active
poster on social media, like people are going to get mad at you. Like that is going to happen.
You're going to post something. It's going to be a little out of pocket or a little risque, or people are just going to take it the wrong way.
And so part of what I'm trying to do as part of this resolution is just kind of prepare myself
for the inevitability that something I do online is going to piss people off. And that when that
happens, I just have to sort of greet it with poise and with understanding and try to do better the next time.
Yeah, comes with the territory.
If I ever told you what I think about cancel culture, no, what's something that would get me canceled?
Like the top 10 dogs that your followers own that you think should be given away.
Because you don't think they're good pet owners.
Yeah, I didn't say I wanted to be the shit poster
that I wanted to see in the world,
but I do think that there is a kind of defensiveness
to the way that I and a lot of other people
act on social media these days,
because we've seen so many people
just blow up their lives and careers
by posting in an unhinged manner
that we've sort of retreated into this very comfortable,
kind of boring
use of social media. And so I'm going to spice it up a little bit.
Yeah. The thing that I like about social media is that it just lets you show up and say,
well, this is interesting, which I actually think is most of a journalist's job.
Yeah.
Like, well, this is interesting.
But I do want to get your opinion on this, as I set out in this resolution, which is that what
I don't want to have happen is to rot my brain by spending too much time on social media and by over-indexing on what is happening on social media.
Because you and I have both seen plenty of examples, including some people in our own industry, of people who just spend way too much time on social media,
who start to think in posts, whose every reference point becomes some meme or some controversy on social media and who kind of lose contact with
reality. So as I'm going into this, be the poster I wish to see in the world year, how do I keep
that from happening to me? I think, you know, if you find yourself posting more than three or four
times a day, check in with me. Okay. Okay. Something bad might happen. Like I might end up
running SpaceX and Tesla.
You might wind up being the White House crypto czar.
Yeah, a lot of bad things happen.
No, I think it's, you want to,
and honestly, it's not really the,
some people do fall into like posting too much territory.
Actually, I think that's quite rare.
I think the more common thing is you just sort of can't stop looking at the feed.
And that's something that only you
can really decide for yourself.
But yeah,
try to get a sense of like, when you know that you've had enough, when do you have a sense of
what the conversation is that maybe that's like the heuristic is, if you feel like you have a
sense of like the contours of the conversation that day, whatever it might be, great. Now you
can move on. Great. And if I do slip into a social media brain rot, I want you to tell me.
I absolutely will. All right. Those are our resolutions. God help us all.
That's great. Based on this, we're going to be better people next year. I think so. Yeah.
When we come back, we'll answer some listener questions like,
why is Casey so annoying? Hey, I'll ask the questions around here. Well, Casey, we have one more thing to do
on our very special beginning of the year episode.
This is where we commit a ritual sacrifice to the gods
to protect us in the year to come. Yes. And we should also answer some questions from our
listeners. Yes. And, you know, we always love hearing from our listeners. They send us so many
good thoughts and questions every single week. And so what better way to kick off a fresh year
of hard fork than by finding out what's on their minds? Yes. So we want to do something special,
never before done today, which is to bring in one of our producers, Whitney Jones, to help us sort through
all of our reader mail. So Whitney, welcome to hard fork. Hey, welcome to hard fork.
We're breaking the fourth wall here. So tell us what our listeners have asked and what we should
respond to. Yeah, I feel like I should have a giant mailbag, but actually I just copy and pasted these all into a document.
They sort of fall into different categories, and so I want to take them sort of in categories.
And the first one is just responses to segments that we've done on the show.
There was one recently after the Polymarket election betting segment that we did at the beginning of November.
Listener Anne Lachey wanted to know more about VPNs
that you guys mentioned.
You mentioned that Americans were using these VPNs
to get around restrictions on Polymarket
to bet illegally on the election on the site.
So Anne wanted to know, she writes,
how many people are using VPNs?
Is it mostly for downloading movies, music, media
without having to pay?
Is it for gambling, as mentioned?
Is it for political disturbances?
Is it for hijacking Wi-Fi?
I don't know, she writes.
Are companies concerned about it?
What is the future for VPNs?
Is anyone cracking down on it?
What happens if you get caught using one?
That is so many questions about VPNs.
Yeah, I want to be clear.
In the future, you're limited to one question.
No, we will try our best to answer this one because I think it is a good one.
Casey, what do you know about VPNs and how common they are and what people use them for?
So virtual private networks have been around a long time, and they are a pretty big market.
I found one estimate that said that the market for them is well in excess of $40 billion.
And I can just say anecdotally that when I go on YouTube and I'm watching videos,
one of the most popular ads that gets inserted into creator content is ads for VPNs.
So to answer one of these questions, yes, I do think the primary reason that people
use VPNs is to get around geographical restrictions on what kind of media they can
consume. Yeah. And just if people have not used VPNs before, what they are is basically a kind
of means of making it look like your traffic is coming from somewhere else, right? So you're
basically renting a server located someplace else, and it sort of sends your traffic through that
server to the website or the
service that you're going to. So if you're on a streaming site and you want to make it look like
you are in London, but you're in California, you can use a VPN to accomplish that.
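Here is a rough, hypothetical sketch of the routing idea described above, using a plain HTTP proxy in Python's requests library rather than a real VPN (a VPN also encrypts and tunnels all of a device's traffic, not just one request). The relay address below is a made-up placeholder, not a real service.

```python
import requests

# Hypothetical relay server; a real VPN provider would supply its own endpoints.
proxies = {
    "http": "http://relay.example.net:8080",
    "https": "http://relay.example.net:8080",
}

# The destination site sees the relay's IP address and location, not yours.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```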
Yeah. Now, it is the case that there is a political dimension to these, and often we see
in authoritarian countries, VPNs become quite popular. In fact, once the war in Ukraine started and Russia became
a sort of country non grata on the global stage, various apps pulled out of Russia and the number
of VPN downloads in Russia went through the roof because everybody wanted to use the internet that
they were used to without all of the new geographic restrictions that had been placed on them. So
that's why I do think they can be a really important technology to sort of help people in those countries. Yeah. And I like we should say
also that VPNs, there are many, many legitimate uses for them. I use one when I'm on a public
Wi-Fi network. It makes it harder to sort of intercept your traffic. Corporations use them
for employees to sort of keep their networks more secure if you're giving people remote access. So lots of people use VPNs for lots of totally normal and probably pretty privacy-conscious
reasons.
Yes.
And if you want to know, can you use a VPN to hijack your neighbor's Wi-Fi?
I don't really think you can do that.
I mean, if you can hijack your neighbor's Wi-Fi, you don't need a VPN to do that, really.
Right.
But they are useful in some cases for things that are not licit. So if you want to gamble online in a jurisdiction where
that is not allowed, you can sort of route your traffic through a place where it is allowed
and do it that way. Although if you win a lot of money, you may not have an easy time cashing that
out. And a bunch of gambling sites do actually block VPN traffic.
There are ways to sort of figure out
which traffic is coming from VPNs and block those.
And that's all we know about VPNs.
Thank you, Aaron.
We hope you're happy now.
Yeah, have fun gambling from Lithuania
or wherever you're going to do it from.
All right.
Another thing we got a lot of email about
was the interview you did with Steven
Johnson about Notebook LM. And Bob Flint wrote recently asking, how do I know that you guys
aren't bots? Thanks to you, I'm now familiar with Notebook LM. How can I know you're not
using similar technology to produce your podcast, albeit with a certain degree of human intervention?
Well, let me ask you a question, Bob. How do I know that you're not a bot?
How do I know that you didn't use ChatGPT
to write that whole thing?
Two can play at this game, bud.
But Casey, I like this question
because there are actually companies
that are building AI podcasting tools
that will allow you to clone your own voice.
And we actually do have sort of a laborious process here
where if we mess something up
when we're originally taping the show,
we will go back and record a little insert.
And some eagle-eared listeners
have sometimes picked up on these.
We try to make them sound as smooth as possible.
But that kind of thing could be easier to do
if we just had AI clones
of our voices and our producers could just kind of change what we say. But we don't do that now,
do we? No, we don't. Now, you know, is it true that due to the existence of these technologies,
it now does just get harder to tell which of the media that you're consuming is synthetic in some
way? Yeah, it does mean that. That's one of the big concerns I have
about the rise of AI, and we have to keep our eyes on it. In the meantime, all I can do is tell you
something that GPT-2 would never tell you, and that's that 2 plus 2 equals 4. So hopefully that
gives you some confidence. Wait, I'm being told that we're now at GPT-4, so I'm actually not sure.
Yeah, we should do like a CAPTCHA at the beginning of every episode
to just prove that we are humans.
No, this is interesting actually.
And I'm glad you wrote in with this, Bob,
even though I think this question was mostly a joke
because a thing that I am starting to hear
is that people are sort of getting suspicious of podcasts
that sound too much like the Notebook LM podcasts.
Have you heard about this?
I haven't heard about this.
So I was recently talking with a friend, and they were telling me about a podcast
they had recently started listening to. And they were like, I think this might be a Notebook LM thing.
It sounds very similar. And it wasn't. I know the people who make this podcast, but because of the
way that these podcasts sound with the kind of back and forths and the disfluencies in them,
there is starting to be kind of this question about like, is this a real person?
Yeah. I mean, maybe the only answer is that, like, all podcasts will have to move to three or more
people so that it no longer sounds like the two-person podcast.
Oh, you don't think Notebook LM is working on that?
I hope not. Well, the whole team just quit. So hopefully that'll help.
That's true. Okay, what's next, Whitney?
Another category of questions that we get a lot
are just general questions about different technologies,
like why is my chatbot behaving in this particular way?
I've got one from a listener, Raphael Holmes.
I had a question about AI image generators
that I wanted to run by you guys.
He says, hi, Kevin and Casey.
I was trying to show my octogenarian dad
the wonders of generative AI,
and his request was to try to draw a
loon in a bathtub, and it
turns out... And it said a loon in the bathtub, and it drew
Kevin! Yay!
No, I'm sorry. Go ahead. It says,
as it turns out, DALL-E has no problem putting a terrifying five-foot loon in a bathtub.
See attached images, one of many, but it can't get the correct proportions of the two
relative to one another. Even with tons of wrangling, still only monster loons. My
sister-in-law tried Gemini and got very similar failures. Would you try it with a few
image generation tools? The world needs to know. And so I have a bunch of images here for
you guys, if you want to have a look at these. These are the ones that the listener, Raphael, sent
in to us. This is a giant loon in a bathtub. There's this one,
which you can see the prompt says, a tiny loon that appears almost invisible in a huge bathtub,
and this is the image that comes back. And, like, for reference, how big is an actual loon?
Well, I'm not a big loon guy. Loons appear to be a bit bigger than a duck. Okay. But in these images, they're like filling up the whole bathtub.
Like these are giant loons.
It's either a very big loon or a very tiny tub.
Yeah.
So the images you're showing us, Whitney, are these AI-generated images like the ones that our listener described of just these like monstrous loons that are taking up most of the bathtub.
Correct. Correct.
Okay.
I went to Meta AI.
Meta AI did the same thing.
So do we know why this happens?
Yes, we do know why.
Here's why.
When you use a text-to-image generator,
it's trying to find the statistical average that satisfies your prompts, right?
It's kind of like a sculptor.
It's trying to take away everything from the image that is not a loon in a bathtub and get to the like median image that it can, you know,
sort of conceive of. And the thing is, loon in a bathtub is probably not a very high volume
request. I don't think a lot of painters out there have painted a lot of normal size loons
in normal size bathtubs. And so this is just a classic case
of asking a model to do something
that it is not well-suited to do.
You know, when you are using a text-to-image generator,
you are throwing a wish into a fountain.
And sometimes the wish is granted.
Many times the wish is not.
It is not your problem.
You're doing everything right,
but the model cannot do this yet.
Maybe someday it will, but it can't yet.
I think it will be able to do it soon. And I put this question to Claude about why
these image generators are having trouble doing a loon in a bathtub, and it drew me
a diagram of a loon in a bathtub. It actually coded a diagram of the common loon at 26 to 28
inches and the standard bathtub at about 60 inches. So this kind
of thing is possible, but it, you know, I think that the reason that this is having trouble with
current models is because of these, these things called contextual scaling issues. Basically,
if two or more objects do not frequently appear together in the training data of whatever this
system was trained on, it may not sort of understand the proportions of one and the other.
And so often people will notice that these image generators,
they have a hard time with proportions, with sizes,
and that will probably improve somewhat in future models.
But right now, I would not use it for anything mission critical
when it comes to loons and bathtubs or any other sort of like juxtapositions.
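For reference, a to-scale diagram along the lines of the one Kevin describes takes only a few lines of code. This is a rough sketch using the 26-to-28-inch loon and roughly 60-inch bathtub figures mentioned above; it is not the actual diagram Claude produced, and the shapes are crude stand-ins.

# Rough, to-scale sketch of a ~27-inch common loon next to a ~60-inch bathtub,
# so the proportions come out right. Dimensions are the rough figures cited above.
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, Ellipse

fig, ax = plt.subplots(figsize=(8, 3))

# Bathtub: roughly 60 inches long, 20 inches tall.
ax.add_patch(Rectangle((0, 0), 60, 20, fill=False, linewidth=2, label="Bathtub (~60 in)"))

# Loon body: roughly 27 inches long, floating near one end of the tub.
ax.add_patch(Ellipse((20, 14), 27, 9, facecolor="gray", edgecolor="black", label="Common loon (~27 in)"))

ax.set_xlim(-5, 70)
ax.set_ylim(-5, 30)
ax.set_aspect("equal")
ax.set_xlabel("inches")
ax.legend(loc="upper right")
ax.set_title("Loon vs. bathtub, drawn to scale")
plt.show()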
Also, this is a great time to just learn how to draw a loon in a bathtub. You have the power, listener. All right, what's next?
The next one says, a couple of different listeners asked about what's hot and what's not when
it comes to data sources for training AI right now. One listener, Asa Strong, says, do they want
satellite data? Ben Stone says, are they using data from home assistants and security cameras?
So any new news on training data and what's in demand right now?
So, I mean, in short, they want all the data that they can handle.
Do we know what data is hot?
No, not really.
And the reason is because they don't tell us what data they put into the models anymore.
They used to, but then they stopped. Let's just say some certain newspapers got a little bit irritated about what they were reading about the training
data that was going into those models. And so now there's a lot less transparency. I wish there were more.
It would really help us understand these models. If I could tell you one kind of data that is
becoming increasingly popular in this world, though, it is video data. There is a lot of
thinking among AI researchers that the final frontier of developing models that have something approaching
human or even superhuman understanding is the kind of knowledge in the world that you only get
by moving through the world. And so they're starting to ingest a lot more video to understand
motion and depth, reasoning, everything else that you can learn by just sort
of fixing cameras on the world, running the footage through models, and trying to understand what's
happening there. Yeah. I do know a little bit about the training data that is in demand right
now, because unlike Casey, I've done reporting on this. Oh, let me guess. There was a huge demand
for Kevin Roose columns. They couldn't get enough of them over there. No, but I did talk to someone who is working on a project where they're basically going into university libraries and archives and digitizing a bunch of stuff there that has not been previously digitized.
Because the sort of low-hanging fruit, the stuff that's online, the stuff that's in these sort of repositories that are widely used, that stuff is, you know, good, but has already been used.
And so now they're looking for new sources, these AI companies.
And a lot of what they're finding is that there's just a lot of stuff that hasn't been digitized.
And if you can go into a library and just put everything there online,
you will maybe improve the resulting models.
That's great.
Maybe you should write a story about that,
because I think it's time you saw the inside of a library.
What else do we got?
Mitzi had a question about
security and voice cloning technology.
She writes,
both of my investment institutions
use voice verification
to ID customers on the phone.
The password is literally my voice
saying, my voice is my password.
In this era of AI,
it seems foolhardy.
Am I being paranoid?
No, you're not being paranoid. Tell your bank to knock that off.
Go to a new method of authentication immediately.
Yeah, this is a known issue. There was a story in 2023 in Vice by Joseph Cox about how he broke into a bank account with an AI-generated voice. These voice verification systems, they are not secure,
and it is very easy to clone someone's voice using just a small snippet of audio from that person.
So yeah, I would move as quickly as I could away from using your voice for verification.
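To see why this is fragile, here is a toy sketch: voice verification generally boils down to comparing a voice embedding from the call against an enrolled voiceprint using a similarity threshold, and a good AI clone trained on a short snippet can land inside that threshold. Everything below, embeddings and threshold included, is made up for illustration; it is not any bank's real system.

# Toy illustration of threshold-based speaker verification being fooled by a clone.
# All numbers here are invented; real systems use learned embeddings of real audio.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.85  # hypothetical acceptance threshold

enrolled    = np.array([0.21, 0.80, 0.55, 0.10])  # stand-in for the stored voiceprint
real_call   = np.array([0.20, 0.78, 0.57, 0.12])  # the actual customer calling in
cloned_call = np.array([0.22, 0.76, 0.56, 0.09])  # an AI clone trained on a short snippet

for name, sample in [("real caller", real_call), ("cloned voice", cloned_call)]:
    score = cosine_similarity(enrolled, sample)
    verdict = "ACCEPTED" if score >= THRESHOLD else "rejected"
    print(f"{name}: similarity={score:.3f} -> {verdict}")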
Cool. Do you guys want to close out with one ethical hard question?
Sure. Yes.
Let's do it.
Dylan writes, here's my ethical dilemma. Someone I was recently dating, but now no longer,
left their HBO Max account logged into my computer after a movie date. I only realized it was still logged in after we had stopped seeing each other. I was going to log out of
their account, but then ended up binge-watching House of the Dragon, which I had been dying to see.
Was that wrong? And if not, could I watch The Last of Us next?
Well, I mean,
it was wrong to watch
the second season of House of the Dragon
if you'd watched the first season
because it was really bad.
And I thought,
I really thought
it should have been canceled.
But as far as, you know,
the ethics of using
a logged-in HBO Max account,
I say go nuts.
So I have feelings about this
because I've been on both sides
of this this year.
Because I,
well, not in the dating sense. Wait, did you break up with Dylan?
No, but I left my Amazon Prime Video account logged in at an Airbnb like several years ago and only
discovered it this year because I was checking my credit card statements
and I found a BritBox subscription
and other stuff that I had just not signed up for,
like movie rentals that I had not made.
So someone had been purchasing things
on my Amazon Prime Video account
at this Airbnb that I had stayed in several years ago.
So I would say the ethics do not extend to purchasing,
but I would say if you're just watching the stuff that is like included for free,
I would say that's kosher with one exception. What's that? Which is that if you are watching a
show on a pilfered streaming account that the owner of that account is also watching,
you may not skip ahead. You may not watch episodes of that show until the owner of the
account has also watched them, because you know what happens. And this has happened to me,
and I've actually accidentally done this to people. You're borrowing their account,
you're using their account, and you start watching a popular show that just came out,
like the show about chimps on Netflix or something on Max. Yeah, we love the chimp show.
And they are watching it at the same time.
And by watching these things on the same account
at the same time,
you're actually screwing up all of their timestamps.
When they go into resume watching,
it's going to take them
to a totally different episode of the show.
So don't do that.
Just keep your hygiene consistent
when you're sharing these accounts.
I think that's good advice.
And you know, I think it's really brave of you
after having recently made fun of me
for having wine on tap at my house
to have just admitted that you ignored
thousands of dollars of purchases of videos
over the years because you didn't even notice.
You literally don't even notice
when people are renting thousands of dollars
worth of movies from your Amazon Prime account.
Wow.
Must be nice, Bruce.
Must be nice.
And if you are the person who bought BritBox and rented movies on my Amazon Prime video
account at this Airbnb, I will track you down.
This is not over.
Can I ask a follow-up question?
Yes.
Do you think the ethics of borrowing logins changes if you're no longer in a relationship with somebody?
No, I don't.
Who is the victim here?
Who is being harmed?
It's the victimless crime.
And in fact, I've heard of people actually
sort of continuing to voluntarily split accounts
with exes after they break up.
So, you know, you might even not need to hide it.
And here's what else I would say.
As long as there is a single logged in HBO Max account
between the two of you,
there's a chance you could get back together.
There's a fiber of something there
that could turn into something actually really special.
Ooh, that's, I hadn't considered that,
but it might give you some hints.
If you know they're watching House of the Dragon,
you might just spark up a conversation.
Right, like imagine you've broken up with someone
and then you go back into your HBO Max
and they're halfway through a movie
and the name of the movie is I Really Missed My Ex.
All of a sudden, the wheels start turning.
Hmm, maybe I should text that person.
Maybe there was something there.
So yeah, keep watching it.
But The Last of Us, that's a hard watch. I'll say it, that's a hard watch. I couldn't do it. It was too dark. Yeah.
Did I tell you about my idea for a sequel to The Last of Us? No. It's called The Second to Last of Us.
Anyways, well, on that note, Whitney, thank you so much. And we should also just say it's delightful to have a producer on the show.
It is.
Our team, Whitney, Rachel, Jen, Caitlin, Chris, Ryan, everyone works so freaking hard all year to make this show.
And we are just so, so appreciative.
So thank you, Whitney.
And yeah, don't let this be the last time.
Yeah, thanks.
Come back anytime.
It was fun to be on.
Hard Fork is produced
by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Video production by Ryan Manning and Chris Schott. You can watch this full episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us, as always, at hardfork at nytimes.com. Thank you.