Hard Fork - Our 2026 Tech Resolutions + We Answer Your Questions

Episode Date: January 2, 2026

Happy New Year! We’re kicking things off by sharing our tech resolutions for 2026 and reflecting on how we fared with our social media and meditation goals from last year. Then, we open up the listener mailbag and answer your questions on data centers in space, who’s to blame when a customer service A.I. bot lies to you, and whether it’s OK to deepfake Santa into your home security footage.

Also, get your very own “Hard Fork” hats, now available at the Times Store: https://store.nytimes.com/products/hard-fork-baseball-cap

Additional Reading:
What I Learned About Productivity This Year
Data Centers in Space + A.I. Policy on the Right + A Gemini History Mystery
Shuffling Some Whimsy Into Poker and Blackjack

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Transcript
Starting point is 00:00:00 I have a story about technology. What's that? Saving my ass. Let's hear it. So I was on a flight across the country yesterday with my kid, and I made a huge mistake, parenting-wise. I did not understand that when you download things off of Netflix to use on a kid's iPad, for example, during a long flight, they expire.
Starting point is 00:00:20 They don't stay there forever. So we got on the plane, and I pulled out the iPad, which is the only way to take a flight with a three-year-old. If other parents are doing this without it, God bless. God bless. For me, not going to happen. Can't happen.
Starting point is 00:00:33 Pull out the iPad, get his little, like, Bluetooth headphones on. I go to Netflix, nothing downloaded. Everything's expired. Oh, no. I'm thinking to myself, this is going to be the longest five hours of my life. Absolutely. Then, I learn via my seatmate that you can now connect your Bluetooth headphones to the TV on the back of the seat in front of you. That finally works?
Starting point is 00:00:54 That finally works. And it saved me. And so to the person at the United Airlines IT department who figured out how to connect the Bluetooth headphones to the in-seat TV, I salute you. You saved me. Absolutely. Is that something you could have figured out in 2011? Sure. But look, we've all been busy.
Starting point is 00:01:13 And you got to it now, and that's what matters. And we say, thank you, United Airlines. Thank you, United Airlines. Beautiful. I'm Kevin Roose, a tech columnist for the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, 2026 is here, and we're sharing our New Year's tech resolutions.
Starting point is 00:01:34 And then we're taking listener questions, Kevin. We'll answer all your questions about AI, space data centers, and whether you should deepfake Santa into your home security footage. You know, I've been wondering about that. Well, Casey, it's a new year. It's a new year. Happy 2026 to you and your family, Kevin. Yes.
Starting point is 00:01:57 And notice that he didn't say it back. Go on. Happy 2026. Thank you. Are you supposed to wish people a happy year that just ended or a happy year that's beginning? I think it's more traditional to wish them a happy new year, yeah. Yeah.
Starting point is 00:02:12 Not a lot of happy old year cards flying off the shelves. Not a lot of happy old year, but I'll start a new tradition if you want. Well, happy old year and happy new year. Thank you so much. What a joy it is to get to do this show with you week after week, year after year after year. Likewise, my friend. And we have something special today, which is that we are going to do our New Year's tech resolutions, as we do every year, and then we're going to answer some listener questions. Yeah, and we truly love doing this. I wish we did it more, to be honest with you,
Starting point is 00:02:39 but we have so many great questions from you, our devoted Hard Fork listeners, and we thank you for them, and we can't wait to dive into them. So before we set out our new resolutions, let's check in on what we resolved last year at this time. Casey, your resolution last year was to, quote, get medium good at meditation using AI. And I want to ask you how that went. Well, Kevin, I'm afraid I would have to categorize this one as a major flop. I'm not going to front. When we came into this studio last year to record our resolutions, I had the best of intentions. I had recently begun a meditation practice. And I had found that after I meditated, I could go back to Claude, which I was using in this case, and say, hey, I noticed this thing while I was meditating.
Starting point is 00:03:27 Give me some guidance maybe for the next time that I do that. And Claude was very good on this front. The thing is, and this is the great mystery of meditation, every single time I did it, I felt very good. However, my instinct to meditate again was non-existent. You know what I mean? Where it was like, I would have to get to a state of feeling incredibly overwhelmed to say, you know what? I'm going to take a timeout. I'm going to sort of, you know, go to the spot in my house where I meditate, and I'm going to do this.
Starting point is 00:03:53 Every time I did it, I felt great. It just never became a habit. So I do not blame the technology for this one. I have many, many friends who also have tried to start meditation practices over the year, and they seem to crash on these very same rocks. If I were to give myself any credit at all, I would say that I did just sort of develop other strategies to, like, address the feelings that were leading me to want to meditate last year.
Starting point is 00:04:19 So if I'm happy about anything at all, it is that I feel like I got sort of those aspects of my life under control, although I am, of course, quite embarrassed that I failed this hard at my resolution. What are these other techniques that you found? Do they involve ketamine? I'm going to do something strange and be sincere,
Starting point is 00:04:32 and yes, they did involve ketamine. What I found was that in 2024, when I was feeling burned out, the thing that wound up kind of pulling me out of that was, like, one, things that you would expect, like, you know, taking breaks, like going on vacation, putting the phone down. These things you could probably guess. The best thing I did for myself, though, was I feel like I just kind of rediscovered my
Starting point is 00:05:01 sense of purpose. I went to a conference with you about AI, met a lot of really interesting people, kind of got some new ideas percolating in my head, renewed my sense of, like, what my job is in this moment as a journalist. And that did more for me than, like, any individual meditation session, which isn't to say that the meditation wasn't great, because it was, but it was that thing, that feeling of a sense of purpose, that wound up benefiting me more than anything else. I love that for you. I know you were burned out. I hope that you are less burned out. I know you work very hard, and you deserve a break, and you deserve to feel passionate about what you're doing, and so I'm glad that that... I actually, like, something similar happened to me this year,
Starting point is 00:05:39 which is that I feel like going to events like the one you brought up, and also writing this book, have really connected me again to like what I love about the work that we do. And we're just, I feel so lucky to get to do this. And I think that's, yeah, that helps a lot with burnout. You can do a lot more when you're excited about what you're doing. Yeah, absolutely. So, and I, for what it's worth, I do not fault you at all for failing or flopping at your New Year's resolution.
Starting point is 00:06:06 I have set out a goal many times of trying to meditate and develop a practice, and I have failed every time. And every time, I go to my more experienced meditator friends and I say, I feel terrible about this, and they say, there's no failing in meditation. You just haven't succeeded yet. Oh, I like that. I like that very much.
Starting point is 00:06:26 I will say that, you know, my boyfriend and I can be somewhat competitive and he's quite good at meditating. So he went on a meditation retreat this year and he came back and I was like, how did that go? He's like, well, you know, I saw through space and time and connected with the source energy of the universe on my first night there. I was like, well, great job, sweetheart.
Starting point is 00:06:43 Yeah, I'm good at other things. All right, let me now shift the spotlight away from myself, Kevin, and ask you to remind us, what was your resolution from last year? So my resolution from last year was to be the poster I wished to see in the world. Basically, I was tired of just sort of lurking on the internet and complaining about the state of social media, and I wanted to engage more and post more of the kind of stuff that I personally think is good, and I should be honest about this one, too, and say that I think
Starting point is 00:07:19 I only did a medium good job at this one. Well, I'm going to interrupt you and say that from my perspective, you really succeeded at this because I feel like I saw your tweets popping up all over the place. I feel like you were having conversations with interesting people online, and I was often jealous of the conversations that you were having because it seemed like you were having a lot more fun out there than I was. That's really interesting, because I feel like my primary win toward this resolution this year was that I did post more, and I posted more freely.
Starting point is 00:07:56 Yeah. I was less worried about, you know, trying to sort of anticipate all of the potential objections or people who might get mad at any one thing I would post, and I would just kind of let it rip. And I would do this not all that frequently, but when I did it, it was always, you know, very gratifying because that's how I used to use social media. And then we kind of, like, you know, got all these followers and, like, people started, like, criticizing journalists in new ways. And it just felt for a minute like it was unsafe to post. And I think what I've done
Starting point is 00:08:28 this year is reminded myself that, like, the stakes actually aren't that high. It's not the end of the world if people get mad at you. And you can just kind of say what you think. Yeah, particularly on X, where everyone who is still there is a goblin. And so there really is just, because here's the thing: no matter what you post on X on any given day, something infinitely worse will have been said and will have gotten much more attention than it. So, you know, if you want to bring in your AI take on X, you're going to be just fine. Yes. I will also say that one other thing that I have found helpful, and I hope this is helpful to other people out there who may be feeling some trepidation about posting online, is that it is actually not possible to please everyone in your audience. And so I have started to feel okay with, like, niche posting, even if I know that, like, only
Starting point is 00:09:14 10% of the people who follow me are going to understand or care about the thing I'm posting about. I'm trying to do that anyway. I think it is salutary for me to sort of put my feelings out into the world, to get some feedback on those, and to sort of stress test ideas before I put them into a column or onto a podcast. So I feel like for next year, I will meditate whenever I want to, if I want to, but I would say that this has moved out of the category of resolution for me. How are you thinking about posting in '26? So this gets to, do you want to start with our New Year's resolutions? Oh, wait, is your new New Year's resolution connected to this resolution? It is sort of tangentially connected to it.
Starting point is 00:09:48 Let's get into it. So my resolution for 2026 is to get good at short-form video. Because here's the thing. Everything is TV now. This is not a point that I came up with. Derek Thompson recently had a very good post about this shift, where every platform, every social media experience, is now sort of becoming dominated by video, and specifically short-form video. And I have been observing this from afar for several
Starting point is 00:10:18 years, feeling like, oh, someone should actually, like, get good at this, who is a journalist because the people who are good at it are generally not journalists. The people who are going viral on these platforms are generally not doing it because they, like, want to get good, accurate, true information out into the world. There's a lot of low-quality, short-form video out there. And so I think a lot of journalists have been kind of repelled by the whole medium because this is not a place where serious people go to do serious things. At least it doesn't seem that way from the outside. This is like a place for goofy, you know, rage bait and stunts and people trying to go viral by doing like sort of catering to the lowest common denominator. And I understand that reaction.
Starting point is 00:11:00 I have very ambivalent feelings about the rise of short-form video as a replacement for text. We are people of the word. We are writers. But I do think this is something that I expect to continue. I think that we are learning that there are just many more people in the world who like getting their news and information in the form of short videos than in the form of 1,200-word newspaper articles. And so my resolution is to explore and experiment and try to figure out something that I can do on short-form video that feels good and authentic and not like I'm trying to be someone I'm not, or, you know, latch on to some trending TikTok thing,
Starting point is 00:11:40 but, like, something that is actually high value and journalistic, and experiment there until I figure it out. Now, I feel like everyone who succeeds at this short-form video game has some kind of gimmick. Have you thought about what your gimmick might be? I'll give you an example. Someone showed me one the other day where there's a guy who just eats strange foods in public. So he'll film himself, like, eating a whole onion on a plane, and he'll just sort of, you know, take video of the person
Starting point is 00:11:59 next to him without their consent, and then he'll get a lot of likes for that. So is that the sort of thing you're thinking about doing? I don't know. I don't have a real strategy in mind yet. I think more of what I want to do is just experiment and see what feels good. I'm not that interested in going viral. Like, it's not my primary, you know, job to be an influencer. But I think you and I, you know, are both pushing into video now. This has been a part of the podcast growth strategy, not just for us, but, like, across the industry.
Starting point is 00:12:39 And I think a lot of podcasters feel like they're kind of being dragged into this video thing, like, oh, we have to do this because that's where the audience is and that's what they want. And I am trying to take a more open-minded and curious approach and saying, like, yeah, if the world is moving in this direction, it's good for us strategically, but also probably for our career longevity to find a way to do this that feels good and energizing and to sort of figure out what the high-quality short-form video landscape looks like and what it should look like. That's very interesting.
Starting point is 00:13:10 I will say, I have had a similar thought about whether this is something that I should look into for the same reasons. And I actually have a very short list of short-form videos that I want to make, and I do think about making them all the time. And so maybe I should just do that. But then sometimes I just get hung up on, like, is me making this going to accelerate the cooking of a child's brain? You know what I mean?
Starting point is 00:13:29 Like, am I going to make something that's, like, good that, like, somebody's going to swipe past, like, on the way to get their eating disorder content? I don't know. I just have, like, some concerns about the sort of full-throated embrace of the video revolution. I get that. But I do think that that impression of short-form video is based on, like, what is out there now and not what could be out there. And I think if you are looking at, like, you know, the 30 worst shows on TV, you're going to have a very different impression than if you're looking at, like, the highest quality stuff that's on TV. I think we need to, like, avoid confusing the medium and the message. That's fair. Well, I'm
Starting point is 00:14:05 curious to see how this turns out, because I will say a lot of times when I see journalists doing direct to camera videos where they're like talking about their story, there's just something about that format like combined with the earnestness of the average journalist, which is so wonderful in print, that does come across like a fifth grader giving a book report. And I think that's like a really challenging tone to make succeed in the realm of short form video. So I want to encourage you to shitpost and to troll and to clown. And I think if you can sort of combine those things, you may have something. All right. Well, that's my resolution for next year. My other one is more, that was more of a professional resolution. I do have a personal one, too, which is that I am
Starting point is 00:14:45 going to be where I am, in the broadest sense. Like, I think one thing that has stood out to me, you know, we've spent the year, like, talking with important and successful people. And one thing that always strikes me when I meet these people is that they are never looking at their phone when they're talking to you. And we were at our dinner last week, in fact, and I can't say who it was with, but it was with someone who is very busy, very successful,
Starting point is 00:15:15 presumably has people making requests of them at all hours of the day. And we sat down to dinner with this person for, like, two-plus hours, and I did not see them look at their phone one time. No. And to me, that was a lesson:
Starting point is 00:15:34 that you actually don't have to know what's going on in all of the pockets of your life at once. And I think I am very bad at monotasking, at being in the place where I am. You know, I'm at home, I'm cooking dinner, or I'm cleaning the house. I'm listening to a podcast. I'm catching up on some texts or emails. I find it very difficult to just, like, be present
Starting point is 00:15:56 with the person or in the situation that I'm in. So I'm not saying I'm going to use my phone less, but I do want to compartmentalize it a little bit so that I actually have an easier time, like, focusing on the thing in front of me. Interesting. Have you considered meditating? I've heard it can be very effective in achieving that objective. Something to think about. Okay, Casey, what is your resolution for 2026? Okay. This is a big one, Kevin. You're going to have to hold me accountable to this. But here's the resolution. For the next year, I am going to
Starting point is 00:16:38 try not to make any meaningful change to my productivity system, because I think I finally cracked it, Kev. Really? I think I've cracked it. Okay, so I want to tell you about this, and I hope you don't find this too boring. Well, can I just say, for listeners who may be newer to the show. Yeah. You are the world's twitchiest and most sort of promiscuous user of productivity software. That is correct. You will try any database, any personal knowledge graph, anything that promises to make you 5% more productive, you will spend two to three months using it and then
Starting point is 00:17:05 ditch it for something else. Yes. If I read about it, I will install it, and there's a very high chance I will buy a lifetime subscription to it for no reason. So you are becoming monogamish with your productivity stack. That's the idea. And look, the whole reason to have a productivity stack is to accomplish a set of goals. And so I think if you want to be serious about this, the first thing you have to ask yourself is like, well, what are your actual goals? And over the past year, I feel like I figured out what I actually want out of the system, and then I built the dang thing. Now it's just kind of working for me. So I want to see if I can stick with it. Tell me what it is. All right. So I'm going to tell you a little bit about my system. I happen to run my system in an
Starting point is 00:17:47 app called Capacities. I do think you could do a version of this, though, in a lot of other apps. Here's what I need out of a system. Number one, I need a place where I can write a
Starting point is 00:18:26 daily journal. This has just become something that's really important to me. I sit down in the morning, I have a cup of coffee, I empty my brain of whatever's on it, it's a mix of personal stuff, professional stuff, but this just kind of clears the decks for me, clears my head, puts me in the right frame of mind, okay? Number two, I need a very lightweight task management system. I need a place where I can put my to-dos, and it so happens in Capacities, I can have that right in the same place where I have my journal every morning. So that's kind of nice. And then we get to
Starting point is 00:19:20 number three. And this is where I actually feel like I'm at the frontier a little bit. Okay. So as a journalist, Kevin and I, we're tracking all kinds of different little narrative threads, right? Are we in an AI bubble? Is Apple ever going to figure out its AI approach? What is the leading frontier model of the moment? And we read stories about these things every day. And for me, those stories that bubble up can become the beginnings of columns, or they could become the beginnings of a story that I might want to report myself. The problem is, how do I keep track of them? Earlier this year, I read a blog post by a guy named Andy Matuschak, and he had the idea of what he calls blips, which are essentially just little notes that might be a single sentence, such as, we're in an AI bubble, and that's the beginning of something that you
Starting point is 00:20:02 might flesh out over time. So what I have done inside Capacities, Kevin, is I have just been creating these blips. As new stories come along that speak to whether we might be in an AI bubble, I add them into that little blip. And then inside the little daily journal page that I have in Capacities, I've set up a live query. So every morning, it randomly selects five of these blips and shows them to me. And this is the key. Because you can create as many little notes as you want. You're going to lose track of them. You're going to forget to update them. But I now have a system that is doing random spaced repetition. And so as I'm doing my little journal,
Starting point is 00:20:16 I'll see the little note that says we're in an AI bubble. And I will think, oh my gosh, I just saw, like, the craziest deal that Nvidia signed this week. I'm going to go in and I'm going to add that to the list. I've been doing this for about four months now, so I'm not brand new to this. But I have to say, it has made researching my column so much easier now because things come up. I know exactly where to go to look to find the past five stories that I saw about this. And I just feel like it's truly been making my life easier. So in this moment, I could not tell you one thing that I do not have out of my productivity system. Like, I actually feel like I have the thing that I want. And so for the next year, I am just going to try to use it. Okay. This is progress. Yeah. I think I would be happy if this one stuck
Starting point is 00:20:44 because, among other things, I think you need some stability when it comes to this part of your life. Yes. And it's time for you to settle down. And make an honest woman out of Capacities. Yes, and I love this for you. Do you have any fear that the AI tools that sort of exist in this sphere are going to get so much better over the next year that you will be forced to abandon your beloved Capacities-based system and switch to something new? It's a great question, because I do think that there is still room to integrate AI into journaling a little bit more, right? Like, I think a journal that talks back to you is a really interesting thing. Capacities has some sort of AI integrations. They say they plan on doing a lot more,
Starting point is 00:21:31 so I'm just going to kind of cross my fingers that that happens. Can I imagine an AI journal coming out in the next year that I'm definitely going to want to try? Yes, and, like, I probably will try it. But ultimately, day-to-day, like, I'm not trying to have, like, you know, hour-long, you know, chatbot interactions where I'm really trying to get to the bottom of something. So I think I can just kind of stay where I'm at. Hmm. Yeah.
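For listeners who want to try a version of this, the core of the blips routine Casey describes — short one-sentence thesis notes, each collecting related stories, with a random handful resurfaced every morning — is simple enough to sketch in a few lines. This is a loose illustration of the idea only, not Capacities' actual data model or query API; the note titles and data layout below are made up:

```python
import random

def daily_review(blips, k=5, rng=random):
    """Pick up to k blips at random to resurface during the morning journal.

    `blips` maps a one-sentence thesis (the blip) to the list of stories
    collected under it so far.
    """
    titles = rng.sample(sorted(blips), min(k, len(blips)))
    return {title: blips[title] for title in titles}

# Hypothetical blips, in the spirit of the examples above.
blips = {
    "We're in an AI bubble": ["Nvidia signs another circular deal"],
    "Apple still hasn't figured out its AI approach": [],
    "What is the leading frontier model right now?": [],
}

for title, stories in daily_review(blips, k=2).items():
    print(title, "-", len(stories), "linked stories so far")
```

The random sample is what keeps old notes from silently going stale — the effect Casey calls random spaced repetition.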
Starting point is 00:21:53 That's really interesting. I am going to run a natural A/B test with you on this, where I'm going to continue to flail around with no coherent productivity system in particular: my notes app, emailing myself, voice memos, little scribbles on scraps of paper. And we can compare notes at the end of the year and see who's been more productive.
Starting point is 00:22:16 I think that sounds like a great plan. I'm excited about it. All right, Casey. Those are our resolutions for this coming year. When we come back, we will hear from our listeners and many of the questions that they have been asking us. We'll try to answer them. Let's see what's on their minds. Yeah. Well, Casey, it's time to open up the mailbag. It is, Kevin. What better way to kick off the new year than by hearing what is on our listeners' minds.
Starting point is 00:23:03 Yes, we truly have the greatest listeners in the world. They surprise and delight us every time they send us something. And we really love hearing from you, and we wanted to end this episode by answering a few of your questions. So here to help us out, we have Hard Fork's senior producer Whitney Jones. Hi, Whitney. Hi, everybody. Hey. Great hat. It's good to be back. If you're watching this on YouTube, you can see that Whitney's wearing a very handsome Hard Fork hat that I believe is available for purchase at NYTimes.com. It is. I just purchased mine this week, which is a little bit ridiculous because the old stock from the previous hat ventures is all sitting at my desk, and so I have a whole bunch of them there. You have 500 hats at your desk.
Starting point is 00:23:42 But only one of these, which are now available. Where do people get those if they're interested? You go to the New York Times store online and you type in hard fork
Starting point is 00:24:02 hat, and this will pop up. Great. Perfect. We'll link it in the show notes. It's a perfect gift for this upcoming Christmas. Yes, 2026. No, too early. Don't let Christmas sneak up on you this year. Okay, well, Whitney, what do you have for us? So I have the first... These are just a couple of quick ones to get us warmed up here. Okay. The first letter is from Cody, from Taiwan. Cody writes,
Starting point is 00:24:23 Love your show very much, but I am still confused with the name Hard Fork. What does it stand for? I will be very happy if you could explain it to me. Thank you so much. Sure.
Starting point is 00:24:31 I'll tell you what it stands for. Free expression. Equality, justice, and goodwill towards mankind. Yeah. Yeah.
Starting point is 00:24:42 Well, it is a name that we came up with in 2021. And a thing to know about 2021 is that crypto was the biggest story in technology. You wouldn't believe how huge it was at the time. And when we pitched this show to the New York Times,
Starting point is 00:24:58 we thought it was going to be a show mostly about crypto. And so we were thinking we should have, like, some sort of crypto term that we could put as the name of the show. Yeah, I'll maybe tell a story that I haven't told before about this, which is that the original name for Hard Fork was going to be Not Going to Make It, or NGMI, which was at the time something that crypto people would post on social media a lot, like, if you weren't part of the crypto revolution, you were not going to make it. And I just thought it'd be very funny to start a podcast every week with, like, hi, I'm Casey. I'm Kevin. And we're not going to make it.
Starting point is 00:25:30 Okay, so that was what we pitched. And then the Times lawyers looked into it, and Slate magazine had written one column under the name Not Going to Make It, about crypto. They never wrote a second edition of this column, but on that basis, the Times was like, we don't want to pick a fight with Slate.com. And so we chose Hard Fork, and I have to say, I'm actually, like, much happier that we chose that instead of Not Going to Make It. For the lawyers. Yeah, New York Times lawyers. Because otherwise this show would be very cringy. I would hate saying that every week. Yeah. But yeah, needless to say, we love the name, and we'll never change it. So, Cody, to answer your very specific question: a hard fork is a term in crypto programming, blockchain programming. Basically, it is when you fork a chain
Starting point is 00:26:16 like Bitcoin or Ethereum, and fork it in a way that renders previous additions to that chain obsolete. So, like, basically you are taking something that exists and you are starting over. And it's usually because a disaster has happened. Like, there's been a horrible breach, like there's been a huge theft. And you say, essentially, we are going to rewind back in time and start over. And in 2021, it felt to us like that was what was happening in Silicon Valley: they were saying, hey, this whole social media era, it's over, and it's going to be crypto from here on out. Well, and I think there were rumblings about AI.
Starting point is 00:26:48 This was, like, sort of before ChatGPT, but this was sort of when things were starting to heat up. And I think we sort of collectively thought it would be a good name, because it would signify this sort of break between the old Silicon Valley and the new one. And that part, I think, has panned out the way that we expected it to, even though I think if we were starting the show today, we would probably not pick a crypto-related name. Yeah, that is probably true. Although, as Kevin said to me when we were having this discussion, almost five years ago now, and I'll never forget when he said this. He said, it doesn't matter what the name of a podcast is. And it sounded so wrong to me at the time, but I have to say you were completely right about it. It's true. Like, names just become names at a certain point. Like, everyone when Google first came out was like, what a stupid name for
Starting point is 00:27:30 a company. That'll never work. And look at them now. Huge. Huge. All right, Whitney, next question. Next question. This one is a bit more of a moral, ethical question. Also another question to sort of, you know, prepare for Christmas next year. Good. This one came in over the holidays from Brett Cochran from Acton, Mass. He says, hi, Kevin. I guess, Casey, you don't exist in this question. I'm sorry.
Starting point is 00:27:58 As a fellow dad of young kids, you should be able to empathize with my moral quandary. I'm considering using AI to add Santa to security camera footage from inside my home to show my children on Christmas morning. They are three and six. I work for a security company and have cameras all. over our interior that I'm testing for work. I have one pointed at the Christmas tree and fireplace, an excellent backdrop for AI Chris Kringle.
Starting point is 00:28:23 However, I'm dealing with much soul searching if this might be taking AI too far. Am I taking work away from an actual Santa impersonator whose livelihood depends on this season? If my children misbehave, would I be a bad father if I added Crompus instead? Would love to hear your and Casey's take on this. Oh, there you are. Love the show. P.S. before and after image using nanobanana is attached. And I will say it looks very realistic.
Starting point is 00:28:50 We got Santa kneeling down by the tree, stuffing presence into a sack, or taking them out of a sack, I suppose. So Brad's question is, can he use AI to add Santa to his security footage to show to his kids? Casey, what's your take? Look, I think it's clear. The only ethical solution here is to hire a human Santa impersonator to break into your house to capture that footage and to show it to it your children
Starting point is 00:29:17 to terrify them. And I think if you take any other approach, shame on you. What do you think, Kevin? Well, so I have mixed feelings about Santa as sort of a cultural phenomenon. In part because, well, so before I had a child, I had this idea
Starting point is 00:29:33 that I was going to be an enlightened parent. And when the subject of Santa came up, I would use it. I was actually, my plan was to pivot to try to get my kid excited about shipping logistics. I would say, you know, I get that you're excited about Santa, but let me tell you about the miracle of container ships
Starting point is 00:29:52 and how our goods and presents actually get from the factory to our house. Now that I have a three-year-old, I see that that was not perhaps the best plan. He's less interested in that than you might have guessed. Less interested in containerization than I would have predicted. And so I've been forced to make other tough calls about what we are telling him and not telling him about Santa. I will say this does not bother me one bit.
Starting point is 00:30:15 I think this is a very funny gag. I am a little curious about the fact that you have so many security cameras installed around the interior of your home, but I won't ask any further questions about that. Maybe it lives in a rough neighborhood. He's testing them for work. Yeah. No, I think this is fine. Keep us posted on how it goes. And, yeah, I don't have any problems with this.
Starting point is 00:30:33 Listen, I think it's fine, too, but here's what I'm going to say. There might be some blowback if you give your children the idea that, when you show them video, they can't actually separate truth from fiction anymore. Like, once they realize what you did when they were kids, I think, you know, there's going to be some trust issues there that they might be talking about in therapy. Yeah, just stick with the conventional telling your kids that Santa comes down the chimney and puts the presence under the tree. Let's not stretch credulity here by adding some AI. When I found out, you know, obviously I was very sad that, you know, Santa wasn't real.
Starting point is 00:31:06 What? I was, yeah. No, wait, what did you say? You know what, never mind. Let's move on. Okay. Next question. Next question.
Starting point is 00:31:15 From disgruntled optimist. They write, I am mind-boggled at the hopes and dreams we load into AI while I watch extremely basic technology fail. For context, I work at a large, well-capitalized public company in the pharma industry. I see both of these worlds literally every day. I understand that we have to invest now for technology in the future. It's just really crazy to see large enterprises. like mine, announce massive AI initiatives for R&D when the Wi-Fi is not working. No joke, I tethered my work computer to my personal hotspot for two hours while mandatorily in
Starting point is 00:31:51 the office last week. Who is working to reconcile these two worlds? Are these corporates just hoping I burn out so they can replace me with AI? If I hear one more person say that an AI agent can do, quote, that, I'm going to scream. What do you make us? Well, I fear that nothing may have made our listeners scream more in 2025 on the Hard Fork podcast, because we are people who believe that, like, AI systems can accomplish tasks, right? I think there is a view here, which I've talked about on the show, which I call the New York view of AI, which is what can't it do, right? You look around, you see all the examples of it failing all over the place, and those are real things, and they are extremely annoying. And if you are in a workplace where you have
Starting point is 00:32:35 a boss who says, hey, I want you to start using AI for everything, and gives you no additional I understand why you're very frustrated. And I do think that is silly. And I do think you should essentially, you know, try to work around whatever bad boss that you have. I think Kevin and I believe that a role that this podcast should serve is to just gesture at what AI can do. Not because we think it is always good. In fact, we think it's often bad, but because we think it is going to change your life in ways that are good and bad. And we believe that we can't do a good job of that unless we're telling you about what it can do at least half of the time, right? We're living at a time when the capabilities of these systems are improving.
Starting point is 00:33:14 We think that that is absolutely going to accelerate this year. And so at least when it comes to the hard for a podcast, I hate to say it listener, but you might be screaming this year when you hear his talk. No, I will sort of defend our listener here and say that I think this is like a valid and legitimate frustration. It is very obvious that the companies, especially the big sort of legacy companies, that are trying to sound very future-oriented and forward-thinking are, like, announcing all these buzzy AI pilots and programs
Starting point is 00:33:44 when they still haven't, like, they're still using database software from 2003. Right. And I think there's some tension there, and I think it's real and fair to call that out. I think this goes to the point that we've talked about on a few shows this year, which is, like, there is no AI-shaped hole in most big companies. It does not fit easily into the work that you're already doing, and it does not fix every problem. It does not fix the broken printer.
Starting point is 00:34:10 It does not fix the Wi-Fi issue. It does not fix the sort of, you know, 1,000 mundane things that can go wrong with an IT department on any given day. And it is not mutually exclusive to say that AI is and can do incredible things, even within an organization that still struggles with some basic technology stuff. And so I think that's the question is, like, Can the big corporate customers of this stuff figure out ways to use it without breaking more than
Starting point is 00:34:39 they're fixing and how quickly does that all happen? I've become very pessimistic about the potential for large, slow-moving companies to adopt this stuff quickly. I think that is not my theory of change here for how this stuff kind of takes over the economy, but I'll be interested to see how it happens. All right. Next question, Whitney. Next question from Johan asks about our data centers in space episode.
Starting point is 00:35:05 Johan says, When you mused about the advantages of building data centers in space, you only focused on energy, cooling, and avoidance of NIMBY issues. Am I too paranoid to think that the very big and obvious reason is that these data centers will not fall under any earthly jurisdiction? Data centers in space are outside the reach of any standard legal code, so any earthly regulations do not apply. It's like what happens in space stays in space.
Starting point is 00:35:29 space. Is this too far-fetched? Here's what I say about this. I think that one of the considerations for the companies that are trying to build data centers in space are talking about building data centers in space is that it is just easier to imagine putting up a giant data center in space than having to like go through all the permitting and the land use and the zoning and getting the permission of city council to put up the data center on the physical earth. I do not think this is the primary consideration. And I don't actually think that the, um, that space is as lawless as it may seem. We have something called the Outer Space Treaty that was signed in 1967 that says that
Starting point is 00:36:10 nations are responsible for what their citizens do in outer space. So if, you know, Google were to build a data center in space and something, you know, horrible were to happen for it, they would be actually liable for that. And the United States would be liable as Google's sort of host nation. Let me throw one more wrinkle into the mix, Kevin. If you've read DC Comics, you know that space law is maintained by the Green Lantern Corps. And that if some of these jokers, pun intended, think that they can just put a rogue data center up in space and that there are going to be no consequences, they're going to have Hal Jordan to answer to. So something to think about and be curious to see how this one develops.
Starting point is 00:36:50 Yes. Next question. This is from someone who works in tech who wanted to remain anonymous given that they work in tech. and this is sort of on their personal side. Okay, Sundar. Yes. It writes about an experience they had. I had a weird experience tonight
Starting point is 00:37:08 when experimenting with Gemini three deep research. I was prompting a deep dive into family genealogy. I watched the model think and research all the expected sources based on my prompt, but the analysis that spit out was a complete hallucination outlining Elon Musk's ancestry and going down unhinged paths about how this shaped Elon's amazing personality.
Starting point is 00:37:30 Question. Could this be a hallucination based on some poison in Gemini model by Elon? Might sound crazy, but the experience made me think about Mecca Hitler. Was this potentially triggered by some poison inadvertently injected into Grok's model? Are companies poisoning each other's models
Starting point is 00:37:46 outside of the public's view? Casey, what do you think about this? I mean, this is a very fun and outlandish theory. Look, we can't actually tell you what was happening here. know what the prompt was. We didn't read the output. I would be very surprised if what you're reading is the result of an effort to poison each other's models. But I will say that people have floated the idea. You know, I remember just as Chatchibati was taking off, there was talk about creating radioactive data around the internet. And the idea was to kind of try to trip up
Starting point is 00:38:21 people's models as a way to discourage them from scraping websites without permission. In the And it seems like all of the big frontier labs were just able to find ways around this. So in short, I think that even if they wanted to do this, I don't think it would work that well. And I don't think that that is what has happened to you. Yeah, I don't think this is what's happened either. I think this is probably just a hallucination. The one sort of other possibility, it's not exactly poisoning.
Starting point is 00:38:49 It's not done with malicious intent. But a lot of sort of AI companies, especially outside the top few, are doing distillation, sort of when you take a model that's out there, whether it's an open source model or something else, and you sort of use outputs from that model to train your own model. That is not a standard industry practice that is still somewhat frowned upon, but it is still happening. Is it theoretically possible that Gemini was distilled from Grok? I actually don't think that's probably true. I think you'd probably make your model worse if you distilled from Grok that way. But that is sort of one other possibility. Whenever these models sort of start seeming confused,
Starting point is 00:39:27 about which model they are next question all right next question comes from emma domingez she's a new mom and she writes this is in response to the neo robot episode she writes i'm the mother to an amazing seven-month-old baby girl i love her and also fully understand the exhaustion and time-consuming nature of taking care of a baby if neo is doing a host of household chores i can see parents being tempted to also delegate things like diaper changes, bottle feeding, staring at the baby and making her laugh so you get 20 minutes to do something else. You could have the robot stay in your baby's room all night like a monitor and try to soothe the baby when they wake up. Sounds really helpful. But what are the long-term
Starting point is 00:40:12 implications? Will a baby who is biologically programmed to bond with and love her caregivers develop a similar attachment and love for the robot? Is that healthy for child development? So this has been a huge question throughout the history of science fiction. 1940, Isaac Asimov writes a story called Robbie about a little girl named Gloria who becomes attached to her robot nursemaid and it freaks her parents out because they're like, hey, you know, what's going on here? Ray Bradbury, 1969, I sing the body electric. A family gets a robot grandmother after their mother dies. The kids get super attached to her and it raises all sorts of questions about can mechanical love substitute for the biological kind. So this is something that we have just been wondering about since the dawn of robotics.
Starting point is 00:40:58 And the crazy thing is, Kevin, we have robots now. So we're going to get to find out. Yes. I imagine that a lot of parents are going to be sort of experimenting with these humanoids for various child-rearing related tasks. I remember, you know, this was sort of just pre-humanoid era. But when my kid was born, there were vigorous debates on the internet that I saw about whether using a snoo, which is one of these sort of robot bassinets that sort of automatically senses when your baby's crying and sort of, you know, gently rocks them back to sleep, was kind of taking away from some bonding because there were some people who thought this is essentially letting a robot to your child, and I don't like that. I made the decision. My wife and I made the
Starting point is 00:41:43 decision that we were going to use the snoo, and I think that was a justifiable decision because I actually think that when your child is that young, they need parents who are well-rested, right? There are trade-offs. You are giving up some of the intimacy of, like, soothing your child. But, like, you also, if you are a child, you need parents who have slept the night before. And part of that is having a robot rock your baby back to sleep. I think there are some tasks within parenting that are just that.
Starting point is 00:42:10 They're tasks, right? They're washing the bottles, doing the laundry. They're sort of restocking the diapers. they're doing all of these sort of tasks that are not the kind of intimate face-to-face bonding attachment tasks. Go with God, outsource those to Neo
Starting point is 00:42:26 or whatever humanoid robot you want. I do not think you are at risk of damaging your relationship with a child. But I think Emma's instinct to guard these like interactive moments is right. I think that that is actually where people start to develop
Starting point is 00:42:42 these early attachments. This is part of the bonding process both for the parent and the child. So I would not completely outsource newborn parenting to Neo, but by all means, do it for the drudgery and get some sleep. It is worth saying about Neo. This thing is nowhere near being able to safely hold and rock your baby. So don't try that at home or anywhere.
Starting point is 00:43:06 But, you know, is something like that imaginable within the next few years, maybe? But as always, with these things, you don't have to be the first person to try it, okay? Yes. Let Casey and I make that mistake. let us entrust our baby to a robot. We're having a baby, by the way. You and me? Yeah.
Starting point is 00:43:22 Used to me. Congratulations. All right. Well, that brings us to about the halfway point through the mailback, Kevin. Let's take a quick break. And when we come back, we'll have more listener questions. Welcome back, everybody. We're answering your listener questions,
Starting point is 00:43:59 and we're going to reach back into the mailbag here with our producer, Whitney Jones. And Whitney, why do you tell us who's next up? Who's next up? The next letter is actually like a three, maybe three and a half parter from Tim, Gerardo and Matt. More of a comment than a question.
Starting point is 00:44:15 Yeah, more of a comment on the question. These all have to do about, like, the sorts of models that you talk about and use. I'll just get into this. Tim says, hey, guys, bottom line, how come you don't talk about co-pilot? I know it's not a sexy frontier thing, but it's used by a lot of folks. Gerardo says, I'm wondering why on Earth you're not reviewing Deepseek. Matt says, I've noticed something while listening to the past few months of episodes. You cover basically every major LLM, but.
Starting point is 00:44:44 GROC barely gets a cameo. Any chance we could get a proper hard fork segment on GROC one of these weeks? So I think my question's sort of distilling from all this is, how do you guys decide what models to pay attention to, which ones to try out, which ones to cover on the show, and which ones to sort of ignore? Yeah, I appreciate the questions. To Matt, if you want to hear a great segment about GROC, we did one called Mecca Hitler that you can Google that I think you'll get a lot out of.
Starting point is 00:45:11 But in terms of, you know, how do we decide which. models to cover day to day, we are in a moment where all of the chatbots that got mentioned by those listeners are for, let's say, 80 plus percent uses roughly equivalent, right? You can use co-pilot, deep seek, and maybe even grok to get a decent answer to a lot of questions. In our world, that just makes them kind of boring, right? For us to be delivering you something every week that feels really like fresh and exciting, We've got to get to the frontier. We have to be talking about the models that are inventing new capabilities that we think are actually going to change your lives and maybe even all of society.
Starting point is 00:45:54 So for us to do a whole segment about it, it has to be a model that we think might be able to get you there. Yeah, I think we should at some point do like an episode about the boring AI tools that people are using. And I would put a co-pilot in that category. Like no one thinks this is a frontier model. I looked it up last night. co-pilots M-A-I-1, which is their preview model for the sort of proprietary Microsoft model, is currently ranked 62nd in LM Arena, which is the sort of AI chatbot leaderboard. Huge congratulations to the team over at Microsoft.
Starting point is 00:46:28 Maybe if they just had a few more financial resources, they'd be able to compete with the big guys. So look, I think, you know, we could be snarky about this, but I do actually think we should look at, among other things, not just whether a model is on the frontier technically or not. but like how many people are using it and in what contexts, right? So I take this feedback, especially about co-pilot. I think we should do a kind of look at some point this year. We should kind of look at the more sort of stayed enterprise tools that people are using.
Starting point is 00:46:57 To address the other two, I think Deepseek, we did do, you know, several episode. We have talked about that a lot on the show. It was on our iconic technologies list. I think the Deep Seek models themselves have mattered less. than I thought they would. I think deep seek has mattered more for what it represents in terms of China catching up, these efficiency gains, the state of open source,
Starting point is 00:47:22 then for sort of whether people in the United States or our listeners should be using the deep seek models. Agreed. As for GROC, I actually do think we should probably talk about GROC at some point. I think that it's a hard topic. It's sort of loaded with all this baggage about Elon Musk and what he's trying to do with Grok. Obviously, they had a big sort of fiasco when Mecca Hitler was happening.
Starting point is 00:47:48 But Grok does actually seem to be a good model. It is up there with Gemini and Claude and the GPT series on some of these benchmarks. And it does have some interesting features, not just the porny anime characters, but other, like it has real-time access to X data. That is the only time I have found myself using Grok is when I went. want to look up someone's tweets from a long time ago. So it does have some features that may be interesting to people. And I take the note, we should be expansive in the models that we talk about.
Starting point is 00:48:21 All right. Next question. Next one is from Ellen Russell in Tucson, Arizona. Says, initially I thought I was chatting with a human customer service rep. A conversation about an incorrect shipment that was characterized by a plethora of courteous words, specific promises, and zero actual performance when it came to delivering on the agreed-upon resolution to the problem. Dozens of dead-end exchanges that spanned almost two weeks, still no resolution.
Starting point is 00:48:48 Question. If a chatbot agrees to specific action, then does not deliver who is responsible and what's a customer to do. Love this question. Let's me talk about one of my favorite legal cases in recent years, Kevin, Moffat v. Air Canada, you know this one? I was wondering when you were going to bring up Moffat v. Air Canada. Well, Moffat versus Air Canada, of course, involves the protagonist Jake Moffat, who asked Air Canada's chatbot about bereavement fares, and the bot said he could book a full price ticket now and claim a partial refund within 90 days of travel. And so that's what he tried to do. But when he applied for the refund, Air Canada denied it. They pointed to a PDF buried on their website saying that bereavement fares do not apply to completed travel. And so in the legal case, Air Canada argued they aren't liable because the chatbot was a lot.
Starting point is 00:49:37 a separate legal entity that was responsible for its own actions. And the tribunal in Canada called this argument, quote, remarkable, and said, actually, the chatbot's just part of your website, Air Canada. And so Air Canada had to pay up. Really? Yeah. So this one is not completely settled, but when I have sort of surveyed the landscape, it seems like we are sort of leaning in the direction of liability for these chatbots.
Starting point is 00:50:01 So you should absolutely go hogwild when you're talking to these chatbots. Try to get them to promise you anything. because you might be able to get it. Yeah, it reminds me of like, do you remember that Chevrolet of Watsonville chatbot? Yes. It was like right when ChatGPT was starting to like become a thing, there was this like Chevy dealer in California that like installed a customer service chat bot using ChatGPT. And people were just going, that's trying to like jail break it and get it to promise to like sell them a Durango for like a dollar. A 2024 Tahoe for a dollar is what actually happened.
Starting point is 00:50:33 That's correct. And you're saying that in Canada, at least, that would be a valid and binding contract. Well, here's the thing. In contract law, Kevin, in which I am an expert, if an offer is considered too good to be true, courts generally don't enforce it. And so in this case, the little prankster here was not able to get a $1 Chevy Tahoe. Oh, damn. So sort of be careful, you know, you got to get them to promise you something reasonable, I guess, if you want this scam to work. But we do wish you the best and keep us posted in your efforts.
Starting point is 00:51:04 Yes. Next question. Next question from Matt Goldman, writes, when I was growing up in the 90s and early 2000s, at least in the computer science and sci-fi communities, one of the topics of conversation was always the touring test. Now it feels like the test has been completely left behind, but I don't recall any announcements that it was ever officially passed.
Starting point is 00:51:26 I'm curious why there was seemingly no cultural moment of us realizing the touring test had been passed, or alternatively, if it hasn't been passed, then why? Are the goalposts just continuously moved out, and what we used to think of as the Turing test has now just morphed into the AGI test? Yeah, so this is an interesting one, and there are a couple answers to it, but one of them is just that in some ways
Starting point is 00:51:50 the Turing test was discredited before it was passed, because I think when Turing created the test, there was an assumption that if a machine could pass this test, it would mean that they were thinking like us, okay? And then this philosopher named John Searle comes along, and he introduces this concept of the Chinese room. You know the Chinese room? I sure do.
Starting point is 00:52:12 So the idea of the Chinese room is if you have somebody in a room and you pass them notes that are written in Chinese and the person doesn't speak Chinese, but they have a rule book that explains the rules, and they can sort of go in and match the characters that instruct them how to respond, and they can pass those responses back out of the room, to the person who's receiving those responses,
Starting point is 00:52:32 it's going to seem like the person inside is speaking Chinese, right? They clearly fluently speak Chinese. In fact, they're not doing that. They're just sort of following the system of rules. This is the way that machine learning systems have actually worked, and they have thrown cold water on the idea
Starting point is 00:52:47 that just because it can issue a convincing response, a machine is actually thinking. So that idea kind of got into the bloodstream and philosophers kind of threw out the touring test as something that was going to be useful in helping us understand when machines could actually think. Yeah, I would just take the Chinese room experiment out of it
Starting point is 00:53:04 and say, like, we have realized, as these systems have gotten better, that the ability to hold a conversation is not actually intelligence, right? There are things other than conversational fluency that matter more. But I will also say, like, this is a case of shifting goalposts. I think if you had shown a large language model to a sort of philosopher or a computer scientist back in the early days of the Turing test, they would have died. They would have died, like a medieval peasant encountering Mountain Dew.
Starting point is 00:53:34 Yes. And we have actually had some attempts to sort of run the Turing test on these newer models. In May 2024, there was a study that found that GPT4 was judged to be human 54% of the time. In 2025, researchers showed that newer models were judged to be human 73% of the time, which is a better-than-chance rate. So, yes, the Turing test has been passed in the classical formulation, and people who are moving the goalposts now want to talk about the things the day I can't do. But I actually think we should have marked this in some way. I think this was a big deal when Chatbots started to pass the Turing test. And I think we kind of just, because of the sort of landscape of shifting expectations, the hedonic treadmill, whatever you want to call it, we just kind of all went on to look at the next milestone.
Starting point is 00:54:25 And we didn't really acknowledge the fact that this incredible. as sort of Rubicon had been crossed. Yeah, absolutely. All right, just a couple more questions, Whitney. There's just one last one. Just one last question, Whitney. All right. This is specifically for Casey from Daniel.
Starting point is 00:54:44 He writes, Casey talks about how he likes Bellotro. Will you have him share his recent accomplishments? Whether he's hit anti-12, the 100-mill hand yet, what his favorite jokers are, and his preferred strategies. I must admit, I don't understand any of the words I just read,
Starting point is 00:55:01 but I'm hoping you can shed some light on it. Thank you so much for the question. To answer your question, I have made it to Ante 13, where I think to pass the first ante there, you need $300 billion to do that. I've never successfully done that. That's where I've died every single time. But I'm going to give you a little strategy
Starting point is 00:55:23 that works super well for me. and it's two jokers, and they're both common jokers. So these show up all the time, and they're pretty cheap. One is called The Hanging Chad. That is the one that essentially counts your card three times when you play it. So if you have like a little bit of Maltz on that card or some extra chips, you're going to sort of rack that up. And then pair that with the photograph Joker, Kevin.
Starting point is 00:55:48 That's the one that gives the first played face card to XMult. Okay? So basically what that means is every single round, if you're playing a face card first, you're getting X6 moult on the entire enchilada. That is usually enough to win the game. Okay, that'll get you to your 100,000 score. So I hope that was useful to you. If you don't understand what it means yet, put the podcast down and go devote your life to Bellatra. You'll become a happier person.
Starting point is 00:56:17 Don't do it. Don't do it. No, I actually, this is breaking news. I got my first million point hand last night. Really? For one million points, congratulations. Thank you. I know you're way past that, but I'm catching up here.
Starting point is 00:56:30 500 million is my biggest hand. Woo. Buddy. So a thing that I have learned through playing Bellatro is that addition and multiplication are different things. And I'll tell you, I mean, you know, theoretically I knew this. But, you know, they always talk about feeling the exponential. Belacho really teaches you to feel the exponential
Starting point is 00:56:48 because there are two ways to grow your score. One is that you can give your cards the ability to add multiplication to the multiplier when it tallies up the score for your hand. The other is to add to the multiplication to the exponent of what you
Starting point is 00:57:04 are doing. And it is that second thing, that mysterious multiplication that gets you the high scores. That is what you want to do. For way too long in Bellatro, I was trying to add more chips. I was trying to add more plus molt. Eventually, I was like, no, you want all of your jokers to just be X-Mult as fast as you can
Starting point is 00:57:20 that happen and you will win the game yes best of luck to you and godspeed and happy new year godspeed please don't download this game it's a trap do it Casey, before we go, let's make our AI disclosures. I work at the New York Times company, which is suing Open AI and Microsoft over alleged copyright violations. And my boyfriend works at Anthropic. Hartfork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. We're fact-checked this week by Will Pyshal.
Starting point is 00:58:18 Today's show was engineered by. Alyssa Moxley. Original music by Marion Lazzano, Rowan Nemistow, and Dan Powell. Video production by Soya Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube at YouTube.com slash hardfork. Special thanks to Paula Schumann, Puiwing, Tam, and Dahlia Hadad. You can email us at hardfork at NYTimes.com with a question that we'll probably get to sometime in about a year. Thank you.
