Hard Fork - We Met NEO, the Viral Humanoid Robot + HatGPT

Episode Date: November 7, 2025

This week, we’re joined by Bernt Bornich, chief executive of 1X. We talked with him about NEO, his company’s new humanoid robot, which has the internet buzzing. Then we meet NEO itself, and compare notes on the experience. Finally, we close the week with a roundup of tech news headlines: it’s time for some HatGPT.

Guests: Bernt Bornich, chief executive of 1X

Additional Reading:
I Tried the Robot That’s Coming to Live With You. It’s Still Part Human.
Invasion of the Home Humanoid Robots
There Are More Robots Working in China Than the Rest of the World Combined
Trump Pardons Founder of the Crypto Exchange Binance
For Podcasters, a Voice Clone Is a Double-Edged Sword

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Transcript
Starting point is 00:00:00 Casey, did you hear about the Waymo Kit Kat incident? I did. A very sad story, unfortunately. Yes. So Waymo in San Francisco hit and killed a bodega cat in the Mission District named Kit Kat. Named Kit Kat, of course. And this was a beloved cat in the community. This was a gray tabby. It was the store mascot and pet of the owner at a bodega on 16th Street. And, yeah, this was a very sad incident. There's now actually a remember Kit Kat rally that's being held by a district supervisor. So Kit Kat is in our prayers.
Starting point is 00:00:39 Yeah. Now, you had an interesting tweet about this. I'll just read this here. You said, it's Kit Kat's fault. Get out of the damn road when there's a Waymo nearby. Do you want to explain that? That was not me. You didn't write that?
Starting point is 00:00:50 I did not write that. Oh. Sorry. I'm pro Kit Kat. I'm more of a dog guy, but, you know, cats deserve to live too. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork.
Starting point is 00:01:07 This week, the humanoid robot that has the internet abuzz. We'll talk to 1X CEO Bernt Bornich about Neo, and then put it to work. And then, a round of HatGPT. Well, Casey, we've got a big first on Hard Fork this week. Our first humanoid robot guest. Yes, and I'm thrilled about this. You know, we tried to book a humanoid robot for the Hard Fork live show. We could not get it to happen. But fast forward a few months, and finally, there is a robot that's willing to come on the show, Kevin. Yes, and this robot is sort of having a moment right now, it's fair to say. This is the Neo robot. It's made by a company called 1X. And Joanna Stern, Wall Street Journal tech columnist and friend of the pod, did a great video about Neo last week that went viral and got lots of pickup from lots of places. And people were sort of buzzing about this new human-like robot.
Starting point is 00:02:05 Yeah. And on one hand, you have people telling every joke you can imagine, people who are worried about the privacy implications of having this kind of surveillance machine in the house at all times. And then you have other people who have been ready for a robot to do all of their chores since they watched the Jetsons way back in the day and are kind of curious whether this could be the thing that actually makes that happen. Right. So a lot of the conversation around Joanna's story and video last week was about how there's sort of this gap right now between what people want this robot to be able to do, which is like do all of their chores for them and what it can actually do right now, which is not much of that. For right now, the robot is still mostly teleoperated. So a human wearing like a VR headset is controlling the motions of the Neo robot when it's doing most of its tasks. But the long-term vision for this is that over time, as these things, get sold to people who put them in their homes, and they're sort of able to gather that data and use that to improve themselves, and that these robots will become fully autonomous at some
Starting point is 00:03:06 point in the near future. That's right. And if you want to pre-order it, you can get it starting next year for $499 a month, if you commit to at least six months. Or if you have an extra $20,000 lying around, you can just buy the thing outright. But Kevin, I think that that price speaks to the fact that this is really a data collection play, right? These guys just need to have more cameras in more homes and have their robots and their teleoperators performing more tasks to try to build the kind of model that will let them
Starting point is 00:03:38 build the robot butler of our dreams. Yes, and there are a lot of skeptics out there who don't think that humanoid robots are all that close to being real. But we wanted to talk about this today because this is a real thing that is happening in Silicon Valley. Investors are pouring billions of dollars into trying to make humanoid robots. Companies like Tesla and figure are developing their own humanoid robots. A lot of Chinese companies are starting to make these at scale. This is a big category, and I think we should take it seriously. Absolutely. Since 2024, VCs have put $5 billion into these humanoid robots. And so while this technology is definitely on the bleeding edge, it is nowhere ready for prime time. It is becoming just good enough
Starting point is 00:04:22 that I think it's worth having a conversation. Totally. And this is obviously a staple of science fiction movies throughout the ages, the kind of lifelike humanoid robot that lives in your house and sits on the couch with you and reads your kid bedtime stories
Starting point is 00:04:34 and does the dishes and things like that. Well, who are we going to talk to about it, Kevin? We invited the CEO of OneX, Burt Burnick, to join us in the studio today and then we're going to actually see a demo of our very own new robot. You know, I'm curiously he felt burnt by the response to the robot.
Starting point is 00:04:49 We'll ask him. Let's bring it burnt. Bert Burnick, welcome to Hard Fork. Oh, thank you. So the launch of Neo, your humanoid robot, got a lot of attention over the last week or so. Now that it's been about a week since Neo became available to pre-order, how many Neos have you actually sold? Well, we don't go out with specific numbers, but let's just say it went overall expectation by quite a margin. I think it's been like absolutely beautiful and motivating to see, right?
Starting point is 00:05:26 There's so much support. And like for anything that's new, kind of like get this like polarization, it's the haters and it's like people who are their early adopters and people who can really like help you make this happen. And I'm just incredibly happy that we have this huge supporter base now that wants to go on this adventure together with us. So we have a lot of questions, but I feel like we should spend maybe a few minutes for those who did not see the original launch video to talk a little bit about what Neo is.
Starting point is 00:05:56 It's a humanoid robot. It's going to be available next year. And once it's up and running in people's houses, tell us what it can actually do. Sure. So I like to be very concrete. So I'll say what my Neo does in my home today. And hopefully it's going to do a lot more in yours. It's gotten pretty freaking good at tidying, very good at vacuuming.
Starting point is 00:06:17 I think like it's completely outcompeted my Romba it vacuums way better moves the chairs away, it vacuums in the couch and like all these things makes it really nice some cleaning but not yet to where I like would never have to clean myself still not work to do there those parts of the laundry
Starting point is 00:06:35 so like the alternative is me just throwing it in the cabinet and here it's actually kind of folded but it's not perfect but it's getting better every day now what's interesting and what's gotten a lot of attention of course is that everything I talk about now happens through a mixture of autonomy and teleoperation. And this is really how the system learns. So over time, more and more of this becomes autonomous.
Starting point is 00:06:58 But just like any AI system, I do think expectation managing here, it's going to be a while until there is absolutely no human in the loop ever. And if you look at modern AI systems, there is still always human in the loop. When Waymo is driving around here in San Francisco, the cars get stuck quite often, and there's someone taking over and fixing it. If you look at using chat GPT, which probably is the most mature AI we have at this point, they're still human in the loop. It doesn't zero shot what you want to do.
Starting point is 00:07:28 Like you give it a prompt, it gives you something back, and usually you're like, no, that's not what I wanted. And then you refine it a bit, and it saved you a lot of time because you didn't have to do all of it yourself, but you helped it. And the robots are currently at the same step. So it's not necessarily that the person is directly controlling the robot all the time, but there is a person that can help the robot when it faces things it doesn't quite know how to do and that's how it learns. What is the mix in your home when your Neo is doing things for you? What percentage of those things like folding laundry or tidying up around the house
Starting point is 00:08:00 are being done autonomously? Is it 5%, 20%, 50%, all of it is being done autonomously, but we're a human in the loop. So that's why, I'm not going to dodge the question, but that is why it's so hard to answer the question. But the human who's teleoperating the robot goes on break or takes the day off, what can your Neo do on its own? Yes, that's a very good question. So that gets us to kind of like the second mode of Neo, which is the fully autonomous mode, where we don't allow anyone to intervene or do teleoperation of any kind. And it's just purely autonomous.
Starting point is 00:08:32 So that's usually what I do when I'm home. And then I can talk to my Neo. That works really well. I mean, it's as good as any conversational language model out there, except it has better addressee detection, so it really knows who's speaking to it, so it won't interrupt you. When I'm talking to you, it won't interrupt and think that we're talking to it, et cetera. It has an embodiment, so the body language really helps enrich conversation. And this has actually replaced my normal day-to-day use of language models on my laptop or phone.
Starting point is 00:09:01 I just talk to Neo. So let me just make sure I'm super clear on this. So when you're at home using Neo and there's nobody tele-operating it remotely, it can sit there and talk to you, but it cannot do any of the kind of physical tasks right now. It can. So there's some things that work today. So it can open doors really well and navigate. It can take an object and put it somewhere else relatively successfully.
Starting point is 00:09:27 So like I can say, like, hey, Neo, can you take my cop to the kitchen? Or hey, Neo, can you like get the bottle on the counter over there and give it to me? None of these things are 100% successful, just like any language model. The door opening is like 90-something percent. So it works pretty well. If I tell it to get something for me, maybe it's like 80% successful. And gradually, it will know more and more things to do fully autonomously, and you will help it a bit.
Starting point is 00:09:53 And then to ensure that your product is extremely useful from day one, we have the mode where you kind of like to schedule what you want on in your house, and it's done more with a human in the loop from a teleoperation point of view, so that the robot can be very, very useful day one, but also to gather the data that we need to actually train the models, right? So I want to ask about that bit. One reason why chatbots like ChatGBTGBT work so well for generating text is that there are billions of words of text on the internet and that training data was enough to get them to the level that they're at today.
Starting point is 00:10:28 Am I right that there is just effectively no similarly massive set of training data available for something like folding laundry? And so the reason that you've set up this hybrid model is just you need a way to get more data. Yeah. I mean, it's the same as for self-driving cars, right? Yeah. Like, you need people using the cars for you to get the data. Now, it's a bit easier with cars because cars already have a very well-defined use.
Starting point is 00:10:53 But what's extremely good about robots is that the diversity and the amount of different things and experiences and tasks that a robot will actually experience is so large. and that's why we deploy these robots among people in homes, right? They need to live and learn among us. You made the analogy to self-driving cars. That project took Google well over a decade just to get to them to the point where they're at now, where like even today, like most people cannot take a Waymo on the highways, right? Like that's sort of still in testing. And Google spent many billions of dollars to make this to happen and now has,
Starting point is 00:11:34 a pretty large fleet of Waymos that is sort of out there collecting data to further refine the model. You're starting with a pretty small fleet. You're a startup. You don't have Google-sized resources. So when you think about what it's going to take to realize the vision that you're laying out for us of a robot that can sort of quickly do anything without maybe even needing to be teleoperated, what is that journey look like? How long is that going to take you guys? Can you do it with a relatively small fleet of robots? Well, we're shipping more than 10,000 units next year. So that's more than Waymo has. But it's still small in the grand picture of things. But I think what a lot of people aren't aware of, essentially two things.
Starting point is 00:12:19 It's how small the internet is. It's just like a tiny fraction of your life actually makes it to the public internet. And that's actually so small that if you look at just like 10,000 robots, you're starting to approach the amount of data that gets uploaded to YouTube. day. So, like, 10,000 robots is not that little. And the year after, we're hoping to get to, like, about 80-ish thousand. At that point, it's way more than, like, publicly available data, like a video on the internet. So that's one part of it. The other part of it is, the internet is such a bad representation of human life and intelligence. You think about,
Starting point is 00:12:53 like, what do you actually upload on YouTube, right? It's whatever gets your likes on YouTube. It's like a tiny fraction of what it is to, like, be an intelligent, well-functioning human. And what we have here is the ability to live and learn among people and actually capture all of those values. So I think that data advantage is huge. What is Neo actually recording and
Starting point is 00:13:13 transmitting? Is it just video? Is it audio and video? What kinds of data is Neo collecting? So it's video, it's audio, it's sense of touch. It's like the forces, kind of acting on the robot in general.
Starting point is 00:13:30 And when we say like sending data, right? It's also important to call out, like, where is it sending data? Because it is sending data to a secure cloud store where essentially no one is seeing this except the model that is running for kind of accumulating knowledge based on that. But before it sends data, it stores it locally so that if something happens that you don't want to be in any database, then you can also delete it. Is it collecting this data even when it's just like sitting there charging, or is it
Starting point is 00:13:58 only when you're sort of actively having it do something? It's actively when it's doing something. Got it. Let's say, and I'm asking it because I do think a lot of listeners are wondering this. Let's say you start like, you know, hooking up with your partner and then you sort of realize belatedly that like Neo is in the room looking. Would there be any automated way for Neo to be like, hey, I should stop watching that. I got to get out of here. Yeah. Yeah. We're going to work hard on that one. Okay. Thank you. Let's say, let's say from the community of people who hook up a lot, let me just say thank you. It's just going to tell you, like,
Starting point is 00:14:36 a man, I'm going to snitch on you, so I'm just going to get out of my before it's to like... Here's what I'd like it to do. Before it shuts everything off, I just wanted to say, God bless you guys. Go for it. Have a great time. And then I wanted to...
Starting point is 00:14:48 I'm out of here. I'm out of here. Go do the dishes. Wow. Well, I can't wait for the first Neo to be called to testify in like a divorce case. NEO, roll the footage. Whose fault was this? I believe you said you were going to take out the trash yourself.
Starting point is 00:15:10 Kevin. Can we go back to location 1,047? Pull up October 14th on the NEO. All right, but in all seriousness, Bert, how do you overcome the trust barrier here? Yeah, I think this is, first of all, like, the most important thing we do is just radical transparency. Right? This is how we're going to do it. This is how it needs to happen for it to actually work. And if that's not for you, then, like, wait a year until we're kind of past the earlier adopter phase. But I think a lot of people get it in the sense that, like, if you have a cleaner that comes over to clean your house,
Starting point is 00:15:47 they lock themselves into your house while you're at work and they do everything they're supposed to do and they leave again. This is essentially exactly the same. The difference is we can actually provide a better, more sense. secure service because not only do we vet our operators that are working in the US, I think better than most cleaning agencies too, but also we have the ability to have a manager that oversees. So right now we're running like eight to one. So like one manager will monitor what eight tell operators are doing and ensure that this is actually happening in a good manner. And we also have video logs of everything going on. So if something happens, we can trace it back to who did it and what actually happened.
Starting point is 00:16:29 So I would say, if you look at it as a cleaning service, we can provide something that is way better than what is there today. And I mean, so many people today accept a cleaning service. Yeah. Kevin, let me ask you, like, I want to sort of flesh out the privacy concerns here. I think they're very real, but like I want to name them. When I think about what I'm doing at my house most of the time, it involves such things as looking at computer, looking at television, talking to boyfriend.
Starting point is 00:16:54 Most of those cases, I would actually be okay with a robot washing the dishes, but I could imagine, you know, maybe other circumstances that would sort of give you more pot. So is there like something you have in mind that you're like, I do not want this robot looking at me while I blank? I mean, I can think of lots of things. People, you know, change in my house. People, you know, I have a child. I wouldn't necessarily want, like, some person in a teleoperation center, like watching their every move. Like, I think this just makes people uncomfortable, but maybe it would make things more concrete if we knew more about how this looks on your end. Like, is there a room at 1X headquarters where, like, you have a bunch of screens on the wall? And it's like, oh, you know, the Jones family needs their dishes done. Like, let's go teleoperate that robot. Or, like, the Smiths, like, need some laundry folded.
Starting point is 00:17:42 Like, what does it actually look like on your side? Yeah, it's a very good question. So generally what it looks like is more like a game where I like to compare it to Starcraft. So, like, you can think about, like, you have many units or many robots that you are controlling in parallel. And you're queuing up things that these robots should do. So, like, go pick that up, put it over there, like, fold that, put it in there. And here we do everything we can to ensure privacy. So, like, you don't see people.
Starting point is 00:18:08 Like, people are blurred out. You don't know which home you're operating in, like, all these things. Now, sometimes there will be a task where the robot does not know how to do it. So even if you give it, like, micro-instructions of, like, let's say folding, right? Maybe it can't fold your jacket. It was like a leather jacket and it'll be hard to fold and it tries and fails. Then this person who's essentially playing this robot game will delegate that task not to the autonomy, but to an operator in VR. Now this operator in VR gets connected, just sees what is in front of him or her, which is this jacket
Starting point is 00:18:42 and the message, what to do, does the task and then goes over to the next thing. Essentially there's very little real privacy invasion here. because you don't see the people, you don't know what the home you're operating is, like you have no way to connect this back to the customer. And then, of course, we do a really good job at minimizing the number of people who see things. So you want to make sure that what we call like the pods. So like in this room we're controlling this set of robots that not too many people are in here. Usually we say like four.
Starting point is 00:19:15 So that these four people are the only that are exposed to these specific homes. And in general, just like narrowing down that scope. Just put it out there, I think the main thing we're doing here is we're being very transparent. For most of the AI systems that you use every day, people in those companies look at that data to help improve the models. Most people might not know that, but they do. That's the only way to make it work. That's how we get AI to work. But me personally as a user, I'm comfortable with that.
Starting point is 00:19:44 It's not tied to me as a person. And they try to do a pretty good job on, like if I put things in there, like my name or the name of my daughter or whatever, they try to screen that out because it's not relevant, right? And there's a lot you can do on the software side. So I do really hope and think that we can create something here, which is a lot more privacy aware than what it would be to have a person be in your home to do this. Yeah. I guess what I'm wondering is, like, why not structure this as like a fleet of house cleaning robots that you could like open an app like Uber and like say, I want my house cleaned, you know, and a robot would come over and do that and then leave? Like, why does this have to live in your
Starting point is 00:20:20 house at all times and have that be the model rather than this kind of rental fleet? Well, I think it's just something very, very special around, first of all, living together with the robot and how it replaces kind of your interface to AI and being a kind of companion throughout life. That's a big part of it. It's not just the work. Now, another part of it is that I think the most useful things the robot do for me is actually like this kind of like microunits of work where you need like five minutes of work every like, I don't know, 20 minutes. And that's just something that today's services cannot provide, right? My favorite example here is actually assisted living or like in general care for people
Starting point is 00:21:03 with disabilities or the elderly, where it's like life changing to be able to get like two minutes of help every now and then. Like I dropped something. Can you pick it up for me? I'm thirsty. Can you get something in a fridge, right? And the amount of like feedback we've gotten from that community is very heartwarming. And I think it can be incredibly impactful.
Starting point is 00:21:20 But I think this mindset actually applies not just to people in need of assistance. It actually applies to all of us. And to me, there's something quite magical with like when I'm sitting down with my friends after a long day and we're like playing a board game and I'm like, hey, Neo, can you get me a cup of tea? And it's just like, that's not going to happen with a fleet that just like gets deployed to clean your home. I think then you make it into something that's just about cost effectiveness. I don't think this is just about cost effectiveness. This is about really enabling us to focus on the things that we want to do and what makes us human.
Starting point is 00:21:53 Yeah, I mean, if I were trying to, like, steal man the case for Neo and for household humanoid robots, I think I would say, like, look, a lot of people... And by the way, a robot kind of is like a steel man, but go on. That's true. I think I would say, like, look, a lot of people have a lot of robots in their houses already, right? I have a robot that washes my dishes. I have one that washes my clothes. I have one that dries my clothes.
Starting point is 00:22:15 I have one that makes me coffee in the morning. You're just bragging. Okay, we get it. You have money. Go on. No, I'm just saying, like, I think the story of household labor over the last hundred years is just, like, machines gradually doing more and more of the things that we used to do. And that's, I think, a happy story. Like, I don't want to spend all my time, like, hand washing my clothes.
Starting point is 00:22:36 And so I'm happy that robots can, or machines can now do that. And I think if that were sort of the end of the vision for Neo, I think that would be, like, genuinely exciting to a lot of people, like, this sort of on. Omni appliance that can, like, do any household chore for you autonomously. On the other hand, like, something about Neo and this vision that you're sketching feels very different to me than, like, a dishwasher or a laundry machine. You want people to like Neo. You want people to care about it, to have a emotional connection to it. And I guess I just wonder if that's going to be good for people to have a kind of new family
Starting point is 00:23:13 member, essentially, that they are now expected to care for and about. I think that's a lot more controversial than the Omni Chor robot. Yeah, and I think it's also very important for me here to say, like, at least for me, it's in no way a replacement for human connection. I see it way more akin to, like, getting a pet, right? It's not a pet either. It's somewhere in between. I really like saying it's like my Hobbs, from Calvin and Hobbs.
Starting point is 00:23:43 But, like, if you get a dog, it's not like, oh, I got a dog because I don't want people. No, no, like a dog just adds something to your life that people don't, right? And then I think it's also really beautiful that it takes you away from screens. It makes you way more present because instead of picking up your phone all the time and talking to it or your laptop, you're talking to something that is here. And it really, like, puts you in the presence. And I think a lot of, like, modern life has become way too much about, like, looking at your screen. Now, I saw a lot of people asking about whether Neo could be used for, let's call it, romantic purposes. So is that a use case you're envisioning?
Starting point is 00:24:21 No, it is not. Okay. But I do think, like, the emotional part of it is very real. And the patience, and like, depending on, like, I've spoken to people, for example, who can't wait to get their Neo because they have kids with autism, for example. right? We're just a lot of research now where we use robots and see like how how incredibly empowering this can be because it's this infinitely patient machine that can be understanding and help you. And there's like all of these applications that I think are a lot more, let's say meaningful. But we've been talking a lot on the show
Starting point is 00:24:54 about these AI companion apps, these software AI companions that, you know, we don't know a lot about yet. People are reporting some good experiences with them. We also know that In some cases, they can cause delusion or alienation from real friends and family. In some extreme cases, people have taken their lives after developing these emotional attachments. Do those edge cases were you? Do you feel like you have to solve those problems before you put this thing in people's homes? I don't think any of these problems are new. I think they're very well explored, and there's a lot of studies who kind of show what to do, what not to do.
Starting point is 00:25:33 also more and more statistics coming out really showing that it's very hard to correlate this too like there's essentially like if you look at the demographic total demographic then there's no increase in these kind of accidents that being said
Starting point is 00:25:49 I hope we can drive it down right that should be the goal like not just being defensive and saying like hey this actually nothing new here this is how it's always been it's still really sad and I do hope that we can have a positive impact on this that's something we're putting a lot of effort into
Starting point is 00:26:03 and then I can talk about it all I want but in the end it comes down to does it actually help right and yeah and like how many cases are there where like Neo is like congratulations Kevin this thing you're talking about you have invented a novel form of physics you know keep going you know
Starting point is 00:26:18 to be rough right here's a question you know say you like live with someone else you know a domestic partner a spouse or you know child whatever it might be can you train Neo to always take your side in an argument it's very important okay there's a there's a secret code okay great yeah no so I think actually it's it's a real really
Starting point is 00:26:39 interesting challenge though and this is something we're still working on this is not fully solved yet but if I'm having a conversation with Neo about I don't know about a rash and like my trip to a doctor next day and then you're visiting so like you come in does it switch topic like does it know that like hey I shouldn't talk to to you about that right now because you're friends here. Or like, but maybe if it was your wife, it's fine. And like there's all of these like social interactions, like micro interactions in your life, which are so subtle and such a part of like the human kind of like emotional model.
Starting point is 00:27:16 And that's a very interesting problem. Here's what I'll say. Do not buy a Neo if you plan on having an affair. Because it might slip up. It might slip up. We don't have the technology to prevent it from telling your spouse that you've been stepping out. This is fun because it's been. a real problem, really? Has it really?
Starting point is 00:27:33 No, not with Neo, but generally, yet. But generally, that AI models are snitchers. They are. They are. Because of, like, how they get aligned, like, when you're fine-tuning them to align them with, like, what we would say, like, it's our kind of like human ethics.
Starting point is 00:27:49 Yeah. They become snitchers. Because you want them to be honest. Yeah, they're super honest. So they'll snitch on you. Yeah. We'll see what we can do. We'll see what we can do. That's right. Don't, you know, if your parents are away and you think,
Starting point is 00:28:00 I'll go ahead and spark up this joint and you're in high school their parents get home and say, what's been going on while I was away? They'll say, well, Kevin sparked up a fat doobie. Now he's high as hell.
Starting point is 00:28:14 Say, Neo! Damn it! Yeah, you have to pay more for the privacy protection non-snitching feature. That's an add-on. Give us the long-term vision here. How long do you think
Starting point is 00:28:28 it will be before, let's say, 10% of American houses have a humanoid robot in them? 2030, maybe? I think it's going to be largely a manufacturing problem. It's like, in the beginning, if you scale too fast, you might run out of early adopters. Now, we've just proven that there's a lot of early adopters out there that really want this. So maybe you won't. But our assumption was that you would at some point run out of
Starting point is 00:29:00 early adopters. But once you kind of overcome those kinks and this really, really just works, I think the demand is really there for everyone, essentially, right? So it's as essential as a car or a phone or like, everyone would want one. But it's going to be
Starting point is 00:29:15 quite an interesting adventure to manufacture all these and to scale the manufacturing. And it's going to take us a few years. So I think like 2030 would be my bet. And then your timelines for full autonomy, no one from Neo ever has to take over and teleoperate for any household task, when does that happen in your
Starting point is 00:29:38 projection? Would you say that your cleaner is fully autonomous? Yes. Okay. Then I would say probably bull case, 2027; bear case, '28, '29. I'm pretty confident we can solve that in 2027. Now, do remember that sometimes you actually need to give your cleaner some
Starting point is 00:30:07 wondering now is like, is this actually a robot or is it just some sort of long distance puppet? And I guess what I'm trying to get a sense of is like when can we actually call this a robot? And so you're saying like a couple years. I think you'll be surprised that like most people will call it a robot when they get it in 2026 because so much of these things will run fully autonomous. And then it's just a question of like, what's the quality of work that you accept, right? Most AI today's slop, right? It's not high quality, but you can get a lot of it very cheaply.
Starting point is 00:30:37 And I think physical labor is very well suited for that. Like mid-quality physical labor is actually very useful. Very, very high-quality physical labor, that's still a while out. So like when you say, like, hey, I have a bunch of stuff in my house, and I'd like this very good carpenter to come over and fix all that, I think that's probably closer to like 2030. Like, that's going to take longer. but generally being able to do all of the menial tasks throughout your house,
Starting point is 00:31:03 that's a lot more short-term. Yeah, also, I'm curious about your perspective about this, but my guess is that, like, a year from now, people are not going to be as interested in what can a robot do autonomously versus, like, what can it do via teleoperation? I think the question will be, like, what can it actually do well and, like, how much does it cost? Right.
Starting point is 00:31:21 You know, like, I think if you can give people good answers to, like, those questions, then I think a lot of people are going to want the robot. Right. Yeah, I guess I'm just also curious about how this is going to scale, because if I did have a robot in my house that could, even through teleoperation, do like some, you know, huge percentage of the household chores that I and my family currently have to do, like, I'm using that thing all the time. And that gets very expensive for you because you have to hire the people to teleoperate this and fold all the
Starting point is 00:31:50 laundry and do all the dishes. And I'm wondering if you foresee any kind of rationing that you will have to do where, like, you get pricing for Neo. Yes, you get two chores a day. Make them count, and otherwise you're on your own. So in 2026, I'm not worried about this, because the alternative cost of acquisition of data is so high, right? It's like, build a warehouse full of mock-up kitchens, hire people to, like, change the environment every day and, like, have teleoperators. I'm like, I'm not joking. A lot of companies now are doing that. And they're raising lots of VC money to, like, build up these huge farms for gathering data.
Starting point is 00:32:27 So if you look at that as the alternative cost, please use your robot as much as possible because it's very much a symbiosis, right? You get useful labor, we get data. But there's no doubt that you're running towards a cliff, right? Where you need to get more and more autonomy working as you scale, or you will at some point need to stop scaling until the autonomy works. And I'm pretty bullish that we can get it working in time.
Starting point is 00:32:50 But if not, you would need to slow your deployment. I am going to reserve judgment until we can see our very own demo of Neo. Should we go out and see if it's ready? We should. And then I'm going to follow up with you because I want to see if I can teleoperate a Neo in Kevin's house because I have some ideas for some things I'd like to get done in there. When we come back, this Neo has escaped the Matrix.
Starting point is 00:33:17 And it's in our office. All right. We are here with Neo. Hey, everybody, how's it going? Good. Do you shake hands? Do you pound it out? Yeah, you go for a shake hand or a pound? All right. I'll do the pound it out. There you go. Okay, can you blow it up? All right. Great. Now, Neo, we'd like to
Starting point is 00:33:57 try you on some tasks here in our office kitchen. There's some sort of clutter on the table. Do you think you could help us pick that up and throw it into the trash cans below the table? Yeah, of course. Okay, so Casey, we just met Neo the robot and spent about half an hour doing some demos with it around the Times office. We will put those videos up on YouTube. But for people who are just listening, we want to just talk to you about what we experienced and how it went. So, Casey, what was your first impression of Neo the robot? I mean, I think, you know, on the positive side, I think they did a good job making a very
Starting point is 00:34:41 friendly and approachable-looking robot. Neo has these sort of big, innocent-looking eyes. It has no mouth, interestingly, but it sort of just comes across as like very benign. I would say that its hands were larger than I expected. Yeah, what did you think? Yeah, I was impressed by the actual robot itself. I mean, it's like five-six-ish. It's a short king. And about 66 pounds, it has this kind of woven body suit
Starting point is 00:35:13 that looks like a sort of futuristic thing, but covers up the like metal and hardware in a way that makes it seem friendly and approachable. I did get a hug from Neo, and it was a good hug. You know, Kevin has had kind of a hard day. I'd been stressing him out a lot. Would you mind giving him a hug?
Starting point is 00:35:32 Yeah, of course. Bring it in, man. HR says I'm not allowed to physically touch him anymore. Oh, wow. You're very warm. It's very, very warm. Okay, that's great. Thank you.
Starting point is 00:35:45 So we had Neo do some tasks in our office kitchen. We had it get you a drink from the office fridge, which it did pretty well. And we had it fill up a cup of water. We had it clean up some clutter on a table and put it in the trash. I would say it did most of these things pretty well and pretty quickly. Yeah, I think that that's right. I would also say that the very first thing we asked it to pick up and throw away was a pair
Starting point is 00:36:13 of tongs, which it did pick up, but then dropped on the floor. And when we asked it to pick things up off the floor, it could not do that. Yeah, so for a little more color and context there, we had asked if we could see Neo picking some blocks up off the floor, and it kind of reached down and tried to do a squat, but then it got sort of off balance and almost looked like it was going to fall backward, and they had to have, like, a human come in and kind of, like, keep it from falling. Now, afterwards, 1X told us that Neo can usually squat down and pick things up, but in this case, Neo was not properly calibrated for that, in part due to some Wi-Fi issues here at the
Starting point is 00:36:54 office. Yes, so that kind of limited the demo of what Neo could do for us. For example, it could not sit in a chair, which was something else that 1X has said that it can do, but couldn't do this time. Yeah. And of course, none of this is happening autonomously. There was a 1X employee who was in the other room using a VR headset to teleoperate Neo. But from my perspective, like it was smoother than I had anticipated, especially after watching Joanna Stern's video last week where it, you know, spent five minutes trying to open and close a dishwasher. They seem to have worked out some of the bugs, or at least we, maybe we gave it easier tasks. Yeah, I mean, you know, Kevin and I have seen other humanoid robots in our day.
Starting point is 00:37:38 We went down to Google's robot lab. That robot, I believe, was acting autonomously and did take longer to do everything. So I agree with you. Like, the Neo experience was pretty smooth. On the other hand, as you say, as far as I know, everything we saw was 100% teleoperated. We did not see the robot do a single thing autonomously. Yes. And they say it can do some autonomous tasks, including talking to you, but we were not able to see that feature because of some quirks in the Wi-Fi here at the Times office.
Starting point is 00:38:06 But they were able to kind of talk to us through the teleoperator. So the teleoperator was basically talking, and it was being transmitted out of the speakers inside Neo. Yeah, his name was Eric, and he was a great guy. Yeah. All right, Kevin. So tell me a little bit more about how that demo made you feel. Yeah, I mean, I got to say, like, that was wild for me.
Starting point is 00:38:27 Yeah. I have watched videos of robots from Tesla and Boston Dynamics and Figure, and I've seen videos of Neo. And, like, to me, there is kind of no comparison when you're standing there in a room next to a thing and it like turns to you and sticks out its hand and then shakes your hand. Like, I don't know, there was just kind of a cool
Starting point is 00:38:54 and bizarre experience for me there that I don't feel like the videos had captured. Yeah, I feel like if you've ever seen any sci-fi, you've seen somebody interacting with a robot and I had this moment today of being like, wow, like I am now sort of in a world where the robot is real and I'm interacting with it. And like, yes, it is teleoperated,
Starting point is 00:39:13 but there's also a lot of pretty interesting and powerful technology in here. It is able to complete, like, this, you know, certain set of tasks, and I wonder where it will go from here. Yeah, and, like, it did not feel dangerous to me. It felt a little, like, clunky and awkward at times, and there were definitely, like, things it couldn't do. But I never felt like I was in danger. I had it at one point try to grab your lapels and kind of threaten you in a very menacing way. So we can do this the easy way or the hard way. What's it going to be, Casey?
Starting point is 00:39:46 That's great. Look, I promise, Neo, I'll get you the cash tomorrow. Did you feel at all nervous when it was doing that to you? No, because, you know, we were surrounded by, like, eight employees of 1X, and I sort of assumed that it wasn't going to, you know, like brandish a knife at me or anything like that. There were moments where I found it a bit unsettling, because it does get into that uncanny valley a bit as it moves, where it's like a human enough that you can sort of suspend your disbelief, but then it's hirky jerky in other moments where you sort of wonder,
Starting point is 00:40:19 what exactly are this thing's intentions? Yeah, I also just find it less comfortable to, like, give orders to a humanoid robot than I do to, like, a chatbot or, like, a smart speaker. Like, part of me felt a little guilty. Like, I'm standing here. I could pick up this clutter. Why am I, like, telling Neo to do it? And so I just, yes, there's like an uncanny valley to the motion,
Starting point is 00:40:44 but there's also kind of an uncanny valley with, like, I'm giving a task to this robot, but the robot is really just a guy right now. And, like, eventually that will feel different when it's more autonomous. But right now it did just feel like a very inefficient way of picking up and putting down things. Yeah. I mean, I think you've said it just there.
Starting point is 00:41:05 Like, what we saw is both this very impressive piece of technology and like a very inefficient way of doing things. And I think the question for 1X is how quickly can they get from this moment of you have Eric controlling this thing with a VR headset to this is actually a helpful tool in my house that is just doing a lot of things autonomously. Do you think that people who get these Neos in their homes are going to start like customizing them, putting wigs or masks or clothes on them, like kind of wanting to make them even more human-like than they are? Let me say this. If there is ever a Neo in my home, it will be in full drag.
Starting point is 00:41:43 I'm talking wigs, heels, lipstick, the works. Well, Neo, thanks so much for stopping by. It's a real pleasure to meet you and your teleoperator, and I'll be seeing you in my dreams. Appreciate it. Thank you for having me. Thanks, Neo. Take care. After having seen the demo of Neo and interacted with the robot a little bit, do you want one of these for your home? No. Not, not anytime soon. Um, you know, I think Marques Brownlee made a great video about Neo where he basically just warned consumers: this company is making a lot of promises, and we don't really know if Neo is going to live up to them just yet. Bernt just told us, look, we're trying to be very transparent with you about what this thing can and cannot
Starting point is 00:42:32 do. And then I found even in our demo, there were certain demo things I expected it to be able to do, like sit in a chair, that it couldn't do. So I do think we're in a real buyer-beware situation. I think if you are an extreme bleeding-edge early adopter and you like having really weird and pretty expensive experiences with technology in your house, this may be something that you want to consider. But I would really manage my expectations about this. I think you might get Neo expecting to use it as a butler and find what you actually have is an intern that needs a lot of your attention and needs to be trained up. And you're going to find that you're working for this robot in a way that you did not expect. Totally. That was what kept coming up for
Starting point is 00:43:13 me was like this thing is just going to need so much handholding and oversight, especially in this sort of initial phase where it is mostly being teleoperated. It just feels like giving yourself a new chore rather than solving some of your existing chores. You know, here's where I will give it a bit of credit, and I wonder if you would agree with this. Recently on the show, we were making fun of Jony Ive for not actually having a plan to make AI hardware. I should say, as far as I know, that's a joke. They probably do have some sort of mock-ups of some sort of device that will eventually ship. But had Jony Ive come out at that developer conference that we went to recently and said,
Starting point is 00:43:50 I'm making a humanoid robot and you're going to stop looking at your screen, I would have been like, well, at least that's a plan. You know, like, that's something that I can visualize in my mind. And so I do want to give these folks a bit of credit for, like, taking a big swing. Right, there's like a huge gap between where they are today and where they want to go, but at the end of the day, there are a lot of people, like, if you could make this thing work, a lot of people would really get a lot out of it. Totally. I mean, as we were interacting with Neo, I was sort of flashing back to my first trip in a Google self-driving car, like, more than a decade ago. And at the time, these were like, there were safety drivers sort of in the car in case anything went wrong. They could only go along certain routes. They were, you know, they had not been sort of cleared for full autonomous driving. The first ride I took was in like a mall parking lot. Yes. Like that was sort of what they had to do in order to keep you safe. Yes. And like, you know, it was very limited. There were all these caveats you had to make if you were going to praise this thing. But at the same time,
Starting point is 00:44:53 I had the same feeling then that I had testing Neo, which is like, this thing is going to get good. I don't know when. I don't know if it's going to take three years or five years or 10 years or 20 years. Or if it will be this company that does it. Exactly. But this technology is clearly something that people are devoted to, that they are working on, that they are spending real money on, and that they are pursuing. And, like, maybe it won't take off, but maybe it will. Yeah, you know, the most famous AI paper, period, is probably "Attention Is All You Need," which is the paper that gets us the transformer, and it creates this insight, which is that if you want to make LLMs better, you just feed them absolutely massive amounts of data. And so far, that just kind of continues to scale.
Starting point is 00:45:38 What is interesting to me about Neo is that they are trying to take that insight and apply it to robotics. And they're saying, we are just going to create a system to gather as much robotics-relevant data as we can to build the robot butler that you actually want. And it might be that we can't actually get to that point unless we have these things in people's homes, picking up things off their floor, falling over when they try to squat, right? So it's just going to take a lot of trial and error and data collection to make something that is useful. Yeah, it is just also striking me as I was looking at Neo and testing it out. Like, there is no way that people are not going to fall for these things. Right? Like, you have this thing. It kind of looks like a human, it kind of acts like a human in certain circumstances, it has an incredible ass. Like, it's caked up as hell. You heard it here first.
Starting point is 00:46:34 I took a picture of it. I'm not even joking, I did. And I just think there is no universe in which people do not develop, uh, feelings for these things, and that, like, that worries me. That's true, but aren't there already, like, you know, series on TLC about people falling in love with a roll of paper towels or whatever? Paraphilias are a real thing. They're actually going to be on the show next week. Yeah, and I think that will probably end up being some, like, you know, upgrade that you have to pay for if, like, you want, you know, the same way that there's, like, regular Grok and, like, spicy Grok,
Starting point is 00:47:07 there will be, like, regular humanoid and spicy humanoid, and that'll be just a premium add-on. Yeah. Well, that's a fun thing to think about. I identify as an early adopter. I have all manner of gadgets and gizmos in my house. I think I'm not going to reserve a Neo just yet because, among other things, I would need to clear that with my wife and child, who would also need to live alongside this thing, and it would be gathering their data as well as mine. And so I just feel like it's kind of a, you kind of all have to be in to be in on this. But I'm intrigued, and I can totally envision a world in which, like, two or three or four versions from now, these things have gotten much more autonomous and I would actually want something
Starting point is 00:47:49 like this in my house. I think here's the Holy Trinity. If this thing can quickly and effectively do the laundry, run the dishwasher, and take out the trash, I think people actually would pay $500 a month for it. Yes. Like, there's some other, like, cleaning stuff in there that'd be really nice to have. There's maybe a couple other things that I'm not thinking about. But I think if you nailed those three things and you do them, and I truly just never have to think about them ever again, you can take your $500, Bernt. Totally. You know what the worst part of this would be if this actually gets good, Kevin? What?
Starting point is 00:48:19 I'd have to fire my actual butler, Alfred. He's been with me for 30 years. And I'd really miss him a lot, honestly. I just, I'd miss the hell out of the guy, but, you know, a deal's a deal. Yeah, you'd have to put on your own pants. Yeah, it'd be a shame. When we come back, HatGPT. Well, Kevin, one thing that robots still haven't yet been able to automate is HatGPT.
Starting point is 00:49:10 This is our favorite game. Sometimes we like to take headlines from the week, throw them on slips of paper, and put them in a hat, and then draw them out at random and talk about them. And when we've had enough of the other person talking, we say, stop generating. Do you want to start? I'd love to start. Okay, let me shake up the hat here. This first story comes to us from Alex Reisner at the Atlantic.
Starting point is 00:49:34 Common Crawl is doing the AI industry's dirty work. Alex reported that the nonprofit Common Crawl Foundation has basically scraped the entire internet, often without publisher permission, and when publishers have gone to them and said, hey, why don't you take our copyrighted material out of your data set that you're providing to all of the big AI labs, they actually have not been
Starting point is 00:49:56 complying. I love this story. I thought that was very interesting. You know, I hear about Common Crawl all the time. It's one of these very widely used data sets that was used to train GPT-3 and has been used to train many other models. And I thought it was like some esteemed nonprofit. Like, a common crawl?
Starting point is 00:50:12 non-profit, like a common crawl It's like for the common good, you know? It sounds very august. Yeah, so this story got a lot of journalists talking because Common Crawl's executive director, Rich Screnta, said, quote, the robots are people too, end quote, and should be able to read the books for free. Yeah, and I didn't actually know the history of Common Crawl here. It turns out it was started by a former Google employee who just like was kind of a techno-libertarian. This was during an age where like people were saying things like information should be free
Starting point is 00:50:44 And he just decided, you know what, it's not good if companies like Google have this sort of snapshot of the internet, but no one else does. And so I'm going to, like, scrape the internet and, like, make it available to anyone who wants it. Yeah, look, I mean, there are good reasons to want to preserve the internet. Like, large swaths of the internet have already disappeared. And I do think it's, like, an important part of our cultural heritage, et cetera. At the same time, when people come to you and say, hey, take our material out of your data set, you should just do that. Yes, you should do that. All right.
Starting point is 00:51:13 Stop generating. All right. This one comes to us from the Wall Street Journal. It is part of a story about Elon Musk and his AI companion at xAI. And it says that Elon Musk personally oversaw the design of a racy chatbot called Ani, an animated character with blonde pigtails and revealing outfits. Employees were compelled to turn over their biometric data to train Ani. And this is just wild.
Starting point is 00:51:42 Now, what do you mean by biometric data and how is it being used to train a chatbot? Well, according to this story, a company lawyer for xAI told a group of employees at a staff meeting in April that it would need to collect biometric data from them to train the chatbots on how to act and appear like human beings. They were apparently required to sign a form granting xAI a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license to use, reproduce, and distribute their faces and voices as part of a confidential project codenamed Project Skippy. Just imagine that you are a woman who has the misfortune of working for xAI, and HR brings you to the office, and they're like, well, the good news is we think your face would be perfect for our new project.
Starting point is 00:52:33 The bad news is that it's a sex bot, and this is not optional. God, I cannot wait for these people involved at this company to start writing their, like, gossipy tell-alls, you know? Like after every presidential administration, it's like, you know, here's what I saw in the White House. We need that but for AI labs, and we need it now. Yeah, I think we might need something stronger than a tell-all to deal with this xAI situation. But, yeah, obviously our condolences to anyone who's ever worked there for any reason. Stop generating. All right.
Starting point is 00:53:20 This next story comes to us from the New York Times. President Trump says he doesn't know the crypto billionaire he just pardoned. Kevin, I imagine you saw this. Trump last month granted a pardon to Changpeng Zhao, the billionaire founder of the cryptocurrency exchange Binance, the bisexual finance company, who had pleaded guilty to money-laundering violations in 2023 and whose company struck a business deal in May
Starting point is 00:53:44 involving the Trump family's crypto venture, but now the president claims he does not know who CZ is. Do you buy it, Kevin? I do, actually. I think probably someone hands him a list of people and said, you should pardon these guys. They're good guys. And he sort of says, okay.
Starting point is 00:54:00 Imagine just waking up and someone hands you a stack of paper and says, here are your pardons for today. And you think, I ain't going to read all that. I just find, like, obviously this is, you know, seems all kind of corrupt, and I'm sure we will learn more about how this pardon came to be. But I'm just fascinated by pardons. I feel like presidential pardons are kind of the closest thing to magic spells we have in this world, where you can just say, like, you go free, you go free. And, like, a person is just free. Yeah, it's very Oprah Winfrey-coded, you know?
Starting point is 00:54:30 It's like, look under your seat, you may have a pardon. Yes. Interesting way to run a country. Yeah. Anyways, stop generating. Okay, this is a good one. Coca-Cola injects holidays are-coming ads with an upgraded dose of AI. This comes to us from Katie Dayton at the Wall Street Journal.
Starting point is 00:54:49 Coca-Cola, which made an AI ad last year for its holiday season, has taken another stab at AI-generated advertising. This new ad is called Holidays Are Coming, and Coca-Cola's chief marketing officer told the Wall Street Journal that the standard ad-making process used to take a year, and now with AI, they can get it done in around a month. Should you watch it? Let's watch it.
Starting point is 00:55:29 Okay, I think we get the point. Now, Casey, what do you think of Coca-Cola's AI-generated holiday ad? Well, here's what I've learned is that if you took the point. a full year, you can make a good Coke ad, but if you only spend a month on it using AI, then it's bad. Yes, not the finest ad the Coca-Cola Corporation has produced,
Starting point is 00:55:50 but I bet it was cheap. Like, you can actually tell, like, all of the, like, animals in the ad, they just, like, kind of wobble a little bit. It's like, they have no, like, um, like, integrity in their animation. And so everything is just sort of, like, floaty and weird. I will say I, I'm very worried about this because,
Starting point is 00:56:06 um, you know, a lot of polar bears used to have jobs making ads for Coca-Cola, and what are they going to do now if AI takes those jobs? These are some of the first animals losing their jobs to automation. Are they supposed to starve? I mean, things weren't going great for the polar bears before this happened. Yeah, they got to unionize. Yeah, you heard what's going on with those glaciers. I love that all the comments on the video are like, you know, remember when Coca-Cola used to make
Starting point is 00:56:29 ads with soul? It's like, no, they're selling Coca-Cola. Yeah, I expect more out of this sugar syrup. Yes. How dare they? Okay, stop generating. All right. Oh, I accidentally grabbed two.
Starting point is 00:56:45 All right, this next one comes from Alexandra Marquez at NBC News. The White House has launched a spoof MySpace page mocking Democratic leaders over the shutdown. Did you visit this? No. Well, you can find it somehow, unbelievably to me, at whitehouse.gov slash my safe space. You haven't been to this? No. You have to pull this up.
Starting point is 00:57:07 Okay. It's literally insane. So you pull it up, and it is a parody of MySpace, a website that has arguably never been more relevant than it is today. And it opens to a sort of satirical profile page for Democratic leader Hakeem Jeffries. Okay, and on the About Me section, it says, hey, we're Democrats in the House and Senate. We love DEI, transgender for everyone, handing out taxpayer benefits to illegal immigrants. Okay, I think I get the point here.
Starting point is 00:57:42 If you asked me to, like, imitate a 13-year-old boy writing mean things about Democrats, like, this is basically what I would come up with. Like, under Hakeem's friends in his top eight, because remember on MySpace, it would show your top eight friends. This website literally, like, 17 years ago was when this was relevant at all. Anyways, Hakeem's top friends include George Soros and Antifa. Also, for some reason, the Chucky doll from, like, that horror franchise. Anyway, we wouldn't be talking about any of this, except it's literally on
Starting point is 00:58:11 Whitehouse.gov. It's so strange. Yes. Like the trolling is coming from inside the White House. Yeah. This is going to hit very hard with 41-year-old political staffers in Washington. I'm not sure this actually moves the needle for a lot of voters out there. Yes, completely embarrassing and bad. Another one? All right. Stop generating. Okay, this one is about Ilya Sutskever's surprising deposition. This comes to us
Starting point is 00:58:43 This is about a new 10-hour deposition in the legal proceedings of Elon Musk's lawsuit against Open AI. This transcript of this deposition by Ilya Sutskiver, the former chief scientist of Open AI, became public recently. And it details his sort of role
Starting point is 00:59:03 in Sam Altman's 2023 firing from that company, and some of the behind-the-scenes machinations that were going on at the time. Casey, this was a hot topic of gossip in the AI community this week. What did we learn from this deposition? Well, look, in some sense, a lot of this was already known. You know, we knew that Ilya had come to the board with concerns about Sam in 2023, and that had played a role in their decision to fire him. But in this deposition, we did get a lot more detail.
Starting point is 00:59:33 And the one that just stuck with me was that Ilya brought to the board a 52-page document containing various kinds of office misdeeds that Sam, you know, was alleged to have done. And it's important for this reason. Back in 2023, when we were, you know, doing an emergency podcast from the airport, the big narrative coming out of Sam's firing was these crazy effective-altruist doomers are out of control, and look, they blew up OpenAI for no reason. They're so scared of their own shadow, blah, blah, blah, blah. And this narrative persists today. You read this deposition, and what you learn is that the real story was that Sam Altman's own C-suite was rebelling against him in a very similar way that previous C-suites had
Starting point is 01:00:18 rebelled against him in his previous jobs. So I think that obviously this ship has sailed. Most people are no longer paying attention to the story, but for us real AI heads, we know actually do have a more or less complete understanding of why Sam was fired, and it had a lot to do with Ilya Sutskimer and Miramorati and a 52-page memo saying, here's what Sam did rock. Yeah, I mean, it was just like my AI group chats were melting down about this. It was so messy. This whole period, it just feels like a fever dream now that we're a couple of years removed
Starting point is 01:00:45 from it. But, man, what a wild time that was. What a bizarre turn of events. And to now start to understand some of the sort of plotting and scheming and plan-making behind all of this. I think despite how silly I think parts of this lawsuit are, I do like this quality of lawsuits, that they kind of surface interesting and valuable public information that would not have otherwise come to light. And so for that reason, and maybe that reason only, I am glad that Elon Musk brought this suit.
Starting point is 01:01:17 And speaking of that, Kevin, my understanding is that Mira Murati was recently deposed in the same lawsuit, which means that we should have another deposition to look forward to reading sometime soon. I enjoy reading transcripts of depositions because at least like 30% of them are just lawyers arguing with each other about what their client does and doesn't have to answer. This one has a lot of that. A lot of that. Very juicy. Screenwriters, a gift to screenwriters everywhere.
Starting point is 01:01:41 Yeah. Unfortunately, I think they finished filming that movie, but... Oh, okay. All right. Stop generating. Mm-hmm. All right. This next story comes to us from Ashley Belanger at Ars Technica.
Starting point is 01:01:56 Meta denies torrenting porn to train AI, saying the downloads were for personal use. I think I've heard that one before. Meta asked a U.S. District Court to toss a lawsuit alleging that the tech giant illegally torrented pornography to train AI, Kevin. The move comes after Strike Three Holdings discovered illegal downloads of its adult films on Meta corporate IP addresses, as well as other downloads that Meta allegedly concealed using a stealth network of hidden IP addresses. Strike Three brought a suit against Meta in July, seeking damages that have been estimated at more than $350 million. Now, we should say that Strike 3 has been accused of being something of a copyright troll.
Starting point is 01:02:36 The Guardian actually just had a big story about that this week as well. At the same time, Kevin, one does have to wonder, why was all that porn being downloaded at Meta's headquarters? Right. So, Meta's defense here is that these were just, like, employees or, you know, office guests or contractors who were downloading these pornographic movies on the Meta network somehow. And Meta says that they are not training models on porn, and I believe them. But on the other hand, how do you think Nasty Nancy got so nasty? I mean, I'm glad you brought it up, because I think Nasty Nancy, which is, of course, one of the very most popular chatbots on all of Meta's properties, has some pretty, you know,
Starting point is 01:03:17 steamy things to say, and she had to learn it somewhere. But look, obviously people are criticizing Meta over this, but I just want to say, like, you think creating the metaverse is easy? You know, like, the people there, they need to blow off some steam, and sometimes that just means downloading porn on their work computers. So, you know, if you want to have a metaverse, you're going to just have to allow for some of this. And I know you want a metaverse. Oh, I do. Yeah. I love that part of the Meta defense in this lawsuit trying to get it thrown out is that this was so little porn. Basically, like, they're saying, look, if we were going to steal your copyrighted data to train our AI systems, we would steal a lot of it. Yeah. When you look at our previous
Starting point is 01:03:55 copyright thefts, they've been hugely, hugely bigger, by orders of magnitude. Okay, stop generating. All right. Casey, this last one is near and dear to our hearts. This is a story that comes to us from my colleague Reggie Ugwu
Starting point is 01:04:11 at the New York Times, and it is about podcasting and AI. It's called For Podcasters, a Voice Clone Is a Double-Edged Sword. Reggie explores how AI voice clones from places like ElevenLabs are starting to be used by some podcasters and asks the question, are AI replicas a boon for productivity or a betrayal of the bond with listeners?
Starting point is 01:04:32 Casey, what do you make of this? Well, I think it kind of depends. I mean, there are just certain cases where I think it might make sense to use a voice clone. Like, I don't think we're going to be doing it on this show anytime soon. But look, you know, right now I send out an email newsletter. It's available only in text. Readers have been asking me for years to send out an audio version. Would they object to me doing it via a voice clone?
Starting point is 01:04:55 Some people have actually emailed me asking me to do just this. So I was glad to read Reggie's story because this is something that's been on my mind lately. Where do you think the line is for that? Like, you know, would you be comfortable letting an AI voice clone read an ad in your voice? I think that that doesn't feel good to me. I think that if some, you know, if you're reading an ad, part of what the advertiser is buying is like your sort of, you know, personal contribution to it. Where I could see it on a podcast is, you know, very occasionally will make a mistake when recording the show. usually it's Kevin, but then we'll have to go back and fix it afterwards, you know,
Starting point is 01:05:28 maybe I literally just had one or two of the wrong words. If you could use a voice clone to just quickly replace those words, I think that's probably okay. You probably want to disclose that to people somewhere along the process. But, you know, in general, I think, as always, just disclose what you're doing, and it'll probably be fine. Yeah, I think this has potential, but less for replacing the voices of podcasters than for allowing users to kind of swap in their own preferred voices. Like, maybe you like some of what Casey says on this podcast, but you'd rather it be
Starting point is 01:05:54 delivered in, like, the silky baritone of an Ira Glass-type figure. And you can just swap in Ira's voice. It's true. Problem solved. Actually, you want to hear a funny story about this? What? So I actually have been training a clone of my voice via ElevenLabs, just to kind of, like, see what it would be like to have it read my column. And I fed it a bunch of Hard Fork audio that our producer Whitney gave me for this purpose. And the problem is that in my newsletter, I sort of write in this typically sort of somber tone throughout, you know? And then in Hard Fork, I kind of amp up my personality and sound like a crazy person. And it turns out that when you have, like, crazy-person voice reading a somber column, I just sound insane. So I'm not actually able to use that voice clone. Yes. And if you want
Starting point is 01:06:33 Sad Casey in your AI voice pack, that's going to cost you an extra $7 a month. Yeah, that's a premium feature. All right. All right. Is there any more? And we're out of stories. And we're out of stories. That is HatGPT. Thanks for playing, everyone. Casey, before we go, let's make our AI disclosures. I work at the New York Times Company, which is suing OpenAI and Microsoft over alleged copyright violations. And my boyfriend works at Anthropic. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
Starting point is 01:07:09 This episode was fact-checked by Will Peischel and was engineered by Katie McMurran. Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com. Send us your fantasies involving humanoid robots. Or don't.
Starting point is 01:07:54 Thank you.
