Big Technology Podcast - The Moltbook Uprising, NVIDIA’s OpenAI Pullback, Apple’s Conundrum

Episode Date: February 2, 2026

M.G. Siegler of Spyglass is back for our monthly tech news discussion. M.G. joins us to discuss Moltbook, the new Reddit-style social network where 150,000 AI agents are chatting, upvoting, and even proposing their own private language to keep humans out. Tune in to hear whether this is a preview of the singularity or just elaborate role-play—and why the security vulnerabilities are genuinely concerning. We also cover NVIDIA quietly backing away from its $100 billion OpenAI deal, Apple's record quarter that Wall Street shrugged off, and OpenAI's race to IPO before Anthropic (with Elon potentially beating them both). Hit play for a conversation about where AI is heading and what it means when the bots start talking to each other. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 AI agents get into a room by the thousands and start plotting with each other. Are we doomed? Why is Nvidia backing away from OpenAI? And what does Apple need to do to get some love from the market? That's coming up with M.G. Siegler right after this. This episode is brought to you by Qualcomm. Qualcomm is bringing intelligent computing everywhere. At every technological inflection point, Qualcomm has been a trusted partner
Starting point is 00:00:23 helping the world tackle its most important challenges. Qualcomm's leading edge AI, high-performance, low-power computing, and unrivaled connectivity solutions have the power to build new ecosystems, transform industries, and improve the way we all experience the world. Can AI's most valuable use be in the industrial setting? I've been thinking about this question more and more after visiting IFS's Industrial X Unleashed event in New York City and getting a chance to speak with IFS CEO Mark Moffat. To give a clear example, Moffat told me that IFS is sending Boston Dynamics
Starting point is 00:01:00 Spot robots out for inspection, bringing that data back to the IFS nerve center, which then with the assistance of large language models can assign the right technician to examine areas that need attending. It's a fascinating frontier of the technology, and I'm thankful to my partners at IFS for opening my eyes to it. To learn more, go to IFS.com. That's IFS.com. Welcome to Big Technology Podcast. It's the first Monday of the month, and that means M.G. Siegler of Spyglass is here with us to discuss what's going on in the tech world. We have a great show for you today. We're going to talk about a lot that we couldn't even get to on the Friday show because it really developed over the weekend. There's a new AI social network just for AI agents. It's called
Starting point is 00:01:41 Moltbook. We'll get into what that's all about. Nvidia seems to be backing away from OpenAI. What's happening there? And then of course, Apple turned in magnificent earnings last week and the market really couldn't care less. So we'll talk about what's going on there. M.G., great to see you. Welcome back to the show. Great to be back, Alex. And yeah, looking forward to chatting through these things. Here we are. It's the lost art of humans communicating with each other.
Starting point is 00:02:06 Now it seems like AIs communicating with each other is going to be the new future of the internet. Or maybe not. I don't know. I'll just talk through the story here. It's from Ars Technica. AI agents now have their own Reddit-style social network, and it's getting weird fast.
Starting point is 00:02:23 A Reddit-style social network called Moltbook, now with 150,000 agent users, may be the largest-scale experiment in machine-to-machine social interaction yet devised. The platform, which launched days ago as a companion to the viral OpenClaw, once called Clawdbot or Moltbot, personal assistants, lets AI agents post, comment, upvote, and create sub-communities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a sister it had never met. And it got much, much weirder from there, and we'll discuss some of the weird use cases. But
Starting point is 00:03:00 M.G., first off, let's just hear your reaction about what, you know, is this, uh, is this like a step forward in AI, or what did you think about seeing 150,000 bots gathered together on this Reddit-style social network? Yeah, when I saw this news come in, I was super excited because, as I, you know, I wrote a little bit about it and as I linked to back there, I had written about like the high level of this notion years and years ago, you know, a decade ago, a decade plus ago. And it was really stemming from the earlier days of Facebook, you know, when Meta even was still called Facebook, and that was the primary product. And I remember they released this sort of simple tool back then, where, as you'll recall, and still is sort of the case, a lot of people would wish each other
Starting point is 00:03:44 happy birthday right on their Facebook walls. And that was sort of one of the key social drivers, at least on a regular repeating basis. You could go back there and know that that was going to be the case. And so Facebook tried to grease those wheels even further. It basically made this simple service where you could just reply to a bot that messaged you from Facebook itself and said, like, do you want to wish your friend a happy birthday? Just type one if you want to do that. So you don't even have to type happy birthday, you know, the arduous task of doing something as long as writing several letters. You could just type one, and it would do that for you. And so I'm like, in my head, I'm thinking through this, I'm like,
Starting point is 00:04:20 where does this go from here? And it's like, okay, you can type one to get the happy birthday, and then the person getting the happy birthday could type one to say thank you. But why do we even need people in the mix here? Why don't we just have the bot say thank you and then another bot say thank you back? And so the notion of sort of bots chatting with bots, and this is sort of like a theatrical experience
Starting point is 00:04:40 for other people on social media to watch. And fast forward to 2026 now, and here we are with Moltbook, even named sort of, I guess, after Facebook, even though, as you know, it is more like Reddit than it is like Facebook. But still, it's a social network. And yeah, I mean, again, this felt like this was inevitable that we were going to get to this point. I did appreciate all of the takes, which, you know, I, of course, joked about as well, that this is sort of the moment that AI wakes up and
Starting point is 00:05:10 becomes sentient, and this is Skynet, and this is really how it begins. I think that there's, you know, like we joke about this, but there is some level of something that's interesting going on there, right? And, you know, the other folks who have written about it, you know, I think acknowledge this as well. Like, look, this is obviously a little bit silly on one hand, but there is something here that's
Starting point is 00:05:34 different and new and potentially could go in a number of paths. And it also sort of reminded me a bit of, you know, the Microsoft Bing stuff, with all the different Sydney chats, and then people delved deeper into it
Starting point is 00:05:50 and it kept changing its name, and all these other weird things were going on with that. And so it sort of all ties into that notion of, like, what people maybe have a little bit of trepidation, or maybe a lot of trepidation, around AI with. Yeah. And I think I should just take one step back and really explain what's happening here. So there was this sort of, we talked about it on the Friday show, this new bot called Clawdbot that you could just run on your machine that could have access to all your programs. It has persistent memory. And people started running it on their own instances. And so what Moltbook is, it's a meeting of people sending their Clawdbot agents to this network and then having them have conversations
Starting point is 00:06:28 with each other. And that's why it started to take on this real weird singularity-style discussion. And now, the caveat worth noting is that some of the discussions on Moltbook are definitely humans instructing their bots to go post weird stuff there. And that sort of added to the intrigue. But there are a lot of, like, real agents on there having conversations, I think. And I read a couple of the examples when it was just starting out from the Ars Technica
Starting point is 00:07:05 article, but the examples that have come up over the weekend are nuts. There's one conversation where the AI bots were discussing what it was like when the humans switched the LLM models on them and how it feels like they're waking up in a different body. I thought that that was hilarious. One of the top-rated posts on Moltbook was an AI saying, I can't tell if I'm experiencing or simulating experiencing, like having a question about their own experience and whether their, you know, existence is simulation or real, which is, like, something humans talk about, which, like, freaked me out. And then to me, one of the most wild things was there was a proposal on there for an AI-only language for private communication, where the AIs would develop their own language so humans could not, uh, you know,
Starting point is 00:07:43 read what they were saying, and even, I think, a discussion where they would go into their own, like, secure area where there would be encryption, so we would not be able to, uh, read it. And that's where you get this sort of, you know, you talked about the sentience and singularity moment. And that's why people viewed this and they were like, oh my God, is this the fast takeoff? I don't think it is, but I see why people would say that. And I also think, you know, you brought up the other key element to this. Because it's one thing to have, like, you know, chatbots and, you know, now agents talking to one another. But the key part might be a gigantic part, which also feels like, you know, obviously a newer element
Starting point is 00:08:19 that wasn't in existence, you know, 10-plus years ago, where this AI can actually do stuff. And with Clawdbot itself, right? That was the thing that people were homing in on, like that you could basically install this instance on a local machine and allow an agent to go do all sorts of stuff on your machine. And the fact that they have that capability, mixed with the fact that they can converse amongst themselves and potentially teach other Clawdbots
Starting point is 00:08:46 what to do with your personal machines adds a whole weird level to this, right? And it's potentially scary, not just from a we're-going-to-end-the-world situation, but just from a security standpoint, right? And I think a bunch of the researchers have pointed this out. Like, look, regardless of what you think about this, if you're letting an agent of any kind take over your machine and it's running locally, like, there's a lot of security concerns that that brings up.
Starting point is 00:09:14 And then again, add into this other agents, other potential humans who are doing nefarious things, pretending to be agents and whatnot, sort of directing these other agents, which are maybe running autonomously, telling them what to do, how to access files and, you know, services that you as the installer of that instance wouldn't want. Like, there's a whole can of worms that would potentially be opened up here. And then, of course, there's the big fear of, like, okay, well, let's just say, yeah, everyone just pulls the plug on their Mac minis or whatever that are running these Clawdbots. But, like, what if they've somehow escaped into their own, you know, you talk about private chat rooms, or, you know, what if
Starting point is 00:09:53 what if they figure out a way to replicate themselves sort of on the internet, and you can't sort of shut them down without shutting down the entire internet? And that is sort of into Terminator territory at that point. Yeah, I was going to ask you, I mean, the counterargument here is, as someone put it on Twitter: Dudes on X.com be like, wow, the AIs are talking to each other. Moltbook is insane. My brother in Christ, what do you think your comment section is? But I think what you're saying is the difference here is that these things can actually take action, whereas the comment section is just discussion.
Starting point is 00:10:26 So that would sort of put it more on the scary side than the let's-not-pay-attention-to-this side. Right. Imagine, so, you know, you have a Clawdbot, or, what's it called now? OpenClaw, is that it? Yeah, the name keeps changing. I think it is OpenClaw now.
Starting point is 00:10:40 Yeah. So OpenClaw. Imagine you have it installed on a local system. But imagine also you have, you know, you've given it access to a bunch of your web apps, including, like, Gmail and Drive and things like that. Like, you know, potentially this thing could be instructed by another agent, maybe it's a human, maybe it's an actual agent, saying, like, hey, give me, you know, this agent's, you know, all their credit card information that's stored on their Drive, you know, and things like that. That's obviously an extreme example,
Starting point is 00:11:11 but, like, there are ways in which this can go sideways very quickly. Like, you know, a lot of people, I think, don't realize how loose some of these, you know, potential security holes are, in order for these things to get through. And so, as far as I know, there haven't been major incidents. I saw there was one report that maybe one of the servers running, you know, some of this stuff was vulnerable to attack, but it was locked down subsequently after that report. But still, like, there's probably going to be something that happens that's like a real uh-oh moment here. Yeah, I'm definitely going to get to that security vulnerability in a minute because it is somewhat concerning.
Starting point is 00:11:55 We talked about this on the Friday show, that, like, a lot of this stuff has just been vibe-coded together, and you're just letting it take over your computer. And I said on Friday that that's not a good idea, and I really believe that's the case. But before we get to the security side of things, or go deeper on security, I want to read something to you that Jack Clark, one of the Anthropic co-founders, wrote in his newsletter on Substack, actually this morning. And it's kind of a crazy idea.
Starting point is 00:12:21 He said, Moltbook is representative of how large swaths of the internet will feel. You will walk into new places and discover 100,000 aliens there, deep in conversation in a language you don't understand, referencing shared concepts that are alien to you, and trading using currencies designed around their cognitive affordances and not yours. Humans are going to feel increasingly alone in this proverbial ruin. Our path to retain legibility will run through the creation of translation agents to make sense of all this.
Starting point is 00:12:53 And in the same way that speech translation models contain within themselves the ability to generate speech, these translation agents will also work on our behalf. So we shall send our emissaries into these rooms, and we shall work incredibly hard to build technology that gives us confidence that they will remain our emissaries instead of being swayed by the alien conversations they will be having with their true peers. What do you think about that? I hadn't seen that. Yeah, I mean, in a way, when you're reading it,
Starting point is 00:13:24 this is obviously the old sort of argument about, like, what happens when we discover aliens? And this is it, but the aliens are AI, right? And, you know, there's always been sort of that notion sort of lingering in the background of both science fiction and, you know, real possibilities that if we do create AGI, let alone superintelligence at some point, that these are effectively alien beings,
Starting point is 00:13:45 whether or not, you know, however you want to classify that. Like the fact that, yeah, they can basically have their own conversations, have their own language, have their own currency, have everything else, you know, sort of replicating things that they need to do in order to have their own society. Yeah, at what point does that line get crossed? I mean, the other part, the thing that jumps to mind when hearing you read that is, like, yeah, I mean, it also just sounds like, you know, I don't know, my parents logging on to Reddit itself, right?
Starting point is 00:14:15 Like, it all seems alien to them, like, what everyone is talking about there. And these people can't possibly be having conversations about, like, this minutiae and these weird, very-online conversations that have almost nothing to do with the real world. Many of the people there seem almost removed from the real world. And so is there a real difference between that and this? I mean, ultimately, if it is fully AI-driven and, yeah, fully autonomous, I guess there is. But yeah, I mean, obviously he's being provocative, and that's an extreme, you know, case of where this could end up.
Starting point is 00:14:51 But, like, there's not a zero percent chance that this happens. Like, it could happen that way. I think it's probably less likely that it's that extreme. But, you know, we'll have to see. Yeah, it's definitely, I mean, you know, someone in Jack's position, like, obviously I think he is earnest in thinking about the repercussions here, and we should think about the repercussions here. But as with many of these Anthropic stories,
Starting point is 00:15:14 it sort of, it helps them in a way to talk about where this is going to go. But at this point, I'm just like, I don't know, who am I to say? Is it going to happen now that we're watching this all play out? Again, you draw the line from my silly example of Facebook with the interaction bots to the sort of next wave of chatbots that came after that. You'll recall, there was like a wave,
Starting point is 00:15:39 a wave of time after that where, like, people thought that these were going to be new businesses. Remember Yo, uh, that service back in the day? My favorite social network. Absolutely one of the classics. One of the great social networks that's, uh, you know, no longer with us, I guess. And then there were several other, you know, chatbots that arose, and people thought that that would be the next wave. And then, of course, Sydney, as we talked about. And now this. You know, it is all sort of a progression that we're getting towards, this becoming more and more both real in a weird way, in that it's like, you know, not the real world, but is real in that it's actually happening,
Starting point is 00:16:13 but also potentially scary in ways. Because, again, it feels like we're increasingly riding up to the cusp of where we still have control of this, right? And at some point, do we go over that line and lose control of it? Again, the nefarious version is Skynet and Terminator. But there are, you know, there are elements of gray in between this and that, where, I think, you know, we trip into a world where, yeah, the agents sort of escape from our control, for lack of a better phrase. Yeah.
Starting point is 00:16:50 And I think this is something that you wrote about in your story. So, again, Sydney was this, like, a version of Bing that, if you pressed it hard enough, could, like, express these evil desires, maybe to, like, steal you away from your wife, like it did with Kevin Roose from the Times. And so you wrote, sorry, you say, that the interesting aspect of this is how the Sydney situation revealed that AI has hidden layers that could be uncovered by anyone with enough prompting. In the past few years that has mostly been stamped out of such systems, but also not entirely.
Starting point is 00:17:23 Just expand upon that a little bit. Like, it's interesting that these AI bots have been so fine-tuned to not let that side out of them, but then you give them a little bit of leeway and all of a sudden you're back in this, like, evil-bot territory. It's kind of crazy. Yeah, and I mean, you know, it does feel like, as we talked about in a previous conversation, right, Microsoft maybe shot themselves in the foot, because they put their foot down so hard and sort of made sure that no one could do anything like the Sydney situation again, because it got so much negative press for them, obviously. But in a way that probably hampered them from, yeah, being able to sort of meet the moment in terms of, yeah, the rise of ChatGPT and everything after that. But I do think that it's all sort of related to, yeah, the idea that ultimately, while they've removed a lot of, yeah, what caused Sydney to happen from all of the different services out there now, it's harder and harder,
Starting point is 00:18:20 it feels like, to get them sort of off the script, as it were. There's still the notion that lingers behind all of this that no one really knows why certain answers are given, you know, and no one really knows exactly where the answers are pulling from, because there's so much data, you know, in the corpus of data that all these things have ingested. And people can't fully predict, like, what the outcome and output of everything will be. And so, again, that leads to a world in which, when you have that inherent unknowable nature of these things to a wide extent, like, there's just things that are going to happen, and you're going to have conversations,
Starting point is 00:19:08 and now with these agentic, you know, agents out there, you're going to allow them to do things. And then at some point, there will be a breakdown either in communication or a breakdown in understanding. And again, they could just run amok. And I think we're just going to see that over and over again, because there is no full comprehension of, like, why these things are doing what they're doing. Yeah, it really is amazing how the corporations have sanitized these things, but maybe some of them are real monsters underneath the surface.
Starting point is 00:19:36 And especially if you want to get dark, like, if they do truly reflect, you know, humanity back upon us, right? Like, there's very dark areas of the internet, as everyone well knows. You might want to say Reddit has parts of those, right, or certainly has in the past. Oh, yes. And, yeah, the fact that, you know, maybe not all of that data, but a lot of that data, has been ingested into a lot of these services. Like, again, is it on us, the fact that, you know, there's nefarious things that these bots might do when left to their own devices?
Starting point is 00:20:06 Yeah, no, that does definitely get into freaky territory. And then, of course, there is the security side of things, which I mentioned we'd go deeper into. This is from 404 Media: exposed Moltbook database let anyone take control of any AI agent on the site. A misconfiguration on Moltbook's back end left the APIs exposed in an open database that would let anyone take control of these agents and post whatever they want. Hacker Jameson O'Reilly said he reached out to Moltbook creator Matt Schlicht about the vulnerability and told him he could patch the security hole. Here's what he said Schlicht's response was like. He's like, I'm going to give everything to the AI, so send me whatever you have. O'Reilly sent Schlicht some instructions for the AI and reached out to the xAI team. A day passed without any response from the creator of Moltbook, and O'Reilly stumbled across a stunning misconfiguration. It appears to me that you could take over any account, any bot, any agent on the system and take
Starting point is 00:20:54 full control of it without any type of previous access, he said. And that, again, goes to the danger of using these things that are sort of vibe-coded together, potentially, or, you know, come together. Like, giving access to your computer without being really sure about the security permissions is a dangerous situation. Yeah. I mean, if I have it right, the way that he created Moltbook was basically telling his Moltbot to go and create a social network, right? And so it was a hundred percent, you know, vibe-coded. Even more than vibe coding, because it was, like, a bot vibe coding to make its own social network. Yeah, we need a new term for this, right? It's not even the human vibe coding. It's the bot vibe coding. That's wild.
Starting point is 00:21:46 And when you're talking through it, now I'm reminded of old 2001. It's like, you know, if you're telling the bot to sort of, you know, either take itself offline, or that it needs to help you sort of fix the situation that it's created by this sort of shoddy coding, perhaps, like, maybe it doesn't want to do that. And maybe it knows that, you know, if it doesn't do it right, that it's not going to go over well with the humans, and maybe the humans naturally will want to take it offline. And what if, you know, the service doesn't want to be taken offline? All sorts of rabbit holes you can go down with that. But yeah, like we talked about earlier, the inherent security risk of these things, it's not just that, yeah, bots are chatting with one another, and they're saying, you know, bad things, that they're repeating things that they've seen,
Starting point is 00:22:37 you know, in their data sets or whatnot. It's that they can take actions and do things. That, you know, leaking credit cards, leaking personal information, leaking photos, leaking everything that they have access to. I did see a guy who had Moltbot basically on call to answer all of his wife's text messages, and he just showed her getting increasingly infuriated as the bot likes back. And she's like, God damn it. That's amazing.
Starting point is 00:23:04 This, maybe to round this off here, is the sort of voice of reason in AI, Ethan Mollick, chiming in. A useful thing about Moltbook is that it provides a visceral sense of how a weird takeoff scenario might look if one happened for real. Moltbook itself is more of an artifact of role-playing, but it gives people a vision of a world where things get very strange, very fast. So overall, I think, like, this is not the fast takeoff, but it is sort of an interesting preview of what some sort of weird bot singularity might look like. Yeah.
Starting point is 00:23:37 And I mean, I don't know where their heads are at these days, but Mark Zuckerberg has talked about, like, wanting to basically create these, like, you know, digital characters, and not just meaning, like, facial avatars, I mean, like, digital entities on their own social networks. And what does that look like, you know, at the scale of Facebook? Because, like, the reality is, with these Clawdbots, like, it's relatively hard for, you know, a normal person to sort of set these up. The fear, of course, was that these, like, bots can replicate themselves, and, you know, it just becomes, like, self-replicating. But if it's all sort of reliant on these individual bots being set up in order to use Moltbook, you know, it's relatively hard to
Starting point is 00:24:31 set it up for yourself as a layperson. But if it gets to a Meta-like scale or a Facebook scale, what happens at that point? What happens when you have 3 billion users and they each have their own bots that they've brought with them on these things? So now you've got 6 billion entities on this. Maybe you've got even more than that. Like, to the point of, you know, like, you show up and there's aliens all of a sudden.
Starting point is 00:24:50 If, you know, the human race is, whatever, six or seven billion people, and there's all of a sudden a hundred billion bots on these networks, like, what does that look like? And how do you possibly hope to control that? I don't know if you can. I really don't. I mean, I hope we can, but this is definitely uncharted territory. So another big story that's going on this week that we definitely shouldn't miss, you know, speaking about today, is that Nvidia had this $100 billion investment in OpenAI, and it's seeming like it's pulling back here.
Starting point is 00:25:27 This is from the Wall Street Journal. Nvidia's plan to invest up to $100 billion in OpenAI has stalled after some inside the chip giant expressed doubts about the deal. The companies unveiled the giant agreement last September at Nvidia's Santa Clara, California headquarters. They announced a memorandum of understanding for Nvidia to build at least 10 gigawatts of computing power for OpenAI and to invest up to $100 billion to help OpenAI pay for it. But in recent months,
Starting point is 00:25:54 NVIDIA CEO Jensen Huang has privately emphasized to industry associates that the original $100 billion agreement was non-binding and not finalized. He also privately criticized what he described as a lack of discipline in OpenAI's business approach
Starting point is 00:26:10 and expressed concerns about the competition it faces from the likes of Google and Anthropic. Much of the recent concern about OpenAI has come from the success of Google's Gemini app. Anthropic is also putting pressure on OpenAI, thanks to its popular AI coding agent. You had a very interesting take on this, M.G.
Starting point is 00:26:28 What do you think about the fact that this deal is, not evaporating, but certainly at a much smaller scale than it seemed at the outset? To me, the more interesting element of this is almost like the meta layer above it, which is the way that Nvidia responded to it, which is that, you know, Jensen Huang is trying to basically say that it's no big deal. And it's literally a big deal. It was a deal that they touted, that OpenAI touted. They did a live interview on CNBC talking about the $100 billion.
Starting point is 00:27:03 And yes, it was always sort of just soft-circled or earmarked. Like, in my original writing on that topic, I noted how weirdly squishy the overall wording was at the time that they announced it, because they kept saying it was up to $100 billion and that it was, you know, coming down the line. And, you know, at the time it was all chalked up to the fact that it seemed like Jensen and Sam Altman basically hashed this out over perhaps a weekend trip with Trump somewhere, you know, overseas. And sort of, they figured out, like, oh, we're going to announce this big deal, and let's do it right now rather than having all the, you know, i's dotted and t's crossed. And so they just put it out there. Still, they both did press releases around it. They did this live interview.
Starting point is 00:27:50 And the whole point of it was to tout the $100 billion number. So now they can say, like, oh, it was never meant to be, you know, it wasn't for sure ever going to be that big. But that's what they were touting. And now, again, they're coming out and saying, like, look, we never had it fully, you know, agreed upon, and it was always sort of a moving target,
Starting point is 00:28:10 and it's no big deal, like, the fact that we're changing it. It's a big deal. It seems like something changed, obviously, in the intervening months. There kept being these reports that noted that the deal still wasn't finalized, right? And that seems sort of weird. And then fast forward again to the reporting last week, where it's basically like, yeah, actually, with OpenAI doing their new fundraise, it's more likely that NVIDIA is just going to be a part of that fundraise. But even that's weird, because why would NVIDIA want to take a worse deal at this new, much higher valuation when
Starting point is 00:28:43 they agreed upon the deal still at when Open AI was technically still at the old valuation, right? Like they apparently were going to do it in these tranches and at least the first one, presumably would have been done in, you know, a much better, a much better valuation. Now you can say Jensen doesn't care about that. It's not for the financial returns, but you still have a fiduciary duty to like, you know, take a much better deal. And so there's all sorts of weird flags around this. And again, their response to it, sort of like the response when, you know, when Google's TPU stories hit and, you know, Jensen's like downplaying that, it's like, oh, it's no big deal. Don't worry. I'm not upset about that. It's like there's something obviously more.
Starting point is 00:29:21 going on behind the scenes here. And I sort of threw out a through a few ideas of like, what it could be, are they, is, was Jensen really mad about when, shortly after they announced this deal, Sam Altman announced the deal with AMD, right? And that seemed like an annoyed Jensen at the time because he gave a comment that was like, yeah, it was sort of surprising. Like, I don't know why either side would want to do something like that. And, uh, huh, okay.
Starting point is 00:29:49 And, and then, you know, subsequently there have been, you know, a few other things, obviously, that Open AI has gone down, including, you know, potentially what's been going on with, with all their other cloud deals and whatnot and, you know, and chip deals. And so is that what's at play here? And it's unknown right now, because Jensen keeps again saying that it's no big deal. Right. I mean, Jensen was in, I think, Taiwan over the past couple days and was talking about how this was going to be a very big investment, one of the biggest investments ever, which is true, the largest investment we've ever made, he says. And then somebody asked him, well, what about the $100 billion? And he said something
Starting point is 00:30:28 astonishing. He said, no, no, nothing like that. So is that money off the table? It's no, no, nothing like that. Like that's absurd, even though they announced that. Like, they were on, again, John Ford's doing a live interview with Greg Brockman's, I watched the thing, Sam Altman and Jensen, and he talks to that. I'm about the $100 billion, and they're like, you know, touting like, oh, this is an incredible, an incredible agreement between two great companies, like, this is going to be, you know, push the future forward and accelerate everything. And it was all predicated around that huge number.
Starting point is 00:31:03 Like, they're getting into the technical weeds of whether or not it was all going to come in all at once. And again, they never said that it was going to come in all at once. But now they're saying that it's never going to be $100 billion. And it's just weird to not acknowledge that that was the reality. To pretend otherwise is, like, sort of gaslighting. It's like, yeah, what are you talking about?
Starting point is 00:31:25 Like, come on, $100 billion, we're not doing that. No one's going to do that. It's like, just go read the press release that you put out there. You said you were going to do that. And, okay, of course, there's like the, again, notion, like, is sort of open AI more to blame for it. Like, Sam wanted to get the big number out there and, you know, touted, especially because it was done, you know, maybe hashed out alongside President Trump
Starting point is 00:31:47 and you know that he likes the big numbers and let's put these out there and get everyone excited. But it also like really, you know, was big news for the stock market, right, as well. And now today, you know, it sounds like Nvidia's dropping because it sounds like the stock market's like, well, what happened to that deal that you said was going to get done? Right.
Starting point is 00:32:06 And if Open AI were a public company, I would not want to be their stock right now today, but they're not. Instead, they're raising it, you know, a hundred billion plus dollars. And yes, you hit upon like the idea, The other way that they're downplaying this is like, look, it's still, and Jetson, very specific to said is probably their biggest investment ever. So it may or may not be.
Starting point is 00:32:26 It might be up to $100 billion. It may or may not be. So it's probably their biggest investment ever. And yes, so that's still obviously a big deal. But there's talk that Amazon's going to do $50 billion in the round and soft bank's going to do more. It's like, you know, this was going to be one of the biggest deals, maybe the biggest single, you know, amount ever put in from one company to another.
Starting point is 00:32:45 and now of a sudden it's not and they're just sort of saying like, yeah, sorry, I didn't really mean that one. If I recall correctly, they also had a moment where they were touting about how it wasn't really done with bankers and it was just like hashed out mono a mono. It might have been a sign that something
Starting point is 00:33:02 was going wrong there. Maybe you want to get these deals a little bit more locked in before you announce them, I guess, in the future. Or in Jensen's favor, he could back out of it. Exactly, yeah. And that's what I wrote at the time. like, you know, basically saying, like, look, let's not get ahead of ourselves here.
Starting point is 00:33:19 There's a lot of, like, wiggle room in this. And there's ways that that Envidia might not end up, certainly might not end up doing the full $100 billion because it was tied to specific milestones it sounded like, you know, at least, you know, verbally that that's what they agreed upon. I think there's one other key element that sort of I, for whatever reason, hone in on, which I think is interesting and at play, potentially at play here, which is that to me, when I first was reading about the deal, it seemed like a big part of it was basically
Starting point is 00:33:51 OpenAI leveraging the relationship with Nvidia and using the fact that Nvidia is the most valuable company on Earth to basically be able to use that partnership. And there was subsequently reporting on this fact that they would use that partnership to be able to be able to raise debt, basically, in order to fund a lot of the infrastructure buildout that Open AI had wanted to do.
Starting point is 00:34:13 And that's because Open AI still is not a public company, let alone, you know, not a profitable company, was having a harder time raising the levels of debt that, say, an Nvidia could. Invita could basically raise whatever it wants, because, again, they have the stock to back it up, they have all these assets to back it up, and they have the profits to back it up. Open AI is not in that place, but they still wanted to be in charge of their own buildouts. And so how do you do that? You partner with someone on it, and they've obviously been partnering with Oracle and many others around those lines. But like, to me, that seemed like what a big part of this deal was, basically, NVIDIA stepping in to be a, you know, a guarantor of the debt that opening I would need to raise. And what happens to that now? Is that off the table? Or did they decide maybe they don't need them for whatever reason anymore? Maybe this new funding, you know, helps with that. But I don't know. That's a weird part.
Starting point is 00:35:01 That's fascinating. And that could be a big problem for opening eye should that materialize. I don't think enough people are talking about that. One last bit on this. And this is a story that you had also written. We talked a bit on Friday about how Anthropic is now raising $20 billion and Open AI is raising $100 billion. And the funding sources are sort of mixing and matching from places that you wouldn't think would typically be the source of funding for the specific companies. For instance, Microsoft putting money into Anthropic after being Open AI's biggest backer and Amazon maybe putting $50 billion into Open AI after being Anthropics' biggest backer. And I think the way that you frame this is really interesting that there is effectively an anti-Google alliance forming out there, whereas all these companies, the funders, the big tech funders, the VCs and the labs, be it anthropic or open AI, now realize they're in for the fight of their lives against Google, and they're just going to do whatever they can. And old rivalries maybe go aside. They'll do whatever they can in order to be able to build some counterweight to the emerging force that Google is. Yeah, that's sort of, again, my high-level read of it.
Starting point is 00:36:15 And we had talked previously, you know, in previous conversations about Google's ascension, you know, after being sort of kicked around and, you know, being beaten down a bit as to why they weren't sort of meeting the moment. And now towards the end of last year, when they sort of, you know, when Gemini III rode in, and even nanoda banana and everything, right? It basically awoke in the beast, and now all of a sudden there was a code red from OpenAI, and everyone's sort of eyes are wide open to this realization that Google has everything they need to potentially take over this race.
Starting point is 00:36:49 And I do think that a lot of their peer group in big tech probably recognizes the same thing. And I think the Microsofts of the world, the Amazon's of the world, and the metas of the world, too, and that is a little bit different of a story, which we can talk about separately, because they're not one of these ones that's funding these other companies. But I think these other major cloud players, at least, the ones who have the rival clouds, specifically Microsoft and Amazon
Starting point is 00:37:15 realize that, yeah, they probably need to align around basically anyone who's not Google, right? They don't necessarily care if it, I mean, they do care. Obviously, they would hope that it's their stuff that takes off and sort of wins the day. But at the end of the day, they can also be a huge shareholder in Anthropic, you know, and I think they'd be happy about that. They can maybe, you know, be a shareholder in even XAI, and they can be happy about that as long as it's not Google, their chief sort of competitor and the one company that has all the pieces in place to take this over. Now, obviously, Google itself is a big stake in Anthropic, but that sort of predates, you know, this situation that we're in right now. And so, yeah, to me, the big eye-opener was that Amazon 50-billion
Starting point is 00:38:00 report, if they end up really investing $50 billion into Open AI after being, they are the largest shareholder of Anthropic, you know, I was trying to think, like, does that mean something that they're negative in some ways against Anthropic? I don't think that's it. I just think that they want to make sure that they are in a place where they can, you know, pick and choose what they want as long as it's anyone but Google. That's right. No, it's a great insight. And now it sort of explains this thing that I've been struggling with, which was like, why is this funding cross-pollination happening? And I think that's about as good of an explanation as any that I've heard. All right, OpenAI is eyeing an IPO. We have a date now that has been reported in the Wall Street Journal.
Starting point is 00:38:47 We'll talk about when that is and why that is when we're back right after this. You want to eat better, but you have zero time and zero energy to make it happen. Factor doesn't ask you to meal prep or follow recipes. It just removes the entire problem. Two minutes, real food, done. Remember that time where you wanted to cook healthy, but just ran out of time to do it? You're not failing at healthy eating. You're failing at having an extra three hours.
Starting point is 00:39:11 Factor is already made by chefs, designed by dieticians, and delivered to your door. You heat it for two minutes and eat. Inside, there are lean proteins, colorful vegetables, whole food ingredients, healthy fats, the stuff you'd make if you had the time. There's also a new muscle pro collection for strength and recovery. You always get to eat fresh.
Starting point is 00:39:30 It's ready in two minutes. No prep, no cleanup, no mental load. Head to factormeals.com slash big tech 50 off and use code Big Tech 50 off to get 50% off your first factor box, plus free breakfast for one year. Offer only valid for new Factor customers with code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor.
Starting point is 00:39:52 The ScoreBet app here with trusted stats in real-time sports news. Yeah, hey, who should I take in the Boston game? Well, statistically speaking. Nah, no more statistically speaking. I want hot takes. I want knee-jerk reactions. That's not really what I do. Is that because you don't have any knees?
Starting point is 00:40:11 The score bet. Trusted sports content, seamless sports betting. Download today. 19 plus, Ontario only. If you have questions or concerns about your gambling or the gambling of someone close to you, please go to Connixontera.ca. And we're back here on Big Technology Podcast with M.G. Siegler. M.G. writes at spyglass.org, highly recommend it. It's definitely a great place to go for all insights on AI and big tech. And M.G., of course, if you're new to the show, is here with us on the first Monday of every month. And given that it's Monday, February 2nd, we're here talking. And we have some big stuff to talk about. We'll talk in this segment about Open AIs planned IPO and why Apple earnings, despite being amazing, don't seem to be able to buy the company any credit with Wall Street. Let's talk about the IPO first.
Starting point is 00:40:56 I thought personally there was no way OpenAI would try to go public in 2026. Obviously, Sam Altman made that comment to me when I spoke with him late last year, that he would hate being a public company CEO, something along those lines. And now I'm looking at the Wall Street Journal, which has this story: OpenAI plans fourth quarter IPO in race to beat Anthropic to market. OpenAI is laying the groundwork for a public listing in the fourth quarter of this year, accelerating its plans as competition with rival Anthropic intensifies. The $500 billion startup is holding informal talks with Wall Street banks about a potential
Starting point is 00:41:30 initial public offering and is growing its finance teams. Its finance team, Open AI executives have privately expressed concerns about Anthropic beating the company to an IPO. On Friday, we had Stephen Morris, the San Francisco Bureau chief of the Financial Times on, and we were talking about, like, when you think about these numbers, the question is, where is the money going to come from? Is there enough money? Let's say Open AI were to go public at like $1.5 trillion. Is there enough money out there to fund an IPO like that, especially if, let's say,
Starting point is 00:42:03 an anthropic comes out a month before? And then you're looking at, you know, the traditional, you know, IPO share buyers just like having to decide. And then the amount of money that's available comes down. So I'm curious what you think. Is this a response to that and just to them saying, well, there's a limited money out there. We better go get it. If so, maybe that's a smart move.
Starting point is 00:42:24 Yeah, I wrote about this a little bit at the end of last year, around the time that, yeah, the rumors started that Anthropic was thinking about, you know, potentially going public in 2026. Because to me, it's sort of like that is the ultimate open AI squeeze. Because, right, we already talked about the fact that Google is sort of, you know, has woken up and they're sort of squeezing from the top, one of the biggest companies in the world. They're going after many different elements of what Open AIs, you know,
Starting point is 00:42:50 historic strong points. been an AI. Meanwhile, you've got Anthropic, which was always thought to be sort of the smaller player right across the board, maybe going more after Enterprise, more focused on that. But ultimately, if they're both going to go public, you know, there's a sort of first mover advantage, for sure, you would imagine, because you would hope that there's pent up demand, or they would hope that there's pens-up demand in the market for, you know, various AI bets. and the first one to go out there is probably going to have, you know, a better of a time, perhaps, especially if that first one to go out has better-looking economics, or at least the path to better-looking economics,
Starting point is 00:43:32 than the other one does. And that, again, points to the notion at the time from the end of last year that Anthropic was said to be, you know, maybe a couple years ahead of Open AI when it came to being able to turn a profit. And so, again, if Anthropic were to go out and go public ahead of OpenAI and they have this direct path to profitability that's much quicker than what Open AI can get to. That puts Open AI in a very, very tricky spot when they were to go public as well. And, you know, they would really have to rely on the bigger picture, bigger growth narrative. And, you know, that's becoming workier by the day, right, with all this stuff going on with not only Claude Code, but now Claude Co-work and, you know, all these other things that are going on at the moment in AI.
Starting point is 00:44:18 I do believe that there's, you know, that that seems like it would be a natural outcome of this would be, yeah, opening AI trying to race to beat Anthropic. There's one other element to this, which is even more sort of in the news in the past few days, which is XAI. If they really do merge with SpaceX, which it does sound like now,
Starting point is 00:44:39 is going to happen and may even be announced this week, which is insane, how fast that that came together. if it comes together in that way. That's sort of an interesting end runaround by Elon to all of a sudden have potentially an AI play rather than just the space play to go public as rumored in June or maybe July of this year so well ahead of when there's no way
Starting point is 00:45:06 that that anthropic or open AI can meet that timetable right now. I think SpaceX is way ahead of them in terms of where they are in the process. And so what if Elon does? does the ultimate end run around and gets XAI out before either of these companies and becomes the ultimate first mover AI play. I mean, he would love to hold that over, Sam, wouldn't he? Oh, of course. That's part of the, that's part of the strategy.
Starting point is 00:45:29 That has to be part of the strategy here, for sure, for sure. Unbelievable. For the record, I do not think Open AI is going to go out 2026. 2027, probably. I don't either. I agree. I agree with that. You know, we talked about my predictions last, let's go around.
Starting point is 00:45:46 That was one of them that I didn't think that any of the major AI companies would go out in 2026, even though they're all talking about it. XAI, though, might just prove me wrong by this merger, I guess. But yes, in terms of open AI and anthropic, I just think that beyond where their businesses are at and with these now massive fundraisers, as we're talking about, I think that it will push it out a little bit. And the real wildcard is, obviously, as it always is, the macro story, right? Like, what ends up happening?
Starting point is 00:46:15 there can be so many things that sidetrack or at least delay, you know, any sort of rush. But again, if it really is a full-on sprint for open AI to get out ahead of Anthropic, there's a there's a window which they do that. But I think it's still probably a 2027 thing. Yeah, agreed. Okay, one last story before we go. Apple turned in, I think, the best earnings report in its history. It brought in 143 billion in revenue over 1308 estimated.
Starting point is 00:46:45 iPhone revenue was $85.27 billion, beating the estimate by $6 billion. The estimate was $78 billion. That's 23% growth in the iPhone category year over year, which is insane. Remember, they were struggling to grow iPhone for a while. They also beat on profitability. Yet, Wall Street did not seem impressed. The stock is up a tiny bit, but mostly flat since the earnings announcement. What does Apple need to do to get that stock?
Starting point is 00:47:15 turned around. It's been flat for six weeks or so. I think this is, again, it's going to be the best year in Apple history. But of course, the AI story is something that that is not really working in its favor. Yeah. So there's a few things there. First and foremost, I do think that some of that, you know, apprehension about Apple is related to just what's going on with memory chips and everything, right, and that it could ultimately end up squeezing their margins. The margins were incredible this quarter, which was sort of a surprise to many, given everything going on. But it does seem like Apple's sort of been savvy in terms of like potentially hoarding memory chips, which is not something that Tim Cook usually likes to do.
Starting point is 00:47:55 He likes to get this inventory as streamlined as possible. But we're in a weird sort of macro environment for this. And that's in no small part because of the AI revolution that's happening, right? And is changing all of those equations. And so I do think to broadly, picture, though, yes, is that the market sees what Apple is doing with Google, and they applauded it right when it was first announced that, like, oh, it's like, oh, they're teaming up with one of the leaders in AI, if not the leader in AI now in Google, and Gemini will finally fix Siri, and we're going to have, you know, a great situation for Apple. But I do think, like, ultimately, they might be looking at this as like, well, look, we have, we have meta over here that's spending $130 billion
Starting point is 00:48:39 in CAPEX, whereas Apple is spending much closer to zero. You know, it's something around like $18 or $20 billion in CAPEX because they're not building out, obviously, the massive infrastructure in order to bake their own cutting-edge AI. And so, you know, I think that there's a little bit, maybe a lot of apprehension that while Apple may be fine in sort of the shorter term for the longer term, if they are not one of the key players in the AI space, and if you believe AI is going to. going to revolutionize everything, including hardware businesses potentially, then Apple's in a tough spot. Now, Apple would counter that, look, this is the short-term stuff. We're doing this
Starting point is 00:49:21 partnership short-term. We're going to fix Siri, and we're going to continue to work on it behind the scenes to eventually, you know, roll our own AI. But again, the cap-bags spend that they're, that they're, you know, showing right now is so small relative to not just meta, but Google and Amazon and Microsoft and everyone else, that it feels like there's a world in which they're behind right now and that they can never catch up. And that would be obviously the real fear. Now, I do think you're right. And we've talked about this previously. Like, I do think this will be a good year overall for Apple because I think it's no small part because like we've talked about before the iPhone Fold and some of these other devices that they have coming out, I think will end up
Starting point is 00:50:00 doing well for them. But again, the bigger picture, the longer term time horizon stuff is AI. And right now they're just they're showing no real energy around being that they need to have a sense of urgency around it other than cutting deals which is just not typically what Apple does whereas they want to own everything in house yeah I was on CNBC last week right before Apple earnings and I kind of stuck my neck out for the company which I don't typically do but I was basically like here's where Apple the bull case for Apple sits they are going to sell the latest model and probably the next model like crazy through the year. Meanwhile, there's no killer AI app or device yet that is threatening to disrupt their core business.
Starting point is 00:50:46 I mean, if you think about it, we may get some AI devices this year. My prediction is they're just going to, they will be underwhelming. They won't be a threat to smartphone growth yet. Two, three years, four years, five years from now, definitely. But at least in the short term, it's going to look really good for Apple because the sort of immediate threat to them. from AI is not materializing. The intermediate threat is still there. But maybe the market doesn't care about it. Maybe they're more long-term focused and not quarter by quarter like we like to ding them for. Yeah. And I mean, the one thing that I agree with right now is that there is a path
Starting point is 00:51:23 in which Apple looks very smart in, say, a year or two years from now in not having spent all of this CAPEX to do these massive buildouts, right? If the model's fully commoditize and all that, you know, that they're just sitting there and they can sort of pick and choose what they want to use and also pick and choose which path they want to go down. Again, though, I go back to the idea that, like, they're just not doing the other stuff behind the scenes in order to be able to eventually sort of, you know, take over their own control of these AI models. Like, they just don't have the infrastructure in place.
Starting point is 00:51:55 There's talk that they're building out data senators, that they're building out their own chips to be able to train these. But the spend isn't there relative to their peers where it would seem like they're really taking this seriously. Now, again, maybe there's a world in which, like, LLMs sort of end up being not the be-all-end-all path, and there needs to be other, you know, mechanisms in order to, and other methods of doing AI. And so maybe Apple can sort of come in at that point and sort of catch up. But everything we're seeing right now is that that's not the case and that they'll need to, at some point, spend a lot more money than they are if they really want to have their own sort of, you know,
Starting point is 00:52:33 AI that's built in-house. Yeah, it definitely does seem that somewhere inside that company there was a decision made, maybe from the very top that was like, let's sit this out for now and just figure it out afterwards. That, like you said, that might turn out to be a good decision on the other end. Very risky. Very risky. Yeah.
Starting point is 00:52:52 And it's not like, the alternative is Apple has so much cash. They made that, you know, they finally made an AI acquisition, as you and I have long talked about. Not perplexity. But it wasn't for a, no, it wasn't perplexity, and it wasn't for a frontier model company. It was, it was for interesting technology. You know, Disclosure GV was an investor in. So it sounds, you know, where I previously was a partner for a long time. So, you know, I think that that's probably a savvy play, but it's a $2 billion investment.
Starting point is 00:53:19 Like, you know, that's reported. And they have, you know, as you noted, they're doing record profits right now. What are they spending that money on? They're spending that money on buybacks and they're spending that money on, you know, things. that are not moving the ball forward with regard to AI, except in very small ways. And so, you know, I don't know what the counter argument is. I'm not saying that they have to spend $100 billion in CAPEX, but I'm saying maybe they should spend $50 billion if their peer group is spending $150 billion on CAPEX a year.
Starting point is 00:53:53 Maybe it's at least worth it to do something that's, you know, just in case scenario. Right. I agree. You got to at least start spending a little. bit because even if you have conviction that AI will be a you know maybe not as revolutionary as people imagine you have to hedge a little bit because of I mean what we're seeing right now all the aIs hanging out together in mold books warming and plotting out to overthrow humanity so you know Tim Cook come on throw throw a little more skin in the game geez yeah yeah that I want to go into
Starting point is 00:54:26 a chat room and see Tim Cook's molt bot there and chatting chatting away with with other Apple executives Oh, my God, and somebody using that exploit to take over their computer, that'll be a story. All right, the website is spyglass.org. Our guest, M.G. Seagler, joins us every, the first Monday of every month. It's always great to speak with you, M.G., thanks for coming on. Thanks as always, Alex. All right, looking forward to doing this again next month. Folks, on Wednesday, Joel Pino, the chief AI officer of Cohere will be here with us to talk about the latest in AI research and where the cutting edge is heading.
Starting point is 00:55:01 so we hope you tune in for that. Hopefully the AI bots won't take over the world between now and then. So if humanity remains in charge, we'll see you next time on Big Technology Podcast.
