Limitless Podcast - This Week In AI: OpenAI asks for Bailout / $1B Google x Apple / The Rise of TPUs

Episode Date: November 7, 2025

This week in AI, we examine the economic implications of OpenAI CFO Sarah Friar's "backstop" proposal. We discuss Apple's $1 billion AI partnership with Google, Google's new Ironwood TPUs, and a novel AI personal device ring. We also cover Anthropic's rise as an OpenAI competitor, Meta's financial struggles, and Google's plans for solar-powered AI data centers in space.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT
Substack: https://limitlessft.substack.com/

------

TIMESTAMPS
0:00 OpenAI's Government Backstop
3:53 Government Bailouts
7:31 The AI Bubble
11:07 Apple and Google's AI Collaboration
13:11 Google's New TPU Ironwood
15:41 The Sandbar Ring
18:52 Anthropic Shows Life
22:27 Meta's Earnings
27:25 Google Space Compute
31:21 NASA On the Board

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 Everyone's asking, are we in a bubble? When is the bubble going to pop? How big can AI get before it all comes crumbling down? And yesterday, the OpenAI CFO may have delivered some hints as to how and why this happens. She used a word, and I'm almost ashamed to say it out loud, it feels like a bad word. She used the word backstop. And if you're familiar with the word backstop, it's what a lot of banks were experiencing in 2008, where the government kind of sits there to fund any negative repercussions that happen as a result of this buildout. And after seeing this interview, it kind of begs the question: the circular economics we've been seeing among Nvidia, Microsoft, and AMD, is that really capital-efficient? Or are we really starting to see early signs of a bubble starting to burst? So we're going to get into all that. We are also talking about a deal that was just confirmed with Apple, where they are paying Google a tremendous amount of money to actually offload their AI compute to build a new model. So we have an official deal there. We have a new hardware device in the world of AI, and we have just a bunch of
Starting point is 00:01:01 cool space-related stuff. So Ejaaz, let's get into the big news of the day, which is OpenAI asking for a backstop. That sounds scary. Well, because I feel like some things get lost in translation here. What exactly did they say? And what does it mean for us? Yeah. So let me set some context. Sarah Friar is the current CFO at OpenAI. And she did this interview with the Wall Street Journal where obviously they get into the economics of OpenAI's revenue model and how they're going to pay for all these trillions of dollars worth of compute. And she had a few choice words. The choicest was around this backstop that you're talking about, Josh.
Starting point is 00:01:39 Let me kind of paint it out for you. So Open AI is currently signed $1.4 trillion worth of compute deals. This is OpenAI agreeing to pay compute providers or chip providers or GPU providers like like Nvidia, AMD, hundreds of billions of dollars in order to buy their GPU so that they can train the next best AI model, AGI. There's one massive problem. They don't have enough cash. They don't have enough money. In fact, they are currently running at a loss.
Starting point is 00:02:09 There's no plan or near-term kind of goal that they can achieve to pay for this stuff. So then the question becomes, what happens if they can't pay for this stuff? What happens if chat GPT stops becoming profitable in the future as they. they have projected. What happens then? And Sarah had one simple answer to that, which was, the government can bail us out. And so specifically what she described was an ecosystem where the government basically pays open AI and gives them the money that they need to buy these GPUs if they happen to default, if they happen to not have enough money to pay for these things. This, in turn, will give the private equity firms and the banks that are agreed,
Starting point is 00:02:53 to loan Open AI money in the first place to buy these GPUs, the sanctity and peace of mind that, I'm going to be okay, I'm going to get my money back if Open AI doesn't deliver through. The craziest part about this, Josh, is I feel like I've just been ricocheted across the room for two weeks straight because Sam Altman went from operating a non-profit to operating a nonprofit that is kind of a private company where they kind of turned a part of their company into a for-profit, so they're kind of like lying about it, then kind of breaking rumors around doing a $1 trillion IPO. What does your immediate gut reaction to this? Is it lies? Is it real? What's happening? I just like, I keep repeating the words too big to fail in my head. Too big to fail.
Starting point is 00:03:38 Too big to fail. It seems like that's what I want to get to. And I'm like, I'm of two minds of this. One is that, well, AI is a matter of national security. It is very important to get this right and to move as fast as possible. If that requires some government help, that probably makes sense to an extent. And then the second thing is, well, if you're asking for government help, that's probably not a good thing in this instance. And government help really means taxpayer help. Like, we fund this stuff. And to pay a backstop for Sam Waltman, when we don't even get public stock exposure, because it's still privately held. And another thing is she admitted here that they actually have little to no interest in IPOing anytime soon. So now there's no real trajectory for the public being able to own any
Starting point is 00:04:25 upside and only participate in the downside. After they've been signaling that they are not for profit since inception, it's just like these really horrifically mixed signals with no clear intention. It's a lot. Yeah. I guess the way that you described of being like ricocheted across a room, It kind of feels that way. And it's a little disturbing. But I wonder if this is kind of at the core of what we've been seeing with this circular economic thing happening where opening AI sensitive to deal with Google, with Microsoft. It's all very, it feels very incestual.
Starting point is 00:04:56 But maybe it's because it really is a matter of national security. And the government's just kind of allowing them to do a lot of things that wouldn't traditionally have been acceptable. Yeah. But it looks like we have some qualifications, right? Yeah, I mean, maybe we should take this to the grain of salt. Yeah. So a grain of salt being that Sarah Fry, the same CFO that kind of made these claims, walked back her claims in the interview, specifying that Open AI is not seeking a government backstop for our infrastructure commitments.
Starting point is 00:05:24 And she goes on to give this official statement about, you know, Open AI, you know, being profitable and going to be able to pay their way through this all. The issue that I have with this, Josh, is I don't think it's an honest statement. Why don't I think it's an honest statement? because her boss, CEO Sam Altman, has claimed so many times in interviews and his own blog posts that he's written that he has no issue asking the government for a bailout, for a backstop to help him kind of like cover his deficits
Starting point is 00:05:54 and his debts if he's not able to pay for it. I have an excerpt from an interview pulled up here where he goes, at some level, when something gets sufficiently huge, whether or not they are on paper, the federal government is kind of an insurer of lost resort. As we've seen in various financial crises and insurance companies screwing things up, they basically cover us. And so this is again, and this is the second time we've spoken about
Starting point is 00:06:19 this week, about Sam Altman's character inconsistencies where he says one thing and means another. We did an episode earlier this week where we covered a 52-page deposition where his co-founder and former chief science officer, Ilya Sutskiva, basically says that Sam was like incessantly. lying and that's what led to his ousting, his firing in November 2023. Fast forward to present day, it seems like Sam's still at it, Josh. There's a lot of character inconsistency and what's certainly not helping the case is all of his co-founders from day one to up until a few years ago, they're all testifying against him.
Starting point is 00:06:58 Elon strongly disagrees with Sam. Ilya and Mira, they strongly distrust Sam. So this is not, this is poor signal coming from the public perception, but also, from people who know him personally, which is not really making a good case. So we'll see. We'll continue to monitor the situation. That's kind of where we're at with Open AI News. I just want to, sorry, I think we need to just look at the other side very quickly, Josh, and I'm curious of your take here specifically. Yeah, let's get into the bubble. So the take from this is, oh my God, we're doing circular investing. This is a massive bubble.
Starting point is 00:07:33 It's so obviously going to pop. They're asking for a literal government bailout before the crash actually happens. We've seen this with COVID. We've seen this in 2008. What are we doing here? And the counter argument to that is if you look at every other hyperscaler, maybe not put open AI aside for a second, if you look at Meta, if you look at Google, if you look at Microsoft, although they're spending hundreds of billions of dollars and committing to do that over the next couple of years, they still haven't made a crazy enough dent where you should be getting worried on their profit and loss sheet. Remember, these companies all have other businesses. that are massively profitable, and they're making tons and tons of money. The money that they're investing in compute right now, technically, if you weigh it up against that, isn't too crazy. It is crazy. It is a bigger dent that we've seen them spend on anything else over the last decade,
Starting point is 00:08:22 but it's still not over compensating for what they're earning right now. And that's the only argument against it, which is like, hey, we're seeing demand with our enterprise customers, we're seeing demand with our retail customers, and so it makes sense for us to invest in this compute. I don't know what you think about this. Yeah, like in the case that AI scaling law stopped tomorrow, where suddenly we figure out, oh, no, like this isn't actually going to work, spending more money. The market gets harmed very badly, but it's not catastrophic. It is not a recession level bubble. In the case that this continues to prolong, companies like Open AI that don't have a cap table like companies like Microsoft and Google who exist outside of AI and are now using their balance sheet to pay this off, I think that's probably when you start to see problems. So I kind of agree with. you in the sense that we're still good.
Starting point is 00:09:08 Like, things are still good. I'm not particularly concerned of a short-term bubble happening here. I want to emphasize that, like, Open AI isn't just kind of sitting on their hands and not coming up with other ways to turn on revenue. One other thing that she revealed in this interview, Josh, was, I think it's number six on the screen here. Additionally, they will do creative commercial deals. What they mean by that is, if there's a company, say a pharmaceutical company that uses
Starting point is 00:09:36 chat GPT and finds a cure for cancer using chat GPT, they're signing a deal with that pharmaceutical company such that they get a percentage of profits from that drug that they create using their AI intelligence that will like occur for like God knows how long after that. And so taking a percentage of profit or revenue share from people who are using it as a product to create other products or if you're a company that sells products via chat GPT, Etsy is a common example that is live on chat GPT right now. They get a percentage of profits. And then there's the obvious one, which is, open AI is just going to turn on ads.
Starting point is 00:10:16 And when they turn on ads, who knows how much money that's going to bring in. So they are making efforts towards it. I don't want this to be like a, hey, like bad open AI thing, but it's just unlikely given the amount that they've committed to spend. That first point you mentioned around like health breakthroughs through chat GPT and your percentage. That sounds like a train wreck waiting to happen. That's a very messy monetization structure. So a lot still to be evaluated. But I want to talk about Apple now.
Starting point is 00:10:40 We had an episode yesterday with Apple and Google and how they kind of relate to each other, particularly around a deal in which Apple kind of sucks the AI. They're really just not good at it. And they need help. And here is Gemini coming to the rescue. We officially have a deal that is unofficially official. And it looks like they're going to be paying $1 billion a year, Apple, to Google, in order to get a $1.2 trillion parameter Google Gemini model custom for Apple.
Starting point is 00:11:05 This is a really big deal. Apple is struggling. Apple has not done anything in the world of AI. And suddenly, they have this really powerful model. So this seems like it's going to be very important for the case of, the bulk case for Apple, at least. So I'm happy about this for a few reasons. So nobody I'd be laughing, right? I'd be like, ha ha. This obviously Apple's failed. They're late to the game. But I'm happy about this because in Apple's sense, it's kind of a smart move. Think about it, right? They haven't spent hundreds of ability. of dollars trying to invest in GPUs and train a complex AI model. They haven't taken on any of that risk. They just tap Google on the shoulder who's done all the hard work and say, yo, are you down if I pay you a billion dollars per year and you make our own custom version of an Apple AI model that I can plug into Siri and will run on our private cloud instance. So, you know, Google won't necessarily get access to all of it, but they just get the payment every year. That sounds like a pretty sweet deal. The other thing I like about this is this isn't just any kind of like model. It's a 1.2 trillion parameter model. That is like up there with like one of
Starting point is 00:12:11 the biggest models that would be out there. And to combine that with the kind of personalization that I'm presuming Apple is going to integrate into Siri and in the consumer experience with using text and other apps on the phone, that's pretty attractive to me. The other thing that I thought was super cool on the Google side here, Josh, is to be able to run a 1.2 trillion parameter model at that economically viable cost, aka they're making money from that, just goes to show that there's some pretty crazy engineering that Google has achieved. This indirectly tells me that they have absolutely nailed their chip design and their TPU architecture to be able to pull this off.
Starting point is 00:12:50 Just super cool. There's a lot more info if you want to find out about Apple on our episode that we released yesterday. And also, I am publishing an essay in the newsletter today when you're watching this, all about the economics and why this makes sense for Apple to do. So if you're interested in hearing more of these takes, like more thoughtful takes, check it out on the newsletter. I'm like very proud of this article. I think it'll do really well. It's just really fascinating to see the Apple strategy kind of accidentally step into this amazing situation for them.
Starting point is 00:13:19 It was certainly not by design, but they somehow managed to put themselves in a really good place. But on the topic of Google, I also want to talk about the new hardware that they just announced, which is their new ironwood TPUs. Now, again, in yesterday's episode, this was a good one. We talked about what a TPU was and how it relates to a GPU. And today, in some new news, we got new TPUs, EJ. So can you walk us through what these Ironwoods are, what they do, why they're impressive? So Ironwood is Google's latest TPU. TPU stands for TENSO processing unit.
Starting point is 00:13:46 All you need to know is that the TPU of Google is the equivalent of the GPU from Nvidia, but with some additional perks, it is more specialized and custom fit towards Google software and app sweep, right? It's the thing that has powered and trained all their models. They've never actually relied on Nvidia at all to kind of train an inference a bunch of their models. They've been kind of like this lone entity. And why this is so cool, and we explain this on yesterday's episode,
Starting point is 00:14:14 which you should check out, is Google's been super independent, and they've been able to make several breakthroughs, which have allowed them to train the same type of models that Open AI and Microsoft produce, but much cheaper or cost-efficient and can scale massively with a greater number of models that they built. And now they've released this new model called Ironwood, which is basically the next generation of their TPU.
Starting point is 00:14:39 It is four times faster than the prior version, and it can basically clamp together as one singular stack in a much more feasible way, which means that training larger models at scale is going to be much easier. But the biggest news about this, for me, Josh, is they're going to start selling these TPUs and making it more accessible for anyone else to buy their TPUs and train or influence their own AI models. Why this is such big news is this means they're formally stepping into the ring to compete with
Starting point is 00:15:11 Nvidia. Now, don't get me wrong, they're not doing this at the scale that Nvidia is currently doing it, but currently there has been no feasible challenger to Nvidia, and now you have Google entering the ring, which has a lot of distribution and technical expertise. It's notable and probably a hint that the Google Market Cap should be much, much higher. Yeah, I wouldn't say no one is competing with Nvidia. There are AMD chips. China's creating their own alternative. So I'd say it's at that level where it's trying to compete, but there is going to be a very steep mountain in order to get there to become a real Nvidia competitor. In other news, we got a new AI hardware device this week, and it goes by the name of Sandbar. And it comes in the form factor of a ring. Now, I have an aura ring on it. I love my order ring. It's great,
Starting point is 00:15:52 non-intrusive sleep tracker. This is a totally new take on a ring because it gets into these things that we call edge nodes. When you deal with AI systems, there are sensors that can then send requests back to the actual AI system. And this is a new sensor. So what we're seeing on screen is a promo video of this person who has a ring. You press a button on the ring and you can speak into a microphone. Now, this microphone acts as an interface between yourself and the AI system. And you could ask it to do things like remind you, to query questions against it, to record conversations. And it's this really unique and I guess somewhat novel form factor in the world of AI hardware devices. This was interesting to me, EJES, at least because I'm so fascinated about
Starting point is 00:16:34 what Open AI is going to make next year with Johnny Ive in terms of AI hardware. And this is an interesting experiment to kind of see how the ring form factor would work. So the way this kind of exists is it's a microphone that has passive audio back to your earbuds or I assume back to your phone. And it's this really fun and somewhat intuitive way of using AI without AI getting in the way. So I think a lot of the things that we're going to start to see in this removal of the smartphone is this suite of ambient devices where you can just kind of engage with AI wherever you are at any time. And a ring is a really neat form factor for this because it doesn't really get in the way. It's just kind of always there. If you want to engage with it, you summon it. If you don't,
Starting point is 00:17:14 you don't even think about it. And I think this is an interesting experiment. And it's something thing I'd kind of want to try this. Not that I think it's a successful product, but I think it's an interesting take on what the future AI devices could look like. Yeah. Where my mind immediately leaps when I look at this is her holding up her hand to speak into the ring. It either looks like she's about to cough or like, you know, in the movies where the bodyguards are like touching their air when they need to speak into it in secret service. It kind of seems like something like that. So habitually, I'm kind of curious as to how this kind of integrates into society, but I agree with you. I think like the ring is super subtle and kind of non-obtrusive and it makes it super convenient to kind of
Starting point is 00:17:56 engage with this technology without needing to stand on another screen and whilst maintaining the ability to kind of interact with real life. Yeah, this led me down a rabbit hole because we were on the topic of Apple and it really, I'd love for Apple to start doing things like this. Like for example, Apple has $100 billion in cash. They go acquire aura. Now they get the custom Gemini model from Google. Suddenly they have AI. They ship this AI ring that is compatible with these new Gemini models. They ship a new set of AirPods that have visual sensors on them so you could collect data from the outside world.
Starting point is 00:18:27 And you start to get this suite of devices that isn't an iPhone, but is increasingly capable and more powerful and approaching what we can do with an iPhone. So I hope this is a trend that we see where the next iPhone level device isn't a device. It's a suite of devices. Maybe the ring is one of them. Maybe it's not. But it's an interesting experiment to see what it could look like. that became the case. I want to talk about the dark horse of the AI race, which is Anthropic. Now,
Starting point is 00:18:53 I'm going to hold my hands up here, Josh. I have given Anthropic a lot of flak, and I've kind of called them the NAC-AI model. Like, they follow the rules. They go to the government. They run to the government and say, hey, like, can you give us this deal or whatever? And I've kind of, like, look down on Claude since they kind of, like, maintained their parity at the coding agent level. So I was kind of like, whatever. Like, why would I use Claude? Turns out I was very wrong. So the information leaked a report on projected revenue for Anthropic, and it basically puts them at the same level as Open Air in some cases better, Josh. So the major takeaway from this is they're projecting $70 billion worth of revenue by 2028 and a $400 billion valuation. Bear in mind that
Starting point is 00:19:39 Open Air is currently valued at, I think, $500 billion. So to make that leap from where they are currently, which I think is $200 billion, is a crazy jump. But number two, this would assume they then become profitable way earlier than Open AI. Open AI at the same time, 2028, will be making technically more money than Anthropic, but won't be profitable. And so the immediate question that I asked myself was, well, like, how are they planning to do this? Like, so far they're losing on the retail sales. So, like, do they have a Hail Mary?
Starting point is 00:20:11 And the answer I connected to the dots, Josh, comes right here in the enterprise AI market share, where Anthropic has sneakily surpassed Open AI. They currently command, I think this chart is a little outdated, but they currently command around 25% of the enterprise API share. And why this is super important is although there may be fewer enterprise customers in terms of numbers, like so, you know, Open AI has 800 million weekly active users. Anthropic might have, I don't know, a couple hundred thousand enterprise users. Each enterprise user pays way, way, way more than the average retail user. And so it's just something that I didn't see, Josh. They're like engaging with a lot of enterprises behind the scenes. They're signing these multi-billion dollar contracts.
Starting point is 00:20:59 And they're actually like translating these contracts into useful products that these businesses are using behind the scenes. How we prove that, I don't know. Maybe it's like the economic GDP over time from a bunch of these different companies that they've signed deals. But I thought this was cool to point out. Yeah, this to me intuitively makes sense. It's like when you think of coding, you think of Claude. And I think that's kind of the universal truth amongst corporations.
Starting point is 00:21:24 And when you want a bot to write code for you, you are using Claude. And if you're pinging an API that is writing code for you, odds are you're using Claude. So while the retail general public facing sentiment isn't that optimistic around Claude and Anthropos, an Anthropic because it's just not as useful as Chat Chupy or Gemini. The reality is that if you're writing code and if you are a company that wants a model that writes fantastic code, you are using Claude and Anthropic. And Anthropic is just collecting a lot of the upside without a lot of the public-facing noise because those are just private entities. They're just swiping their credit cards and they're getting all their tokens and they're just happy and they're on their way. So yeah,
Starting point is 00:22:01 I mean, I'm happy for Anthropic. I hope this is durable. I like the fact that we're starting to see each of these companies kind of slot themselves into a portion of the market. So I like that Anthropic is just working on code. I think that's great. Don't try to be the best of everything. Try to make the best coding model and look how much money you could print from it. This is a positive some game. The pie is continuing to grow so quickly. So if you could just own a small corner of it like Anthropic is doing, all the power to them. Keep it going. Someone who is not faring as well as Anthropic. The White Horse. Our good friends over at Meta and Mr. Zuckerberg, they got absolutely crushed this week after earnings for a series of reasons. This doesn't really come as a shock. I think
Starting point is 00:22:41 we've been pretty bearish on meta as a whole. After that whole glasses debacle, I became increasingly bearish on meta as a company and their ability to execute in this world of AI. But EJS, do you have any takes on what happened after the meta earnings report? I do. So we put out an episode, I think two weeks ago on the meta bull case. And so the question a lot of you might have is like, you know, do you still maintain some of that. The short answer is yes, but over a longer time period. Like, here's the facts. So earnings came out for meta last week, and it wasn't as great as they had hoped for one particular reason. Their spend on AI was ludicrous for the quarter. The company spending tens of billions on employees is overspending. I would have never guessed. Exactly. So they had spent
Starting point is 00:23:30 billions and billions of dollars, in some cases, in crazy. ways just to hire a couple of people, but also to pay for CAPEX investments, for their Hyperion Data Center, to invest in different apps being built, to fire a bunch of people, just ludicrous amounts being spent. And I think that meta shareholders had a bit of PTSD from the Metaverse days in 2022, where they renamed their entire company from Facebook to Meta, based on this Metaverse theory, and ended up
Starting point is 00:23:59 not panning out, right? NFTs weren't a thing. And so they have a bit of PTSD where they're seeing Zucks, spending all this money, but no real ROI. One clear example is they launched an AI assistant and no one really uses it. No one's really on Facebook and it's not really integrated well into their existing products. People just use chat GPT. They then launched a SORA competitor called the meta vibes app. No one would use that either. Do you remember that, right? And then they launched they pioneered. They said, listen, we're going to stop copying people. We're going to do our own thing. And they launched their own hardware device, which is the AI glasses. The reception and feedback
Starting point is 00:24:34 from their diehard fans was the worst they've ever heard it. They hated the entire experience. They think it's a, it is a lesser product than anything else on the market. So for all of these reasons and much more, people just don't have faith in Zuck's ability to spend and deliver on this. And it's reflecting in the share price. They lost $250 billion in 24 hours, Josh, on market over. Just completely insane, down 15%.
Starting point is 00:24:57 So now going on to the bull case, I do think they pick themselves out. I actually think MET is probably like a really good buy at this point. And for one solid reason, which is I don't think Zuck wants to lose this race, and he's willing to figure any and all out to get himself to a point. Is he able to produce a bunch of really cool apps that leverage kind of like the distribution that he has using AI like Google has like Open AI has? I don't know. TBD. But yeah, Josh, do you have any thoughts? There's this interesting phenomenon happening where I feel absolutely zero inclination to use any of METIS products. And that doesn't exist with. any other company. Like, I've experimented with pretty much everything. There is not a single part of any of Meta's AI product stack that I'm remotely interested in. In fact, the only touch point I have with Meta is Instagram because Facebook is such a disaster and it's cluttered and it just doesn't, I haven't used it in years. So the meta ecosystem as a whole is not interesting. The meta hardware
Starting point is 00:25:58 delivery is horrific at best. Like, Ejas, you wanted the glasses, you ordered the glasses, you still don't have the glass. Dude, I couldn't order the glasses. I couldn't walk into the store. Yeah, so that's my point is to have a flagship product like that be that despicable. It's really, it's high signal that there is some sort of lack of care in terms of what's being delivered and perhaps move fast and break things worked early on in the days of Facebook. But in a fully formed meta, that is a huge behemoth now, that doesn't work as well because
Starting point is 00:26:29 the costs are so high. I don't like anything about it. the hardware, even if it was exceptional, even if meta released true augmented reality glasses today, and they locked me into the Facebook meta ecosystem, I'm not a user. So they could have the best product in the world. The product that they hope to release five years from now, even if they released it today, I'm not a user because I don't care for that ecosystem and they don't care to unlock the ecosystem. So I think they have a lot of hard problems to solve.
Starting point is 00:26:58 One is actually building a product people like in the hardware world. One is building a software stack that people like, and one is shifting these billions of users that they have over to something more meaningful than a social media feed. And maybe they don't do it. I don't know how they monetize in the case that they don't. But there's a lot of questions that need to be solved from Zuck and the Facebook team that remain to be unknown. Josh, are you, is it hot where you are? You look to be you're sweating a little bit. Like, very, very interesting.
Starting point is 00:27:28 Because in this next and final topic. Oh boy, here we go again. I am pleased to announce Google is launching TPUs into space. I feel like I need boxing gloves every time we bring this up. My God. To create an AI data center in space, which Josh hates so viscerally.
Starting point is 00:27:48 But one of the richest men in the world is going to give you his argument as to why it's an important thing to explore. So, termed Project Suncatcher, a moonshot attempt by Google to launch TPUs into space to harness the power of the sun, solar energy, which equates to 100 trillion times humanity's total electricity production. These are Sundar Pichai's words, the CEO of Google, not mine, right? He then goes on to explain how he would attempt to do that.
Starting point is 00:28:18 You know, he says, like, listen, I know that there are problems in space. There's radiation, but we're working on that. We're running trials and tests right now, which actually prove that our TPUs can survive in that radiation. Okay, checkpoint number one. But of course, the question remains, how on earth are you going to harness the energy that comes from the sun in an efficient manner? Maybe that's easy to solve. And then the obvious one is it's so expensive to send stuff into space. How are you going to pay for that? And actually, if you dig into his report and his announcement, he argues and he makes the point that, like, you know, if you extrapolate the cost of space launch going forwards, it should end up being equivalently pretty cheap for us to send TPUs up where it makes sense to create a data center in space. I'm going to pause there before I start gloating, but Josh, any feedback on this? Okay. One thing that I do like is they're readjusting these timelines here. These timelines are starting to feel a little more realistic. Launch two prototypes by 2027.
Starting point is 00:29:15 So we're pre-prototype now. This makes me happier. We are going to launch prototypes in two years. We are hopeful that SpaceX and Starship will be able to get the cost per kilogram to orbit down low enough where it makes sense. Fine. Okay. If you want to experiment with these moonshots, and you want to take your time with it. All right, I guess I give up. Like, I got nothing else to say. This seems like, do it. Like, you got the billions of dollars.
Starting point is 00:29:38 You put it into R&D. You do it. I think it's like, go, go to space. I don't care. I'm just done fighting about this. I got nothing else to say. And also, I'm running out of arguments because he is like methodically
Starting point is 00:29:49 removing the constraints by, I guess just taking more time and assuming the unit economics makes sense. The problem with doing it right now is the cost per gallon to orbit it is so unnecessarily high. And the technology is so difficult to prove as prototypes that it doesn't seem worthwhile. I hope that they can figure it out, I guess is what I have to say.
Starting point is 00:30:09 But anyway, that's all we got for this week. There was a lot of craziness, a lot of chaos all across the board. All of our favorites, OpenAI, Meta, Google, Apple. They all got some screen time this week because they're all up to no good. Well, I guess some are up to some good. It's a mixed bag this week. Circular good. A circular good.
Starting point is 00:30:27 There was one last bit of better news on the topic of space, and that is that Jared Isaacman has been nominated for head of NASA, which is exciting because NASA, Ejaaz, I'm not sure if you realize this. NASA at one point, they sent people to outer space. Like, they actually had rockets that went up into outer space. And it functioned. It was a functioning part of society, which has unfortunately degraded over the years. And the space program has kind of faltered to nothing, which is, I think, where a lot of the enthusiasm around SpaceX comes from. Thankfully, Jared is someone who is really passionate about space. In fact, he has been there. Twice. So he has a lot of experience. He understands how this works. And I'm hoping he'll provide a jolt to NASA to just make the space program more exciting again. They have the Artemis II program, I believe, which is supposed to be taking a rocket to the moon fairly soon. So I'm just excited to see this new ambition happening with space. Congrats to Jared on a nice, a nice win. And that's pretty much it. That's everything for this week. If you enjoyed, as always, please do not forget to share with your friends, like, subscribe. Drop a comment about what you want to hear about next.
Starting point is 00:31:32 We are doing slightly better on Spotify. Thank you for subscribing. If you have not gone to Spotify and watched there and followed us there, please do. We do not have a government bailout. So we need you. We need you to bail us out. You are the backstop. You are the backstop.
Starting point is 00:31:48 Turn on your notifications. Subscribe to us wherever you are. Tell your friends to do it. Even if they don't listen, it would help us out so much. So with that, I think we're done for the week. We're going to go take a nice slow weekend. We will be back bright and early next week for whatever hot news happens to come over the weekend. So stay tuned.
Starting point is 00:32:07 And as always, we will see you guys in the next one. Until the next time, peace. See you guys.
