Moonshots with Peter Diamandis - OpenAI vs. Grok: The Race to Build the Everything App w/ Emad Mostaque, Dave Blundin & AWG | EP #199

Episode Date: October 8, 2025

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends
Emad Mostaque is the founder of Intelligent Internet ( https://www.ii.inc ). Read Emad's Book: ...thelasteconomy.com
Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified, focused on AI and complex systems.
– My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Join Salim's Workshop to build your ExO: https://openexo.com/10x-shift?video=PeterD062625
– Connect with Peter: X, Instagram. Connect with Dave: X, LinkedIn. Connect with Alex: Website, LinkedIn, X, Email. Connect with Emad: Read Emad's Book, X, Learn about Intelligent Internet.
Listen to MOONSHOTS: Apple, YouTube
– *Recorded on October 7th, 2025. *The views expressed by me and all guests are personal opinions and do not constitute financial, medical, or legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 OpenAI Dev Day just occurred. Some of the most staggering things in human history got announced yesterday, and they still undersold it. Good morning, and welcome to Dev Day. The battle here is that human attention is finite. OpenAI, Meta, everyone's making a play for who are you talking to that then enables these MCP-enabled agents to come and do the job. Everyone's trying to be the everything app.
Starting point is 00:00:23 What happens when suddenly we're able to 5x, 6x, 7x the amount of broadly accessible superintelligence across the world? I think this starts to become the foundation for transformative economic changes at a planetary scale. OpenAI is really trying to do a global land grab, right? Going into India, going into the UK, going into Greece. And then you've also got all of the open source models coming out of China. We're just at this tipping point, and the tipping point is in the next six months. Now that's a moonshot, ladies and
gentlemen. Everybody, welcome to Moonshots and our next episode of WTF Just Happened in Tech. We're spinning up this episode real quick with my extraordinary moonshot mates because OpenAI Dev Day just happened, and I want to cover the subjects there. But there's a lot happening across the board in robotics; you know, FSD 14.1 from Tesla just dropped, as well as other robotics updates and data center updates. We've got my moonshot mates, Dave Blundin. Dave, good to see you, pal. Good morning. And, of course, we have AWG live from someplace in hyperspace.
Starting point is 00:01:41 Thank you, Peter. Good to see you, Alex. Welcome back. And then one of our other moonshot mates, Emad Mostaque, coming in from London. Emad, good morning, dear. Morning. Or good afternoon, as the case may be. You know, in the rocket business, there's something called
Starting point is 00:01:57 hypergolic fuel. Hypergolic fuel is when two chemicals come together and they explode and they make a propulsive force. And I think about AWG and Emad coming together as my hypergolic fuel this morning. Better than coffee. Completely. Yeah. Absolutely.
Starting point is 00:02:16 All right. So as always, this is the news that's breaking that I think is impacting the global economy, impacting our mindsets, impacting how we teach our kids and run our companies. So nothing more important for me. And let's jump in. The reason we spun this up for everybody is OpenAI Dev Day just occurred. I want to hit on this. And I'd like to really evaluate along the way, you know, how critical, how rapidly,
Starting point is 00:02:44 how is Sam sort of manipulating the future of his company and AI in a positive way? We're going to discuss this. All right. I'm going to open up with a short video clip of Sam opening up OpenAI Dev Day yesterday. Let's take a listen. Back in 2023, we had 2 million weekly developers and 100 million weekly ChatGPT users. We were processing about 300 million tokens per minute on our API. And that felt like a lot to us, at least at the time.
Starting point is 00:03:16 Today, 4 million developers have built with OpenAI. More than 800 million people use ChatGPT every week, and we process over 6 billion tokens per minute on the API, thanks to all of you. AI has gone from something people play with to something people build with every day. We think this is the best time in history to be a builder. It has never been faster to go from idea to product. You can really feel the acceleration at this point. So to get started, let's take a look at apps inside of ChatGPT.
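For scale, the growth multiples implied by the keynote numbers quoted above work out as follows. A quick sketch using only the figures from the episode (the 2023 and current values are as quoted, not independently verified):

```python
# Growth multiples implied by the Dev Day figures quoted above (2023 vs. today).
stats = {
    "weekly developers": (2e6, 4e6),          # 2M -> 4M
    "weekly ChatGPT users": (100e6, 800e6),   # 100M -> 800M
    "API tokens per minute": (300e6, 6e9),    # 300M -> 6B
}
for name, (then, now) in stats.items():
    print(f"{name}: {now / then:.0f}x")
# weekly developers: 2x
# weekly ChatGPT users: 8x
# API tokens per minute: 20x
```

The 20x jump in token throughput against only a 2x jump in developers is the asymmetry Dave points to next: usage per builder is exploding far faster than the builder population.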
Starting point is 00:03:45 All right. Dave, you want to open up? Yeah. So obviously Sam's not, you know, the best. He's not Steve Jobs on stage, but the numbers are just staggering. You know, the 300 million to six billion tokens per minute is the one that really jumps out. And it's going to explode from here forward, too, because, you know, I could easily consume 10,000 plus myself just coding. And with the number of developers coming on board and the number of home users coming on board, it's just astronomical.
Starting point is 00:04:17 So as we've been saying, nowhere near enough compute to keep up with it. I think Jony Ive might be the guy driving the "hey, let's do this Steve Jobs-style, have our very first big-stage developer day." And, you know, their GPT-5 launch was really flat. I mean really, really, really flat. They did a much better job yesterday. I've got to believe Jony is driving that: put some money and some effort behind it, let's go. But, you know, some of the most staggering things in human history got announced yesterday, and they still undersold it relative to the implications. We'll see it in a couple other videos here. But I don't know if that's deliberate slow playing because they don't have enough compute to keep up with demand anyway, or if it's just they're learning how to do showbiz
Starting point is 00:05:00 on the big stage. But in any event, we'll see some more just mind-blowing capabilities that, if anything, are understated. And 800 million users is pretty extraordinary. They're tracking for a billion users. And I just wonder, is this a winner-take-most type scenario, or is there anything that can overturn them in the final result? Emad, let's go to you, and then we'll go to Alex to bring us home on this one. Yeah, I think to put it in context, it's a lot, but it's about as many weekly active users as Snapchat. I know which one's going to have a bigger impact upon the world between the two, you know? I think there's still so much upside to come from here, but now you're seeing their model with Sora 2 and others moving maybe towards an advertising model as tokens get cheaper and they get faster.
Starting point is 00:05:48 To put the token numbers in context, six billion a minute is three quadrillion tokens a year. All of the humans in the world, together, speak 50 quadrillion tokens a year. And I expect that number to go up 10 times. So next year, OpenAI is probably going to be at 30. And then by themselves the year after, they'll overtake, in terms of tokens,
Starting point is 00:06:10 all the human words spoken every single year. So this is the type of... I think we should all take a moment and calculate it. That's really good. I think we're getting close to that, because Google said they're doing a quadrillion on their billion active users right now, because it's in search and things. I think we're at that tipping point now where the number of AI tokens coming into the world is about to overtake humans. And maybe we should call it a something day, right?
Starting point is 00:06:35 Quadrillion here, quadrillion there, yes. Quadrillion here, quadrillion there. Yeah, Alex, what's your take on this opening commentary? Yeah, I think we're really far from saturation. So I would add that in addition to being call it at about 6% saturation by comparing number of open AI generated tokens versus human spoken tokens per minute, I think there's probably an even more important statistic, which is that there are approximately 4 billion human users of smartphones that aren't yet using any sort of superintelligence, if you will.
Starting point is 00:07:13 Now ask yourself: what happens when suddenly we're able to 5x, 6x, 7x the amount of broadly accessible superintelligence across the world? I think this starts to become the foundation for transformative economic changes at a planetary scale. And you're limiting that to humans. And of course, humans might be the least significant users of superintelligence in the final result.
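Emad's and Alex's arithmetic here checks out. A minimal sketch, assuming the figures as quoted in the episode (6 billion tokens per minute on the API, and roughly 50 quadrillion human-spoken tokens per year; neither is independently verified):

```python
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

openai_tokens_per_min = 6e9    # ~6B tokens/min on the API (Dev Day figure)
human_tokens_per_year = 50e15  # ~50 quadrillion tokens/yr (Emad's estimate)

openai_tokens_per_year = openai_tokens_per_min * MINUTES_PER_YEAR
print(openai_tokens_per_year / 1e15)   # ~3.15 quadrillion tokens/yr, Emad's "three quadrillion"
print(openai_tokens_per_year / human_tokens_per_year)  # ~0.063, Alex's "about 6% saturation"
```

A 10x increase in that annual rate would indeed put OpenAI alone at roughly 30 quadrillion tokens a year, well on the way to overtaking all human speech, as Emad projects.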
Starting point is 00:07:38 Well, with full autonomy, superintelligence is arguably the ultimate user of superintelligence. Yeah. There's a limit to the number of words we can say; you know, it's like 20,000 a day. Our thinking tokens are 200,000 a day. AI has no limit to the number of tokens, and economically valuable tokens, it can do, except for the GPUs. That's the only limit. Well, on our last podcast, we talked about, you know, Sam was saying we're going to have to make a trade-off between tokens used for education for our children or tokens used for healthcare to save lives. We can't, we don't have an infinite amount of compute. And I don't want to make
Starting point is 00:08:12 those difficult decisions. But if, you know, FSD is coming online and all these cars are going to start driving themselves, the quality of the driving is directly tied to the amount of compute available. So we're imminently going to make very, very difficult decisions around, you know, tolerate a very rare car crash versus give somebody the ability to build something at home using AI. And it's just incredible, the difficult decisions that are coming immediately after all these functions that we're about to see get deployed. And at the same time, we're also limited by energy, which we'll talk about in
Starting point is 00:08:45 this conversation. I'm going to move us to a few of the features from OpenAI Dev Day. We'll see a few things. I didn't show the video here, but one of the primary, you know, high points was talking to apps within ChatGPT. They have an Apps SDK. And the ability for them to say to Booking.com, book me this trip, or Figma, you know, diagram this, or Coursera, teach me this, or, you know, just speaking to Zillow. It's the ultimate interface
Starting point is 00:09:26 with all the other apps out there. What's the significance of this for you, Emad? Well, I think attention is all they need, as it were. Like, the battle here is that human attention is finite. And so OpenAI, Meta, everyone's making a play for who are you talking to that then enables these MCP-enabled agents to come and do the job. What is the Tencent WeChat-type super app that's coming together?
Starting point is 00:09:57 Because everyone's folding themselves into these nice kinds of things. And again, that's how they're going to try and monetize. So you'll see this battle between Meta, via WhatsApp, Instagram, things like that, Google, and OpenAI to try and occupy that real estate. And then, of course, Elon's going to come in with X. And all sorts of interesting things will come with that, yes. Everyone's trying to be the everything app. Yeah, for sure. Dave. Well, in a second, we're going to see something actually built by voice. We'll look at it, and then we can discuss. It's actually really cool when you see it. Oh, the Codex example?
Starting point is 00:10:17 Yeah, yeah. Yeah, we'll come to that. But before, I mean, I'm just wondering, you know, when OpenAI drops this capability, are they picking winners in the final result? Are they going to be equally, you know, sort of spreading their attention across everybody? And are they basically eating away all the entrepreneurial startups? There was a tweet that went out. I was trying to capture it, but it said, okay, OpenAI just eliminated,
Starting point is 00:10:43 you know, a million different startups out there working on their approach. Remember, every platform ergonomically wants to have its own app store. So I think the notion of an app store being built on top of a new platform, with ChatGPT and presumably other frontier models wanting to become the new operating system, certainly rhymes with the Facebook platform moment when Facebook launched that as well. I think that that's a very natural market movement. But I would also perhaps caution: at some point, I think it's reasonable to expect that every pixel is going to be generated. It's not just going to be vector art or HTML-type graphics.
Starting point is 00:11:23 Every pixel is going to be generated. So I would view this as almost a transitory moment where apps are floating on top of ChatGPT as the new operating system environment, but it's a passing phase. At some point, every single pixel probably wants to be generated. That was my first thought. The other thought is: do you remember, Peter, back in 1987, when Apple, without Steve Jobs, launched their Knowledge Navigator concept? Yes, I do. We're living in that now. We're living in that where, you know, a professor is having a conversation with a basically similar type of canvas that is able to pop open new apps and interact with them on demand.
Starting point is 00:12:04 We caught up with the future approximately 40 years later. We're living the Knowledge Navigator future. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff. Only the most important stuff that matters,
Starting point is 00:12:26 that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I'm writing a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming,
Starting point is 00:12:52 why it matters, and how you can benefit from it. To subscribe for free, go to Diamandis.com slash Metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode. One of the examples they had live on their demo stage: they had an individual propose a new startup, in this case a dog-walking app, and they said, okay, create me an image for it, create me a name for it. And then they said, okay, Canva, turn this into a deck. I want to raise money.
Starting point is 00:13:24 At the end of the day, you know, we're not too many steps removed from, you know, "ChatGPT, start this business for me and start, you know, wiring the revenues to this location." I mean, I think that's the multi-trillion dollar end game here, where at some point we see autonomous corporations. Yeah, I literally did exactly what you just said, Peter, yesterday at a red light in Cambridge. As I was sitting there, I created a business plan and tried to recruit a Princeton team into it via AI at the red light. That's like when Elon said, when he was driving from SpaceX back to his home in Beverly Hills and there was traffic, he goes, damn it, I'm going to start, you know, a tunneling company. It's going to be boring. I'll call it Boring. I mean, there's literally a future in which we're going from mind to materialization. It's stating what you want to do and having the universe conspire to create it for you. That's crazy. Emad. Yeah, I think, you know, he has the Boring Company, but then he has his even cooler-named Macrohard, his new software company. I love that. Against Microsoft. Elon is a 13-year-old kid, for sure.
Starting point is 00:14:38 Which is literally trying to do this. It's trying to take ideas to full companies entirely digitally, right? And I think what you've seen is three phases. Consumption was expensive. It became cheap. Creation was expensive. It's becoming cheap. And now the valuable thing is curation and attention.
Starting point is 00:14:55 So again, the battle is, who can have that value for the pixels that you see, for the noises that you hear? And then a lot of that creation element is going to be abstracted away. And I think all the big players realize this. And the question is, where does it end up? Where does it go eventually? Dave? Well, no, that quote that you had, I've heard it 100 times,
Starting point is 00:15:14 a million startups just died because of what they rolled out yesterday. It's absolutely not true. Show me the names of those startups that died. This came up when we were talking to Amjad Masad a couple weeks ago on that other podcast, you know, the founder of Replit. He had to build his entire foundation model from scratch to get to market because it was before, you know, OpenAI had the APIs. And you ask him, do you regret that? You had to throw away all that code. He's like, no, I absolutely don't regret it. You constantly
Starting point is 00:15:39 have to change. You know, AI is going to move at this ridiculous accelerating pace. If you're not reinventing your business constantly, you're dead on arrival. Yes, exactly. But your team is intact. If you have a great team and you're in AI, you will succeed every single time. Yeah, maybe something you do gets crushed by the next iteration of OpenAI, but you pivot so quickly and easily, just like we're talking about right now. So you show me the names of those companies, those million companies that died; they don't exist.
Starting point is 00:16:08 All right, let's jump into the next demo they had at OpenAI Dev Day. It is Agent Builder, creating multi-step workflows without coding. I'll just show the first few seconds of this. And to make this interesting, I'm going to give myself eight minutes to build and ship an agent right here in front of you. So I'm starting in the Workflow Builder on the OpenAI platform. And instead of starting with code, we can actually wire nodes up visually.
Starting point is 00:16:30 Agent Builder helps you model really complex workflows in an easy and visual way using the common patterns that we've learned from building agents. All right. Emad, you're building agents left, right, and center right now for Intelligent Internet. What do you think of this? Yeah, I think you've gone from the creation to now the composition and the multi-stage process. For image generation and media, we built something called ComfyUI, which again is this node-based process. But where we're going, we don't need nodes and
Starting point is 00:17:00 spaghetti. Exactly. You know, the future of these things, you can look at our Common Ground platform, for example; it flips between Kanban and kind of workflows and Gantt charts and things. It will just show you what you need to see, and the way that you'll interact with agents is like interacting with Jarvis in Iron Man. Like, I think in a year or two, that's what the Agent Builder is going to be. You'll just have a nice chat, and it will show you all these things and mock them up
Starting point is 00:17:26 instantly. And in fact, Claude had this with their latest release for Pro users, this instantly generated desktop app type thing that literally programs things on the fly without code, because code is just a human translation layer, and that can be removed completely. For sure. Dave. Yeah, yeah, it's funny, because the people succeeding in AI are overwhelmingly really young, really, really smart, with very limited business experience, and they keep recreating the same mistakes from like 20 years ago.
Starting point is 00:17:58 So it's okay, because, you know, AI is such a great tailwind. But this graphical programming language of boxes and lines is the stupidest thing in the world in the age of AI, where you can talk to the AI. It's very similar to the Cursor interface: if you want to upgrade your account and you're talking to Cursor, like, hey, or to Claude 4.5, hey, Claude, upgrade my account. And it says, well, go to the menus, navigate to the settings. What are you talking about? I'm talking to you right now.
Starting point is 00:18:29 You have MCP, like, just do it. So it'll all get fixed very quickly. But it's crazy. The whole interface to AI is going to be voice, voice and images. And the idea that you're going to design programs by drawing boxes and connecting with lines, which has been around since like 1980, no, no, no, no, no. So it'll get cleaned up very, very quickly. It's just kind of funny to see this transition phase and all the same old, same old mistakes being made. Alex, any other points you want to make on Agent
Starting point is 00:18:57 Builder? Yeah, I'm reminded almost by analogy of the early days of Hollywood that were shaped around vaudeville-type design patterns. I think that's the stage that we're at. This is the vaudeville-on-Hollywood-screens stage of AI for software development. On the one hand, it's great, glad that it exists and glad that it provides probably a comforting safety net for enterprises that are migrating to end-to-end agentic workflows. On the other hand, it very much feels like a passing phase. I think it's an interesting modality for specifying software development.
Starting point is 00:19:34 But really, why not take the full leap? Instead of specifying flow charts for individual workflows, why not just dump in an entire org chart of a human organization? And rather than individual tasks or roles, just port over an entire company org chart and ask for an entire enterprise to be replicated.
Starting point is 00:19:52 The vaudeville analogy is great. You know, if you look at an old original black-and-white movie, the actors are on an actual stage with actual stage lights. They're doing a play, and they're just recording it on film. Very similar to when the Internet took off and the newspapers just copied the exact newspaper over to the Internet and said, oh, you just read it over here instead, same font, same everything. We humans are so stuck. We're so stuck in the way we do stuff. We have such a high cost of switching. All right, continuing on this theme of voice mode over everything, here's a little video
Starting point is 00:20:18 of controlling Codex with voice mode. Take a listen. Let's see if we can actually control this camera now with this Xbox controller. There we go, it works. That's pretty awesome. What's funny is, like, I didn't even specify to Codex which button should do what, but it figured that the joystick was probably a good idea.
Starting point is 00:20:48 So that's actually perfect. But let's do the real test now. Let's enter voice mode and see what happens. Hello? Hi there. I can hear you loud and clear. Let me know what you need. Yeah, what do you see on the camera? I see a large audience in front of you, with rows of attendees seated and ready.
Starting point is 00:21:07 The room is well lit, and the crowd stretches far back, filling the space. That's right. Could you shine the lights towards the audience? There we go. I've just lit up the audience with a nice, bright wash. They should be perfectly highlighted now. All right. So this is an unusual demo for Codex, to say the least, just showing our ability to control physical items in our environment.
Starting point is 00:21:32 But I have to say, I love this for one reason. I've always said I will invest in the first company that's able to make AV foolproof, right? How many of us are in conversations or giving presentations and we can't get this to connect to that? I just want to be able to say to my AI, show that video on that screen, connect me over Zoom to that screen. So there's hope.
Starting point is 00:21:57 still. And that's a business plan that, like, that's not hypothetical. That's a business plan that, if someone's listening right now and they get together a team and then Peter seed-invests in it to give it credibility, will become yet another one of these massive success stories. It's just really that simple. I'm on stage and, let's say, something fails. Okay, you know, AI is easy, AV is hard. Well, hopefully AI can solve that. Well, but also controlling it. Right now you kind of wave to people that are backstage and they push some buttons or whatever. It's crazy, because the AI can now recognize your hand gestures and can respond to your voice. And it's much more engaging for the audience if you're talking to the AV and it's changing the lighting, changing the slides, you know, pulling up things from the internet in real time.
Starting point is 00:22:39 Very doable. You could get that product out the door in like six months or less and just crush it. And then, of course, it self-demos. And then Peter will bring it onto the podcast. And it's like, it's just that simple. There you go. Let's start with Alex. Alex, what's your thinking on it? I think it's sort of interesting, because so many facets of
Starting point is 00:22:57 Codex are open source and available for review on GitHub, so you can actually trace where things appear. Tell us what Codex is, first. Codex is an OpenAI brand that seems to cover a number of different independent software tools. So it covers
Starting point is 00:23:13 their code-generation-specific AI model back end. It's also used as a web front end for agentic software development. It's also used in connection with a command line interface tool. So they use it as an umbrella brand, as it were. But in this case, at least one of the Codex-associated projects
Starting point is 00:23:32 is up on GitHub. You can review the source history. And so, pulling the thread on the story, it was interesting, I think, to discover that at least some aspect of this functionality appears to have originated as a feature request from a third party, the Carnegie Mellon-affiliated Software Engineering Institute, back in April. That was where a user was complaining, or really pointing out, that human prompt typing speed is increasingly becoming the limiting factor for software development. By the way, I want to read a quick tweet here that came over from an OpenAI employee. It says: the Agent Builder we released today was built end to end in under six weeks, with Codex
Starting point is 00:24:15 writing 80% of the PRs. This matches the AI 2027 report forecast: in 2026, coding automation goes mainstream. Agents will work like teammates. AI R&D is 50% faster from algorithms. I mean, we are seeing, I want to say science fiction, but science predictions, tying much closer to reality. We're close to the point of recursive self-improvement. And it can go in the other direction as well. We can get negative speed, where the software is just written preemptively. Interesting. So, you know, we're not
Starting point is 00:24:55 smart enough to realize we need the software, but the AI is, and it's prepped for us in advance. Exactly. I love that. Emad, your thoughts on Codex here. Yeah, if you get enough tokens, I think this is the thing. Most people are just using half a million or a million there, and to build something like that, it's five bucks. With the new Grok model, it's 50 cents.
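Emad's cost comparison is easy to reproduce. A minimal sketch, assuming flat per-million-token prices of roughly $5 and $0.50 as quoted in the conversation (real API pricing varies by model and typically splits input and output tokens, so these are illustrative rates only):

```python
def api_cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of a token budget at a flat per-million-token price (illustrative)."""
    return tokens / 1_000_000 * price_per_million

build_budget = 1_000_000  # ~1M tokens to build a small agent, per Emad's estimate
print(api_cost_usd(build_budget, 5.00))  # 5.0  (~$5 at the quoted rate)
print(api_cost_usd(build_budget, 0.50))  # 0.5  (~50 cents with the cheaper model)
```

The point stands regardless of the exact rates: at current prices, a full agent build costs pocket change in tokens, which is why compute supply, not dollars, is the binding constraint discussed throughout the episode.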
Starting point is 00:25:17 Like, you're seeing a crazy thing. I want to come to that after we close on this, which is, you know, how would you be disrupting OpenAI if you were going to? Dave, do you want to comment on controlling Codex with voice mode? Well, something Alex said really sparked a thought, which is, you know, Codex, when they launched it, was a way to run five or ten different coding processes in parallel and have status checks, and it made you much more efficient. But now, you know, they use it to kind of bundle five or ten different things together.
Starting point is 00:25:49 This is going to be a real problem, because we're used to products having a very specific name and brand and doing a very limited number of things. With AI, the explosion of capabilities is exponential, and you can't even keep up with the names. So now it's going to be much more like this thematic branding: Codex is a grab bag, you know, GPT is a grab bag, but what else can you do? There's just so much going on. So just keeping up with the names of things, and naming things in general, is going to be hard. And we had that conversation with Kevin Weil at OpenAI; their naming protocols are kind of insane.
Starting point is 00:26:23 Yeah, that's a fair point. One thing: six months ago, Dario Amodei of Anthropic said that 90% of code will be written by AI. I think he meant can be written by AI, and this is a really great example of that. And so, again, they decided to embrace it. So you see Codex, the CLI, the command line interface tool, literally gets two updates a week, which for a multi-billion dollar, half-a-trillion dollar company is unheard of. And so, again, as Alex said, I think you're going to get these recursive self-improvement cycles, first with humans in the loop, but then the software might just upgrade itself and respond to what
Starting point is 00:26:59 people might need. Yeah, continuously. Yeah, what's interesting is how that interacts with the interface. Like, if you said my iPhone is going to update itself twice a week, you'd be confused as all hell. You would never know where anything is. But now that you have an AI interface on everything, it's okay, because it's self-explaining. It's just seamless. I mean, I just want Jarvis. I just want an AI I talk to, and it does everything I need to get done. I'm just going to assume that anything is doable, and my AI is going to enable it or find the capability. And I don't need to know all the hard work it's doing on the back end. I don't need to know what it's calling up and getting access to. It's just making
Starting point is 00:27:38 it happen. I'm telling you, Peter, within the virtual world, not within the physical world, but within the virtual world, that's today. No one's productized it yet, but all the technology and capability exists right now. The robotic version of it, where it makes your Iron Man suit, might be a year or two or three out. In the virtual world, you know, build me a video game, build me whatever, that's right now. Someone needs to go and get Paul Bettany's voice rights. Let's take a look at one more video from OpenAI Dev Day, which was the Sora 2 API and a segment I call sketch-to-video. And again, this is going from mind to materialization. If I can imagine something, can I make it real? Take a listen. Today we're releasing a preview of Sora 2 in the API.
Starting point is 00:28:22 Mattel has been a great partner, working with us to test Sora 2 in the API and seeing what they can do to bring product ideas to life more quickly. So one of their designers can now start with a sketch and then turn these early concepts into something that you can see and share and react to. So let's take a look at how this works. If you're listening to this podcast, what we're seeing here is basically a hand sketch, then being developed into a photorealistic video of a Mattel Hot Wheels or Matchbox toy. Super compelling, being able to go from that. And I've talked about how, in the future, I'm going to be able to describe verbally what I want. I want a device that can hold a hot liquid. I want it to have a
Starting point is 00:29:18 handle on it. I want to have it this color. And then as I'm describing it, it's visually materializing on my, you know, AR glasses in front of me. And I say, no, can you make it a little bit larger? Can you stretch the dimension? Just in plain English. And then how much would it cost to make? It gives me a price. And can you give me an alternative that's cheaper or that has better thermal insulation? And I go, yeah, that's it. Please print it for me, manufacture it for me, and put it up on the web so anyone can grab it. I mean, this going from, again, I call it mind to materialization, super powerful.
Starting point is 00:29:57 Emad, do you want to open up with your thoughts on this? I mean, the holodeck is getting closer, right? Not with hard light, but as you said, that aspect is there. These models learn physics, they learn materials. So in the video they've just shown, the car goes down these ramps, and it's transformed it into 3D. You can have 3D extensions from it. One of the things you can do with these models is, you can actually do a storyboard where you show scene by scene how every single thing changes.
Starting point is 00:30:23 And you have that as the input and it will generate that clip. And it doesn't do that by thinking or breaking it apart. It literally interpolates the concept to the video. And so we're actually only scratching the surface of how powerful these models are at the moment. And then I think as they get more and more used, you'll see that they are genuinely world models that can create anything you can imagine and then adapt on the fly as well. With audio to match, with perfect audio to match. Yeah. Yeah, I think everybody has access to this, right?
Starting point is 00:30:52 They just launched the API version of it yesterday for large-scale use. But anyone can do this. And if you haven't done it, you're crazy. Do it. It's so mind-opening and compelling. Do exactly what Emad said. Do it as a series of scenes. And then right now you've got to wait about five or six minutes to get your video back,
Starting point is 00:31:10 which is really annoying. And it shows me the compute. How quickly we are spoiled. Well, it shows you the compute bottleneck, though. I'm sure when they do it internally, it comes back in a millisecond, you know, it is physically possible to do it very, very quickly. But again, way too many users for the capability, but you've got to try it because, again, it's mind-opening.
Starting point is 00:31:31 When I first saw this video, I didn't get it because I couldn't tell the video is actually synthetic. I thought it was like, this guy is sketching a toy, a Mattel toy, and here's the toy. Like, so what? Like, oh, wait, that's not a... That toy doesn't even exist. That's actually been synthetically created. It's just so good.
Starting point is 00:31:48 The video has perfect physics. You just would never know that it's synthetic. Alex, what does this mean in the final result? Where are we going here? This is mechanical design getting solved. MIT, the mechanical engineering department, has an entire set of courses just devoted to training the next generation of mechanical engineers how to do product design like this.
Starting point is 00:32:10 We're seeing right before our eyes an entire discipline or sub-discipline get solved in bulk by generative AI. And I think maybe even more interesting than this particular instance is the API pricing. So if you go to the API pricing page now for Sora 2, it's 10 cents per second for the base model. You do the arithmetic, that's $360 per hour. Assume 10x year-over-year hyper-deflation in model costs. Within the next year, suddenly, it's far cheaper to outsource mechanical product design
Starting point is 00:32:44 to an API call to Sora 2, or whatever it evolves into, than a human. That's an entire hyper-deflationary field getting solved overnight. Yeah, keep those numbers top of mind, because when we start talking about compute in a minute and the cost of compute, you'll immediately recognize what Alex just said. You know, that 10x deflation in price, we need that desperately, because the demand for what we're seeing here is going to be orders of magnitude bigger than the amount of compute currently available. What an amazing time to be a kid, right?
Starting point is 00:33:12 Imagine you're sitting down with your mom and your dad and you're just describing what you want as a toy or what you'd like your toy to do. And all of a sudden, it's materialized into a video for you. And then some other enterprising company in the 3D printing world says, I can just manufacture that for you as an N of one. I mean, just amazing. Star Trek replicators aren't 24th century. They're now just 2025. We are really bringing Star Trek to today. And it makes me so happy. I'm so happy about that. Why wait a few centuries? Yeah, for sure.
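Alex's Sora 2 pricing arithmetic from a moment ago is easy to sanity-check. A minimal sketch: the 10-cents-per-second figure is as quoted on the show, the 10x year-over-year deflation is his stated assumption, and the three-year projection horizon is our illustrative choice.

```python
# Back-of-envelope check of the Sora 2 API pricing discussed above.
# $0.10/second is the quoted base-model rate; the 10x/year deflation
# is Alex's stated assumption, projected here purely for illustration.

PRICE_PER_SECOND = 0.10   # USD per second of generated video
SECONDS_PER_HOUR = 3600

cost_per_hour = PRICE_PER_SECOND * SECONDS_PER_HOUR
print(f"today: ${cost_per_hour:.0f} per hour of generated video")

# Project the hourly cost under 10x year-over-year price deflation.
for years_out in range(1, 4):
    projected = cost_per_hour / 10 ** years_out
    print(f"in {years_out} year(s): ${projected:,.2f} per hour")
```

At $360 an hour today, the comparison with human mechanical-design labor is already interesting; after one or two deflation cycles it becomes one-sided, which is the "hyper-deflationary field" point being made above.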
Starting point is 00:33:45 All right. So we're going to wrap on the OpenAI Dev Day there. But, Emad, I'd like you to jump in here a second. You know, we're seeing a half a trillion dollar valuation for OpenAI. We're seeing OpenAI really working hard to create multiple revenue flows from advertising, from selling products in a multitude of other areas. What are your thoughts on OpenAI? So I think that the core business of OpenAI, in terms of the monthly ChatGPT subscription, is going to come under challenge
Starting point is 00:34:17 because we've seen this breakthrough with DeepSeek, Grok, and others where the cost per million tokens has literally dropped 20, 30 times. And so the basic chat experience is not good enough anymore. So it's almost like you see these levels of AI that will fill. So the chat experience is basically a couple of bucks a year when you calculate it now in terms of the cost, down from 200 bucks just over a year ago. So now they have to think about agentic workflows. They have to think about economically valuable workflows and then even beyond, because the number of tokens goes from 2,000 to 20,000 to 200,000 to 2 million. And so this is why, when we see Sora, they're doing likenesses and they'll be doing advertising and more, because how do you have the cash flows to justify
Starting point is 00:35:01 that? Google and Meta both have the advertising cash flows. How do you monetize those 800 million users, either by delivering excess value through your $20 a month subscriptions or by having these new verticals, because your competitors are going to release what was your key product at the start of this year, ChatGPT for $20 a month, for free,
Starting point is 00:35:23 because that's how far and how quick token prices have dropped. We've talked about the notion that OpenAI is really trying to do a global land grab, right? Going into India, going into where you are, Emad, in the UK, going into Greece, going to other locations. I mean, and it's an interesting battle between its land grab, and then you've also got
Starting point is 00:35:46 all of the open source models coming out of China, which are going after a land grab as well. Well, you know, Paul Graham said of Sam Altman that if you dropped him on an island full of cannibals and came back a year later, he would be running the island. So, I mean, he's got to be one of the greatest business strategists of all time. And so he's going after India. He's going after a massive installed base. He's got an 800 million user installed base. He's going after the rest of the world. And he's also going after the data centers. And we'll see that later in this podcast. So I think he's narrowed in on the two foundational points of control in this great battle,
Starting point is 00:36:24 are installed base of users and massive amounts of compute. If you control the endpoints, everything in the middle will fill in. That's the way I think he sees it. This episode is brought to you by Blitzy, Autonomous Software Development with Infinite Code Context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint.
Starting point is 00:37:13 Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. All right, I call this section: Meanwhile, in the continuing AI wars. Let's hit on a few others. Anthropic nears superhuman computer use.
Starting point is 00:37:50 And here we're seeing a graphic of performance as a percentage, hitting very close to human performance, basically over the last year. Alex, do you want to kick us off on this one? Yeah, so maybe a comment first on what the benchmark is. In the past on this podcast, I've beaten the drum for the importance of benchmarks more broadly, for not just measuring progress, but also accelerating progress. In this case, the benchmark, OSWorld, for operating system world, is a really lovely benchmark that was initially developed by Salesforce and colleagues. And it's a benchmark that measures the ability of a computer-use agent, an AI that has access to a keyboard, mouse, and screenshots, to be able to conduct regular, everyday, economically important tasks on Ubuntu Linux, Windows, and macOS,
Starting point is 00:38:41 hundreds of different types of tasks. And so what Anthropic is demonstrating with this chart is, probably, again, by the law of straight lines, perhaps by the end of this year, in the next few months, we're going to see, at least from Anthropic, putting aside other frontier labs, superhuman performance at the ability to control computers for normal, everyday tasks. So Alex and Emad, I asked Perplexity to comment: what does this benchmark even mean? It's really vague, and it came back with some complete garbage answers. So hopefully you can fill me in.
Starting point is 00:39:15 Like, what are we measuring here? We're measuring the ability for an AI to literally control a Windows-type interface, with mouse control and keyboard control, and perform a variety of tasks: web browsing, navigation. It's the ultimate, you know, verbal interface. I'll, you know, say, do this for me, without the details I don't need to know. I mean, I just set up a new MacBook Pro, and getting all of the settings back to where I wanted them just ate up half a day of wasted time. Yeah, yeah, yeah. And Peter, you want Jarvis? This is Jarvis, albeit not in the physical world, Jarvis for controlling your computer across applications. But this is a pretty
Starting point is 00:40:02 good benchmark for Jarvis for your computer. There are companies that are also setting up giant science factories controlling, you know, hundreds or thousands of experimental devices, right, where it's just basically putting an AI layer on top of all of them and running 24/7 dark experiments to ferret out the breakthroughs of science. Anyway. Emad, do you want to add to this? Yeah, no, I mean, it's 360-odd tasks that take over your computer. The labs and things, it's a different kind of reinforcement learning environment. I think what this is showing is that these generalist models are getting good enough to do most standard human tasks.
Starting point is 00:40:48 And again, these models have economies of scope. So now we're seeing Thinking Machines and others building RL environments so they can plug into the real world even more seamlessly. And I don't think anyone believes that that line isn't going to break through the human level. Again, this is the takeoff point. And so when they can control anything we can control, digitally and then physically, then it's only a question of the number and quality of tokens behind that. And so again, this is the takeoff point. This is why we're about to see that clip. And what could possibly go wrong? I would say, what could possibly go right? And quite a bit can go right. Yeah. Okay, thank you for bringing me back to the world of abundance, Alex. I appreciate you. Anytime.
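The computer-use setup Alex and Emad describe above, an AI given screenshots plus keyboard and mouse control, reduces to a simple observe-act loop. A minimal sketch follows; everything in it is illustrative, the `Action` type and `model_pick_action` stand-in are hypothetical, not OSWorld's or any vendor's actual API.

```python
# Minimal sketch of the observe-act loop behind a computer-use agent of
# the kind the OSWorld benchmark measures. All names here are illustrative
# stand-ins, not a real benchmark or vendor interface.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def model_pick_action(task: str, screenshot: bytes, history: list) -> Action:
    """Stand-in for the vision-language-model call: given the task, the
    current screen, and prior actions, return the next keyboard/mouse action."""
    # A real agent would send the screenshot and history to a frontier model.
    return Action(kind="done")

def run_agent(task: str, take_screenshot, execute, max_steps: int = 50) -> bool:
    """Loop: observe the screen, pick an action, execute it, repeat."""
    history = []
    for _ in range(max_steps):
        action = model_pick_action(task, take_screenshot(), history)
        if action.kind == "done":
            return True        # agent believes the task is complete
        execute(action)        # click/type on the real desktop
        history.append(action)
    return False               # step budget exhausted
```

Benchmarks like OSWorld then score whether the resulting OS state actually satisfies the task, which is what makes the human-performance comparison in the chart meaningful.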
Starting point is 00:41:29 All right, our next news item here is a major update to Grok Imagine, going from v0.1 to v0.9. I love the numbering protocols, everybody. And also, Elon entering the gaming world, or at least announcing it. You know, Elon, if nothing else, is a gamer, and the video gaming industry is massive, you know, outweighing Hollywood entertainment by a long shot. Let's take a quick look at a video clip. And the thing that's important is Grok Imagine can generate 15-second clips. And their comment is, we're focusing on speed and fun. All right. Well, let's take a look at some speed and fun. And I love that it says Grok launched as a truth-seeking AI, and there you see Elon as this, you know, medieval emperor, you know, in battles.
Starting point is 00:42:40 It's like, okay, this is the truth we're seeing. But I think even just taking that line, truth-seeking, and combining it with these models, I completely buy the notion that video as a first-class modality, when incorporated into chains of thought, is going to help us to discover the truth. I think it's one of the key modalities for understanding our universe. Okay. What does that mean, Alex? Dive in a little bit deeper, please. So when you ask a question of ChatGPT or some other frontier model, now post-GPT-5, there's thinking that goes on, usually under the hood. It internally thinks through a sequence of tokens before it produces a
Starting point is 00:43:20 final answer. Right now, almost all of that thought takes the form of text tokens. But imagine a near future where the agent, as it were, is able to think not just in terms of text, but in terms of video. It's able to hallucinate a short video clip, basically visual imagination, imagining things that you can introspect as well. You can pop open a little drop-down and see the little videos that it's generating as part of its chain of thought before it answers your question. Video reasoning, I think, is going to end up being a killer app for how these video models, that right now are obviously largely aimed towards entertainment, end up delivering transformative economic value. Amazing. In our, you know, our occipital cortex, our neocortex for
Starting point is 00:44:05 visual image understanding processes much more data than our ability to bring it in through language. Yeah. So OpenAI did $4.3 billion in revenue in the first half of the year. The video game market did $200 billion in revenue last year. So you can see, when we think about gaming, when we think about media, this is a massive market to go after, and, you know, X and Elon are going to go after it from a first-principles basis. Whether or not the games will be any good, that's a question. You know, I think they'll probably be quite addictive. And again, the scarce things in the world: there's Bitcoin, there's my financial coin, there's human attention. The battle for human attention is the next battle for revenue.
Starting point is 00:44:52 And everyone is basically drawing their lines, getting their GPUs ready for it. So I think we'll see this type of thing from everyone, and it's good for consumers in many ways, because the quality bar will lift and the access will expand. All right, we can go so deep into that entire conversation. But before we exit, meanwhile, in other AI wars, I wanted to play a quick clip and say a thank you to one of our subscribers, C.J. Trueheart, who heard our call for a Moonshots theme song and proposed one. Not saying this is it, but I was super impressed. Okay, let's hear what he has to say, or sing, or produce. So I recently heard you guys mention on the last podcast that you were going to create a Moonshots theme song.
Starting point is 00:45:44 And for someone who's been using Suno for two years, and especially since the Moonshots podcast is my favorite as far as AI and technology goes, I really appreciate you helping me be able to understand what's happening and giving me a perspective for my entrepreneurial, creative mind to best position myself. I made you a customized theme song for the Moonshots. When the Moonshot makes, breaking through the noise. WTF just happened with a clarifying voice. In a basis, messy disruptions never clean. We'll show you what it means, the story in between.
Starting point is 00:46:36 Moonshot mates, with Moonshot minds, building today, so tomorrow. Love that. Thank you, C.J. I love it. I love the fact that you pulled over to shoot the video, too. That's just awesome. Much appreciated. Yeah, I just appreciate it.
Starting point is 00:47:01 I love our subscribers. They're just, they're generous, they're intelligent, they're creative, and just a shout out to all of you guys. Thank you. We love your feedback, your input. We read it. We consume it. All right.
Starting point is 00:47:14 Let's jump into our next segment. Chips and data centers, a lot going on there, but probably the single most important news: AMD and OpenAI announced a strategic partnership to deploy six gigawatts of AMD GPUs. Dave, let me go to you, buddy. Yeah, the stock moved, what, 30% on the news, which shows you Sam's ability to morph the world, or warp the world, to his perspective, or whatever he says. Massive, massive impact on a huge public company. And, you know, it's interesting. He's going to get 10% ownership, or OpenAI will get 10% ownership, if they hit
Starting point is 00:47:52 milestones for basically no price. And how often do you get to negotiate a deal like that? Unless you're the president of the United States, in which case you can negotiate it all the time. Yeah, I guess that's true. The reason this is a serious win-win, though, is, you know, AMD, they have capacity to manufacture with TSMC. And anyone can design inference-time chips and sell them, but you have to have the manufacturing capacity.
Starting point is 00:48:21 So Sam's going to grab that capacity via AMD. I'm really curious on November 14th to look at Leopold's 13F filing, you know, from the situational awareness hedge fund and see if he also bought AMD and got that 30 or 40 percent. He probably did. Probably did, yeah. Dave, isn't this, I mean, we could have predicted this as well. At the end of the day, we, you know, talking about Intel and the criticality, that capability, you could have said the exact same thing about AMD.
Starting point is 00:48:50 Who else? I mean, there's Broadcom, there's Micron. Which of these other chip manufacturers are going to be pulled into sort of this U.S.-centric, chips-first strategy? Well, I'll tell you what else. If you drill a layer deeper, underneath the chips, there's a whole bunch of other material that will get dragged into the vortex that no one's quite realized yet. So if you really want to see these 30, 40, 50 percent pops, you look a layer deeper than just the chip companies into the underlying, you know, you've got silicon boules, you've got glass, you've got, you know, all this underlying manufacturing infrastructure.
Starting point is 00:49:20 That's all just going to get sucked into the same exact vortex. And, you know, a lot of those are public companies. And some of them are smaller, too, so the movement is much bigger. Yeah. Emad, thoughts on this one? I mean, he'd probably just be calling everyone now and saying,
Starting point is 00:49:43 hey, you want to give me warrants? Your stock price will go up, right? To all the companies. Can you imagine if he serially did this exact deal 50 times back to back, the amount of value that would create? Oh, my God. Just all the SaaS companies. Come on. They'd partner up with you, right?
Starting point is 00:49:59 Hey, if you want to do a deal with EverQuote, I'm the chairman of that one. Just give me a call. We'll do this deal tomorrow. You're right. I do wonder if he's using GPT-6 Pro to kind of come up with these deals. But, I mean, it's massive. If you look at the 10 gigawatts that they're doing with NVIDIA and the 6 gigawatts here,
Starting point is 00:50:16 it's about $50 billion of buildout per gigawatt. So it's about $800 billion of buildout, like a trillion that they've already got, I think. Probably more. And completely sold out. Sold out years in advance. So, years in advance. And, you know, again, the only market that can sustain this, and then create the revenue, is if they're going after the entire, like, all software
Starting point is 00:50:44 jobs effectively. So I think in the next few years, you're going to see basically OpenAI and others replicate the whole Macrohard strategy of fully autonomous workers. That is the product that they will bring to the market. And they will cost like $10,000, $20,000, $30,000, $100,000. And that's the only thing I can see that will fill this particular massive amount of compute. Alex, are you going to stick with your efficient market hypothesis from two podcasts ago, or are you going to just start tracking the tail number of Sam's jet and seeing who he's meeting with next?
Starting point is 00:51:17 Well, one might imagine losing sleep as a public market investor that maybe the singularity, as it were, happens in some private company where there is indirect, at-best, exposure via public markets. Like, what happens if OpenAI and, call it, the 10 other largest privately held companies suddenly have an intelligence explosion and are worth tens of trillions of dollars overnight? As a public retail investor, that's perhaps a suboptimal outcome. So I would actually view this through a very positive lens: that through indexing, through exposure to AMD, Intel, et cetera, this is now an enormous jump in exposure to OpenAI, to the extent that an intelligence explosion happens there.
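Emad's build-out arithmetic a few lines up can be laid out explicitly. A quick sketch using the figures as quoted in the conversation, 10 gigawatts announced with NVIDIA, 6 with AMD, and his rough $50 billion of capex per gigawatt; these are the speakers' estimates, not audited numbers.

```python
# Back-of-envelope version of Emad's data-center build-out numbers.
# All inputs are figures as quoted on the show, not audited data.

GW_NVIDIA = 10               # gigawatts announced with NVIDIA
GW_AMD = 6                   # gigawatts announced with AMD
CAPEX_PER_GW_BILLIONS = 50   # Emad's rough $50B-per-gigawatt estimate

total_gw = GW_NVIDIA + GW_AMD
total_capex_billions = total_gw * CAPEX_PER_GW_BILLIONS

print(f"{total_gw} GW x ${CAPEX_PER_GW_BILLIONS}B/GW "
      f"= ${total_capex_billions}B (~${total_capex_billions / 1000:.1f}T)")
```

Sixteen gigawatts at that rate lands at roughly $800 billion, which is the "like a trillion" figure Emad cites and the reason he argues only markets the size of all software labor can justify the spend.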
Starting point is 00:52:00 Let me hit on a couple of these related stories. So BlackRock is buying up to 78 data centers, totaling five gigawatts, in a $40 billion deal. And then we're also seeing here that Corning is poised to dominate AI data centers with optics, obviously fiber optics, for connecting everything. On these two topics of Corning and BlackRock, let's get some commentary there. Emad. Yeah, I mean, that's the war for
Starting point is 00:52:35 the downstream kind of elements here, right? Like BlackRock coming in with that 40 billion, they're coming in at about three times what the normal multiples are. Like, you need to deploy capital, and this feels like currently the best capital to deploy. Downstream, Corning is kind of optimal here. But I think something like half of all GDP growth in the US this year is AI. Which is insane. I mean, comparing to where we were even just a year or two years ago. It's an economic transformation story for the U.S. At a minimum, for the company that BlackRock is purportedly considering buying, many of the campuses that they're converting to data centers are brownfields, including, according to public reporting, a former coal plant in Ohio. This is what
Starting point is 00:53:25 economic transformation, industrial economic transformation at scale, looks like. And then, again, we're on a war footing. We have to realize that. It's just like pre-World War II: we were converting automotive plants into aircraft plants, we were retooling. I mean, you know, Santa Monica Airport, where I fly out of, was basically built out as a secret manufacturing and airport hub. It's happening. And it's accelerating. This is a great reprogramming of the entire industrial base. Yeah. Yeah. And also, it's another investment theme just for our investment-oriented listeners. One of our best and most prolific partners, Cush Bavaria, is starting a new company with Alex's help that funnels money into data centers, but it's part of a broader
Starting point is 00:54:16 theme of, if, you know, this is half the GDP growth of the country and accelerating, there's all this pent-up capital all over the world that's not investing in things like Corning. And so if you can create new conduits for the money into all the implications, you know, so Alex has been talking about photonics for months now. And the leap from there to saying, oh, Corning is going to benefit, is not a huge leap. So then, you know, the capital just needs to get into these avenues to keep this engine humming. And so, you know, new entities, new funds. You know, BlackRock is obviously very, very smart money pouring into this area. But then all the other implications, you know, data centers in new geographies and pumped hydro and what about the equipment for pumped
Starting point is 00:54:58 hydro and solar installation costs, all those things, all are investment opportunities. I mean, is this an infinite sink? In other words, it's going to attract as much money and capabilities and resources. You know, is there any moment where the music stops and there's not enough chairs for everybody who's invested? It's super, super easy to calculate. Now, it's infinite demand, no doubt, but it's limited by chip fabs. So if it gets overbuilt or overinvested, it's purely because, you know, there's too much of X for the number of chips. But, you know, the upper bound is
Starting point is 00:55:38 based on the chip fabs, and you can see those coming four years in advance. And so, you know, it's all bottlenecked at Intel, TSMC, and Samsung. So from there, you can do all the math in both directions in terms of data centers and users and everything. Well, my mental model, go ahead, go ahead, Alex. My mental model continues to be that the music can continue as long as the transformative applications continue. As long as we're driving the cost of the service economy to zero, as long as transformative discoveries and scientific inventions pour out of these superintelligent boxes, then the music can continue. The data center build-out
Starting point is 00:56:15 can continue to the point of trillions of dollars of CAPEX. We just need the transformation to continue, and the revenue generation that results from that. And the transformations, you know, optical: like, okay, nothing was optical, now it's all going to be optical. Corning, huge beneficiary. Nothing was liquid-cooled. Now it's all going to be liquid-cooled. And Jeff Markley told me he bought a million valves. Why'd you buy a million valves? It's like, well, because if water starts leaking out of a pipe, you need to isolate it quickly. These are, like, you know, $60,000 GPUs. You can't have water dripping on them. So I need to, but there aren't enough valves in the world, so I bought them all.
Starting point is 00:56:47 And then there's the underlying problem here of energy production, right? We're about to see energy prices begin to spike. We're seeing certain communities that are voting against opening up data centers because they don't want to have them soak up all the energy. And so are we getting to differential pricing, where data centers are paying this much per kilowatt hour versus homeowners paying a different rate? Otherwise, we're going to have communities basically blaming the AI tech bros for taking their jobs and hiking up the cost of electricity, and that does not bode well.
Starting point is 00:57:31 I think the scenario where new data center deployments continue to be connected to the utility-scale grid is probably implausible at this point. There's simply too much demand for co-located new energy output that is completely off-grid. As long as the regulatory environment continues to be favorable, and it does continue to be, I think it's more likely we end up in a future
Starting point is 00:57:54 that looks like Colossus, where there are co-located nat gas and, soon, SMR
Starting point is 00:58:16 And when you actually do the math, it actually makes sense in a few years. When you look at payload costs, you look at chip costs, you look at, again, power with solar. And that's just something that's crazy, but again, it just shows the demand for these things.
Starting point is 00:58:31 Yes. I know Eric Schmidt has a great deal of interest in that vision as well. All right. I'm going to move us on to our last conversation topic for today, keeping this WTF episode sharp and fun, and that's on robotics, and the release of FSD 14.1. So Elon has released, as promised, something that's got 10x more AI parameters. I love this. Navigation and routing are now handled by Tesla's neural net, which can help you find detours around unexpected obstacles. I've had my Tesla drive me into, you know, situations that I shouldn't have gotten into. Robotaxi-style arrival options, so you can now
Starting point is 00:59:19 select precisely your arrival option: where you want to park, on the street, in a garage, or at the curbside. And in Elon's words, V14 feels alive. Of course, this is just the prelude to the entire automation of driving across every sector. Who wants to jump in here? I'll tell you one thing that's new: you know, with the big screen and with FSD, you can watch the podcast on screen safely. So you don't have to have Peter describe every video to you anymore. Let us know if you're watching this while driving. Yeah, I think that this promises to be a big jump over 13.2.9. And I think aspirationally it also represents the beginning of several different forms of
Starting point is 01:00:07 convergence: the convergence between, obviously, robotaxi tech stacks and human-driven or supervised autonomy tech stacks. Less obviously, I would expect we're going to see over the next few years, maybe two to three years, a sequence of subsequent convergences. I would expect to see, for example, the Optimus tech stack converge with FSD, maybe in some future version. And at the core, I think what we're seeing is the emergence of a vision-language-action model, a VLA model, from Tesla. That's just end-to-end embodiment. It works in cars. Hopefully it works in Optimus robots as well. I would expect to see from all of the other major frontier labs also singular, consolidated VLA models
Starting point is 01:00:55 that work across a variety of different embodiments. Amazing. Speaking about VLA models, out of Google we're seeing the next gen of physical agents, Gemini Robotics-ER 1.5. Let's play a little video. If you're watching this on YouTube, you can see the model is identifying everything on your desk.
Starting point is 01:01:17 And so it helps robots think through complex real-world tasks. It reasons like a human, and the model outperforms GPT-5 on embodied reasoning and pointing accuracy. Emad, you want to comment on this one? Yeah, I mean, what are the brains of robots? It is these joint vision language models that can basically think and reason. And the crazy thing about this is, like, to do this a couple of years ago, you really needed to have very high-performance chips with 1,000 watts of electricity.
Starting point is 01:01:51 If you look at how efficient models like this are, you can extrapolate out. You can see they're actually going to be possible on edge compute, which just opens up the opportunity so much. And again, I think, as Alex said, this is why you're standardizing around specific stacks, just like Dojo was stopped in favor of the edge compute at X, for example. So I think we'll see these very specialist chips and these very specialist models for them
Starting point is 01:02:14 with tremendous capabilities that can then act as a basis to learn any given task effectively. So, I mean, everything becomes smart, everything understands its context and where it is, and you can speak to anything and have it understand what you mean. Alex, where does this go? It gets even better than that. Just in line with what we were discussing a few minutes ago about our living in the sci-fi future, you can't make this stuff up. The safety benchmark for DeepMind's Gemini robotics VLA model is named Asimov. Of course. And it's a benchmark that's semi-synthetic,
Starting point is 01:02:54 but it's based on lots of different visual slash language slash action scenarios and the relative safety thereof. And the beauty is if you actually go and read the Asimov paper, Gemini, the Gemini team in Deep Mind are benchmarking the safety of Isaac Asimov's three laws of robotics against better constitutions for safety of these embodied robotic models. And it turns out that there are, in fact, better constitutions for constitutional AI that one can come up with. with beyond those three laws. But the very fact that we're now at a point in our future history where we're benchmarking the three laws of robotics against other better models, it's amazing. I love the group at DeepMind and Google.
Starting point is 01:03:38 Thank you for what you're doing. Well, Alex, this week we'll close our investment in the, I don't know if I, yeah, Andean systems, who cares if that leaks out. But an incredible company that, you know, picks through all the recycling using this exact technology you just saw in the video, pulls out the precious and rare and valuable metals, the rare earth metals, and then gets them back into recycling for the next generation of chips and computers, right out of Wally. I mean, just the – and it shows you how this human paradise is possible where, you know, everything can be clean, sorted, fixed, repaired using this exact vision capability you saw in that video. Love it.
Starting point is 01:04:22 Alex, your investment picks so far still 100%, so I'll add this to the... No investment advice for me. I'm going to show this video of Tesla's optimist learning kung fu just because it's so cool. Let's take a quick look. Now, if you're listening rather than watching on YouTube, we just saw an optimist with a Kung Fu sparring partner, making some impressive moves. And I've got to imagine just for it to actually be impressive, that was not a human controlling
Starting point is 01:05:20 Optimist that that was its AI model in the world. Anybody have any countervailing evidence of that? Elon has actually said in connection with this that it was autonomous. It was not teleoperated. Fantastic. You know, we've seen our friends from Unitary, you know, doing impressive work, but Optimus towers over the G1 from Unitry. So, you know, we're not too far from Mexico. bots fighting in the ring. And it's trained by imitation learning. We're so painfully close, I think, at this point to unlocking physical labor and solving physical labor.
Starting point is 01:06:00 And remember, in the services economy, approximately two-thirds of all of the service labor ultimately is connected to some sort of physical tasks. So think of how in the future, so many tasks that no human would ever want to perform for which there aren't any jobs even can just be automated. Yeah, I think the fact that it's all neural network-based and imitation learning, that's a really important point because people who've been working in robotics, you know, I had dinner with the founder of I-Robot, and he's all cynical about, you know, robots are slow, robots, whatever. It's just, it's not true because it's all neural network-driven now, the pace of development, and that the smoothness of the movement and the dexterity is going to skyrocket because it's all neural net-based. Yeah. I mean, I need to jump shortly, but some closing thoughts here, pal.
Starting point is 01:06:51 Again, the most exciting sci-fi times. To learn Kung Fu is going to be like a couple of megabytes. And to do any task, it's probably not going to be more than another couple. But as models, once a good, generalist. You're going to have to wait for the neural link to get better for that. I think, again, we're just at this tipping point, and the tipping point is in the next, like, six months across just about all of these. Alex, you're going to need these capabilities for data center construction.
Starting point is 01:07:22 If we're going to achieve 250 gigawatts by the early 2030s, we may not have the human labor to accomplish that. So one, as you know, Peter, I'm always looking for what the innermost loop of the tech tree is, in this case, mixing metaphors. And it increasingly, to me at least, looks like the innermost loop is going to look something like a recursive self-improvement of, robots building data centers, training better robots? Well, robots, robots building robots first that are then building data centers, that are then putting out the digital superintelligence to increase the efficiency of the materials that the robots are built out of and the efficiency of the energy used to pump into data centers. It's a hyper-exponential. I can feel the singularity coming. You're feeling the AGI? I'm feeling the ASI. Oh, my God. Great. Yeah.
Starting point is 01:08:15 Dave, thoughts to close us out. Well, my final thought. Tomorrow's my 25th wedding anniversary. And so when Morris sees this podcast, she'll see I bought two tickets to Bermuda for the weekend. So we're going to spend a ton of money. Happy and I'm very sure you to both here. See that it'll be concurrent with the podcast. So go ahead and open it.
Starting point is 01:08:35 And my question ultimately is as we hit longevity, escape velocity, does till death do us part hold out for hundreds of years? We're going to find out. be awesome. So I went to Tiffany's and I bought something. I swear it's made of vibranium and set with Infinity Stone given the pricing it. But dealing with people in a physical store is the worst form of torture for me that I can possibly endure. So that's the real gift. My God.
Starting point is 01:09:05 Amazing. Imod and Alex, grateful for your brilliance as always. And see you guys next time. everybody listening. Thank you for subscribing. Thank you for being part of our community. Super pumped. The speed of these breakthroughs. I mean, I don't know how you asymptotically approach infinity, but we're going to watch it happen. All right, take care, guys. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy,
Starting point is 01:09:40 longevity and more, there's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I writing a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this reports for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to Demandis.com slash Metatrends to gain access to the trends
Starting point is 01:10:20 10 years before anyone else. All right, now back to this episode.
