Moonshots with Peter Diamandis - AI Experts React: Elon’s Grok 4 Is Now #1 in AI — This Changes Everything w/ Emad Mostaque, Salim Ismail & Dave Blundin | EP #182

Episode Date: July 11, 2025

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Salim Ismail is the founder of OpenExO. Dave Blundin is the founder of Link Ventures. Emad Mostaque is the founder of Intelligent Internet.

Offers for my audience:
Get the first lesson of my executive course for free at https://qr.diamandis.com/futureproof
Test what's going on inside your body at https://qr.diamandis.com/fountainlifepodcast
Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod

Learn more from Emad: https://ii.inc/web
Learn about Dave's fund: https://www.linkventures.com/xpv-fund
Join Salim's workshop to build your ExO: https://openexo.com/10x-shift?video=PeterD062625
Connect with Peter: X, Instagram
Listen to MOONSHOTS: Apple, YouTube

*Recorded on July 10, 2025
*Views are my own thoughts; not Financial, Medical, or Legal Advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 How impressive is Grok 4 for you? If you look at the AIME benchmark, which is an advanced math quiz, Grok 4 scored a hundred percent on it. You're literally running out of benchmarks. He's got to be driving Google nuts that Elon got this done in 28 months from a cold start. When he said he was gonna put this huge cluster together, every AI expert in the world said you cannot get power laws and coherence at that scale. You just can't do it. Every AI expert was like, oh, God dang, he did it. The amount of compute and resources, again, are going exponential. Now it's the real quality that differentiates the top models between each other.
Starting point is 00:00:39 My big question is, where do we go from here? Now that's a moonshot, ladies and gentlemen. Everybody, welcome to Moonshots, an episode of WTF Just Happened in Tech this week. Special episode today following the release of Grok 4. It is Large Language Model Release Month. An extraordinary string of new models coming up. I'm here with my moonshot mates, Dave Blundin, the head of Link XPV. Salim Ismail, the CEO of OpenExO. And a special guest to help us dissect all of this is Emad Mostaque,
Starting point is 00:01:20 the founder of Intelligent Internet. Guys, it was a pretty epic day yesterday. Good to see you all. Pleasure to have you. Yeah, likewise. Yeah, and this is our special Grok 4 edition. Emad, you're in London, yes? Yep.
Starting point is 00:01:38 Fantastic. And Salim, where on the planet are you, buddy? New York. Okay. Dave's in Boston. I'm in Santa Monica. All right. Let's get going. So, just to jump in, the goal here is to dissect what happened yesterday.
Starting point is 00:01:55 Blow by blow. What's Grok 4 all about? And just to foreshadow what's coming, we've got a few new model releases coming, with Gemini 3, GPT-5, you know, and probably a few others. So let's kick it off with this video. Like, Grok 4 is postgraduate, like PhD level in everything, better than PhD, but like most PhDs would fail, so it's better. That said, I mean, at least with respect to academic questions, I want to just emphasize this point.
Starting point is 00:02:29 With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions. Now, this doesn't mean that it's, you know, at times it may lack common sense, and it has not yet invented new technologies or discovered new physics, but that is just a matter of time. I think it may discover new technologies as soon as later this year, and I would be shocked
Starting point is 00:03:00 if it has not done so next year. All right, Dave, you want to take the first bite? Yeah, it's awesome. This is actually a golden moment in time, because it is an absolutely brilliant assistant that can do almost anything you want it to do. But like Elon said, it's not reasoning yet. So it's not coming up with the fundamental,
Starting point is 00:03:19 this is what we should build and this is why. So that's still in the hands of the creator, the human operator. And so this moment in time is actually really, really golden. It feels just like an Iron Man movie where you've got Jarvis. Jarvis will build the suit for you. You have to decide how you're going to save the world. It's a really, really fun time to be using these brand new, like you said, there'll be
Starting point is 00:03:43 three of these in the next month or so. This is the first round, and he's dead right. You know, the PhD-level solution, it's all measured in the benchmarks we'll get into in a minute, but it does virtually anything, mind-blowing capabilities, but it doesn't decide what to do and why. Emad, I love your take on this. You've been plugged into this world, you know, intimately for a while. How impressive is Grok 4 for you? I think it is very impressive.
Starting point is 00:04:14 I think, you know, picking up what Dave said, I think it is reasoning, but it's not planning as yet. And there was a question as to, when we got to this ronnaflop level, I think that's the term, like 10 to the 28, I think, flops, would we continue to see improvements? And part of that is the compute and part of that is the data, as we'll get to later. And the answer is yes. And again, like Elon said, getting above postgraduate level in every subject, it can now execute and it can reason.
Starting point is 00:04:46 It doesn't have planning yet. So, I mean, isn't that AGI? Isn't that the sort of, like, kind of definition of AGI? But we passed through the Turing test without noticing; we're gonna pass through AGI without noticing too. It's like this hedonic adaptation. You're like, of course it's fine, you know? Already, again, if you want to get a job done, it will do the job for you of summarizing a book. Like, it will do the job for you of writing a summary of something or translating, et cetera.
Starting point is 00:05:18 And life is just the same so far, because you haven't got that final step that Dave said. And there's a few extra bits that we need for full agentic behavior above that, but we're nearly there, because we have that final building block now with this next level of model, yeah, where it's reliable. I like the distinction, by the way, that it is reasoning, it has to be to solve these really hard PhD-level problems, but it's not planning. That's a great way to phrase it. A ronnaflop is 10 to the 27th, so that's the scale of these training algorithms. That was the level the AI Act said
Starting point is 00:05:51 they wanted to ban, by the way. So this would be the first banned model. Yeah, that's a great point. I think one of the things that's happening is the absolute beauty of capitalism, where you've got big juggernaut companies fighting it out for supremacy, taking massive risks, choosing design paths, taking huge gambles, and really, really going for it. I think it's really, what, magical to watch this happening. Yeah, I love this tweet from Sawyer Merritt. It says xAI was founded in March of 2023. Just 28 months later, it's now the number one model in the world, verified by independent testing. Incredible achievement. I mean, it is insanely fast compared to everything else that's being built. I remember in May two years ago, when Elon was first raising money
Starting point is 00:06:41 and I had a chance to sit in on an investor pitch in the first round for xAI, and he said, I'm gonna have a hundred thousand GPUs, H100s, operating by the end of the summer, and everybody's like, no freaking way. And he did just that, and he's not slowed down. So here we see, in this image of the Artificial Analysis Intelligence Index, Grok 3 was placing like fifth or sixth, and Grok 4 leaps to the front of the line. Are we going to continue seeing this, Emad? You know, this just leapfrogging each other, leapfrogging each other. Is there no end in sight? It's getting very difficult, because if you look at the benchmarks they have there, if you look at the AIME benchmark, which is an advanced math quiz, Grok 4 scored a hundred percent on it. So you're
Starting point is 00:07:37 literally running out of benchmarks in order to do that, and the amount of compute and resources, again, are going exponential, because you need to squeeze that out, as well as have good data, as well as have good algorithms. So, before, you could just chuck everything into a pot, slosh it around. Now it's the real quality that differentiates the top models between each other, and it's become more of an engineering and quality challenge than just a brute force challenge. Insane. Can I please say what for a second? Please.
Starting point is 00:08:08 Okay, so I've got a problem. I would suggest that if I'm trying to answer that problem or get a solution to it, I could go to any of these and they're going to give me marginally, roughly the same answer. So we're at a point where the new step is, I'd love to, I wanna get into the details of Grok to figure out why is it so radically different from any of the others, right? And that's where I think the fun will come. Every week, I study the 10 major tech metatrends
Starting point is 00:08:37 that will transform industries over the decade ahead. I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy, longevity, and more. No fluff. Only the important stuff that matters, that impacts our lives and our careers. If you want me to share these with you, I write a newsletter twice a week, sending it out as a short two-minute read via email.
Starting point is 00:09:00 And if you want to discover the most important metatrends 10 years before anyone else, these reports are for you. Readers include founders and CEOs from the world's most disruptive companies, and entrepreneurs building the world's most disruptive companies. It's not for you if you don't want to be informed of what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends, to gain access to trends 10-plus years before anyone else. Well, the funny thing is, we're using, you know, we're basically
Starting point is 00:09:35 going to Einstein, you know, and asking him to summarize a poem for us. I mean, it's like, there's such a massive level of intelligence, and the utilization for the general public is de minimis. All right, let's look at what's next on this. So Grok outperforms everyone on the highest-level test, Humanity's Last Exam. Up until now we've seen, see, o3 was at 21%, Grok 4 was at 25.4%, Gemini 2.5 at 26.9%.
Starting point is 00:10:09 And then Grok 4 Heavy comes in at 44.4%. We were talking about this a little bit earlier, Emad. Can you speak to Humanity's Last Exam for us? Yeah, this was come up with by Scale AI and kind of a few others, to have an exam that even the most polymathic people in the world would find difficult. So they estimated that, like, some of the smartest people in the world would score maybe 5% on it, maximum 10%.
Starting point is 00:10:38 And the top models at the time, which was probably like half a year ago, nine months ago, scored 8%. Now you have a qualitative leap above to that 44% level. And I think it's interesting because as kind of Salim was referring to, like what are these models for? They're at this super genius level. It's like having a mega liberal arts program. And then the next step is going to be to have really useful people in the workforce on one stream. And then the other stream will be to take the subcomponents of this and just push up to superhuman reasoning, discovering new things at a level that we could never have
Starting point is 00:11:14 before. And I think this is one of the indications of that because again, I tried to read some of the questions. I didn't even understand the questions. It was impossible. Let me give you a couple examples. I literally just gave a presentation on this yesterday, so I have it right in front of me.
Starting point is 00:11:29 Tell me. Humanity's Last Exam, 2,700 questions. When the slide says, for reference, humans can score 5%, that means the very best humans in any given domain can score 5% within just the domain they understand. And I'll tell you why. Like, here's an example question: compute the reduced 12th-dimensional spin bordism of the classifying space of the Lie group G2, and then it goes on from there. Most people can't even understand one word of that. Exactly,
Starting point is 00:11:58 here's another one. Take a five-dimensional gravitational theory compactified on a circle down to a four-dimensional vacuum. So yeah, these are the hardest questions, and that's why this exam is supposed to last for a long time. A 44% score is just way outside the range of human ability, because nobody has that broad knowledge that spans all these topics. So how far, how long before we hit 100 percent here? Emad, any bets? Two years max, I would say probably next year.
Starting point is 00:12:32 So, you know, there was a conversation years ago about AI getting to a point where you can't understand the questions it's asking and answering, and we're not far from that. So, I mean, at some point we're unable to measure how rapidly it's advancing. That becomes a little bit frightening. It's got to be driving Google nuts that Elon got this done in 28 months from a cold start. Absolutely.
Starting point is 00:13:03 Largely because, you know, Elon is phenomenal at large-scale manufacturing, large-scale organizational management, and, you know, people working till four or five a.m., sleeping in tents on the factory floor. That's his wheelhouse, and that's Tesla, that's SpaceX. And because all the intellectual property was more or less open-sourced by the research community at Google and Meta, he was able to pick up all that brilliant thinking and just plow it into implementation. Also small teams, right?
Starting point is 00:13:34 It's not large. I mean, Google is a massive organization. Yeah. I think there's something else here, though. Remember, we talked about this last time, when Grok 3 came out, right? But when he said he was going to put this huge cluster together, every AI expert in the world said, you cannot get power laws and coherence at that scale. You just can't do it.
Starting point is 00:13:53 And he went right back to first principles, created new kind of connections between the chips and whatever and did it. And every AI expert was like, oh, God dang, he did it. And so this is the incredible ability he has to go into a domain with a beginner's mind, go to first principles and just re-engineer the heck out of it to achieve massive performance. And I think this is an indication of that. My big question is, as you mentioned earlier, Dave,
Starting point is 00:14:23 where do we go from here? Like, what does it mean to have a 50% versus 44% on this test? Yeah. Yeah. I think, if I can just give a little bit of context: in 2022, Amazon built us the 10th fastest public supercomputer in the world, 4,000 A100s. And that was 2022. That was the 10th fastest of any supercomputer in the world that we were training on. And there was an instance where literally hundreds of the chips melted because of the scaling. Now they've managed to, by turning this into an engineering problem, scale the hardware, but also the inside of the model, which I think is this really important thing. The reason it is above PhD level in each of these areas is that was a computation scale problem, and
Starting point is 00:15:10 so what happens is that if you could scale a liberal arts person all the way up to post-grad in everything, you would, and then you specialize down. And then you look at some of these things, and Salim's question there. You've got the base stats, just for reference, everybody: the xAI cluster now has 340,000 GPUs. They're about $30,000 or more each. Yeah, do the math. 10 billion. A lot. I mean, this is why we're seeing a billion dollars a day going into AI, and why Jensen said there'll be a trillion dollars a year by 2030, and it's not slowing down.
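The "do the math" arithmetic here can be sketched in a few lines. Both inputs are the rough figures quoted in this exchange (340,000 GPUs at about $30,000 each), not official xAI numbers:

```python
# Back-of-the-envelope cluster cost from the figures quoted above.
# Both values are the conversation's rough estimates, not official data.
gpu_count = 340_000           # "the xAI cluster now has 340,000 GPUs"
price_per_gpu_usd = 30_000    # "about $30,000 or more each"

total_usd = gpu_count * price_per_gpu_usd
print(f"~${total_usd / 1e9:.1f} billion")  # ~$10.2 billion, i.e. "10 billion. A lot."
```

At $30,000 per GPU this lands at about $10.2 billion, which is the "10 billion" Dave waves at; at higher unit prices the figure only grows.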
Starting point is 00:15:48 So here's another image from the little conversation Elon had yesterday. These are the benchmarks his team put up. I don't know if you want to hit on any of these, Emad or Dave or Salim. Any favorites for you? Well, my favorite one is the AIME25, 100%. You're done. You know, GPQA, these are all hard benchmarks. I think Elon would want to go to 110%.
Starting point is 00:16:17 He likes 11. He likes 11. The only one I don't recognize is on the bottom right, Emad. Do you know what that is? The USAMO 25? I think it's the USA Mathematical Olympiad. Oh, right. So it's about to happen. But again, these are novel, hard benchmarks, effectively, all of them.
Starting point is 00:16:37 And they're being saturated, because ultimately the AI can reason mathematics and science better than we can. Again, it can't plan just yet. It doesn't have the same memory capacity, and the building blocks haven't been put together. But it already has superhuman capability in many narrow areas. So it's inevitable, I think, what happens next. You know, we glossed over his quote there, discover new physics. Wouldn't surprise me if it's this year.
Starting point is 00:17:06 Certainly no later than the end of next year. Alexander Wissner-Gross has been having a field day with that all day. I bet. First of all, what does it mean to discover new physics? That's pretty interesting by itself. Well, I mean, Alex has been saying we're going to solve all of math, and then physics comes next, and chemistry and biology follow quickly.
Starting point is 00:17:27 I mean, this is the most exciting, for me, this is the most exciting thing about these models: will they literally unwrap the present of the universe before us, right here, right now, during our lives, in the next five or 10 years? Well, there's a couple of specific applications that I've been watching. I want to see an AI break and solve
Starting point is 00:17:49 the quandary of the wave-particle duality of light. That would be interesting, and seeing what exactly is going on in this. The second one would be molecular manufacturing, and how do we use techniques for doing molecular manufacturing. Because if you crack that, then you crack all assembly and manufacturing of all kinds. Right, then the cost of anything becomes about a dollar a pound by weight,
Starting point is 00:18:13 a computer, a dollar a pound. And now you're in an amazing space. I mean, listen, again, going back to Ray Kurzweil's predictions, right? How he does it, I still, you know, he's mentored you, he's mentored me, but, you know, his predictions that we're going to have nanotech in the early 2030s, where is it, where is it? Well, this is probably its parents.
Starting point is 00:18:36 Yeah. Well, the one that's really fun to think about, you know, the quantum teleportation, Peter, that you brought up at one of the press meetings? So how do you reconcile the fact that two entangled particles can be infinitely far apart yet still communicating in real time with the fact that the speed of light can't be transcended? So Alex's speculation is if we can solve physics in the next year or two or three and it turns out that you can communicate
Starting point is 00:19:05 using quantum teleportation, that we instantly discover all these other intelligences around the universe. Yeah, we've just been listening at the wrong frequency with the wrong codecs. These are the key takeaways. I'm gonna just read these out loud and we can talk about them.
Starting point is 00:19:22 They spent just as much on fine-tuning, training the AI after the initial phase, as they did on pre-training. So that's a big change. Emad, you want to dissect that for us? Yeah, so it used to be that everything was basically, take a snapshot of the internet, and then you put it into this giant supercomputer mixer, and it figures out all the connections, the latent spaces,
Starting point is 00:19:44 to guess the next word. Then you have this very weird AI that came out that was a little bit crazy. It's like a disheveled graduate student without his coffee. And then you had to tidy him up with the reinforcement learning. That was the post-training, and that was 1% of the compute. Then with DeepSeek it was 10% of the compute. And now it's moved to equal, because they figured out how to chain reasoning steps. And in fact, I think part of what they did, and we've seen this with other labs, is they use their frontier model to make data for the next frontier model. So having large amounts of compute to create your own training data in a structured manner allows you to take that latent space, the landscape, and make it smarter and smarter and smarter.
Starting point is 00:20:30 Just like your brain adapts as you learn more and more reasoning, as you see more and more things. And so, rather than having to have these massive scrapes of the internet or whatever, it's more and more structured data making up these models, which is making them smarter reasoners. So, the 50% additional compute dedicated to the fine-tuning, does that mean we have a more sane version of Grok? Well, fingers crossed. It doesn't necessarily mean that, because you can still get all sorts of mode collapse within it if the latent space goes, but probably, because again, you're training it just on a certain field of things, as opposed to Reddit and other things.
Starting point is 00:21:15 In terms of order, I'd say this is probably like a hundred million dollars each, so it probably adds up to one Meta AI researcher. And you have a new unit of measure in the AI world. That's funny. So, let's comment on the cost here: $3 per million input tokens, $15 per million output tokens, and it can handle long context windows of 256,000 tokens. How does that measure up, Dave, in your mind?
Starting point is 00:21:46 It's the longer context. A lot of the claimed context windows aren't real. Under the covers, the dimension of the neural net is much smaller than the claimed context window. So I suspect at this scale that this is the true dimension of the network, but I don't really know. We'll have to dig in over the next couple of days and find out. But what it means is you can feed in 100 books worth of information concurrently.
Starting point is 00:22:11 It instantly digests all that knowledge and then gives you an intelligent answer based on all of that information in one pass. So it's just the next step in what's been going up sequentially from model to model to model. Emad, do you expect we're going to be constantly reducing the price per token? Is this a demonetizing curve for a while to come? 100%. I mean, so the cost of this is about the same as the cost of Claude 4 Sonnet, which is the second model of Anthropic, or o3's cost, but it's
Starting point is 00:22:47 better than both. It's about 0.7 words per token to give you an idea. And so the cost of a million very good words that are smart is $20. But next year with Vera Rubin, the next generation chip they're going to whack in there, just by the hardware it will be three times to four times cheaper. And they'll probably figure out some more stuff around that. So equi-intelligence, the cost probably drops by around five to ten times a year. So it'll be a buck for a million amazing words.
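Emad's token economics can be sanity-checked in a few lines. The constants below are the figures quoted in this exchange (about 0.7 words per token, $15 per million output tokens, prices falling roughly five to ten times per year), not official API pricing:

```python
# Rough token economics using the figures quoted in the conversation.
WORDS_PER_TOKEN = 0.7            # "about 0.7 words per token"
USD_PER_M_OUTPUT_TOKENS = 15.0   # "$15 per million output tokens"

def cost_of_words(n_words: float) -> float:
    """USD to generate n_words of output at the quoted rate."""
    tokens_needed = n_words / WORDS_PER_TOKEN
    return tokens_needed / 1_000_000 * USD_PER_M_OUTPUT_TOKENS

def projected_cost(cost_now: float, years: int, annual_drop: float = 5.0) -> float:
    """Cost after `years` if prices fall `annual_drop`x per year (quoted range: 5-10x)."""
    return cost_now / annual_drop ** years

million_words_now = cost_of_words(1_000_000)
print(f"${million_words_now:.2f}")                     # ~$21, close to the quoted "$20"
print(f"${projected_cost(million_words_now, 2):.2f}")  # ~$0.86 two years out at 5x/year
```

A million words needs about 1.43 million tokens, so roughly $21 today; two years of 5x annual price drops takes that under a dollar, which is Emad's "a buck for a million amazing words."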
Starting point is 00:23:21 It's hard to believe the most powerful technology in the world is de minimis in cost. It's crazy. I want to put a comparator in here, though. This is amazing. Like, we could put hundreds of our books into the thing, and it would hold all of that in real time, as Dave said. But let's note that a single human cell has several billion operations going on in it at any point in time, right? So we're kind of several orders,
Starting point is 00:23:53 multiple orders of magnitude away from modeling one cell. And so we've got a long way to go to try and model life, or get to really big, big, big things. There's a coming wave of technological convergence, as AI, robots, and other exponential tech transform every company and industry. And in its wake, no job or career will be left untouched. The people who are gonna win in the coming era won't be the strongest; it won't even be the smartest. It'll be the people who are fastest
Starting point is 00:24:21 to spot trends and to adapt. A few weeks ago, I took everything I teach to executive teams about navigating disruption, spotting exponential trends a decade out, and put them into a course designed for one purpose, to future-proof your life, your career, and your company against this coming surge of AI, humanoids, and exponential tech.
Starting point is 00:24:41 I'm giving the first lesson out for free. You can access this first lesson and more at diamandis.com slash futureproof. That's diamandis.com slash futureproof. The link is below. Let's talk about SuperGrok Heavy. You know, gotta love Elon's terminology, right? We've got Falcon Heavy, now we've got SuperGrok Heavy. He loves his terms, and I love them too, actually.
Starting point is 00:25:06 It makes me, I smiled when I saw that. Why Heavy, by the way? Is there a naming reason for that? It's consistent with the Elonverse. Yeah, no, I mean, like, you know, Falcon Heavy was able to have three boosters to launch a heavier payload to orbit. So why not talk about heavier capacity? So, I mean, in reality, right, Falcon Heavy had multiple boosters, and this has multiple agents, so
Starting point is 00:25:33 the next one will be heavier, and the one after that will be Grok Starship; it'll be BFG. So the price point here, that's a new high bar. That's going to scare a lot of people. I say the same thing I said last time: try it. Burn the 300 bucks for one month. You can turn off the subscription, but you've got to try it to know what you're missing or not missing. A lot of the use cases, the day-to-day use cases,
Starting point is 00:26:05 it won't matter much. But if you're building something complicated, writing code, or designing mechanical parts or whatever, you're going to get addicted to it. What I'm really curious about is the margin at 300 bucks a month. Are they actually chewing up all that money on compute for you, or do they have a significant margin at that price point? One thing I've been predicting for a long time that's inevitably going to happen soon is the use cases where you need that extra intelligence. When you're building a software product and you're prompting it, you absolutely need that extra level of intelligence.
Starting point is 00:26:38 It makes you dramatically more efficient in moving forward. If you look at the cost of a software engineer's time, you can afford to go up another factor of 10 or even more in price point for this, and still be glad that you paid it. I think the escalation of pricing is going to come soon. The counter-argument is that the competing models will then commoditize it, but I think people will pay a lot for a marginally better model, because the effective product you get out the other side really accelerates your time to development, or the quality of the design, or whatever; the solution to the math problem being right rather than wrong makes a big difference.
Starting point is 00:27:18 My guess is they're losing money. You think so? That's what OpenAI said for their pro level, whereas at the level below they make money. So I think the way that I view this is as a loss leader, because if someone's paying 300 bucks, you enterprise-upsell them. And then you do team things to get everyone doing it. Because basically, right now, what we have is a UI problem.
Starting point is 00:27:40 The reason is, the way to hook it up and make it usable for as many people on your team isn't there. You know, this is what Andrej Karpathy calls context engineering, you know, like, what are the new UIs that will enable us to use this most efficiently and get our data in there? If you can crack that, then 300 bucks a month for a high-level knowledge worker is nothing. Yeah, just like you used to pay a thousand, two thousand bucks a month for Bloomberg when I was a hedge fund manager, mostly for instant messaging. But, you know, like, again, it's just not quite there, but it's about to flip there. Yeah, well, like, a lawyer will cost you that much per hour, or even three, five times that per hour.
Starting point is 00:28:22 Will this do the job on your legal documents better? I can't wait. That's the one profession I would love to replace in my career: lawyers. All right, you mentioned enterprise level. Emad, let's go there right now. What else can Grok do? So we're actually releasing this Grok,
Starting point is 00:28:38 if you want to try it right now to evaluate, run the same benchmarks as us. It's on the API, has 256K context length. So we already actually see some of the early adopters try the Grok 4 API. So our Palo Alto neighbor, Arc Institute, which is a leading biomedical research center, is already using it and seeing, like,
Starting point is 00:29:01 how they can automate their research flows with Grok 4. It turned out it's able to help the scientists sift through millions of experiment logs and then just pick the best hypothesis within a split second. We see this being used for CRISPR research, and also Grok 4 independently scores as the best model to
Starting point is 00:29:25 examine chest X-rays. Who would know? And in the financial sector, we also see, you know, Grok 4, with access to all the tools and real-time information, is actually one of the most popular AIs out there. So, you know, Grok 4 is also going to be available on the hyperscalers. So the xAI enterprise sector only started two months ago, and we're open for business. Open for business. So, Emad, you've been working on medical-related AI. The block here isn't the tech, it's going to be the regulations. It's going to be, when will an AI be able to fully replace a radiologist, or fully replace, you know, any profession in the medical world. How do you think about that?
Starting point is 00:30:20 Well, I think it's the augmentation first, reduce errors, increase outcomes, and then eventually it's replacement, because Google had their AI medical expert study, which compared doctor alone, doctor plus Google search, doctor plus AI, and then AI by itself. Yeah. But just like with self-driving cars. I just want to touch on that, because it was a really important article that came out. If you, again, the physician by themselves was getting something like 80% of the cases correct. The centaur, the physician plus the AI, was getting like 87%. The numbers are approximate. And then the AI without the human bias, without the human biasing the output, the
Starting point is 00:30:57 AI by itself was outdoing all of them, in the low 90s. Extraordinary. Well, again, it's wasted; it's better than any post-grad at the moment. But right now, I think it's about the empowering and the acceleration in terms of the integration, and you're way off the liability profile of replacement. And I think you need replacement right now. What we need is fewer errors in something like medicine, right? And more upside.
Starting point is 00:31:21 I think the doctor number by itself, Peter, was 70%, because I remember Daniel Kraft saying, when you go to the doctor, you get the wrong diagnosis about 30% of the time, right? That's a staggering number of errors, by the way. That means out of the four of us, one and a half got the wrong diagnosis the last time we went to the doctor. I mean, we need to figure out who that was. That's really ridiculous, and so you need an AI to take over that whole field. Well, especially... Human bias, and getting human bias out of that
Starting point is 00:31:50 is also even more important, as we can see. Yeah, the number of types of scans and sensors you can do is way, way outstripping any human ability to look at all the data that comes out of it. So a lot of it isn't trying to beat a doctor, it's trying to assimilate data that never could have gotten into the diagnosis before. That's a great point.
Starting point is 00:32:07 That's a great point. Yeah, just. All right, let's go on to our next one. So available for an API, all right. We've covered these areas already. Let's move on. A quick aside, you've probably heard me speaking about Fountain Life before, and you're probably
Starting point is 00:32:26 wishing, Peter, would you please stop talking about fountain life? And the answer is no, I won't. Because genuinely, we're living through a healthcare crisis. You may not know this, but 70% of heart attacks have no precedence, no pain, no shortness of breath, and half of those people with a heart attack never wake up. You don't feel cancer until stage 3 or stage 4, until it's too late. But we have all the technology required to detect and prevent these diseases early at scale. That's why a group of us, including Tony Robbins, Bill Kapp and Bob Haruri, founded Fountain Life, a
Starting point is 00:32:57 one-stop center to help people understand what's going on inside their bodies before it's too late and to gain access to the therapeutics to give them decades of extra health span. Learn more about what's going on inside their bodies before it's too late, and to gain access to the therapeutics to give them decades of extra health span. Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com slash Peter, and tell them Peter sent you. Okay, back to the episode.
Starting point is 00:33:17 All right, I love this. You know, Elon is a gamer, and so it's not unreasonable for him to be talking about using grok to make games. Let's take a listen. Yeah, so the other thing we talked a lot about, you know, having Grok to make games, video games. So Danny is actually a video game designer on X. So, you know, we mentioned, hey, who wants to try out some Grok for preview APIs to make games. And then he answered the call. So this was actually just made first person shooting game
Starting point is 00:33:49 in a span of four hours. So some of the actually, the unappreciated hardest problem of making video games is not necessarily encoding the core logic of the game, but actually go out, source all the assets, all the textures of files and to create a visually appealing game. I think one of the challenges is what we do with all of our time in the future,
Starting point is 00:34:13 and we may be playing a lot of video games. You know, this could actually light up the entire metaverse world, because building the metaverse world and building those environments was the big limiting factor and now you can do it at a very rich level this could be really interesting to see what comes from this yeah when did you guys first hear that grok 4 was gonna come out last night well he said a few days ago didn't he he? I mean, he was saying it was going to be this weekend, and then it got pushed to yesterday. Yeah, because I feel like we had about 48-hour notice, plus or minus a day or two.
Starting point is 00:34:55 If you look at the presentation, the raw presentation from last night, and compare it to Google I.O., Google I.O. was scripted and staged with multiple presenters and you know clearly planned way in advance. This last night was like is it done yet guys? Is it done? Does it work? Okay if it works we're launching tonight. Let's go get on stage let's go. And I think that's the way it's going to be in the future because you know it seems like getting to market one day, two days sooner actually matters a lot in this horse race. So this is kind of the dynamic we should expect going forward.
Starting point is 00:35:29 But by the way, that narrator, that's the AI voice of a geek who is living and breathing it. And that's what you want in there. That's what you want. Let's take a listen on Elon on video games and movie production. For example, for video games, you'd wanna use Unreal Engine or Unity or one of the main graphics engines
Starting point is 00:35:54 and then generate the art, apply it to a 3D model, and then create an executable that someone can run on a PC or a console or a phone. We expect that to happen probably this year and if not this year, certainly next year. So
Starting point is 00:36:17 that's a it's gonna be wild. I would expect the first really good AI video game to be next year. And probably the first half hour of watchable TV this year, and probably the first watchable AI movie next year. It's amazing what the fragmentation of those industries is going to be incredible. Because, you know, normally we think of a video game coming out in a release, all of your friends get the exact same release, it's a release that's maybe good for a year or more, and you're
Starting point is 00:36:53 all on like FIFA 23 now or whatever, 25. But here, because it's only four hours to create the next iteration, then you can say, well, no, I want a customized version. There's going to be all this fragmentation and the version of the movie that I saw isn't the same ending that the one that Saleem saw. So now we're debating on how we're not even on the same page and how the movie ends because we saw a different AI generated version. And it's going to be great.
Starting point is 00:37:16 It's going to be really, really cool because everything's customized here. We're going to have a lot to do with our time. I mean, Iman, listen, you spent so much time as CEO of Stability in this market arena of entertainment and video production and such. When I asked you earlier whether Hollywood is going to be disrupted, you said no. Can you explain that, please? So I think the thing that won't grow is people's attention. So, if you look at Netflix, their biggest competitor is video games, which is why they're going into video games.
Starting point is 00:37:50 You only have so many hours in a day and you're a consumer. Video game sector right now, I think is $450 billion. The movie sector is 70 billion. That's how fast it's grown. Like education around the world is like 10 times larger. So, it's 10% of education in terms of size. So if you think about that, then for Hollywood studios, this is great because the costs of coming down and it's been a dramatic shift. To give you an idea, the first video models, stable video I think was pretty much the first. We released that in 2023. And now
Starting point is 00:38:21 with VO3 from Google and others, you're pretty much at Hollywood level, close to it, but you need one more generation to get there. And the average Hollywood click length is 2.5 seconds. It used to be 12 seconds. Now it's 2.5. And we can generate eight and soon we'll be able to generate more. So you're getting to this point where you can make that. But again, people like having common stories to talk about. Barbie, Oppenheimer, and things like that. So these marquee things, they can get the
Starting point is 00:38:49 license of Cary Grant from back in the day and make him a star again. You know, you're going to see. Don't you think that there's going to be so much supply? And if I have a chance to watch a new episode of classic Star Trek, but I'm the character playing Captain Kirk, and you're playing Spock, and my friends are taking the roles, I don't know why I would not be buying that entertainment from a source outside of Hollywood. Well, you'll buy that too.
Starting point is 00:39:27 But I think one of the things we've seen in the AI world, what's it about? Distribution, distribution, distribution. So you'll buy your interactive games and put yourself in the game, but you'll still have your marquee things and the cost of that will decrease dramatically. And the distribution cost will decrease dramatically and the impact will increase. So again, for companies, this is all great. For the individuals working in the industry, this is terrible. And so I think this is the key thing. For the individual creators, this is great because you can finally tell the stories, so we'll see richer stories, but you've still got to distribute them.
Starting point is 00:39:58 It's like one of the examples I had to give is, you know, Taylor Swift, bless her heart. It's not the best music in the world, but she still causes earthquakes, you know, Taylor Swift, bless her heart. It's not the best music in the world, but she still causes earthquakes, you know? Yeah. Yeah, no, your point that I think the video game industry bypassed all other media combined. I think I read that. And it's on a much faster growth trajectory as well. But I think the video games are far more compelling
Starting point is 00:40:23 with AI components, AI players, AI voices, voices that are talking directly to you. And so that interactive media is going to get even more accelerated by this trend. So whether you call it movies or video games or other, the media is going to change, right? It always does. So it may not fit exactly in those swim lanes, it's clearly the interactive talk to me part is gonna grow much much faster than passive watching part Yeah, I think it's the quality part is the feedback for you to find flow So the movie industry is grown from like six fifty billion to sixty billion the last ten years average IMDB score six point three Video game industry is like doubled inside quadrupled. It was 170 billion.
Starting point is 00:41:06 Now it's like 500 billion. The average score has gone from 69% on Metacritic to 74%. Games are good now and you need to be good to compete. And again, I think what we can see from this technology is I as a creator can create the best things better because I can control every pixel. This is what Jensen has said, every pixel will be generated. Exactly what's in your mind, maybe you don't have to use a keyboard, it just comes straight from your mind, can be on that screen, you can tell the stories you want. And on the other side, you've got the fast food. So, you know, the general content farms get even better. So, you've got your gourmet and you've got your fast food and both of the quality of
Starting point is 00:41:44 those will increase. All right. Every day I get the strangest compliment. Someone will stop me and say, Peter, you have such nice skin. Honestly, I never thought I'd hear that from anyone. And honestly, I can't take the full credit. All I do is use something called One Skin OS One
Starting point is 00:42:01 twice a day, every day. The company is built by four brilliant PhD women who've identified a peptide that effectively reverses the age of your skin. I love it. And again, I use this twice a day, every day. You can go to Oneskin.co and write Peter at checkout for a discount on the same product I use.
Starting point is 00:42:20 That's Oneskin.co and use the code Peter at checkout. All right, back to the episode. Of course, Grok for coding. Let's take a quick listen. Right. So if you think about what are the applications out there that can really benefit from all those very intelligent, fast and smart models and coding is actually one of them. Yeah. So the team is currently working very heavily on coding models. I think right now the main focus is,
Starting point is 00:42:46 we actually trained recently a specialized coding model, which is going to be both fast and smart. And I believe we can share with that model, with all of you in a few weeks. I still remember, Imad, when you were on stage with me, like three years ago at the Abundance Summit, and you said, no more coders in five years. It was front page throughout India. I got hate mail about that, you know.
Starting point is 00:43:14 Oh my God, you scared the daylights out of me. And it's true. I mean, it's a big issue. It's a big issue. Why would you be able to talk to a computer better than a computer can talk to a computer? Yeah. You know? Well, hold on. Let me drill into that just for a second. It's a big issue. Why would you be able to talk to a computer better than a computer can talk to a computer? Yeah You know hold on let me get a good drill into that just for a second Don't you think we'll end up with really good coders just creating a hundred times more code
Starting point is 00:43:35 No, because what you'll have is really good context engineers Directing to build things code is an intermediate step of language because the computers and the compilers couldn't handle the complexity of what we wanted to talk about. Now you can talk to the AI all day long about anything and it understands to a reasonable degree what you actually want. And once we get the feedback loops really going,
Starting point is 00:43:58 as we've seen with cursor and other things like that, like there's a reason it's got to $500 million in revenue in a year. You know, there's a reason that Anthropics got to four billion dollars, probably two-thirds of that is code. Crazy. All right. It's disappointing that we won't have this for a couple of weeks. We'll have to get back on the pod and check it out when it's out. Somebody told me you can get to it through cursor right now. I'm looking at cursor as we speak and I don't see it popping up as a cursor. Cursor is very much linked towards Anthropic.
Starting point is 00:44:27 So it probably like lobotomize it, but GROK three or GROK four already heavy is a pretty good code. It writes clean code and the coding model I think will be even better. But again, how much better are you going to get when you can output a 3D video game like that or just about anything? And I think this comes to think, if you're trying to create content, the AI is good enough already for just about anything. If you're trying to create something creative, this is the final part that requires planning and
Starting point is 00:44:55 coordination and multi-agent systems and the UI UX isn't there yet for the feedback loops, et cetera. Yeah, I can use all the horsepower they can give me though because when you're writing a little code module, it's all pretty much perfect already. But right now, I can go to the best Claude model and say, build me a dashboard for this function and just give it that prompt. Most of the time, it comes back great and even thinks of things that I wouldn't have thought of for that dashboard.
Starting point is 00:45:22 I can use another step up of capability in that area. So I'll use it up as quickly as it comes out, believe me. All the tokens to Dave. OK. Let's hear from Elon about his video model training. What's coming on input output? We expect to be training a video model with over 100,000 GB 200s and to begin that training within the next three
Starting point is 00:45:49 or four weeks. So we're confident it's going to be pretty spectacular in video generation and video understanding. So 100,000 GB 200s, more than anybody's thrown at this. Imaad, how does that hit you? So when we trained the state-of-the-art first video model two years ago, two years ago, we used 700. 700 H100s.
Starting point is 00:46:22 So like, let's say they're three times slower. So the equivalent of 200 of the chips that he's about to use, because these are the integrated GB chips from Nvidia. The top level models right now, if you look at the Lumas of the world, the ByteDance models of the world, the VO3s use two to 4,000. He's about to use a hundred thousand of those. And the thing about video is when you train a video model, it actually learns a representation of the world through computation. So once we made a video model, we extended it to a 3D model that could generate any 3D asset. It understands physics and more. So actually video models are world models that
Starting point is 00:47:03 can be used to do all sorts of things Like improve self-driving cars by creating whole worlds and other things like that as well I think that's the reason why given they've got three hundred thousand chips They're putting a hundred thousand of these to the video model. Well, they're planning a million GPUs by the end of this year No Let's you know, it's like, it's like no small dreams here. When you pioneered this just a couple years ago, like you said, the video model was trained completely separate from the large language model because it was just too much.
Starting point is 00:47:36 You couldn't put everything into one mega model. Is he going to do a monster retraining of this model with video data or is it a separate set of parameters in a separate model? This will be a separate model. So we took the image model and then we created the video model from that and then we created the 3D model from that. Now they're doing from scratch training because the technology we developed for stable diffusion 3, the diffusion transformer matching it, is able to do that all at once. And this is similar to what VO three and others use. And with optimizations, you can just pop that all straight in. Now the arch that they use, like the grok model
Starting point is 00:48:11 for the image is actually the same architecture as for the language. And they may do the same thing. I'm not sure how they're going to train this model. Because again, they're super smart. But it's a different model entirely. But they may all end up being the same model. Because if you want a model that understands physics and the wonders of the universe, and what's the question to get to the answer 42, you probably want to train on everything that a human sees and more because it will train on everything a million humans can see and understand and read and all sorts of stuff. I mean, you know, I'm excited about the idea of there's so many of my favorite science fiction books that have never been made into movies or TV series, right?
Starting point is 00:48:52 I mean, the ability to just say, hey, you know, like one of my favorite books is the Bob Ivers series by Dennis Taylor. And you know, I love it. It's a four book series. It's extraordinary. Make it into a movie for me. Make it into a 20-part TV series for me. Here's a hundred bucks. It's actually really fun actually if you took the best books that have ever been turned into movies already and use that as training data. So like this book turned into this killer movie, make the changes necessary to get from point A to point B. Okay, now here's a book that never got made into a movie.
Starting point is 00:49:32 From what you learned about those patterns, make the movie that's most compelling. The thing is, you won't even have to do that. Like just with the pace of chip improvements, as we go through the generations in two years, you will have live 4K TV. So you've already seen some people do like live low resolution stuff, interactive stuff. When Jensen says every pixel will be generated, he literally means it. Like with the next generation chips and a bit of more improvement in the algorithms and optimization of the models, you can have live streaming 3D or video
Starting point is 00:50:08 where every single pixel is generated on your screen within a few years. And so you can just say, stop, try this, adjust this, and that'll be the feedback loop. It'd be fun to take some old movies and make them way better. Like take the old Kona and the Barbarian movie and make it really a proper movie.
Starting point is 00:50:25 That could use some energy. You know what hits me? We're sitting here having this conversation in four different cities around the world where, you know, we've taken so much for granted in this video channel. And like, you know, 10 10 years ago what do we have we had just barely had skype and now you're it's it's crazy so we humans adapt so rapidly to awesomeness and we take it we take it we normalize it very fast it's like your second way my ride right? Your first one's like, wow, and your second one's like, okay. Oh, for sure. So, any closing thoughts on Grok 4?
Starting point is 00:51:12 I have a question for Iman. You've been in the space for a while, now we have Grok 4, right? What are the types of things that Grok 5 will be able to do? So, Grok 5 will be a multi-agentic system, but rather than having four boosters, it will have 60 or 600 or 6,000, depending on what you want. It'll probably have a world model plugged in and it'll have interconnectivity, and this is something that Elon mentioned yesterday,
Starting point is 00:51:38 to every major type of system. So it knows how to use Maya, it knows how to use advanced physics simulators. It will write its own lean code and optimize it for mathematics. And so it's just going to be like an incredibly versatile worker. And just like he's going to unleash millions of optimus robots, he's going to unleash billions, if not trillions of these things, GPU demand, withstanding, into the economy. And that's going to be a bit crazy.
Starting point is 00:52:05 And I think the way that you'll interact with Grok 6, probably Grok 5, is you'll have a Zoom call with it just like you have now. Hey folks, Salim here. Hope you're enjoying these podcasts and this one in particular was amazing. If you wanna hear more from me or get involved in our EXO ecosystem,
Starting point is 00:52:22 on the 23rd of July, we're doing a once a month workshop, tickets are $100. We limit it to a few people to make sure it's intimate and proper, and we go through the EXO model. What we do there is we basically show you how to take your organization and turn it into one of these hyper growth AI type companies. And we've done this now for 10 years with thousands of companies. Many of these use the model that we have called the exponential organizations model.
Starting point is 00:52:47 Peter and I co-authored the second edition a couple of years ago. So it's a hundred bucks, July 23rd. Come along, it's the best hundred dollars you'll spend. Link is below, see you there. Gemini 3 and GPT-5, let's talk one second about what you expect there. Are these going to just leapfrog GROG4? Are they going to be sort of diverting in different directions? Imad, your thoughts?
Starting point is 00:53:13 I think they'll probably all be kind of the same plateau. Now it's really about the UI UX and then how you wrap these into agents and then multi-agent systems. And then how you make it so just easy for anyone to use like this. So Google, in the work that they've done with their AR glasses, enabling you to have a conversation with your AI and being able to have it see what you see,
Starting point is 00:53:38 that's a great step forward. OpenAI with their voice mode has been fantastic. Are there any versions of user interface that we haven't seen yet? I mean, BCI will be one of them for sure. I mean, I personally think again, the interface is just the interface that you have with the remote worker.
Starting point is 00:54:02 And all the technology is almost in place for that. Get on a call, hit him a slack. Pretty much. And you just don't know. That's my AGI. My AGI is actually more actually useful intelligence, right? Like this is what I think what Salim would like. Just I don't know it's an AI or not. It just gets the job done and it doesn't sleep. And this final part of it as well is that the task length of these AIs has gone to like seven hours now. I think I've seen from various entities now, they're getting that up to almost arbitrary length.
Starting point is 00:54:32 So you can set teams away and they have organizing AIs and others, they get the job done, they check in whenever they're unsure about something. And then this is that next step up for all these technologies. But I think the 10 to the 27 models will, as you said, all be pretty much similar because they're already
Starting point is 00:54:49 above PhD in everything. Now it's about making them super useful and getting them out there. And the demand for that is in the billions of agents. The millions of computers. Dave, you know what I find interesting is Elon's got basically a limitless capital supply. It's every time he's gone to raise money, I've asked, well, how much can I get in the next round?
Starting point is 00:55:14 And it's like, well, we're oversubscribed already. Yeah. Yeah, it's another constraint. It's going to be the money. It's going to be the GPUs. I have a question for you, Matt, about that actually, because if you say, okay, the, you know, GPT-5 will be out soon, a couple of weeks, hopefully, it'll be on the same plane, probably Leak Frog, but in the same genre, and then Gemini 3 will come out and it'll be somewhere similar, maybe a little better. But the chip supply, you know, Google has huge amounts of GPU and a massive cloud computing platform, plus they make their own TPUs. Then, you know, you've got a million chips going to Elon. We just talked about that.
Starting point is 00:55:59 Sam at OpenAI has had a little bit of trouble with Microsoft recently. There's definitely some kind of falling out there. I mean, the way open AI got ahead of everyone in the first place is getting access to the compute From Microsoft and so is he gonna have a problem getting catching up to a million concurrent GPUs? Training a single massive model I mean, I think Stargate is in that order of magnitude when you look at the kind of gigawatts and now Stargate is in that order of magnitude when you look at the kind of gigawatts. And now Amazon's just announced poor Anthropic using Dranium for something that's even bigger than Stargate with their latest kind of chip supply. Google's the leader in this. So they have 3 million odd.
Starting point is 00:56:34 But the thing that I come back to is OpenAI basically slowed down when everyone was making Ghibli memes. And so if you think about order of compute of Ghibli memes compared to order of compute for useful work, I'm like, it's that versus that, right? Google is okay because Google are actually landing millions of their own TPUs and they have the full stack and it has better interconnect for large context length.
Starting point is 00:57:00 It's actually really good seventh generation hardware. Elon will get the supply because he's a beast. And I think, again, OpenAI have the capital, but they're moving more and more towards consumer, with the Johnny Ive acquisition and things like that. The dark horse here is probably, again, Meta, to be honest, because Zach is going to drop $100 billion on this. He dropped $30 billion on the the glasses on the metaverse.
Starting point is 00:57:27 He thinks AGI is coming and Meta is a 1.7 trillion dollar stock. He will easily drop a hundred billion. Yeah, he's got 70 billion dollars of free cash right now to use and can pump it up. Well, I did an interview of Yann LeCun at MIT not super long ago and they had committed and already bought a million GPUs for internal use and meta so he had those on order already then I'm sure they're in house now so he has the compute in house. So basically all the top guys can get a million the next step is 10 million. Well there's only 20 million in the world so this is where it runs into a bottle. You can't even keep
Starting point is 00:58:02 a straight face can you? Well but, think about every pixel being generated. And think about, again, the economic activity of actually having a single useful teammate or account. We're talking about accountants and lawyers and other things like that on the other side of the screen. We're not even talking about super genius PhDs. Is NVIDIA just going to just keep going, going, going? Is anybody going to displace their production at all?
Starting point is 00:58:29 All of the top chip manufacturers are good enough to run these models. The only question is who has enough gating supply. So the reason for the hopper thing was actually the packaging of the chips, you know, the co-op. So you have different supply channel constraints, just like robots. In two years, robots will be good enough to do what? 90%, 95% of human labor.
Starting point is 00:58:53 The only reason the entire global economy on labor isn't gonna flip over from $2, $1 robots is supply chains. So what we've got is a complete replacement of the capital stock of the economy from GPUs for virtual workers and robots. It's just supply constraints. So Nvidia, number one, you don't go wrong.
Starting point is 00:59:12 You don't get fired getting Nvidia, but you'll get chips from wherever you can get them because those chips are orders of magnitude cheaper than your team members. I just asked actually Gemini in the background here what it costs today's market rate to train a run a flop. So one of these models, just the compute cost is 312 million. So like you said, Ahmaud, it's like one signing bonus over at OpenAI these days. So that's not, the cost is not the issue. It's who has access to the compute.
Starting point is 00:59:43 What's amazing to me in this entire conversation. We haven't said the word Apple once. And Apple controls about a third of the manufacturing capacity at TSMC for their M3 line, M2 line chips. So they could easily become a player in the get a big data center up and running game. They'd have an incredible asset having that manufacturing toehold with
Starting point is 01:00:05 TSMC. It's just incredible that they haven't done that. Well, I think this comes down to the thing. These models have economies of scope in that once you train a model that's good enough, do you really need another one? And then it becomes like electricity, it becomes a utility. So your genius models become utilities and then what matters is the model that runs on the M3 or whatever, you know, like liquid AI just releasing edge models. Those things become even more important because the M3 has capacity, M4s have capacity. Yeah, yeah, that's a really big deal, by the way. Liquid is, I didn't appreciate how big
Starting point is 01:00:43 a deal it was until recently, but people are going to want to use this stuff immediately. I mean, it's so addictive. And the inference time compute is severely constrained. And Liquid runs fine on the edge on these M3s. It runs really, really fast. It runs on the chips in the cars. And it's about, they say, about 100 times more efficient than just trying to run a brute-force transformer
Starting point is 01:01:06 So that could be a huge unlock for people having access to AI, you know At least more access to keep up with the demand Exactly because you'll have your gated stuff and then they might increase prices because they have to because there'll be so much competition for chips Even as you get them cheaper and then you just got this AI with you But that AI will be smart enough to do your day-to-day and so you'll have a whole curve of intelligence just like sometimes you need to have steady workers and sometimes you need your geniuses. I forgot you were the you're actually the first guy to see liquid when it was just a research. Yeah I gave them all the compute to get going.
Starting point is 01:01:40 Yeah that's right. Amazing now they're worth two million dollar valuation. So listen, Imad, when you come back and join us next week, I think we have it scheduled. I want to hear all about the intelligent internet. I'd love you to break the news on what you've been working on in secret for the last year or so. I've seen pieces of it. It's awesome. But hopefully you'll spill the whole master plan for us. All right, Dave, Salim, my moonshot mates. Thank you guys. GROK4 special edition. See you with GROK5.
Starting point is 01:02:14 Yeah, we'll be back online soon. All right, see you all. Thank you for joining us. Take care folks. Bye, see you all. Thank you for joining us on the Rooch Hats. All right, take care folks. Bye y'all. Take care guys, bye. If you could have had a 10 year head start on the dot com boom back in the 2000s, would you have taken it? Every week I track the major tech meta trends. These are massive game changing shifts
Starting point is 01:02:36 that will play out over the decade ahead. From humanoid robotics to AGI, quantum computing, energy breakthroughs and longevity. I cut through the noise and deliver only what matters to our lives and our careers. I send out a Metatrend newsletter twice a week as a quick two-minute read over email. It's entirely free. These insights are read by founders, CEOs, and investors behind some of the world's most disruptive companies.
Starting point is 01:03:02 Why? Because acting early is everything. This is for you if you want to see the future before it arrives and profit from it. Sign up at dmagnus.com slash meta trends and be ahead of the next tech bubble. That's dmagnus.com slash meta trends. When does fast grocery delivery through Instacart matter most? When your famous grainy mustard potato salad isn't so famous without the grainy mustard. When the barbecue's lit, but there's nothing to grill. When the in-laws decide that, actually, they will stay for dinner.
Starting point is 01:03:47 Instacart has all your groceries covered this summer, so download the app and get delivery in as fast as 60 minutes. Plus, enjoy zero-dollar delivery fees on your first three orders. Service fees exclusions and terms apply. Instacart. Groceries that over-deliver.
