Limitless Podcast - Apple's Biggest AI Announcement This Week (Not MacBook Neo)

Episode Date: March 5, 2026

Apple released a bunch of new products this week, and everyone's talking about the lineup of cheaper iPhones, iPads, MacBooks, etc. But what we're talking about today is how Apple's new chip architecture is quickly making it the sleeper giant for AI hardware.

------

🌌 LIMITLESS HQ ⬇️

NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/

------

POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast

------

TIMESTAMPS
0:00 Apple's Big Week
1:15 The New Devices
4:11 Breakthrough Chip Architecture
7:22 Strategic Positioning
9:34 Outlook
12:33 The Siri Delay
16:02 The Future of Local AI
18:16 Leadership Transition at Apple
20:46 Valuation Predictions
21:36 Closing Thoughts

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 If you're a fan of Apple products, this is an incredible week. And even if you're not, chances are something that was just announced in the last three days is for you. Apple just announced over five products this week. But there is a headline within this that everyone's missing. I think a lot of the headlines point to the fact that prices now start at $600, that the specs are now really high, that you can run OpenClaw on a laptop from anywhere in the world at a pretty high rate of performance with models that actually run locally on the device. These are all very cool things, but one of the things that people are missing is the AI angle to all of these releases and how Apple is quietly becoming one of the biggest players in the AI space, even though they haven't actually spent any money on building AI models or scaling their infrastructure.
Starting point is 00:00:44 This is probably the most bullish I've been on Apple in the AI race. And the funniest part is that they've made no mention of an AI device, but that's exactly what they released. You mentioned five devices. We've got, what is it, a MacBook Pro, a brand new laptop, and a specialized chip. I've got my turtleneck on today, Josh, in honor of Steve Jobs. For those of you who are new to our Apple episodes, we have the number one fan of Apple on the show. That is Josh. It is his Super Bowl. I'm fired up today. And we're going to get into the weeds about why Apple just released the top new AI device. Okay. So let's first get into what they actually
Starting point is 00:01:23 released, because there's quite a few things. They released the MacBook Pro. They released a MacBook Air. A brand new MacBook named the MacBook Neo, which starts at a surprising price point of $600. This is important, remember that. They released the studio display and studio display XDR. Pause right there because these studio displays are awesome. The previous XDR display used to cost $6,000. This new one costs half that and has way better specs across the board. So I will be trying to purchase one of those bad boys.
Starting point is 00:01:49 And then finally, they released the iPad Air and the iPhone 17E. Those two devices are also priced at $600. So for the first time, there are three entry-level Apple devices priced at 600 bucks. And this is important, and this is noteworthy. What's also important and noteworthy that wasn't mentioned as much is the chip architecture that lives within these devices, particularly the new MacBook Pro and the MacBook Air. Now, Apple created their vertically integrated silicon back with the M1 chip, the M-Series chips, and they are now on the fifth iteration.
Starting point is 00:02:20 But this fifth iteration is pretty amazing. And I think that's the product that a lot of people are sleeping on. The takeaway today is that not only do they now have the MacBook Neo, the iPhone 17E, and the iPad Air that are capable of running Apple Intelligence locally for $600, but now they have these chips that are capable of running actual large language models locally on a MacBook. And this is the first time ever. I think this is the most excited I've been about what's inside an Apple product versus what's actually outside. I'm usually a display guy. I'm usually an iOS app guy. I'm like, wow, this experience is amazing. But these chips are actually insane. So let me give you
the rundown of the headlines. The AI compute processing power of an M5 chip is around 4x larger than the previous generation, the M4. It's 8x larger than the M1, the first in this entire series. Now you can do a bunch of AI prompts, tooling, apps, and system integrations on your laptop, and it just feels buttery smooth. In fact, it's just super quick. And the reason why this makes me really excited is now you can conceivably host and run AI models on your own local device. That means you can give it access to private data without handing that over to the likes of OpenAI or Anthropic. And you can create a more personalized AI experience without having to hand over all that private data. But there's a really unique architecture around how these chips
Starting point is 00:03:45 are made, right? Before we get into the novel architecture, I just want to double down on something you said EJAS, which is a testament to how fast this is relative to previous hardware. One of the fun facts that I love is that previously when Meadow was released in the Lama models, the $70 billion model, that required something like $40,000 worth of GPU clusters to run 18 months ago. And now you can run that on this new M5 chip. And that brings us back to the novel breakthrough that enables this to happen, which is the post that we're seeing on screen.
Starting point is 00:04:12 They basically took what other companies were calling the chiplet architecture and built their own version of it, where you take the CPU and you, you take the CPU, and you're you take the GPU, two of these things are both very important to processing AI, and they fuse them together into a singular chip. And what's interesting about this is the CPU part of the chip is the same on every single version of the chip. There's the M5 chip, there's the M5 Pro, and there's the M5 Max. All of those have the exact same CPU. The only difference is the amount of GPUs that they bolt on. So the Pro gets 20 cores, the Max gets 40 cores, and you could think of it like these Lego blocks for Apple Silicon. And this is noteworthy because
Starting point is 00:04:49 you can scale this a really long way. What we know about AI models in general is that GPUs are how you scale these things. And the CPU is kind of used as the orchestration layer. So this allows these new chips to be modular in the sense that they can just kind of stack GPUs more and more and more. And I assume this is the architecture that we're going to see with the ultra chip that's probably coming out later this year that's going to be able to run some serious AI models locally on this device. And it's a really novel way of architecting these M-series chips that Apple's kind of doubled down on and I think we're going to see some really amazing improvements from it. What I like about the modular approach is it doesn't seem to come at the cost of the size of the
Starting point is 00:05:27 device. Like these things are still getting smaller and sleeker and thinner every single year in generation. So that makes me like, you know, super excited about like how much further we can take these devices. The other thing is I watched a really unique video this week where some random dude hacked into Apple's. I think it was their. M4 chip. I don't know if you saw this, Josh. Uh-huh. This is so cool. Yeah, they converted it into an AI transformer. So what that transformer was capable of doing was training, inferencing, or fine-tuning
Starting point is 00:06:00 an AI model right there on his Apple MacBook. And what he found out was the inference and training costs were 80 times more efficient than an Nvidia GPU, an A-100. Now, that's from some lone person hacking into this. And Apple's obviously. obviously, I'm not necessarily seen this example, but they're aware that they have the most bleeding edge chips. There is no consumer tech hardware device that has more premium components making it up than Apple stuff. Like their supply chain is just insane. So I'm really excited about that. The other thing is what this unlocks is, in my opinion, what I'm calling a new era of personalized intelligence. I think one of the main challenges that AI is face today is that we're
Starting point is 00:06:45 relying too much on model labs, which kind of results in a more fractured experience. Like, the model doesn't know who we are. It keeps asking us to tell us about ourselves. And with this new chip architecture, you can have a more persistent AI agent that understands who you are, that is more useful, that is there right with you in the weeds as you're doing stuff on the internet or on your computer. And that's really bullish for me. I can't help but imagine what it would look like if Apple decided to really scale the manufacturing production of these chips and turn them on for AI. similar to what Google did with their TPUs and kind of have their specific hardware accelerated version of these GPUs.
Starting point is 00:07:22 I feel like that would be a huge business opportunity, but clearly they are not taking advantage of this position they're in. Because previously on an episode a few weeks ago, we spoke about CAPEX, how much money these companies are spending on scaling AI infrastructure. This includes data centers, this includes GPU, powering the data centers, all of the employees that are required to make this happen. and the numbers were staggeringly large. I mean, between what is this, Amazon, Google, Microsoft meta, we have over $630 billion of spend. But Apple is only at 1.4, which is actually down 19% year over year.
Starting point is 00:07:57 You see this little swanched on here, Josh? It's like so sad and depressing, and they're spending no money on scaling this. And you have to ask, like, why, what is going on here? Because clearly they're in a position where they can win if they double down on these things that they're working are working well, but they're just not doing it. And I wonder if you have any takes on this of what you think, like, what are these other companies doing? And why is Apple not participating?
Starting point is 00:08:21 So there's an optimist take on this story and then there's a pessimist take. The pessimist take is Apple was asleep at the wheel. And they were not focused on AI. They completely missed that rush. And they fell behind creating one of the leading intelligence models. when they are the most valuable company. So Google, Microsoft, Amazon, and all these companies
Starting point is 00:08:45 that you're seeing on the screen here got way, way ahead. Now, the optimist take is this was all planned because Apple's decision was never to partake in the AI model race. Apple's plan was to own the distribution and operating system layer of AI,
Starting point is 00:09:00 which is, hint, hint, what they did with cell phones and the app store and iOS, and they're doing exactly the same thing on AI. So you could actually look at it as Apple was so smart, not wasting hundreds of billions of dollars of their hard-earned cash, and instead pays Google a billion dollars to rent Gemini and then builds an ecosystem right on top of it. It's kind of
Starting point is 00:09:21 genius if you think about it. This is like the ghost of Steve Jobs' hand, like looking over the company. And actually, if you scroll up on this post a little bit, it's kind of me making fun of Apple and how they've kind of accidentally stumbled upon this miracle. And I would believe the optimist case. If, in fact, they didn't totally fumble WWDC two years ago, there was a very clear intention to deploy Apple intelligence throughout the suite of hardware and to place themselves into this AI race, it just failed completely and catastrophically. And had that not have happened, I think I could have believed this optimist take where they really are just being slow and calculated, but I think this was an accident. They just so happened to create the best hardware in the world. And
Starting point is 00:10:06 It just so happens that all the AI models need to run on hardware just like this. It's kind of like Nvidia. Like, Nvidia accidentally became the most important company in the world. And it required a lot of execution along the way. And they deserve every bit of that. But they were in a unique position to do so. And I haven't seen any signs of Appling doubling down to do so on the software side, at least. It's only been on the hardware side. And it's only because this has been the trajectory since 2021 when they first launched the M1 chip. But it is interesting. I mean, if we look at this chart down to the bottom of the bottom of the post here. It shows the increase in cabics from everyone is going straight vertical and Apple's spending none. And yet, the Mac minis are sold out everywhere. You cannot buy one because everyone's
Starting point is 00:10:46 running OpenClaw on it. The Mac studios are running local models on everybody's machines. The new MacBook pros are incredible. They're going to be running models. I mean, there's just, everything is sold out, everything is backlogged. They can't make enough hardware to support this. And something is happening here. The market forces are out of play and they are saying, Apple, you are making great hardware. Like, please do more of this. I was joking with some friends the other day that the only real threat to NVIDIA's hardware mode is Apple with their Mac minis and with their laptops because they're the second
Starting point is 00:11:20 largest, most valuable company that comes behind them for a reason because they're selling out all their hardware. People can't get enough of it. But consumers in particular use it to run their AI models. On the software side of things, listen, it's not clear just yet, but I do think Apple gets ahead for two main reasons. Number one, they have like the largest distribution ever. I believe it's 2.5 to 3 billion active Apple devices. It's unbelievable. It's insane, right now. It's insane, right? So if they wanted to, they could switch on Bleeding Edge AI via Google's Gemini or their own
Starting point is 00:11:53 fine-tuned version of that model to 2.5 to 3 billion people tomorrow, right? So they instantly become the most or the largest consumer mode for AI immediately. But they're taking their time. I believe they're building something much more curated and better than what we have today. Now, the pessimists will say, oh, they're slow. They've been slacking, and I would probably agree with you. But hey, they're the second most valuable company in the world. They can take that time to wait and build something interesting. That being said, all these releases are super cool, Josh.
Starting point is 00:12:23 But there's one thing that's nagging me at the back of my head, which is Siri AI has been delayed again. I do not know when I'm going to get this, but it's already been delayed, what, a year and a half at this point? This is, of course, Apple's personal AI assistant, which is probably going to be the conduit and the main spokesman for all their Apple software stuff. So until I see that released, I'm not going to believe that it's actually happening. Yeah, I mean, the software part, again, they just fumbled so hard. There's no denying it, and there's no signs that they are
Starting point is 00:12:52 going to recover. I think the most bullish thing they've done recently in terms of software is just license out their AI to Gemini. Clearly, Google can do an amazing job. And for a billion dollars a year, Apple is getting access to Gemini models and they're going to integrate locally into, perhaps not locally, but they'll integrate them into all these mobile devices. And that takes us to this new weird place where like there is a potential to shift the current market forces based on this distribution that you just mentioned that Apple has of multiple billions of products already in people's hands that are AI capable. And this is noteworthy.
Starting point is 00:13:25 I mean, what we're seeing with OpenClawe in particular, it kind of set the stage for what, how strong of a preference people have to running these models on actual hardware that they feel that they own. A lot of people bought OpenClor, not because they needed the compute to run the models. That's not true. You could do this on a $5 virtual private server. They used it because it connected with the ecosystem that Apple provides. They used it because it can query through their iMessages and it could send iMessages and it could call and it can FaceTime and it can use the Apple speed of software. And that is a really big deal. And that gets into this AI edge compute bulkcase, which is the idea.
Starting point is 00:14:00 that everyone needs GPT 678, 910 serve from OpenAI's cloud servers might not be true for the majority of the users that actually just want AI to help them like figure out their grocery store order and summarize their emails for them. And there is a limit to the intelligence that the average user will actually need access to. And it would seem as if a lot of these current models have reached that threshold. Not everyone needs to go cure cancer or solve novel physics. And with that understanding, we're at a moment now where Apple's hardware, particularly this new hardware, is able to run all of these models that are capable of these average use cases locally on device. And that's a lot of users that will be using this. And it may actually be like one of the largest bear cases against
Starting point is 00:14:49 companies like Open AI like Anthropic who rely so heavily on customers paying money and using the API i fees because these local models are becoming so highly intelligent, so capable, and so small, that they could just run on an iPhone. Yeah, I actually wrote about this at length in the essay that we just dropped in our substack. If you're not subscribed, definitely go check that out. It's, in my opinion, banger and it goes through everything that Josh just covered. My thinking about models has evolved pretty drastically over the last month in the light of OpenClaw, because what I initially thought was OpenClau was just a bunch of open source tech-heavy developers that were just kind of toying around and messing around with something quite dangerous. And then what I actually learned
Starting point is 00:15:31 was that the reason why they were doing it was because it led to a better AI experience overall. And when you and I have tried out OpenCloan, we have a bunch of episodes that demonstrate this, we have just had a much better experience. Like the AI actually remembers you, but can do so many things for you. And I think that's where people are eventually going to settle, particularly consumers, like, yeah, okay, we can talk to chatjee, but like I wanted to now do stuff for that. It's much harder to do that if your AI provider is OpenAI with their own servers versus having an AI model locally on your device.
Starting point is 00:16:02 So I do think there's a larger trend, which is going to be around edge compute and local AI devices and local AI models running on your phone and on your laptop, which will lead to a more personalized experience. I'm excited to see people take privacy more seriously at this point. With OpenClau, some of the worst examples of it was, the agent would steal your credit card info and spend it on some random stuff or would go rogue and burn up all your compute tokens. You have more control and access over that if you go through an Apple device that might kind of give you a semi-private experience that you don't have to expose
Starting point is 00:16:36 all that kind of data. If this trend becomes true, then it completely threatens Anthropic and open AI's mode, which have relied heavily on subscriptions. Why would you pay $200 a month on a clawed subscription? I'm just playing the antagonist here. If you, you, you could get frontier intelligence for a much smaller model that fits on your mobile phone device. That's a Quinn model that got released this week. And that can work with all your personalized data. Why wouldn't you just do that? It's a no brain. And I understand the thesis behind it. And I think that's what Apple's going after. Well, now we have to ask the question, are they capable of doing this and who is going to get them to this place? And to do so, we have to look at the leadership.
Starting point is 00:17:16 We have to go to the Seed Suite first. And that is thanks to Polymarket, who has prediction markets on who is actually going to be responsible for running the ship after Tim Cook leaves. So it's been widely rumored that Tim Cook is going to be stepping down from Apple to as CEO capacity sometime this year. He's been there for a long time. He's had an incredibly successful run. But it seems as if the Apple C Suite is kind of grooming the next person. And according to Polymarket, John Turnus is going to be that guy.
Starting point is 00:17:44 And this is exciting because John Turnus is the VP of, I believe, engineering hardware at Apple. he's a hardware guy. He's the person that has helped design, develop, and lead these devices. And Polymarket has him at what over 50% chance of running the company. So if he does actually become CEO, is there a world in which he can push this company forward in the sense that they can really double down on this edge AI compute thing? They could get these models running on all the devices. Maybe. I think that would be a really fun opportunity to see. And we really need a shakeup. Like Apple's been so slow. They've been so boring for so long that this would make a really big difference. And also, there's another market that shows the future products that they're planning to launch. And there's
Starting point is 00:18:26 one that I think surprises a lot of people, which is a foldable phone before 2027 at an 84% chance. So by September of this year, you will be able to buy a folding iPhone, which seems a little bizarre. But according to Polymarket, this is true. Thank you to Polymarket for supporting and sponsoring this section of the episode. And yeah, I think it's just a testament as always to how Apple is slow, but they are figuring it out. And man, if they can get this foldable iPhone that turns into an iPad, it runs models locally on your phone, it's going to be pretty cool. So feeling bullish on Apple in general, I mean, what do you think this means for the valuation of the company, EJAS? Like, Apple as a stock, is this, are we still being theoretical, hypothetical, or does this
Starting point is 00:19:07 convert to actual revenue dollars? I think it eventually converts to revenue dollars, but they're going to need to deliver on, well, they're delivered on the hardware side. They need to deliver on the software side. And that's typically where Apple has really dominated the consumer market. Yes, they built an amazing iPhone, but they also killed it with creating the best perfected app that you can use on your phone and that entire ecosystem for developers and consumers on either side. So they need to pull off the same thing for AI. And the challenge that they're going to face is it's not the same as the internet that we know today. It's going to be a new operating system. You and I have discussed multiple things on the show before.
Starting point is 00:19:46 we've discussed, perplexity releasing a new personal computer and then open AI releasing an AI web browser and all these random products that you and I don't really use anymore. It's used for very niche things. And what those attempts are getting at is trying to rebuild an operating system around this new weird technology that kind of feels like magic, but is also kind of dangerous. And so Apple is the company that's currently being presented to solve that problem. My bet is they're going to nail it, at least within the next kind of two to three years, for two reasons. One, they have the largest distribution I was mentioning earlier, three billion active devices, so it's easy for them to kind of turn that on. Their biggest threat actually might be Google,
Starting point is 00:20:26 but Google doesn't really have the best experience of creating consumer hardware, aka Google Glass. There's actually a new version of that coming out in a few months' time. But yeah, I think Apple has the best shot of it. Valuation-wise, I actually think it's up from here. And they're currently valued at what, like just under $4 billion? is just over four billion, four trillion dollars. And I think they're going to see their kind of Nvidia type rise now. It wouldn't surprise me if they actually compete with Nvidia for the top spot once consumer AI takes off.
Starting point is 00:20:59 So in summary, there is a lot of cool new hardware that just came down the pipe this week. There is probably something for everyone. And the noteworthy thing is this new kind of AI angle that they're taking. They have the accessible devices starting at $600. They have the high-end devices like these new MacBook Pro that go. up to $7,000, $8,000, but that are capable of running these really powerful, impressive local models on them? And is that going to be enough to actually start to detract away from the market shares of these other players like Open AI and Anthropic?
Starting point is 00:21:28 We will see. We will be monitoring the situation as always, but that is the news as it relates to Apple this week. It is a huge release in hardware. Are they accidentally sleepwalking into success, or is this a tactical master plan? We don't know. Perhaps the new CEO, John Turneris will tell us. But until then, thank you so much for watching. Like he just mentioned, he has a newsletter about this, about Local Edge Inference that is releasing, as you're watching this episode, it's out. So you can find that on our substack linked in the description below. Another way that you can help us is by sharing this episode with your friends. That always goes a long way. Share it with someone who you think will be interested. We do this four times a week,
Starting point is 00:22:06 25 minutes. If you listen to all of them, there's nothing that you will miss in the world of AI. Because our job is to keep you up to date. And thank you so much for staying up to date with us. yeah, we will see you guys on the next episode. Hold on. Let me feed our system prompt. You are going to record an absolute banger of an episode right now, and you're going to make no mistakes. Think hard. Let me feed my biological LLM. You're going to absolutely smoke this episode, and it's going to go even more viral than the previous episode. Make zero mistakes. Let's go. This is why we should charge premium. All right. I know.
