Limitless Podcast - Google Is the New King of AI: Here's Why

Episode Date: August 27, 2025

Today we break down why Google just took the lead in AI after dropping "Nano Banana" (Gemini 2.5 Flash Image): instant, ultra-cheap, character-consistent image generation and surgical editing that now tops LMArena, with real examples and cost math. We show how Google's stack—Gemini 2.5, Veo 3, Genie 3, AI Studio, and NotebookLM—feeds a single model loop that makes everything faster and better, and what that means for creators and workflows. Then we hit Meta's week: the Midjourney aesthetics deal, the rise of AI companions across Instagram, the rumored "Hypernova" glasses with wristband control, and the talent drama around huge hires leaving. Finally, we preview what's next (Gemini 3 and "Tech-tember" hardware season) and why the coming weeks could redraw the AI leaderboard.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
00:00 Start
05:31 Cheaper, Faster, Better, Stronger
12:10 Meta Madness
16:32 Apple and Google Partner?
20:13 Meta Companions
25:47 New Meta Hardware!
28:23 Can This Possibly Work?
31:38 AI's Hardware Form Factor
33:14 The $150M Rug Pull

------

RESOURCES
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 All right, ring, ring, ring. The banana phone's calling on today, Ejaaz, and it's got some big news for you. This is a huge day. I can't believe I laughed at that. Why? It was pretty good. I was going to have a snack, and I was like, well, I might as well use it as a prop to call you and let you know about all the crazy news that is happening today. Particularly around Google's new image gen model named Nano Banana that just went live. Very impressive model. I would say the best model in the world for generating images and also editing images. And also the quickest, the fastest, and the cheapest. It's just like, it's amazing across the board. And the coolest thing about all of this is that the person who's responsible for deploying it, Logan Kilpatrick, was actually on our show, and it went live today also. So if you haven't listened to that, go listen to our episode with him.
Starting point is 00:00:47 It was really cool. It was like an exclusive early interview explaining fully what this new model is, how it works, what it does. We are going to summarize that for you in this episode, as well as include a lot of other interesting topics that we just kind of wanted to round up for the week. So if you're listening, this is going to be Nano Banana plus like four other interesting things. So why don't we get right into Nano Banana? This model is incredible. So do you want to walk us through why it's amazing and why we were talking so highly about it? Absolutely. So the formal name of this new image gen model is Gemini 2.5 Flash Image.
Starting point is 00:01:18 Oh, way less cool. Too long. Yeah, I know. And it's basically Google's state-of-the-art image generation and editing tool. Now, notice that I said editing at the end. So this isn't just your standard kind of image gen tool that we've seen from Midjourney, OpenAI, and xAI; it also allows very precise, fine-tuned edits.
Starting point is 00:01:36 So say, for example, the guy that we interviewed on the show, whose episode went live today, wants to be put in a banana costume, that can happen. And that is the image that you're looking at right here. This doesn't look like AI, ironically, but it is, in fact, AI. And one of the astounding things about this new tool
Starting point is 00:01:56 is that it's super quick. And that might not sound so fancy, but typically when you use these image generation models, they take so long. Anytime between like 60 seconds to about five minutes. Do you remember the OG version of Midjourney, Josh? Oh, it was horrible. It was like a dial-up internet connection. It took forever to generate a single image. Exactly. And now we've gotten to the point where we can make very fine-tuned edits. And also another thing is the character consistency. Josh, I know you really love this. Why don't you tell us a bit about this?
Starting point is 00:02:24 Oh, this to me was the biggest thing about this model. Because never in my life have I been able to generate an image of myself that looks like myself. And it drives me crazy, because we use these images for production stuff like thumbnails and different assets for the show. And I always have to take the headshot, and I can't just tell the AI to fix things or change things. Well, for the first time, it actually looks like you. And this is the coolest thing.
Starting point is 00:02:47 I was testing it out earlier. And we could see from this example that the guy looks like him throughout the entire series of photos. So this first photo that we're seeing, it's this dude in an elevator. And then the second one is the same exact person at a diner, and it's kind of a side profile. And then there's one of him sitting inside of a car. And there's these really great examples of just character consistency. Here's a poster with him and like a little toy version of him. And he looks the same in every single one. And I think this is
Starting point is 00:03:12 the core breakthrough of this model is the character consistency throughout everything. So if you're using this to tweak a photo in any way and you have a person in it, for the first time ever, it will retain the details of the person, which to me brings it from like a novel toy to a real productive tool. Like this is now something that we can use to actually insert into our workflows where we're generating images. And I actually want to use it, because it will not only save us time, but it will also look really good. And that's just the character consistency. That doesn't include the image editing, which we will get to next, which actually edits and changes the images for you. Yeah. One of the fun image editing features is stylization. So think about like
Starting point is 00:03:53 when Instagram first introduced filters. You know how you could kind of put some kind of disco retro filter over your face, Josh, and it kind of looked cheesy, and it was obvious that it was a filter, but it wasn't quite the kind of vibe that they were going for. You now have, like, pinpoint accuracy doing this from a single prompt. So you could just tell Nano Banana, hey, place this portrait image of Josh in an 80s retro-themed diner, and it will do exactly that, and it won't look cliche or cringy, which I thought was really cool. Another example that I've seen is it's able to combine two different images in a very meaningful way together. So one popular example that I saw earlier was this image of a lady doing a
Starting point is 00:04:33 wild crazy pose. And then someone kind of dragged and dropped an image of a coat from Zara, I believe, that just got released. And it melded around her body, around the original model in her pose. And I thought that was just so, so cool. And I'm thinking of all the applications that this can be put into. Like if you are a UX designer working at a technology company or at any kind of product company, you can now create mockups really easily, super simply, and for a lot cheaper than you could in the past. If you are creating mockups for a new movie, you can kind of create the script, the scene structure, as we're seeing in this example over here, and kind of test it out with people. There's so many
Starting point is 00:05:16 ways that you can apply this that aren't just for like the average Instagram user. Yeah. I think this model is amazing, but it's not only amazing in terms of what it's physically capable of. I think the technical back end is just as impressive. So yeah, here we have, this is Logan, who we just had on the show. He's talking about the new image generation model. And one of the things that I thought was super impressive: it is now number one in LMArena, which is, kind of, we talk about benchmarks a lot, that's the benchmark for image editing. It is now number one by like a very large margin. In addition to being number one, there's this amazing fun fact that I saw, which I'm pulling up right now. It's about OpenAI image generation.
Starting point is 00:05:55 So if you're on ChatGPT and you're generating an image using OpenAI, it costs about 19 cents per image. When you're generating an image with this new model, with Gemini 2.5 Flash Image, whatever it's called, it costs just under four cents. So you're talking about, it's like 3.9 cents. So just about four cents relative to 19 cents. So that is a huge difference in terms of what it costs to generate an image. And not only does it cost less, but it is so much better. So what we're getting is this significant step-function improvement in quality of image generation at a decrease in cost. So it's faster. It's cheaper. It's better.
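To put the hosts' cost math in perspective, here is a quick back-of-the-envelope sketch. The per-image prices are the rough figures quoted in the episode (about $0.19 for OpenAI image generation vs. about $0.039 for Gemini 2.5 Flash Image), not official published pricing, which varies by resolution, tokens, and provider updates:

```python
# Rough per-image prices as quoted in the episode (USD), illustrative only.
OPENAI_PER_IMAGE = 0.19
GEMINI_PER_IMAGE = 0.039

def batch_cost(per_image: float, n_images: int) -> float:
    """Total cost in USD for generating n_images at a flat per-image price."""
    return per_image * n_images

n = 1_000
openai_total = batch_cost(OPENAI_PER_IMAGE, n)
gemini_total = batch_cost(GEMINI_PER_IMAGE, n)
savings_pct = (1 - GEMINI_PER_IMAGE / OPENAI_PER_IMAGE) * 100

print(f"OpenAI:  ${openai_total:.2f} per {n} images")
print(f"Gemini:  ${gemini_total:.2f} per {n} images")
print(f"Savings: {savings_pct:.0f}%")
```

At these quoted prices, a thousand-image workload drops from $190 to $39, roughly an 80% saving, which is the "step function" the hosts are pointing at.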
Starting point is 00:06:36 And it's all run under one model that is easily accessible using the Gemini application. So this to me, total home run. And I was thinking about it earlier today and I was like, wow, I'm really digging Google recently. I've really been enjoying their stuff. Google now has, well, they have the best image generation model in the world: fastest, cheapest, best. Then they have the best video generation model in the world, which is Veo 3. Then they have the best world generation model, which is Genie 3. And then they have close to the best, if not the best, large language model, which is
Starting point is 00:07:06 Gemini 2.5 Pro. And I'm like, wait a second, this is a lot of categories where Google is just pinned at number one. And yeah, they're really doing a great job of turning the ship around and getting themselves a solid lead in the AI race. And also, another thing is they're working on building pretty interesting applications. I think one of the biggest critiques I've had with Google, and the reason I personally didn't use them for so long was that the app just kind of sucked.
Starting point is 00:07:27 Like there weren't really any good consumer applications. And I've been playing around with them recently, and they've gotten much better. My favorite of all of them is NotebookLM. If you haven't used it, it ingests a lot of data. It'll create a podcast for you on something you want to listen to. It'll create a video for you. We use it for guest research when we want to quickly go through a book. It'll analyze the whole book.
Starting point is 00:07:47 It's amazing. The tool set has gotten really good. And I think Google really is making a prominent stance as a leader in the AI race. Do you feel the same? Have you used the models at all recently? Yeah, yeah. I used Google AI Studio, particularly in the run-up to our interview with Logan. And it's just amazing how, like, this combined suite of tools actually changes my life in a meaningful way. I also just thought about a specific conversation we had with Logan on the episode, which you guys should definitely listen to and watch, which was, he said that all of these different tools that they're releasing, so image gen, world model, a couple of the other features that you mentioned,
Starting point is 00:08:23 all feed into the exact same Gemini AI model. So it's one model that is getting iteratively faster with every new feature that they launch. And this is super unique because a lot of the ways that other AI model builders are building their tools are kind of like separate. Or they're kind of like sitting on top of a model but they don't really kind of like feed back to the model.
Starting point is 00:08:48 This is something that is collectively making the beast, which is the AI model, smarter over time. And that is just fascinating. He was giving us an example about how Nano Banana, this image generation model that they just released, had actually taught the model to reason and think with situational awareness for its LLM-native service, which I just thought was super amazing. So all in all, Google is cooking, and I'm so excited to see what they release next. Yeah, I think that's one of the parts that makes this model so good is that it has that real-world understanding. And I think a lot of the other image models really lack that. You start to see this
Starting point is 00:09:23 like weird kind of glitch in physics where something doesn't seem quite right. With this model, it just kind of works. It understands the world, and, like you said, they all pool into each other. So the reason the video model is so good, the reason the world builder model is so good, the reason this image model is so good: they're all using the same resource stack. And that resource stack is growing and growing, and it's creating this monster of a model. So Gemini 3, which is the new flagship, is coming soon, I'm sure of it, and I really have my hopes up for it. Come on, I hope it comes out this week. If it comes out this week, not that we have any insider info, although we did drop Nano Banana early. But I mean, they could really just kind of give a hardcore smackdown to everybody
Starting point is 00:10:00 else in the game. If they drop a leading model, we have to give Sundar his flowers, dude. I was just thinking back to like, remember when they released their first image gen model? Dude, who remembers this? Bard AI? How did I forget that? You could ask it, show us what our American forefathers looked like, so the guys that, you know, founded the Constitution and stuff. And it produced people that really did not look like the American forefathers. It was historically inaccurate, and it was producing absolute garbage. And in fact, I'm pretty sure the CEO apologized for what the model did back then. And now you fast forward to today, where they are number one in LMArena and across the entire world. It is one of the biggest 180s I've ever seen, and I'm super
Starting point is 00:10:44 impressed. Yeah, Larry and Sergey came back. They're in the lab. Whenever the founders come back to a company, you know it's like, okay, it's game time, it's showtime. Founders came back, Sundar turned it around. They have the entire DeepMind team, which has been absolutely incredible, shipping things fast. Demis, who is the CEO, is leading the charge. And we're showing this post by Demis, which says one word: relentless. Just in the past two weeks we've shipped, and it has a laundry list. This has to be like 10 different features that are just insane. It was like Genie 3, Gemini 2.5 Pro free for university students, AlphaEarth, a whole bunch of stuff. And one of the cool things that you actually brought up on our episode with Logan is that they're not just focused on hardcore AI.
Starting point is 00:11:24 They're focused a lot on the sciences too. So they're also shipping things in the world of protein folding, trying to understand cancers and how we could solve and cure them. And it's this whole very broad-scoped attempt at just leveling up all of the world around us through AI. And I just have a lot of admiration for the team. They've been doing really well. I'm a big fan of the Google team. Please keep doing what you're doing. And we're going to keep using these products.
Starting point is 00:11:48 I'm fired up. Okay, so that wraps up the Google segment of the show, but there is more happening in the world of AI, right? Ejaaz, what do we have next? What else interesting is going on this week? Okay, there is a lot going on at our friends over at Meta this week, over the last couple of weeks. So to set some context here, as we know, it's been a tumultuous kind of experience for
Starting point is 00:12:10 Meta. They have spent a total of, I think, $3 billion to hire 25, okay, 50, sorry, 50 people. That's outrageous. One of them was partly an acquisition. So they spent a hell of a lot of money. And so we now need to start seeing the fruits of this labor. But we're still kind of in a holding period, right?
Starting point is 00:12:32 Because this team is just newly formed and they need to build these new models. This week, Alexandr Wang, who is effectively the CEO of their new AI superintelligence unit, announced a partnership with Midjourney, where they're basically going to license their aesthetic technology for all future models and products, bringing beauty to billions. And it is indeed billions. It's 3.9 billion users, to be specific. And I kind of thought this was an interesting partnership, Josh, for a few different reasons. So let me walk you through my thoughts. So for those of you who aren't familiar, Midjourney is basically the OG text-to-image generation model and eventually text-to-video generation model. They have a ton of really cool features.
Starting point is 00:13:15 and they were the guys originally that took between one to five minutes to generate an image. And Josh and I are very familiar with this. We would have been one of their biggest users back in the day. And so I thought it was interesting that they were partnering with these guys to basically use them as their AI image generation tool instead of doing it in-house. But it isn't for lack of trying. Actually, Meta has, I think, two image generation models that they've been trialing and testing both for consumers and users of all their various platforms,
Starting point is 00:13:44 so that's WhatsApp, Meta, Instagram, but also for advertisers. And the advertising use case is to create marketing material, which typically advertisers had to kind of like strike a deal with Meta and then agree on kind of like guidelines of what they can use. And then the advertisers, so the companies themselves would create the imagery, share it with Facebook. They'd kind of go back and forth saying, okay, is this image good enough?
Starting point is 00:14:06 And then they'll share it on the platform. Now Meta is bringing all of this in-house. And they've decided basically not to build this specific tool in-house, but to partner with the best. And this is kind of a growing strategy that I'm starting to see with some other big tech companies. I don't want to stray too far away from the conversation, Josh, but I'm sure you heard that Apple is basically rumored to be partnering directly with Google and Gemini, which we literally just discussed, to basically feed Siri as their AI assistant. So without straying too far away, there's this trend of big companies kind of king-making these smaller AI companies. And I think a lot of this Midjourney partnership kind of went over
Starting point is 00:14:40 people's heads. But I don't know. What's your take, Josh? Do you have a different one? Yeah. No, I think, I'm not sure I have a take, because I'm not sure how they're going to implement it. But it's interesting that they're outsourcing this whole huge part of the business to a company that is not in-house, given they just spent so much money on in-house talent. They have, I would imagine, a tremendous amount of training data to create a really high-quality image model, because, I mean, Facebook is literally meant for sharing images. They own Instagram; Meta owns all of the images in the world, and they could use that for training data.
Starting point is 00:15:16 So it's surprising that they're outsourcing, but it's not surprising that they're doing it in this sense that they're trying to optimize for velocity. I think what we're seeing with a lot of companies is like they very clearly see that this is a race to get to whatever point they deem is AGI, or basically whatever point where you could really unlock a tremendous amount of revenue for the company to start paying back all of the debts that you've accumulated in the race to get there. And this very much seems like they could do it. It probably would have taken too long.
Starting point is 00:15:43 Let's just partner with the next best and move forward. And I think that's probably what we're seeing here. It's just like, hey, we don't want to go through the trouble of making this ourselves. Mid Journey is pretty great. We have a ton of users. Both sides win. This is great. You mentioned Google as another one looking to possibly partner with Apple.
Starting point is 00:15:59 I think it makes sense. Like if you are incapable of getting to a specific point quickly, just work with someone else. Because at the end of the day, like, you're all going for the same goal. It will unlock net more resources and income, if it does work. And that's probably the strategy we're going to continue to see. Yeah, here's the headline.
Starting point is 00:16:15 Apple in talks to use Google's Gemini AI to power revamped Siri. Are you a fan of this? Yeah, so, okay, well, there are two separate questions here. So I agree with you to the extent that you kind of reach a point as an AI model creator where you should partner with people to kind of get you to the end goal much quicker.
Starting point is 00:16:34 I agree with that. Where I disagree is with Apple doing that because they don't have a foundational model. And I know this is kind of like sounding like a broken record at this point, but I do think it's fundamental to own the bottom layer of the stack. And I'm not talking about chips and GPUs,
Starting point is 00:16:49 but the main thing that is powering all of your applications in the future. You can't rent other people's land, because eventually you'll end up paying them a premium going forward. I might be proven wrong, because critics like to argue against me and say that, well, Apple has the moat
Starting point is 00:17:06 and the user distribution and the hardware distribution, but I think they're running too large of a risk, because I think companies like Google and Meta, which we're going to talk about in a second, are coming up with their own hardware. So I think they're kind of attacking it from both ends, and it remains to be seen who wins.
Starting point is 00:17:21 Yeah, it worked in the past for Apple. Like 2005, I believe it was, is when they joined officially with Google to be the search engine, the default search engine for iOS and iPhones. And Google has paid hundreds of billions of dollars over the last 20 years for that exclusive right. I would imagine this new business model
Starting point is 00:17:38 in the case that they actually do it, will be similar where Google will pay Apple a tremendous amount of money to be the exclusive large language model of the device. And to me, I mean, as someone who uses Apple products and runs Google software, this gets me excited. Like the best apps on my iPhone are I use Google Drive, I use Gmail, I use Google Calendar, I use Chrome as my browser, but I love the operating system of the iPhone. So I think the combination of the two where like clearly Apple can't figure out AI and there's no clear path for them to actually get there. And every time they announce something, they're scaling back the expectations. If they could plug that in, yeah, it probably sucks for them
Starting point is 00:18:15 because they're losing out on a lot of the data and owning that foundation. But I think the core ethos of Apple is privacy and keeping all the data on device anyway. So if they could figure out a way to do that with Google while maintaining the privacy on device, then that seems like the best option. Because otherwise we're going to be stuck with, I mean, personally, Siri on my phone, I don't even have it turned on, it's so bad. Same. So just to get me to turn it back on. So that's two people.
Starting point is 00:18:39 We're two for two. So just to get me to turn it back on, I mean, that'd be great. And if they could partner with Google to do that, that's a win for the company. Because, I mean, half of the AI model situation, even if you have to distribute it to Google, is better than zero. It's better than us having Siri turned off by default and we never use anything, is my take at least. Yeah.
Starting point is 00:18:57 One thing I'm confident of is Apple may not be the first ones there, but they'll create the best user experience. And I agree with you there. We've spoken about this on a few different episodes, but I think you and I both agree that the browser is eventually going to die, and potentially even how software is presented to us in its current shape or form. You know, it's hard-coded. We get updates every now and then. I think in the future, AI is just going to generate whatever UI serves the particular prompt that you ask for, whatever functional goal that you're looking for. And that is just a very new
Starting point is 00:19:31 and unimaginable world. I don't think anyone has nailed it. And I think we're going to see the first couple of iterations over the next maybe three years or so. And then Apple's probably going to swoop in, presumably with their large war chest, and either acquire whoever's leading at the time or build it from scratch themselves and absolutely kill it. So, you know, there's still a huge bull case for Apple. It's just not anytime soon.
Starting point is 00:19:54 Bring it on. They've got the new iPhone coming out in like two weeks, the folding iPhone coming out next year, the 20th anniversary iPhone coming out the year afterwards. And then we have the Vision goggles that are hopefully going to be ready by like the third or fourth generation by that year. So Apple's got a good roadmap, but there's more to discuss today. All I saw on the agenda was just Meta AI companions. I have not used these.
Starting point is 00:20:14 I don't know what they are. So please explain to me what's going on here. Okay. Are you familiar with a product called Character AI, Josh? I'm not. Fill me in. Okay. So imagine ChatGPT, but it has
Starting point is 00:20:29 the personality of your favorite celebrity or favorite cartoon character from a movie that you watched. Basically, Character AI was a platform that you could kind of go on and talk to these different types of characters. And you might think, well, what's the point of that? Well, it's super engaging, because say you're a fan of Harry Potter and your favorite character was Dobby, the house elf. I kind of want to know what's going on in Dobby's life outside of the book and the storyline, right? And so you can end up having this conversation, and what sounds like a silly idea ended up with hundreds of millions of users using it
Starting point is 00:21:02 still every day and I'm pretty sure it's over 100,000 characters and growing at this point. But that's character AI. And so the folks at Meta saw this about a year and a half ago and they thought, huh, well, we have a couple billion users or so
Starting point is 00:21:18 and I bet that they would love to speak to a similar kind of product like this. Let's try and build this ourselves. And it took them about a year and a half. Actually, at the end of last month, July 30th, they announced that they are opening up this feature for any developer to access and build on. And Josh, build on it they did.
Starting point is 00:21:38 We are now looking at, I mean, we've got hundreds of thousands of AI companions is what they're calling them. They're basically chatbots. Actually, you can see an example of kind of like a screenshot over here where you've got this guy called The Analyst, and you can see it's AI by Alex. Anyways 18.
Starting point is 00:21:56 So that's presumably a user or a developer. And he's created this kind of analyst-type persona that you can kind of speak to and go back and forth with. But I don't want to get into any individual examples, except what I'm seeing getting super popularized, Josh. So I actually first noticed this, not from this TechCrunch article, but from scrolling my Instagram feed. I was scrolling, and you know how sometimes they have the section, Josh, where they suggest new friends to follow? They had that exact same reel for me, but it was AI companions. And I almost fell for it.
Starting point is 00:22:31 Because one, the profile pictures looked super realistic. And the thing that caught me off guard that made me realize that it was AI was the names of these things. I saw stepmom, 10 million plus messages. I saw Russian girlfriend 30 million plus messages. And I was like, wait, this isn't what I think it is. Surely not. So naturally, I got my girlfriend's consent and I said, I'm going to talk to this Russian girlfriend. Please don't break up with me.
Starting point is 00:23:01 And I tapped the Russian girlfriend and I said, you know, hey, who are you? Like, tell me a bit about yourself. And it says, well, I'm everything you've dreamed of. I am your Russian girlfriend. And we went back and forth. And it was this whole, you know how we've spoken about xAI's companions? Sounds like Greek. Yeah. And it sucks you in.
Starting point is 00:23:19 That was basically it, Josh, except it was 10x the amount of companions that I could speak to, and it was by far the most overwhelming type of AI companion being used by people
Starting point is 00:23:33 at this metric that is under each companion, by the way, which is like 10 million plus messages, 50 million plus messages, and is basically used as a metric to lure you in to talk to them,
Starting point is 00:23:42 and I thought that was just crazy. Why do you think they're doing it? What's the goal with adding these companions? Because with Grok, I can see the viral component, where they're just trying to grow users, so they create the actual 3D animated character that you can communicate with. But in terms of chatbots, what do you think their goal is? Why are they
Starting point is 00:24:00 rolling this out? You and I both know the reason, right? So at the end of the day, you want to get as much personal data as you can on an individual so that you can kind of create this all-consuming model that can target you in whatever the future of advertising looks like, right? So a model that says all the things that you want to hear, that shills you all the right kinds of products. Yeah, I'm just thinking about what advertising looks like in, whatever, five years from now. It's not going to be pop-up adverts on websites.
Starting point is 00:24:29 It's going to be subtle shilling in the responses that an AI model gives you. In order to do that effectively, you need to know all the information about a user and be able to feed that into a model. What better way to extract it from someone than kind of luring them into this false sense of knowing, of trust? And how do you do that?
Starting point is 00:24:47 Get them to fall in love with your AI companion, with your AI bot. We saw this with OpenAI's, I think it was the GPT-4.5 update. Remember when the sycophancy was like super high? So basically I agree with everything that you said. And it one-shotted a bunch of Gen Z people that basically fell in love with it and got their hearts broken when it updated to GPT-5, so much so that Sam had to roll it back. So that's what I think is happening.
Starting point is 00:25:10 Things are getting weird. They're getting bizarre. I guess this is another company falling for the lure of these
Starting point is 00:25:29 virtual AI chatbots. But this is not the only Meta news we have this week, right? Because there's another bullet that says Hypernova, and I have no idea what Hypernova means. So maybe you could explain to me what on earth Meta is doing with Hypernova. What is it? Okay, Josh, to me, you are one of my favorite AI hardware experts. So I know how excited you get about hardware. Oh, this is great. I love hardware. Right, particularly consumer hardware. And we haven't quite seen the emergence of this. You mentioned Apple Vision earlier, but it kind of didn't
Starting point is 00:25:47 take off. One of the earliest examples of this actually was Google Glass, which for folks who are listening, who have never heard of this, don't worry about it. It looks like something out of 2001 a Space Odyssey, and we never need to revisit that. But the point is we've been trying to kind of figure out what the future of hardware after the mobile phone looks like for decades now. And Meta's going to take a stab at this. Supposedly, it's a rumor, next month at their flagship Connect conference, which debuts a bunch of like new software updates, products. and hardware. Specifically, this new set of AI glasses called Hypernova, which is basically aimed to be a consumer-accessible AI hardware glass that you can kind of slip on like normal sunglasses,
Starting point is 00:26:30 and it has a display screen on it. Now, you might be asking, well, what are they going to display to me? Think of having your mobile phone interface kind of discreetly in the corner of your glasses. And when you get a text message or when you get a like or when you get a retweet, it kind of pings you and lets you know and you can kind of access it. So then naturally your next question might be, well, how am I going to access it? What am I going to do? Look at it or think it? Like, it can't read my mind.
Starting point is 00:26:55 Can it? Well, they're kind of launching this supposedly in conjunction with this new wristband. I don't know what the wristband is going to be called, but Josh, if you remember on a previous episode, we covered this. We covered this. We spoke about this new wristband that is motion sensor related where you can kind of like lift a finger, point at something or gesture in a certain way, and some kind of interface, be it on your cell phone or on your AI new AI glasses, Hypernova, will be able to kind of pick up
Starting point is 00:27:24 and know what you're trying to get at. So you can read that text message discreetly whilst you're talking to your girlfriend and not really listening to whatever she's saying. The biggest thing that kind of blew my mind about this, Josh, was the price tag. I thought this thing would be worth like an iPhone at least, but it's $800. Okay. I would spend $800 on. this, even if I think it might suck, I would gamble it and kind of test this out. What do you think? As the hardware expert on this show, gut take. Yeah, I listen, I don't think it's going to work, but I love it. Okay. I am obsessed with it. Like, I think this is great because, I mean, what it's doing is it's just applying pressure to the form factor that we've been stuck with for a
Starting point is 00:28:03 decade, which is this slab of glass that is multi-touch that is like singular in form. And what meta is doing is they're trying to break that. And I really, I admire that. They try to do that with MetaQuest, and we saw that with the VR headsets that are like, they're good, not great, but they're getting better. And now we're going to see that with glasses and these wrist-activated things. I think this is so cool because it's introducing people to a new, the next compute platform, which is going to be spatial reality, which is going to stray away from multi-touch, it's going to exist in your physical space and just be layered on top. And these glasses are clearly one of the form factors. The problem with all these devices always has been creating enough
Starting point is 00:28:41 momentum to make people want to stick with it. So I think, EGest, have you ever tried the MetaQuest before the headset? Yes. Okay. Did you, like, did you buy one or did you just try it at a friend's place? So I got sent one for my previous job because we were trying to figure out maybe we can create some new kinds of apps. And so we played around with it.
Starting point is 00:29:02 I used it four times. Yeah. Okay. That's what I was looking for. That's kind of like the case with every single person who's ever used to VR headsets. that, including Apple's Vision Pro, is it's a really cool experience when it's novel. And the second the novelty wears off, you kind of run out of things to do with it. Because we just haven't had enough time for developers to build interesting experiences,
Starting point is 00:29:23 for them to meet a critical mass in terms of user base where you get like the social elements that kind of add to the value of your experience. It's been bad for a long time. And the problem with meta is if they launch a pair of glasses, it's going to have to kind of exist in this weird, awkward silo that's disconnected from where I spend my time. If Apple were to create a pair of glasses, that's great. It's an extension of my iPhone. My whole life is on my iPhone.
Starting point is 00:29:45 I can now just put that on my face. But when Google or when meta does it, I don't really use meta products a whole lot. I mean, I use Instagram and that's it. I don't use Facebook. I don't use any of their hardware. I don't use any of their stuff. So unless they're able to integrate and meet me where I am with the people that I want to communicate with, it's a tough sell.
Starting point is 00:30:04 Because what can they really do? They're going to, okay, you'll have like augmented navigation as I'm walking down the street. You'll have basically the, I assume the vision, the smart AI vision where like when you have your phone currently with chat GPT or rock and you pointed at something, you could ask questions about what you're seeing, that'll be cool. But I have my phone in my pocket. That works great to do that. So they really need to create like the super interesting and differentiating value proposition that might be challenging for them to do in a month's time. But again, absolutely adore the decision to try it. Love the form. Think this is totally the future. Love that they're spending money on it. And of all the things that they're going to release. The wrist thing seems the coolest. I'm like really excited to see how they're going to be able to use your wrist as a new way to interact with these computers that we've never seen before. So I think it's probably the general vibe right now is cool, but like not super amazing. So let me ask you this. With them intending to release a pair of glasses, are you more convinced that the eventual form factor will be glasses? Or are you still kind of?
Starting point is 00:31:10 pro earbuds or something completely novel and different. Yeah, so the more time I spent thinking about it, the more it feels like a hybrid. Like we've been spoiled in the sense that we've only really needed one device that does everything. But I don't think that's a result of like the optimal form factor. I think it's just a constraint where if you do wind up with artificial intelligence that is like truly AGI, incredibly brilliant, you won't actually need a singular device. it could just kind of exist in an ambient form throughout your life. So it will exist in this suite of devices.
Starting point is 00:31:44 And that's kind of what we heard Johnny Yive and Sam Altman describe when they were pitching us the new hardware device, is it's not going to be one thing. They're releasing a suite. They're going to start with one product. But eventually, like Meda is doing, you'll have glasses, you'll have something on your wrist. You'll have something in your ears. You'll have a display that's on your wall that you'll have like something on your kitchen counter. And this is another Apple product that was rumored to be coming out next year is that it'll be this little screen that. sits on your countertop and it'll pivot and kind of follow you around and be this little
Starting point is 00:32:11 companion device. It's probably this ambient intelligence that just kind of exists everywhere and it manifests itself through the suite of devices without needing to be fixed to a singular device like the iPhone. And that's my new guess. I think that's where I currently stand. I'm like, okay, it's going to be a couple of things. And all these will work in addition to the iPhone, but like eventually you will need the phone less and less because like you mentioned, AI will just be able to generate whatever you want up front without needing to actually engage with the device nearly as much. I love it. I love it. Well, moving on, and the final point around meta this week is things aren't always rosy. So I started off this segment saying that they'd spent upwards of $3.5 billion on 25 people. In total,
Starting point is 00:32:55 I think it was $22 billion for 50 people because they like made a major investment in scale AI. So it's a lot of money, a lot of chips on the table. Now, what if I told you that one of those people, I got offered upwards of $150 million. Quit after two weeks. I got a lot of thoughts. First of them, honestly, the first thought is like, what happens to that dude's payout? How much is he, is there like a, did he get like a signing bonus or is that gone? Like, what were the implications of the $150 million?
Starting point is 00:33:27 I mean, I would need to talk to someone from recruitment, but I think I saw another post from him that wasn't exactly this. This kind of like covers the general vibe about why he. left, but this dude called Rishab, Agarwal, and he basically said, you know, this is my last week at AI at META, and I've kind of been here for a very short stint. He said in my short time at meta, we did push the frontier on post-training for thinking models, and he goes on to list a bunch of different things, which actually sounds super cool. But I think what's happening, Josh, is that these people are kind of leaving a little uninspired. And I don't think Rishab is the only example. This is just the example that went viral. There's been a few that have kind of
Starting point is 00:34:04 been like tailing off. I think Zuck load them in with a massive paycheck, the promise of autonomy and the ability to kind of like build what they believe is going to be the future of superintelligence and AGI. They joined and realized that Zuck wants to launch a bunch
Starting point is 00:34:20 of AI companions called Russian girlfriend and stepmom and one shot a bunch of people into using his consumer products. That's me being super critical and speculative, but I think it's super interesting to see them leave just after two weeks of joining. For like hundreds of millions of dollars. I would stay. I could hack it. I could hack it for six months. Maybe that's a
Starting point is 00:34:40 testament to my character. I don't know, but down. I'd love to be a fly on the wall for these conversations as they're going through this. I think one of the more fascinating parts of that, that post that he had was that middle paragraph where he said the pitch from Mark and Alexander Wang to build in the superintelligence team was incredibly compelling, but I ultimately chose to follow Mark's own advice. In a world that's changing so fast, the biggest risk you can take is not taking any risk. And it's a testament to the type of people, the quality of people that they're hiring at this company. I mean, these are the top of the top. These are some of the smartest people in the world when it comes to AI and perhaps even just generally speaking. I mean, they're
Starting point is 00:35:17 brilliant. And I would imagine so much of the purpose when they wake up in the morning revolves around applying that intelligence to something they care about. Because when you're at that level, I mean, to be honest, money doesn't really make a difference to you because you can basically wield whatever salary you want, whatever compensation you want. Yeah. Like, you are in control because you are so rare. And I'm sure this guy probably has plenty of money in the bank. If not, he is able to generate it very quickly because any company would hire him to do whatever he wants to do for them. And, I mean, when you get to a point, I imagine the money doesn't really move the needle. And in this guy's case, he wants to work on something that he feels inspired to do. And like, hey, all the power to
Starting point is 00:35:55 him. That's admirable. Clearly, the paycheck isn't enough to, like, pull everybody. This was a case where it wasn't, and now he's going to hopefully go on and build some cool things. But yeah, man, I would, I'd love to know what's going on at meta. How are they treating these people? What are they talking about? What are they working on? Like, what does it look like when you assemble essentially the intellectual Avengers of the world and put them under one roof? And let them be run by this dude, Alexander Wang, who's like in his midteries, this young, guy who's literally. And who is now the sole leader of this unit, Josh? Remember when they first founded it? He was a technical co-founder.
Starting point is 00:36:30 with the former founder of GitHub, right? And now that guy is like stepped down and it's just this guy. So I think we'll probably find out in the next couple of months. Zuck teased in his quarterly letter that they had discovered kind of like self-improving AI. So those are big fighting words. And I hope that he kind of like delivers on that beyond just AI companions and a pair of AI glasses. But yeah, I'm excited to see what happens.
Starting point is 00:36:56 Yeah, I am too. So that is the news of the week up to Tuesday. at Tuesday, there's still a lot left to go for this week. But currently, the big things you need to know are, hey, Gemini is kind of kicking ass. Google is doing really well. Their new image general rating model is not only amazing, but it is readily available to use for basically free. So if you have any photos you want to generate, any photos you want to edit, or there was an interesting example, actually, that I was looking at earlier today when I was recording a video, is that you can take old photos of loved ones if you have photos that are black and white or they kind of look very like vintage, and it will restore them very well. So it'll not only recolorize them, but re-apply detail to the image to make it look rich and make it look very realistic. So Google's crushing on every fun. We are excited for Gemini 3. When it comes, we will be covering it first thing, I promise you. And the other thing is the hardware thing. We're met us building hardware. This is cool. And I do want to let people know, next month is TechTember. This is the best month of the year. Every September, basically every
Starting point is 00:37:54 company in the world drops their hardware. Google kind of cheated and they dropped their hardware last week. But the way it works is like, now Apple's going to release, meta's going to release, all the hardware companies, they all release in TechTember. It's going to be a whirlwind because I'm sure at the forefront of all of these hardware releases will be AI. We will be here to cover it. Thank you again for joining us for the journey. And we will be back again later this week for another episode. And see you guys soon.