Limitless Podcast - OpenAI Just Gave Away Their Secret Formula... For Free?

Episode Date: August 6, 2025

In this episode, we discuss OpenAI's shift to open source with the release of a 120 billion and a 20 billion parameter model for local use. Ejaaz and Josh highlight the democratization of AI access, enhanced privacy, and customization opportunities. We analyze the competitive landscape against major Chinese models and hint at the anticipated GPT-5 release. Tune in for insights into this transformative moment in AI!

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 OpenAI's Surprising Release
1:30 The Power of Open Source Models
3:28 Local Computing Revolution
5:33 Privacy and Personalization
6:54 The Impact on Industries
9:32 Testing the New Models
17:43 Competing with Chinese Models
24:06 The Future of AI Technology
26:29 Anticipating GPT-5

------

RESOURCES
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 The unthinkable has just happened. OpenAI has released an open source model. OpenAI has been closed AI for as long as I've known them. They named themselves OpenAI, but they were not open source. They have finally released an open source model. And surprise, surprise, it's actually really great. And I think the downstream implications of an open source model from a company like this that
Starting point is 00:00:24 is this good are really, it's a really big deal. I think this really matters a lot. Just yesterday they announced the release of GPT-OSS. There are two models. There is a 120 billion parameter model, and there is a 20 billion parameter model. We're going to get into benchmarks. We're going to get into how good they are. But the idea is that OpenAI has actually released an open source model.
Starting point is 00:00:44 And this can compare to the Chinese models, because we've recently had DeepSeek and we've had Kimi, and those were very good. But this is the first really solid American-based open source model. So, Ejaaz, I know you've been kind of digging in the weeds about how this works. Can you explain to us exactly why this is a big deal, why this happened, what's going on here? Yeah, it's pretty huge. So here are the hot highlights. As you mentioned, there's two models that came out.
Starting point is 00:01:07 The 20 billion parameter model, which is actually small enough to run on your mobile phone right now. And they have a 120 billion parameter model, which is big but still small enough to run on a high performance laptop. So if you guys have a MacBook out there, jump in, go for it. It's fully customizable. So remember, open source means that you can literally have access to the design of the entire model. It's like OpenAI giving away their secret.
Starting point is 00:01:32 recipe to how their frontier models work, and you can kind of like recreate it at home. This means that you can customize it to suit any kind of use case that you want, give it access to all your personal hard drives, tools, data, and it can do wonderful stuff. But Josh, here's the amazing part. On paper, these models are as good as the o4-mini models, which is pretty impressive, right? But in practice, and I've been playing around with it for the last few hours, they're as good, in my opinion, and actually quicker than o3, which is their frontier model.
Starting point is 00:02:06 And I mean this across, like, everything. So reasoning: it spits out answers super quickly, and I can see its reasoning. It happens in, like, a couple of seconds. And I'm so used to waiting, like, 30 seconds to a couple minutes on o3, Josh. So it's pretty impressive and an insane unlock. On coding, it's as good, and on creativity as well.
Starting point is 00:02:28 So my mind's pretty blown at all of this. Right. Josh, what do you think? Yeah, so here's why it's impressive to me: a lot of the time, I don't really care to use the outer bands of what a model is capable of. Like, I am not doing deep PhD-level research. I'm not solving these Math Olympiad questions. I'm just trying to ask it a few normal questions and get some answers. And these models do an excellent job at serving that need. They're not going to go out and solve the world's hardest problems, but neither do I.
Starting point is 00:02:53 I don't want to solve those problems. I just kind of want the information that I want, whether it be just a normal Google-type search or whether it be asking it some miscellaneous question about some work that I'm doing. It's really good at answering that. So I think initial impressions, because they did allow you to test it publicly through their website: it's just really good at the things that I want. So the fact that I can run one of these models on a local device, on my iPhone, well, it feels like we're reaching this place where AI is starting to become really interesting, because for so long we've had compute handled fully on the cloud. And now this is the first time where compute can really happen on your computer. It could happen on your laptop,
Starting point is 00:03:28 I can download the model, and I could actually store the model, the 120 billion parameter model, on a 56 gigabyte USB drive. So you can take the collective knowledge of the world and put it on a tiny little USB drive, and granted, it needs a bit of a bigger machine to actually run those parameters, but you can install all the weights. It's 56 gigabytes. It's this incredibly powerful package, and it probably, I don't know if this is true, but it's probably the most condensed knowledge base in the history of humanity.
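For what it's worth, that size roughly checks out as a back-of-the-envelope calculation: 120 billion parameters squeezed into about 56 gigabytes works out to around 4 bits per parameter, which is consistent with an aggressively quantized release (the exact quantization format is an assumption here, not something stated on the show):

```python
# Rough check: how many bits per parameter fit in a ~56 GB release
# of a 120-billion-parameter model?
params = 120e9                      # 120 billion parameters
size_bytes = 56 * 1024**3           # 56 GiB on the USB drive
bits_per_param = size_bytes / params * 8
print(f"{bits_per_param:.1f} bits per parameter")  # about 4 bits
```

Compare that with 16-bit weights, which would need roughly 240 GB for the same model.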
Starting point is 00:03:56 They've really managed to take a tremendous amount of tokens, smush them into this little parameter set, and then publish it for people to use. So for me, I'm really excited. I like having my own mini portable models. I am excited to download it, try it out, run it on my MacBook. I'm not sure I could run the 120 billion parameter model, but at least the 20B, and give it a shot and see how it works. You need to get the latest MacBook, Josh.
Starting point is 00:04:17 I know, I do. We can test that out. What I also love about it is it's fully private, right? So you can give it access to your personal hard drive, your Apple Notes, whatever you store on your computer, basically, and you can basically instruct the model to use those different tools. So one review that I keep seeing from a number of people
Starting point is 00:04:39 who have been testing it so far is that it's incredibly great and intuitive at tool use. And the reason why this is such a big deal is a lot of the frontier models right now when they allow you to give access to different tools, they're kind of clunky. The model doesn't actually know when to use a specific tool and when not to,
Starting point is 00:04:56 but these models are super intuitive about it, which is great. The privacy thing is also a big deal, because you kind of don't want to be giving all your personal information away to Sam Altman, but you want a highly personalized model. And I think if I was to condense this entire model release into a single sentence, Josh, I would say it is the epitome of privacy and personalization in an AI model so far. It is that good. It is swift. It is cheap. And I'm going to replace it completely with all my GPT-4o queries. As you said earlier, like, who needs to use the basic models anymore when you have access to this? Yeah.
Starting point is 00:05:34 So it's funny you say that you're going to swap it, because I don't think I'm going to swap it. I still am not sure I personally have a use case right now, because I love the context. I want the memory. I like having it all server-side, where it kind of knows everything about me. I guess in the case that I wanted to really make it a more intimate model experience, where you want to sync it up with, like, journal entries or your camera roll or whatever interesting personal things, this would be a really cool use case. I think for the people who are curious why this matters to them, well, we could talk a little briefly about, like,
Starting point is 00:06:06 the second-order effects of having open source models. It's powerful, because what that allows you to do is serve queries from a local machine. So if you are using an app, or let's say you're an app developer and you're building an application, and your app is serving millions of requests because it's a GPT wrapper. Well, what you could do now is, instead of paying for API calls to the OpenAI server, you can actually just run your own local server, use this model, and then serve all of that data for the cost of the electricity. And that's a really big unlock for the amount of compute that's going
Starting point is 00:06:34 to be available for not only developers, but for the costs of the users in a lot of these applications. So for the applications that aren't doing this crazy moon math and that are just kind of serving basic queries all day long, this, like, really significantly drops the cost. It increases the privacy, like you mentioned. There's a ton of really important upsides to open source models that we just haven't seen up until now. And I'm very excited to see what comes of it. Well, Josh, the thing with most of these open source models, and we spoke about two major Chinese open source models that released this week, is that they're not accessible to everyone.
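The economics sketched here can be put into rough numbers. Everything below is a made-up illustration (the per-token API price, the machine's wattage, and the electricity rate are all assumptions, not quotes from any real price sheet), but it shows the shape of the saving for a high-volume, low-complexity app:

```python
# Hypothetical monthly cost of a GPT-wrapper app: metered cloud API
# versus one self-hosted box running an open-weights model.

def api_cost(requests, tokens_per_request, usd_per_million_tokens):
    """Metered API: you pay per token served."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * usd_per_million_tokens

def local_cost(hours, watts, usd_per_kwh):
    """Self-hosted: the marginal cost is roughly just electricity."""
    return watts / 1000 * hours * usd_per_kwh

# Illustrative numbers: 1M requests/month at ~500 tokens each,
# vs. a 350 W machine running around the clock at $0.15/kWh.
cloud = api_cost(1_000_000, 500, usd_per_million_tokens=0.60)
home = local_cost(24 * 30, watts=350, usd_per_kwh=0.15)
print(f"cloud: ${cloud:.0f}/mo, local: ${home:.0f}/mo")
```

The hardware itself is a fixed up-front cost, of course, but per query the marginal cost drops to near zero, which is the "penny to zero" point made below.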
Starting point is 00:07:07 Like, you and me aren't necessarily going to go to Hugging Face, a completely separate website, download these models, run the command line interface. Most of the listeners on the show don't even know what that means. I don't even know if I know what that means, right? But here you have a lovely website they created, where you can just kind of log on and play around with these open source models. And that's exactly what I've been doing. I actually have a few kind of demo queries that I ran yesterday. Yesterday, Josh. Let's see. Okay, so there's an incredibly complex test, which a lot of these AI models, which cost hundreds of millions of dollars to train,
Starting point is 00:07:44 can't quite answer. And that is: how many R's, the letter R, are there in the word strawberry? Most say two. The bar's on the floor, huh? Yeah. If we were to go with most models, they're convinced that there are only two. And I ran that test today, rather yesterday, with these open source models, and it correctly guessed three, Josh. So we're one for one right now. We're on our way. But then I was like, okay, we live in New York City.
Starting point is 00:08:11 I love this place. I'm feeling a little poetic today. Can you write me a sonnet? And my goal with this wasn't to test whether it could just write a poem. It was to test how quickly it could figure it out. And as you could see, it thought for a couple of seconds on this. So it literally spat this out in two seconds. And it was structured really well.
Starting point is 00:08:29 It kind of flowed. Would I be reciting this out loud to the public? No, but I was pretty impressed. And then, Josh, I was thinking, you know, what's so unique about open source models? You just went through a really good list of why open source models work. But I was curious as to why these specific open source models were better than other open source models or maybe even other centralized models.
Starting point is 00:08:51 So I wrote a query. I decided to ask it. I was like, you know, tell me something that you can do that other, larger, centralized models can't. And it spat out a really good list. I'm not going to go through all of them, but some of the things it listed: you can fine-tune it, there's the privacy.
Starting point is 00:09:04 I really like this point that it made, Josh, that it just shows that AI is probably getting smarter than us, which is you can custom inject your own data into these models. Now, without kind of digging deeper into this, when you use a centralized model, it's already pre-trained on a bunch of data that companies like Anthropic and Google have already fed it. And so it's kind of formed its own personality, right? So you can't change the
Starting point is 00:09:30 model's personality on a centralized model. But with an open model, you have full rein to do whatever you want. And so if you were feeling kind of adventurous, you could use your own data and make it super personal and customizable. So I thought that was a really cool and fun demo. Josh, have you been playing around with this? Yeah. It's smart. It's fun. It's smart. I wouldn't say it's anything novel. The query results that I get are, you know, on par with everything else. I don't notice the difference, which is good, because it means they're performing very well. It's not like I feel like I'm getting degraded performance because I'm using a smaller model. But it's just, like, it's nothing too different, I would say. The differences, I mean, again,
Starting point is 00:10:07 all this boils down to the differences of it being open source versus being run on the server. Let me challenge you on that, right? Okay. So you're saying it's good, but nothing novel. Would you say it's as good as GPT-4o, minus the memory? Let's just put memory aside for a second. Would you use it if it had memory capability? Actually, no, probably not. I still wouldn't, because I love my desktop application too much. I love my mobile app too much. And I like that the conversations are shared in the cloud, so I can use them on my phone. I could start on my laptop and go back and forth. So even in that case, I'm probably still not a user, because of the convenience factor. But there are a lot of people and a lot of industries that would be. And this is actually something
Starting point is 00:10:49 probably worth surfacing: the new industries that are now able to benefit from this, because a lot of industries have a tough time using these AI models because of data privacy concerns. Particularly, I mean, if you think about the healthcare industry, people who are dealing with patients' data, it's very challenging for them to fork it over to OpenAI and just trust that they're going to keep it safe. So what this does is it actually allows companies that are in, like, the healthcare industry, the finance industry, who are dealing with very high-touch personal finance, the legal industry, who are dealing with a lot of legality, government and defense. A lot of these industries that were not previously able to use these popular AI models, well, now they have
Starting point is 00:11:24 a pretty good model that they can run locally on their machines, and that doesn't have any possibility of actually leaking out their customer data, leaking out financials or healthcare data or, like, any sort of legal documents. And that feels like a super powerful unlock. So for them, it feels like a no-brainer. Obviously, get the 120B model running on a local machine inside of your office, and you can load it up with all this context. And that seems to be who this would be most impacting, right? But still, to that point, I wonder how many of these companies can be bothered to do that themselves and run their own internal kind of, like, infrastructure. I'm thinking about OpenAI, who cracked, I think, $10 billion in annual recurring revenue
Starting point is 00:12:07 this week, which is, like, a major milestone. And a good chunk of that, I think 33% of that, is from enterprise customers. And to your point, these enterprise customers don't want to be giving OpenAI their entire data. You know, it can be used to train other AI models. So their fix or solution right now is they use kind of, like, private cloud instances that I think are supplied by Microsoft, by their Azure cloud service or something like that. And I wonder if they chose that because there weren't any open source models available, or because they kind of just want to offload that to Microsoft to deal with. My gut tells me they're going to want to go with the latter, which is, like, you know, just give it to some kind of cloud provider to deal with, and they just trust Microsoft because it's a big brand name. But yeah, I don't
Starting point is 00:12:53 really know how that'll materialize. I still think, and maybe this is because of my experience in crypto, Josh, that the open source models are still for, like, people that are at the fringe, that are really experimenting with these things, but maybe don't have billions of dollars. Yeah, that could be right. It'll be interesting to see how it plays out at all scales of business, because, I mean, I think of a lot of indie devs that I follow on Twitter, and I see them all the time, just running local servers. And if they had this local model, if they could run it on their machine and it takes the cost per query down from, like, a penny
Starting point is 00:13:24 to zero, that's, like, a big zero-to-one change. So what makes this model special? There are also a number of breakthroughs that occurred in order to make this possible, in order to condense this knowledge to be so tight. So here's this tweet from the professor talking about the cool tech tweaks in this new model and what OpenAI was able to achieve. Some of these, I believe, are novel. Some of these have been seen before.
Starting point is 00:13:45 If you look at point two, mixture of experts: we're familiar with mixture of experts. We've seen other companies use that, like Kimi and DeepSeek. Basically, instead of one brain doing everything, the AI has this team of experts that are kind of like mini-brains specialized in different tasks. It picks the right expert for the job, and that makes it faster. So, like, instead of having the entire 120 billion parameter model search for one question, maybe you just take a couple billion of those parameters that are really good at solving math problems, and it uses those.
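The routing idea described here can be sketched in a few lines. This is a toy illustration, not OpenAI's actual implementation: a small gating network scores every expert, only the top-k experts actually run, and their outputs are mixed by the gate weights, so most parameters sit idle for any given token (all dimensions and counts below are made up):

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Toy mixture-of-experts layer: run only the top-k experts on x."""
    scores = x @ router_w                    # gating network: one score per expert
    top_k = np.argsort(scores)[-k:]          # indices of the k best-scoring experts
    gates = np.exp(scores[top_k] - scores[top_k].max())
    gates /= gates.sum()                     # softmax over just the chosen experts
    out = np.zeros_like(x)
    for gate, idx in zip(gates, top_k):
        W, b = experts[idx]                  # only these k experts do any compute
        out += gate * (x @ W + b)
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [(rng.normal(size=(d, d)), rng.normal(size=d)) for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts))
y = moe_forward(rng.normal(size=d), experts, router_w, k=2)
print(y.shape)  # (8,)
```

This is why a mixture-of-experts model can have a huge total parameter count while only a small fraction of those parameters are "active" for each token.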
Starting point is 00:14:13 And that's what brings compute down. The first point is this thing called sliding window attention. So imagine an AI is like someone reading a really long book, but it can only focus on a few pages at a time. This trick kind of lets it slide its focus window along the text. So when you think of a context window, generally it's fixed, right, where you can see a fixed set of data. This sliding window attention allows you to kind of move that context back and forth a little bit. So it takes what would have normally been a narrow context window and extends it out a little bit to the side. So you get a little bit more context, which is great for
Starting point is 00:14:44 a smaller model. Again, you really want to consider that all of these are optimized for this microscopic scale that can literally run on your phone. And then the third point is this thing called RoPE with YaRN, which sounds like a cat toy, but this is how the AI keeps track of the order of words, so, like, the position of the words in a sentence. So RoPE, you could imagine, is like the twisty math way to do it, and YaRN makes it stretch further for really long stuff. So we have the context window that is sliding. We have this RoPE with YaRN that allows you to kind of stretch the words a little bit further. And then we have attention sinks, which is the last one. When the AI is dealing with these endless chats, it kind of
Starting point is 00:15:23 sinks in or ignores the boring or old info, so it can pay attention to the new stuff. So basically, what it is is: if you're having a long chat with it and it determines, hey, this stuff is kind of boring, I don't need to remember it, it'll actually just throw it away. And that'll increase that context window a little bit. So again, hyper-optimizing for the small context window that it has. And those are kind of the key four breakthroughs that made this special. Again, I'm not sure any of them are particularly novel, but when combined together, that's what allows you to get these o4-mini results, or even o3 results on the larger model, on something that can run locally on your laptop.
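Two of those ideas, the sliding window and the attention sinks, can both be pictured as a mask over which earlier tokens each position is allowed to look at. Here's a toy sketch (the window size and sink count are arbitrary choices for illustration, not the model's real settings):

```python
import numpy as np

def attention_mask(seq_len, window=4, n_sinks=2):
    """Causal attention mask with a sliding window plus 'sink' tokens.

    Each position may attend to at most `window` recent tokens, but the
    first `n_sinks` tokens stay visible forever (the attention sinks),
    and no position can ever attend to the future.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, max(0, i - window + 1):i + 1] = True  # recent window
        mask[i, :n_sinks] = True                      # sinks always visible
        mask[i, i + 1:] = False                       # re-hide the future
    return mask

m = attention_mask(8, window=3, n_sinks=1)
print(m.astype(int))
```

Tokens that fall out of the window simply drop out of attention, which is roughly the "throw away the boring old stuff" behavior described above, while the sink tokens keep the attention distribution stable.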
Starting point is 00:15:57 So it's a pretty interesting set of breakthroughs. I think a lot of times we talk about OpenAI because of their feature breakthroughs, not really their technical breakthroughs. I think a lot of times the technical breakthroughs are reserved for, like, the Kimi models or the DeepSeek models, where they really kind of break open the barrier of what's possible. But I don't want to discredit OpenAI, because these are pretty interesting things that they've managed to combine together
Starting point is 00:16:17 Yeah, I mean, they actually have a history of front-running open source frontier breakthroughs. If you remember when DeepSeek got deployed, Josh, one of their primary training methods was reinforcement learning, which was pioneered by an OpenAI researcher, who probably now
Starting point is 00:16:39 works somewhere else, but no matter. Yeah. And I was looking at the feature that you mentioned just now, not the feature, but the breakthrough, sliding window attention. And you mentioned that it can basically toggle reasoning. And I was pleasantly surprised to just notice that on the actual interface of the models here, Josh, can you see over here? You can toggle between reasoning levels of high, medium, and low.
Starting point is 00:17:01 So depending on what your prompt or query is, if it is kind of like a low-level query, where you're like, hey, just record this shopping or grocery list, that's probably, like, a medium or a low query. So it was pretty cool to see that surfaced to the user, like, see it actively being used. Yeah, no, it's super cool. I think I like the fine-tuning of it. And again, allowing you to kind of choose your intelligence levels,
Starting point is 00:17:22 because I imagine a lot of average people just don't, a lot of average queries just don't need that much compute. So if you can toggle it to the low reasoning level and get your answers, that's amazing, super fast, super cheap. Did you see that trending tweet earlier this week, Josh, which basically said that the majority of ChatGPT users have never used a different model than GPT-4o? I haven't seen it, but that makes sense.
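That high/medium/low toggle is reportedly surfaced to the model through its system prompt, via a line like `Reasoning: high`. The exact prompt wording below is an assumption for illustration, so check the model's own documentation before relying on it:

```python
# Sketch: surfacing a reasoning-effort toggle for a locally hosted model.
# The prompt format here is an assumption, not an official spec.
def build_system_prompt(effort="medium"):
    levels = ("low", "medium", "high")
    if effort not in levels:
        raise ValueError(f"effort must be one of {levels}")
    return f"You are a helpful assistant.\nReasoning: {effort}"

print(build_system_prompt("low"))
```

A grocery-list query could go out with `effort="low"` and get a near-instant answer, while a hard coding question could use `effort="high"` and spend more tokens thinking.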
Starting point is 00:17:44 Yeah, I feel like the bulk of people. I was chatting to my sister yesterday, and she was kind of, like, using it for some research project at work, and the screenshots she sent me over were 4o, and I was like, hey, you know, you could just run this on, like, a model that's, like, five times better than this, right? It'll come up with a much more creative set of ideas. So it just made me think that, like,
Starting point is 00:18:04 I don't know how many people, like, care that there are these brand-new novel models, you know. This kind of, like, basic model is good enough for everyone. I don't know. But moving on, Josh, there was a big question that popped into my head as soon as these models released, which was: are they as good as the Chinese open source models, right? I wanted to get some opinions from people. And the reason why this matters, just to give the listeners some context, is China has been the number
Starting point is 00:18:32 one nation to put out the best open source models over the last 12 months. It started with DeepSeek, and then Alibaba's Qwen models got involved. And then recently we had Kimi K2, and I think there was another AI lab out of China which came out. So they have, outside of America, the highest density of the top AI researchers. They all come out of this one university, Tsinghua, I believe. They kind of partially work or train in the US as well. So they've got this kind of hybrid AI mentality of how to build these models. And they come up with a lot of these frontier breakthroughs.
Starting point is 00:19:04 Kimi K2, for context, had one trillion parameters in their model, right? Comparing this to, like, the 120 billion and 20 billion parameter models from OpenAI, I was curious, like, does this beat them to the punch? Some people, Josh, don't think so. Okay. This guy, Jason Lee, he asks: is GPT-OSS stronger than Qwen or Kimi or Chinese open models? And then he later kind of quote-tweets that tweet and says: answer, the model is complete junk.
Starting point is 00:19:35 It's a hallucination machine, overfit to reasoning benchmarks, and has absolutely zero recallability. So a few things he's mentioning here are: one, it hallucinates a lot, so it kind of, like, makes up jargon terms, ideas, or parameters that didn't really exist before. Number two, he's saying that OpenAI designed this model
Starting point is 00:19:54 purely so that it will do well on the exams, which are the benchmarks that rate how these models compare to each other. So they're saying that OpenAI optimized the model to do really well at those tests, but actually fail at everything else, which is what people want to use it for. And the final point that he makes is that it has zero recallability, which is something you mentioned earlier, Josh, which says it doesn't have memory or context. So you can have a conversation and then open up another conversation, and it's completely forgotten the context that it has for you from that initial conversation. Okay. So not the best. Not to be unfair to OpenAI, but it feels like they delayed this model a good bit of time. Oh, yeah.
Starting point is 00:20:32 And they wanted it to look good. And it intuitively makes sense to me that they would be kind of optimizing for benchmarks with this one. But nonetheless, it's still impressive. I'm seeing this big wall of text now. What is this? What is this post here? Well, it's this post from one of these accounts I follow.
Starting point is 00:20:47 And they have an interesting section here, which says: comparison to other open-weights models. Oh, sick. Yeah. What is this? So he goes: while the larger GPT-OSS 120 billion parameter model does not come in above DeepSeek R1,
Starting point is 00:21:04 so he's saying that DeepSeek R1 just beats it out of the park, it is notable that it is significantly smaller in both total and active parameters than both of those models. DeepSeek R1 has 671 billion total parameters and 37 billion active parameters and is released natively, right? Which makes it roughly five times larger than GPT's 120 billion parameter model. But what he's saying is,
Starting point is 00:21:24 even though GPT's model is smaller and doesn't perform as well as DeepSeek, it's still mightily impressive for its size. Okay, that's cool, because that gets back to the point we made earlier in the show that this is probably
Starting point is 00:21:37 the most densely condensed, however you want to say it, like, base of knowledge in the world. They've used a lot of efficiency gains to squeeze the most out of it. So in this small model, it is, I guess if we're optimizing,
Starting point is 00:21:52 maybe we can make up a metric here on the show, which is, like, output per parameter or something like that. Like, based on the total parameter count of this model, it gives you the best value per token. And that seems to be where this falls in line, where it's not going to blow any other open source model out of the water,
Starting point is 00:22:09 but in terms of its size, the fact that we can take a phone and literally run one of these models on a phone, and you could go anywhere in the world with no service and have access to these models running on a laptop or whatever mobile device, that's super powerful, and that's not something that is easy to do with the other open source models. So perhaps that's the advantage that OpenAI has: just the density of intelligence
Starting point is 00:22:29 and the efficiency of these parameters that they've given to us versus just being this home run open source model that is going for the frontier. It's just a little bit of a different approach. Yeah, we need like a small but mighty ranking on this show, Josh, that we can kind of like run every week
Starting point is 00:22:45 when these companies release a new model. No, but it got me thinking, if we zoomed out of that question, right, because we're talking about small models versus large models, parameters and how effectively they're used versus other models that are bigger: what really matters in this, Josh?
Starting point is 00:23:14 useful for me, right? It could be small, it could be personal, it could be private. It depends on, I guess, the use case at the time. And I have a feeling that the trend of how technology typically goes, you kind of want a really high performance small model eventually, right? I try and think about us using computers for the first time, you know, back in our dinosaur age, and then, you know, it all being condensed on a tiny metal slab that we now use every day and we can pretty much work from remotely from wherever. And I feel like this is where models are going to go. They're going to become more private. They're going to become more personal. Maybe it'll be a combination of, you know, it running locally on your device versus cloud inference and trusting certain providers. I don't know how
Starting point is 00:23:58 it's going to fall out, but I think it's not a zero-to-one. It's not a black-or-white situation. I don't think everyone's just going to go with large centralized models that they can inference from the cloud. I think it'll be a mixture of both. And how that materializes, I don't know, but it's an interesting one to ponder. Yeah, I think this is funny. This is going to sound very ironic, but Apple was the one that got this most right. Sorry, who's Apple again? Yeah, right. I mean, it sounds ridiculous to say this. And granted, they did not execute on this at all. But in theory, I think they nailed the approach initially, which was: you run local compute
Starting point is 00:24:31 where all of your stuff is. So my iPhone is the device I never leave without. It is everything about me. It is all of my messages, my contacts, all of the context you could ever want for me. And then the idea was they would give you a local model that is integrated and embedded into that operating system. And then, if there's anything that requires more compute, well, then they'll send the query off into the cloud. But most of it will get done on your local device, because most of it
Starting point is 00:24:49 isn't that complicated. And I think, as a user, when I ask myself what I want from AI, well, I just want it to be my ultimate assistant. I just want it to be there to make my life better. And so much of that is the context. And Apple going with that model would have been incredible. It would have been so great. We would have had the lightweight model that runs locally. It has all the context of your life. And then it offloads to the cloud. I still think this model is probably the correct one for optimizing the user experience. But unfortunately, Apple just has not done that. So it's up for grabs. I mean, again, Sam Altman's been posting a lot this week.
Starting point is 00:25:24 We do have to tease what's coming, because this is probably going to be a huge week. There's a high probability we get GPT-5. And then they've also been talking about their hardware device a little bit. And they're saying how, like, it's genuinely going to change the world. And I believe the reason why is because they're taking this Apple approach, where they're building the operating system, they're gathering the context, and then they're able to serve it locally on device. They're able to go to the cloud when they need more compute.
Starting point is 00:25:47 And it's going to create this really cool, I think, duality of AI, where you have your super private, local one, and then you have the big-brain one, the big brother that's off in the cloud that does all the hard computing for you. Well, one thing is clear. There are going to be hundreds of models, and it's going to benefit the user, you and I, in so many ways. It's the big companies' problem to figure out how these models work together and which ones get queried. I don't care. Just give me the good stuff.
Starting point is 00:26:13 And I'm going to be happy. Folks, OpenAI has been cooking. These were the first open source models they've released in six years, Josh. The last one was GPT-2 in 2019, which seems like the Stone Age, and it was only, like, six years ago. Thank you so much for listening. We are pumped to be talking about GPT-5, which we hope will be released in maybe 24 hours. Hopefully this week. I don't know.
Starting point is 00:26:39 We might be back on this camera pretty soon. Stay tuned. Please like, subscribe, and watch out for all the updates. We're going to release a bunch of clips as well, if you want to kind of, like, get to the juicy bits. Share this with your friends and give us feedback. If you want to hear about different things, things that we haven't covered yet, or things that we've spoken about but you want to get more clarity on, or guests that you want to join the show, let us know.
Starting point is 00:26:59 We're going full force on this, and we'll see you on the next one. Sounds good. See you guys soon. Peace.
