Limitless Podcast - We Stepped Inside a Memory from 2005 (Apple SHARP Gaussian Splats)

Episode Date: December 24, 2025

Gaussian splatting is a groundbreaking technology that overlays past images onto current scenes for immersive 3D experiences. We discuss its potential to transform Hollywood productions, reduce costs, and enhance gaming. Splatting could actually redefine our interactions with technology and memories. Straight out of Black Mirror.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 Introduction to Splat Technology
6:59 Live Demo of Splatting
11:11 Use Cases
19:40 Splat as a Bridge Between Realities
22:16 Merging Digital and Physical Worlds
25:38 The Future of Wearable Technology

------

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 I've been obsessed with these things called splats for like six months, and just last week we had a new breakthrough in this revolutionary, weird world of 3D imagery. So I want to start with this image that we're looking at on screen right now, because it's pretty unbelievable. And for the people who are just listening, what it is is there's a dude in a normal backyard, and he has a Vision Pro. And he has a photo that he took in the past of the same backyard in a different season. And with this new splat technology, he's able to overlay a previous picture onto a current place and then actually walk through it, as if he's able to relive the memory for the first time, using these Apple Vision Pro headsets. So it's this unbelievable technology that Apple released just last week that allows you to relive the past in the present in a way that is totally immersive, using things like the Vision Pro or any sort of virtual reality headset. And the new technology has some pretty unbelievable examples. So that's what we're going to cover in this episode: the weird world of 3D splats, and how you're actually able to turn any photo you've ever taken, any video you've ever taken, into something you could actually step into and relive again like it's the first time.
Starting point is 00:01:07 Yeah, it's pretty crazy. It looks like he's taken a picture of his garden from December 10th, 2017, when it was snowing, and he's kind of transposed it onto his garden in real time. I think the thing that shocked me the most from this, Josh, I did a double take, was they have a Doctor Who prop in here. It's like a telephone box. Who has this in their garden? This is pretty insane. But I saw a more recent example, actually, of this technology this week in a slightly crazy application. So you might have heard that some Epstein files got released, and some documented footage, pictures, videography got released. And someone decided to splat the entire thing.
Starting point is 00:01:48 So what you're looking at here isn't a real video, but rather a series of images which have been splatted to form this kind of 3D immersive experience. And he was able to generate this in a couple of minutes, which is insane. And to kind of prove to you that this is a real thing, what you're seeing on my screen right now is a Google Drive of basically all the leaked images from these files. And you can literally click on any of these. Let's go with Blue Guest Room 2.
Starting point is 00:02:20 And if you press W to zoom in, you can now literally zoom in and peek around the entire thing. Like, let's see how close I can get to this. Oh, go further. Can you look at what's on the mic? Oh, my God. Wait, what's under the bed? Okay, we don't know. We don't know, but like, I can't read the title.
Starting point is 00:02:36 You can really get into it. Oh, I'm under the bed now. Oh, you're under. I am officially under the bed. Anyway, this stuff is just insane, Josh. And this is due to Apple's SHARP model, right? Yes. This is the coolest thing.
Starting point is 00:02:50 So Apple just released this SHARP model last week, which coincided with the release of these files, which created this funny convergence of two technologies, or I guess a technology and breaking news at once. But I do want to talk about what splatting actually is, because we're saying this funny word a lot, and no one actually has a clue what it means. It's basically a way to make a 3D scene
Starting point is 00:03:08 by storing it as a cloud of tiny, soft little blobs instead of building traditional 3D models. So in the past, Ejaaz, we've all played video games before. It requires a large machine to run them, because it's a lot of textures and polygons, and there's just a lot of detail required to render fidelity. What these blobs do, what these splats do, is each blob is kind of like a puff of spray paint,
Starting point is 00:03:31 and it's floating in 3D space. And then every puff has a position, size, shape, color, and transparency. So to render an image from any viewpoint, the computer projects all those puffs onto your screen and then blends them together like layering this transparent paint. So it's much more efficient than the previous way of doing this, which is lots of crazy rendering, lots of compute required.
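To make the "puff of spray paint" idea concrete, here is a minimal toy sketch of that rendering loop in Python. Everything here is deliberately simplified for illustration: real Gaussian splatting uses anisotropic 3D Gaussians, view-dependent color, and a GPU rasterizer, none of which this stands in for.

```python
import numpy as np

# Each "puff" carries exactly the attributes described above:
# a position, a size, a color, and a transparency (opacity).
splats = [
    dict(pos=(40.0, 60.0), radius=12.0, color=(1.0, 0.2, 0.2), alpha=0.8),
    dict(pos=(80.0, 70.0), radius=20.0, color=(0.2, 0.4, 1.0), alpha=0.5),
]

H, W = 128, 128
canvas = np.zeros((H, W, 3))
ys, xs = np.mgrid[0:H, 0:W].astype(float)

for s in splats:
    cx, cy = s["pos"]
    # Gaussian falloff: strongest at the blob's center, soft at the edges.
    weight = s["alpha"] * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                 / (2 * s["radius"] ** 2))
    # "Layering transparent paint": the standard over-blend of each puff.
    canvas = canvas * (1 - weight[..., None]) + np.array(s["color"]) * weight[..., None]
```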
Starting point is 00:03:52 And if you were to create something like we're showing on screen now, which is an example of 3D skulls sitting with dynamic lighting, and it looks really real, it's something you'd see in a video game, you would normally have to render that overnight. It would take forever to do. But what this new splatting technology does, and what Apple's AI model does, is it allows you to take this high fidelity rendering and turn it into a splat by using these blobs, and that way you can render it as a single asset on something as light as your smartphone.
Starting point is 00:04:18 So it turns these really detailed, complicated compute 3D images into something very simple, so simple in fact that you can actually do it yourself. You can make images yourself. You can do this for your own content and you can do it for free almost instantly. It's really cool. And this is thanks to Apple's new Sharp model that they released last week, which is open source, which allows people to go around and play with it themselves, which I think is a really cool new paradigm for this technology.
Starting point is 00:04:43 So the way I'm thinking about this, Josh, and correct me if I'm wrong, is this: if I take a 2D image, right, it's 2D and it's composed of a bunch of pixels. If I use this splat technology on that image, it turns every one of those pixels into a kind of 3D object. Which is why these skulls, for example, aren't a video of the skulls from all different orientations. It's like a couple of pictures of this pile of skulls, or even maybe an AI-generated image or whatever, and it's splatted into these 3D blobs, and I can now kind of maneuver around it and look around it. Is that kind of roughly on the right track? Yeah, and it's funny, you'll notice with the Epstein examples,
Starting point is 00:05:26 there actually is a lot of fidelity in the way that it's rendered. So if you look at a video game that you've played in the past, normally you'll look at the details and it's very fuzzy. It doesn't look very real, because the computer's trying to save rendering compute for the things that you're actually looking at that matter. But these scenes that use this new technology look so photorealistic, because you're essentially repainting the world from these millions of soft dots. And it's fast because rendering is mostly "throw blobs onto a screen" efficiency, rather than heavy 3D geometry or the slow ray tracing that you see on a computer.
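The "throw blobs onto a screen" point can be sketched too. Below is a rough illustration, under assumptions of mine (a simple pinhole camera, splat centers already in camera space), of why each frame is cheap: project every blob's center, sort by depth, then blend front to back. No rays are traced and no triangle meshes are shaded.

```python
import numpy as np

def project_and_sort(centers, focal=500.0, width=640, height=480):
    """centers: (N, 3) blob positions in camera space, z pointing forward."""
    z = centers[:, 2]
    u = focal * centers[:, 0] / z + width / 2    # pinhole projection to pixels
    v = focal * centers[:, 1] / z + height / 2
    order = np.argsort(-z)                       # far-to-near, painter's style
    return u[order], v[order], order             # blend the puffs in this order

# Toy scene: a thousand blobs is one vectorized projection plus one sort,
# which is why even a phone can repaint the view every frame.
centers = np.random.rand(1000, 3) * [4.0, 3.0, 10.0] + [0.0, 0.0, 1.0]
u, v, order = project_and_sort(centers)
```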
Starting point is 00:05:55 Got it. And so the number one question that pops into my head then would be, well, what's the cost difference between doing this in the traditional way versus the current splatting way? And how long does this take? We got this tweet by Brad Lynch, who tried out this Apple SHARP model. And he said he generated what we're looking at on the screen right now, which is, I think, an image of him sitting by an open ocean, and, you know, he's got his Apple Vision Pro on. And he's kind of peering around, and he's in his living room. He's moving the images side by side. And it took him 10 seconds to generate this on his MacBook Pro. And I'm assuming, because he used Apple SHARP, which you just said was open source, it's just the cost of downloading the software and running it on your computer. Is that... do I have that roughly right? Yeah. It's totally free. It's totally open source. And it is totally available for users of any computer. I mean, you could render it on a laptop, you could render it on a phone, you could do this instantly anytime.
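For anyone who wants to try what Brad did, the workflow is roughly "one photo in, one splat file out." The sketch below is hypothetical: the script name, flags, and file paths are placeholders I made up, not Apple's actual SHARP interface, so check the real repository's README for the true entry point.

```python
# Hypothetical invocation of an open-source image-to-splat model.
# Every name below is a placeholder, not Apple's documented CLI.
import subprocess

subprocess.run([
    "python", "predict_splat.py",     # placeholder entry-point script
    "--input", "living_room.jpg",     # a single 2D photo goes in
    "--output", "living_room.ply",    # a cloud of Gaussians comes out
], check=True)
# .ply is the format splat viewers commonly load, including browser-based
# ones, which is why a laptop or phone can display the result instantly.
```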
Starting point is 00:06:51 Hold your horses, Josh. If you're saying that, it means we should do this ourselves, right? Perhaps we should do a demo for the audience. Let's not take his word for it. Let's do this ourselves. Let's do it ourselves. We're going to do it live here. Yes, we have the Apple SHARP kind of software here.
Starting point is 00:07:06 We're going to upload an image. Let me see if I can find a convenient image of Josh. And, oh, what do you know? What do you know? I have it here. I'm curious how long this is going to take. Look at us. We look like good-looking gentlemen on the screen. We look good in 2D. I would love to know how good we look in 3D. And that's the fun thing too: if you have loved ones or children or people who are younger, you could really freeze these moments in time. Special memories, special trips to Uganda. We're done, Josh, by the way. Sorry. Oh, before I even finished my explanation, we're done. Here we are. Let's see this. Walk me through the picture here. Oh, it's like a watercolor effect. You can see it materializing.
Starting point is 00:07:39 Oh, it's rendering in real time. Oh, this is weird. This is weird. I'm zooming in. Oh my God. Wait, can we... oh my God, I can see you from, like, above. Wait, wait. Try to go closer and get in deeper. Wait, hang on, let me get in deeper. Let me get into this face. Get up in our faces. Wait, how do I, how do I zoom? Oh my God. Oh, so we can see us from the side too. Wait, that's pretty crazy. David's face is a little warped. David's the guy, the less handsome guy, in the center. Our arms are looking pretty good. We're going to the gym. Oh, we can look at us from above as well. That's pretty insane. Zoom into the image. I'm trying. I'm trying to, sorry. I don't, I don't mean to
Starting point is 00:08:18 zoom into our crotches here. But this is the furthest I can go in, which I'm kind of upset about. Hang on. Maybe if I maximize. Oh, wow. Okay, that does look better. Oh, my God. Well, what I'm impressed with here, Josh, is number one, how quick that took to take. It's funny. On the screen, it said it gave us a countdown done from 60 seconds, and it ended up producing the entire rendering that we're looking at right now with 47 seconds to go. So it took 15 seconds to make. And I'm touching my laptop right now. It is not warm at all. So I'm assuming it didn't cost anything energy-wise as well. Also, the fidelity of this, Josh, is actually really, really good. It's way better quality. I mean, I think I look a bit kind of out of it. I look like I've been, I've had a few drinks.
Starting point is 00:09:02 Maybe I did on the night, actually. But it looks really good. Yeah, it's impressive how quickly it's able to render this and how low cost it is and how lightweight it is. I mean, you could just run this in a browser very simply. It's not a very compute-intensive thing. And it's really cool. So this is an example of a photo. There are three kind of tiers to the splatting. There's the photo first.
Starting point is 00:09:22 And then second is this in-between before we get to videos, which is this example that you're seeing with Casey Neistat's studio. Now, a lot of people who watch YouTube, they know Casey Neistat. They love Casey Neistat. And this is the most iconic place in the world of Casey. What the Meta team has done is actually go through and create a giant splat of the studio, so that anybody with a VR headset can actually put the headset on and walk around it. Now, what we're looking at on screen looks like an actual video of the studio. But the reality is that it's one large splat. And it is a full fidelity splat.
Starting point is 00:09:54 So if you put on goggles and you walk through the space, you can actually go and read the bindings on the books. You can walk up to all the shelves and peer at all the little things, all the little trinkets that are on them. It is a full and total high fidelity scan of the studio, but in a very lightweight way. If you tried to do this before, you would need a supercomputer to render it and a supercomputer to run it. You couldn't even do this on goggles that were disconnected from a computer. Now, thanks to this new technology, you can scan real places into cyberspace. And it's kind of acting as almost a preservation technique, where if there's a place
Starting point is 00:10:31 that you love, if there's a place you want to remember, you can actually scan it and then relive that forever. You can capture this place in its full fidelity exactly how it is today. And I thought this was a really cool example. I mean, what I find super cool about this is like in the traditional where you would have to take a million pictures and stitch them all together, which would have taken you hours and hours and efforts and probably a bunch of people to get involved to help you produce. Also, I like that it's to scale as well, Josh. Like a lot of these simulation kind of videos or games that I've tried with Applevision Pro and stuff like that just seems kind of, unrealistic. Obviously, like, maybe you're playing a fantasy game or something. This is like
Starting point is 00:11:08 to scale. It's like you're walking through. You're not going to bump into anything. I just think it's awesome. But then the natural question that pops into my mind is, well, can you do this with video? And I think, you know, we had our answer a few months ago, earlier this year, when this viral tweet went around of this guy who's kind of just directly speaking to a camera, but you can see in this video that someone's navigating around him. And this is just, you know, a 2D video taken head-on of this man sitting down in his chair, and someone is able to navigate around him in every single direction. You can peer at an angle perpendicular to him. You can see
Starting point is 00:11:47 kind of like the way that his jaw looks. And this is all generated through splats. None of that is real. None of that was actually filmed with a camera to the side of him. This is all generated via splats. Super cool. It's fun, when you capture things, to think of your camera as a paintbrush, or maybe even a can of spray paint, like we were talking about earlier, where if you can just capture the smallest amount of detail of a specific part of an image,
Starting point is 00:12:13 you can then render it all fully in a 3D way. Like we're seeing another image here where you can zoom in on the video, you can pan left and right, and that's because it was kind of scanned like it was this can of spray paint. You want to just kind of spray paint things and then you could relive them and capture them.
Starting point is 00:12:28 And I think it's such a cool new paradigm, whether they're driving through the city streets or they're watching someone dancing or whatever these examples are. If there's something in your life that you want to capture, you can just do it and then relive it. And this is particularly interesting if you're an iPhone user, because Apple's really the company that has been leading the charge on this. And if you use an iPhone, you're aware of the camera sensors, right, how they're kind of lined up. And when you shoot a video using the top two, you actually get real 3D spatial depth to it. And that's also because there's a LiDAR scanner at the bottom of the camera as well.
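That depth signal is what turns flat pixels into positioned blobs. As a rough sketch using my own generic math, not Apple's actual capture pipeline, back-projecting each pixel through the camera intrinsics with its measured depth gives one candidate blob position per pixel:

```python
import numpy as np

def backproject(rgb, depth, fx, fy, cx, cy):
    """rgb: (H, W, 3); depth: (H, W) in meters -> (N, 6) xyz + color."""
    H, W = depth.shape
    vs, us = np.mgrid[0:H, 0:W].astype(float)
    x = (us - cx) * depth / fx      # standard pinhole back-projection
    y = (vs - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return np.concatenate([points, colors], axis=1)  # one blob seed per pixel

# Toy call; real intrinsics (fx, fy, cx, cy) come from the camera metadata.
cloud = backproject(np.zeros((480, 640, 3)), np.ones((480, 640)),
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```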
Starting point is 00:12:59 So Apple has all the tools here, built in, to create the highest fidelity splats possible. And now they're rolling out the software to enable that to happen even more on these handheld devices that we all use. I guess the last example is: it's kind of like if you take a black-and-white photo, you can add color to it. This is taking a two-dimensional photo and adding a third dimension to it. And that's a really cool unlock. My mind naturally goes to applications. Like, what can I use this technology for? And I think, through a bunch of the examples that we've shown so far, it's kind of cinematic and maybe even veered towards gaming as well.
Starting point is 00:13:36 Hollywood is the first industry that I think of that's going to get completely run over by this, right? I know for a fact that they spend months, in some cases years, to render a single visual like the splats we've been looking at throughout this entire episode. And so I think that this is going to cut costs down by tens of millions of dollars, and it is going to cut time down, and even jobs as well. I know they have teams of different people with different skill sets to stitch all the images together, to get the right grading and lighting, to do the post-processing of a bunch of these images and then kind of make these visuals. There's no way that this doesn't get disrupted by it.
Starting point is 00:14:13 It also got me thinking of one of my favorite shows on Apple, finally we're talking about Apple and now they have this. Like one of their hit shows is Severance, right? And I remember season two, they have this like crazy scene where like we've got the camera panning around him and all. all different kinds of ways. And Ben Stiller had an interview on this where he said, each episode costs roughly $20 million to make.
Starting point is 00:14:35 And this particular scene alone cost $10 million. Now, if he had something like this splat technology, right, he could generate this in a couple of minutes, or even under a minute, like we showed ourselves earlier on, and for a fraction of the cost. It's just, it's a no-brainer for me, Josh. Yeah, I don't want to say that Hollywood is under attack, but they are definitely in need of rapid innovation, quick, because this is a second front that they're being disrupted on.
Starting point is 00:15:05 We talked about Google's V-O-3 and all the video generation models, how well they understand the world, how good they're able to generate a video. Now this merges that gap where you can actually take the real world, but you could capture it much more efficiently than you ever have before and much more fully, which creates a lot more dynamic optionality for these shots. So if you can't create it in the real world using a splat,
Starting point is 00:15:26 well, then you can create it in the digital world using AI. And what I understand is that people in Hollywood, they're already starting to use stuff like this, where they are capturing things once instead of 10 times, and they're using AI, they're using splats to just kind of massage the scene to get exactly what they want if it wasn't perfect on that first try. And it just saves a huge amount of money.
Starting point is 00:15:47 But there are more use cases for this. Yeah, so you put this one in the doc, Josh, of a Swiss glacier collapse. Is this like, can this be used as a prevention model for these kinds of things? Yeah. So earlier this year there was a big landslide in the Swiss Alps, and it took out an entire village, and it was very dangerous and created very uncertain times, because it's hard to access that area and people didn't know what was affected.
Starting point is 00:16:12 So a helicopter came through with a big camera array and it just swept the whole valley. And you could see in the video the before and after, and they captured this incredibly high fidelity splat of the valley. They could then diagnose immediately what areas were most in danger, what areas needed the most help, how much danger there actually was, and they were able to observe all of these things at any time, without needing a specific video feed of a specific area. So let's say you were looking at a specific location on the mountain; well, you could just pause the splat and you could zoom in on that one area, even if you didn't
Starting point is 00:16:42 capture it with a video camera. So there's a lot of utility for these outside of just entertainment. There's also safety and other things like this. I just thought it was a fun example of a real-world use case that actually happened earlier this year. Love it. It's been a long-time dream of mine to go to Japan, and I've been fortunate enough to go a few times, and I'm going again next year. And I kind of thought about how I'd share this experience with different people, and I spotted this one, Josh, where it's a tweet that goes: we've made it possible to walk through the hot spring town of Ginzan Onsen in Yamagata with an avatar. And this is like a real-life rendering, and it looks like a game, because it's been generated using a splat model, but it is to scale. These are real-life shop fronts and stores and homes and streams. But obviously, it's like a simulated game environment. And it got me thinking: this looks like something out of GTA, Josh, right? And I'm thinking this would change the way that you create simulated realities. Like, imagine the Sims game, but using real-life
Starting point is 00:17:45 worlds and it can be generated in real time to reflect different kinds of people, personalities, and shops. Like, imagine if New York City was updated every single second or day or every hour to reflect accurate goings-ons in that city. I just think this changes gaming forever, right? Because one of the things that I loved about gaming is that it had kind of like a preset story. But then when you got to the end of the story, I was like, damn, I can't, I now have to wait
Starting point is 00:18:10 like a year or two years until the second one comes out. GTA, what is it, five or six? Which one have we been waiting on? Oh, we're waiting on six. Six, right? We've been waiting for like 12 years. 12 years, over a decade for this game. Now you can get the second game or the third game or the fourth game or the fifth game immediately
Starting point is 00:18:26 if we had these kinds of generated simulated realities. But I kind of played this out in my head, Josh. The end game for these splats surely has to be world models, right? World models are supposedly going to be a big trend next year in AI, where you create these simulated realities or environments of the real world that we live in today, and you stick in an AI agent or an AI model to kind of generate synthetic data, so it lives out its life and it kind of figures out
Starting point is 00:18:57 how humans perceive things. Aren't splats just world models? They're not, actually. I think with splats, you can imagine it like this: what a Neuralink is to the human brain and AI, a splat is to the physical world and the virtual world.
Starting point is 00:19:13 It's the bridge that combines the two together. So what we just saw on that last example is, you're able to walk through Japan and capture it with the camera and then merge that real world data into cyberspace. And if it was a world builder, it would just kind of create these virtual worlds. So what I see is kind of the way this goes would ideally be a combination of the two, where splats are unbelievably efficient and are easy to capture the real world with. And then you could take an AI model, a world model, and you could apply extra fidelity on top of it, depending on how much compute you have. So you could think of the splat,
Starting point is 00:19:49 as a way of scanning the real world into the digital world. And then the AI world models are a way to increase the fidelity, using neural nets to predict what should be there, to fill in all the blank spaces and make it feel like more of a real-world-plus experience. So if you were walking through your childhood house and you were scanning it, you could take a low-res, or a high-res but not totally high-res, version with a splat and then use AI to enhance it. So then you can actually capture this place that's special to you and relive it forever using this cool new technology. So this very much feels like a bridge into this future hybrid between the real world and the digital space. Okay, that makes sense.
Starting point is 00:20:25 So if the mission is to help AI understand humans in all forms of the way that they sense things, the way that they perceive things, sight, audio, visual stuff, instead of relying on them to kind of generate it from a bunch of data that we feed them, we can kind of take our reality and surroundings, compress it into this splat model, and then feed that into an AI model, a world model, a simulated reality that they're kind of operating in. And they'll have a more accurate depiction of how humans perceive the world, which will then accelerate us to whatever the hell AGI is going to end up being. Yeah, it's like if you've played with Nano Banana Pro, for example,
Starting point is 00:21:04 and you added an image that was old, and maybe it was a black-and-white image that was very low quality, it can add color and it can make it feel more high quality. That's kind of what this does, but for virtual spaces. Well, what I like about that is we're just going to end up with an abundance of data. And data has been lacking. I think at this point, every single model has been trained on the same corpus of data, and we need to start tapping into private buckets of data to add kind of value or intelligence
Starting point is 00:21:32 to an AI model. This kind of bypasses that entirely and creates this kind of synthetic but really accurate environment. That's super cool. And then, in terms of the end game here, Josh, do you think Apple's going to forge the path here? Has Apple somehow dug themselves out of the grave that they've kind of dug for themselves?
Starting point is 00:21:50 Because they haven't been involved in AI or anything. No, they haven't been. And this is not by any means a real attempt at AI. This is kind of a separate thing. This is in regards to their vision platform. This is kind of like what the future of compute is. Everyone's building a pair of glasses. We have Meadow.
Starting point is 00:22:07 We have Google now. We have, I'm sure, Microsoft working on HoloLens. Apple has the Vision Pro. Everyone is working on this new spatial compute platform. Apple is definitely furthest along this path. And granted, the Vision Pro gets a lot of hate, because it's very expensive and it doesn't have a lot of use cases. But what you're seeing here is an early prototype for what the future of this compute will look like when applied to actual consumer products. So if you scroll down to one more beneath this,
Starting point is 00:22:32 there's a really fun example where we take the splats that we mentioned in the video, and you can actually pin them on your wall in your apartment. So as you walk through the real world, you're able to pin these photo frames, and each has the splat built in. So you could walk up and actually look into the photo and relive that experience. There are some other examples they have where they've pinned widgets on the wall. And as you walk into your apartment, these widgets, like a calendar, will just be present on your wall. And again, it's this merging of the digital and physical worlds. And it looks real. It looks like it's embedded inside the wall, kind of embossed in. But what this leads to is this merging, this combination of digital and physical through these
Starting point is 00:23:14 augmented glasses that we're going to get. And splats are a really important part of that. So when Apple open-sourced their model earlier this week, that was a really big deal, because it allows other developers to also lean into this. And you could see, even in this example, you could scan in people's faces and you could speak to them in real time, as if they were sitting right in front of you. So it's this fun entry into the metaverse. And this is almost what I wish I saw Meta doing. Because, their new name being Meta, they should be leaning into building some sort of a metaverse, which is the combination of these two worlds. And it seems like Apple's actually the furthest ahead. And this is kind of what it'll look like when it's implemented across consumer
Starting point is 00:23:46 products as we go. It looks like something out of Star Trek or Star Wars. Like, you know, you got this holographic simulations of people speaking to you. I think a major trend that's helped us get to what we're looking at today in front of us and make all of this feasible is just massively reduced costs of things. Like, we've just spoken about like the cost of producing a splat or like a Hollywood traditional version of this would have cost tens of millions of dollars and now it takes a couple of seconds and download an open source software. That is just massive. I think the next biggest trend is going to be the form factor specifically. Like, I can't help, but sorry, hate on how big and bulky these, where's the, where is it,
Starting point is 00:24:30 big and bulky, the Apple Vision pros look on people's heads. I'm like, that just looks so dorky. It also kind of reminds me of Google Glass, which is obviously a completely different product, but looked also really dorky and crazy for people to wear. It seems like the form factor is going to be glasses, Josh. MET is making them. It was leaked this week that Apple is also potentially working on a glasses version. That's not going to be Apple Vision Pro. It's going to be much more slimmer, sleeker, thinner. And then you have Google that's releasing Google Glass 2.0 next year. And then Amazon apparently is even releasing one as well. So it seems like glasses are going to be the form factor. I think it's now cheap enough to produce at a much larger scale
Starting point is 00:25:14 so that anyone and everyone can use them. But also, I think, like, the components are making these glasses, like the transition and stuff are also cheap enough to run this technology. So it would kind of add a culmination of all these trends coming together where it's going to make the spatial reality that you've just described happen in real life, which is super cool. We're getting close. It's like apples are big, bulky, and expensive.
Starting point is 00:25:33 They're $3,500 that weigh a couple of pounds. They work unbelievably well. That's what you want. Meadows's glasses, Google's glasses, they suck. They're cheap. They fit on your head. but they're a terrible experience. So as we converge to the middle of whatever that is,
Starting point is 00:25:48 as we reach Apple's quality with meta's form factor, that's when you're going to start to see this stuff everywhere, because it will be cheaply accessible and a really phenomenal experience. And like you said, these costs are coming down quickly. It is only a matter of time until we get that perfect middle ground and this hybrid product exists where we do start to get these experiences available to us in our day-to-day lives in a package that is reasonable to walk around in on a day-a-day basis. So that's one of the things I'm most excited about is this new frontier of
Starting point is 00:26:17 hardware that is paired and supercharged by AI and all these other cool pieces of software like splats that we're seeing unveiled pretty much every week now. So the metaverse is basically becoming a reality. And I'm so glad that we've moved on from it being a fad to it being a reality. Maybe we were just kind of like five years too early with all the NFT stuff from the crypto sector that we saw way back when. But this is, This is awesome. And I'm excited to kind of see this scale to real-life applications, Josh. Like, it's all and well seeing like these demos of things,
Starting point is 00:26:49 but I can't wait to see the first splat movie so that I can kind of like see things from different orientations, get people's different perspectives. I can't wait for it to hit gaming and fast forward GTA 6, 7 and A so that we don't have to wait another decade for these releases. And I'm excited about the costs and the form factors that are going to come with this. Like being able to wear glasses, I'm curious, right? Because I was super skeptical when AirPods became a thing. and then I'm like, oh, I wear them all the time now,
Starting point is 00:27:14 and it's just kind of like embedded in culture. It's going to change the way humans kind of interact with each other, which is going to be super, super cool. But that is it for the rest of this episode. It has been quite a week and quite the year. This is our Christmas Day episode. We hope you feel fearful. Josh and I came in our best Christmas attire.
Starting point is 00:27:35 I came with, let's call it the coal for being bad, I guess, the coal color. and Josh, you're reppping red. That's awesome. If you are somehow so passionate about Limelis and you'll listen to this, I just want to say thank you. That's frigging awesome.
Starting point is 00:27:50 There have been thousands and thousands of you that have joined our community that are subscribed to us, that listen to us. Week in, week out. And it means the world to us. It means the world to me especially. And it's just awesome to have you guys here.
Starting point is 00:28:04 We know about like 80% of you aren't subscribed. In the spirit of Christmas, we would love if you tap that subscription button if you tap the notification button or if you're listening to this on Spotify and you don't even want to see our faces, give us a rating. It would mean the world to us.
Starting point is 00:28:20 Also, this is one fun fact is we just crossed 100 episodes for Christmas. So in terms of Christmas gifts, one, we have 100, two, thank you. You guys listening is the Christmas gift and three, we will continue to post all throughout the holiday season as a gift to you for supporting us all year out.
Starting point is 00:28:35 So thank you for the support. As always, you know, this is like the best part of it is just being able to just see the success of it, see people listening, sharing it with your friends who would also enjoy it. Ejazz ran into someone the other day who randomly was talking about the show but didn't recognize Ejazz's face because he had never watched the videos, only recognized the voice, which I thought was so funny. So it's a really nice thing to see the message spreading. So thank you for that. I guess happy holidays to all who are listening. We're going to keep the episodes coming and yeah, we'll see you guys in the next one.
