RHAP: We Know Survivor - Ask Dr. Christian Hubicki | Summer 2025

Episode Date: August 29, 2025

It’s that time of the year again: time to expand your mind with Dr. Christian Hubicki! In our ninth installment of the series, Rob Cesternino welcomes the Director of Optimal Robotics Lab, robotics... professor, Survivor castaway, comptroller of slam-town and great friend of the podcast Dr. Christian Hubicki to respond to questions submitted by RHAP listeners.

Transcript
Starting point is 00:00:00 When you're with Amex Platinum, you get access to exclusive dining experiences and an annual travel credit. So the best tapas in town might be in a new town altogether. That's the powerful backing of Amex. Terms and conditions apply. Learn more at Amex.ca. Ontario, the wait is over.
Starting point is 00:00:32 The gold standard of online casinos has arrived. Golden Nugget online casino is live, bringing Vegas-style excitement and a world-class gaming experience right to your fingertips. Whether you're a seasoned player or just starting, signing up is fast and simple. And in just a few clicks,
you can have access to our exclusive library of the best slots and top-tier table games. Make the most of your downtime with unbeatable promotions and jackpots that can turn any mundane moment into a golden opportunity at Golden Nugget Online Casino. Take a spin on the slots, challenge yourself at the tables, or join a live dealer game to feel the thrill of real-time action, all from the comfort of your own devices. Why settle for less when you can go for the gold at Golden Nugget Online Casino? Gambling problem? Call ConnexOntario, 1-866-531-2600. 19 and over, physically present in Ontario. Eligibility restrictions apply. See GoldenNuggetCasino.com for details. Please play responsibly. This episode is brought to you by Defender. With a towing capacity of 3,500 kilograms and a wading depth of 900 millimeters, the Defender 110 pushes what's possible. Learn more at landrover.ca. One and sip, and two, and sip, and three, and sip.
Oh, hey, I'm just sipping Tim's all-new protein iced latte. Starting at 17 grams of protein per medium latte, Tim's new protein lattes. Protein without all the work, at participating restaurants in Canada. It's Rona Week. Now until Wednesday, rain or shine, you can always be building yourself a better summer. So head on over to Rona and save 35% on cans of 3.78-litre Rona interior paint. Give that room you keep saying needs a fresh coat of paint a fresh coat of paint. Build it right, build it Rona.
Conditions apply, details in store, and more offers at rona.ca. We sell buckets too. Hey everybody, what's going on? Rob Cesternino, back with you for another installment of one of the longest running series that we have here on RHAP. That's Ask Dr. Hubicki, born in the dog days of COVID as just an excuse to get together with this guy. It's become a staple, a bi-yearly, or would it be bi-yearly? Let's say a semi-annual. A semi-annual font of knowledge for all who have a love of learning, back here with my great friend, Dr. Christian Hubicki.
Starting point is 00:03:25 Hey, it's great to be back. Great to be here with you. It's been a busy summer. But how are you? I'm doing great. I know things have been, it's been such a busy summer since we've last gotten together. And it's so nice to get the chance to catch up and learn so much here today. Any big news for you?
Starting point is 00:03:48 Well, it's been a very busy summer for me. I'm sure you know exactly what I'm thinking and that it's, you know, I had a lot of summer research projects going on. and I know everyone has been curious about them. So we're building a whole new robot prototype that, you know, for a big company and there's a lot of, a lot of project planning. That's what I heard. Yeah, yeah.
Starting point is 00:04:07 I mean, and so, and I'm starting to do a lot of travel, which, you know, and of course, you know what you're thinking? It's, of course, all of my robotics research talks that I'm going on the road for. So, in fact, I believe when this podcast is dropping, I think this is the end of the week as we're at now, right, roughly. Yeah.
I'll be in Atlanta for DragonCon, giving a bunch of robotics talks. Okay. Which will be especially exciting because we will be bringing my new baby boy with us. Baby boy. Okay. Yeah.
So congratulations to you and Emily on the new addition. Thank you so much. Thank you so much. Yeah, he's a wonderful little angel almost all the time. And it's fantastic. And he's been adorable and cute. And we'll be taking him to DragonCon, which is, of course, like a pop culture convention.
Emily has already made some costumes for him. Yes. Okay. Well, really, in all sincerity, congratulations. And how is life as a dad? Oh, I tell you, it's wonderful. I love it. And it really forced me finally to have, like, a set schedule in my life.
Starting point is 00:05:16 It really adds a lot of structure and also tears down a lot of structure to your life. Things that you plan on doing at a particular time is like, nope, nope, nope, no, no, baby needs attention. Like, so really, I learned, I learned to value the times where, like, you know, I get to be alone with, you know, hang out with him and play with him, but also the times where he is asleep. And those are the times that I have to be productive. Yes. You have to prioritize things. There's like less time that is unstructured and more like, okay, I am either working or not, you know, doing something because it's important to me or not. 100%.
Starting point is 00:05:53 100% really forces the prioritization of all that. I mean, but it is all the little firsts are fun. I'll never forget his first little bath. And I was always afraid as like, what's he going to respond to the bath like? You know, being in the water,
Starting point is 00:06:06 you know, and he came out and he had this adorable little like face. Like, well, as if he just experienced something beautiful for the first time. And I'm like, oh, I'll never forget that.
And my photo roll has never been longer on my camera, that's for sure. Yeah. Okay. Well, so happy for you both and all the best to you and the whole family. And we have invited our listeners once again to ask questions.
And Dr. Hubicki has selected some things that he feels he can illuminate the rest of us about, has done research where applicable, and is ready to answer all of the questions here today. Anything you want to set up before we dive in? Oh, well, first off, lots of great questions. Thanks to everyone who popped in on either Instagram or Bluesky or Twitter to put in questions. A lot of, of course, robotics and AI-related questions
that's becoming increasingly topical. And it is a bit wild, because something that was always my field, specifically robotics and legged robotics, is now a multi-billion dollar industry of startups. So it's exciting, and I'm glad people are excited to know more about it for all kinds of reasons.
Okay, would you ever leave education for the private sector? Probably not, probably not. I mean, never say never, that sort of thing, but I worked for a long time to become a tenured professor. And one of the things I love about it
is I can say what I want to say about these robots, whether it's great or whether it's overhyped. You know, I don't have a profit motive. And it is great, because then a reporter can call me up and say, hey, Christian, what do you think of this particular news item in robotics? And I can tell them the truth. So in a way, it's extending education beyond my classroom, trying to give what are at least my honest, I think informed, opinions on a subject.
I was just quoted in Business Insider earlier this week on some of the latest humanoid robot news that they were covering. And I enjoy that. It would be harder to do that in the private sector, but never say never. Okay. All right. So let's then open up our discussion here today with something that you and I actually had the chance to talk about recently on last week's edition of News AF. News AF, for the unfamiliar, is the podcast that I've been doing for 10 years with our colleague Tyson Apostol and the great Danny Bryson, and we covered the world's first humanoid games, and Christian was kind enough to call in. This is a question from Kat from Minnesota: Did you follow the World Humanoid Robot Games? Was it a good platform to highlight the best technology of today, or was it just a spectacle? Great question. So the Humanoid Robot Games really snuck up on me, and I was like, how did I miss this was coming up? And that would be really scary, if the humanoid robots start sneaking up on us.
Yeah, you know, give it some time. But anyway, well, not like that. No one wants the sneaking robots. Thankfully, they're still a little too loud for that. But the games, I was like, when did this happen? Because often these large robotics competitions have long lead-up times. Like, famously, 10 years ago there was the DARPA Robotics Challenge.
And that was where you had humanoid robots that were going into a simulated industrial disaster, like a nuclear meltdown, to try to help. But that had three-plus years of lead time. It was announced, and then there were about three years of competition until you got to the main competition. This one, I won't say was stealth announced, but it was announced in, like, May. And I wasn't busy at all in May and June.
So no wonder I missed some of this news. It really was sprung on us. But it's really cool. You can just Google the World Humanoid Robot Games. It took place in China, in, I believe, Beijing, and it was a showcase of a couple of Olympic-like events for humanoids. This all dropped while I was on vacation, like a week or so ago, and I was like, what's happening? Do I go on vacation, or do I watch these games? And so I said, I'll watch them later. They were on the internet. People posted clips on the internet; they were not televised. And so these Olympic-like events: there was basically sort of a track-and-field-ish event, like a 1500 meter dash. There was an obstacle course, which is not, as far as I can tell, an Olympic event, but they're sort of Olympics-inspired events. More American Gladiators than Olympics. It was a little bit of American Gladiators. And the thing they call the Eliminator, you know, might as well be. I suggested on News AF that they should have the joust from American Gladiators. That'd be cool. But there was a bit of running, and there was kickboxing, which I think was the coolest one. But the choice of events, and there were some dancing competitions and such, this is all very clearly put together by the Chinese government as a big showcase of the humanoid robotics field in general. But very specifically, while anyone could
Starting point is 00:12:08 show up for it, it was predominantly if not only Chinese-made humanoid robots that were at the event, probably because it was so recently announced, like who's going to be able to come out and do these events. But there were some varying degrees of cool things. Like there was 5V5 soccer. There was, and the kickboxing was fun. But the question here is, like, was this a showcase of technology or was it spectacle?
Starting point is 00:12:34 Well, it's both. The question is, what was it designed to highlight? And that's what a competition like that is designed to do. And like when the DARPA Robotics Challenge 10 years ago, It was designed to put the same challenge out to a lot of innovators in the field to say, can you make a robot that is what they called semi-autonomous, meaning that you could give it sort of high-level commands and have it be able to do something important in a factory. Like, if we had a disaster, we couldn't send in people, could we send in robots instead?
It was inspired by the 2011 tsunami that hit the Fukushima Daiichi power plant in Japan. And they couldn't send in robots; the Japanese robots at the time were more for show than they were for practical work. So this really was a signal to the world that we weren't ready for these things. So the competition was designed to determine how ready we were for semi-autonomous disaster response.
And it created a lot of innovation, also a lot of meme videos of robots falling over. This competition, as far as I could tell, was to highlight the capabilities of a lot of these very recent and highly visible Chinese humanoid robot makers, particularly Unitree Robotics, which makes a lot of the humanoid robots you'll see sort of in the wild. By in the wild, I mean you might see a humanoid robot walking around on the streets.
Actually, there might be another question that we get to where it's probably one of these Unitree robots. Because this Chinese company, Unitree, they do some really good engineering, and they're able to mass-produce these robots relatively cheaply. And they can do some really good stunts with them. Particularly, they're really good at making their robots go fast. Their humanoid robots are a little smaller and a little more agile, and they're actually really good on rough terrain. So, unsurprisingly, Unitree killed at the competition where they had to run really fast and go over rough terrain. And they did a really good job.
Starting point is 00:14:42 It's really genuinely good control work that they do with it. So it was a good spectacle. Like you look out in the audience, you see a bunch of kids who are clapping and cheering for these cool humanoid robot competitions. And these humanoid robots are doing tasks like that I was trying to do in the lab with my colleagues like 10 years ago with our robots. And it's like, oh, how cool. This thing that we would do, which was put a bunch of rubble on the ground and say, robot,
drive over it, you know, walk over it with your legs, and we'll see if it doesn't fall down. They're doing it for fun and for sport. And I think that's cool. So it's good, but I think the thing you want to highlight is that all these competitions that are coming out right now, these games, are engineered in such a way to make a particular point about a particular subset of the industry. Everyone's sort of setting their own goalposts in the humanoid robotics field. And it's not just these companies. Like, there's a company called Figure AI, who happen to have a really good laundry-folding robot.
Starting point is 00:15:40 So they're like, hey, look, my robots, like the CEO is like, my robot is in my house, folding my laundry. Look at this. Here's a video. And like to them, that's the standard, right? So everyone's sort of setting their own goalposts in humanoid robotics business. And at the humanoid games, they literally set the goalposts when they played soccer. Literally, yeah. Yes.
They played soccer. And so, just a few technical details so people get a little more flavor as to what these robots were doing. In most of the events, the robots are remote-control piloted. So if you're watching the 100 meter dash, if you look around the field, there are some human operators running behind the robot with a remote control. And that was surprising to me when you said that. Yeah, it's true. That's because one of the things that Unitree does really well is what we would call the low-level control policy. And that's the part of the code that says, oh, you want me to walk forward?
I'm really good at knowing how to take a step, and if I get pushed, how to not fall over. That's really taken off and gotten very good over the last five years. We've gotten really good at that as a field, and Unitree knows how to train those algorithms really well. So it shows that off. But as a result, they can't see anything. They're blind, to the point where during one of these dashes, one humanoid robot literally runs into the back of another person on the field and knocks them over.
And the humanoid robot stayed up, by the way. I thought that was impressive, although in my head it's like, hey, let's think a little bit about safety lanes here. Yeah. Yeah. Well, you had told us on News AF that you would actually be very good at being the control person operating one of these robots?
Yeah, no, I think I would be pretty good. I have some experience. I run my own robotics lab, for those who don't know, and we have some of these humanoid robots. We have a taller one than the ones you'll see in the humanoid robot games. It's called Digit. It's made by an American company that was started by my friends and my former professor. And so sometimes you've got to drive the robot around. And this will be a bit of a flex, but my students will code up the algorithm.
And I know what algorithms they're coding up; I have a general understanding of it. I still am very proud. My student was driving the robot around with his low-level control policy, as we call it, which turns directions into motions for the robot. And I was like, can I drive it? And, man, he gave me the controller.
I had a good feel for it pretty quickly. Because you can't tell the robot where exactly to put its feet. But if you kind of understand how it responds to a command based upon where it's moving at a given time, you can have an intuition for how it will respond. So I was pretty good with that. And in fact, this goes back years, back when robots were a lot less stable, we would still have these game controllers. But the low-level control code wasn't as precise or good.
And so we had to be really careful on the thumbstick with how these things would move. And you play a video game or two. Yeah, I played a video game or two. But I think it's just understanding: if it's just a black box, you have no idea what it's doing or why. You might just say, I said go forward, why is it not going forward? Well, you kind of have to understand the context of what it's sensing and so on. And I was pretty good at it.
Yeah. Yeah. Okay. Yeah, and so there's a lot of remote control. So if you're watching the kickboxing matches, you'll notice they're throwing punches and they're missing constantly. Well, that's because the robot can't see. It's literally like Rock 'Em Sock 'Em Robots: they hit a button
on the controller, and the robot will throw the arm forward in whatever direction it's pointing. And it's pretty complex to get the robots to do that as effectively as they can, but they still can't land the punch. But what's really well done, for me, is these get-up routines: when the robot gets knocked down, it will pick itself back up pretty quickly. And they did a really good job there, so it does that stuff autonomously. Because doing that by hand? Impossible. Yeah, we saw some of that video.
And there's a ton of video out there from the humanoid games if you want to check out what that looks like when the robot gets back up. Yeah, for sure. And there's one where the robot was basically pushed by another robot, and you'll notice that all the robots are very similar; that's because a lot of them are very similar model robots. One will get pushed, knocked backward, and its butt kind of hits the ground, but it bounces up just enough that the feet can catch itself again. And the way that it does this is that it is using deep learning, meaning that the robot is given a piece of code that is trained on how to handle many, many different situations. So on a computer, back at the lab, they will run a whole bunch of simulations, putting the robot in different scenarios, and try to optimize what the robot will do in all these different scenarios. That's the high-level thing. And they can run so many different scenarios that the robot has seen something similar enough, to being kind of close to the ground but not on the ground, that it can recover. And so all the learning is done, quote unquote, on that computer at the lab. Then they take what it learned, what we call a policy, and they upload it to the robot, and then it's on the robot, and that's how it behaves. So the robot itself is not doing any learning, to be clear, when you're watching this event. All the learning is already done. What it's doing is reacting with this policy that has already been trained. It's the humanoid games. You've said robot a lot. At what point do we need to start calling them
Androids? You know, is there a distinction? I had someone at a talk once ask me this question, and I thought it was a good one. That person, Davy Rickenbacker, came to one of my talks and asked me that same question. And weirdly enough, Android is not a term that I encounter very much in the field of robotics. It's more like a science fiction term. Now, I think people know what people are talking about, but the only distinction is the degree to which it is intended to look like a person. Right. And is there an established line as to what that is? Not one that I'm familiar with. And so if you call this an Android, I wouldn't be offended or say you're wrong. We think of it as a robot that happens to have the form of a humanoid. I would imagine that maybe if you hit a point where you were trying to make something that really passed for a person, it might make sense to give it a different name like Android, because that will be a different field of research to make that happen.
Because I feel like, growing up, it was almost like to call a humanoid robot a robot, that's offensive. That's an Android, okay? It's an Android. It's not a robot. Well, look at Lieutenant Commander Data: you couldn't call him a robot. No.
That would be, you'd go to HR on the Enterprise. I mean, Counselor Troi would rip you a new one. She would do it very politely, though, I feel. Yes. And there are a lot of terms in the sci-fi world. Yeah, yeah, I'm sensing that you are worried about your future employment on the Enterprise. Yeah.
But there are a lot of terms in sci-fi that I find myself not using very much, like the prefix cyber. You know, that's a thing. People use it in actual academic terms, like cyber-physical systems. But I don't find myself using the prefix cyber, or Android, or even, a lot of times when I'm just talking with my colleagues, I almost don't use the word technology. It's weird. There's no good reason for that. That's just a shop-talk thing.
Yeah. You know, but that's something we have to translate well to the public, because people know the term Android because they've seen Star Trek. Yeah. Okay. You ready for another topic? Indeed. Okay.
This is a question that comes to us from Mark Rich, who wants to know: Is The Price Is Right game Plinko totally random, or is there any type of strategy with the coin placement? So that's a great question, Mark Rich. It is random, but that does not mean that there's no strategy to it. So let me level-set for a second. For those who are not familiar, Plinko is a very fun game on The Price Is Right where you take a chip to the top of a peg board
and you drop it in a particular slot, and then it goes down all these pegs, it kind of bounces around, and it lands at the bottom in some prize or zero dollars, right? And it is very random as to how it bounces, right? But that doesn't mean that every position would yield the same result. In fact, there are a lot of analyses that people have done. It's actually pretty simple in the scheme of things. The distribution of where it would land, if you tried many different times, is a well-known distribution called the binomial distribution. So I'm going to pull up my board here real quick and sort of illustrate, if I can actually get this working here. Okay, all right. It's going. Yep. So yeah. So for those at home who are listening on the podcast, I'll narrate this as best I can as I'm drawing on my little board here. Yeah, I'll go full screen on it, or at least let's go big frame on Christian here, and let's see if we can do the switch. Okay, there you go.
So the idea is, if you have a little puck that you're going to drop down and it lands on a peg, okay, it could either go to the left or it could go to the right, right? And we would argue that it's roughly 50% that it's going to go either way, on either side, right? And then it's going to go down one side and it's going to hit another peg, each one of which might send it down either side, about a 50-50 chance, okay? Right? And you continue down until eventually you get to the bottom, where you have all of these little bins it can drop into, right? Right. The question is, how likely is it that you're going to land in any one of those bins, right? And the distribution of it, what we would say, is the probability that you're going to land in each of these bins. Imagine each one of those is like a vertical bar, like a bar chart. If you dropped your Plinko chip in the middle of this board, the highest probability would be that it lands in the middle. And as you get further and further away, the bars get smaller. And the way that they get smaller, if you can imagine the good old-fashioned bell curve. The bell curve we know is the normal distribution. Well, this particular distribution is called a binomial distribution. The bi, as in the number two, means that you can go one of two ways each time, which is more or less why this becomes a binomial distribution. So if you wanted to be pretty sure that you land near the middle, you would drop it in the middle, okay? It's not equally likely that you're going to land in every bin, in which case that little bell curve would instead look like a flat rectangle. Yeah. And that's not what you're dealing with, okay?
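As an aside for readers who want to put numbers on this: the bin probabilities being described here are easy to compute. Below is a minimal Python sketch, assuming an idealized board where every peg is an independent 50/50 left-or-right bounce and ignoring the side walls that come up next in the discussion; the 12 rows are an illustrative assumption, not the show's actual spec.

```python
from math import comb

def bin_probabilities(rows):
    """Idealized Plinko: after `rows` independent 50/50 bounces, the
    chip's landing bin follows a binomial distribution over rows + 1
    bins (no side walls in this simplified model)."""
    return [comb(rows, k) / 2 ** rows for k in range(rows + 1)]

probs = bin_probabilities(12)  # e.g., an assumed 12-row board
```

For 12 rows the center bin comes out to 924/4096, about 22.6%, and the bars fall off symmetrically toward the edges, which is exactly the bell-curve shape being drawn on the board.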
But that's only part of Mark's question, which was: is there any strategy to it? Well, yes. If you, in fact, drop it down the middle, the peak is going to be centered in the middle, but if you dropped it more to the left, that peak is going to be a little bit more to the left. And what complicates it is the fact that the Plinko board is not infinitely wide. It has walls on either side, okay? And that shifts the distribution from being a symmetric bell curve that people are familiar with, okay, to something that is skewed and asymmetric.
Now, I was almost ambitious. I almost went and wrote some simulations right before this to see what it looked like, but then I had to take Michael to daycare. But anyway, there's always time after the podcast for an update. Time after the podcast. Yeah, maybe I'll post that online, because now I'm curious. But it might look like kind of a skewed bell curve.
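For readers who want to try that quick simulation themselves, here is a rough Python sketch. Everything in it is an assumption made up for illustration, not anything from the show or the episode: the half-bin bounce model, the wall clamping, the 12 peg rows, and the prize row with a $10,000 center slot flanked by zeros that loosely echoes the layout discussed later in the segment.

```python
import random

def drop_chip(start_bin, rows, n_bins, rng):
    """One Plinko drop: each peg row shifts the chip half a bin left or
    right with equal probability; the side walls clamp it back in bounds."""
    pos = float(start_bin)
    for _ in range(rows):
        pos += rng.choice((-0.5, 0.5))
        pos = min(max(pos, 0.0), n_bins - 1.0)  # bounce back off the walls
    return round(pos)

def expected_value(start_bin, prizes, rows=12, trials=20_000, seed=0):
    """Monte Carlo estimate of the average payout for a given drop slot."""
    rng = random.Random(seed)
    total = sum(prizes[drop_chip(start_bin, rows, len(prizes), rng)]
                for _ in range(trials))
    return total / trials

# Made-up prize row, loosely echoing the big center prize with zeros beside it:
prizes = [100, 500, 1000, 0, 10_000, 0, 1000, 500, 100]
evs = [expected_value(s, prizes) for s in range(len(prizes))]
```

Dropping near a wall folds the distribution back onto itself, which produces the skewed, asymmetric curve described above; and averaging the payouts per drop slot is exactly the expected-value comparison that comes up a bit later in the discussion.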
You might get a retweet from The Price Is Right account. That would be good. I would appreciate that as a science communicator. You know, and also a CBS property, I'll point out, by the way. So, yeah, it'll look a little shifted. So it probably is as simple, Mark, as: if you want to target a particular amount, you drop it over that amount. Now, that does not mean you will get it. As you can tell, it'll sometimes go to the side, right? And very cleverly, the Price Is Right people will typically arrange the prizes so that they have the really good prize right there in the middle, right? And the really bad prize is right next to it, right? So in the middle, it was $5,000 when I was a kid, and now I think it's $10,000. Yeah, I looked it up. It's $10,000 in the middle. And then on either side, it's zero, nothing, right? So the way you would optimize is you would do what we call an expected value. You would take the probability, for a given drop spot, that you will land in each particular bin, multiply all those probabilities by the dollar values, and say, hey, okay, which one of these will give me the highest dollar value? So I did not do that full analysis, although I think that may be something we can do after the podcast, because maybe there would be an optimal spot to drop it to maximize the expected amount of dollars you would get for every drop. Yeah, I think that the zeros are an interesting factor. Like, are you better off to potentially just totally avoid the possibility of the zero? Is the $10,000 exciting enough to risk the zeros? Because I feel like people almost never just drop it straight down. I feel like they're always a little to the left or a little to the right. Yeah, the little to the left or right is interesting. I almost wonder if it might just be a learned behavior, from people watching the show as kids: I've got to be strategic, only a dummy would go right down the middle. I don't mean Mark, who asked the question; only some dummy would do it right down the middle. I'd do it a little to the left or a little to the right. Yeah, it feels like you want to be just
Starting point is 00:31:35 slightly tricky enough right you're like oh yeah I I get that but yeah so that that This might be an answerable question. And again, it assumes that there's another element of this as well that like you can, you know, maybe try to get it like just to the left or to the right of one of the pins at the top to kind of skew the probability a little bit for the first two layers. Maybe you can squeeze out a little bit there. But this is a very tractable problem.
Starting point is 00:32:00 One that we could investigate just by simulating it a bunch of times with a very, very simple simulation. But whether that little adjustment actually gains you anything over just dropping it straight down the middle every single time — yeah, now I really want to do this. If only I didn't have, like, three other talks I had to get ready this week. I would totally — this would be, this is fun. This is fun.
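For anyone who wants to try Mark's question at home, the expected-value analysis Christian describes really can be sketched in a few lines. This is a toy model, not the real board: the number of peg rows, the 50/50 left-right bounce, and the wall behavior are all simplifying assumptions (the prize layout is the modern $10,000-center board mentioned above):

```python
import random

# Bottom-slot prizes on the modern Price Is Right board (center = $10,000).
PRIZES = [100, 500, 1000, 0, 10000, 0, 1000, 500, 100]
ROWS = 12          # assumed number of peg rows; the real board differs
N_SLOTS = len(PRIZES)

def drop_chip(start_slot, rng):
    """Simplified Plinko: at each peg row the chip shifts half a slot
    left or right with equal probability, stopping at the side walls."""
    pos = float(start_slot)
    for _ in range(ROWS):
        pos += rng.choice((-0.5, 0.5))
        pos = min(max(pos, 0.0), N_SLOTS - 1)   # clamp at the walls
    return PRIZES[round(pos)]

def expected_value(start_slot, trials=100_000, seed=1):
    """Monte Carlo estimate of the average payout for one drop position."""
    rng = random.Random(seed)
    return sum(drop_chip(start_slot, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for slot in range(N_SLOTS):
        print(f"drop over slot {slot}: EV ≈ ${expected_value(slot):,.0f}")
```

Whether the middle beats an off-center drop then falls straight out of comparing the per-slot expected values — exactly the "simulate it a bunch of times" plan Christian sketches.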
Starting point is 00:32:23 Now I'm really thinking about this, Mark. Thank you. Summer's here, and you can now get almost anything you need for your sunny days, delivered with Uber Eats. What do we mean by almost? Well, you can't get a well-groomed lawn delivered, but you can get a chicken parmesan delivered. A cabana?
Starting point is 00:32:37 That's a no, but a banana, that's a yes. A nice tan? Sorry, nope. But a box fan? Happily, yes. A day of sunshine? No. A box of fine wines? Yes. Uber Eats can definitely get you that. Get almost, almost anything delivered with Uber Eats. Order now. Alcohol in select markets. Product availability may vary by region; see app for details. Bank more when you switch to a Scotiabank banking package. Learn more at scotiabank.com slash banking packages. Conditions apply. Scotiabank, you're richer than you think. At Grey Goose, we believe that pleasure is a necessity. That's why we craft the world's number one premium vodka in France, using only three of the finest natural ingredients,
Starting point is 00:33:22 French winter wheat, water from Gensac, and yeast. With Grey Goose, we invite you to live in the moment and make time wait. Sip responsibly.
Starting point is 00:33:52 [Movie trailer audio] The Conjuring: Last Rites, only in theaters September 5th. All right. Let's take another question, and why don't we dive into one where...
Starting point is 00:34:15 Pull this up, Rob, here. One thing I forgot to mention — I wanted to thank... So one of the things we covered last time is I mentioned, like, we've done enough Ask Dr. Hubickis now — this is, like, the ninth or tenth one, I feel. Yeah, and I was like, I'm at the point where I'm worried I'm going to say the same stories multiple times. I really should catalog what I've said, and if anyone wanted to help me
Starting point is 00:34:38 out with that, I'd be thrilled. Well, guess what? People actually responded to the call, Rob. Yeah, what did they do? So I had two people, Aaron and Sarah, who independently reached out, cataloged some or all of the questions I've answered on all the Ask Dr. Hubicki podcasts, and sent me a list. Okay. What are you doing with that list? Thank you very much. So I want to put it into a database so I can search it and basically say, okay, did I answer this type of question already? That way I can filter the questions in the future. That way, we're always getting new content, Rob. Isn't that what the Internet's all about?
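For what it's worth, the searchable question catalog Christian describes could be as simple as a SQLite table plus a keyword query. The schema, column names, and entries below are made up purely for illustration:

```python
import sqlite3

# Hypothetical schema for cataloging previously answered questions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE questions (
    episode TEXT,        -- e.g. 'Summer 2022'
    topic   TEXT,        -- short topic label
    summary TEXT)""")

conn.executemany(
    "INSERT INTO questions VALUES (?, ?, ?)",
    [("Summer 2022", "blackjack", "Math of blackjack basic strategy"),
     ("Summer 2023", "robotics", "Why bipedal robots fall over")])

def already_answered(keyword):
    """Return prior questions whose topic or summary mentions the keyword."""
    cur = conn.execute(
        "SELECT episode, summary FROM questions "
        "WHERE topic LIKE ? OR summary LIKE ?",
        (f"%{keyword}%", f"%{keyword}%"))
    return cur.fetchall()

print(already_answered("blackjack"))
```

A new listener question would then get checked against the catalog before making the cut — new content every time.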
Starting point is 00:35:12 Okay, of course, constantly. All right. Here's a question — a Survivor-related question — from Zach. This is for Christian, longtime fan of the podcast, especially Ask Dr. Hubicki. I was curious what the most mathematically challenging vote possible in Survivor would be. Does it involve a lot of people, a few, or somewhere in between,
Starting point is 00:35:42 and what would be the benefits of a vote like that? Okay. So this, go on. Yeah. Do you feel like you understand what this, like, what this person is asking in terms of, how is it like the most difficult to crunch the probability around? Well, it's the way I interpreted this, Rob, and, and it's our podcast, so I can interpret this however I want,
Starting point is 00:36:06 was like, what vote is the most interesting vote? Like, is it a final 10 vote or a final 13 vote, where you have a ton of people and there could be all these different types of vote breakdowns? Or is it the final five, final four? And I think that intuitively, we know that if there's a final three vote — which, you know, I don't know if they even do anymore, right? — it's not that interesting in general, although Richard Hatch made it pretty interesting in that first season, regardless, with
Starting point is 00:36:36 how he handled the immunity challenge. But so it's interesting, because I was thinking about this, and when they mentioned this, I realized how interesting and fun six votes can be — votes with six people — in modern Survivor. Back in the history of this here show, which I hear has been going on a number of years now. God knows we get a lot of them in the new era.
Starting point is 00:37:02 Yeah, indeed, indeed. We get a lot of these small votes. And that's actually a point I wanted to bring up. But back in the day, you know, I remember there was a discussion about how the final nine vote, or specifically the final seven vote, was always so important because it's an odd number. Therefore, it's easy to flip one person, and therefore it makes a lot of sense that that's often a very dramatic vote. I actually remember, a long time ago, Tyson saying something to the effect that on Blood vs. Water, he knew the final seven vote could be so perilous that he was going to play his idol no matter what at final seven, regardless of the situation, because he expected it to be chaotic.
Starting point is 00:37:42 That's what I remember him saying. I could be wrong — I never asked him — but I'm curious if that's the case. But these days, we get a lot of six-person votes in the new era, because we have six-person starting tribes. And because of the addition of added and lost votes, at a six-person vote you don't even know how many votes are in play. You could effectively be at a seven-vote tribal. You could be at a five-vote tribal. I feel like, to the point where if you were writing a book on Survivor strategy
Starting point is 00:38:15 from some kind of numerical vote-moving perspective, you could write an entire chapter on six-person votes. Yeah — were you going to say something, Rob? I paused for that. No, I think that that would be a compelling book. Yeah, no, at least for me. I mean, for one, you have the classic Cirie three-two-one. I mean, that was when we first started to see that a final six vote could be really interesting,
Starting point is 00:38:38 where you don't even have a majority. Cirie showed that a plurality is all you need in order to get the votes that you need. You flash forward all the way to season 42, when Omar went home: I believe Maryanne had, what, an extra vote at that tribal or something, and she turned it into a seven-vote tribal and managed to snooker a minority vote — a plurality vote. I'm old enough to remember an interesting
Starting point is 00:39:03 final six vote that happened in season six of Survivor. That was a classic one, in the fact that you had someone in the middle, in Christy Smith, who thought she was the power broker, overplayed her hand, and got caught in between alliances of two, right? So it has all of these different voting-shift possibilities, right? I think intuitively you might think that there are more possibilities with a final 13 vote. But I was at a final 13 vote at Survivor: David vs. Goliath. It's just impractical to do really complicated voting shifts. I mean, at the final 12, we did do that, but it was rare. At the final 12, you know, Nick suggested the
Starting point is 00:39:52 minority vote split, and it was brilliant to do so. And it required Davie playing an idol on me to make it work. But the thing that made it work is it only required, like, four people knowing. We didn't even tell Gabby about it. So it only took four people. But to actually swing a whole bunch of people in a massive vote in weird ways, to get weird minority and plurality votes — that's very hard. But at six, like with Cirie, you only need to tell certain things to certain people. Or you only had to, like, leave Christy Smith out of the vote. In fact, I believe you had, like, a deal, which you said to Jenna and Heidi, that if anyone was spotted
Starting point is 00:40:32 talking to Christy, the deal's off, right? Is that how it worked? That was a contingency, yes. Yeah, so, right, that's the contingency. And can you do that when Christy Smith has six other people she could be talking to? Probably not. What, are you going to tell Christy to go sit in a corner? That's not good.
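The plurality math running through this stretch — 3-2-1s, minority splits — is easy to mechanize. A toy tally under deliberately simplified rules (most votes goes home; a tie just gets flagged, since the actual revote procedure varies by season):

```python
from collections import Counter

def tally(votes):
    """votes: list of names written down at tribal.
    Returns (eliminated, is_tie) under simplified Survivor rules:
    plurality goes home; a tie triggers a revote (not modeled here)."""
    counts = Counter(votes)
    (top, top_n), *rest = counts.most_common()
    tied = any(n == top_n for _, n in rest)
    return (None, True) if tied else (top, False)

# A 3-2-1 split: a plurality, not a majority, sends someone home.
print(tally(["C"] * 3 + ["B"] * 2 + ["A"]))   # -> ('C', False)
# A 2-2 deadlock gets flagged for a revote.
print(tally(["A", "A", "B", "B"]))            # -> (None, True)
```

Extra votes, lost votes, and shot-in-the-dark immunities would just change the `votes` list you feed in — which is exactly why the six-person votes discussed here are so swingy.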
Starting point is 00:40:52 Probably not without arousing some suspicion that something was going on. Exactly, exactly. So there's the numerics of what makes a small-numbered vote interesting, like six, because it could be a seven, it could be a six, or it could be a five, for all you know. If it's before the merge, a shot in the dark can be played, right, which changes the numbers. Recently, Jeff has clarified the tiebreak rules that happened in seasons 47 and 48, and they are bananas — as to whether or not you get to vote on a revote, and in what phases of the thing — to the point where you literally should just sit down and create what we would call a state machine, which is sort of an if-then thing.
Or a decision tree — that's a better way to think of it. If-then statements as to what to do when you have these numbers, and why. You really should do that, especially given that if you're on a new-era season, odds are you're going to be at a vote with six people. When I played David vs. Goliath, I never was at a tribal council with fewer than seven people. And when I was at a tribal council with seven people, I went home. So I never got to experience
Starting point is 00:42:00 these nifty small tribal councils, but they seem awesome. But I feel like you could write a whole chapter of a whole book on all the ways that they can be finagled and maneuvered. And we see those even in the post-merge with these split tribal councils, where they often split people apart after the merge into tribes of five and tribes of six. There are all kinds of little maneuvers that could have been used, but were never used. In Survivor 41, Xander had an extra vote at his split tribal of five and theoretically could have forced a tie. And because he had the extra vote, there was actually going to be a discrepancy as to who
Starting point is 00:42:39 would have drawn a rock. And it would have really forced a great situation — maybe for him, maybe not. I'm not sure the reasons why he didn't want to play the extra vote. But we never actually got to see that happen. And that's been on the table multiple times. There's this whole compendium of things you could do
Starting point is 00:42:59 at a final six vote. And that's why I think the final six is, like,
Starting point is 00:42:59 it's probably the coolest vote out there. Hard question to answer, but I think you've made a compelling case. Sure, sure. To each their own, but I think that, you know,
Starting point is 00:43:10 between Cirie and Christy Smith and Maryanne — and some of these wacky, frankly fun, and sometimes confusing new-era tribal councils at the beginning — like, there's a lot there. And if you know the rules — like, seriously, sit down and work out the rules — you could actually
Starting point is 00:43:34 have a legit advantage in one of those early tribal councils. Yeah. Okay. How about a question from Gray? What are your views on gambling? What games do you think you would enjoy most? And why? Okay.
Starting point is 00:43:48 Are you a gambling man? I know you are a master of probabilities. But do you gamble? So on rare occasions, I do gamble. So when I was growing up — you know, when people are kids, there are probably things where they're like, oh, I can't wait until I'm old enough that I'm allowed to do this. Yeah.
Starting point is 00:44:10 I can't wait until I'm old enough to drive. And for me, it was to go to the blackjack table in Las Vegas. Yeah. That was what I wanted to do. I remember when I was, like, nine, our family took a trip to Las Vegas — all the blinking lights — and there was a blackjack table. And I think my dad, or a friend of my dad's, described it like, oh, Chris, there's math to blackjack. I'm like, what?
Starting point is 00:44:36 But yeah, you make decisions as to what cards you hit on, and I thought that was so cool. Yeah. And if you ever go to Vegas as a kid, you know, you have to walk on this little carpeted area, and you're not allowed to get off of it unless you're over, what, 21? You have to be 21 to gamble in Vegas, I think. I forget.
Starting point is 00:44:54 At least for blackjack. And I remember I was so intrigued by the idea of playing blackjack that I would, like, see if I could get away with it. I would take two steps off of the red carpet and get a little closer. And I would take another step and another step. And they would intercept me after, like, four steps. And they broke your heart. Yeah, that's true. That's true. And then I was taken to a back room and shaken down. But yeah.
Starting point is 00:45:11 But that's why, as a kid growing up, I always wanted to do that. And then when I went to grad school, I was like, oh, I'm old enough to go gamble — but, you know, grad student stipends are not really great for cash. But I would go find — I think we talked about this before — I would go and find $5 blackjack tables.
Starting point is 00:45:36 And I have a very disciplined way of going to a blackjack table. I say, I'm going to lose $100 and then I will leave, right? That's what I would do. And for me, it's an exercise in feeling the whims of the probabilities. And I've talked about this before, but I think, maybe as I've gotten older — I am glad that I was not allowed to gamble when I was a wee child, because I feel like I was able to develop some reasonable self-control before I would actually sit down at a blackjack table. Because as much as I enjoyed, you know, just the thrill of gaining and losing the money, I could sense the irrationality creeping in — the feeling of, you know, I've just got to get that money back. Just one more hand.
Starting point is 00:46:20 Just one more hand, right? And experiencing that raw, visceral irrationality is really humbling. And you realize all the ways that you are susceptible to being manipulated by a casino — all these subtle ways. Like, even when you sit down at the table, they'll offer you a drink. Do you want a drink? Do you want a free drink? And I thought I was clever.
Starting point is 00:46:47 I was like, no, I'm not going to drink alcohol at the blackjack table — no, then you make poor decisions. I'll just order tonic water and lime, just to have something to drink. But then I realized the wait staff take a long time to go get you your drink. And I want to leave, and I'm still waiting for my tonic water, and I realize that out of politeness, I'm staying at this table longer than I should. I should leave. It just manipulates me that way.
Starting point is 00:47:14 Especially if it's a cold table. Yeah, exactly. You just want to go. You want to go. And I realized, like, oh my goodness — I thought I was clever enough to outwit the casino, or at least mitigate how they would manipulate me. But still: the house always wins, okay? They do. The people who make money gambling do it in poker, because they take other people's money, right?
Starting point is 00:47:36 Yeah. I mean, I've never really played poker. It sounds like a fun game. I've played it, like, maybe twice in my life. But that's how — you know, then at least you're going head to head with a person, right? But, like, obviously there's card counting. But the casinos are really good at detecting card counters
Starting point is 00:48:02 and their strategies. You can find people on YouTube who go and do this, and it's this whole high-stakes thing that they do. In fact, casinos love spreading the idea that you can count cards well by yourself at a casino, because they know you're not good at it, and they know they're going to take your money anyway. So I think my opinion on gambling is: I enjoy the novelty of it, but I've grown such an appreciation for how easily I can be manipulated by it. And I can only imagine — you know, that's just in a casino. Imagine if I were able to do that with my phone — if I were able to play blackjack on my phone. And you can do that in some countries, like if you VPN into some countries, right? So, you know, I'd caution people about that.
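For the curious, the counting system those YouTube videos usually teach is hi-lo: low cards seen (2 through 6) add one, 7 through 9 are neutral, and tens and aces subtract one, so a positive running count hints that the remaining shoe is rich in high cards. A minimal sketch — and, as Christian says, knowing the count and getting away with using it are very different things:

```python
# Hi-lo card counting values: +1 for 2-6, 0 for 7-9, -1 for 10/J/Q/K/A.
HILO = {**{r: +1 for r in "23456"},
        **{r: 0 for r in "789"},
        **{r: -1 for r in ("10", "J", "Q", "K", "A")}}

def running_count(ranks_seen):
    """Sum the hi-lo value of every card rank seen so far."""
    return sum(HILO[r] for r in ranks_seen)

print(running_count(["2", "5", "K", "9", "A"]))  # -> 0  (+1 +1 -1 +0 -1)
```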
Starting point is 00:48:46 Yeah. I, too, when I was a young person, really fancied the idea of going to the casino. I didn't go to Vegas when I was young, but my parents drove us through Atlantic City, and I got to walk through a casino. I just thought, like, wow, I'm going to do all this. How cool would this be? And I always had a deck of cards when I was a kid, and I did a lot of stuff with playing card games that I saw on television. But as an adult, I have not really often enjoyed playing casino games. It hasn't really been something for me, because I've just found that the losing hurts more than the winning feels good.
Starting point is 00:49:46 It certainly does. I think that my outlook is generally: I really try to prepare myself to be like, okay, I'm going to spend that $100. That's money spent. I'm going to get the entertainment out of it. And if I happen to be up $100, I'll quit — I'm like, oh, I made $100. So it's sort of a low-expectations thing, but really I'm just paying an entertainment tax to the table that way. So I try to view it as little like gambling as possible. But I'll tell you, I do use it.
Starting point is 00:50:08 It makes for fun mathematical exercises. Right. I'll never forget — I had my laptop in college, and I happened to be in a place that had a casino; I went with my parents on a vacation at one point. I was like, you know what? I'm going to make an Excel sheet that computes blackjack basic strategy. I find it inspiring and fun.
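Christian's lose-$100-and-leave discipline makes a fun exercise of its own: a stop-loss changes how sessions end, but not the edge per hand. A rough simulation, treating each hand as a flat even-money bet with an assumed win probability just under 50% (a stand-in for roughly a half-percent house edge under basic strategy, ignoring blackjacks, splits, and doubles):

```python
import random

def session(bankroll=100, bet=5, p_win=0.4975, max_hands=500, rng=None):
    """Play flat-bet hands until the bankroll is gone, doubled, or max_hands.
    p_win = 0.4975 is an assumed stand-in for a ~0.5% house edge."""
    rng = rng or random.Random()
    target = bankroll * 2
    hands = 0
    while 0 < bankroll < target and hands < max_hands:
        bankroll += bet if rng.random() < p_win else -bet
        hands += 1
    return bankroll, hands

rng = random.Random(42)
results = [session(rng=rng) for _ in range(2000)]
avg_final = sum(b for b, _ in results) / len(results)
avg_hands = sum(h for _, h in results) / len(results)
print(f"average ending bankroll: ${avg_final:.2f} after {avg_hands:.0f} hands")
```

The average ending bankroll should come out a bit under the $100 you sat down with: the stop-loss caps the pain per session, but leaves the house's per-hand edge untouched.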
Starting point is 00:50:26 And if you want to see really cool examples of people developing interesting math approaches: some of the people who invented information theory, such as Claude Shannon, tried their hand at predicting roulette. There's a book about some of these folks — it's called Fortune's Formula; I'm not sure if I've mentioned it on a podcast before —
Starting point is 00:50:47 by William Poundstone, and he likes to chronicle stories about math, the development of math, and how it came to be. But Claude Shannon, who invented information theory, more or less — he's seen as the father of information theory — he would go
Starting point is 00:51:04 into a casino and would have a computer that would try to predict after a couple spins of the roulette wheel within a certain range where the ball would land. And I don't know how successful they were. They kept having all kinds of technical problems, but it was interesting. Yeah.
Starting point is 00:51:20 Okay. Maybe that guy can figure out Plinko for us, too. That would be good, but he died, I think, in the 80s. Oh. Okay. Awkward, too soon. But yeah. Okay.
Starting point is 00:51:34 How about this one from a valued listener — maybe somebody we know — who says: I'm taking a trip to San Francisco this fall for a certain reality TV event. Should I trust the Waymo self-driving cars? Oh, yes. And by the way — as you can Google, Claude Shannon died in 2001.
Starting point is 00:51:54 So I want to make sure I got that information out there. Now, did they name Claude after him? You know what? Probably. Actually, I didn't think about it. It probably is named after him. I didn't even think of that. But yeah — so the Waymo cars,
Starting point is 00:52:07 In San Francisco, first off, it sounds like it would be a lovely reality event, whatever that was, you know, whoever put in that question. And so the Waymo cars, they're self-driving cars, they're driving around in a couple of different areas. I know one of them is San Francisco and one of which is, I think, Phoenix, Arizona. I saw them in Austin when I was there for South by Southwest this year. They're there. Okay, they're there now.
Starting point is 00:52:30 And so they operate in very specific areas, and they are overseen by human drivers. These are test deployments — you can kind of think of them as in beta. And this question just came in, so I only got a chance to look it up a little bit. But it seems like the safety data in these very — what we call geo-fenced — environments is pretty good, right? And in part, one thing that's helpful is that somewhere in a building, there is a human overseeing what's happening with the car. So if anything happens — if you end up in a weird situation that the autonomous vehicle doesn't know how to handle — then there's a human ready to take over.
Starting point is 00:53:11 And as a result, I believe that the safety of the vehicles is pretty good. So I'll say it: I would go in a Waymo. I would. But that's different than the discussion I've had in the past with you, Rob, which is that I've been a little bit skeptical as to the scalability of this approach to the entire United States. In our previous conversation, you had said that you feel like the autonomous car is very far away. Yeah.
Starting point is 00:53:46 That's what I feel like, because — I mean, what we call the long tail of probability: there are these little things that are really rare, that are very difficult to train for or anticipate in your algorithms for the robot, and that can be very dangerous, right? But if you're going in a city, you're generally going at a slower speed. While you're more likely to get in a car crash in a city, you're more likely to be seriously hurt on the highway, because the speeds are much higher if you have a crash. So the stakes of a crash are much lower when you are in the city. Obviously, pedestrians are not going to like being hit by cars. Do they not go on the highway?
Starting point is 00:54:25 I don't know exactly what's within the geo-fenced area, but generally they're supposed to be in city areas. They definitely don't go long distances on the highway, right? So they're going to stay in the area; they're not going to go from city to city. So as a result, I would get in a Waymo — and they're generally, I think, respected as one of the better self-driving car companies. But that doesn't necessarily mean this is a scalable solution that you should expect in your city anytime soon. This is a question that I have
Starting point is 00:55:06 definitely been interested in knowing the answer to. Caitlin wants to know: People have talked a lot about the environmental impact of AI use. How would you explain the environmental impact of AI use by the general public versus other internet usage? And how much environmentally taxing AI is integrated into our activities more than people might expect — non-ChatGPT-type activities? And I apologize if I butchered the reading of that question. I think you got the point across — that is great. Basically: why is there a lot of talk about these generative AI models being bad for the environment?
Starting point is 00:55:47 And why is that? What is that discussion about? And that's really what we're getting at. And it really comes down to two major things: one, energy use, and two, water use. Okay. Okay. Energy use. So, all computers, when you're running them, take energy to run. Just like we take energy to think, you know. It's actually something I came to appreciate on Survivor: when I was really, really hungry, it was actually very hard to think, because, I think, in part, I was very energy-starved. I just could not spare the resources. That's a feature, not a bug. Exactly. That's something I really underestimated when I played Survivor: David vs. Goliath. But the thing is, not all compute is equally energy-intensive.
Starting point is 00:56:35 So, for instance, take computations that you might do on a calculator, right? Let's say you wanted to put one plus one into a calculator. When you type one plus one, enter, it's going to send instructions to a little processor that's going to flip lots of little transistors on and off, and each of those little transistor flips takes energy, okay? But it's a tiny, minuscule amount of energy to give you two. If you were to go into ChatGPT — which is this giant AI model that people have probably heard about at this point — and ask, what is one plus one, what happens? Well, your instruction is sent off to a massive data center, someplace hopefully near you. And it gets crammed into a model,
Starting point is 00:57:21 which is basically a piece of code, and it requests what we call an inference, okay? And it's going to do a big data prediction based upon all of its training data and deliver you an answer, which is hopefully two, right? One of those is much, much more efficient than the other, right? And so what is different about these big
Starting point is 00:57:50 AI models. When I say AI, I mean the generative methods. So when you say ChatGPT, the G stands for generative, because it takes an input and it keeps incrementally generating an output until it's told to stop. That's what the G stands for. And the P and T are for pre-trained transformer. So these are each three important components, and the generative part is the key one for this question.
Starting point is 00:58:18 Pre-trained goes to how they actually set up the model inside their laboratories: they pre-train it once, and they rarely retrain it, more or less. And transformer has nothing to do with Michael Bay — it is a specific type of neural network, a particular structure they call a transformer. We talked about it a little bit in one of our previous Ask Dr. Hubickis — I'd have to go into the notes that were sent to me as to which one it was. But basically, it's something that uses what's called attention to see, hey, what should I be paying
Starting point is 00:58:52 attention to in this sentence — this prompt that you sent me, that you typed in? If I said, what is one plus one, the attention will say, oh, the one is important, the plus is important, okay, now the other one is important. Okay. But the generative methods — what makes all of this work, and what has made all of deep learning work, which has really taken off since 2012 — is the fact that you run these computer programs not on the CPU in your computer, but on the GPU, which is called
Starting point is 00:59:22 the graphics processor. Okay. And so if you're a gamer and you're trying to look into buying the new PlayStation or building a new gaming PC, I'm sure you're thinking, geez, what graphics card do I put in my computer, and how many months of my salary will it cost? Okay. And that's because the graphics processor is really important for, well, graphics, right?
Starting point is 00:59:50 And graphics processors are structured differently than the central processor. The central processor is doing a lot of stuff. If you're running, say, Microsoft Word, where you want to type out a sentence, you give it a keystroke and then it goes through the program — your program is probably being run on a central processor. Central processors are really, really fast at doing things over and over again, in sequence: do this, then do that, then do that. They are optimized for that.
Starting point is 01:00:21 They run at these gigahertz frequencies. They're super fast, and they will sometimes cram multiple CPUs — central processors — onto one chip. That's why you have multiple cores, and you might have as many as eight cores on your central processor, right? I don't know how many are on my computer. A graphics processor is a little different: it doesn't have eight CPU-style cores, it has much simpler cores that don't do as much as a CPU core, but it'll have
Starting point is 01:00:51 hundreds or thousands of them. Okay? And what you can do is give similar instructions to all of the cores on that processor, and it'll do them all in parallel and then return the answer. And they'll do it really fast. And where is that useful? Well, on your screen, where your graphics are being presented, you'll have
Starting point is 01:01:15 All of which are going to require very similar calculations. So you would say, hey, I want to know what the color of all my pixels are going to be. And I'll say, OK, I'll run all those in parallel. And I'll get them back. Great. I can display my game. I don't know what game is popular these days.
Starting point is 01:01:31 I can play, you know, Fortnite or whatever the children are playing. Minecraft is perennially popular, right? They're playing — is it Steal a Brainrot? Is that a game? Steal a Brainrot? Yeah, I think it's in Roblox, but they're playing a lot of that. Okay, got it. So I'm behind in my Roblox knowledge, Rob. I'm afraid I'm going to have to learn about this in a couple of years. But the graphics
Starting point is 01:01:57 processor's really good for putting graphics on your screen. But we've learned over the years that it's also really good at handling math problems that we can split up into lots of little problems, right? And back in specifically 2011 or 2012 — I forget which year it was — there was a major breakthrough with this network called AlexNet for identifying images. It was a machine learning challenge: can you train an algorithm to recognize certain types of images? And people had known that you can get neural networks to do this, but it takes a long, long, long time to train these neural networks to give you the answer that you need —
Starting point is 01:02:34 until someone figured out, hey, not only should you use the GPU instead of the CPU to train them — you can use multiple GPUs and keep training them. In fact, you can take a whole bunch of GPUs and train them even better and even faster. And they had a major increase in performance, and that set off the entire deep learning boom, which has now become this ChatGPT-like boom. These ChatGPTs are a certain subset of deep learning we call foundation models — blah, blah, blah, that's not important.
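The CPU-versus-GPU contrast Christian draws — one fast core doing steps in sequence versus many simple cores applying the same instruction across lots of data — is the same idea behind array programming. In this sketch, NumPy's vectorized form stands in for the GPU-style execution model, and the "brightness" formula is just an arbitrary per-pixel computation for illustration:

```python
import numpy as np

# A fake grayscale frame: one brightness value per pixel, in [0, 1].
frame = np.linspace(0.0, 1.0, 1920 * 1080)

def brighten_loop(pixels, gain=1.2):
    """CPU-style: visit one pixel at a time, in sequence."""
    out = []
    for p in pixels:
        out.append(min(p * gain, 1.0))   # clamp to the valid range
    return out

def brighten_vectorized(pixels, gain=1.2):
    """GPU-style: one instruction applied to every element at once
    (on a real GPU, spread across thousands of simple cores)."""
    return np.minimum(pixels * gain, 1.0)

# Same answer, very different execution model.
assert np.allclose(brighten_loop(frame[:1000]), brighten_vectorized(frame[:1000]))
```

Training a neural network is, loosely, millions of these same-instruction-over-lots-of-numbers operations, which is why the jump from CPUs to stacks of GPUs mattered so much.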
Starting point is 01:03:03 But these GPUs, because they are doing so much work in parallel, they put out a ton of heat, a ton of heat. And now you have these massive data centers for things like Grok or ChatGPT or Claude, all over the country, whose only job is to take in queries when you're typing into these things, oh, what is one plus one, take all that data, shove it through this massive series of graphics cards, and give you an answer. And that's a lot of power. How much power is it? I'm glad you asked. Okay. So in terms of power, I was making sure I got the proper estimate. So data centers before deep learning, like in 2012,
Starting point is 01:03:44 took about 1.8% of the U.S. total electrical supply. 1.8%. That's a lot. It's a lot. Now it's 4.4%. More than double. Because back in 2015, it was mostly just, you know, storing things, probably cloud storage and
Starting point is 01:04:06 data storage. Now a lot of that power is running all those GPUs. So it had more than doubled as of 2023. And based on, let me look at my source here, it's now estimated by the U.S. Energy Information Administration that by 2028, it will be 6 to 12% of all energy consumption of the United States to run these data centers. That's a lot. That's a lot of energy.
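For scale, the percentages quoted here can be turned into rough absolute numbers. The 4,000 TWh figure for total annual U.S. electricity use is my own ballpark assumption, not a number from the episode; only the shares are from the conversation.

```python
# Back-of-envelope conversion of the quoted data-center shares.
# Assumption (mine): total U.S. electricity use is ~4,000 TWh per year.
US_TOTAL_TWH = 4_000

share_before = 0.018        # ~1.8% of supply, pre-deep-learning
share_2023 = 0.044          # ~4.4% in 2023
share_2028_lo, share_2028_hi = 0.06, 0.12  # cited EIA projection range

print(f"before: ~{US_TOTAL_TWH * share_before:.0f} TWh/year")
print(f"2023:   ~{US_TOTAL_TWH * share_2023:.0f} TWh/year "
      f"({share_2023 / share_before:.1f}x the earlier share)")
print(f"2028:   ~{US_TOTAL_TWH * share_2028_lo:.0f}"
      f"-{US_TOTAL_TWH * share_2028_hi:.0f} TWh/year projected")
```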
Starting point is 01:04:31 For sure. And it gets so hot, you have to cool it with water. That's a lot of water usage. So that's why there are these environmental concerns. Yeah. And I know it's a very new industry, and so much development has happened in the last couple of years. But shouldn't these technology and AI companies,
Starting point is 01:04:53 especially when we hear about how all of these companies have these valuations and are worth, you know, billions and billions of dollars, why are they not held accountable for offsetting? And I say this as somebody who is generally pro-AI. Shouldn't they have to make sure they are doing no harm, or say, okay, we are doing this, so we also have to have some kind of offset for what we're doing, whether it's financial or in some other way?
Starting point is 01:05:41 Yes. Yes, absolutely. That's my opinion. But as of right now, that is not the case. Yeah. I mean, right now, questions at this energy scale are complicated.
Starting point is 01:05:56 I'm not an expert in the subdomain, right? But it's complicated. That said, there are all kinds of complexities in how the energy market works. There are all kinds of ways that energy is bought and sold in the United States, where we live, such that if one company is using a ton of electricity, it drives up costs for whole households. And that's just energy costs.
Starting point is 01:06:21 And if we're talking about environmental costs, yeah. I mean, I'm not here to talk too much about policy, but that's important. It's just basic fairness: if something is going to have an indirect cost to us, you should pay for that cost instead of externalizing it to us in the long run, right? To me, it seems fair that if I'm typing a ChatGPT prompt to search for something, and it's taking, by some estimates I've seen, three to 30 times as much energy
Starting point is 01:06:59 as just doing a Google search, right? That should be factored into the cost that I'm paying for that service. And I mean, there are all kinds of business reasons that people would want to make it cheap or free for now to grow the business. But right now, this is not a small industry. We're talking something that might take 6 to 12 percent of all electricity usage in the United States. This is not small. This affects all of us. This seems like it would be a winning political issue for either side, or even both, that may want to take on this idea that these AI companies should be doing more. Could they also be devoting a portion of their resources
Starting point is 01:07:42 to using AI to come up with better solutions to these problems? You would think so, Rob, but it is a challenging environment. Sometimes, with things that you think make sense, there are all kinds of obstacles that pop up in your way. And this is not a politics podcast. I agree, that's the after show. You have to be a patron for that, right?
Starting point is 01:08:06 Sorry, no false advertising. There's no show for that. Anyway, but it's true. I mean, even common-sense talk like this is necessarily a political issue. And I'm sure the question is, you know, is there an AI lobby that's, you know, paying people under the table? Like, hey, don't talk about this.
Starting point is 01:08:31 Don't pass these laws. And the answer is probably... Yeah, the answer is there's absolutely a lobby. I mean, that's not controversial. There's absolutely a lobby for it. But here would be the arguments, right? There's lots of political tension with China, and as a result, the United States wants to make sure that we stay ahead in AI. You know, there was the whole
Starting point is 01:08:53 DeepSeek moment, where a Chinese company came out with basically a ChatGPT clone that required a twentieth of the energy. I forget, don't quote me on the number, it was some tiny fraction. Right. But was that real? Did they know for sure? We don't know.
Starting point is 01:09:09 We don't know. But that's what people are reacting to, right? It's less the reality and more the perception that there is an AI arms race, right? And why would we handcuff ourselves in an AI arms race? But the people who end up holding the bag are us, you know, as a consequence, right? So, I mean, it is genuinely fascinating just how much this particular breakthrough, this idea of a foundation model, right, the transformer, the T of ChatGPT, has now set off this bonanza, which people are not sure is going to end the world or is all just a hype balloon.
Starting point is 01:09:51 You know, that's the conversation. Save the world's not on the table? Well, you know, it's somewhere in the middle, I think. Somewhere, maybe. It's just all one big Plinko game. We're dropping the chip. It could end up in the zero. It could end up in the 10,000.
Starting point is 01:10:10 I mean, it's genuinely, like, I'm about to go give a lot of public talks this weekend. So, by the way, I'll be at Dragon Con. If you're listening to this now, that's Labor Day weekend, and I'll be at Dragon Con with a baby in tow, so you'll probably notice me. Yeah, I'll be giving talks. So I'm preparing myself.
Starting point is 01:10:27 Like, how do you properly deliver a message on anything having to do with AI? Now, I am not a sub-specialist on foundation models, but it's the kind of question you get all the time: how does the public make sense of all of this craziness, right? There are environmental impacts in the short, medium, and long term. There are psychological impacts as people use these tools in the near term. A couple months ago, there were stories of people starting their own little cults using large language models, like your ChatGPTs, to talk with them, and they became convinced that they were prophets of a new religion that awakened this AI, to the point where their spouses were like, I don't recognize my spouse anymore, right? There's all kinds of crazy stuff.
Starting point is 01:11:08 And those are things we don't necessarily know how to regulate, which is actually really tough, because how do you regulate that? Do you age-restrict it? It's a challenge. But one of the easier questions is the one you're talking about, Rob: if there is this big literal cost in energy, with an environmental impact that can be estimated, who should pay for that? And I think that's one of the simpler questions that could be answered. Yeah.
Starting point is 01:11:35 Okay. So just to answer the question, and you've explained it really eloquently, to answer Caitlin's question: I guess the question that I know I've asked myself is, if I use this, how much am I personally contributing to the problem? Yeah, I mean, think about it this way. I tried to come up with some good estimates, because it's hard to say, oh, it takes one kilojoule to do a Google search versus 10 kilojoules to do a ChatGPT query. I have a few references I can post in the chat afterwards,
Starting point is 01:12:17 which is where I get some of these numbers from. But think of it this way. In terms of energy, yes, if you use it, you should understand that you are using a lot of energy, right? As you would with any decision you make in your life. I think that's one way to look at it. There are other people who argue there are moral hazards
Starting point is 01:12:36 with using these generative models, like, you know, intellectual property. There's too much to talk about; that's one of the challenges. But from an energy standpoint, every time you do a prompt where you just get a text response, imagine that you drained, like, half of your phone battery to do that; that's roughly the amount. Let me try to pull up my notes.
Starting point is 01:13:02 I did a little quick calculation. So maybe it was somewhere between a tenth and half of your phone battery to run that on a computer somewhere. That's one way to think of it, right? Maybe it's just a way to think of it tangibly. And I'll post a correction on social media once I get the actual number. But it was some non-trivial fraction of your own battery. Yeah, I have done a little bit of learning about all of this.
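The phone-battery framing above can be sanity-checked against the "three to 30 times a Google search" range mentioned earlier in the conversation. The specific figures here, roughly 0.3 Wh per web search and a 15 Wh phone battery, are my own illustrative assumptions, not numbers from the episode.

```python
# Rough check of the "fraction of a phone battery per prompt" framing.
# Assumptions (mine, for illustration): ~0.3 Wh per web search and a
# ~15 Wh smartphone battery; the 3x-30x multiplier is from the episode.
SEARCH_WH = 0.3
PHONE_BATTERY_WH = 15.0

for multiplier in (3, 30):
    prompt_wh = SEARCH_WH * multiplier
    fraction = prompt_wh / PHONE_BATTERY_WH
    print(f"{multiplier:>2}x a search: {prompt_wh:.1f} Wh "
          f"~ {fraction:.0%} of a phone battery")
```

Under these assumptions the range comes out to roughly 6% to 60% of a battery per prompt, which brackets the "tenth to half" ballpark given above.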
Starting point is 01:13:29 And, you know, certainly the generative AI is an issue. But also these big data centers, for everything that we're doing on the internet and everything that's being stored everywhere: you have these huge data centers that consume mass quantities of energy and need to be cooled. Yeah, yeah. And it is a lot, but the new thing that's actually really driving it up is the fact that you have these GPUs that are so energy intensive. That's what's turning this from being, back before they existed, kind of a flat two-ish percent for a number of years. Then, when people started building these things for these
Starting point is 01:14:16 large models, it just started to exponentially take off. That is the key new difference maker that's making this take so much energy. Not to go in a whole different direction, but could the use of nuclear power be something that solves some of these issues? Oh, that's a real hot topic. My answer is yes. I actually feel so prepared today, Rob. So, the energy landscape in the United States has changed a lot in the last 20 years or so. I remember the statistic used to be that something close to 40, 50% of our electrical grid was powered by coal. And coal is dirty, dirty to burn, right?
Starting point is 01:14:57 Now, nowadays, coal is actually down at 16%. And it's been replaced almost entirely, like all that replacement is, almost entirely like natural gas. So we now, so right now, as of, as of a couple months ago, 42% of our powers by natural gas, 18%, 18% is nuclear. Coal is 16, wind is 12, solar is six, hydro is six. So currently, if you add up natural gas and coal, which are both burning fossil fuels, that's almost 60% of our power grade in the United States is powered by burning fossil fuels, carbon emissions, right? And they are baseline power producers, right? And so, meaning rain or shine, they will produce roughly the same amount of power, right? So they're reliable,
Starting point is 01:15:42 but they put carbon into the atmosphere, right? Nuclear doesn't, and it's only 18%. So you could say, hey, if you want less carbon output, you could replace that baseload of coal and natural gas with nuclear. But nuclear is expensive, and I'm not going to get into the whole risks of nuclear disaster. I have my own opinions about it; I'm generally, personally, kind of pro-nuclear. I haven't come prepared to back up that argument today, but you could. But the thing is, hydro and solar and wind, you can build up, build up, build up, and they are cheaper. They just need to be supplemented by some baseline load, because there are times when the wind doesn't blow and the sun doesn't shine.
Starting point is 01:16:25 And the sun doesn't always shine everywhere, right? So no one I've seen, at least I'm not familiar with anyone, argues that we should do all solar and all wind, because when the wind stops blowing and the sun stops shining, you don't have that power. You'd have to store it in a battery, and as a result, you'd have to overbuild to make sure you're producing so much that you could then redistribute it later, right? So you need some baseline load. And what could that baseline load be? Hydroelectric is a baseline load, but you can only build hydroelectric power plants in certain places, where there are waterfalls
Starting point is 01:16:58 If you want to get rid of carbon, right? That's the argument for having a portfolio of nuclear power. Now, how much do you need? That's one thing, but we are definitely underbuilt in terms of solar and wind. We can definitely build a lot more of that. That's cheap, gets us energy for a lot of time when we need it. But anyway, I didn't come prepared for that discussion today. Well, I guess a little bit I was.
Starting point is 01:17:21 but that's a but that's that's that's generally speaking kind of like the arguments over like okay where does baseline load come from how much wind solar do you build we definitely can build a lot more wind than solar okay Randall Meyer asked a question why do cold things feel wet sometimes I never have thought about this do yeah it's and yeah go on cold things feel wet some of the I mean I think that maybe because they have ice on them Um, it's, it's specifically like if they are cold, uh, yeah. So I mean, I think as I was thinking about this, and Brando always asked me questions that get me to think about them sometimes, like, like, not only in terms of what is the answer, but where does the question come from? And I mean that in a good way, because like, there are times you grab a cold mug, you know, even though you poured no water outside of the mug, it feels wet, right?
Starting point is 01:18:17 Yeah. That's from condensation, okay? And I think the simple way to answer this is, well, it's condensation, right? And then to explain: what is condensation? Because people will notice, you know, I have a little condensation. I have a mug right here. I love iced coffee. So I feel it right now.
Starting point is 01:18:33 I just had some myself. Yeah, I'm an iced coffee guy. Even in the winter, I'm an iced coffee guy. So I feel a little bit of glistening moisture. You can kind of see it sort of glistening in the video right there, right? And I'm not sure it's perfectly clear to everyone why cold things get moisture on them. What causes condensation? Well, we generally know that when gas gets cold enough,
Starting point is 01:19:01 it becomes a liquid. And when liquid gets cold enough, it becomes a solid, like ice, right? But, you know, we're not so cold here that we're going to freeze the liquid in the air. I mean, we have all this liquid here. Why does it condense into a liquid from the air? Anyway, sorry. Well, basically, let's just say there's water in the air, right? Okay.
Starting point is 01:19:26 You have water in the air. It's a vapor. There's water vapor going around. And basically, those little vapor molecules are buffeting around. They're randomly moving, with what we would call Brownian motion, kind of bumping around. And how fast they are moving around, on average, is its temperature. The temperature is the average, what we call kinetic energy, the average speed, if you will, more or less, of how things are moving around, right? On average, that means that some of these
Starting point is 01:19:54 particles are moving faster than others, right? Some are moving fast enough that they will stay vapor in the air, but some are moving slowly enough that when they get close to a cold substance, they will stick and condense onto the surface, right? So instead of the gaseous form, they take the liquid form on the side of something cold. And it happens more on something that's cold because it's going to cool down the air in the local area, so on average, more of these particles are going to be slower, and then they're going to become liquid. Okay.
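The "some molecules are slower than average" argument above can be illustrated with a toy simulation. Everything here, the units, the sticking threshold, and the temperature values, is an arbitrary assumption of mine, purely to show the trend: cooler air means a larger slow fraction.

```python
import math
import random

# Toy Maxwell-Boltzmann-style picture: each molecule's velocity
# components are Gaussian with variance proportional to temperature.
# "threshold" is an arbitrary sticking speed in arbitrary units.
def slow_fraction(temperature, threshold=1.0, n=50_000, seed=42):
    rng = random.Random(seed)
    sigma = math.sqrt(temperature)
    slow = 0
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, sigma) for _ in range(3))
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        slow += speed < threshold
    return slow / n

for temp in (2.0, 1.0, 0.5):  # warm room air -> chilled air near the mug
    print(f"T = {temp}: {slow_fraction(temp):.1%} of molecules "
          "below the sticking threshold")
```

The lower the "temperature," the larger the fraction of molecules below the threshold, which is the condensation story in the conversation.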
Starting point is 01:20:27 So because some particles are moving faster than others, the slower ones are going to condense. And that's why something cold is going to collect liquid and become condensation. That's sort of the first-cut answer to Brando's question. Now, that could be all he means by it, but I was trying to search around for what other people said about this question. Do things that don't even have condensation on them feel wet to some people? Sometimes they say it feels clammy. And I don't know if that's true or not. I'm not sure I'd correlate that, but some people say that. Like a handshake. And I think it's like a cold handshake, a cold, clammy handshake.
Starting point is 01:21:05 But one thing I saw people arguing about, and I was starting to look into a little bit of the literature on it, though I didn't get super deep: even if there isn't condensation, does it still feel cold? And if there isn't condensation, does the cold still feel wet? Some people say it does. And the idea would be that we tend to, in our brains,
Starting point is 01:21:26 when we touch something wet, we feel cold. We feel like, if you put your finger in water and you feel the ring of water around your finger, you know, it feels cold. So maybe when you feel cold, you also feel wet things, unless it's particularly dry. I was only looking into this a little bit, but I thought that there's another sort of deeper cut to Brando's question,
Starting point is 01:21:49 which is: is there a broader phenomenon to people saying that cold things feel wet than just the literal wetness you get from condensation? I don't have a perfectly great answer to that, but that's something I was thinking about. Here's a question from Sam Wilson, who says: with 3D printing gaining major advances in the last couple of years, and I didn't even know about that, have any of your projects begun to utilize 3D printed parts as an alternative to other materials?
Starting point is 01:22:19 All the time, all the time. I saw this question, and it's super relevant to a project I'm on right now. Largely, my specialty is in coding robots, you know, the control side in particular. However, we also do some design work. And I have a colleague, a friend in the department, who is also an expert in robot design, so I work with him a lot. And we got a pretty big contract to make a new robot prototype, and it's on a very tight timeline.
Starting point is 01:22:50 It was actually one of the bigger things I was worried about when I was leaving on an extended vacation in June, whether or not this project would keep going, because we had tight timelines. This company was basically contracting us as almost a little R&D division to make this new kind of robot. Wait, can I talk about this? Yeah, the patents are already out. So it's a four-legged robot that has, imagine, wheels on each of the feet. And it's more complicated than that. So imagine, this is like your leg above the knee,
Starting point is 01:23:25 and then this would be your shank beneath the knee. Okay. But, I'm sorry, this is probably boring for your podcast audience. The idea is we would offset it, and this whole thing would tumble and spin and have wheels on the end. So this thing tumbles over its shins to go over rough terrain, and the wheels spin. We've already made a couple prototypes. Now we're making a full-scale prototype, something that's big enough to just fit through a door
Starting point is 01:23:48 and carry large, heavy loads while going upstairs. And we are constantly having discussions of, okay, what do we machine out of metal versus what do we print? And for more and more things, we're like, how do we catch up? We're behind. What do we do? Oh, we'll just print it. And it used to be that these printed materials, you know, were going to be weaker than your aluminum, which you typically
Starting point is 01:24:12 make robots out of, because aluminum is both strong and light and easy to machine. But nowadays, it's like, just 3D print this thing. In fact, we'll do a couple things. One of my engineers said we have an approach called Build It on the Bench: they'll use a 3D printer to print out the parts that you would otherwise machine, just to make a prototype you can assemble to make sure everything fits correctly,
Starting point is 01:24:46 and like that's and so we're doing that that process part of the process is almost done and then we're like man we can get away with 3D printing so many things um even sometimes we'll pretty print the gears which is kind of i thought that that was crazy because gears they have a lot of load they are like like gears when they mesh together matching like your comical like spur gears there's a lot of load you put on them. But there are ways you can 3D print special gear designs that can handle the load better and you can 3D print them way cheaper.
Starting point is 01:25:15 Otherwise, you have to make them out of hardened steel. It's harder to do that at bigger scales, but my point being, I was shocked at how much of this design we were able to 3D print. And it's very impressive. So it is a game changer in your ability to not only prototype a design, but also have a functional
Starting point is 01:25:36 design at the end of the day. For the most part, when you have a full line of these robots, which is the intention of this robot we're working on, we'll see if that happens. I'm not a business guy. We'll see if they're able to sell these robots. At the end of the day, you'd mass-produce these parts by injection molding and other such processes instead of 3D printing, which is slower and a little more expensive. But 3D printing is becoming a bigger and bigger part of my job all the time. Christian, do you do any 3D printing in your personal life,
Starting point is 01:25:58 But 3D printing is becoming bigger and bigger part of my job all the time. Christian, do you do any 3D printing in your personal? outside of work? I, I, in theory, I do. I have a, uh, three, yeah, I, that, that, I'm still assembling my printer. I'm looking at it right now, sad in the corner. I tried to get it, I tried to get it assembled before the baby was born and I failed. Uh, so I did, so I got a fine time to do it because Emily loves 3D printing.
Starting point is 01:26:26 I mean, like, Emily actually was the one did the most 3D printing in our, in our lives because she works in a lab that teaches 3D, manufacturing and 3D printing. Yeah. so I have like thought about it sounds cool I know my kids would want it because they want to make like a million toys I know Tyson has one and he says that you know his girls are always like making things and putting things together last thing I need is a bunch of more crap around here on the floor so it's not that enticing to me but I was just wondering if there was some if there was some personal use that I could have for it I think I'd be a little bit more interested in one yeah I mean it's still obvious device right it's not an essential household product yet, right? But like, if you're into designing these things, and it's easier than ever to pop on design, I mean, frankly, you just download these things. I mean, it's like the Survivor players, they just download, you know, the online.
Starting point is 01:27:19 You can download the puzzles and 3D print them and things like that. You don't have to know anything about CAD, necessarily, to just make it. Yeah. So there's a very low barrier to entry. I would say just buy the thing assembled, unless you're really committed to assembling it yourself like I was. That was a horrible mistake. I was like, oh, how long could it take, two hours? And no, it's like a five, six hour process. And this is my version of manifestation: I am looking into getting some kind of, I'd like to have some sort of
Starting point is 01:27:48 3D printed logo on the wall over here. I've asked my wife to assist me in figuring out how to do this, and she doesn't seem that interested. Sometimes I ask her, like, hey, here's a project, and she's like, okay, sounds good, and then I never hear about it again. But if there are some listeners who know how to do that... You definitely have listeners who know how to do that. I mean, what you'd want to do, well, frankly, if you wanted to, for just this one project, you could farm it out to another company to do it for you. But if you're interested in just getting your hands dirty... No, I'm much more into finding somebody who knows how to do it and trading money to them to get something respectable
Starting point is 01:28:33 made, rather than making it myself. There are definitely shops that will do that for you. In fact, there are shops where you can even give them an idea, and they will design it and everything they need into a 3D model. Okay. All right. Andre Costa says: do you think the Turing test is still relevant in today's AI? Can you first set it up? What is the Turing test?
Starting point is 01:28:54 What is the Turing test? So the Turing test, it's kind of a staple of AI, so of AI science fiction, if you will. The idea is how it addresses the question is, how do you know if a computer is intelligent versus not intelligent? And there are, it creates all kinds of debates. But one of the simpler tests that people that have devised one is called the Turing test is the idea that if you chatted with a chatbot and if you were a person, would you confuse it for a person, another person. And so there actually have been competitions,
Starting point is 01:29:34 such as the Liebner Prize, L-O-E-B-N-E-R prize, for many years, which were for chatbots trying to basically work with a bunch of people and have the people vote as to which, as to whether or not they were talking with a person or talking with a bot. In the hands of the people. In the hands of the people, right?
Starting point is 01:29:54 And that was the literal challenge of it, right? Now, is that still relevant? I would say almost certainly not anymore. For one, the Loebner Prize has been defunct for about five years, and that was before ChatGPT was released to the public in 2022. I think it's very clearly hard for people to tell, in a normal conversation, whether some of these large language models are people or not. There are all kinds of studies. And yet it has not changed the conversation at all as to whether
Starting point is 01:30:26 or not these things are truly artificially generally intelligent, what would be AGI, artificial general intelligence, where we're talking to the mind of an actual human-like entity, right? So the thinking has really shifted against the idea of the Turing test being a good test of whether or not something is intelligent. Which is kind of a shame, because there's a nostalgia that I have for movies that are based on the Turing test. A big one was Ex Machina, which was, who was it, Alexander Garland?
Starting point is 01:31:01 Is that the name of the actor? Alex Garland, is that the name of the director? He directed that Civil War movie. I'm trying to remember what it's called. But Ex Machina, and, Civil War. It's called Civil War. I think it's called Civil War.
Starting point is 01:31:14 But back in 2014, I think, Ex Machina came out. It's a beautiful movie, a scary movie built on a sort of scary science fiction concept, where I think Domhnall Gleeson is in a house with, who played Poe Dameron in Star Wars? Who's that guy? Oscar Isaac. Oscar Isaac. Sorry. So Oscar Isaac is like a tech company CEO, and they have a humanoid robot, and the question is, is it intelligent or not? And so the idea is that Domhnall Gleeson is going to test this robot to see if it's intelligent. That's his job. And it creates
Starting point is 01:31:50 this whole scenario. It's a really interesting movie. Yeah. But this is an open question now: how do you know if something's intelligent? And there's a paper that's very highly cited, by Francois Chollet, C-H-O-L-L-E-T, called On the Measure of Intelligence, which he put out a few years ago. And almost the preface of the paper is: the Turing test is lame. We don't like it. How do we measure whether something is intelligent? It's hard to do with a test like this. And his argument was that a test or benchmark of whether something is intelligent really only makes sense if the test is something that we actually want it to be doing in the actual setting,
Starting point is 01:32:36 right? Is a chess bot intelligent? Well, if it kicks our butt in chess, the answer is yes, it is intelligent, but only in the context of it being a chess bot. If beating someone in chess were a marker of being generally intelligent, it would be a poor marker. Beating someone in chess is a poor marker of general intelligence, you know, since Garry Kasparov lost to a chess machine, Deep Blue. Deep Blue, yeah, that sounds right. I'm going off the rails here.
Starting point is 01:33:05 yeah uh decades ago right and clearly we have not solved this problem yet so um but there's a whole paper uh uh by francois chalet on this but it definitely does not seem like the touring test is particularly relevant anymore okay and the turing test is not like it's not like a specific thing like the Bechdel test, right? It's not like that it's like one simple thing that it's just like the idea of a test is the Turing test. Yeah, exactly.
Starting point is 01:33:36 And some people would argue the Turing test as an idea itself wasn't even necessarily meant to be taken that seriously. It was an idea of a way that we could measure whether something was intelligent. It is hard to evaluate. And I'm not sure to what degree people have taken it truly seriously as a way of knowing,
Starting point is 01:33:53 because something that quacks like a duck and looks like a duck does not actually mean it's a duck. Yeah. Okay. Let's take another question. And this is an interesting one from an anonymous listener who wants to know: I'm curious if Christian can give his thoughts on Edgic.
Starting point is 01:34:10 I know he has likened it to confirmation bias in the past, but is there some sort of science to it? I would love to hear Christian's perspective. Well, it does combine the art of editing and the science of logic by definition. It's literally in the name. It's literally in the name. So this question is correct: on occasional Twitter or whatever posts, I have been somewhat dismissive of the project of trying to predict specifically what the outcome of a season of Survivor or a reality show is going to be based upon
Starting point is 01:34:53 the edit that you're watching, right? Now, I should clarify what I mean by this, because I think every one of us who's watched enough reality TV has watched a show and we're like, we're getting this content of this person. Why are we getting this content?
Starting point is 01:35:08 It's because they win, right? Look, look at this. Or like, oh, they must go really far. Winner's edit, they like to say. The winner's edit. And I think even when I was, I remember watching Survivor: The Australian Outback back in 2001.
Starting point is 01:35:23 And I remember thinking, and I happened, my high school biology teacher happened to be the college roommate of Amber Brkich's sister. And so she was telling us. Your high school biology teacher happened to be the college roommate of Amber Brkich's sister. Correct. Yes. And I was, what does that make us? Absolutely nothing, which is what this conversation is about to be. I do know some movie references.
Starting point is 01:35:55 Yeah. So, but Survivor was such a huge deal that that was a big deal at the time. I had a weird connection to someone who was on Survivor. It was that big of a deal. And I remember, so I was watching Survivor so intently. I'm like, oh, how's Amber going to do? How's Amber going to do? And I remember, I was 14, 15 years old then.
Starting point is 01:36:14 And I remember watching, it's like, oh, Amber's not getting a lot of screen time. Maybe that means she doesn't win. Maybe that means she goes home soon, or maybe in a little bit. I remember I talked and told my friends this hypothesis that I had, and they're like, Christian, you're crazy. Like, how much they show someone matters
Starting point is 01:36:32 as to how well they do in the game? Well, now that's conventional wisdom in the Edgic world, right? What's your confessional count? They measure it, right? They measure how much time you're talking on the screen now. Yeah, do they actually measure
Starting point is 01:36:45 like the literal screen time these days? Is that a thing that people do? Yes. No! So I need to catch up on my Edgic literature. So, look, sometimes you're having character moments that aren't in confessionals. You know, so this happens. Now, all this is fine to do, to catalog. Look, to each their own.
Starting point is 01:37:05 You want to measure whatever's going on in the show and correlate it to how people do? That's awesome. That's great. I think that what often happens, in my experience, from being on the show and seeing how the Edgic people talk about it, at times made me, like, impressed, but sometimes maybe cynical about the whole enterprise. And so I think that what I would need to see is something very systematic. Because just because you apply numbers to something does not necessarily make it a science.
Starting point is 01:37:36 And I mean, there are all kinds of enterprises that you could investigate that you don't then say, oh, this is a science. Like phrenology is the study of bumps on people's heads determining whether or not you have a disease, right? You could do a scientific study as to whether or not that's the case. Good news, breaking news: it's not the case, right? But the fact that you did a scientific study on phrenology does not make phrenology a science. And it doesn't mean that somebody you spot with a very bumpy head doesn't have something wrong with their head. Like, you know. Correct. Yeah, exactly. So there could be something to
Starting point is 01:38:16 it in an individual case, right? And there's not supposed to be that many bumps. It's way too bumpy. I think there was a story where someone was on a news program, and someone saw they actually had a bump on their neck, and, like, a doctor said that actually is a disease, and it actually was. But, you know, that's a different story. But my point, the way that you would turn it into a science is: what do you need? You need a coherent theoretical framework, right? You need a theory.
Starting point is 01:38:44 And that doesn't necessarily mean a guess. That means that you have a system that has been hypothesis-tested and established, that says, okay, here is our established theory of how you would predict placements or outcomes of Survivor. And it can have limitations, right? It doesn't have to be perfect. And there are ways to do this. I mean, there are scientific theories associated with psychology
Starting point is 01:39:08 and all kinds of other sciences, right? And there are some people who have systematically investigated the edit to try to predict an outcome. I mean, the example I constantly go back to is Dr. Amanda Rubinowitz, Angie Kahnz, and Sean Falconer, who all had a system, I think in the late 2010s, for predicting the outcome of Survivor based upon certain character traits that are observed in the course of an episode, right? They put their money where their mouth is and put together a machine learning model, a clustering algorithm, that determined the
Starting point is 01:39:42 likelihood of winning based upon features in the edit. And they actually polled the audience, they pulled together, I guess, RHAP listeners, to watch an episode and list character traits exhibited by certain players in the season, and had a rolling prediction from episode to episode of winner probability. Now, they predicted I would win hands down, so clearly it's not perfect. But at least that's a system, right? Now, I think when I talked with Sean, he was like, he'd given up on it. He thought it was so powerful that it spoiled the show for him.
Starting point is 01:40:17 Wow. That's how he felt, but he's also, like, he's a great guy. I'm less convinced of that, you know, maybe also because it predicted I win and I did not. But, like, that's a model, right? And so I think what I would need to see is like, okay, what are some established features of an edit that we can use for prediction? Then I start to look at it as a science. Is it confessional count? How predictive of a winner is that?
Starting point is 01:40:46 Like, give me a number. What's the forecast? What's the forecast probability? What are the features? Have an actual science that you then validate or invalidate. What you instead get, and this is when I start talking about confirmation bias, is when I would see things like, well, this person needs to have a good character moment by episode five or else they're unlikely to win.
Starting point is 01:41:08 I'm like, okay, how are you sure you're not cherry-picking a particular feature that you've noticed among many sets of features, and as a result it's not predictive? That's my take. Yeah. I'm more with you on this one. I think that there is sort of, like, you know, a bare minimum
Starting point is 01:41:31 of screen time that somebody has to have. Like, I think that what ends up happening is, if somebody is just, like, a complete non-entity on the show, I do think that you can rule them out: this person is not winning. And I think that that ends up happening more often. And I feel like it's a little bit more of, like, uh, the definition of
Starting point is 01:41:51 pornography: you know it when you see it. And I think that there's not necessarily, like, a certain number of confessionals. But there's a lot of times, even when we're breaking down the season, when you get to the final seven and it's like, okay, this person's not winning, this person's not winning. And it's like, why? It's that we would see more from them if they were the winner. Sure. And I think there's a lot to it. And the core hypothesis of Edgic is not crazy. I mean, the core hypothesis is that the edit encodes information, because the editors know the outcome, right? And they want to present a story that's going to be interesting or satisfying to the audience, which involves the winner. So all this makes sense from a plausibility standpoint. But there are so many things in science in general that make sense from a plausibility standpoint that just don't pan out.
Starting point is 01:42:47 And so I think that, and this is something I have not done a deep dive into, the modern state of Edgic, in the lead-up to this, to be clear. Yeah. So I think I saw recently there were, geez, I'm going to screw up. I'm not going to say his name because I forgot his name off the top of my head. But there are people who do data analyses. Ever since Sean stopped doing it, other people have started doing data analyses from episode to episode, where they're doing forecasting predictions of winners, right?
Starting point is 01:43:14 I haven't checked up on the current state of that. So there are some people who could be doing this. And I think that one core question that could be asked is like, hey, what is the minimum threshold of airtime before someone's not considered a winner? What are those probabilities? They can put numbers to what you're saying and we can actually test it. Okay. Stephanie says, hi, Christian, big fan of yours.
Starting point is 01:43:38 I was wondering who your favorite singer is. Personally, mine is Bob Dylan. And also, how do I make a robot? Okay, we probably don't have time to get into that. But let's talk about your favorite singer. So this is a challenging question for me. You would hope it would be simple. I like singers to have really great range, a vocal range,
Starting point is 01:44:07 the different registers. But mostly I listen to instrumental music. And I think, partly for reasons, I grew up listening in my room. I had a little radio, and it would play music from the 70s and 80s. I think it was a radio station called 104.3,
Starting point is 01:44:26 The Colt, the Colt, in Baltimore, Maryland. You're listening to 104.3, The Colt. The 70s and the 80s. This is my question I have for you, Rob, because you are famous for your Casey Kasem impressions. Yeah. Clearly, you grew up listening to radio programs, you know, top 10 countdowns or whatever.
Starting point is 01:44:47 Top 40, right? And Top 40 is what they call it. Like, I didn't even know what they were called. Like, I just had this station on. And so, like, as a result, I would hear music. A lot of it had, like, big instrumental riffs from that era. But then there would be music, like, music I would hear, and I'd like it. But I would have no idea what the song was called.
Starting point is 01:45:03 I would have no idea who the artist or the band was. It's just, like, in my head, but I could not tell you, because I'm really bad at understanding lyrics as well. And this is, like, in the late 90s, early aughts. I wouldn't even know how to find a song that I heard. And so, like, I didn't really develop a favorite singer. I think that someone like that Panic! at the Disco guy, like, when I hear a song by him, I think, I had to look up his name. I didn't even know his name.
Starting point is 01:45:31 It's Brendon Urie, like, that guy. Like, when I listen to him sing, I'm like, wow, what a great range. I stop and I like it. But really, this question highlighted to me how bad I am at listening to songs with singers. And part of it is that I don't understand lyrics well. And I know there are people out there like me. And I was searching why. And I came across this phrase: it's called a mondegreen.
Starting point is 01:45:57 Has anyone heard the phrase mondegreen before? No. I had never heard of this. So, you ever listen to a song where you swear you know the lyrics to it, but then you realize that you've been hearing it wrong for years? Okay. Yeah. It's called a mondegreen.
Starting point is 01:46:11 Like, there's that Taylor Swift song that people thought was talking about lonely Starbucks lovers. I remember, it's "got a long list of ex-lovers" is what she says. Oh. But because of the emphasis, it makes it sound like "lonely Starbucks lovers." I'd have to ask Charlie about this. I'm not a Swiftologist. But, like, my life is riddled with these. Like, I would hear a song on the radio.
Starting point is 01:46:40 I would have no idea; I'd sing it, but they would never introduce it. I did not listen to Casey Kasem, who apparently would tell you what the song was. But there's a song I only now know is, like, Destiny's Child, like, ladies leave your men at home. Ladies, leave your man at home. The club is full of ballers and their pockets full grown. All you fellas leave your girl with a friend, because it's 11:30 and the club is jumpin', jumpin'. So I had heard this for years as: ladies leave your men at home because we booked the parlor
Starting point is 01:47:16 and we have the whole room. We booked the parlor and we have the whole room. Okay. And all the fellas leave your girl with your friends, because it's 11:30 and the clock is jumpin', jumpin'. I figured that they had booked this room until a certain time of night and they were almost out of time. So, like, you got to get there.
Starting point is 01:47:35 And I completely misinterpreted this song. And people have different versions of this. Like, one thing is Tiny Dancer. Like, some people hear "hold me closer, tiny dancer" as "hold me closer, Tony Danza." That's a very common one. So I never, well, I realized how many mondegreens I had in my life. And it's embarrassing when I tell people what I think these lyrics are. So anyway, that's the M-O-N-D-E-G-R-E-E-N.
Starting point is 01:48:06 And so, in case you're curious, that's a phrase that exists. Yeah, I feel like that would be a good one for Rob and Akiva to get into: your favorite misheard song lyrics. Correct, correct. And that's a famous one. And so there are all kinds of famous ones. The lonely Starbucks lovers one, I think I remember very vividly, and relatively recently, I think 10 years ago. Yeah. Okay.
Starting point is 01:48:31 I think that would be fun to explore. Yeah, I'd say so. Okay. All right. How about, by the way, I just want to recommend to people the Billy Joel documentary that's on HBO Max. Oh. It's about five and a half hours, if not longer. Oh, my father's a big, big Billy Joel fan.
Starting point is 01:48:51 I'm not sure he knows about it. I'll let him know. Yeah. An extensive biography of Billy Joel's life. Good to know. And a lot of, like, discussion around what did this song mean, what did this song mean. Like, the songs that you've heard, like, your whole life, it's like, actually this was, like, word for word, verbatim, a fight that I was having with my wife.
Starting point is 01:49:15 Could you imagine? Because I could be the greatest songwriter of a generation, because I have so much material that I could turn into songs. And could you imagine? I could be spinning straw into gold over here. I'd be like Rumpelstiltskin. You would, you would. Anything, just presented dryly, people would interpret it. If I could just make the melodies, I could turn that, you know, you said that
Starting point is 01:49:50 guy down the street was an a-hole, and I asked you why, and you said because he is, and you couldn't give me a reason. Like, if I could just get the melody behind that. You got the verse-chorus. You need the verse, chorus, verse structure. You got to figure out what the chorus is. I think then you're in business. And these days, people find choruses out of anything. I mean, I remember there used to be, like, the fad of auto-tuning the news. Like, I think The Unbreakable Kimmy Schmidt actually
Starting point is 01:50:21 turned it into, like, their theme song. Yeah. It's like the Auto-Tune the News thing. One example that always hit me was Carly Rae Jepsen's Call Me Maybe. Yep. I always heard: this is crazy, I just met you,
Starting point is 01:50:35 this is crazy, here's my number, call me maybe, all these other boys. The actual lyric is "try to chase me." Was I hearing "try to date me"? I mean, I'm still wrong. I don't know.
Starting point is 01:50:47 I heard "try to change me." I'm like, "trying to change me"? She's talking about, you know, people trying to change her from who she is. She's reasserting her identity. You think it's a Rorschach test? And, like, with Taylor Swift, you know, I won't have to explore, you know,
Starting point is 01:51:07 my connections to her family. So what's the lyric that you heard? All these Starbucks... All the lonely Starbucks lovers. Lonely Starbucks lovers. Like, does that speak to you at all as, like, a coffee drinker? I know you're an iced coffee drinker. Do you love Starbucks?
Starting point is 01:51:27 I feel like I'm on the psychiatry couch right now. It's like, what does this speak to me? It's like, I do love my iced coffee. And maybe I am a little, some days, Rob, I just need to get away from the lab and I'll just go to a coffee shop, and I'm just a lonely, I'm melancholy in some way. You're a lonely Starbucks lover.
Starting point is 01:51:43 Yeah. Yeah, so this is, you know, Starbucks getting this ad placement for free. How about that? But I heard "try to change me" for Carly Rae Jepsen. And then I saw the music video, and literally it's just, like, some hot guy mowing a lawn.
Starting point is 01:51:57 And she's like, I think you're hot. I want to date you. And that's, like, the story. Mm-hmm. Yeah. Yeah. So I misunderstood. I think that when I was a kid, I Shot the Sheriff,
Starting point is 01:52:08 I think I used to think was, uh, "I shot the sherry." I thought it was "I shot the sherry" too. I heard sherry as well. So I think there's something to that. Okay. Yeah. All right. Zach wants to know:
Starting point is 01:52:20 How do you feel about the term clanker and other robot slurs? So we're in the era of robot slurs now, Rob. Well, I said that it would be a slur if you called Data a robot. But if you call him a clanker, that's it. You're canceled. You know, they're broadcasting that on Federation Twitter, and you are, you're canceled. Yeah, I mean, so apparently there are robot slurs. Like, I was looking into this, and apparently there's a fad on TikTok that if you see, like, a little wheeled delivery robot driving around, people will just shout clanker
Starting point is 01:52:55 at it. And apparently it is a Star Wars reference. It's from The Clone Wars television show. And you're really a cell phone if you're using... Clanker. I'm just... I'm kidding.
Starting point is 01:53:11 And I think that, in a weird way, it's sort of a sign of the times that people are having mistrust of the creeping automation in our lives, whether it's something like these large language models that people use, or it's, like, these
Starting point is 01:53:31 robots that are, like, you know, delivering food. And they're starting to put out, like, humanoids; they're more common in videos. I was going to bring this up when we were talking earlier about the large language models, but, you know, I think, if I had to guess what the animosity is, sure, there is, like, a, you know, wealth of, you know, sci-fi where, you know, things go horribly wrong. But even in more practical terms, you know, you have, just in the last three years, the just, like, explosion of
Starting point is 01:54:10 these AI models and people replacing knowledge workers with AI all over the place. I think there's also probably this creeping concern of, like, with the advances in robotics, is this wave of AI taking things from knowledge workers eventually coming for people who are, for lack of a better term, blue-collar workers? Is the coming revolution going to be where these workers are displaced more and more by robots in the workforce? I think that's where a lot of this concern comes from. I mean,
Starting point is 01:54:58 And certainly, there are people, that's what they're aiming for. There are literally companies that are aiming to try to train these robots so that they can be cheaper alternatives to labor. Now, I think some will pretend like it's otherwise, but I think that that is, and I think it's a response to that. And I can totally understand that. And I think on top of that, the clunkiness of these machines sort of lends itself to clanker. Like, you see a delivery robot drive around, and it's kind of bumbling. And so I get where that comes from. Now, in my opinion, I think that the humanoid robot world
Starting point is 01:55:36 has a long way to go before it's going to have reliable human-like workers. Like airplanes, at some point it'll be figured out. But this is just my read as a humanoid robotics guy. Like, for instance, Tesla is trying to make these Optimus robots, and they're trying to train them on a whole bunch of data. Like, they literally, in a factory, just to give people context, Tesla, inside their research facilities, they have people wearing either motion-capture suits or, as recently reported, this was a story in the news recently that I gave comments on,
Starting point is 01:56:08 or with camera rigs on their heads, who are just doing tasks all day long. And the idea is that Tesla is trying to gather enough data of humans doing tasks that the AI models that they train will then be good enough at the tasks that they can use them in factories. I'm of the opinion that this is
Starting point is 01:56:29 a much longer-term proposition than other people are saying. That's just my technical opinion. But the point remains that people are worried about them taking their jobs. And that is a real possibility. We just don't know on what scale
Starting point is 01:56:44 or what industries it's going to hit. Because right now people are trying to use large language models to replace things like, you know, if you're a phone operator, right? Now, in the broad swath of human history, we have had automation increases and we've found other places for people to get jobs. But that's cold comfort if you have to change your job. If I had to change my job, I wouldn't be happy.
Starting point is 01:57:06 So I get it. I get where it comes from. I'm not qualified to do anything else. I mean, I have a communications degree, Christian. You know, I won't comment on your very recent employment with the National Broadcasting Company, but I think you've got plenty of skills that you can apply to different places. You know, and maybe if some of these AI tools were, you know, doing a little bit more in terms of covering and offsetting the cost of, you know, the resources they're using, maybe there'd be a little bit more, okay, well, listen, we're not going to get left behind, okay? They're making these advances, but, look, these are people that are taking
Starting point is 01:57:57 care of what they are disrupting. Yeah, I think that there's definitely a sense that, I mean, the tech world had a completely different reputation 10 years ago, you know. Sure. There was a point where Mark Zuckerberg was, like, low-key shopping a presidential run. Like, he was kind of visiting the early primary states for different political parties at one point. It was, I think it was 20, actually, it may have been 10 years ago now. He's a hero in The Social Network, the first one. Yeah. They're making a Social Network Two.
Starting point is 01:58:33 A second movie? They're making a second movie? Yeah. Really? Jeremy Strong, I think, is playing Mark Zuckerberg in the, I hope this isn't, like, a fever dream that I had. I believe Jeremy Strong is playing Mark Zuckerberg in Social Network Two. Yeah. So I think that the tech world had a sterling reputation because things like Google, it was a great productivity tool. For sure. Maybe it made people's
Starting point is 01:58:58 lives easier. And it created all kinds of new methods of employment; you could work in things like search engine optimization. It created a lot of new jobs. I mean, search engines did not, in people's minds, directly threaten any entire industries, okay, as to what they were going to do, right? And we have been primed to see this in sci-fi, and it is the intent of, I think, a lot of, like, you know, Elon Musk wants to turn these humanoid robots into a trillion-dollar industry, okay? You know, that's what he wants to do. It's a stated goal. And there are lots of other humanoid robot companies that are taking in tons of venture capital.
Starting point is 01:59:40 There was a big report by, I'm pretty sure it was a financial firm; they put out a big report on how huge they think the humanoid robot industry is going to be. I think they're wrong, but that is what they're aiming to do. So it is totally natural to have that response to seeing these robots. So people call them clankers. Will that term stick? Well, language changes, you know; slurs and language change all the time. They change rapidly.
Starting point is 02:00:06 But it won't surprise me to see some new version of this pop up in different forms, the more and more we see these in our lives. My bold prediction is that this bubble will burst. You know, right now we're seeing a huge, there's an unsustainable amount of money being thrown at different robotics companies, and they all can't succeed. And so it's going to shrink a bit. And we'll see if it ever comes back up in the near future.
Starting point is 02:00:41 Okay. Aaron Capitelli wants to know: what kind of wine do you and Emily like? Oh, I'm a red wine guy. And I feel like, so, if you give me a Cabernet or you give me a Syrah, I'd like it. So we will have red wine even if it's, like, a fish dish, with which you traditionally get a white wine. Emily's more of a white wine person than me. But I think that I realized, whenever I had white wine, I would hate it.
Starting point is 02:01:10 It would just taste, like, so bitter. I realized I've probably just been having terrible white wines. And so I think I've been scarred by that fact. So I mostly drink reds. So hopefully I refine my palate someday to try the whites more, as they should be, paired with fishes and such. Yeah. I'm just such a novice when it comes to these different wines. You know, I do.
Starting point is 02:01:33 I like some of the reds, but, you know, it's embarrassing: I have some red wine and I am just very apt to go to sleep. I feel like the one exception is, like, a sangria; I feel like that's a little bit of a different story. But I will get very tired, you know, drinking red wine. I did drink a little bit of white wine when I was in France this summer. Yeah, that looked like a great trip. That looked like a lot of fun. I mean, did you get out to any wineries when you were in France? I didn't go to any wineries, but I did go on a food tour.
Starting point is 02:02:14 We were in the area of Montmartre in Paris. And I sent you a picture; we went on a food tasting tour and we went into a macaron shop. I told the whole story about how macarons actually caused a big fight. Again, this would be, if I was Billy Joel, a hit song: the Macaron versus Macaroon debate of 2024 at my house. And I sent you a picture from a
Starting point is 02:02:44 fine macarone shop and i guess the thing that i really did not realize was that the macarons are not just color colorful but you know it's a real like jellybelly operation uh these macarons shops of all these different flavors they're making yeah it's uh i mean that that's the kind of the the color of it is is part of the fun uh that you can you have in um i think the meringue uh uh uh It's also a bit challenging to make or the outer cookie part. I forget what it's called. I think I had a chocolate covered, like salted caramel, like lots of stuff that I like. Yeah, I mean, they are definitely a lighter flavor.
Starting point is 02:03:27 the outer cookie part, I forget what it's called, is also a bit challenging to make. I think I had a chocolate-covered one, like salted caramel, lots of stuff that I like. Yeah, I mean, they are definitely a lighter flavor. I feel like the people who are not macaron lovers, I feel like there are people who want their dessert to be more overtly sweet.
Starting point is 02:03:49 Rich, yes, that's me. Rich, rich. Yeah, if you're a rich guy, you're not going to get that out of a macaron. You could probably engineer a macaron to do it, but that's not really what it's for, right? But, yeah, like, you want to get a good ice cream or something like that, then that's a different story. A macaroon? Now, that's rich. Yeah, there you go. So the people who love macaroons, not macarons, it must be a richness thing. I agree. Yeah. Okay. Did you want to talk about your favorite thing from DragonCon?
Starting point is 02:04:03 Do you want to explain what that is, from Sarah? So, yeah. So Sarah, I think, is thoughtful enough to know that I've been going to DragonCon now for the past three years. I should be there right now when this is dropping. Yeah, no, I mean,
Starting point is 02:04:17 DragonCon is like a big pop culture convention, kind of like a Comic-Con. Does it move around or is it in the same place? It's always in Atlanta, always on Labor Day. Labor Day weekend. It's good that it's consistent.
Starting point is 02:04:29 That means I know when I can go. And I'm going to leave this term. Yeah. And so, oh, well, I dress up a little bit. Like,
Starting point is 02:04:37 I'll, like, throw on, like, a Jedi robe. Mostly, I'm basically there for work. Yeah. I show up, and I'm lucky enough that they have me give talks there, and, like, I give three hour-long talks every year at this thing, and they're real slide decks. So I'm there, I'm working, but, like, I get to play. But they'll do everything, from, like, there are
Starting point is 02:04:59 Star Wars tracks, there's science tracks, there's, you know, fantasy tracks. There's even a board games track, and that's where Blood on the Clocktower is played. And I might have told this story before, but, like, I went to play exactly one game of Blood on the Clocktower in person, the social strategy game. And one person thought he recognized me from a prior game that he had played, not realizing he recognized me from Survivor, and assumed I was someone he hated, and he vowed to kill me first. Oh, my God. Do you ever get that, Rob? Do you ever get that, like, when people recognize you, but they're not sure where they recognize you from? Or do people just want to kill you?
Starting point is 02:05:43 Oh. That's, yeah. So, yes, I have had that before, where I've been, like, stopped. Like, did you go to my college? And then I have to say, like, and it's embarrassing, I'm like, do you watch the show Survivor? And sometimes it's like, no. No, I do not.
Starting point is 02:06:02 I can't help you. Do you happen to listen to podcasts? Yeah. And that's a challenging one because, like, you know what they probably recognize you from, but you don't want to be the guy that's like, do you watch Survivor? Like, you're risking your social life by being that person. Yeah.
Starting point is 02:06:19 Like the one time I tried to do that, like, I stopped myself before I said Survivor, and they said, I remember, you're from the Tallahassee Theater Company. I'm like, no, goodbye. And so, bingo, you got me. So I got to play one game of Blood on the Clocktower there. That was fun. Yeah. But, like, one thing is really neat.
Starting point is 02:06:37 I was at Universal Studios, and this is maybe going back like 10 years ago, and a guy comes up to me and he's like, don't worry. I won't tell anybody who you are. I was like, oh, okay.
Starting point is 02:06:53 It's like, it's not every day I run into Adam Sandler's brother here at the studio. Like, okay, well, keep that on the DL. Hibbidi, bibiddy. Yeah. Oh, man.
Starting point is 02:07:10 Well, I didn't realize Adam Sandler's brother was a commodity. Is he a known entity, Adam Sandler's brother? I have no idea. I don't know. It's not like Zac Efron's brother. We all know him all too well. Famous, famous Samsung phone spokesperson. Yeah, indeed.
Starting point is 02:07:27 And so, yeah, so I enjoyed that. But I'll tell you, sometimes the science track, I'm not sure if it's an astronomy track, they will have a representative from NASA give, like, the last year in NASA space telescope discoveries. And I was like, oh, that's a great idea. So I give, like, the last year in humanoid robotics breakthroughs. And there's lots of cool videos.
Starting point is 02:07:47 And she has, like, all these cool space photos and space telescope photos. And the room is packed when she's in there. Like, I have to show up early if I want to get to that talk. So that stuff is cool. The board game stuff is cool. And again, seeing some of the scientist talks is cool. And Emily got to go to Klingon Karaoke. So I think she enjoyed that.
Starting point is 02:08:06 Klingon Karaoke. Oh, yeah. She's big into Star Trek stuff. I'm not sure how many people actually sing in Klingon. Yeah, so I want to know, do you dress up as a Klingon and sing regular songs, or are you doing the karaoke songs in Klingon? I imagine it's got to be all-comers. So it's got to be whatever people can do.
Starting point is 02:08:25 So for some people, there's English singing of Star Trek songs. And how do you say "lonely Starbucks lovers" in Klingon? I'll have to ask, you know, Emily might know. I don't know. She's more likely to know Romulan, I feel. I'm not sure. Wow. I didn't even know they had a Romulan language that you could learn.
Starting point is 02:08:43 You know, I might just be BSing, Rob. I'm not sure if they do. But she's more a fan of the Romulans than she is a fan of the Klingons. I'm not sure what that says about my wife. But they have Klingon karaoke. And it's just a place where you can hang out and just chill out
Starting point is 02:08:58 and not just be bustling around the convention, and sit in a chair for the most part. But there's all kinds of wild tracks there. We always joke, like, there's always the Top 10 Anime Waifus and Husbandos track, where apparently someone must go down a top 10 list of the most attractive people they find in anime. I'm amazed at the subcultures that exist. Wow. Okay.
Starting point is 02:09:22 I'm not going to be doing that podcast. Well, we'll see. That's one for Akiva, right? Or Danny Bryson. Yeah. Okay. How about a question from Donna: what does the Mars rover tell us, and why do certain people want to go there?
Starting point is 02:09:40 Plus, would there be bacteria and viruses that humans could not deal with? So just real quick on the Mars rover. The part of this question I thought was super fun to touch on is the bacteria and viruses on Mars. If we found bacteria or viruses on Mars, call the press, because that would be life on another planet.
Starting point is 02:10:06 So, like, there's not a concern of sending a person there and them catching a disease. If they did, that would be a remarkable discovery, and that person would die a hero. But so we send rovers to Mars for signs of life, but also signs of the history of Mars, and perhaps for the possibility of colonization. Now, why would people want to go to Mars? I mean, the idea in the long term is that you have a place to put people in case something bad happens to Earth. That seems like a really low probability prospect.
Starting point is 02:10:34 I mean, we'd rather take care of Earth now than live in the hellscape that is Mars. Mars sucks for people. I'm sorry. Finally, somebody said it. I mean, there are people,
Starting point is 02:10:50 every time there's a plan to go to Mars, there are people who volunteer. I believe we talked about the Mars One project on one of our previous podcasts, where, it was like a private group that wanted to send a mission to Mars and was taking
Starting point is 02:11:03 applicants. And, like, one man applied for Mars One, got somewhere through the selection process, but never told his wife. And so he says that he wanted to go to Mars, and they, of course, never left. It was kind of a boondoggle of a project. But, I mean, Mars is a terrible place to send people. I mean, the closest analog is when we send people to Antarctica. Yeah. Like, Antarctica is a beautiful place, but it requires constant shipments of supplies. It's hard to live there. People who live there,
Starting point is 02:11:38 they work their butts off. Did you check out Stars on Mars? No, I did not check out Stars on Mars. What is Stars on Mars? Stars on Mars, I'm not sure if it was 2024 or 2023, was a show on Fox where they sent celebrities, some very nice people.
Starting point is 02:11:59 They sent them out to go and live, they were allegedly on Mars, but they were really, like, in Australia, and they had William Shatner as the host. Was this, it was not intended to trick the celebrities? No, they didn't trick them. They knew they were not on Mars, but they were supposed to be doing, like, Mars training missions. But it was very campy also. I imagine that's a lot, you know, I hear many of these all-celebrity reality shows often lean into camp. But it's not like I'm talking to an expert or anything like that.
Starting point is 02:12:29 But I think, I mean, there was also the Space Cadets TV show back in the early aughts, where they did lie to people, telling them they were being sent into space, and it was a giant prank show. But when it comes to Mars, the idea is that if you had people there, you could do better science than you could with robots, people are more adaptable, but keeping them alive is very, very difficult. But the mission of finding whether there's anything, even bacteria, is so serious that whenever they want to send a rover to Mars, they take great care to assemble it in a clean
Starting point is 02:13:05 room so that no bacteria ends up on the rover itself, so you don't contaminate Mars with whatever they send there. I'm not sure if it would survive Mars, but they're very careful not to send anything that would actually be any biological contamination. And I can tell you much more about Mars after you eventually watch For All Mankind. Eventually, yeah. Look, I have actually resolved myself: I'm getting Apple TV Plus. Okay. I'm getting it, so now that's step one, right? I have to get Apple TV Plus and then I can watch it. Then you can watch it. Okay, how about, let's see, a couple more with Christian. This is an anonymous one: Dr. Hubicki, do you play any video games at all? If not, could you talk a little bit about your hobbies?
Starting point is 02:13:51 So, yeah, I mean, I know we talked about video games in the past, like how I played the Legend of Zelda games and things like that. And I can finally report that I have completed 100% of Zelda Tears of the Kingdom. Oh, congratulations. Thank you so much. We were literally waiting to get a call from the hospital about our baby, you know, going to be induced, to have our baby delivered. And to keep us calm, we would just put on Zelda, like, we're trying to find the last of the things.
Starting point is 02:14:21 Like, if you play the game, there are these little Korok seeds that you have to go and find. There are literally 1,000 of them in the entire world of the Legend of Zelda. And we were waiting to get this call, and I literally find the last one, and I'm like, hey, we got it, we're done.
Starting point is 02:14:39 And then we get a call saying, time to go to the hospital. Perfect time. So we wanted to get it done before. Was it on a Switch? Switch, that's on the Switch. The two Switch ones are Breath of the Wild and Tears of the Kingdom, and they're huge.
Starting point is 02:14:51 And I love, when I play video games, to complete them 100%. That's just a compulsion that I have. And so as a result, I play relatively few games. If it's a big open world game, I'll try to complete all of it. So I spent the last two years on Zelda Tears of the Kingdom. But if I want to play something shorter, I'll play what's called a roguelike game. People in gaming know what that means.
Starting point is 02:15:17 But what they are is games that are relatively short. There's some element of them that is randomly generated, like randomly generated levels, randomly generated enemies, and they're really, really hard. And if you lose, you have to start all the way back at the beginning. And a classic game that really made this genre popular is called FTL, Faster Than Light. It's like a spaceship kind of game where, you know, you have to build a spaceship
Starting point is 02:15:43 and then you basically go through all these enemies, and you want to get to the end and beat this really hard boss, and you pick up items along the way to make you more powerful, but you probably lose until you eventually really know how to play the game. The Binding of Isaac is another one that's very popular that I enjoy. That's now well over 15 years old, the original one, I forget at this point. But another one I'm really into is called Factorio.
Starting point is 02:16:08 This one's crazy. I mean, do your kids play Minecraft at all? They don't play Minecraft that much anymore. They have had their run-ins with Minecraft, but it's a lot more Fortnite, a lot of Roblox right now, a lot of something called Grow a Garden. Gorilla Garden? I'm unfamiliar with this one. But that one's very Roblox adjacent.
Starting point is 02:16:30 And some of these things are, like, in the Roblox universe, but they're, like, in some different part of Roblox. Okay, okay. Well, the whole Roblox universe is completely opaque to me. I feel like it's one of the things I have not crossed that event horizon into understanding. Yeah, from what I understand about it, Roblox is sort of just, like, the ecosystem, and then there are different, I don't know if they call them servers, on Roblox, where, okay, this is this version of Roblox.
Starting point is 02:17:01 This is this version of, like, it's on Roblox, but it's a whole different world. I see. Well, so Factorio is a little bit more like Minecraft. Minecraft's, like, about building stuff, right? And the plot of the movie. What's that? Did you see the movie? Oh, I did not see the Minecraft movie.
Starting point is 02:17:16 I just know there's something about a chicken jockey. Chicken jockey. The most recent meme movie, as far as I'm aware. But I did not see the Jack Black film. But Factorio is more my speed. So, like, imagine Minecraft but for engineers. It's like, you start with nothing and you're supposed to build, like, a base. And the plot is that you're an engineer who's crash landed on a planet.
Starting point is 02:17:42 And you have to build an entire factory and industry to build a space program to get you off the planet. So you have to start by, like, just mining ore with your hand, like you're punching trees in Minecraft or whatever, right? And then you build something that smelts the ore into, like, plates. Yeah. And eventually you're researching circuits.
Starting point is 02:18:03 And eventually you have this giant, like, snaking factory of belts that feed little robot arms that feed other machines. And it's this giant thing, all while you're being attacked by aliens who want to tear your base apart. It's absurd. It sounds like Stars on Mars. Yeah. Did they have aliens attacking them
Starting point is 02:18:23 in Stars on Mars? Is that part of the plot? Of course there was. Of course there was. Why do I even have to ask? So I play a little Factorio. But yeah, less William Shatner in it, though. Okay.
Starting point is 02:18:36 All right. And then let's do one more. Darren wants to know, with spooky season on the horizon, are you a horror film fan at all? If so, what are some of your favorite horror films or franchises of all time? Thanks. No, I am not a...
Starting point is 02:18:50 Oh. Emily is a horror film person. I'm actually gobsmacked that that is your answer, because I thought this was something that you wanted to talk about. Oh, it is, because I guess I get to talk about all the reasons I am not a horror person and how hard it is for me to watch horror films. And this is at times challenging, because one thing I will watch is, like, psychological horror.
Starting point is 02:19:15 Like, give me Silence of the Lambs. I'll watch Silence of the Lambs, where it's scary because of the ideas involved. What I can't stand are jump scares. I have to leave the room. It's why I can't watch vampire movies, because I'm worried that at any moment the vampire is going to jump out and bite someone. But Emily likes a lot of these things.
Starting point is 02:19:40 Like, she wanted to watch The Strain, the vampire show on FX. Oh, yeah, the Strigoi. The Strigoi and everything. And she liked that show. And she was like, you got to watch it. You got to watch it. And I ended up, it's definitely not for me. But I stuck through it the whole time.
Starting point is 02:19:58 The only way I could is that Emily would pre-watch the episode, and she wanted to watch it with me. And she would tell me to leave the room when there was about to be a jump scare. And so I ended up missing about a third of any given episode. And normally it's fine. Like, I'm not a big horror guy. You know, it's rare that I like one. But nowadays I'm starting to watch more and more robot movies, because I have to, one, I'm giving these talks at these conventions about, like, robots and movies.
Starting point is 02:20:28 Yeah. And some of these robot movies are more horror adjacent. Like, I watched Megan for the first time. Megan 1 or Megan 2.0? Megan 1.0, just because I figured I'd start with that one. I finally watched Megan 1.0. I was like, you know what? I should start being able to talk about robot movies.
Starting point is 02:20:45 and some of them are going to be horror things. Yeah. So, like, I had to. Did you like it? Did you like it? It was good. It was,
Starting point is 02:20:51 honestly, there's some good stuff in it. Um, the scariest scene, honestly, was when the robot tries to act like a therapist to a little girl. And I'm like, that's a little too close to home. That's,
Starting point is 02:21:02 that's too close to reality right now. Um, but they get some stuff kind of presciently right. So,
Starting point is 02:21:08 uh, it was campy. Um, but I'll be talking about it. I mean,
Starting point is 02:21:12 I had to watch it because I wanted to make sure I had something to talk about. I have a talk I give every year called Robot Science or Robot Fiction at DragonCon. And so I break down a movie, and it's like, oh, here's something that's realistic about this robot in this movie. You know, take Real Steel. By the way, the movie Real Steel with Hugh Jackman could not be more topical, with robot kickboxing now being a thing. So now I have to white-knuckle through all these horror films that happen to have robots in them. Like, there's that movie Companion that came out earlier this year. It's not really a horror movie, but there's a lot of bloody violence fairly early on. And I'm like, I tried to watch it, but I had to click off it, but I've got to come back to it.
Starting point is 02:21:48 Because it's a robot movie. But anyway, so. A weekend getaway at a remote cabin turns to chaos when it's revealed that one of the guests, a subservient android built for human companionship, has gone haywire. Uh-oh. You watched it?
Starting point is 02:22:09 Oh, indeed. I only watched the first, like, 30 minutes. And I'm like, I got to go back to it, because I got to finish it, because I want to talk about it for this talk I have later this week. But anyway, so that makes horror really hard for me to watch. But, like, Emily will eventually convince me. She got me to watch, what's the one with Pinhead? Hellraiser. Hellraiser.
Starting point is 02:22:30 She got me to watch that. I don't know how she got me to watch that one. Anyway, that's it. All right. Well, Christian, is there anything else
Starting point is 02:22:46 on your mind that we didn't get to talk about? I think we got through a lot of stuff today. We had a lot of content. A lot more science questions this time than normal. I feel like that's just a sign of the times. But yeah, I had a good time. Thanks, Rob. Yeah. Okay. Well, we talked about science and policy and nonsense. It was a true embodiment of everything Ask Dr. Hubicki brings to the table.
Starting point is 02:23:08 I'm glad to hear that. We try to deliver what people expect. Yes. Okay. And you're going to be at DragonCon. So if anybody's listening to this last minute, get on over and check out Christian at DragonCon. What else are you up to?
Starting point is 02:23:24 Oh, it's a busy semester, even though I'm on leave. I'm really ramping up my science communication outlets. I'm starting to post some things to YouTube, starting with some, you know, more technical seminars and classes. But, like, also, I really want to start talking more about, you know, science in the public, science in movies. So, yeah, if you have any ideas, feel free to drop them to me on Blue Sky at Chubicki or Instagram. That's a great hook to get people in, like, you know, something is popular and you can talk about the science of it.
Starting point is 02:23:59 Yeah, and I love doing it. I've been doing it, you know, just on and off for all this time. And now I feel so energized coming back from my travels this summer. I think it's the perfect time to do it. Yeah. Okay. Well, it was such a fun time for me to go through all this stuff here with you. So thank you for carving out the time for us. And, you know, hopefully we'll get to be around the Grimoire again sometime soon.
Starting point is 02:24:23 Looking forward to it, Rob. Okay. All right. Thank you so much. We'd love to hear your comments about what you thought about all of this. And, of course, check out everything else we have going on over on RHAP, where we're talking about Survivor in America, Survivor around the world, and Big Brother, and so much more right here on RHAP. Thank you for joining us. Take care, everybody. Have a good one.
Starting point is 02:24:46 Bye.
