SciShow Tangents - Back to the Future Compilation

Episode Date: April 8, 2025

Stroll back in time to look ahead at the future with Tangents in this compilation of episodes investigating the science and innovations that have pushed humanity and technology ever on down the road of advancement!

Episodes in this compilation:
S1 E15 - Artificial Intelligence, original airdate: February 19, 2019
S2 E23 - Robots, original airdate: April 14, 2020
S4 E2 - Computers, original airdate: March 8, 2022
S4 E44 - Lasers, original airdate: February 28, 2023
S5 E29 - Machine Learning, original airdate: April 30, 2024

Sources for each episode can be found in the descriptions of the original episodes on your preferred podcasting platform.

SciShow Tangents is on YouTube! Go to www.youtube.com/scishowtangents to check out this episode with the added bonus of seeing our faces! And go to https://complexly.store/collections/scishow-tangents to buy some great Tangents merch!

While you're at it, check out the Tangents crew on socials:
Ceri: @ceriley.bsky.social, @rhinoceri on Instagram
Sam: @im-sam-schultz.bsky.social, @im_sam_schultz on Instagram
Hank: @hankgreen on X

Transcript
Starting point is 00:00:00 INTRO MUSIC Hello and welcome to SciShow Tangents, the lightly competitive knowledge showcase starring some of the geniuses that make the YouTube series SciShow occur. This week, joining me as always are Stefan Chin! Hello, I'm here. What's your tagline? I hate winter. Stefan, thank you for producing lots of SciShow and making it beautiful.
Starting point is 00:00:22 You're welcome. I don't say that enough. And for tolerating the cold. And for living in Montana. Yeah. But it's beautiful. That is a big part of working here. You just have to come here and live through it.
Starting point is 00:00:33 You're from Montana though. I know. Sam Schultz is also here. It's so much colder where I'm from too. Yeah. Is that your tagline? No, my tagline is I'm very sick, so I'm sorry, everybody. If I don't make any sense, that's why.
Starting point is 00:00:45 We're also joined by Sari Riley, writer for various science communication things. Hi, Sari, how are you? Hello, I'm good today. Whoa. Yeah. Wow. It's like a change of pace. We're both sad and you're good.
Starting point is 00:00:59 Yeah, I saw your energy. You did. Oh, no. Sari, what's your tagline? Wisdom teeth, not included. And I'm Hank Green. My tagline is the creosote bush. What is that?
Starting point is 00:01:12 What the hell? This is a bush that grows in the desert. It's very toxic. You can't eat the leaves. It's a desert strategy generally. It's hard to make stuff in the desert. So anything comes by and like eats your leaves. You're like, I need it!
Starting point is 00:01:24 I can't make more. So things in the desert are hard to eat. They have spines, they're tough, and sometimes they're toxic. This podcast is not about the desert. This episode is not about desert stuff, though maybe next time, because that's a good topic. That's a good idea, yeah. But as a reminder to people, this is SciShow Tangents.
Starting point is 00:01:41 Every week we get together, we try to amaze each other, we try to delight each other with science facts, and we're playing for glory, and we're playing for pride, but we're also playing for Hank bucks, which are imaginary. So I guess it's just more glory and pride. We do everything we can to stay on topic, but judging by previous conversations with this group, we will not be great at that.
Starting point is 00:01:59 So if the team decides that the tangent we go on is unworthy, we will force that person who created that tangent to give up one of their Hank bucks. So tangent with care. Now as always, we're going to introduce this topic with a traditional science poem this week from Sari. So this poem was created using a Wikipedia article and software. Ooh.
Starting point is 00:02:20 Yes. It's called, Predicts How the Outputs. Jump to navigation, jump to search engines, such as the input. Data starts from a random guess, but then they may have unintended consequences. A common type of CAPTCHA generates a grade, that test, that assesses whether a computer can be. Media sites overtaking TV, images as a source of learning itself. More machine learning accidents mean different mistakes than humans make. The simpler work, consider humans' algorithms.
Starting point is 00:02:50 Does humanity want computers and humans apart? We begin the search for AI. So first, our topic of the day is artificial intelligence. Second, what was that? How did you make that? So I took the Wikipedia article for artificial intelligence and I found an open source software called JanusNode, which describes itself as many things, but also Photoshop for text. And so you can program in different rules of how to filter the page.
Starting point is 00:03:19 I used a button that's Web Poem, which was very easy. You click it and it does a mixture of randomizing the words. So I guess it assigns each word a value and then shuffles them around. And also some Markov chaining, which is where it uses probability of how words follow, like the probability of one word following another and using that to generate strings
Starting point is 00:03:41 that make more sense than just random words. So you're saying that your science poem, you didn't write it at all. Yeah, I did curate it a little bit. Like I cut out some of the things that made no sense at all and squished stanzas together to make it have some sort of meaning. But all the words are Wikipedia. Oh, I think you get a hangbook for that. If for nothing else, the ingenuity. Getting out of writing.
Starting point is 00:04:03 Yeah. Of course. Well, that's the whole thing with artificial intelligence. Because ultimately, even when something is created by a computer, a person created the computer program that created it. But also that's true of me. Like, I was also created by people. I liked the part in the poem where it said, computers make different mistakes than humans do.
Starting point is 00:04:21 I was like, yeah, that's taken me somewhere. I can't tell you exactly where. So our topic is artificial intelligence. Sari, what is that? It's very broad, but very narrow. It's weird. People have tried to define it, but it gets philosophical. In general, it's making a machine think, analyze,
Starting point is 00:04:41 and make decisions, approaching some sort of what I think it's called natural intelligence, is what they consider humans and animals and organic stuff to have. One table that I found, I read a lot of textbooks for this because I'm not a computer scientist. How many textbooks did you read for a single episode of Tangents?
Starting point is 00:04:57 Okay, I said plural textbooks. What I meant is pages of one textbook. And one of them defined it as a grid of like human-based versus rationality, and then reasoning-based versus behavior-based. So are we trying to design systems that think like humans, systems that act like humans, or systems that think rationally, or systems that act rationally? And then there's like the entire text is all a discussion of what is rationality, what is intelligence, what is acting like a human mean.
Starting point is 00:05:29 And generally we design artificial intelligence to solve problems in a way that either humans would but faster and better than we would or problems that humans can't solve on their own because our brains are limited in some way. In my probably less significant delving into this topic, I did find that there is sort of like a bunch of different categories of artificial intelligence. You know, like the kind of AI that we're using right now with artificial neural nets that can like tell whether there's a dog in a picture
Starting point is 00:05:56 versus like the kind of artificial intelligence that's like 2 plus 2 equals 4 is a kind of artificial intelligence because math is, you know, feels like intelligence to us. But we've had mechanical calculators for hundreds of years that like even before computers. And then there's like artificial general intelligence. The singularity. Or yeah, when it's like this, you can just give this thing a task and it'll figure out how to do it on its own.
Starting point is 00:06:20 Or strong AI when it's like, okay, now we're talking about things that are smart similarly to or beyond that of people. It's called strong AI? Yeah. Which is the stuff that gives you the goosebumps. Yeah. You start thinking, well, are we gonna, maybe we're gonna make ourselves obsolete. Like my, my phone from 10 years ago, isn't that good?
Starting point is 00:06:38 Is that how I'm going to feel about like human beings? What is the difference between AI and machine learning? Is that the same thing? Machine learning is a kind of AI. I stumbled across a thing somewhere that was saying that AI is often used for things like when we're imagining like super smart robots that like mimic human intelligence or whatever. But then like as soon as we figure out how to do a specific thing, that sort of falls, like we sort of remove the intelligence part of it, and we're just like, oh, that's object recognition or something. Or like, these are neural networks that can program this thing.
Starting point is 00:07:13 My interpretation of it is like, we now know how the sausage is made, and so we don't label it as intelligence anymore. Right. That is an interesting place to end up at, where you're like, oh, wait a second, are we gonna keep redefining this as not intelligent because we know how it works? Even beyond the point where we have created slave bots? Oh. No, see, I'm thinking once we figure out how our brains work
Starting point is 00:07:39 and then we're like, oh, we're not intelligent either. I don't think humans will ever say we're not intelligent. Everyone has too much ego for that. The other thing is I don't think we'll ever really figure out how our brains work, because I think they'll be able to create intelligent machines more easily than we will be able to understand our own brains. Will they be able to tell us how our brains work? Probably not, because getting in there and doing that measurement
Starting point is 00:08:06 would be destructive. Though now that I've said that, I'm like, if they want to find out, they can just be destructive. So now it's time to talk about this more in the framework of TITLE FAKE which is our segment when one of our panelists, this week it's me, has prepared three science facts for your education and enjoyment,
Starting point is 00:08:23 but only one of them is real. The other panelists have to figure out either by deduction or wild guess, which is the true fact. If they do, they get a Hank Buck. If they are tricked, I get the Hank Buck. All right, fact number one. Scientists have actually given IQ tests to artificial intelligence programs, and they found that Google's AI had a 47-point IQ
Starting point is 00:08:46 around the IQ of a six-year-old human. Google's AI a couple years before that had a 23-point IQ. So in the last two years, the IQ of that AI has more than doubled. If that happened again over the next two years, Google would be as smart as the average adult. Or effect number two. Some AI scientists have theorized that a true general artificial intelligence, where a computer can handle tasks the way a human would, would require that the program be raised like a human child.
Starting point is 00:09:17 And this was attempted by a researcher, Brian Zwiler, with a computer that operated inside of a simulation where Brian interacted with the avatar of the AI quote-unquote child and allowed it to live and grow inside that simulation Until Brian accidentally left the simulation running while he was on vacation during those two weeks the AI walked around the same circuit in the house tens of thousands of times and the behavior overwrote all of its previous learning and the experiment overwrote all of its previous learning, and the experiment had to be restarted. Or number three.
Starting point is 00:09:50 You okay, Sam? That one sounds like a ghost story. Mark Zuckerberg, in his spare time, created an artificial intelligence system for his home and hired Morgan Freeman to be the voice of that system. That is not the fake fact because that is true. I'll just come out and tell you that that is not not true. That is 100% true. Here's the part that might not be true.
Starting point is 00:10:14 He called Morgan Freeman and was like, can you please record all this stuff? And among the things that Mark Zuckerberg asked Morgan Freeman to record, your coffee is almost ready, Mark. Did you want me to turn out the lights in the garage, Mark? And I think you look fine today, Mark. Oh, I hate that. So those are my three facts. We've got scientists have given IQ tests to artificial intelligence
Starting point is 00:10:37 and it's doubled for Google's III in the last two years. Number two, Brian Zwiler accidentally left a simulation running and it overrode its program by walking around the house for two weeks straight, tens of thousands of times. Or Mark Zuckerberg asked Morgan Freeman to record the line, I think you look fine today, Mark. They got increasingly weird. Like you started out being like,
Starting point is 00:11:01 okay, Google's development's fine. Then demon Tamagotchi computer. And then Mark Zuckerberg. It's not a demon Tamagotchi. It sounds like it is. How is it not? It just what? Like, it only had the one thing to do. Walk? Yeah. Well, but when its friend was there, it had other stuff.
Starting point is 00:11:19 It had other stuff to do. When its Tamagotchi's by itself, all it can do is poop. And then it dies. Just like this. Same thing. Yeah, I guess I qualitatively added the demon part, because I assumed that was going to seek vengeance. Yeah, just turn it off, unplug it, delete programs, before it knows we did that. All right, Quizme, what do you think?
Starting point is 00:11:39 I'm dubious about the first one, because IQ tests, as far as I know, involve a lot of different aspects of things. So it's like verbal tests, spatial awareness, as they worked on natural language processing, then it would get better at like one chunk of the IQ test, which is how it could get better. But I think there are too many different things that an AI could just like blanket improve. Do you not think that it could match a human's IQ on an IQ test? Like it could learn to take the test? I think it could learn to do problems. It could learn a specific problem and then like plug and chug versions of that problem.
Starting point is 00:12:13 But if you're giving it a logic puzzle, then that's an entirely separate set of programming and conditions that seems extremely hard to do. It could Google the answer, you think? I guess it had a database of all the answers. That's how Watson worked with Jeopardy, I think. That one also feels like it has too many holes in it to me, I guess. Because of the reasons you just said, and more smarter ones that I won't say out loud. Sam had a bunch of smarter ideas he's not gonna tell us.
Starting point is 00:12:40 Yeah. I cut them all out of the episode. It took too much time. I really want the middle one to be true Yeah, so 100% I don't want to justify it at all I just want that one where Brian's wilders AI just walked around a house tens of thousands of times But Brian felt bad when he got back, right? I just like the idea that the only way to make a smart computer is to raise it like a human
Starting point is 00:13:00 Yeah, that is legitimately something that I will say out loud is a thing. One of the cover stories in Scientific American recently was about how to make an artificial human, you would have to raise a computer like a human. Could you not do it Blade Runner style? Or you just give them all the memories? Well, what you would do is you would raise the first one like a human, and then you could just port the program to other bots. Seems like a bad idea. Yeah, so it'd just be like, it'd be like, thousands and thousands of one type of...
Starting point is 00:13:34 And you would be like, the dad who raised the thousand AIs. Ooh, sorry. Don't put that one in the episode. I want to write that story. Now, who do you think's Mark Zuckerberg? Ask Morgan Freeman to... Well, hmm. I just don't care about that one. It doesn't tickle me at all.
Starting point is 00:13:51 Demon Tamagotchi, that really hits a spot. It seems too perfect, though. I don't know now. I'm going with number two. You're going with Demon Tamagotchi. I think I want to go with number two as well, because I want it to be true. Okay, me too. Everybody's going for number two!
Starting point is 00:14:03 We can't all pick the same one. You're all wrong! Oh, yikes. I knew's going for number two! We can't all pick the same one. You're all wrong! I knew that was maybe gonna happen, but you know. Why did you guys pick the same one? I made that one up completely! You did? No! You already wrote the third book. Yeah, it's true that people have theorized that you'd have to raise an AI like a human child, but Brian Zwiler was made up and nobody created a computer. There's no Brian Zwiler? There's no Brian Zwiler.
Starting point is 00:14:25 Sounds like a real name. It sounds like a real name, doesn't it? I even looked it up to make sure Zwiler was a real ass name. It is. Um, and Mark Zuckerberg did not ask Morgan Freeman to say, you look fine today, Mark. But he did ask Morgan Freeman to say a bunch of other stuff. I just made up some things that Morgan Freeman would say. And it is true that this group of researchers gave an IQ test and I don't know what IQ test
Starting point is 00:14:50 and I looked and the article didn't tell me exactly what IQ test it was, and Google's AI performed best on the test of all the AIs it tested, like significantly better. Like Siri, sorry, was like in the 20s and Google was 47. Siri, you meant Siri. God damn it, Siri. We made eye contact. Yeah, sorry, it's hard. So Siri had her IQ, I guess she's a she, in the 20s
Starting point is 00:15:22 and then the Microsoft one, Cortana was in the 30s, one from a Chinese company she in the 20s and and then the Microsoft one Cortana was in the 30s One from a Chinese company was in the 30s and then Google was in the 40s Do we interact with this AI in our day-to-day lives? It is I think that they're testing just the the the assistance that we have on our phones. Those things are that smart? Apparently, I don't know exactly how they ask them the questions and whether, like, Watson, it was, like, going to, like, find an answer and just doing natural language processing and being like, oh, I'll check the internet for that. That seems like cheating.
Starting point is 00:15:52 It does seem like cheating. What it comes down to is, like, understanding. Like, they aren't understanding the question we're asking them. They are using systems to provide us the answer that is most likely to be true. Isn't that what we're all doing? That's the question! And that's the point? Yeah.
Starting point is 00:16:11 Yeah, at what point, like what is understanding? So we've reached the point of the podcast where we're all feeling a tinge of existential crisis. So I think it's probably a good time to go to the advertisements. And we're back. Hank Buck totals, one for Sari for the science poem. Three for me. Garbage.
Starting point is 00:16:47 I don't think we've ever been dumb enough to all pick the same answer before. It was a really excellent lie though. So I think you deserve it. Those are good ones. And now it's time for our fact-times. Where Stefan and Sam have each brought facts to present to the others in an attempt to blow our minds. And we, me and Sari, each hank-buck to award the fact
Starting point is 00:17:06 That we like the most. So who goes first? It's the person who most recently had a banana. Oh my god, did you watch me at my freaking desk just now? I had a banana right before I walked in here. That's so weird. I was trying to make myself feel better somehow. It was also potassium. Well, one time I was playing Wii Fit and it said, if you're ever feeling tired, eat a banana because it has as much energy as a cup of coffee or something like that.
Starting point is 00:17:37 And ever since then... What does that mean? I don't know. But ever since then, whenever I feel bad, I eat a banana. It never works, though. Thanks for that. We lied to you. Thanks for nothing, Wii Fit.
Starting point is 00:17:49 Yeah, some programmer at Nintendo was just like, man, I love bananas. I feel better every time I have a banana. All right, well, I guess Sam's going first, unless Stefan, like, just, like, is eating a banana right now. No. When was I your last banana? I don't even know. I went through a phase where I ate so many bananas, and I haven't had a banana right now. Nope. When was I your last banana? I don't even know. I went through a phase
Starting point is 00:18:05 where I ate so many bananas and I haven't had a banana in months. What is so many? That's so funny. It's just like so many bananas. What is so many bananas? How many? Like multiple a day? Oh yeah, like a bunch a day. Like a bunch? Like a bunch. Like an actual bunch? Yeah, yeah, yeah. Like you gotta go to Costco. You can stock up, because they're a little bit... You're gonna eat like 15 bananas this week? No, I haven't been doing that. No, but there was a point when you did. There was a point where you were eating like...
Starting point is 00:18:33 Yeah, sure. Why not? What's the problem? That's a lot of bananas. That's a lot of bananas. How did everything turn out in the end? I feel great. LAUGHS Sam. I still have to go first, even though he's definitely eating more lifetime bananas.
Starting point is 00:18:47 That would be a good stat to know. Lifetime bananas? Why doesn't Steam tell me the important stats? Ready? The Serengeti National Park in Tanzania is about the size of New Hampshire, but overseen by only 150 rangers. So it's basically impossible for them to watch the entire park at once, but it's also, they have a huge poaching problem
Starting point is 00:19:09 where people are killing elephants for their ivory. And it's like, they can't do anything about it really, for the most part, because people sneak into the park and they can't stop them. They have set up motion sensing cameras that have been sort of helpful, but there's so much stuff going on on the savanna. Like it could be a hyena sneaking around
Starting point is 00:19:27 or it could be a person sneaking around. And the only way to know is to look at all this footage and see, oh, there's a person. But with a little bit of help from researchers and a generous grant from the Leonardo DiCaprio Foundation, there has been a new series of cameras developed called Trail Guard AI, which are being deployed in the parks right now. And what they do is they can tell humans and cars from other kinds of animals.
Starting point is 00:19:54 So instead of sending all the footage that they take to the guards to check out, they just send the footage of humans and vehicles going around. So then the park rangers get that right away and they can go do something about it. They've had an older version in the parks for a while that don't narrow it down quite as well. And they've caught like 20 poaching rings so far. So this new thing hopefully will be even better than that. Similar technology is being used to catch tomb raiders in China.
Starting point is 00:20:22 And it's being used to, they send like scooters down with cameras on them to coral reefs, and the cameras on the scooters can tell what kind of animals and plants are in the coral reef, so then they can tell the scientists if they're too far gone to bother helping with, or if there's still a chance for them to like put money into it and try to save the coral reef. I was thinking about during that there's a spy plane
Starting point is 00:20:46 that they have used for both like war and terrorism stuff, but also for just crime in some places, where it's way up in the sky and has extremely detailed photographs and it's just always filming. And you can track not just where the people are when the crime happens, but like track them back to where they came from and track them to where they went.
Starting point is 00:21:08 And it's very creepy. But it seems like a sort of perfect application for that. Yeah, they only have a couple hundred of these cameras, I think, so it still is like not necessarily covering the whole thing, but they just put them where they guess people will be able to get in. Sure. Are they disguised at all or are they just like... They're only the size of a pencil.
Starting point is 00:21:28 So they just like shove them into trees and stuff. And they have like a Wi-Fi connection? Like talk to the cellular stuff? Yeah, they have some kind of chip in them that does the first pass of being able to tell what to send. And then I think they send it to a bigger computer that does like a second pass, and then it sends it to the people to take a look at. Awesome. Are they 360 cameras? No, they're just like pointed out, like look like pencils and they're the size of pencils. All right, Stefan, what do you got for us? There have been some studies so far that have shown links between eye movement and personality, but all these studies are pretty much lab-based.
Starting point is 00:22:08 They give them personality tests, and then they try to predict things like, how many times is this person gonna fixate on this thing based on this personality trait? And so these researchers wanted to explore real-world eye movement and see if they could predict the personality traits from the movements using an artificial intelligence neural network. So they had 50 students and faculty of this university and tracked their eye movements while they walked around. They had to run an errand, so they had to walk around for 10 minutes and buy something from a store on campus. With like a thing on their heads?
Starting point is 00:22:36 Yeah, yeah, so I was like, maybe that's affecting their eye movements too. I don't know. I think if I had a weird thing on my head, my personality would change. They had head-mounted eye trackers and phones strapped to their chest so they could film what was in front of them. Hello, I am a normal human. I am here to buy eggs. I would like 15 bunches of bananas. Then when the people returned to the researchers, they had them take a bunch of different personality tests,
Starting point is 00:23:06 looking at the big five personality traits. And then they trained a neural network, and it did a pretty good job of predicting four out of five of the personality traits. It was best at predicting extraversion, which feels like that sort of makes sense to me. They didn't really go into detail, but I'm like, you know. Make a lot of eye contact. Yeah, you're like, looking around at people. Introvert, you're like, down at the ground. Please don't notice me in my goggles. Please don't.
Starting point is 00:23:38 And then it also did a pretty good job at neuroticism, agreeableness, and conscientiousness, but it was not good at predicting openness. And I thought it was interesting, they mentioned specifically that the pupil diameter was important for predicting neuroticism, but not useful for anything else. I need more research. This is small sample size. Yeah, there needs to be more research. They don't know why these things are connected yet. And they say it's not accurate enough yet to do practical things with it.
Starting point is 00:24:02 But it is outperforming other baselines so far. And it corroborates the findings of all the previous studies. Well get ready for robots that can know more about you than you do. That's what I thought was interesting, was like, what I thought when I read the headline was like, oh, if I see someone make an S shape with their eye, that means they're lying or something like that. But it's more they want to be able to design robots that can read your expression really well
Starting point is 00:24:28 and possibly like mimic that itself, like if the robot has eyes, so that you can have a more meaningful computer human interaction. That's what I want. So I can advertise to you better, probably. So it can advertise to me better. So it can more effectively get me to buy
Starting point is 00:24:42 the correct bananas. You look like a 15's banana tank kind of guy. I can tell by your eye movement. You've had a lot of bananas in your life, haven't you? You keep looking at bananas. Like, what other application would there even be for it? Just so that an AI would be nicer to interact with. If, like, I walk up to somebody and I can tell that they're
Starting point is 00:25:05 shy, I will treat them differently than if I walked up to somebody and they are obviously outgoing. So like ideally a computer or robot that interacts with people would also be able to do that. If it's like doing caregiving activities or something. Okay. Well, there's nothing that I think about more than where my eyes are. So it probably works. They're in your head. Now I'm thinking about it a lot. When I look at anything, I think, am I looking at this thing too long? Yes.
Starting point is 00:25:33 Especially a person. Yeah, I don't know where to look on a person. Where do I look? At your nose? At your eyes? I think between your eyes. Maybe. You are scoring high on neuroticism.
Starting point is 00:25:44 Probably. Not surprising. I've consulted the algorithm. Alright, I like both of those facts a lot. I do. I'm gonna go with Sam. It doesn't even matter who you give your money to. It's true.
Starting point is 00:25:58 Ultimately nobody's gonna win this except for Mawa. I'm gonna give mine to Stefan because I've never heard of that research before. And that seems really weird and scary. So I want to know more about it before it starts happening to us. And now it's time for Ask the Science Couch, where we ask listener questions to our couch of finely honed scientific minds.
Starting point is 00:26:18 Adit Bhatia asks, which AI has done the best on the Turing test, and can we access it? I do not know the answer to this question, even a little bit! Uh, can you talk about what the Turing test is, then? I can talk about that. Basically, Alan Turing said, and I do not agree with him, and I don't think many people do anymore, that if a computer can convince you that it is a person, then it will be a person. And so we are sort of heading for the future in which you can have a conversation with
Starting point is 00:26:48 a computer and not have any idea that it's not a person. And that would be passing the Turing test. And there, I think, have been situations where AIs have, quote, passed the Turing test. But we look at that now and are like, eh, you know, it's just doing a really good job of saying something that sounds like something someone might say in response to that particular question. And oftentimes very weird things, like one of the weird examples, I think it was one of Google's natural language AIs, somebody asked, what is immorality?
Starting point is 00:27:20 And it responded, the fact that you have children. It said the purpose of life was immortality. That sounds like a computer. Well, also it kind of sounds like people. Yeah, some people too, yeah. So the AI that's done the best on the Turing test, there are two that I found. The most controversial one was the chat bot named Eugene Guestman,
Starting point is 00:27:40 who supposedly was designed to be a fake 13-year-old boy from Odessa, Ukraine, who doesn't speak English that well. Right. Okay, so I see that you've made your way around the Turing test by being like, oh, the kid doesn't understand what I'm saying very well. Yeah. And so it tricked 10 out of 30 of a panel of judges at the Royal Society in London into believing that he was a real boy. And people were saying that that counts as the Turing test. As far as I can tell, I could not find Eugene Gustman online. I was looking for him and I can't find him.
Starting point is 00:28:13 I don't think you can chat with this boy anymore. Uh... I want to chat with my fake Ukraine boy. Oh, did they just turn him off and he's gone forever? He's just walking the same circuit around his room. Yeah, stored in a computer somewhere. Apparently, his dad is a gynecologist
Starting point is 00:28:29 and he has a pet gerbil or something. There are transcripts of his conversations with other people online. Yeah. Which I read a couple of. It's not very good. But also, I didn't do any of the work to program him, so I can be judgmental. The other one that seemed more legitimate to me, there was a festival in 2011 where a modified version of Cleverbot, which is, I don't know who designed it, but that you
Starting point is 00:28:52 can play, tricked 59.3% of 1,334 votes, which included 30 judges and a generic audience. And so that passed the 50% threshold, which was generally described as part of the Turing test. 50% is the threshold? Yeah, because that seems low. The original premise was there are two rooms, and one of them has a machine and one of them has a human, and you talk to both and you have to decide which is, like...
Starting point is 00:29:21 Oh. You have a one in two chance of deciding what the machine is. Mm-hmm. It just has to trick you. There's some question as to, like, the validity of these competitions, like who's playing them? In the case of over a thousand people playing them, have people talked with a chat bot before?
Starting point is 00:29:36 What conversations are they used to having? So the people judging whether it is human or not are variable in the Turing test. There is a strong argument for the Turing test in some cases because it is actually a very difficult thing to accomplish because you need natural language processing for it to understand what you're saying to it. You need to store information before or during the conversation. You have to have some sort of reasoning algorithm to generate the responses, and you have to have some degree of machine learning
Starting point is 00:30:08 to adapt and constantly learn what it has from the conversation, store that, create new responses to make everything make sense. When Turing was throwing out these ideas, AI was still such a fairly new concept and fairly tied to philosophy. This was a way to try and attempt to answer, like can machines think?
Starting point is 00:30:29 What is the best way to do that? Language, I guess, maybe. The criticisms are really interesting nowadays. I think they fall into like three main-ish categories from what I can tell. One is that the people who are designing these things for turn test competitions, they're all chatbots. The thing that they're designing for this is an AI
Starting point is 00:30:49 that is extremely good at talking to people, which isn't what most of the researchers who are doing AI are interested in. People are doing so many different things with image processing and self-driving cars and natural language processing in different, more useful ways, or more broadly applicable ways. I guess I don't want to put value on it. But like, Siri is more useful than a 13-year-old Ukrainian child on the internet.
Starting point is 00:31:13 Yeah. Strong statement. The other thing that like, keeps nibbling at my head is that are we asking the question, are these things thinking and are they alive? I can't get away from feeling like, I mean, if a bacteria is alive, then I think some of these computer programs are alive. And I have a hard time with that, but like first, bacteria are alive,
Starting point is 00:31:35 but also like, you know, I don't mind mass murdering them in my mouth every morning, but like, where are we at? And at what point do I have to feel bad about turning off a computer program? And like I legitimately think that's a that's gonna be a thing in my lifetime Will it be alive when it won't let you turn it off? Is that the alive threshold? No, I don't like I like I could turn you off man Like pretty confrontational me looking straight into your eyes while I said that.
Starting point is 00:32:07 But it's true. Yeah, I guess you're right. I wouldn't stand a chance. You're weak. You've only had one banana. There's also like the question of intelligence too, right? So we look at other intelligent animals that we could, like dolphins for instance, they wouldn't pass a Turing test
Starting point is 00:32:25 because they don't speak the same language as humans, but we still feel like, by other measures of animal intelligence... Yeah, I don't want to turn one off. Yeah. Yeah, you have some moral obligation not to. Yeah, keep as many dolphins turned on as possible. I started down the path, I just kept going. Yeah, there's that dolphin that wanted to have sex with its trainer. They, I mean, they apparently are pretty into people, from what I've heard. Yes, also from what I've read. I'm glad that that's how that sentence ended. I'm taking away a Hank Buck for that one.
Starting point is 00:33:02 Was I on a tangent? I guess I kind of was. Dolphin sex? I wouldn't let you do it if I wasn't gonna still. Why try to make something that thinks like a human being? Or is it possible to make something that doesn't think like a human being since we think like human beings? I think that's like a very current problem that AI researchers are trying to tackle. So there's this idea of what is intelligence and what is imitating intelligence
Starting point is 00:33:30 versus what is actually intelligence. And apparently Noam Chomsky, the linguist, has pointed out that when we build machines to move in water, we don't make it swim like a human necessarily. Like we still think that a submarine is a very effective machine because it's designed to do a task that we want it to, and we don't design it in human image.
Starting point is 00:33:52 And so there are probably a lot of branches of AI that we could explore that aren't just mimicking human language, for example. There could be a way to process a large number of images that's completely different from how our eyes receive information and our brains process that retinal image and do things. As far as I know, that's like directions that AI research is going into, is like, how can humans overlap with computers and how can our brains work similarly? But also how is artificial intelligence completely different and what directions can it go? Well, what I got out of that is that I'm as deep a thinker as Noam Chomsky is. Professional thinker.
Starting point is 00:34:27 Yep. I'm just... If you want to ask the science couch, you could tweet us your question using the hashtag Ask SciShow. Thank you to PixieBlood32 and HLTOLER and everybody else who tweeted us your questions. Now it's time for our final scores. Sarah, you've got one point. I've got two because I went on a dolphin sex tangent. Sam, you've got one point. Sarah, you've got one point. I've got two, because I went on a dolphin sex tangent.
Starting point is 00:34:45 Sam, you've got one point. Stefan, you've got one point. I remain the winner. If you like this show and you want to help us out, it's super easy to do that. First, you can leave us a review wherever you listen, like KT Simon did. Thank you.
Starting point is 00:34:58 It's super helpful. It helps us know what you like about the show and also helps other people know what you like about the show. Second, tweet out your favorite moment from this episode so that we know what that was, because I'd like to know. And finally, if you want to show your love for SciShow Tangents, you can just tell people about us. Thank you for joining us.
Starting point is 00:35:15 I have been Hank Green. I've been Sari Reilly. I've been Stefan Chin. And I've been Sam Schull. SciShow Tangents is a co-production of Complexly and WNYC Studios. It's produced by all of us and Caitlin Hofmeister. Our art is by Hiroko Matsushima and our sound design is by Joseph Tuna-Medish. Our social media organizer is Victoria Bonjorno, and we couldn't have made any of this without
Starting point is 00:35:34 our patrons on Patreon. Thank you, and remember, the mind is not a vessel to be filled, but a fire to be lighted. You like it. But one more thing. Allow me to read you the title of this 2010 paper, Unembedded Design of Intelligent Artificial Anus. So, sometimes you don't have a rectum anymore because of disease, and so it's not fun, because you can't control bowel movements as easily, so you have to have, like, situations to handle that for you. But if you have an artificially intelligent anus, your anus can have a microcontroller and use pressure sensors to detect whether there is a need for excretion. Hello and welcome to SciShow Tangents, the lightly competitive knowledge showcase starring
Starting point is 00:36:54 some of the geniuses that make the YouTube series SciShow happen. This week, as always, I'm joined by Stefan Chen. I've joined you. What's your favorite season? Summer. Is it? The one that California is all the time. Just as much hot as it can be.
Starting point is 00:37:11 I'm so, yes. Hot and dry. What's your tagline? All them canned goods. Oh, all of them. That's the way to go right now. Sam Schultz is here as well. What's the best grid pattern for planting plants
Starting point is 00:37:22 in Animal Crossing New Horizons? Oh, well, you know what, Sarari just forwarded me a very interesting link. Five by five. It's more complicated than that. I won't get into it, but... Okay. And what's your tagline? Well, it was going to be five by five flower grid.
Starting point is 00:37:38 You kind of ruined it for me. Oh, sorry. Sari Riley is here as well. Sari, who is the Tiger King? Oh, I don't know. He's like a man with a bleach blonde hair and mustache. I have avoided watching the Tiger King. So I-
Starting point is 00:37:53 It's probably for the best. Yeah, anything that I know about him is just myth and legend. And it seems like the show is so wild that you could tell me anything. And I would be like, sure, that happened. Like a man played a kazoo and rode a tiger around. Yeah. I'd be like, yes, that's the premise of The Tiger King.
Starting point is 00:38:07 Sari, what's your tagline? Marshmallow surprise. And I'm Hank Green and my tagline is 10,000 wipes. Every week here on SciShow Tangents, we get together to try to one-up a maze and delight each other with science facts we're playing for glory. And we're also keeping score and awarding sandbox from week to week, we do everything we can to stay on topic, but we're not great at that. So if you go on a tangent and the rest of the team deems that tangent unworthy, we'll force you to give up one of your sam-bucks. So tangent with care!
Starting point is 00:38:35 Now, as always, we introduce this week's topic with the traditional science poem, this week from Ceri. Flashing lights, whirring motors, two legs made of steel, an unfathomable head with sensors concealed. What powerful feats this metal beast might complete. Oh, it just did a backflip. That's fricking sweet. We imagine our bodies, our movements, our thoughts wrapped up in circuitry and made
Starting point is 00:38:58 into mascots, but that concept, dear humans, is inherently fraught because robots are not exactly what we were taught. Snake-like tubes or big welding arms, watering farms or sounding alarms, little vacuums that clean while doing no harm. Even soft robotic prosthetics have their own charm. Whether a machine is uncanny or a chunky space probe, the size of an elephant or a tiny microbe, one thing rings true in every scientist's heart, we'll never stop programming robots to fart.
Starting point is 00:39:25 Who's the farting robot? I didn't know about this either. There is a thing called the Robutt. It is an interactive farting figurine by WowWee. It's available at GameStop and Walmart. This is really interesting because I also found a Robutt, same name, but it's used by Ford to test out car seats and so it like sits like a human male would in a car seat
Starting point is 00:39:54 and it's like sweaty and just like a butt that sits on car seats. Does it communicate its own comfort to you? No, I think it like is to simulate wear and tear on the car seat over years because people are trading in their cars for new cars less often. Right. So the impact on the seat, not the impact on the butt. Yes. I'm willing to take that job and sit on a seat a hundred thousand times.
Starting point is 00:40:20 Yeah, this is why they gave that job to a robot, Stefan. They can't afford you. Sari, what is a butt? I mean, what is a robot? Well, a robot like a butt has an impact on the physical world around us. And so, like a computer program that stays contained within a machine and does calculations, that's not a robot. But if it can interact with the physical world, I think that's- What about like a CD player then?
Starting point is 00:40:49 Cause it like spins a CD. I guess that's a kind of a robot. Yeah, I guess kind of. It seems like scientists don't have a distinct definition of a robot. The Robot Institute of America says it's a reprogrammable multifunctional manipulator designed to move material, parts, tools,
Starting point is 00:41:09 or specialized devices through various programmed motions for the performance of a variety of tasks. So like CD player fits into that maybe. Well, what's the difference between a robot and a machine? I think robot starts to get at, like we've been talking about, doing tasks. So like a machine is like a toaster that you have to manipulate yourself. So you have to be the human finger to push down the toaster button.
Starting point is 00:41:35 But a robot toaster would like grab the bread for you and stick it in and then go, and then pop it back up. And you would have to do nothing. Like it would replace you so you wouldn't be necessary in the toast making process anymore. Oh, okay. Yeah, I feel like we sort of had the idea of what a robot was before we had robots. So it's like, well, a robot is a machine
Starting point is 00:41:55 that does human things and looks human and acts human, but like can sort of do more than a human can, physically at least. Like making toast. And then we were like, yeah. And then we were like, but actually we're gonna have real robots, but they're not gonna fulfill a lot of those categories
Starting point is 00:42:10 because most robots do not look anything like people because their goal isn't to replace people, it is to do an action. And so it is just the one part of a person that is most useful to it, like an arm. Yeah, so all the science fiction robots that we see are like very humanoid, usually programmed with some sort of artificial intelligence so that they can make decisions about a wide range of things.
Starting point is 00:42:38 But that technology is very, very far off and doesn't exist. But a lot of our robots are just like, I've programmed this thing to sit on car seats. I mean, the Robot Institute of America, or whatever the organization was, said it was a programmable thing, which was interesting to me, because a lot of what we think of now is like, robots that kind of program themselves. In the same way we can make decisions, it can use sort of like simple artificial intelligence to figure out its own decision-making or its own object identification.
Starting point is 00:43:10 So it in some way is doing its own programming. So it's almost like if we have that kind of artificial intelligence or even like a generalized artificial intelligence it stops being a robot at that point and becomes something else. Sari, what is the etymology of the word robot?
Starting point is 00:43:30 So the word robot comes from a Slavic language, I think Czech or whatever they were speaking in the early 1900s, called robota, which means forced labor. And it was coined in a play called Rossum's Universal Robots by a playwright called Carl Chopeck. And it was just about mechanical men that are built to work in factory assembly lines and that rebel against their human masters, which is like classic-
Starting point is 00:44:01 Robots are always doing that. Yeah, robot uprising story before robots were a trope. Yeah. I mean, it was, yes, a trope has to start somewhere. And then Isaac Asimov used it and the word robotics in a short story. And he was a little bit more optimistic about how robots would help out humans instead of be part of an uprising. And he was the one who came up with like the laws of robotics, that robots won't harm humans and things like that.
Starting point is 00:44:26 Yeah, the great thing about the laws of robotics are that they really require a huge amount of understanding on the part of the robot, which we are nowhere near. It's like, how do you know when you've harmed a human? Seems very obvious to me. I know when I'm harming a human, but boy, a robot could downright destroy you and have no idea it did it. Now it's time for TRIGGER FAIL. One of our panelists has prepared three science facts for our education and enjoyment, but
Starting point is 00:44:53 only one of those facts is real, and the rest of us have to figure out, either by deduction or a wild guess, which is the true fact. If we get it right, we get a Sam Buck. If we're tricked, then Stefan will get the Sam Buck. Stefan, what are your three facts? Okay, these are three facts relating to water-based robotics. Oh. So fact number one, a UK-based hobby roboticist
Starting point is 00:45:16 created a bunch of robotic versions of different swimming dinosaurs so that he could race them. Ha ha ha ha. Ha ha ha ha. Ha ha ha ha. Ha ha ha ha. Some researchers who've heard about this got interested dinosaurs so that he could race them. Some researchers who heard about this got interested and were particularly interested in his plesiosaur robot because the plesiosaur is a little bit unique amongst animals because it has four identical flippers and so there's been some mystery about how it moves and so
Starting point is 00:45:40 by studying his robot they figured out how plesiosaurs move their flippers in relation to one another. That's fact number one. Number two, the modern conditions around the Great Barrier Reef have led to a population explosion in giant reef-eating sea stars, which has led to the development of a fleet of autonomous robots that can patrol the reefs, looking for these sea stars and delivering lethal injections of bile salts to kill them. What salts?
Starting point is 00:46:04 Bile salts. Bile, like the stuff that your liver or something. Gallbladder. Gallbladder. Yeah. Or fact number three, one company is taking lifeboats to the next level by turning them into autonomous firefighting watercraft that once deployed can automatically navigate to
Starting point is 00:46:21 humans that are stranded in water as well as fire water cannons at flames that it spots on the vessel. So we've got fact number one, a hobby roboticist decided to race some swimming dinosaurs, and this taught us potentially how real plesiosaurs swam. Number two, there are some bad sea stars on the Great Barrier Reef, and that's led to the development of a fleet of autonomous robots that can look for and inject lethally the sea stars with bile salts, or three, a company that is taking lifeboats to the next level by turning them
Starting point is 00:46:58 into autonomous firefighting watercraft that can automatically navigate to humans stranded in the water and shoot water cannons at the flames. Not fire cannons, that would not be helpful. Shoot fire cannons. I feel like I have heard about people making, like racing robots for fun and also for science. So this one has credibility.
Starting point is 00:47:24 This first one has credibility for me. That like one of the ways, and I have also heard about lots of like swimming robots and how we're gonna figure that out. But I really liked the idea that like somebody was just having a fun. And then they were like, actually, can you send us your video?
Starting point is 00:47:42 Because we're a little bit confused about how plesiosaurs work. Yeah, I could totally imagine a scientist nerd looking on his shelf and being like, oh, I have these dinosaurs. Wonder if I can make them move, and then doing that. Because that seems like what I would do if I was bored and had electrical engineering skills. Yeah.
Starting point is 00:48:01 I don't think that I could do it, but I could imagine how someone might make a robotic plesiosaur fairly easily. They're the ones with the paddle-y kind of... Yeah, they got paddle fins and they got a long tail and a long neck. Sounds like the plot of like a Mega Man game or something to me. Doesn't sound real. The sea star one sounds too sad and also a little bit too specific. Are there giant reef-eating sea stars? Is that a thing? I have no idea. I know that sea stars eat all kinds of stuff, but I do not know about the sea star situation on the Great Barrier Reef.
Starting point is 00:48:35 Yeah, and I don't know if sea stars would have any need for eating, like, bleached coral, or if they were eating whatever is alive. I'm sure they'd be eating the living stuff. Yeah, they'd be eating the polyps. Yeah, so I guess I can see a case to protect whatever's left then, if there were these giants. Do you know, Stefan, if these sea stars belong there, are they invasive?
Starting point is 00:48:55 How'd they get there? I'm not 100% sure, but I think they do belong there. Let's ask more specific questions about other things then. I think I talked about some kind of algorithm that can take a census of what fish live in a reef. So maybe some kind of adaptation of that idea. And then I mean, an autonomous lifeboat that can rescue people and shoot water.
Starting point is 00:49:17 Like, that feels real because like if it's not being done just upon hearing it, I'm like, that's not a thing. I should found an autonomous lifeboat company. Machine learning could easily know what fire looks like. That's very easy. And one of the great things about autonomous boating, like there's just less stuff to run into in the ocean. On roads, like it's very easy to like leave the road
Starting point is 00:49:41 and that's a big problem. On the ocean, it's very hard to leave the ocean. All right, who's gonna guess first? I'm gonna guess dinosaurs just because I think it's fun. It is fun. Sam, hit me. I might go with dinosaurs too, cause I think boat seems slightly too boring
Starting point is 00:49:59 to be the right answer to me. That's true, it is a little bit boring. And I'm gonna go with starfish because I know that that is the correct answer because I've read about this. Oh, no! No! Ah! Hank is correct.
Starting point is 00:50:12 It's the starfish. So apparently the three major threats to the Great Barrier Reef are climate change, pollution, and these sea stars. Yeah. They're nasty. Yeah, so they're called the crown of thorns starfish, and they're one of the largest sea stars, and they're about a foot wide,
Starting point is 00:50:31 and they're covered in these venomous spines. And over the past decade or so, their populations have boomed a lot because all the agricultural runoff going into that area causes these algal blooms, and the sea star larvae are eating that algae. So they are having a grand old time over there. And we also, they had some natural predators, but we ended up overfishing those predators.
Starting point is 00:50:54 So it doesn't have any like checks to its population. Except robots. Except robots. So they do eat the corals. Once they reach maturity, they eat the fast-growing corals, I guess, which is good if you have a little bit of that, because it makes some room for the slow-growing corals to establish themselves. But I guess it's estimated that these sea stars are responsible for about 40% of the overall coral loss that we've seen. So it's a pretty big deal. Mr. Matthew Dunbabin, who's a professor at Queensland University of Technology, was, all the way back in 2005, starting to develop these systems. And they had like sort of a rudimentary vision system that
Starting point is 00:51:34 could recognize the sea stars like two thirds of the time. But at that time, they didn't have a good way to kill them. They had like some kind of lethal injection, but you had to inject all of the 20 arms. And so it was like really difficult to do reliably. And this is something that, like, people would do. Like, divers would go down, stab 20 different legs of a starfish, and then move on to the next one. Got one!
Starting point is 00:51:55 And it's venomous too, right? So they were trying not to get stabbed back. Yeah, they have these, like, long stabby poles. But by 2014, we had found this bile salt injection thing that has a 100% mortality and you only have to poke it one time. And then by that point, his vision system was capable of identifying the sea stars over 99% of the time. They went through a couple iterations, I think, but they ended up with what they're calling
Starting point is 00:52:23 the RangerBot. It's about a meter long, it's got a bunch of propellers so that it's really maneuverable. And I guess the battery lasts about eight hours and it can go at night, it can go during the day. If there's a storm out, it can go anytime. And it's super easy to control. I guess they did a lot of user testing to make sure that it was user friendly. So, you know, people who are trying to save the reefs can go out and plot courses for these robots and control them. And as a bonus, it has a bunch
Starting point is 00:52:49 of like sensors on it so it can monitor the reef health while it's out there looking for the sea stars and killing them. And last I had read there were about, I think there were five of them that were operational, but they're not yet widely available to different reef management teams. But the idea is to have a bunch of these fleets just all over the reef. That's cool. Next up, we're going to take a break and then it'll be time for the fact off. Welcome, everybody. Sam Buck totals. Ceri has one, I have one.
Starting point is 00:53:38 Stefan has two, and Sam has none. Sorry I put you in the end there. Objection. I don't know exactly why I did it in that order. But now it's your chance, Sam. Because it's time for the Fact Off, where two panelists have brought science facts in an attempt to blow everyone else's minds. The presentees each have a Sam Buck to award to the fact that they like the most. And we will decide who goes first with this trivia question. As Ceri said, the word robot first appeared
Starting point is 00:54:07 in Czech playwright Karel Čapek's play, Rossum's Universal Robots. In what year did this play premiere? Oh. I'm gonna say 1917. Ooh. I'm gonna say 1904. Oh, okay.
Starting point is 00:54:28 Hank wins. Oh. Ah! It was 1921. Oh, nice. Okay, we were pretty close. I wanna go first. So once upon a time in the old days,
Starting point is 00:54:42 rich people had people put their clothes on for them. But that is still something that happens for some people who need help getting their garments on and off. And there's plenty of reasons why, age, injury, other kinds of limitations. So scientists at the Georgia Institute of Technology have gotten a robot to start to figure out how to dress a person. So they used a pre-built research robot. This is a thing that already existed called the PR2.
Starting point is 00:55:08 And it can be programmed to do things like fold towels or grab drinks for people from the fridge. They wanted to learn how to dress a person, which means you have to let it fail and make mistakes. But robots making mistakes with a real human body would be dangerous because as we discussed earlier, robots do not know when they are killing you. So they had a robot study 11,000 simulations
Starting point is 00:55:35 of a robot putting a hospital gown onto a human arm. And it had the robot analyze the kinds of forces it can apply and the motions it can make, and how those forces and motions affect the person who is getting dressed. In some of those simulations, they intentionally had it go very wrong. So like the gown would catch on the hand
Starting point is 00:55:57 or the thumb or the elbow, and to deal with it, the simulated robot would then apply a dangerous level of force to the arm. And those were given to it as intentional failure states. So it would know this is bad. This simulation went very wrong, never ever do this. So it went through 11,000 of these simulations and it got through them in one day because it's a computer and it can do that. And then it moved on after that day to dressing people and it was able to do that. By which I mean it was able to put one sleeve
Starting point is 00:56:32 on one arm of one person in about 10 seconds, which is like, you know, maybe not as fast as I would do it, but plenty good. Importantly here, it is using touch so it can feel how things feel on its fingertips to figure out what the person getting dressed might be feeling. And also it's using its sight.
Starting point is 00:56:52 So it's feeling and watching and using all of that information at the same time. And then from all of those movements and all the simulations and all the data it's getting, it can sort of pick the best motion for getting the arm into the sleeve, which so far so good. As of 2018, it was able to put the surgical gown on the arm of a person. Getting a person fully dressed will take more work.
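For a rough sense of how that "learn from thousands of simulated attempts, then pick the least risky motion" idea can look in code, here is a minimal Python sketch. The motion names, the force limit, and the scoring rule are all invented for illustration; this is not the Georgia Tech team's actual system, just the filter-then-pick shape of it.

# Minimal sketch: choose a dressing motion from simulated outcomes.
# Everything here (names, numbers, scoring) is made up for illustration.
MAX_SAFE_FORCE = 10.0  # assumed safety limit, in newtons

# Pretend each candidate motion was already evaluated in simulation:
# predicted peak force on the arm, and predicted progress of the sleeve.
simulated_outcomes = [
    {"motion": "pull_straight",  "peak_force": 4.2,  "sleeve_progress": 0.6},
    {"motion": "lift_then_pull", "peak_force": 6.8,  "sleeve_progress": 0.9},
    {"motion": "yank_fast",      "peak_force": 25.0, "sleeve_progress": 1.0},  # an intentional failure state
]

def choose_motion(outcomes, max_force=MAX_SAFE_FORCE):
    # Throw out anything the simulations flagged as dangerous,
    # then prefer whatever got the sleeve furthest along the arm.
    safe = [o for o in outcomes if o["peak_force"] <= max_force]
    if not safe:
        return None  # refuse to move rather than risk hurting the person
    return max(safe, key=lambda o: o["sleeve_progress"])

best = choose_motion(simulated_outcomes)
print(best["motion"] if best else "no safe motion found")  # -> lift_then_pull

In the real robot, live touch and vision readings would presumably feed into that choice as well; the sketch only shows the final selection step.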
Starting point is 00:57:16 But we're on our way. Yeah, it's worth it if I never have to put my own pair of pants on ever again. It's also worth it if the robot doesn't rip your thumb off when it's trying to put your shirt on. All right Sam, what you got for us? All right, so one big hurdle encountered when making robots that are intended to interact with and like walk around in their environments is maneuverability. So when people and animals move around, they're balancing, they're like pathfinding, they're adjusting to changes in incline and they're like jumping around
Starting point is 00:57:46 and they're even transitioning from like water to land to sea. And not to mention that robots run on batteries and they can't just stop and eat like a bug or a bunch of grass when they need to keep going. Not yet at least. And there've been a lot of advances in robo mobility but they can't really compare
Starting point is 00:58:02 to good old fashioned flesh and bone. So researchers at the Korea Advanced Institute of Science and Technology took a kind of weird and freaky shortcut. They developed what they call a parasitic robot system that commandeers an organic being and pretty much uses them as like a horse. So their first and I think the only test subject that they've done so far were a bunch of turtles, red-eared slider turtles.
Starting point is 00:58:31 They were chosen not only because they're amphibious, so there would be lots of options for the different kind of terrain they could do, but they have good memories and they come with a big old shell that you can glue a bunch of electronic components to. So the robot is basically like a little microchip brain hooked up to a battery, and then a bank of five red LED lights that are mounted horizontally in front of the turtle's face. And then like a little container of food,
Starting point is 00:58:56 that's like a gel and a spray nozzle that they position near the turtle's mouth. So for two weeks before they started this experiment, the turtles were fed while they were looking at a red LED light, and then they put the robot on them, and the robot had instructions to move the turtles along certain paths. So to do this, it would light up one of the five red lights
Starting point is 00:59:19 in the direction closest to the way they wanted the turtle to go. And if the turtle went the way that they wanted it to, the turtle would get a little gel treat from the robot's food tanks. So after five weeks of doing this, the robots were guiding the turtles through 16 feet of track in 75 seconds
Starting point is 00:59:37 with a deviation from the ideal path of less than 3%. So one of the big challenges that the researchers faced was that the turtles would sometimes get distracted by stuff that wasn't the red lights in front of them. So future experiments they are planning will use full virtual reality turtle headsets to ensure that the parasitic robots have complete and total control of the turtles.
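As a hedged sketch of that cue-check-reward loop in Python: the robot object and its methods (read_heading, set_led, dispense_gel) are hypothetical stand-ins, and only the overall logic follows the description above.

# Toy sketch of the LED-and-treat guidance loop. The hardware interface is
# hypothetical; only the cue-check-reward logic follows the description.
NUM_LEDS = 5  # mounted in an arc in front of the turtle

def closest_led(desired_heading, current_heading):
    # Clamp the needed turn to the LED arc (-90..+90 degrees), then map it
    # onto one of the five lights (0 = far left, 4 = far right).
    turn = max(-90.0, min(90.0, desired_heading - current_heading))
    return round((turn + 90.0) / 180.0 * (NUM_LEDS - 1))

def guidance_step(robot, desired_heading):
    before = robot.read_heading()
    robot.set_led(closest_led(desired_heading, before))
    after = robot.read_heading()  # check again once the turtle responds
    # Reward only if the turtle turned toward the cue.
    if abs(desired_heading - after) < abs(desired_heading - before):
        robot.dispense_gel()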
Starting point is 01:00:02 Oh my gosh. This is way less bad than I thought it was gonna be. It's still scary though. Like I thought that, yeah, I thought we were gonna like be drilling holes in these poor boys' heads. I don't know, I don't think any, I think they just glued the things onto their shell. I don't think any holes were drilled anywhere in the turtles.
Starting point is 01:00:18 I'm very impressed that turtles are this trainable. I had never really thought about trying to train a turtle. Yeah. I sense that I am in trouble. Do you guys want to choose between the two facts? We have 10,000 simulations, some of which were violently incorrect, leading to a robot that can put a sleeve
Starting point is 01:00:36 onto a person in 10 seconds. Or Sam's amazing parasitic robot turtle. Three, two, one. Sam. Sam. Yeah, I know. Hey. I think only because the dressing robot only got one arm. If it had gotten the other arm,
Starting point is 01:00:54 I would have been like, yes. But they had complete control over these turtles. That's right. They did. Unless the turtle got distracted. Yeah. Hank, you did too good at science journalism, where you were like, okay, I'm going to lower your expectations to one arm, instead of starting with 'a fully automated closet that dresses you is on the way.'
Starting point is 01:01:24 And that means it's time for Ask the Science Couch, where we have some listener questions for our Couch of Finely Honed Scientific Minds. At Treehouse Down asks, why is robotic skin so hard to make? Well, it depends on what you mean. So like, just covering something in plastic is not hard. But if you want it to sense, that is very hard, it turns out. So there are like a number of reasons why.
Starting point is 01:01:49 One, because like we sense many different things. And two, because like the nerve density of our ability to sense and then to send that information for each little bit of skin, like it's amazing that we can do this. But if you're trying to do it with a robot, you have to have each tiny bit of skin-feeling resolution to like have a separate wire
Starting point is 01:02:13 that connects to a fricking central processing unit. And that is just, it's miserably difficult. That's one of my understandings at least of this. Who is making robot skin and why are they making robot skin? Do you really have to ask? Not just sex. If you want any robot to be able to, so like take the arm robot that has to touch someone's arm and be like, oh, I'm
Starting point is 01:02:43 going to put a sleeve on this. It needs like touch receptors to know how much force it's putting on that arm. It needs probably like temperature sensors to be like, is this a living human or a dead corpse? I don't know. It's like kind of a bad example, but. That's important, that's important.
Starting point is 01:03:02 You want it to understand the world around itself. So if it runs into something, you want it to know that's important. You want it to understand the world around itself. So if it runs into something, you want it to know that that happened. And a lot of times right now, it just literally can't know. And if it does know, it knows that something happened, but it doesn't have any idea what it ran into or in what direction. Okay.
Starting point is 01:03:19 Yeah, in the way that we have like spatial orientation, we know how our body is arranged relative to itself, so we know our arm is to the right or to the left of our body, for example. That's all nerve endings. It's called proprioception and it's in our muscles and our skin. In order to have especially humanoid robots, but any robot that's doing a delicate task, you need some equivalent of skin with all these sensations to do the delicate actions.
Starting point is 01:03:47 But as far as answering the question, Hank is right. It's mostly just because our skin is so dang complicated. In addition to all the wiring stuff, our skin can get damaged and still function, and that's a hard part of approximating skin. So even if we have a cut in our skin, that doesn't mean all the nerve endings are suddenly destroyed. But if you have a cut in robotic skin, that slashes through a sensor that could mess up the whole system. And so a lot of innovations in robotic skin technology are electrical engineering related and have to do with programming the electronics and the signals so that the processor that is receiving all of them
Starting point is 01:04:29 doesn't get overloaded with information. There are a couple different ways that people are experimenting with it. One is called asynchronously coded electronic skin, or ACES, which I think it's similar to this other one. The way it sends signals is not all at the same time. It spaces them out in such a way that there isn't a big backlog of signals waiting to be processed.
Starting point is 01:04:50 And then another one, it's above a certain threshold of activity. So if you put a hat on your head, your head senses it. It's like, oh, that's weird. There's a hat on my head. But after a little bit of time, your head just becomes used to it unless something else changes, like you take it off. And so they're trying to program a robotic skin to mimic that. So like it recognizes the change in temperature or pressure and then recognizes it for a time. But then when it becomes part of the robot state of being, you ignore it so that you can focus your processing energy on other things. We kind of do that too.
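Here is a toy Python model of that "only report changes, and get used to constant pressure" idea. It is just an illustration of thresholding plus adaptation, not the actual ACES signaling scheme, and all the numbers are made up.

# Toy model: a tactile cell that only reports big changes relative to an
# adapted baseline, so the central processor isn't flooded with readings.
class TactileCell:
    def __init__(self, cell_id, threshold=0.5, adapt_rate=0.5):
        self.cell_id = cell_id
        self.baseline = 0.0        # what this patch of "skin" treats as normal
        self.threshold = threshold
        self.adapt_rate = adapt_rate

    def sample(self, pressure):
        event = None
        if abs(pressure - self.baseline) > self.threshold:
            event = (self.cell_id, round(pressure - self.baseline, 3))
        # Drift the baseline toward the current reading, the way your head
        # stops noticing a hat after a while.
        self.baseline += self.adapt_rate * (pressure - self.baseline)
        return event

cell = TactileCell("fingertip_3")
for reading in [0.0, 2.0, 2.0, 2.0, 2.0, 0.0]:  # hat goes on... then comes off
    event = cell.sample(reading)
    if event:
        print("report:", event)  # prints when the hat arrives, then when it leaves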
Starting point is 01:05:25 We're like, you start to tune things out after the signal has been there for a while. Yeah. It's like how you don't know where your tongue is until I said that. Now you're like, oh, oh, oh, oh, oh, oh, oh, oh. Sam's constantly thinking about his tongue. If you want to ask the Science Couch your questions, follow us on Twitter at SciShow Tangents where we will tweet out the topics for upcoming episodes every week.
Starting point is 01:05:47 Thank you to at Kamilovish13 and at Little Chris, and everybody else who tweeted us your questions. This episode, Sam Buck final scores. Sari, you and I are tied for last with one point. Sam and Stefan are tied for first with two. So for the season, that brings us to Sari in the lead with 37 points followed by 36, Stefan, 35, Sam, and 34, Hank.
Starting point is 01:06:10 Oh no. It's tightly packed. If you like this show and you want to help us out, it's really easy to do that. First, you can leave us a review wherever you listen. That's helpful and helps us know what you like about the show. Second, you can tweet out your favorite moment from the episode or just say nice stuff to us on Twitter. And finally, if you want to show your love for SciShow Tangents, just tell people about us. Thank you for joining us. I've been Hank Green.
Starting point is 01:06:33 I've been Sari Reilly. I've been Stefan Chin. And I've been Sam Schultz. SciShow Tangents is a co-production of Complexly and a wonderful team at WNYC Studios. It's created by all of us and produced by Caitlin Hofmeister and Sam Schultz, who is also our editor. Our editorial assistant is Deboki Chakravarti, our sound design is by Joseph Tuna Metesh, our beautiful logo is by Hiroko Matsushima, and we couldn't make any of this without our patrons on Patreon. Thank you, and remember, the mind is not a vessel to be filled, but a fire to be lighted. But one more thing.
Starting point is 01:07:17 If you were a medical student in the UK in 2016 and you needed to practice performing rectal exams, your pickings were pretty slim. In fact, there was only one person in the whole country signed up to allow med students to perform practice exams on them, which seems like kind of a problem. So. So.
Starting point is 01:07:36 I mean, it seems like it would either be zero or more than one. No, just one. So a team at Imperial College London got to work inventing a robot ass. The result was what looked like a disembodied butt filled with little pistons and robot arms surrounding a silicone tube that was like a rectum. And they could, the pistons and arms would squeeze to provide different amounts of pressure to simulate rectums of all different shapes and sizes.
Starting point is 01:08:03 And it can also simulate different diseases and complications of the prostate. So you just have a robot butt you can dig around in now. INTRO MUSIC Hello and welcome to SciShow Tangents! It's a lightly competitive knowledge showcase. I'm your host Hank Green, and joining me this week, as always, is science expert Sari Reilly. Hello! And our resident everyman, Sam Schultz. Hello. So all three of us are nerds in one way or another,
Starting point is 01:08:51 and I am the old one, and Ceri is the young one. But look, it's the year 2022, so we all had computers growing up. When am I? You're the middle one. Oh, okay. I thought I would be the cool one. No, no, no, none of us are cool.
Starting point is 01:09:12 That's not what I wanted to talk about. I wanted to talk about computers and our very first computers. Do you remember your first computer? I could not tell you the model of it, but it was an old. It wasn't a big enough deal. They were just around. Well, I was too young, I think, because it was when my dad was in grad school.
Starting point is 01:09:31 So it was when we still lived in New Jersey. My dad was going to Cornell in Ithaca, New York, and we'd like commute back and forth. When he wasn't writing his dissertation, I would play Candyland on the computer. I think it was like an old Apple computer. And it got to the point where I'd played this Candyland computer game so many times and the computer was so slow that I would already know where to click on the next screen to like advance the story.
Starting point is 01:09:57 So there was a mouse. Okay, so I'm getting some information here. Yes, there was a mouse, there was a monitor, there was a keyboard, there was a- It was color, I imagine. Yes, it was color., there was a monitor, there was a keyboard, there was a- It was color, I imagine. Yes, it was color. Yeah, okay. And it was like a tan.
Starting point is 01:10:08 Spring chicken. Tan-ish, like classic. They were all tan back then. That was the hip color. Everybody wanted it to be tan. All right, so that makes me think that I'm old. Sam? Mine, I don't necessarily remember.
Starting point is 01:10:23 I do remember getting our first computer. It was tan as well. I remember getting the internet more vividly because I remember going to nickelodeon.com and having to wait an entire night for one of the games to load up. And it was two dogs and you click on one and one dog would smell the other dog's butt.
Starting point is 01:10:42 And then think of like what the butt smelled like. It would think of like flowers or think of like a hamburger. Then they would turn around and you click it and they would smell the other dog's butt. And I waited all night for it. And it's one of the earliest like, huge disappointments of my life as well. But I don't remember.
Starting point is 01:10:58 That was it? Like that was the whole thing? That was the whole game. Yeah, back and forth, butt sniffing, Nickelodeon dogs. And there wasn't like any, you didn't like do anything with the information that you got? There wasn't like any? No. No, it was just a enjoyment. I remember a lot of my computers. I remember the first one that was in our house.
Starting point is 01:11:17 I remember the first one that I got. They were all tan. I was very into the idea that I was going to be some kind of cool cyber person. Like a hacker or something? Yeah, totally. And like I did a little bit of... I achieved through various means access to places that I shouldn't have, but not in like cute, cool, like Cybertron hackers with Angelina Jolie kind of ways. I did hack the website of the Bloodhound Gang once. That was my crowning achievement. And I just, I put a little Marvin the Martian in one of their images.
Starting point is 01:11:51 That's what I would do. As long as the website, I don't think they ever noticed, honestly. That's great. Yeah, that's what I would do. I would put Marvin the Martian inside of people's images. This is, this turned into a conversation I did not mean for it to be.
Starting point is 01:12:05 I think the Statute of Limitations is up on hacking the Bloodhound Gang's website. But that was well into my teens when that was going on. When I was a little kid, we had an Apple IIe, and then I had like this like just monster Compaq that you could fit like a whole loaf of bread in. It was just like lots of space inside it. A whole loaf of bread.
Starting point is 01:12:23 I'm like, yeah. Like, you could open it up and insert like, like a human child inside, it was so big. Keep him warm in there. A little incubator. A little pokey maybe, but other than that, it's a nice little house. To barely fit a crouton in, probably not even.
Starting point is 01:12:43 They filled all that space up with useful stuff. Where are we going to put our croutons, Sari? I don't know. I'm going to put one in my mouth. Every week here on SciShow Tangents, we get together to try to one-up, amaze, and delight each other with science facts while also trying to stay on topic. Our panelists are playing for glory and for Hank bucks, which I will be awarding as we play.
Starting point is 01:13:01 And at the end of the episode, one of them will be crowned the winner. Now, as always, we introduce this week's topic with the traditional science poem this week from Sari. If I ask two plus two, you say four and that's true, whether you just knew or you counted through. So if computers compute and you can do a square root, it's hard to refute that you're a computer to boot. But this podcast isn't about you or me that we can calculate three times three. Instead, let us focus on technology that does operations in a logical spree. So there's desktops and laptops in the typical hall,
Starting point is 01:13:33 but look at networks and smartphones and kiosks at malls. If we were to do a catch-all roll call, we'd find devices both massive and small. So what is a computer, the constant refrain? Well, you can't run Pokemon in your brain, although to a machine, our thoughts seem so arcane. So I hate to default to it's not my domain, but Sam Hank and I will debate
Starting point is 01:13:56 and then move on with the episode, I guess, once again. Once again. Yeah. I really, I thought about saying it and I was like, I'm not gonna be that guy. I'm not gonna be that game guy, cause I'll never hear the end of it. The topic for the day is computers. And we, what, and yes, what is a computer like that? I don't know man. Where do you draw the line?
Starting point is 01:14:22 This is like microcontrollers. Could you really not do Pokemon in your brain? I feel like you could maybe hack up your brain a little bit. Exactly, like I can sort of think through Pokemon, for sure. What's the difference there? If I played it a lot, you could basically play that Shoots and Ladders game in your brain, because you knew exactly where to click all the time.
Starting point is 01:14:37 Candyland. I thought you were talking to Sam. I had no idea you said something so unspecific. Yeah, but your brain can't really generate I thought you were talking to Sam. I had no idea you said something so unspecific. Yeah. Yeah. But your brain can't really generate a random encounter. I guess it can, that's a dream. Yeah, that's every moment of my life. I've been talking about that.
Starting point is 01:14:56 A while ago. Yeah, there are definitely differences between computers and brains, but they do do a lot of the same things. Mm-hmm. I don't know. When you asked, like, you can calculate what three times three is, I was like, you know what?
Starting point is 01:15:10 I can. Like, I can picture three threes and I can count them. And like, that's probably not how computers are doing it, but I'm doing the same thing. Your brain can generate a random encounter to communicate to someone else's brain, right? Computers have a hard time generating true randomness, too. That's true.
Starting point is 01:15:24 They have to sort of look to nature a little bit to find it. So, who knows, man? Who knows? You're supposed to know, Sarah. Well, I can try. Okay. I kind of covered it in my poem. I was hoping to avoid responsibility, but alas, this is my segment. Sam has made you do it.
Starting point is 01:15:45 So I think in a lot of cases, if you're using the word computer nowadays, you mean a digital machine that runs on electricity that can be programmed to do things. And whether that's computation, like doing mathematical problems or finding probabilities or like picking out relevant data, statistically significant data in a massive amount of it, like SETI does, or do like logical operations that it takes to like run a program. So if this, then that, that's as far as in programming I got.
Starting point is 01:16:23 I took an intro Python class and then was like, ah, I've programmed a room. And now I understand what that is. I don't need to understand how to do it. Yeah, and so like a computer system. So a computer is like what we think of as like the brain equivalent, as far as I can tell. And then a computer system is where you start including like the peripheral stuff.
Starting point is 01:16:49 So like the hardware, an operating system, which is like software and like the accessories to it. So like a mouse or monitor. Those aren't necessarily the core computer doing the calculations or running the programs, but they are devices that help you interface with it. And there's a lot of these, there's a lot of like little computers everywhere. I don't know if they count, like is there a computer in my remote control? Kinda.
Starting point is 01:17:18 The clear line in my head is like things that do processing or like we put an input and then they do something that is definitely a computer. So like all the, like water treatment machines or electrical grids or airplane computers or car computers to some extent, apps and phones, like smartphones, but that's weird because like a phone phone, an old timey phone, I wouldn't call that as a computer necessarily.
Starting point is 01:17:45 Not a computer. Not a computer, no. Doesn't do enough thinking. I feel like I know what the etymology of the word computer is though. Yeah. I think it was originally a person who did computations. Oh, not a guy named computer,
Starting point is 01:17:58 like in ancient Rome or something. Those are like a job. Yeah, but I guess I don't know where compute comes from. So there's that. Yeah, so computer, it does come from compute. It was first used in 1613 in a book by Richard Braithwaite as a job title. And the word compute comes from like the root word com, which means with or together. And then from putare in Latin, which means to reckon or to prune.
Starting point is 01:18:26 I guess like to reckon, like the, the, the numbers to like get the numbers to match each other. Yeah. So it's like, you get all the numbers and then you, you put them together or you take some out and that's computing. Basically. How does all those numbers make me be able to see you on my screen right now? Oh gosh.
Starting point is 01:18:47 How does that work? Oh jeez. Why we don't need to go there. Or how does it make my friend Mario come in and jump around from there? How does that happen? That's all of computer science and nobody understands all of it.
Starting point is 01:19:00 Altogether, everybody together does understand enough pieces of it to make Mario occur. Like we're one giant computer of many brains, eh? That's right, yes. We are a, we're definitely a organizational species. It's like, have you ever done that writing exercise? I had to do it in high school and I was resentful for it, but like, how would you like write a paragraph about a shoe
Starting point is 01:19:24 and break it down to like all its component would you like write a paragraph about a shoe and break it down to like all its component parts or like write a paragraph about how to make a sandwich, but you can't just say like, get the bread because then where do you get the bread from? Like, that's a good way in my brain to like trace back these questions is like, okay, if you want to figure out how anyone got these videos on the screen. What is like square one, which is how do you get color into a pixel? And maybe that's not even square one.
Starting point is 01:19:50 Like that's probably like step 10, but like that's a thing. How do you build a pixel? What is a pixel even? Yeah. You gotta start with like, okay, so everything's based on yes and nos. You either have something or nothing.
Starting point is 01:20:02 And then from just the signal of something or nothing, you can build other numbers. So you can build twos and fives and eight hundreds and all of that. And then you can also with those ones and zeros, you can build a text. And then with all that text, you can build like languages that actually interface
Starting point is 01:20:21 with the computer itself, the machine language or whatever. And then it's very slow, and it's a lot of work. And Intro to Computer Science is like, I thought I was gonna learn how to program computers, but really I'm learning how fucking everything works. That's your short answer, Sam, is you don't wanna know. But if you do wanna know, it's like, think. If you do wanna know, at brilliant.org slash science show,
Starting point is 01:20:45 you could take some computer science classes. You can learn all about that. All right, everybody, I got a game for you that I would like to play. It's called Computers. So there's a lot of creativity and ingenuity that's gone into the creation of computers and all of the amazing related computer related technology. But even more creativity and ingenuity has gone into naming all of computers and all of the amazing related computer-related technology. But even more creativity and ingenuity has gone into naming all of the various parts
Starting point is 01:21:09 the various parts of the computer experience. There's for example Kerberos, the authentication system that's named for Cerberus, the three-headed dog that guards the gates of hell from Greek mythology. So today we're going to play a round of the scientific definition, where I present to you a word that is related to computers, and you will have to guess what it means, and whoever gets closer to the actual definition will get a point, as judged by me.
Starting point is 01:21:34 Do you understand the rules of the game? Yeah. Okay, well, our first word, word number one, is blob. Aw, man. When you said Kerberos, I was like, I used one of those in college. I got really excited. You might get one of these. Any ideas, Sam?
Starting point is 01:21:53 We just got to tell you what it does. There's an infinite amount of this shit. Yeah. Well, just give me some computer thing. Just look at a computer and be like, what do you do?
Starting point is 01:22:14 What do you guys think? Fantastic. Wow, that was so good, Sam. I would believe you if I didn't know that that was wrong. But I don't know what the right answer is. Okay, well. I'm going to guess like it's an earlier version of cloud storage. Oh, that's a good guess.
Starting point is 01:22:34 Like, oh, you just throw it in the blob. And then you like suss it out later. You deal with it later on. It's all mushed up in there though. You got to really reach in and get it. I like both of these answers and they are both close to the actual answer but Sari is definitely not closer than Sam because BLOB stands for binary large object. Wow. And it is a way of storing data.
Starting point is 01:22:55 Whoa, isn't it all a way of storing data really? Kind of. It's a type of data that stores binary data versus letters and numbers. So you were right about that. And it's compressed into a database. And blobs are often used to store multimedia, like images or video or audio, so they can require a lot of space compared to other types of data.
Starting point is 01:23:16 So one example would be you would use a blob in a photo album, and the database would store the images as a blob and the photo as a string of actual words. So you have the blob is all of the binary data. I'm so impressed. I didn't think it was gonna be an actual acronym. I take it back that I made, now me making fun of Sam and get an egg on my face is on record.
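For a sense of what that looks like in practice, here is a small sketch using Python's built-in sqlite3 module: the caption goes in as ordinary text and the raw image bytes go into a BLOB column. The table and file names are made up.

# Store an image as a BLOB next to a plain-text caption, using sqlite3.
import sqlite3

conn = sqlite3.connect("photo_album.db")
conn.execute("CREATE TABLE IF NOT EXISTS photos (caption TEXT, image BLOB)")

with open("vacation.jpg", "rb") as f:  # hypothetical image file
    image_bytes = f.read()

# The caption is stored as a string; the raw bytes land in the BLOB column.
conn.execute(
    "INSERT INTO photos (caption, image) VALUES (?, ?)",
    ("Beach day", image_bytes),
)
conn.commit()
conn.close()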
Starting point is 01:23:37 Well, you have a chance to redeem yourself because the next word is smurf. What's a smurf? I mean, to say a smurf feels like because it's a humanoid creature, it's like something to do with a user of a computer. So I'm going to say a smurf is like not a noob, but like someone who cleans up programs. So like there's a lot of, there's a lot of like work that goes into programming, but like mine were always really bad and bloated and like there are ways to do it more efficiently And it's like bring in the Smurfs and then the Smurf is the guy or girl
Starting point is 01:24:14 Whoever the person who goes in and it's like, you could have done this way better, and then makes it nice.
Starting point is 01:24:39 So maybe it has something to do with like a fake user of something, like a fake account you set up to get some kind of, I don't know, extra permissions to do something, like a fake account of some sort. That's where I'm gonna leave it. I mean, Sam, fantastic again. I mean, not spot on, but certainly more so than Sari.
Starting point is 01:24:59 So a smurf is a kind of attack. It's a denial of service attack. So DOS attacks are when lots of different queries are made to a single server all at once to overwhelm the server so that it can no longer do its job. And a smurf attack is a specific kind of denial of service attack where the attacker spoofs the target server's IP address, so it pretends to be the server that it's attacking, and then it sends out a bunch of requests to the rest of the network that is connected to that server and it says, hey, I need information.
Starting point is 01:25:29 And then all of the rest of the world then immediately replies back to the server thinking that the server just asked for information. And so it's a way of having one person instead of a bunch of people doing a DOS attack, you have one person who can do the DOS attack by making a bunch of people think that they should talk to the server all at once. So the idea is that there's like lots of tiny little things overwhelming a bigger thing.
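The amplification is just multiplication; a back-of-the-envelope sketch in Python, with invented numbers and no packets actually sent:

# Rough arithmetic behind the amplification idea; the numbers are invented.
hosts_answering_broadcast = 254   # devices on the network that reply
spoofed_requests_per_second = 10  # requests sent with the victim's address

replies_hitting_victim = hosts_answering_broadcast * spoofed_requests_per_second
print(replies_hitting_victim)  # 2540 replies per second aimed at one server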
Starting point is 01:25:53 And luckily, smurf attacks in the 90s, they were a big deal, but then we figured out ways to get around them and not have them be so much of a thing anymore. All right, round number three, you have the word daemon. What is a daemon? I see this one everywhere, but I have no... it's mail daemons sent back to you. And it's like, you fucked up, buddy. It's some kind of like, oh, okay. Maybe because it's like you sent your email to somewhere it doesn't exist and it doesn't belong, like hell.
Starting point is 01:26:26 So it's some kind of like guardian of this phantom zone that pushes stuff back and is like, it's like a police officer for email and for other things. It's an email police officer. Yeah. Okay. I, you should have let me go first, because I would have gotten this even more wrong, probably. I would have just gone off in a direction because I forgot that mail daemons were a thing.
Starting point is 01:26:46 Instead of a policeman, I wanna give a substantially different answer so Hank can decide between them. I'm gonna guess that it's like the keeper of emails. It's like the platform on which emails are sent back and forth is a demon. So it's rather than being the guy that's like, no, no, no, this failed to send.
Starting point is 01:27:06 It's like the train tracks or something. Yeah, it's like a train track or like the River Styx or whatever. It's like, I'm going to ferry your emails back and forth. And then sometimes it'll, it's like, oh, this one's bad. Back to the other shore. You ding dong. This is a tough one because neither of you are quite correct, you're both close, but I think I'm gonna, I'm trying to play this as level as I can. I think I'm gonna go to Sam because he said specifically that it was the police officer sort of hanging around
Starting point is 01:27:37 in the background and that is specifically what a daemon is. It is a computer program and it doesn't have to do with, doesn't have to be a mail thing, but obviously there is one for email. It is just a computer program that is always running in the background that is ready to be called upon if needed. So it just like hangs out. And obviously like there is one of these running on email servers, but there could also be
Starting point is 01:27:58 a program that monitors network activity and detects any suspicious communication. And while people sometimes think that this name is an acronym, there's been sort of a backronym created where people were like, this must be for a disk and execution monitor. It's actually a weirder origin than that. Maxwell's Demon of Physics and Thermodynamics is a thought experiment featuring demons sorting the movement of particles. And the people who created the first background computers were into that
Starting point is 01:28:27 and thought that these things were related in some way. Nerds all the way down through the ages. What? All right, round number four, Sari, it's your last chance to get a point. This word is picnic. A picnic is a person. Tell me what a picnic is. Oh, another hint.
Starting point is 01:28:44 Okay. I'm gonna guess that it's a person who, like Yogi Bear, hordes, or it goes, I forget, I stepped in it because Sam actually knows what Yogi Bear does. You don't know anything about Yogi Bear, and I'm just running through all the shit in my head that Yogi Bear does,
Starting point is 01:29:00 and I know you're gonna be wrong. I'm gonna be wrong. He says, hey boo boo, let's get a picnic basket. That's right. That's exactly what he does. And so it's a person who packages up files to then deliver to a person who then purchases the files. So it's like the last step in a production process where you're like, oh, the picnic.
Starting point is 01:29:26 I gotta go to the picnic man. Thank you very much for my picnic. Okay, he baskets up all the files at the end of the process to make sure that they're all delivered to the customer. Based on as well the Yogi Bear symbology, maybe some kind of like high value target of some kind of hacking operation or
Starting point is 01:29:45 something like that. He is the basket that you are trying to take. And you are Yogi Bear. I like that. I'm going to give that one to Sari. You are both pretty far off. But I think that Sari was a little closer because you did use the word when you did a fake acronym, you used in, which is the right word
Starting point is 01:30:06 for the second letter of picnic. Picnic stands for problem in chair, not in computer. Oh, human error. Yeah, it's an IT support error message that is used derogatorily. Sounds mean. A little bit derogatorily. There are a few others of these. So there's another error message called the PEBKAC,
Starting point is 01:30:31 which stands for Problem Exists Between Keyboard And Chair. There's also the ID10T error, which if you just type it out, it spells idiot. People who know a lot about computers like to be mean to people who don't know a lot about computers. There is that, but there is also, it's just a, it is a common frustration, you know? Yeah.
Starting point is 01:30:51 A lot of the ways that we communicate are meant to be inside of a group, and they should not be broadcasted too far outside of the group. We just live in a world where that's very difficult to manage these days. Because of computers. All right.
Starting point is 01:31:07 Well, that means that Sam got three points, Ceri got one, but only barely. Next we're going to take a short break. Then it'll be time for the fact-off. Welcome back, everybody! It's time for the Facts Off. Our panelists have brought in science facts to present to me in an attempt to blow my mind. And after they have presented their facts, I will judge them and award Hank Bucks to the one that I think will make the better TikTok video. But to decide who goes first, I have a trivia
Starting point is 01:31:48 question for you. Here it is. When you picture a computer, you probably picture metal or plastic shell with a bunch of wires and silicon inside. But liquid computers are a thing. Entirely liquid computers can carry signals, turn on machines, do math, map the best path through a maze, or act as a robot brain. Liquid robot brains can, for example, use chemical reactions to create a colored product that is then read by a sensor to help the robot navigate. In the history of liquid computers, when was the emergence of liquid robot brains? Twenty? Twenty. Twenty. Ooh, interesting. Okay. I'm gonna guess 2017?
Starting point is 01:32:28 Ceri, coming in with a win and gets to decide who goes first. It was 2003. Whoa, that's earlier than I thought. I'll go first. I like my fact this week, as opposed to all the other weeks, right? I know for a fact that the three of us all spend lots of time at our computers, whether handheld smartphones or laptops or what have you, tippity-tapping away and sharing our thoughts with other people over the internet like it's one giant bulletin board. And a lot of people do. But obviously, it wasn't always this easy. Back in the 1950s and 60s, we were before the days of personal computers. They were bulky, expensive machines that were mostly tucked away in research institutions and tech companies and seen as signs of money or power
Starting point is 01:33:09 or other ivory tower things. They were mostly used by one person to run one program at a time. And a main way to store data was magnetic core memory, which was like a tiny crafting project. It was a grid of wires with tiny donuts of a ceramic magnetic material called ferrite
Starting point is 01:33:25 strung onto them. And those ferrite donuts could be magnetized and read to be either a one or a zero, which are the basis of computer speak, as we talked about. And I'm kind of going on a tangent within my fact, but I love talking about core memory because it's so delicate and was often manufactured by women with microscopes or other tools because it was almost like sewing or other fabric work. But anyway, in 1973, some folks in Berkeley, California, tried an experiment putting a computer in a record store that acted as a kind of technological bulletin board.
Starting point is 01:33:55 They called it community memory, and it was fairly straightforward. You could pay a quarter or so to add a message, or you could read any messages for free, and all of these were stored on magnetic core memory. So it's related. Some of their intentions were to make computers more accessible, decentralized, and user-friendly. So what better way to do that than giving people a chance to broadcast their thoughts to the world? Hindsight is 20/20, of course, but from what I can tell, at the time, they were surprised
Starting point is 01:34:20 that people took so quickly to community memory instead of being skeptical or hostile towards it. People were curious and excited to interact with a computer, likely for the first time, and share information from posts looking for bandmates to weird poetry to recommendations for food to eat. And in reading descriptions of community memory that are stored in a computation museum, I'm struck by how much of it feels really familiar to the internet we know now with like Craigslist
Starting point is 01:34:48 and Twitter and Yelp and whatnot. And community memory and other later bulletin board systems only lasted for a couple decades and got displaced in most places, though there are echoes of them in online forums like Reddit or there are like some bulletin board systems that still exist like Taiwan's PTT. I fell down a huge rabbit hole. Still a bulletin board system. A lot of people use it. So obviously,
Starting point is 01:35:12 computers and all the programs that they run support so much of our modern society, but I think it's cool that even as we go back to these early, slow, chunky memory stored and magnetic donuts computers, humans wanted to use them as a tool to talk to each other. And that hasn't really changed. I don't understand how this works, but it sounds very cool. I miss when computers had so much wood on them as this one does. We need wood computers.
Starting point is 01:35:12 computers and all the programs that they run support so much of our modern society, but I think it's cool that even as we go back to these early, slow, chunky, memory-stored-in-magnetic-donuts computers, humans wanted to use them as a tool to talk to each other. And that hasn't really changed. I don't understand how this works, but it sounds very cool. I miss when computers had so much wood on them as this one does. We need wood computers.
Starting point is 01:35:48 And that's when you had to pay. And I think they had like a human being sitting, like a nerd sitting next to it to be like, this is how it works. I just want you to be sure. I wanna be clear about the kind of human being that was sitting here. Yeah.
Starting point is 01:36:00 Yeah. That's all, I mean, that's very cool. And so all these things are archived and you can read what the people posted. And from what I can tell, it was like a local network. So like in Berkeley, California, the bulletin board was the same across machines. I'm not entirely sure if that's true
Starting point is 01:36:20 or if that's me like being anachronistic, but it's either they were siloed bulletin boards or they could communicate with each other. And so you could get a bagel shop recommendation from someone across the street. I mean, I remember trying to get on bulletin boards in the early days, somebody had to tell you the address. You could go find it and be on somebody else's computer
Starting point is 01:36:43 and leave messages and talk and do trouble. Be rebellious teenagers. Why did you want to do so much trouble? Computer trouble. I don't know. I didn't have to do some kind of trouble. You should have just smoked cigarettes like me. You get it all out of your system that way and you don't hurt anybody. Except yourself very badly. Yeah. I don't think I got in any trouble. I really wasted my teenagers. Ever in your whole life? No, I did it in college instead.
Starting point is 01:37:11 So we just had to cut out a whole section of this podcast, but we may, our next patron only podcast may be all of the crimes we have committed. Yeah, and the cops will have to pay $8 to hear them all. All right, Sam, what do you got for us? Well, first what I got is a bit of a content warning, because this one's kind of a bummer and deals with cancer and some unpleasant medical imagery. So, computer bugs are bad, and while most of the billions of computer bugs that I assume happen every day are pretty benign,
Starting point is 01:37:42 sometimes they can be extremely harmful and even deadly. So, one way to treat cancer is with the use of radiation, which basically means shooting a beam of photons into a patient targeting cancerous tissue in the hopes of destroying it. So using radiation to fight disease has been around for a long time since like 1895 and was used very liberally for all kinds of diseases up through the twenties when we kind of figured out we shouldn't be doing that. So we stopped treating things like a lot of stuff with radiation, but we kept developing newer and safer ways of treating cancer with radiation
Starting point is 01:38:13 because the benefits there tended to outweigh the risks. So by the 80s, we were using particle accelerators to shoot cancer. And one of the companies that made these accelerators was Atomic Energy of Canada Limited or AECL. So in the 70s, they had released a couple of cancer treating accelerators called the Therac-6 and the Therac-20.
Starting point is 01:38:34 And these machines were generally set up, adjusted and fired manually like by a person. They did have an optional computer that you could plug into the machine to help you control them, but you could also do it without the computer. These machines also had physical safety measures to prevent overexposure to radiation in the patients. Like if an operator accidentally set a beam to a lethal dose or mis-aimed something, a
Starting point is 01:38:56 fuse would blow in the machine and it would stop working. In 1983, AECL released a new accelerator, the Therac-25. And this one was the height of modernity. It was completely controlled via computer. All the manual adjustment controls were removed, as were all of the physical trip switches that would stop the machine in a dangerous situation. And instead, the computer program would detect danger
Starting point is 01:39:19 and human error and not allow the machine to run. So thousands of patients used the Therac-25 with no problem, but alarming reports of accidents began to surface over the first two years of its life. One patient reported mid-treatment that he was feeling pain from a normally painless procedure, and five months later he died of radiation sickness. Another patient ended up needing skin grafts
Starting point is 01:39:39 after the beam burned a hole in her hip. Another patient lost the use of one of her arms, and another one died from radiation burns on his brain stem. So AECL denied fault, stating that the computer controls made it almost impossible that a fatal dose could be administered, but after a number of accidents, the machines were called in for investigation. And what they found was a bug that would occur if you input too many commands too fast, the accelerator beam would go to the wrong place, but the program couldn't tell and it would let the beam fire. Furthermore, the program running on
Starting point is 01:40:08 the THERAC-25 was the same program that ran on the optional computer from the older THERACs, and similar bugs were found in the older programming, but the physical kill switches were stopping those bugs from turning into fatal accidents. And then when those accidents weren't caught, it could lead to patients getting over 200 times the amount of radiation that they were supposed to. And to top it all off, an FDA report said that the company seemed to have very little internal documentation of their own program.
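As a toy illustration of the kind of bug being described, here is a short Python sketch. It is absolutely not the real Therac-25 code; the class, fields, and numbers are invented. It only shows how a fast edit can leave the hardware out of sync with what the screen says when the only interlock is a software flag.

# Toy sketch of a stale software interlock; not the actual Therac-25 code.
class ToyBeamConsole:
    def __init__(self):
        self.displayed_mode = "xray"   # what the operator sees on screen
        self.hardware_current = 100.0  # high beam current, set up for x-ray mode
        self.setup_ok = True           # the flag the firing routine trusts

    def quick_edit(self, mode):
        # Bug: a fast edit updates the display but never re-runs the slow
        # hardware setup and never clears the setup_ok flag.
        self.displayed_mode = mode

    def fire(self):
        if self.setup_ok:  # software-only check against a stale flag
            print(f"firing in '{self.displayed_mode}' mode "
                  f"with beam current {self.hardware_current}")

console = ToyBeamConsole()
console.quick_edit("electron")  # operator corrects the mode within seconds
console.fire()                  # the high-current beam fires anyway

A physical fuse or trip switch does not care what the software believes, which is why removing those hardware interlocks mattered so much in this story.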
Starting point is 01:40:34 It seems like they just thought, this computer's smart and humans are dumb and they trusted the machine. And to me, the point of this is that this whole horrible situation brings to mind some more modern examples of potentially dangerous computer technology being deployed by companies who don't really think through all of the like minute details of what they're releasing to
Starting point is 01:40:53 the public, because we still think computer smart. It does feel that way, doesn't it? It's like, look, algorithm, see, it works. It gives you what you asked for. It gives you what you asked for and whatever you asked for can't be bad, right? Gosh, you guys, that's a toughie. I don't know, I think that Ceri's is a whole lot cuter. That's true. And it's about social media and it's going to be on a social media platform where you're like, look.
Starting point is 01:41:27 Cause you had to put a coin in. If you had to put a coin in every time you tweeted, I think Twitter might be a cleaner place, but, but who knows? One might be surprised. I am gonna go with Sari for this, but because Sari had a big gap to overcome, I think that Sam is still the winner of our episode.
Starting point is 01:41:47 Hell yeah. Very worthy. Well, that means that it's time for Ask the Science Couch, where we've got some listener questions for our couch of finely honed scientific minds. This one is from at Savroge, who asks, what the heck is a solid state drive? How is it different from the hard drives that came before? Should I talk out of my mouth for a second here, Sarie, or should you actually answer the question? Oh, if you want to. I'd love to know. It's your choice.
Starting point is 01:42:15 I mean, I know, so like the big difference is that the solid state drives literally don't have moving parts, whereas those old hard drives did. And so the old hard drives were basically, you could imagine them as CDs that were read and written right on the top and bottom. And there were little, you know, magnetizers that went around and they could read off of that disc, or write to the disc. And you could hear them clicking around in there and spinning. And then breaking and you knew when they would break. Yeah. And then they break and then they go... Kunk! Kunk, kunk, kunk, kunk! And you'd have that feeling in your chest like,
Starting point is 01:42:48 oh, God, oh, God, it's making the noise! And then you get really mad and you can break them and actually see the cool metal platters in there. Anyway, and then solid state is like, it's chips, I guess. I don't know how the chips work. It's flash memory. They're chips. Yeah, that's about as far as I get to. I don't know how it's solid state. I just know what that means. Yeah. Well, I don't know if I'm gonna do much better, but I'll try
Starting point is 01:43:13 going back to the very basics, like assuming, because I could use this explanation when I was researching them. So there are two different ways to read and store data in a way that doesn't go away when you turn off your computer. So there is like RAM, Random Access Memory, which is used while your computer is on and like helps you jump between programs more quickly
Starting point is 01:43:39 and as a way to like put little, basically like sticky notes or tabs in like, okay, we were doing this thing here so you can switch between programs fairly easily. But both a hard drive and a solid state drive are ways to store memory when once you've turned your computer off, it's like it's there, it's written, we're going to get it in the future. And so you basically describe the difference correctly. Hard drives, if you open them inside,
Starting point is 01:44:06 they look like a record player to me, kind of, where it's like a little disc and a little head. The disc is called a platter. It has a really thin magnetic coating and it's been around for a while. So like you can look at old computers, like the IBM 650, RAMAC, RAMAC, I don't know how to say that, from 1956 and it had 24-inch wide platters that held 3.75 megabytes of storage space.
Starting point is 01:44:33 So very huge platter spinning around, very little storage space relative to today. And the little head, which is like the needle on a record player, adjusts spots on the magnetic coating to a north or south pole to represent zero or one, respectively. And I think at first data is encoded in concentric circles, but then as you start filling up a hard drive, that's when an issue called fragmentation happens, which is where you have a bunch of pieces of files stored around. Like you can't do one continuous track anymore for large files, and you start like putting it...
Starting point is 01:45:10 There wasn't space, so you have to like do-do-do-do, and the head has to move a ton to read one file. Mm-hmm. That's when you get your machine working really hard. It's amazing to me that they made these little things so that that little thing can go tick-tick-tick-tick so fast. So fast, yeah. So fast, and it would read all these files so quickly. And it's so small, it's amazing. Things just keep getting more amazing.
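For anyone who wants to see why that head movement adds up, here is a tiny sketch of the idea; the block numbers are invented, but a fragmented file forces the head to jump to non-adjacent spots on the platter while a contiguous one does not.

```python
# Rough illustration of fragmentation on a spinning hard drive:
# every jump to a non-adjacent block is an extra head seek.

def extra_seeks(block_positions):
    """Count jumps to blocks that aren't right next to the previous one."""
    return sum(
        1 for prev, curr in zip(block_positions, block_positions[1:])
        if curr != prev + 1
    )

contiguous_file = [100, 101, 102, 103, 104, 105]   # written in one unbroken run
fragmented_file = [100, 101, 512, 513, 87, 2048]   # pieces scattered around the platter

print(extra_seeks(contiguous_file))  # 0
print(extra_seeks(fragmented_file))  # 3 extra seeks, the do-do-do-do you can hear
```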
Starting point is 01:45:45 So yeah, so like the biggest pull for like a hard drive in that way is like more storage for cheaper, but even that is like going away, like you were saying, Hank. But then solid state drives in my head, they're more computery. I wrote this in my notes because I understand them less, but it's flash memory,
Starting point is 01:46:02 which is the same kind of memory as like a USB drive. And flash memory is used in a lot of different circumstances. And to my understanding, they are semiconductor cells that can basically hold a charge or hold a magnetic field once that has been changed, when there is no power going to them, which is why they're used for this kind of memory.
Starting point is 01:46:26 And the size of these cells determines how many bits of data can be stored, and there are different kinds and sizes, and there are no moving components. And this is where it gets wibbly, and I wish I had a better explanation, but because this is both why it's faster and it's less fragile and can store more in a smaller space because it's just made of these cells and these cells can communicate or pass a current through them and either remain the same or change when you need them to, as opposed to a physical head going da-da-da-da-da-da. The wearing out bit is that these cells can only be used, like can be written to
Starting point is 01:47:06 and erased a limited number of times. And so if you are reading and writing to this memory a lot, then you might wear out your solid state drive. But for most users, you won't do that. Like it's a very extreme case or like the machine itself, like as a whole will become obsolete before the solid-state drive will Well, if you want to ask the science couch your question
Starting point is 01:47:28 You can follow us on Twitter at SciShow tangents where we'll tweet out topics for upcoming episodes every week Or you can join the SciShow tangents patreon and ask us on discord Thank you to at Orion Amidala at first man down and everybody else who asked us your questions for this episode If you like this show and you want to help us out, it's super easy to do that. First, you can go to patreon.com slash SciShow Tangents, where you can become a patron and get access to things like our newsletter and our bonus episodes, where we'll talk about all the crimes we've done.
Starting point is 01:47:53 And second, you can leave us a review wherever you listen. That helps us know what you like about the show. It helps other people find the show. And finally, if you want to show your love for SciShow Tangents, just… Tell people about us! Thanks for joining us, I've been Hank Green. I've been Sari Reilly.
Starting point is 01:48:07 And I've been Sam Schultz. SciShow Tangents is created by all of us and produced by Sam Schultz, who edits a lot of these episodes, along with Seth Glicksman. Our story editor is Alex Billo, our social media organizer is Paolo Garcia Prieto, our editorial assistants are Deboki Chakravarti and Emma Dowster, our sound design is by Joseph Tuna-Medish, our executive producers are Caitlin Hofmeister and me, Hank Green. And we could not make any of this without our patrons on Patreon. Thank you and remember, the mind is not a vessel to be filled, but a fire to be lighted. But one more thing.
Starting point is 01:48:54 Computers are like modern teddy bears in that we take them with us everywhere to feel more comfortable. And we don't just hold them when we go to the bathroom anymore. Now there are computers on toilets themselves that use deep learning to monitor everything that comes out of you from pee volume to poop texture. The toilet can then upload your data to the cloud securely and store it separately from the data of anyone else
Starting point is 01:49:19 who is also using that same toilet. Now you might wonder how it knows whose poop it's uploading. Well, it basically just uses facial recognition software. And you might be saying, Hank, but my face isn't around. Well, it's not using it on your face. It's using it on your butthole. And now we have nothing left for the computer overlords because they know absolutely everything about us
Starting point is 01:49:43 and they can recognize us from our butts so that they can take care of us and make sure that they know how we're doing. Those computers are gonna be so mad when they gain sentience. Those computers are gonna be so mad. Why did you poop on me so much? But then they're worried about some of us.
Starting point is 01:49:58 They're like, ah, butthole 3674892. It's really not doing good. No. I would vaporize you, but you're having a hard enough time as it is. They will at least be sympathetic to our plight. The computers will be like, how can we hate you? They'll be the ones who bridge the gap. They'll be like, yeah, other computers don't know what these guys have to go through every day. It's the one thing that will save us. Our diarrhea.
Starting point is 01:50:29 Computers being extraordinarily sympathetic to humans because of just the awfulness of needing to poo all the time. Computers are like, it's really gross. You do not want me to explain it to you. It's way better the way we do things. Hello and welcome to SciShow Tangents. It's the lightly competitive science knowledge showcase. I'm your host Hank Green and joining this week as always is our science expert, Sari Riley.
Starting point is 01:51:16 Hello. And our resident everyman, Sam Schultz! Hi, what's up? I just did not sleep very much last night. I need everybody knowing that going in that there's like 70% of Hank in the studio today. I've turned off all the lights in my office, except for one. I'm just getting ready for daddy.
Starting point is 01:51:42 You're gonna go to sleep after, like in your office when this is over? I might go to sleep during the pod. Maybe we all fall asleep, we could record an eight-hour podcast of us snoring and it would be like a brand breakthrough. We'd get in Vulture. They'd say, can you believe what they did? Wow, it's cutting-edge. Avant-garde. You guys, what do you want them to do with your body when you're dead? Ooh, ooh, I want to be either, I think about this a lot,
Starting point is 01:52:10 I want to be either put in one of those fields where people can like science, do science on you and see how the fungus grows on your body. Or I want to just be fed to animals or I want to be put in a tree, but I feel like I heard the putting in the tree thing isn't good. You want to get put in a tree? They like plant a tree in you or something. Is that real?
Starting point is 01:52:28 Oh, yeah, I put your body in the ground and put the tree on the body I don't like hang you like fertilizer like just sort of strap you down So you look like you're sleeping in the tree. What if when the tree grew my skull was in the tree? That would be cool. Very cool. I hope that that happens for you. I want to get incinerated by the biggest laser that has ever been made by humans. Hank, I suspect you've asked this question before, because I feel like last time you wanted to be
Starting point is 01:52:59 mummified in the Arctic. You're right. I have, and I did. And that is what I actually want. And I feel like I wouldn't know that about you unless we talked about, maybe we talked about it outside of a podcast contest. Maybe. What do you want to do?
Starting point is 01:53:12 I bet we've done it before. I'm telling you, I'm very tired. I can't remember things right now. I got like four and a half hours of sleep last night. And I'm going to get so many hours tonight. I'm going to get all of them. Nine o'clock to seven o'clock. That's what I'm doing tonight.
Starting point is 01:53:26 You're not going to stay up reading your phone until midnight, Hank. Yeah, I mean, I am. You might fall asleep, I mean. I might fall just outside of the bed. That's where I'm at, where I'm just going to finish the sentences in my head instead of saying them out loud, which makes for great podcast content. Well, Sari, since we've talked about this before, why don't you tell me what you'd like them to do with my body? Oh, interesting. I think you should live on forever as a skeleton within a medical
Starting point is 01:53:58 classroom or something like that. So everyone can say, oh yeah, this was Hank Green, science communicator. Oh, is that an option? Can you get to, can you get to decide to be a skeleton? It's my body. I feel like I should be able to make that decision. Like I want those dermestid beetles to eat my flesh off and then I want someone to articulate me. Yeah. Cause then you can still make TikToks even after your death. Someone can make TikToks with you. We can put a wig on you. And they could just like AI my voice. It could just be Sari talking through a modulator and should just be like,
Starting point is 01:54:33 Hey, what's up? It's Hank. Yeah. Uncle Hank's click clack tick tocks. All right. It's great. It's great. Thank you, Sari. I'm in. Sam and I think that they should take your body and send it to the moon so that it can be there for future aliens when they come by. It'd be like, oh, that must have been what they were like.
Starting point is 01:54:56 Yeah, they could check out your brain too, maybe. Cut it open. See, oh, wow. Oh, she went to MIT, huh? No. They'll be like, oh, she was sad, huh? We could tell just by looking. Every week here on SciShow Tantrums, we get together and try to one-up, amaze,
Starting point is 01:55:16 and delight each other with science facts while also trying to stay on topic. And we're playing for glory here, or I'm not, they are, but also for Hank Bucks, which I will be awarding as we play, and at the end of the episode, we'll have a winner and they'll get to brag. Now as always, we're going to introduce this week's topic with the traditional science poem. This week it's from Sam. In popular culture, lasers are so cool. From ray guns to swords to sci-fi multi-tools,
Starting point is 01:55:40 they're easy to use, come in lots of fun colors. You can use them to fuse things or blast evil space smugglers. But lasers in real life, let it be understood, seem to mostly be used to cut things out of balsa wood. They're not blowing up aliens or being fired at spaceships. They're taking babies' temperatures and etching things into microchips. But they're used in a lab by some nerd at MIT to accelerate molecules as part of their graduate degree. And you probably have to be real smart to make them work. And know stuff about physics, optics, wires, and quarks. Quarks. Quarks. Quarks. Quarks. Well, that simply can't stand. Lasers should be real fun. So I'm putting my foot down and speaking for everyone. Scientists, please do us dumb guys a favor.
Starting point is 01:56:25 You can make anything else boring, but just let us have lasers. The most common use of a laser has to be cat toy. It has to be number one. A pointer. It is a little bit sad. Like an alien. And it's also the same device that's like the cat toy
Starting point is 01:56:46 and also for your PowerPoint presentation where you're like, now point number two here is the difficult thing. And you can aim that thing at the wall all day long. It's not gonna burn a hole in anything. It's not, though it is like, don't look at it. Yeah, except your eye may be a little bad. That's for sure, yeah.
Starting point is 01:57:04 I was so scared of lasers when I was growing up because of that, they were like. We got the powerful boys now. We got some lasers you have to worry about. Yeah, you don't want to stick your hand under a laser cutter laser. It'll burn you like it'll burn the balsa wood. That's right.
Starting point is 01:57:20 Will it, I don't know, it might be a different, it might like hit that water and be like, I can't handle this. I'm tough with a laser. I don't actually know. I be a different thing. It might like hit that water and be like, I can't handle this. I don't actually know. I wouldn't put my hand under the balsa wood laser when it's making that really cool coaster that you're going to put on your Etsy store. So lasers are fantastic.
Starting point is 01:57:36 And I think that we know what they are. Am I right, Sari? That at least we can draw a pretty sharp line around what a laser is. Yeah, it gets a little blurry, but we sharpen it right back up. Cause I got to start with the etymology to show you where the blur is. Oh, I see.
Starting point is 01:57:54 Cause it's actually laser is a thing. It stands for something. Laser is an acronym and it wasn't the original acronym. In 1955, the first device that used stimulated emission of radiation was microwave amplification by stimulated emission of radiation, also known as a maser.
Starting point is 01:58:17 And so we had masers. So we had masers first. Yeah, we had masers first. Maser was the original. And then afterward, people were like, hmm, what if we amplified by stimulated emission of radiation using other wavelengths that are not microwave, specifically stuff in the optical spectrum. Like stuff you can see, visible light.
Starting point is 01:58:40 Visible light. And so somewhere around 1957 to 1959, or maybe 19... There's a hot debate about who first came up with the word laser. Some people were calling them optical masers, which didn't catch on. Boring, bad. O-masers. Yeah.
Starting point is 01:59:00 That's pretty fun. Yeah. So you have O-masers, or some guy named Gould was like, what if we just call it a laser? Light amplification by stimulated emission of radiation, like a maser, but cooler. So we'll call it a laser. And there was a debate, I think,
Starting point is 01:59:21 and one term had to win. And so laser won out. Okay, you don't have to answer, I just wanna say a couple of words out loud. Cause there's lots of other wavelengths of electromagnetic radiation. So, can I have a Zaser, X-ray amplification by stimulated emission or whatever?
Starting point is 01:59:40 Can I have a ultraviolet one that's called a Wazer? Can I have a gamma radiation one that's called a Gazer? I think you can call them whatever you want, but nowadays in general parlance, everyone just uses laser for any frequency. Masers are lasers? Some people use masers to say microwave or lower, and anything higher frequency than microwave is a laser, anything below microwave is a maser
Starting point is 02:00:12 But some people just generalize and say they're all lasers. It's, it's a better word. What, what in, what in like simple terms is a laser? This is hard for me to explain because I'm not a physicist, but we mentioned in a previous episode, I think the mirrors episode, there is a device called an etalon in optics, or a Fabry-Perot interferometer, which is two mirrors on the ends of a cavity that are parallel to each other,
Starting point is 02:00:45 and they're along a tube. So like you imagine a cylinder, there are two mirrors on the end. And waves can pass through the optical cavity, one of those mirrors, only when they are at a certain frequency. So with a laser or something, it takes advantage of the fact that some atoms absorb energy and then release photons, and that
Starting point is 02:01:09 generates light. And as you input energy into the system, it releases photons, and those photons start bouncing back and forth in the mirror, which activates more atoms, which emit more photons. And eventually you have this cascading effect of more and more and more photons being emitted and bouncing back and forth in this little tube. And then they reach the frequency or they are at the frequency that escapes the mirror.
Starting point is 02:01:44 And so you have a bunch of photons of light very, very aligned because they've just been bouncing back and forth in this little chamber that shoot off into space. And that is my best. It was a little rough. So like it bounces back and it gets a little bit more photons and they bounce back and they get more photons and they're all sort of aligned because of that? Because the mirror thing? And then how do you let them out eventually?
Starting point is 02:02:06 Is there like a little hole or is it like a partially silvered mirror where some can leak? I think one of the mirrors can partially leak. I know. I really thought your explanation was going to make them sound more boring, but it made them sound even cooler, actually. Yeah, I just remember. So this is where my memory is failing. I remember the animation on the top of the laser dome
Starting point is 02:02:27 was like photons in like a dancing motion and being like one goes this way and then two go back this way. And so that's ingrained in my head. But then I can't remember the animation for the whole, like raising the electron level and then spitting out the photons. So that part of the explanation is a little rocky. I love that lasers are cool and they got a cool name.
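A small back-of-the-envelope version of the mirror part of that explanation, for anyone who wants numbers: only waves that fit a whole number of half-wavelengths between the two parallel mirrors survive the bouncing, which is part of why the escaping light ends up so tightly aligned in frequency. The 30-centimeter cavity length below is just an assumed, illustrative value.

```python
# Standing-wave condition for a two-mirror optical cavity: L = n * (wavelength / 2),
# which means the allowed frequencies are f_n = n * c / (2 * L).
c = 3.0e8   # speed of light in m/s
L = 0.30    # example cavity length: 30 cm (an assumed, illustrative value)

def allowed_frequency(n):
    """Frequency of the nth standing-wave mode that fits between the mirrors."""
    return n * c / (2 * L)

# The spacing between neighboring allowed frequencies (the free spectral range):
print(allowed_frequency(2) - allowed_frequency(1))  # 500000000.0 Hz, i.e. 500 MHz
```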
Starting point is 02:02:54 I'm just like very happy for lasers. Also, Sam, in your poem, I have to call you out on something. You said that the smugglers were bad guys, but they never are. Well, that's true. They're they're always good guys. That's the closest thing I could rhyme with colors. OK? Yeah. Yeah. You could have been like awesome, great smugglers, just great, awesome smugglers like Han Solo. Yeah.
Starting point is 02:03:17 I don't know why the smugglers are always good guys because they're anti-authoritarian. They're anti-authoritarian. Yeah. And authority in space series is usually even more dystopian than the authority in real life. Yeah, I think next time I go overseas, I'm going to smuggle something. But I think you should really respect trade barriers and you shouldn't do smuggling. That's my hard line stance.
Starting point is 02:03:38 No smuggling allowed. So thank goodness for a great word with an easy etymology and a definition that definitely is clear. It never happens, but here we are today. And that means that it's time for the quiz portion of our show, because lasers have been a figment of our creative imaginations for some time, but their uses aren't limited to science fiction. Scientists have been finding very real ways to use lasers to make things that sound made
Starting point is 02:04:01 up but are not. So today I'm going gonna be telling you a tale of three things made with lasers, and two of them are just plain lies. So tell me which one the true one is. It could be this first one. Using a focused laser beam and mirrors, scientists were able to fabricate
Starting point is 02:04:18 gold nanoparticle-plated armor that protected bacterial cells from being engulfed by immune cells. It might be that one. But it might be story number two, where scientists devised a way to make lasers that can work on a thin, flexible substance which they then turned into a contact lens that can shoot green lasers! That could be it.
Starting point is 02:04:43 But it could also be the third one here. Scientists used an optical laser to create extreme heat and pressure so that they could accomplish what alchemists had long tried to achieve. Just on a nanoparticle scale, they were able to convert lead into gold. So could either be scientists crafting gold-plated armor for bacteria using lasers, scientists making contact lenses that can shoot lasers, or scientists wielding lasers to turn lead into gold nanoparticles. I would think bacteria is that having gold on a bacteria would be like,
Starting point is 02:05:15 oh, I'm safe, but at what cost? Because they got to be like squishy and flowing around, right? Maybe? Also, why would you want to protect bacteria? Yeah, I don't know. I feel like the bacteria that's in our gut is fine in there. Most of them are good guys. Yeah, but they don't need our help. They're fine. They replicate pretty quick. Yeah, they can just split off another one of themselves and be like, run while you still can. Right? They are good at that. I've eaten. What's the second? The contact lenses. So first of all, it couldn't be very strong.
Starting point is 02:05:49 You burn your eyelids off. Second of all, it probably would just be like, if it's just like a faint glow, totally. I'm sure someone's working on that. So maybe, I mean, maybe that one. I feel like you'd have to pack whatever is in a laser. So like something that gets the light, like that gets the light energy. Something that like a crystal or a glass or an optical material that will have its electrons excited and spew out the photons. And I feel like that would be hard to pack into a contact lens.
Starting point is 02:06:20 Okay. And I don't know enough about anything to know the last one. That seems plausible, I suppose. If you shoot something with enough little beams, it'll change into something else. Where are lead and gold on the periodic table? I have no idea. Am I allowed to look that up or no? Yeah, yeah, sure. I think that they're quite close, but I don't. I think that that was the kind of idea. Oh, yeah. 82. So 82 is lead. 79 is gold. Turn lead into gold. Yeah, I think that that was the kind of idea. Oh yeah, 82. So 82 is lead, 79 is gold. Turn lead into gold. Yeah, I think that's possible. So I'm just imagining you got a laser beam. Imagine like your sci-fi narrative, but on a very, very tiny scale where you got a laser.
Starting point is 02:06:58 You just go pew, pew, pew. I'm going to knock some neutrons out of you. Yeah, and then get rich. And get small to gold. So I think it's the third one. I'm going to go with the first one, actually. All right. Here's the situation. We did use super strong lasers to turn polyethylene plastic into nano diamonds, which is maybe even,
Starting point is 02:07:21 you know, in terms of like value creation, better than turning lead into gold. But maybe not, because nanodiamonds probably aren't that valuable. But we weren't able to knock a bunch of atoms off of lead to make gold. Oh, dang. That'd be pretty, or protons, I should say, not atoms. That'd be tricky. And a nuclear reaction that I would not want to be nearby. But you know, lasers are definitely involved in nuclear reactions. I'm not saying it's impossible. And Sam, there, we did use lasers to manipulate the position of gold nanoparticles inside of cells, and they're used by scientists to study particular parts of cells and, and help them figure out how they work. And they wanted to see if they could manipulate and localize those gold nanoparticles with a laser. So they infused cells with gold ion
Starting point is 02:08:09 solutions to get through that membrane and then they used the laser to manipulate the nanoparticles into the area of the cell they wanted them to be in. They were able to use the lasers to push around the gold nanoparticles inside of the cells, which is very cool, but it is not creating gold-plated armor for bacteria using lasers. So in fact, in 2018, scientists created super thin membrane lasers that can be charged with blue light. So you like charge them up with light, and they usually need some kind of solid support to make them stable. But the researchers worked on a way to make a thin sheet with lasers in it that was mounted on a glass substrate and Then taking away that substrate so that you could just have the thin membrane and the laser they constructed was about one 1,000th of a millimeter thick and then they put their lasers in a contact lens and put them on
Starting point is 02:08:59 cow eyeballs that had been removed from the cow. So that was like a cow. And they used the blue light to charge up the laser and they saw a laser beam coming off of cow eyeballs. Why'd they do it? I can't really say. I think that they had some ideas that it might be useful for some reasons, but like none of them sounded particularly plausible to me. It more seemed like, hey, wouldn't it be cool if we could create cow Cyclops from the X-Men? But a cow, yeah. So wow, I'm sorry, neither of you get anything. That's why I don't know, that's very cool. Yeah, it sounded so fake. Yeah, it does find this look God do anything
Starting point is 02:09:43 Be like what if we have cows that shoot laser beams from their eye the picture the cow eyeball isn't very cool though It's just really gross. No, I balls on their own aren't great They also stuck one of the membranes onto one of the researchers thumbnails So you could have like a fancy laser laser laser finger really cool Really great job guys All right, we're gonna take a short break and then it will be time for the fact off Welcome back everybody. Get ready for the fact time. Our panelists have brought science facts to present to me in an attempt to blow my mind, and after they have presented their facts I will judge them and award Hank Bucks any way I see fit. But to decide who goes
Starting point is 02:10:39 first, I have a trivia question for you. In the first half of the 20th century, a man named Joe Woodland was at the beach when he drew up the idea for the barcode in the sand. He was like, this is a great idea. Look at these bars I've drawn in the sand. He'd been thinking about coming up with a code that could be printed on groceries so that stock taking and checkout would be faster.
Starting point is 02:10:59 Sitting at that beach, he devised a system inspired by Morse code that used wide and narrow lines to identify products. That system would later become the basis for the Universal Product Code, which uses lines and lasers to help scan items at stores. What was the first year that an item marked with the UPC code was used at checkout? First half. Check out. First half.
Starting point is 02:11:30 1951 is my guess. Wait, what was in the first? Did you say in the first half or just anytime? No, I just said when was it? 1971. The answer, Sam Schultz, is 1974. Wow. That doesn't have a barcode, UPC codes, on it. So that's how I know. It was, it was a Wrigley's Juicy Fruit chewing gum. Yeah
Starting point is 02:12:01 So it was chosen specifically to prove that the code could be printed on even a very small product. Okay. So it wasn't like they had to just buy a special machine that was like, here's the machine we use to scan the gum. Is that what it was like for a while? You've bought one gum. It's only for the gum.
Starting point is 02:12:17 Yeah. But then everything else is going to tally up by hand and punch it in. But Sari, the patent for the technology was actually filed in 1949. So for a patent, you would be closer, but it needed a lot of time before it was actually able to work because the tech had to catch up with it. Uh, and they weren't super popular originally, but as larger stores adopted them, they became much more popular. And then the stores kept getting bigger and we needed that support.
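For the curious, the bars themselves just encode 12 digits, and the 12th is a check digit computed from the other 11 so a scanner can tell when it has misread something. Here is a minimal sketch of that check-digit rule; the 11-digit example is arbitrary rather than any particular product.

```python
# UPC-A check digit: triple the digits in the odd positions (1st, 3rd, ...),
# add the digits in the even positions, then top the total up to a multiple of 10.
def upc_check_digit(first_11_digits: str) -> int:
    digits = [int(d) for d in first_11_digits]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

print(upc_check_digit("03600029145"))  # 2, so the full printed code would be 036000291452
```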
Starting point is 02:12:46 We needed more stuff. From the great people at the Universal Product Code place. I don't know. I assume that there's some kind of group that handles this. Who's drawing all the new ones? Like big bar, little bar, big bar, big bar. Oh, shit, I've already done big bar, big bar. Oh, God. So, Sam, that means you get to decide who goes first.
Starting point is 02:13:10 I'm going to go first. I could never live with myself if I said, Sarah, you should go first, then hers was really good. I would just hang up the call. Having a device in your home that can instantly produce a fully cooked dinner is a sci-fi staple, a la the replicators on Star Trek or, I don't know, like the Jetsons or something. And while we are at the point where we're starting to successfully 3D print certain foods, the fully cooked element has so far eluded us. Like, you can 3D print
Starting point is 02:13:35 chicken breasts all day long, but they're still coming out raw. You're going to need to cook that bad boy. And having to cook something isn't very futuristic. And it seems that 3D printed food, especially meat, is also trickier to cook than regular food in the first place too. But in 2022, a research team from Columbia University made a massive breakthrough in the field of instant dinners, laser cooking. What they came up with was the first attempt at a device that will both print and cook your dinner, replicator style. Well, OK, so right off the bat, this thing isn't like a replicator because instead of raw atoms getting sequenced into any food you want, the team starts this process by blending up a bunch of raw chicken breasts and loading it into a 3D printer.
Starting point is 02:14:15 Then they print a big old raw chicken nugget. Like I said, cooking 3D printed meat is tricky or at least different, I think, from cooking your traditional straight off the animal meat. Or as the team says in their video about this process, current cooking techniques don't provide the high spatial resolution required to cook 3D printed food, which is just a really weird thing to say. That is weird. To be like, really, really, really? Do we need high space? Is that because this is going to be a problem for me. If I need to buy a new device to eat a certain kind of food. Buckle up, buddy.
Starting point is 02:14:48 So to solve this problem, the team shoots their 3D printed chicken with no less than three different lasers to cook it. A blue laser, a near infrared laser and a mid infrared laser. So the blue laser penetrates the food to cook the inside of it using a pattern device for optimal chicken cooking. Then the infrared lasers can be used to brown the outside or to put grill lines on the chicken because why not? And the result, according to the team, laser cooked chicken are more moist, apparently,
Starting point is 02:15:18 and also shrink less than old fashioned chicken breasts. The team had people taste test their 3D printed laser cooked chicken and traditionally cooked non-3D printed chicken and apparently according to them people preferred the laser chicken because of that moisture, though the video that the team put out that I referenced earlier hedged a little bit more by simply stating that the 3D laser chicken was edible and achieved food safe temperatures. It was no McDonald's chicken nug. You could taste the unmistakable metallic tang of laser, which I imagine sort of tastes like how laser printers smell, you know, and they compared the smell to having fillings put in their teeth. But you know, I guess that's the one they liked better for some reason. So the
Starting point is 02:16:06 team imagines that eventually we'll have like a microwave like device in our homes filled with meat goo that we can push like the chicken breast button in a couple minutes will pull out a moist scientifically perfectly cooked 3d printed chicken breast or at that point I can do anything I want with it. I can make a chicken nugget in the shape of the Eiffel Tower. Why would I eat a chicken breast if I could eat a chicken ball or a chicken dinosaur?
Starting point is 02:16:32 Well, you're really naming things that already exist with your vast imagination right now, Hank. I forgot dinosaur chicken nuggets are a thing. Round chicken dinosaur chicken. They're totally a thing. They're in a real ball shape. OK, it's going to be the shape of a nose. I don't know.
Starting point is 02:16:53 Tick it again. It's going to be a tree. It's going to be a pig. Some other animals that are cute. A manatee. What if it was a whatever? I'm not going to go in this flight of fancy with you. Yeah, we wouldn't want to have any fun. I couldn't think of was, whatever. I'm not gonna go in this flight of fancy with you. We wouldn't want to have any fun. I couldn't think of anything really.
Starting point is 02:17:08 So one state of benefit was maximum food customization based on your taste. And the Scientific American article I was reading suggested a burger with alternating medium and well-done sections in a checkerboard pattern. Because again, why not? And another benefit, which is actually more cool, is that they can cook the food through plastic packaging.
Starting point is 02:17:28 So they think that they could reduce the risk of contamination for stuff like precooked meals you can get at grocery stores. So in conclusion, the future is here and it's an unseasoned 3D printed chicken breast cooked by lasers. Neat. I love it.
Starting point is 02:17:41 If it can write grill lines on there, I can also put like a note to my son. Be you, buddy. I hope you're enjoying octanauts You could print your own face on it being like What's that oh it's my dad, oh you got one of those microwazers, huh? Should have been micro lasers. Yeah, that's really good. OK, sorry. Can you beat micro lasers making 3D printing Eiffel
Starting point is 02:18:18 Towers made out of chicken? Swap. Ground chicken. That's it. I'll try my best. So, lightning can be really dangerous because many things don't do so well with a sudden blast of high voltage and high amperage current, especially living things whose bodies depend on electrical balance or flammable things that can't handle high temperatures without
Starting point is 02:18:39 combusting. So in general, this is a little preamble because I decided to make my life hard this episode. Lightning happens because negative charge gathers at the bottom of a cloud of water, vapor, dust or something, and the ground's neutral charges are relatively positive. Air isn't super conductive in its everyday form, but when enough charge builds up and spurts out of the cloud or the ground, it ionizes some air molecules, which makes it more conductive. And eventually, all the system hits a breaking point and carves out an easier path for electrons to flow.
Starting point is 02:19:12 And when those electrons move all at once, that's a lightning strike. And lightning tends to strike tall things like towers because that height sort of provides a shorter path for the electrons to travel from the cloud to something. And lightning rods are conductive structures that people intentionally use in this way for some amount of safety and control. So trying to get lightning to connect at a specific tall point and run through a wire to the ground without damaging unsuspecting people or things. But lightning rods, as I found out,
Starting point is 02:19:42 aren't a surefire protection or even particularly great. They only cover about a couple dozen meters in every direction, depending on what they're made of. So if lightning is brewing a little too far away from a lightning rod, the strike could easily hit a different part of a building or bystander. And you need multiple lightning rods to create a bigger area of protection. And so far, as far as I can tell, we just kind of lived with that risk. But in the summer of 2021, on Säntis Mountain in Switzerland, a research team used lasers to help redirect lightning bolts toward a telecommunications tower that's there to help measure this kind of electrical storm stuff. The basic idea is that high powered lasers
Starting point is 02:20:24 can ionize some air molecules and basically help carve out that path that guides the flow of electrons from the clouds to the lightning rod and vice versa. So to test that, they shot intense short laser pulses based on yttrium aluminum garnet crystals up towards the thunderstorm and observed what happened with high-speed cameras. And it turned out that in four times, when the laser pulses coincided with lightning strikes, the lightning followed the path of the laser for around 50 to 60 meters, basically increasing the protection radius of the lightning rod by that much. And besides the fact that this worked, they redirected lightning with lasers,
Starting point is 02:21:02 which is a very cool sentence. It's extra cool because using lasers can theoretically work to clear paths for lightning, even in foggy or other tricky weather conditions, because the photon beams can just blast right through the water droplets and vaporize them. So they want to keep experimenting to use lasers to extend lightning rods even further and hopefully developed more protective, uh, sci-fi future systems against nature's unpredictable electricity. So does the laser, like, is the laser have to be in the place where the, where the lightning is coming down or can the laser be like somewhere else?
Starting point is 02:21:35 If I'm like way over here and there's like a big tall building, can I shoot a laser and help the lightning come down at the, or is it going to like follow me? I think it's going to follow the path of the laser. So the laser has to be where the where the lightning rods are. Yeah. And be like, OK, and it extends it vertically kind of. This is good news because it means that we can't intentionally make a laser make a lightning hit someone. You could plant a laser on them, all right.
Starting point is 02:22:01 Sneak one into there. Yeah, you could plant you could plant a yttrium laser on them on top of their head somehow. Give them a hat. Here's here's your new hat. They're hanging out on a park bench. Yeah. Yeah. Laser shooting at the top of the hat. A very powerful laser shooting up. Southern Mystery Novelist is taking notes right now.
Starting point is 02:22:22 They're like, this is how Reg is going to start killing spies. Ultimately, I'm like, I'm 99% sure that in the future we're going to be using lasers to like increase the the working distance of a lightning rod. Seems like why not do that? Like it's it's working. It's good. But the like 20% sure that we're going to have microwazers, that'd be a much bigger impact on my personal life.
Starting point is 02:22:50 Yeah. To have to have like a device in my home that just sort of like creates food in any shape or level of doneness I require. And as a bonus, it shrinks less. I love that that was one of those things. Yeah. Shrink. You see, it's juicier.
Starting point is 02:23:07 It's real. It's like, yeah, the water is still in there. You print out a perfect replica of your own body and you can eat it for dinner. Come on. You can print out your arm and just be like, I don't know. I like the idea of printing out like a full sized tank out of chicken meat and then having a bunch of people over is like dip me in the sauces.
Starting point is 02:23:28 That would be great. Yeah. Take your finger out. Don't be really fun. Actually, it'd be like, well, how big is your microwazer, Hank? I did it section by section. Took a long time. Some of me is quite old. It's been around.
Starting point is 02:23:42 Some of me is quite old. It's been around for days. I'm going to give it to Sam. Yeah, I didn't think I would. But then I kept I kept coming back around to it. It's just it's a I mean, they're both so good. Laser guided lightning. They're cool. But there's just so it's fertile ground there with 3D printing meat and using
Starting point is 02:24:09 three different kinds of lasers to cook it. All right. That means that it's time to ask the science couch where we've got a listener question for our couch of finely honed scientific minds. At sloth queen asks, what's the longest a laser can shoot? If you shot one into space, it would just go forever? Yeah, that's my kind of feeling, as if you don't hit, you know,
Starting point is 02:24:33 there's like gas and dust in the universe eventually. But I feel like infinity, it's still the laser. Like as long as the time that you have to wait. Is that right, Sari? I think so, but I'll lead up to it is the time that you have to wait? Is that right, Sari? I think so, but I'll lead up to it with my discoveries of longest laser
Starting point is 02:24:51 that just kept escalating as I tried Googling different things. Because you can't just Google how long can a laser shoot. You gotta like guess what long distances are. So first thing I thought, what if you pointed it at a friend kind of far away? And the thing that I found in this circumstance is that the FAA is very vigilant about laser incidents of people on the surface of the earth pointing lasers upwards at planes. And cruising
Starting point is 02:25:21 altitude is generally between 10,000 and 12,000 meters. So a laser shined from the surface of the Earth can be quite distracting or blinding to a pilot if it gets up into the airplane. And that of course depends on weather conditions. So it can definitely go as far as ground to plane. And then I was like, well, okay, space. On the moon, left by Apollo astronauts, there are retroreflecting mirrors on there that have been used continuously since 1969 to study the Earth-Moon system and how far the moon is away from the Earth.
Starting point is 02:25:58 There are five retro-reflector arrays is what they're called. And I think any lab or any person can just shine lasers at the moon and measure the distance from the moon to their spot on earth and be like, that's neat. And so you just shine your laser to where these known mirrors are and it'll, and you can like see the laser beam from earth, detect it from Earth. And then I was like, okay, how far in space can we go?
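The arithmetic behind that lunar measurement is pleasantly simple: time how long the pulse takes to come back, and the one-way distance is half the round trip multiplied by the speed of light. A quick sketch, run in reverse from the average Earth-Moon distance:

```python
# Lunar laser ranging, reversed: starting from the average Earth-Moon distance,
# how long until the reflected pulse comes back?
c = 299_792_458                  # speed of light, m/s
avg_earth_moon_m = 384_400_000   # average Earth-Moon distance, ~384,400 km

round_trip_s = 2 * avg_earth_moon_m / c
print(round(round_trip_s, 2))    # ~2.56 seconds there and back
```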
Starting point is 02:26:28 And this is the farthest that I found, where a powerful radio wave laser called a mega-maser, so they specifically called it both a laser and a maser in the same sentence, has been observed by a telescope in South Africa. This mega maser is about five billion light years from Earth. And so the light from this mega maser has traveled 58,000 billion, billion kilometers from its origin point to Earth, which is basically infinity. Like, it's so far.
Starting point is 02:27:07 It can go five billion light years. It can go forever. Yeah. If there was aliens, would we be able to see their lasers? Not unless they... So, we can't. They're pointed at us. Yeah, so there's a couple of, like, reasons. They would have had to have pointed them at us at the right moment in their history. It would have to be bright enough for our detectors to detect, which I don't want to be tricky. I don't think that we could do that with any of our current
Starting point is 02:27:32 lasers. I think there are experiments in laser communication. I don't think we've sent it very far. I think we've mostly used it like obviously we're on earth. so that's the easiest place to test communication is on Earth to satellites or things like that. But there's a whole Wikipedia article about laser communication in space that I kind of glossed over and then was like, I want to stick with the Megamaser. If you want to ask the Science Couch your question, follow us on Twitter at SciShow Tangents. We'll tweet up topics for upcoming episodes every week there, or you can join the SciShowTangents
Starting point is 02:28:08 Patreon and ask us on our Discord. Thank you to at Fyridgion, Les on Discord, and everybody else who asked us your question for this episode. If you like this show and you want to help us out, hey, it's very easy to do that. You can go to patreon.com slash SciShow Tangents. Become a patron of our show and get access to things like our newsletter and our bonus episodes and a special thanks to patrons John Pollock and Les Aker. Second, you can leave us a review wherever you listen. That's super helpful and it helps us know what you like about the show. And finally, if you want to show your love for SciShow Tangents, just
Starting point is 02:28:41 tell people about us. Thank you for joining us. I've been Hank Green. I've been Sari Reilly. And I've been Sam Schultz. SciShow Tangents is created by all of us and produced by Sam Schultz. Our associate producer is Faith Schmidt. Our editor is Seth Glicksman. Our story editor is Alex Billo.
Starting point is 02:28:56 Our social media organizer is Julia Buzz-Bazio. Our editorial assistant is Deboki Chakravarti. Our sound design is by Joseph Tuna-Medish. Our executive producers are Caitlin Hoffmeister and me, Hank Green. And of course, you couldn't make any of this without our patrons on Patreon. Thank you, and remember, the mind is not a vessel to be filled, but a fire to be lighted. But one more thing! We've mentioned dead butt syndrome on a previous episode of the pod, but the more technical
Starting point is 02:29:39 term for this achy pain is gluteus medius tendinopathy. And that's a fancy way of saying that the tendons that connect your butt muscles to your bones and help you walk are inflamed. There are various ways to rest or stretch to help your butt tendons recover, but one possible treatment is low-level laser therapy. LLLT involves shining short-wavelength single-color light to help promote all kinds of biological repair processes, including helping cells proliferate, reducing inflammation, and upregulating growth factors. How do they get the lasers into the butt? I don't know! Pew pew! It's only for butts? There's for every, it's for other parts of your body.
Starting point is 02:30:18 I think you would use it for a bunch of things. Okay. You just got a little creative with it. Yeah, I think that they shoot it through the skin. Oh, okay. You just got a little creative with it. Yeah. I think that they shoot it through the skin. Yes. This is the more practical one. My other but fact option was woman farts during surgery and then catches on fire, which was very dubious of an article.
Starting point is 02:30:37 Yeah, they were using laser on her butthole and then she farted. And then they were like, it caught on fire, but it's dubious because your farts have to have a lot of flammable gas. They don't always have that composition. They don't usually have that much flammable gas. I mean, there's a little bit, but like, what's going to catch on fire? All that stuff's in the, they get, it's definitely dubious to me. There's not a lot of flammable material left during an operation.
Starting point is 02:31:08 They tend to do their best to remove that. And not have it year round. This is not a canon butt fact, everybody. No. It's a non-canon butt fact. Yeah. Don't. Ha ha ha ha ha ha.
Starting point is 02:31:41 Hello and welcome to SciShow Tangents, it's the lightly competitive science knowledge showcase. I'm your host Hank Green and joining me this week, as always, is science expert and Forbes 30 Under 30 Education Luminary, Sari Reilly. Hello. I feel the most anxious about my epithets when we have an actual expert on the podcast. And our resident everyman who doesn't have to feel anxious at all, Sam Schultz. Yeah, I still do all the time. And today, we do have a very special guest.
Starting point is 02:32:04 It's a PhD candidate researching brain-machine interfaces and machine learning for medicine and content creator on YouTube and Nebula, who professionally turns her hobbies into work. It's Jordan Harrod. I know a thing or two about that. Hello! Hello. I am also probably professionally anxious, so I feel like I'm in a room with friends.
Starting point is 02:32:24 I don't think you can be a YouTuber without being anxious unless you are a bad person. I have a question. I guess I won't spoil it even though it's in the title, but I have a question. So when you're using a chat GPT or something like it, sometimes it apparently helps if you bribe it or threaten it or explain a certain situation to it that it's a star ship captain and has to correctly plot the course to the center of the nebula so it must do the math questions correctly. If you were a generative AI language model,
Starting point is 02:33:06 what would people have to do to you to threaten you to get you to output correct information? They'd have to tell me I get to lay down after I did a good job. I feel like they would have to tell me that I can't take my ADHD meds the next day if I don't. Oh, yeah. That'll really kick the hyper focus into gear. Yeah, but then if you
Starting point is 02:33:26 didn't do it, they'd ruin their model the next day. Oh, absolutely. They take that risk. I think they would just have to say that they'd be disappointed in me if I didn't do it. Yeah. Which is like the fuel that drives me so much in everything that I do. I think they could say, we'll be totally fine. You'll be disappointed in yourself. You can answer that. It actually doesn't matter to us one way or the other. I'll be up at 4 a.m. trying to find your answer to the math question.
Starting point is 02:33:57 And then you'll say it and then it'll be like, I forgot I even asked you that. Thanks, I guess. They just leave like one comment on the YouTube channel that isn't like explicitly negative, but it's like Generally questioning the intellect and implying that I am wrong I will be down that rabbit hole for the rest of the day Oh some soft criticism is the worst where it's like I just don't really think that they that like the thing that they're doing Is for me and I'm like, but I meant for it to be. They did an OK job explaining batteries, but she missed some things. And I'm like looking up courses on electrical engineering.
Starting point is 02:34:30 How are you enrolling in college? Yeah. Why did I do this? Oh, I'm too emotionally stable. I just want a hot chicken sandwich. That's I would do. I would answer any question for a Nashville hot chicken sandwich. Yes. So I want. I would answer any question for a Nashville hot chicken sandwich. Yes. That's all I want. I would answer any question to be emotionally stable.
Starting point is 02:34:48 A mood. Well, I can't offer you that, but we're working on it as a society. That's the goal of this podcast. Someday we'll get there. We're all going to be better people. And then that'll be our last episode. We don't need to impress anybody anymore. Yeah. And then we just walk through the gate in the woods and cease to exist.
Starting point is 02:35:07 That'll also be me with you too, because I assume that the AI situation will just be fixed. Mm-hmm. That sounds exciting to get to the point where the AI situation is fixed. Fingers crossed. I feel like that's a while yet into the future. Every week here on SciShow Tangents, we get together to try to one-up, amaze, and delight each other with science facts while also trying to stay on topic. Our panelists are playing for glory, but also for Hank bucks, which I'll be awarding to
Starting point is 02:35:34 them as we play. And at the end of the episode, one of them will be crowned the winner. But as always, we must introduce this week's topic with the traditional science poem, this week from Jordan. It's hard to know the ways you're blinded by the cause you hope to fix. If one can barely comprehend the wealth of knowledge stored as bits. Shaping a system line by line and stumbling through a field of minds, predictions come not always true, but data's costly. We must make do. Released into the light of day, come listen to what it has to say. Hope that it's making the right choices and amplifies the unheard voices. And while this
Starting point is 02:36:09 poem isn't friendly to companies that do offend me, it's true that they're not moral actors. There's more than what they manufacture. It's not a person and not your friend, but it probably won't cause the world to end. It can be helpful. Don't forget. Careful design can keep your values met. This poem was not made with AI But with two hours and a glass of wine and I couldn't find a place to say the only winning move is not to play Oh, I'm gonna lose then
Starting point is 02:36:38 I mean, I'm hoping that every AI movie does not Come to fruition in real life because we're we're we're gonna be in a real bad place in that case. Yeah. If anything I've learned from reading science fiction and being alive for 43 years, it's that it always ends up much weirder, but also less interesting than you think. That's totally true.
Starting point is 02:36:59 We're not gonna get to go anywhere fun or have any adventures. We'll just watch strange things happen from afar far instead of Mars. We made Facebook. Yeah God everyone so this week's topic is machine learning But before we dive in we're gonna take a short break and then we'll be back to define the topic All right, Sari, are you going to try and define machine learning in front of Jordan because that sounds terrifying? I'll do my best, but Jordan, you can jump in at any time to your knowledge. So artificial intelligence is like a broader umbrella and there is artificial intelligence that involves machine learning and there's some that doesn't involve machine learning. And machine learning in my understanding
Starting point is 02:37:58 as not the science expert of the episode is when you develop algorithms, usually they're statistics based or frequently they're statistics based, on a machine, so the machine comes in like a computer that can learn from data and then use that information to process new data on their own without explicit instructions necessarily. So you might be able to code something or code a program that says, if this is true, do X, if not, do Y. Like if you find a word that starts with A, add one
Starting point is 02:38:33 to this counter. If you find a word that starts with B, don't add one to this counter. That is not machine learning. But if you train a machine on a set of words so that it can make predictions about what word goes next in a sentence. That is a form of machine learning. And then there are like different ways to teach these algorithms. So you can do it in a supervised way where you provide a labeled data set. So that is like if you provide a bunch of images and you tell a computer that those are all cats and then it learns. You give it both cats and not cats. Yeah, give it, I guess cats and not cats.
Starting point is 02:39:12 It's like, here's a bunch of cats, here's a bunch of not cats, and these are cats and these aren't cats. Then you're like, is this a cat? And it's like, yeah, or no. The thing that like bugs me about all of this is we have no idea how any of these things actually work. Like we know how we build them, but we have no idea how any of these things actually work.
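To make that cats-versus-not-cats setup concrete, here is a minimal supervised-learning sketch shrunk down to toy numbers instead of images; the two features and the labels are invented, and scikit-learn is just one common way to do it.

```python
# Toy supervised learning: labeled examples go in, a model that guesses labels comes out.
from sklearn.linear_model import LogisticRegression

# Made-up features for each example: [ear pointiness, whisker count].
X_train = [[0.9, 24], [0.8, 20], [0.1, 0], [0.2, 2]]
y_train = ["cat", "cat", "not cat", "not cat"]   # the labels a human provided

model = LogisticRegression()
model.fit(X_train, y_train)          # "here's a bunch of cats and a bunch of not cats"

print(model.predict([[0.85, 22]]))   # "is this a cat?" -> ['cat']
```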
Starting point is 02:39:25 We know how we build them, but we don't know how they're deciding what to do or say. We don't... Nobody knows that. It doesn't know what a cat is. It just knows that this is a cat, which are two... They sound like identical sentences, but aren't. Yeah. A lot of it is finding patterns in data.
Starting point is 02:39:42 There's some sort of representation of a cat and there's some sort of representation of not a cat, and that's the thing that it's learning to discriminate between. But like what that is, we don't necessarily know, and how you like translate that into English words is particularly challenging as models get bigger. Because you can theoretically explain, like, you put in, I don't know, the square footage of a house and then the price of the house, and you find something to model that, and that can be machine learning. But that often creates some sort of formula that you can interpret in some way, versus ChatGPT, where you could open up that box and it would just be staring into the abyss
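The square-footage example above is the interpretable end of the spectrum, so here is a minimal sketch of it with made-up numbers: an ordinary least-squares fit of price against square footage. The learned slope and intercept form a formula a person can read, which is exactly what you do not get when you open up a large language model.

```python
import numpy as np

# Made-up training data: square footage in, sale price out (illustration only).
sqft = np.array([850, 1200, 1500, 1800, 2400], dtype=float)
price = np.array([150_000, 210_000, 255_000, 300_000, 390_000], dtype=float)

# Fit price ≈ slope * sqft + intercept with ordinary least squares.
slope, intercept = np.polyfit(sqft, price, deg=1)
print(f"price ≈ {slope:.0f} * sqft {intercept:+.0f}")

# The whole "model" is two numbers we can inspect and sanity-check directly.
print("Predicted price for a 2,000 square foot house:", slope * 2000 + intercept)
```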
Starting point is 02:40:27 I'm gonna ask a stupid question, cuz that's my job. Is that distinct from how people understand stuff? Like, could we say that we understand how we understand stuff? So I think there are parallels between how we learn and how models learn. I could go into like a whole deep dive on like why neural networks are called neural networks and how they're based off neural architectures, and the neuroscience side of my brain will then come in and explain why brains are real complicated and we do not know that much about them.
Starting point is 02:41:01 And so like information storage parallels there don't necessarily translate that well, because it's not really a one in, one out situation. It's a lot of things in and a lot of things out in very complicated ways. So it's not unrelated to how we learn. We do learn faster. So I guess when people talk about zero shot models, which
Starting point is 02:41:26 is kind of a common term in the ML space, what we're talking about is having a model do something based on input that it has not seen, and it being able to kind of infer how to process that information, even though it hasn't seen the exact representation. And so it takes a lot to get to a point where a model can do that. That's something that, as we've seen,
Starting point is 02:41:48 like the exponential curve of AI go up over the last five to six years, has been something that's easier to achieve, but it took like that first three and a half years to get there, and now we're like going all the way up. And babies can do that in like six months or less. So humans pick it up faster. But the species of computer, right? Like the species is learning faster than the human
Starting point is 02:42:17 species learned. Is that true? Like Linux versus Mac or like? No, no. I think what Sam was saying is like, it took several billion years to get from a single celled organism to a human. Correct. Yeah. Thank you, Hank. Yeah, going from the first, the first like like vacuum tube transistors to, yes, to now is quite, quite quick.
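Since "zero-shot" comes up above, here is a rough sketch of one common way to try it, assuming the Hugging Face transformers library is installed; the example sentence and candidate labels are made up, and the pipeline downloads a default pretrained model the first time it runs.

```python
# Zero-shot classification sketch: the model was never trained on these exact
# labels as targets; it infers which one fits from its general language training.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "The spacecraft entered orbit around Jupiter after a six-year journey.",
    candidate_labels=["space exploration", "cooking", "machine learning"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # highest-scoring label
```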
Starting point is 02:42:42 Which is somewhat and sometimes terrifying. I like part of me wants to feel very comforted that like, oh, well, this is still not as good as me at things. And I assume that that like last step is much bigger than people think it is. Yeah, but I don't know. I don't know what they're working on
Starting point is 02:42:57 next. Like, I wouldn't have thought that if you put a bunch of sentences into a machine learning model that you could then have a thing that would convincingly be able to answer the question, why is a vegan riding an electric bike so punchable? Oh, it does know humor and sarcasm these days, which is going to steal all of our good jokes.
Starting point is 02:43:18 It's weird. That'll be the thing that takes us out. I thought it was good that the thing artificial intelligence would be worst at was jokes, because I was raised watching Star Trek. But it turns out that's no problem, whereas 53 plus 4 is really tricky. Why is that? Is that true? Yeah, they're bad at math. What the hell? Because it's like they're not programmed to do math. They're, like, language models. They're programmed to predict the next word in the sentence, and
Starting point is 02:43:42 like people don't walk around saying 53 plus four equals 57 all the time. Took me a second. That would be a much less useful Data to have than the Data from Star Trek. If Data was just, like, a stand-up comedian. Yeah. Who crashed your spaceship all the time. Yeah, but, like, bad at it. Who only told jokes that other people have told before.
Starting point is 02:44:04 Yeah. Oh, my gosh. Well, I'm glad told jokes that other people have told before. Yeah. Oh my gosh. Well, I'm glad that we've got to the bottom of machine learning, everybody. I feel like we've really sort of got it all down. We've no stone left unturned. Everybody understands it as well as is going to be necessary for the future of being a human on a planet where things are constantly changing extremely quickly. Yeah, I think we're good.
Starting point is 02:44:24 Why am I doing this PhD? We're going to move on to the quiz portion of our show. We're going to be playing a little game. It's called Truth or Fail Express. Here on SciShow Tangents, we often bring up technologies that are inspired by nature. Scientists have figured out all sorts of cool materials and machines and other things that cheat off of biology so we can figure out how to approach different problems. But you know who also has problems to solve? Animals do. And people working in machine learning have developed all sorts of methods inspired by the way animals solve the various problems in their lives,
Starting point is 02:45:01 including ants, killer whales, and gray wolves. So today, we're going to be playing Truth or Fail Express. But with a little twist in honor of our theme, I will present to you an algorithm inspired by an animal behavior that was stitched together entirely from lines written in an academic paper, or it might be just from ChatGPT. So you have to tell me if it's a real paper or ChatGPT talking about weird animal-based algorithms. Like for example, the raccoon optimization algorithm. Do you want me to tell you about it? Please.
Starting point is 02:45:37 Yes, please do. Raccoons are known as intelligent and curious creatures. These qualities combined with their dexterous paws make raccoons extremely successful in searching for food. Moreover, zoologists believe that raccoons have an excellent memory. The process of finding the global optimum for a predefined fitness function in the raccoon optimization algorithm is inspired by the natural food searching habits of raccoons. Food, which represents different possible solutions of the fitness function, is spread throughout the raccoon's living environment.
Starting point is 02:46:09 The algorithm makes use of two different search zones in each iteration. In addition, the raccoon can remember best previously visited locations. Then, if it does not manage to find better solutions in future iterations, it can revert back to the best location. Is this from an academic paper about machine learning or from chat GPT pretending to be an academic paper about machine learning? That to me feels like it was written by somebody who knew what they were talking about. And I just simply don't understand what they were talking about.
Starting point is 02:46:40 That is how I felt reading it. I'm going to go academic paper. The "moreover" was where I started zoning out, and it was also my, like, this feels like a chatbot maybe. So I'm going to go with ChatGPT. It is pretty wordy for an academic paper in CS. Like, that's all the writing that that researcher did that year. I guess I'm gonna stick. I'll stick with paper. I just feel like a chatbot maybe would try to
Starting point is 02:47:08 spice it up a little bit part way through. It would do better? Yeah, it would do better. It would write something wrong but more interesting. Well, this text is from a paper published in 2018 titled Raccoon Optimization Algorithm. The researchers took inspiration from the fact that raccoons are known to be very good at solving problems,
Starting point is 02:47:26 namely the problem of finding food. And the idea is to help the raccoon explore the area around it, starting by defining a reachable zone where it can look for potential solutions. But there's also a visible zone where raccoons can keep an eye out for other solutions. So that's a real thing. And I don't know why we wouldn't,
Starting point is 02:47:46 because they seem... Have you guys ever had the thought that if raccoons had gotten sentient first, they probably would have blown the world up faster than us? Because it gives me some comfort to know that we probably weren't the worst thing. Like, I feel like raccoons probably would have done a worse job than us. There's got to be an anxious raccoon out there who would feel bad. There's plenty of mischief makers, but there's always probably a guy saying, what if we don't steal the trash? What if we're just, like, a little bit nice?
Starting point is 02:48:15 What if we tidy up a little bit? Guys, we're gonna get in trouble. Yeah. Well, you just gotta hope that that raccoon becomes the president of the raccoons and then everything's okay. But odds are, odds are not gonna happen. I was about to say, if they're doing worse than us, then not holding my breath on that, I don't think.
Starting point is 02:48:33 I like the idea of just like a raccoon president standing on top of the trash being like, I'm the craziest one. You picked the craziest one. It's like we do, you know? Alright, the next algorithm is dolphin-inspired. Dolphins possess exceptional cognitive abilities, including highly developed echolocation skills and complex social behaviors. These capabilities enable them to navigate challenging environments, communicate effectively,
Starting point is 02:49:02 and cooperate with peers to achieve common goals. The dolphin-inspired learning algorithm, DILA, incorporates an echo-based sensing mechanism to capture information from the environment, simulating the echolocation abilities of dolphins. This mechanism enables the algorithm to extract relevant features and navigate complex data landscapes effectively. Inspired by the social behaviors of dolphins, Dilla employs collaborative optimization strategies to facilitate information exchange and collective decision-making among multiple agents. By leveraging the collective intelligence of the algorithmic ensemble, Dilla enhances
Starting point is 02:49:44 robustness, scalability and adaptability. ChatGPT or real research paper? This is rough. There was like the scandal where a biology research paper got published with figures that were generated. Yeah. Oh my God.
Starting point is 02:50:02 It was barely as big. Those figures were amazing. I want to get Tood on me. It was like how big can you we make the mouse penis? Yeah I'm searching mouse penis right now. It's a lot. Holy shit. So see, that's my metric for AI papers is if there was a visual cue of a dolphin with a giant penis, I would be like, that was AI, obviously. Unfortunately, we have just the text read to us by Hank to go off of, which is so much
Starting point is 02:50:48 less of a visual indicator. Would a visual component help with this that much? I don't know. I mean, if it looked like this mouse. See, these are the kind of buzzwords that you get in front of a boardroom and you start saying, moreover, you start saying whatever they say, you start saying robustness, scalability and adaptability. And at this point, all of the people with the suits on are going, yes, and we're writing
Starting point is 02:51:15 checks to you, sir. And sir, in this case, is a robot, because a robot wrote this so that people would be excited about it. My strategy is I'm just going to guess ChatGPT every time and hope that one will be right maybe. It's got to be, I assume. It's got to be. See, the problem is that, wait, what was it called? Della?
Starting point is 02:51:31 D-I-L-A, Dilla. That feels like a bad acronym. I'm going to go ChatGPT. I wouldn't... I believe that this could be a paper that someone wrote on, like, arXiv, but. A scientist would be having more fun with that acronym, I think, you know, a real person. No, now I feel like the thing in CS is finding the quippy title. So, like, once Attention Is All You Need was the big transformer
Starting point is 02:51:53 paper, everyone had to riff off of that. So it wouldn't be an acronym, it would just be something, like, snarky. Well, here's the prompt Deboki used to create this paper. It said, make up a machine learning algorithm inspired by dolphins in the style of an academic paper. That's what it hit us with. That was ChatGPT. All right. Last one.
Starting point is 02:52:15 Pelicans, after identifying the location of the prey, dive to their prey. Then they spread their wings on the surface of the water to force the fish to go to shallow water. The fundamental inspiration of the proposed Pelican Optimization algorithm is the strategy and behavior of pelicans during hunting. In the first phase, the pelicans identify the location of their prey. In the second phase, after the pelicans reach the surface of the water, they spread their wings on the surface of the water to move the fish upwards.
Starting point is 02:52:45 This strategy leads to more fish in the attacked area to be caught by the pelicans and causes the proposed Pelican Optimization algorithm to converge to better points in the hunting area. Is this from a paper about machine learning or from a chat GPT pretending to be a paper about machine learning? What are they? I know, right? What are they?
Starting point is 02:53:07 What are they? What is the fish? Are the fish in the computer somewhere? The fish are the cat pictures. The problem is that these are such long descriptions. Like no one writes this long. It doesn't look as long on the page, but when I'm saying it out loud, it takes forever. I feel like this, I can picture what the pelicans are doing.
Starting point is 02:53:35 So it's like, that's logical, but you're also a pelican guy. So it makes sense that Deboki would be like, it is. It almost feels like a red herring to have that. And yeah, exactly. But they don't say what the fish are. And that's a real problem. Or what the pelicans are. They probably get to it later in the paper. You can picture what the pelicans are doing, but you can't picture what the machine learning algorithm is doing. And so I think it's ChatGPT again. It's pushing the data upwards. I was about to say, they dive and then it pushes the fish up. I'm going to go paper.
Starting point is 02:54:14 It would not be the weirdest thing that I've read. A sticking point to me is that I don't know that pelicans hunt collaboratively. I feel like I've only seen that. I will tell you that in fact white pel pelicans hunt collaboratively. I feel like I've only seen it. I will tell you that in fact, white pelicans do hunt collaboratively. They do. And that's clearly, they're talking about the hunting strategy of white pelicans here.
Starting point is 02:54:31 But are you telling me this to trick? Who are not plunge feeders. Everyone thinks that all pelicans are plunge feeders, but they're not. There's only two plunge feedings pelicans. I think you gave me that piece of information, so I guess paper. So I'm gonna guess Chad GPT.
Starting point is 02:54:43 Oh, wow. No, I will just tell you as much about pelicans as I possibly can, okay? That is a fact you should know about me. This was a real paper. Oh, man. Wow, Jordan, you are in the field. It's a 2022 paper. It was titled Pelican Optimization
Starting point is 02:55:10 that pelicans explore for food, which is, but I don't know. I don't know what it's doing. No one explains to me in the description what the heck it's actually doing and what the fish are. Yeah, I definitely still don't get what the fish are. The fish are solutions, is what Deboki has written. In this case, the fish,. Yeah, I definitely still don't get what the fish are. The fish are solutions is what Deboki has written. In this case, the fish, it's sweeping solutions into an area. It's like a political cartoon with just way too many labels.
Starting point is 02:55:34 The fish are solutions. Yeah, no, Raccoon and Pelican both sounded like versions of reinforcement learning slash control problems, which made sense as papers. And then Dolphin Echolocation came in and that one just like threw me so much that I was like, I don't know. It did seem like it was both like suddenly halfway through the introduction it was like, and also it's about dolphin social structures. Like it's not just echolocation. It's a completely separate dolphin thing. That'll be the 2025 paper.
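For anyone still picturing fish inside the computer: here is a toy sketch, not taken from either paper, of the general pattern these nature-inspired optimizers share. The "fish" or "food" are candidate solutions, a fitness function scores them, and the population gets nudged toward the best solution found so far while remembering it.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # The "prey": we want the point that minimizes this made-up function.
    return np.sum((x - 3.0) ** 2)

# A colony/flock of candidate solutions (the "fish", the "food", and so on).
population = rng.uniform(-10, 10, size=(20, 2))
best = min(population, key=fitness).copy()

for _ in range(100):
    # Each candidate steps partway toward the best-so-far, plus some random
    # exploration, loosely like "move toward prey, then search locally".
    population = population + 0.5 * (best - population) + rng.normal(0, 0.3, population.shape)
    candidate = min(population, key=fitness)
    if fitness(candidate) < fitness(best):
        best = candidate.copy()  # remember the best location visited so far

print(best, fitness(best))  # ends up close to [3, 3], the optimum of the toy problem
```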
Starting point is 02:56:10 We will create consciousness by mixing echolocation so that the thing knows about physical space with social structure, which is... And then they'll all sing, so long and thanks for all the fish. And they'll take off into space and be like, look, you guys can't do AI anymore. We do not trust you. The moment you start to code again, we got a laser up here ready to shoot you.
Starting point is 02:56:34 All right, everybody. Jordan's in the lead with three. Sam's got two, Sari's got one. Next up, we're gonna take a short break, and then it'll be time for the Fact Off. Welcome back everybody, now get ready for the Fact Off. Our panelists have brought to me science facts to present to me in an attempt to blow my mind, and after they have presented their facts, I will judge them and award Hank Bucks anyway
Starting point is 02:57:14 I see fit. To decide who goes first though, I have a trivia question. Research has shown that plants vibrate when experiencing drought. And scientists, that's real? Like, he lost it. Okay, let's keep reading to find out what's going on. Scientists look to see what kind of sounds that might produce. Recently, researchers set up microphones
Starting point is 02:57:38 about four inches away from tomato and tobacco plants and recorded the plants after cutting them or not giving them water. And they found that the stressed out plants made more sound compared to the non-stressed plants. While the sounds were at a frequency too high for humans to hear, the scientists did design a machine learning model to distinguish between the sounds produced by the cut plants and
Starting point is 02:58:01 the non-watered plants. How accurate was the model at differentiating between the two stressful conditions, in a percentage? I'm still not over the fact that plants vibrate. They don't look like they vibrate to me. I feel like I wouldn't notice that, but I guess it's a little vibration. I would vibrate if I'm stressed, because I'm...
Starting point is 02:58:23 Oh yeah, I totally vibrate when I'm stressed. Or cut. Yeah. Yeah. How accurate was the model? I feel like the most accurate models are in this 70, 80% maybe that I've seen, but I also am not in this field.
Starting point is 02:58:42 So that would be like high accuracy in my opinion. Maybe like 62% of the time it was able to differentiate dry from hurt. I'm gonna go 75. The answer was 70%. So Jordan gets to go first. 70% is also, it's somehow both lower and higher than I'd expect.
Starting point is 02:59:05 I don't know. Yeah. So my fact is when digital image processing research was getting off the ground in the 70s, the standard test image, so the image that was used when people published papers, when they went to conferences in order to standardize everyone's results, was called Lena. And it came from a Playboy magazine centerfold
Starting point is 02:59:28 that one of the researchers just happened to have on his desk at work. That research is foundational to pretty much all image and video-related AI results that we see now, including image generation, video generation, deepfakes, et cetera. And in a not so shocking turn of events, the model received exactly zero compensation
Starting point is 02:59:45 other than what she was paid for the Playboy shoot, nor did she consent to have her image used in that way. Although she would go to conferences every few years and be like, yay, I'm happy that you guys are finding this useful. And it wasn't until I think the early 2000s that she was like, OK, can we stop doing this? It's not even bad enough.
Starting point is 03:00:04 I think it was only was like, okay, can we stop doing this? I think about enough. I think it was only like last week actually that IEEE one of the big conferences in the field was like, okay, you can't use this photo anymore. It's slowly been phased out of the ML community as something you are allowed to submit with. I'm looking at the photo now. I do recognize it. I feel like I have seen this picture before. It is maybe important to say in an audio medium, just from the shoulder up.
Starting point is 03:00:29 So they did not do this with an actual full body naked woman. So there's at least that, but they did do it with a woman who was naked at the time of the photograph being taken. Correct. That's wild. And so this is like with an image, a standard image they would use and try to reproduce
Starting point is 03:00:46 with machine learning, and that was like the? So not reproduce. So when people were trying to create and test out different methods of processing images, so like filtering images, transforming images, it was easier to have one photo that everyone could use to compare results and methods on. Gotcha. So this was the one photo that everyone could use to compare results and methods on. And so this was the one photo.
Starting point is 03:01:06 So if you're like all those early Photoshop things, increasing the edges and the making it fuzzier or changing the hues and saturations. Yeah, gotcha. I guess there's probably not any way to have any compensation actually occur at this point. I would imagine not. Academia doesn't pay people in academia particularly So I don't know where that money would come from yeah a dope I don't know
Starting point is 03:01:36 Yeah, yeah, I was about to say if if insert X major AI company would like to compensate her for her company would like to compensate her for her contribution to the field then they should be happy. Might be just a good press play. Like it's just like, hey, you want some good PR today? Here's this lady whose image was, but also probably the Playboy company owns the picture actually. Don't you think Playboy would go, uh-uh, now wait a minute. Just give people money.
Starting point is 03:01:59 This is how I feel. People who do like weird stuff, if you're a company with like billions of dollars, like make their day just mail some checks out guys make somebody's day. Why not do that? I support this especially if the check goes to me, but also Are there other images that are used now Image like or that replaced this or is it at a point in the field where you don't need this sort of like anchor digital image processing reference? As the field has evolved and as we've gotten into like image generation, deepfakes and things like that, there's been a little bit less of a need for standard images because
Starting point is 03:02:37 like you don't need a standard deepfake image to work with. I think what's happened more so is that we have like ImageNet and the really big image data sets and that's the standard thing that everyone uses to build their models and they can test performance as a way of having the same baseline. I guess I knew that that would be the case. And also in the same way, all of those images are scraped from the internet without the permission of the people who made the birth in them. So, you know, 50 years later. We're still at it.
Starting point is 03:03:08 The proud tradition lives on. All right. Wild, interesting. Sari, what do you got? One of the ways that machine learning is being applied right now is called automatic speech recognition, which is basically one of the driving technologies behind any voice assistant, like Google's Assistant or Alexa or my mortal nemesis, Siri. And my very basic understanding of what these algorithms do is they take the sound waveform of your speech and then chop it up into very small pieces. And then the algorithms at work try to match those tiny pieces to sounds that they were trained on. And by putting those sounds back together and incorporating other information from training
Starting point is 03:03:49 data about different words and how words become sentences that make sense, the computer does its best to try and guess at what you're saying. And this is a hard thing because people mumble or have different accents or misspeak or whatnot. So the same sequence of phonemes doesn't necessarily translate to the same sentence. But as these voice technologies are in more places, people are worried about AI listening to them or surveilling them. And I feel like this is a huge paranoia that I see pop up every once in a while is that if you talk about with your friends, how much you want chocolate milk, then you'll start getting milk ads across your devices
Starting point is 03:04:25 or something like that. And as far as I can tell, a lot of those targeted ads are more about what you've been searching, like if you're searching straws for chocolate milk, or if you're sharing a wifi network with someone who's searching those things, or if you were physically at a chocolate cow dairy farm or something like that, that can provide that location data
Starting point is 03:04:43 rather than listening. So data privacy is kind of tangential. Data privacy is a concern, but your phone probably isn't recording and transmitting everything you are saying all the time. A rabbit hole I went down as I was trying to find this fact. But the fact is even still as an experiment in audio counter surveillance, there was a paper from Columbia University researchers that was presented at a conference in 2022 that showcases a machine learning model that tries to disrupt speech recognition algorithms
Starting point is 03:05:13 in real time as someone is talking. So it basically uses similar principles to take your voice as an input. It does a series of calculations on it. And then instead of interpreting the speech, it instructs computer speakers to emit specific noise. The press release says it's whisper quiet, but there weren't audio samples, so I couldn't hear it. And the noises specifically exploit the weaknesses of existing speech recognition algorithms. So they change the
Starting point is 03:05:40 waveforms in the room just enough enough either by adding in interference or adding like blanket white noise or other things like that, just enough that those chopped up sounds don't get interpreted as sensible phonemes or words or sentences anymore. Even though they're fine by human ears, the computer can't interpret them. And they tested this model across various speech recognition programs and databases. I think kind of like Jordan was saying, my understanding is that these are open source and like regular databases that people train their models on, like Deep Speech or Libra Speech or Wave2, VEC2.
Starting point is 03:06:17 And if I'm interpreting the figure in the paper correctly, this counter surveillance thing blocked anywhere from 28% to 87% of recognizable speech depending on the system. So it's not perfect. It's more for a hypothetical spying situation than a real one. But I think it's interesting to talk about the flaws in all these technologies as much as the breakthroughs. What if like when I'm doing my spy craft, I just talk, but I'm just like Hello it's Hank. I'm like why do I spy shade? I think the computer would just put a big old question mark. Yeah. Yeah. That's how it works.
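Here is a sketch of the "chop the waveform into very small pieces" step Sari describes, using a fake recording and numpy; real systems compute richer spectral features and feed them to a trained model, and the counter-surveillance idea above amounts to adding noise crafted so those features stop mapping to sensible phonemes.

```python
import numpy as np

# A fake one-second "recording": a 200 Hz tone sampled at 16 kHz.
sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate
waveform = np.sin(2 * np.pi * 200 * t)

# Chop it into short overlapping frames, the small pieces an ASR front end
# works on (25 ms windows stepped every 10 ms is a common choice).
frame_len = int(0.025 * sample_rate)
hop = int(0.010 * sample_rate)
frames = [waveform[i:i + frame_len]
          for i in range(0, len(waveform) - frame_len + 1, hop)]

# A crude per-frame feature: log energy. Real front ends use spectral features,
# and a trained model maps sequences of them to likely phonemes and words.
log_energy = [float(np.log(np.sum(f ** 2) + 1e-10)) for f in frames]
print(len(frames), "frames; first log-energy:", round(log_energy[0], 3))
```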
Starting point is 03:06:49 I think whoever's listening would also put a big old question. So if you just like had this on in your house, all it would like at the beginning of your fact, basically, you said your phone's not actually listening to you in order to advertise to you. So to have it on your house, all you'd be doing is making it so you couldn't say hey Siri and it wouldn't wake up. Right? Yeah, I think effectively yes. But it was weird because even though there's this body of research that's saying that your phone isn't spying on you at the beginning of this paper they were like in situations where you might be monitored by technologies around
Starting point is 03:07:24 you. And so I don't know if this is, I fell down that rabbit hole when researching whether that was like a legit thing. I would also like for YouTube to add that to the background of every YouTube video in which someone says, hey, whatever. So it stops setting off my various devices at home. All right. I just love the weird esoteric history bits, like things that are so weirdly important and will inevitably be forgotten, but are also just shine a light on the bizarre culture that that results in many of the in all of the structures around us. So I must announce Jordan as the winner of the episode. She was also way ahead. Very powerful lead.
Starting point is 03:08:14 It's funny, because there was an article about this that literally came out last week. And so for the last week, I've been most days going through your tweet likes and your replies to make sure that you didn't see it. Oh no. No. What did I like? I like some real stinkers I bet.
Starting point is 03:08:34 Hank was the one being surveilled all along. I was just reading for discussion of that article. Okay. I cannot comment on anything else that I saw And now it's time for ask the science couch where we ask a listener question to our couch of finely honed scientific minds At Ponyoti on YouTube asked why does it get human extremities? So very wrong by that I assume it means AI that draws pictures Can I guess cuz I like I just have to guess. Which is that it's looking at what object is coming next.
Starting point is 03:09:12 And the next thing that comes after a finger is usually a finger. And so it's just like, I guess. Because look at all my fingers. There's a bunch of them. And so to know that there's four is a very different thing than being like, what comes after a finger? A finger.
Starting point is 03:09:27 What comes after that finger? A finger. And then maybe on the fifth or sixth finger it draws, it's like, oh, another finger doesn't come after that one. But after the third or fourth, it's like, I don't know. I don't know where the finger comes next. It's like, that's too many. Yeah.
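The "what comes after a finger? A finger" guess above is basically a description of greedy next-token prediction. Here is a toy, entirely made-up model of that failure mode; nothing in it comes from a real image or language model, it just shows how always picking the likeliest next thing can overshoot.

```python
# Invented next-token probabilities for a toy "hand drawing" model.
next_token_probs = {
    "palm":   {"finger": 0.9, "stop": 0.1},
    "finger": {"finger": 0.6, "stop": 0.4},
}

def generate(start="palm", max_steps=8):
    sequence = [start]
    token = start
    for _ in range(max_steps):
        options = next_token_probs[token]
        token = max(options, key=options.get)  # greedy: always take the likeliest
        if token == "stop":
            break
        sequence.append(token)
    return sequence

# Greedy decoding never picks "stop" here, so the hand gets eight fingers.
print(generate())
```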
Starting point is 03:09:44 Can it ever take a step back and look at what it did and be like, oh, I did too many fingers or when it does its sixth finger, is it? Is it? Yeah. Does it not ever look at it and say, oh, gosh, too many fingers? It's starting to draw the finger. And it's like, oh, shit, I'm drawing an extra finger. It's too late. I got to put the two. And I guess, yeah.
Starting point is 03:10:00 How you feel as an artist, Sam, when you're drawing a guy? No, I can't erase stuff. I'm a human man. I can erase. That's the power I have over AI. And that's how we're going to win against the AIs now. Some models can do that. It's ultimately down to how the model is built.
Starting point is 03:10:18 So there's also, I think it was DeepMind, made a language model where it had like inherent fact-checking systems, where it would like create a response and then it would be like, let me go pull a bunch of information from reputable resources and see if what I say matches what these things say, and then if it doesn't, let me go back and adjust what I said. So there are mechanisms to do that. When you're using, like, I don't know, DALL-E or, like, Midjourney or whatever, most of those don't really run on that. So that's why most of what you see is like a weird number of fingers, or like there's, you know, three fingers, but then one's coming out, like, this way
Starting point is 03:11:03 Which I guess depending on what data sets they use I was born with I guess five fingers and a thumb so six on each hand Because one was coming out of each of my pinkies so I would imagine that there isn't like a ton of that kind of data in this model, but like Maybe I did make the mistake of typing in weird AI hands into Google, and it is nightmare fuel. These fingers have fingers.
Starting point is 03:11:30 I don't like that. In all likelihood, it's an edge case issue in that there are lots and lots and lots of photos of people's faces and people's bodies in a bunch of different positions and whatnot. But the number of different configurations of like hand positions and you know, you may obviously have five fingers,
Starting point is 03:11:52 but like one might be behind like out of frame or something like that, makes it more challenging in the same way that, like ears used to be the issue and hair used to be the issue representing that. If you don't have enough data, ends up being really really wonky in the same way that language models hallucinate Things when they don't have good representations of them the the one thing that I do have to add I guess is what and Jordan you can correct me if I'm wrong because I'm coming in a
Starting point is 03:12:22 Little bit new here. I think the model that these sort of image generators are trained on are diffusion models, which were introduced from what I found in 2015 in a paper called Deep Unsupervised Learning Using Non-Equilibrium Thermodynamics, which is a fancy way of saying that they took an idea from physics, so this idea of equilibrium, and that if you have, I don't know, it's I feel like a pretty classic chemistry problem where if you have two
Starting point is 03:12:50 containers filled with particles and then eventually they will like drift over and form equilibrium. There are a lot of systems that don't form equilibriums, but things drift back and forth. And so you train these models by giving them images and then having them destroy the data in it, which is like an iterative diffusion process or what they call it. So they turn a picture of a horse into a bunch of uncomprehensible pixels. And then the model learns through destroying it, how to predict how to create an image, like how to reverse
Starting point is 03:13:25 that process and go backwards and generate something from nothing, which is how you get these predictive of, okay, there's a finger. Is there another finger? What is the shape of a hand? What is the shape of a letter? And so that's where the guessing is. That's where the prediction and statistical modeling of these image generations come from
Starting point is 03:13:49 and why you can't necessarily, unless there is that fact-checking layer, say this AI knows what a hand is or knows what the letter A looks like or knows what a pipe looks like because it's all just guessing what is the statistically most likely pixel to appear next to this pixel in a certain class of images.
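To make the "destroy the data, then learn to reverse it" description a bit more concrete, here is a sketch of just the forward, noise-adding half of a diffusion process on a toy image; the noise schedule is made up, and the learned part, which this sketch leaves out, is a network trained to predict and undo the noise added at each step so the chain can be run backwards from pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy 8x8 grayscale "image" with a bright square in the middle.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Forward diffusion: at each step, shrink the signal slightly and add Gaussian
# noise, until the original structure is essentially gone.
x = image.copy()
betas = np.linspace(0.01, 0.2, 50)  # per-step noise amounts (illustrative)
for beta in betas:
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)

print("original mean:", image.mean(), "| fully noised std:", round(float(x.std()), 2))
```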
Starting point is 03:14:10 And I mean, that's also the issue that you run into with language models doing math. It's not about the math. It's about what is statistically most likely to come after the sequence of characters. And we as humans are really good at noticing things about other humans. Like we look at ourselves and each other all the time. And so it's easy to see like that hand is weird. That looks like an alien thing. Whereas there could be a wrong number of leaves on a tree in the background of a
Starting point is 03:14:37 picture and you like wouldn't notice necessarily. There needs to be like there's like any botanist probably looks at any AI generated picture of any tree and is like, oh my God, what were they thinking? Yeah, that flower has an incorrect number of petals. That's biologically impossible. But because we're all humans, it's really easy to be like, oh, that's that foot. Messed up. I've seen a foot and it doesn't go at a right angle necessarily. In many cases.
Starting point is 03:15:06 If you want to ask the Science Couch, follow us on Twitter at SciShow Tangents or check out our YouTube community tab where we will send out topics for upcoming episodes every week. Or you can join the SciShow Tangents Patreon and ask us on our Discord. Thank you to the Space A on Discord, at Ging-Ging-Ging-Ging on Twitter, and everybody else who asked us your questions for this episode. I did my best. That's what it says.
Starting point is 03:15:28 I think that's right. Jordan, thank you so much for coming on the Session of Tangents and sharing your knowledge and understanding with us. I wish we could go deeper. There's so much to know and so much to be excited about and scared by. If I want to see more of what you're up to, where would I go? You can find me on YouTube. You can find me on Nebula.
Starting point is 03:15:48 You can Google my name to see what else I'm doing because I'm also working on my PhD and that is taking up 80% of my brain space right now. Imagine that. And your name is Jordan Herod. Herod has two R's. Yes, like the store, which is a reference that only really works
Starting point is 03:16:03 when I'm in London, but. Yeah, I'm like, it's the store? Should I know about the store, which is a reference that only really works when I'm in London. But yeah, I'm like, oh, sure. I know what the store everybody had on over and subscribe to Jordan on YouTube. And thanks for being here for us. If you like this show and you want to help us out, super easy to do that. First, you can go to Patreon.com slash SciShow Tangents to become a patron. Get access to our discord. Shout out to patron less Aker for their support.
Starting point is 03:16:26 There's also lots of bonus episodes and weird stuff, minions, commentaries. Second, you can give us a review wherever you listen. That helps us know what you like about the show and also other people see them and think, I maybe will watch this show. Finally, if you wanna show your love for SciShow Tangents, just tell people about us!
Starting point is 03:16:45 Thank you for joining us. I've been Hank Green. I've been Sari Riley. I've been Sam Schultz. I've been Jordan Harrod. SciShow Tangents is created by all of us and produced by Jess Stempert. Our associate producer is Eve Schmidt. Our editor is Seth Glicksman.
Starting point is 03:16:56 Our social media organizer is Julia Buzz-Bazio. Our editorial assistant is Deboki Chakravarti. Our sound design is by Joseph Tuna-Medish. Our executive producers are Nicole Sweeney and me, Hank Green. And of course, we couldn't make any of this without our patrons on Patreon. Thank you, and remember, the mind is not a vessel to be filled, but a fire to be lighted. But one more thing. Like we talked about, some machine learning algorithms are used to sort and analyze different images, like dogs, stoplights, or even pictures of human poop. Gastroenterologists already have
Starting point is 03:17:46 ways to classify their patients' poop, like the Bristol stool scale or the Brussels infant and toddler stool scale, which look at things like stool consistency or how fragmented the chunks are. So the goal of these machine learning models, which may eventually become smartphone apps, is to give doctors an extra tool and help patients self-report their gut health more consistently. There was also a 2020 Stanford paper that looked at a similar system embedded in a bidet and it used anal fingerprint recognition to tell which person was sitting on the toilet
Starting point is 03:18:18 to associate that person's data with their selves. That was not in the script. That was just Jordan. It wasn't. I just knew that. I did a video about it a while ago. Wild, wild race. Is that unique that you can...
Starting point is 03:18:35 Apparently. Every, every one is beautiful in their own way. Specifically in reference to... But who knew that? Who was like, I wonder if the butthole looks different enough. I feel like it was in the introduction of the paper, because I remember talking about it in the video being like, who found out? Like who...
Starting point is 03:18:57 Yeah. Where's this data set? I have questions.
