Embedded - 502: Chat, J'ai Pété!

Episode Date: June 3, 2025

Chris and Elecia talk about Murderbot, LLMs (AI), bikes, control algorithms, and fancy math.

The website with the ecology jobs is wildlabs.net, from 501: Inside the Armpit of a Giraffe with Meredith Palmer and Akiba.

The algorithm Elecia mentioned was from Patent US7370713B1.

The Control Bootcamp YouTube series is a great introduction to control systems beyond PIDs. There is also a book from the same folks (with MATLAB and some Python code): Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control.

Finding bad AI interactions is too easy: the Copilot PR mess that was discussed, lawyers letting ChatGPT hallucinate precedents, and fake (hallucinated) citations in a high-profile report on children's health.

Nordic Semiconductor has been the driving force for Bluetooth Low Energy MCUs and wireless SoCs since the early 2010s, and they offer solutions for low-power Wi-Fi and global cellular IoT as well. If you plan on developing robust and battery-operated applications, check out their hardware, software, tools, and services. On academy.nordicsemi.com, you'll find Bluetooth, Wi-Fi, and cellular IoT courses, and the Nordic DevZone community covers technical questions: devzone.nordicsemi.com.

Congratulations to the giveaway winners!

Transcript
Starting point is 00:00:00 Hello and welcome to Embedded. I am Elecia White, here with Christopher White. This week we are going to talk about, oh gosh, all kinds of things. Where do you want to start? I don't know. What do you want for lunch? That's a longer discussion than we have time for. Okay. What do you think about Murderbot? I think it is a good series of books and, so far, a faithful rendition onto television. I appreciate how they have incorporated the internal dialogue in a way that makes sense.
Starting point is 00:00:41 That was very boring. I'm sorry. It's funny. The books are the Murderbot Diaries, and they're all from its perspective. By Martha Wells, who we've had on the show, if you want to go find that episode. Which is really fun. Clearly podcast abuse of power. And the show is Murderbot, and there's a difference between Murderbot Diaries and Murderbot. The focus is less on its perspective, although it still has a strong voice. It's still pretty central, but yes, there have been changes, because you have to change
Starting point is 00:01:17 when you're changing media. Mediums? Media. When you're changing art forms, what's the word I'm looking for here? Mediums. Yeah. Media is the plural of medium.
Starting point is 00:01:30 I know. It's just confusing. Yeah, so obviously there are always changes when you adapt something. I think this particular story, novel, whatever, I wouldn't say easier to adapt, but it's more aligned with adapting to television because of the way she wrote it: they're shorter. They tend to be novellas. Right. Most of the Murderbot Diaries books are novellas. It's probably easier for them to adapt a novella into a television series than, say, Wheel of Time or something, which are many-thousand-page books, right? There's a lot more compression that goes on.
Starting point is 00:02:07 I don't feel like there's a ton of compression happening with this. I feel like I remember almost all the beats that are happening in the TV show. So I think they've done a good job with that. What do you think of the casting? I know I'm supposed to say, how dare they cast a white man as Murderbot. And I just, I don't know. I'm okay with it. I love the casting of the Preservation folks.
Starting point is 00:02:37 They are so perfect. They're just so perfect. Mensah in particular is exactly how I imagined her. So that was kind of interesting. Yeah, I mean, they've gone out of their way to address the it-ness of Murderbot explicitly in some cases. So I think... Did not need a full frontal.
Starting point is 00:03:03 I mean, since you're casting something that doesn't exist, it's not like... yeah, it's a human cyborg construct thing. Okay, but the thing is, we're not objective. Right. Yeah. We like the series. We are prone to liking many of the Apple TV productions. This is like when Episode Nine came out... no, Episode One. Star Wars. Star Wars came out and it was like, you know, I don't like Jar Jar Binks, but honestly, he could just read the intergalactic telephone book.
Starting point is 00:03:47 It was George Lucas reading the intergalactic telephone book. I don't want to listen to Jar Jar reading the intergalactic telephone book. I mean, there's some level of, I'm just so happy it's happening, that... Which fades in time. But this is not... I don't think this is one of those situations. I think this is pretty good, yes. And interestingly, the episodes are quite short; they're keeping them to the 25-minute sitcom length. It's not a sitcom, although it has sitcom elements.
Starting point is 00:04:15 But no laugh track. Yeah, but they're short, which is weird. Streaming television these days tends to be, we'll make the episodes as long or as short as they need to be for the particular chunk of the story we're telling right now. So you get shows that have hour-long episodes, and then the next one might be a half hour or 35 or 45. These are all pretty snappy. So it's interesting. I think it'll be... When the last one ended, I was like, no, that was just the pre-show, pre-credits.
Starting point is 00:04:39 And they're releasing them weekly, which some streaming platforms do and some don't. I would binge it. I would watch every single one of them. I think it's better for shows that they don't drop them all at once in terms of gaining followings and people talking about them and stuff. But it is hard to switch back and forth and be like, oh, this show is really fast paced and also I have to wait a week between each short episode. Anyway, if you like sci-fi and quirky sci-fi, I think you might enjoy that.
Starting point is 00:05:11 And while it is called Murderbot and there is a fair amount of violence, there's also a lot of humor. Well, there's not a lot of murder. If you haven't seen the show and you haven't read the books: Murderbot is the name that it gives itself as sort of a quasi-derogatory term; it's not happy with its place in the world. It's kind of like calling yourself dummy in your head. Yeah. So, it's not about a murderer.
Starting point is 00:05:43 It's not Dexter in robot form or something like that. Anyway, yeah. So I would recommend the show even if you haven't read the books, I think. That's how well they've done with it. So either one is a gateway to the other. So yes, five minutes on Murderbot. Done. We'd like to thank Nordic for sponsoring this show.
Starting point is 00:06:07 We've really appreciated their sponsorship and as the time comes to an end, well, we'll still love you, Nordic. And in the meantime, they did give away some things and we have some winners to announce. Jordan Sal, Emily Dulzalek, and Wojciech StoDolny. If any of you would like to email in and tell me how to actually pronounce your names, I'm happy to do so in the next episode. But thank you to Nordic for their sponsorship, and we appreciate it, and keep on Nordicking. When I mentioned on the Patreon Slack that we didn't have topics for this week, I also said I didn't really want to talk about AI because I feel like we talk about AI a lot. But then David
Starting point is 00:07:05 said that he wished we would, and said some nice things about a voice of sanity and down-to-earth, practical experience. So I feel very pressured into talking about this. And I know the Amp Hour did too. I did listen to their latest episode where they said that, or whenever that was. But, so, do you want to start or do you want me to give my short version first? Why don't I give you a short version first? Because I don't know if you looked at the notes. I did not, but I certainly did not expect the 16-point list. Okay, I have two points. First, I do think you should try to use the AI stuff, whether it's Gemini or ChatGPT or
Starting point is 00:07:50 whatever. It's interesting to get to know. Second, it's a tool. Wait, no, I shouldn't tell you because I had two points. So with that, it's a tool. It's not always a great tool. It has many disadvantages. And one of the things that was said to me that makes a lot of sense is you shouldn't
Starting point is 00:08:14 use it for things you can't do yourself. So, if you're thinking about hacking together a script in an hour to take care of some problem you have, that is a great use, because at the end of talking to your AI assistant, you have something you understand, because you would have written it if you hadn't been quite so lazy. I'm a fan of being lazy in engineering. But if you didn't know how to get started on that script, or you didn't know how to do it, or it's using libraries you don't know, then just don't do it. Don't even try to run it. That is: if you can't do it yourself, you shouldn't ask the AI to do it.
Starting point is 00:08:54 The second point involves French. So apparently when you say ChatGPT... That's not a point, but please continue. And you don't have to say it in a French accent. It's not chat GPT, but it does help if you say it in a French accent. It translates to cat, I farted. Chat, j'ai pété. And so when you hear that ChatGPT is going to ruin the world, you can just translate it to cat, I farted is going to ruin the world, and it makes the whole thing far more palatable. Okay, so now every time Chris says ChatGPT... I'm not going to say ChatGPT ever. You should translate. Actually, maybe it should be anytime any of us says AI. That's funny, because one of the other ones is called Claude, which is French sounding.
Starting point is 00:09:51 You should just go ahead and translate it to fart. And clod is a euphemism for an idiot. I don't think that's etymologically correct, but I'm going to go with it. C-L-O-D, clod? Yes, but C-L-A-U-D-E probably... And chat, j'ai pété is not spelled... All right, all right, you're right. You're right, okay.
Starting point is 00:10:11 So, I apologize in advance for the next 15 to 20 minutes, everybody. I think you should blame David. Blame David. You did ask for this. I have spent the last few days thinking about this and I have some thoughts. They are mostly disordered. I have kind of organized them by class of thought. I don't have a lot to expound on them necessarily, but I'm going to go through them all. First of all, let me preface this by saying I think everybody is not treating AI in the way it needs to be treated,
Starting point is 00:10:42 and everybody comes at it from their own particular narrow perspective. A lot of people who listen to this show are coming at it from the perspective of engineers: What can it write for me? What is it going to do to the code? Is it going to take my job? Should I be using it? How do we use it? What are the implications of this for engineering? And I'm going to say AI instead of LLMs, just because that's what everybody says. So when I say AI, I mean large language models that do predictive text, not necessarily vision classifiers
Starting point is 00:11:16 or things that separate music into different tracks, which I use all the time, things like that. But we're talking about ChatGPT, Claude, Gemini. All of those things. Okay. When you step back from all this and you're not in it, and you take it from a non-purely-engineering standpoint, this is so complicated that it becomes almost like talking about religion. Everybody has their things they like about it.
Starting point is 00:11:46 Everybody has their worries, but it's all a big mishmash. So here is my mishmash. First of all, I will admit, I have tried these. I've tried these with the interest of I need to know how they're developing and what's happening. I do not use them regularly. I've written a few scripts with them. I see how they do code.
Starting point is 00:12:04 I've had a few conversations with them. I do not use them regularly. I've written a few scripts with them. I see how they do code. I've had a few conversations with them. I do not use them on a regular basis. Probably once every two weeks I will check in with something and then I will stop myself. I will explain why I stop myself in a few moments, but I'll get there. First of all, it's fun. Right? It's fun using these things. You're talking to a robot. Wow, it's everything we've ever been promised, right? It convinces you, it's very convincing that talking to one of these things is like talking to a person. It has a little quip sometimes.
Starting point is 00:12:37 It engages what you say. It remembers what you say. It has context. All of that is super fun. It's fun to have it write code. It's fun to have it make up limericks, all of that is super fun. It's fun to have it write code. It's fun to have it make up limericks. All of that is cool stuff. It's very interesting.
Starting point is 00:12:50 It's also deceptive, but I'm not gonna say much more about that. Remember, Eliza was fun. If you remember Eliza from the eighties, well, it's actually from the sixties, I think, but Eliza was kind of a little conversational engine. It was purely heuristic, but it was kind of fun to have a conversation.
Starting point is 00:13:07 This is more like talking to Hal, if Hal were super obsequious all the time. I've turned that off. Thanks to some Reddit posts, I have discovered how to do the first prompt so that I get a much colder, much more logical sounding thing. I know that it's not really any more logical, it's not any less hallucinatory, but...
Starting point is 00:13:33 It gives the illusion of being something that's alive. And that's really important for a lot of reasons, both good and bad. But it's not. There's so many sci-fi things. It doesn't understand anything. It doesn't really remember anything. It doesn't really know anything. That's the key bit. It's like a compression algorithm for all human knowledge, but a lossy compression algorithm that when it decompresses stuff, it has errors. And so it doesn't know when something's wrong that it's told you.
Starting point is 00:14:08 Anyway, back to my long list, which is a mess. I'm telling everybody up front it's a mess. We've kind of crossed off number one, it's fun. Yeah. Because it is fun. You wrote like Star Wars plays. Well, back when it first came out, yeah. It was hilarious.
Starting point is 00:14:21 It is capable of useful stuff. I will be the first to admit that. You can write code with it. You can say, hey, Claude, I need a script in Python that will do this, and it will do it. And the script will work, probably. Maybe. Keep in mind the probably. I've used this in a pinch a couple of times when I need some stupid little script to do something that I just do not have time for. It mostly works.
Starting point is 00:14:45 However, it's a really bad coder. Have you read the code that your LLM is producing? It sucks! It's hard to read. It will produce functions that are pages and pages long, non-modular, and yeah, you can go have a conversation with it and say, make this more modular, do this and that. Make it simpler. Make it simpler is really important. Make it simpler. Don't use these kinds of structures. Don't make these kinds of mistakes.
Starting point is 00:15:10 But you know what? You have to know how to do all of that before you can ask. And it's a catch-22. If you've got a bunch of people, junior people, who are using this to write, they're never going to learn good code, because either a senior person has to tell them or they have to learn it through seeing lots of examples of good code. And since LLMs produce, in my opinion, pretty crappy code, that's going to be a problem. So, it is capable of a lot of stuff. It
Starting point is 00:15:42 also appears to be capable of other stuff, but isn't. So that's the thing. They will blindly tell you the answer to everything, because they can't say no. They can't say, I don't know. So one thing I've had happen when doing scripts and stuff is, if I'm in a corner that it's not well trained for, we get in a loop. Like, oh, okay, write this script that does this.
Starting point is 00:16:07 It does. It doesn't work. Because it's calling a module that doesn't exist, or using a module inappropriately, or it just doesn't know how to do it exactly right. But it'll produce the code. It doesn't work. You correct it.
Starting point is 00:16:21 Oh, you did this wrong. Do that. Oh, I'm sorry. It always apologizes. You can turn that off. Right, you did this wrong, do that. Oh, I'm sorry. It always apologizes. You can turn that off. Right, which is another issue. And then it'll correct it. Here's the right thing.
Starting point is 00:16:31 And I've gotten in loops where it'll go back and forth between one wrong thing to another, A, B, C, D, E, and back to A and never get a right answer because for some reason it can't. But through all of that, it was always saying, I see the mistake, here's the correction. I see the mistake, here's the correction. It doesn't see the mistake. No, it only says it sees the mistake. It says it sees the mistake because that's what it's trained on conversations about code
Starting point is 00:17:00 to do. Okay. That's part of the tech stuff. I'll probably come back to some tech stuff. What about the... I sometimes use it to increase the amount of tact in my messages. Do you feel like you're learning to be more tactful by doing this process? No. I have reached the age where I am decreasing the amount of tact that I provide for other people. Do you think it's important to increase the amount of tact that you use? People cry if I don't, so yes. And I don't like it when they cry.
Starting point is 00:17:31 I mean, yes, that is a use. And I guess if you're reviewing what it says, that's useful. That's fine. I mean, that's probably one of the useful things. I would say, and it'll become clear as I continue through this multi-point thing. Very long list. I didn't expect it. I don't know if that benefit is worth the hundreds of billions of dollars of investment.
Starting point is 00:17:55 No, because before I started using it, I had plenty of other scripts that I used to soften messages. Or other people. But it's tough to bug other people. Yes, there were also other people I have asked to contribute. But there's a tendency to not bug other people now, because you can go ask the chatbot, and we lose something there. That's probably true.
Starting point is 00:18:15 Some of my friends have become much closer because I asked them to help me rewrite the message. Okay, I'm going to start getting into some things that are going to piss people off. I think people can abuse this way more easily than some previous technologies. Computers, somewhat dangerous, I will admit. Difficult to use. Chatbots, pretty easy to use. You go talk to something and tell it what you want.
Starting point is 00:18:41 And it has useful things and it's extremely dangerous. And it's extremely dangerous at scale when you give a chatbot to every single person on the planet. And I think the analogy is similar to munitions in some way. Very useful in certain contexts, but also extremely dangerous at scale. So you're going to chat bot a bomb? Yeah, you can do that. I mean, of course you can. Or you can produce tons of propaganda and flood social networks with it.
Starting point is 00:19:14 You can produce, and we're just talking about chat bots here, the same line of AI things can produce very convincing video now with a prompt. Very convincing photographs that take somebody who's familiar with the outputs to notice it's AI, very convincing voice. You can make it sound like anyone. So it's trivial. You don't even know that this is our podcast anymore, do you? I just said a prompt, argue against AI, and we're kicking it at the beach. So I think there's a tremendous potential for abuse on the social scale that has nothing
Starting point is 00:19:49 to do with writing scripts in Python. And I think we're seeing that now. So there have been a lot of cases. There have been a lot of legal things where lawyers ran to ChatGPT and their filings cite cases that don't exist. The Department of Health and Human Services just released a big study, a position paper about, I don't remember what, probably something bad that they're going to do, where they cited a bunch of scientific papers that do not exist, and the authors say, I didn't write them. So this is the kind of stuff that's happening.
Starting point is 00:20:28 And at the same time, I don't think the companies and people that are developing and pushing it are trustworthy. I think... Well, going back to the previous point. Yeah. I mean, writing papers... I want to take that back to writing scripts, because those people wrote papers that they could not have done themselves. In order for them to have written those papers themselves, they would have had to be familiar
Starting point is 00:20:58 with the papers they were basing things on. You think the lawyers are not capable of writing legal filings? Well, clearly they're not good enough at writing legal filings to be able to read one and say, no, that's not correct. I think they don't understand LLMs. I think they trusted it and they thought this would speed things up. I think that's the case for most of these kinds of... I don't know about the Health and Human Services, those people are crazy. Actually, this is one of those points.
Starting point is 00:21:27 When this was mentioned on the Slack, somebody said, I can't explain to my boss why it's not the be-all, end-all, time-saving, wonderful thing that he thinks it is. And this is part of the problem. Maybe people can't understand, well, it writes crappy code, or it writes code that is very inefficient, or any of these things about code that people just don't understand. But this example of, it writes legal briefs with phantom precedents and case files, and it doesn't know that... that might be an example that helps non-techs understand a little bit of it. The LLM truly believes-
Starting point is 00:22:15 Just believes nothing. It believes nothing. The LLM truly wants you to believe that the things it hallucinated are correct when they aren't. So when we take that back to technology: it hallucinates libraries for Python, which is just hilarious. And there are other things like this. So maybe this is a good story to have. The LLMs are generating papers that have bad references, and policy papers are being written using those bad references.
Starting point is 00:22:52 Okay, so continue. I'm going to blow through some of these because I'm taking too long. So, I mentioned before, it's multimodal. You might love it for making shell scripts and stuff, but it's also a single step away from producing propaganda, revenge porn, AI slop articles that poison the internet, images poisoning search results, all kinds of garbage. So there's a lot of potential for that to happen. They're not making any money.
Starting point is 00:23:15 Wait, you missed a couple up here: the inputs are largely stolen. Inputs are largely stolen, yes. To make an LLM, you need to train it on tons of information. That tons of information comes from the internet, and so it came from code that was in GitHub, code and answers that were in Stack Overflow, everything everyone's ever written in a blog post, books, articles, magazines, and they did not get permission to use any of that. The thing that kind of struck me about that one was the Aaron Swartz case.
Starting point is 00:23:47 Yes. Yes. He was... He was... He downloaded a bunch of stuff from the internet that he didn't necessarily have permission for. But it was mostly... I don't remember the exact details of it, but it was quasi-public stuff. Exactly. And he shared it. And they decided to make an example, and they were going to throw the book at him.
Starting point is 00:24:11 And he committed suicide. And honestly, what they were talking about... It was academic papers and things, if I recall correctly. Yeah. He wasn't doing anything wrong. And they were going to send him to prison forever. And they had vilified him.
Starting point is 00:24:32 And then we get ChatGPT and all of these LLMs doing the same thing, but at larger scale, including some of the same libraries he was prosecuted over. Yeah, JSTOR was the digital repository of academic journals accessible through MIT's computer network, which visitors to MIT's open campus had access to, and which Swartz, as a research fellow, had access to. And so basically he took stuff that was lightly behind an access thing for students and made it public.
Starting point is 00:25:08 And the LLMs took all of that info, ripped through it. They've taken everything. They've reached the point where they're running out of stuff. They've trained on so much stuff, there's no internet left for them to add, which is causing their later and later models to be... And then they train on their own output, and that's just much worse. Okay. So that's a legal, moral issue.
Starting point is 00:25:30 The next thing I was going to say is, they're not making any money. There's tons of investment going into it. They're not making any money. OpenAI is losing money hand over fist. It's so confusing to me. They're losing billions. And each query costs... Each query, in energy, is basically a bottle of water.
Starting point is 00:25:49 I'm not going to get into the environmental stuff, because that's getting better and it's a weak reed to hang this stuff on now. That's fine. But there is a cost associated with every query. Which is mostly a loss for these companies. Yes. I mean, there are people who are paying for subscriptions so that their queries don't get folded into the whole cake batter.
Starting point is 00:26:15 Well, I think most people are paying because you get cut off after a certain number of... Oh. I think it's being pushed really past its capabilities in a lot of places. I don't know if any of you have seen: GitHub has new Copilot agents that you can add to your team, and they will autonomously assign themselves issues, solve them, file PRs, and push them. Didn't... Wasn't there a Microsoft article about that?
Starting point is 00:26:39 I don't know. There's a Reddit thread, which I will find the link for, where this happened in one of the major repositories. I don't remember which one. It might have been a Java thing. And it is basically a PR, and it puts the PR up for a code review. And the developers engage with the agent and do a code review. And it is hundreds and hundreds of entries long.
Starting point is 00:27:10 No, that's wrong. Do this. No, that's wrong. It is the biggest waste of time I have ever seen. If it was a human, you would say, no, stop. Pull this PR. We're going to go have a talk. But you can't do that. And so it's just this endless string of, nope, that's wrong, nope, that's wrong.
Starting point is 00:27:29 And it keeps putting the PR back up with these changes and getting into the loops I was talking about. It's not ready to do things like that. But yet they're pushing these agents that are going to take autonomous action, which is really frightening to me. Autonomous action. Giving more excuses for management to abuse workers by demanding more productivity: go use this and we'll go faster, at the cost of quality, because you're not going to have
Starting point is 00:27:53 time to bird-dog its outputs, or just lay people off, because we can do everything with AI. That's worrisome. Oh no, I think any company that believes that should 100% do it. And it's happened. There have been companies... I think Klarna was the one that replaced their entire customer support team with chatbots, and they had to undo that and hire a bunch of people back, because the customers were not happy. Also look at the US federal government and how it's being used as an excuse to lay people off.
Starting point is 00:28:23 I think this is a point I want to make and I'm almost done. I promise I'm almost done. Paradoxically, as these get better, this is going to get more dangerous because it's going to get closer and closer to working a lot of the time. And when something works 95% of the time, but doesn't 5% of the time, that's really bad because you get into the zone where you're confident that it's working. And that you'll defend it.
Starting point is 00:28:50 You'll defend it. It works well enough that you're confident, but it screws up enough of the time that it's dangerous. Same as self-driving cars. A self-driving car that works 95% of the time and doesn't make mistakes sounds great until you think about the fact that it's going to make a mistake once every whatever, which is really often, and it's going to do it when you're not paying attention. Once every 20 minutes.
Starting point is 00:29:18 Yeah, you're not gonna pay attention. You're gonna be lulled into a false sense of... Because 19 minutes of boredom means that, yeah. And so people are gonna trust them, they're going to apply them to more and more places where they're not applicable, or to more vital problems. See also the GitHub agent thing, and something that only works 90 to 98, even 99% of the time. That's terrible.
Starting point is 00:29:40 You wouldn't fly on an airplane that crashed one percent of the time. Also, there's some confusing stuff. So all these companies... you listen to these CEOs, and they'll come out and they'll tell you, within five years we're going to be getting rid of 20 percent of developers and replacing them with ChatGPT or Claude or whatever. I think the Anthropic guy recently came out and said, yeah, we're going to replace developers in five years. We're going to do this.
Starting point is 00:30:07 Why do they have per user licensing? Their entire business model depends on developers paying money to access their stuff, and it's not getting cheaper. So if they get rid of their customers, that seems kind of contradictory. So I have trouble believing that they actually believe that. I don't know what they believe, but it's a little galling to have them say, we're going to replace developers, the very people who are paying per-seat licenses for our stuff and hoping that we get out of the hole that we're in. So I just find that interesting. One final thing, and it goes back to it being fun and this is
Starting point is 00:30:47 happening outside the realm of tech, and it's mostly anecdotal at this point, but it's sort of worrisome. You have a friend you can talk to at any time, and that friend won't get mad at you. You can say whatever you want to that friend, and they won't get mad at you. They won't storm off. They won't not call you back. And people are getting addicted to these things. And that's one thing I've noticed in myself, just a little bit, just in a little bit of usage. It's fun. I'm talking to this entity, which seems to have a personality.
Starting point is 00:31:15 just a little bit, just in a little bit of usage. It's fun. I'm talking to this entity, which seems to have a personality. And it's smart. It's smart. It waits for me. We're having this conversation. I can take a 30-minute pause and it doesn't say, well, I guess we're done and leave.
Starting point is 00:31:31 We can just pick it right back up. And people are replacing other people with them. They're making them into friends, they're making them into intimate partners. Therapists. Therapists. I mean, because what you really want is to lay out your whole mental anguish to something that may be recycling that into the future. To a Silicon Valley venture-funded company?
Starting point is 00:31:52 Yes. Yeah. That's a worry that's sort of very meta compared to some of this other stuff, but there are social implications to creating something that appears to be alive, sentient and alive. And we are skipping past a lot of that. And so that is my final thought on that matter. Like you said, I think people should be familiar with these.
Starting point is 00:32:19 I think they should use them, to see what they're capable of and what they're not capable of. I think they should be realistic about what they're seeing when they use them, and mindful about what's actually happening. And you don't have to pay for all of them. They're all free for some limited number of queries per day. Yes.
Starting point is 00:32:33 Yeah, and I mean, I haven't, I have used them to do things and not hit that limit. You can definitely, it's not like it's only three and it's useless. You can definitely get a little bit of work done with the free ones. And if you get into coding and you get in a loop, that's when you get kicked out pretty fast. It's like, okay, you've talked to me too much today, come back tomorrow.
Starting point is 00:32:57 But remember what you're using, remember what it can do, pay attention to its failures and pay attention to the implications to our world because these are not the AI that we think they are from science fiction, that are modeled on a human brain, that know things, that can be self-critical. That is a huge missing piece. They are incapable of self-criticism. And until that happens, the whole general AI thing is just a pipe dream. So that's what I think, David. I don't personally use them very much.
Starting point is 00:33:44 I try not to use them at all, but I do dip my feet in the water occasionally just to see if there are sharks. Recently, I came across a story of someone successfully convincing a flat earther that the earth cannot be flat because if it was, the edge points would be tourist attractions. Which is a weirdly convincing argument because if the world was flat, wouldn't you want to go see where it ended? That would be super cool.
Starting point is 00:34:19 See also Terry Pratchett. Right. Going back to your point about how they're not making money, but they're pushing it really hard. They're not making money, but they want to give you more of it. Well, part of the reason they're not making money is that it's incredibly expensive to do the training that they do. Even the inference alone is expensive. Inference is expensive, but it's a fraction of the training.
Starting point is 00:34:49 And they have to buy a lot of expensive hardware to do that training, and run them in data centers. There's a lot of cost to that. That may come down. It's like the environmental argument, which I think is not a good thing for people who are AI skeptics to spend a lot of time on, because as we know with tech, as things go on, things get cheaper, they get smaller, they get more efficient, so that
Starting point is 00:35:08 argument is likely to go away, and if you're standing on that ledge as your main point, then you need to regroup. But yeah, it's a little weird. I don't think it's just because of the high cost. I think generally they're pushing a lot of free stuff, so they're losing money because most people using them are using the free tiers. And I think a lot of people aren't using them unless they're part of the operating system, like Apple has done, like Google has done. Those people are using them. So presumably they're getting money from Apple and Google.
Starting point is 00:35:48 Well, Google has their own, so they're just paying themselves, but Apple's paying ChatGPT somewhat. No, actually they got a free deal, didn't they? Anyway, Apple scammed ChatGPT on the deal for including that. But yeah, it's weird. And there's been a pullback. Microsoft is pulling back some investments. So I think this stuff... I don't think it's going anywhere. And that's why I care so deeply about it,
Starting point is 00:36:14 because if I thought it was the flash in the pan, I wouldn't be talking to you about it. But I think it's not going anywhere. It's gonna continue to improve. And therefore we are going to need to figure out how to deal with it as a society. And I think that means regulation, and I think that means education.
Starting point is 00:36:32 And obviously we're not in a place where regulation is going to happen right now, at least in the United States, but it's something that I think needs to be considered. And in order to consider it, you have to understand at least a little bit of it. Yeah. And so if you can, use it, try it out, make your own decision. It can be extremely helpful, especially if you're the sort that ends up looking up everything
Starting point is 00:36:57 you do. Which is, you know... some people work from memorization, some people work by looking stuff up. I would agree with that, with the caveat of: don't let its fun-ness trick you into not using resources you're already good at using. So if you already know how to look up physics stuff you need to know, or math stuff, or get help on code, you're already pretty efficient at that on the internet. This may seem more fun and efficient, but it may not be. In which case, don't get fooled into spending time with this when you already have good skills to do the things you need to do. That's all.
Starting point is 00:37:37 Changing subjects. Oh, my book is out in Polish. It will be out in Russian soon. It is also out in Portuguese, but that was a few months ago. It's, of course, out in English. There are quizzes on the Safari learning site, which is the O'Reilly site for books. I got nine out of ten on the quiz that I took. So yeah, I have a book. It's called Making Embedded Systems. What'd you miss? I don't know. It didn't tell me which one I missed. What?
Starting point is 00:38:08 I know. How are you supposed to learn? Okay. Wow. Okay. So, I had more questions than this. We're not going to get through them all. We can have a longish episode. Brian, upon listening to Inside the Armpit of a Giraffe, said that he knew what he wanted
Starting point is 00:38:34 to do, and he's sick of working for the man: design embedded systems for ecologists. Brian also wants to follow Akiba's lead and answer Meredith's call for more engineers to join the effort to understand our world so we can make better choices for the planet. Nice. I totally agree. We had a couple of other folks who talked about how that episode made them want to go try things. I definitely went to wildlabs.net and looked around, even though I'm currently
Starting point is 00:39:08 overbooked. And the end uses of technology are so fascinating. And that's just one of them. I mean, animals! Let's see. What else do we have? Oh, Chris and I got e-bikes. I didn't expect to have quite so much fun riding a bike again. Part of the fun is that there now exist cycleways, places that are mostly just for bikes. And we have some here in Santa Cruz. It's a patchwork right now, but someday it will run 31 miles from South County to North County. And they're not just bike lanes, which are a little scary.
Starting point is 00:39:59 They're completely separate, in most cases, from roads. Yeah, no, the e-bike technology has gotten really fun. We live on a hill, and there's always been this thing about bikes that the last quarter mile would be miserable. It's also at the point where you'd walk them up rather than... Oh, even with the e-bike, I still sometimes don't manage that last hill, and we'll walk it that way. You've got to have enough speed coming into it. But there's a turn. If I'm going too fast around that turn, I'm just going to wipe out. All right.
Starting point is 00:40:33 Anyway, I have to say that while I don't use the e-bike part of it except for that, I just am really liking it. It's a great leveler, because you're a much better cyclist than I am in terms of power. If I didn't have the e-bike part, we would not be able to cycle together, because I would flame out probably halfway through the ride. So I can just put it on one pip and get a little bit of a boost. It's not doing everything for me, but it's a great leveler. So it's very nice. And then on hills, like you said, the nice thing about it is if you want to take a long ride but you're not sure you want to take a long ride because you're tired, you just turn that up a little bit and it does some of the work for you.
Starting point is 00:41:23 And so now you can have a nice ride without it necessarily being exercise. That was the thing is we can ride out until we get to the point where we're kind of tired of riding or almost tired of riding that point that I used to think, well, that can't be the midpoint. But then you can e-bike home. Or at least have it assist you most of the way. Yeah. So it's a lot easier. Yeah.
Starting point is 00:41:47 I highly recommend them. It's been a lot of fun. Just the joy of the freedom of riding bikes. And like I said to you, when you're driving someplace, everything goes by so fast, you don't see a lot of stuff. And walking and cycling are at a speed where you see everything between point A and point B. It's just a different experience. You're outside, you know? It helps that we live in a very pretty place.
Starting point is 00:42:18 Yeah. Okay. Sergio asks, are you learning Rust? Is it being used in industry? No and yes. I have not personally seen it on a client, but I know of projects that use it. I agree. I am not learning it because I am very good at C and C++ and Python; I don't see the advantage of Rust. There are advantages to Rust. I don't know. I would have to learn it.
Starting point is 00:42:52 Yeah, yeah. And my team would have to learn it. Yeah, that's the big piece. I don't know where it's gonna end up. It probably will have a place in embedded systems. But I hear Zig is the new hotness. I don't know. probably will have a place in embedded systems. But I hear Zig is the new hotness. I don't know. So here's the real deal.
Starting point is 00:43:10 I am going to be retired before it's required for me to learn any of those things, so I'm not bothering. I would rather learn Cobol. I don't think that's very useful. I'm already pretty proficient with Fortran 77. I think they have Fortran 93 now, so you gotta get up to speed. I know. I'm so far behind. Ah, okay.
Starting point is 00:43:35 Well, we could do cool algorithms, or the derived question about cool algorithms. Oh, I see. Those are kind of connected questions. Or people who influence your life. Let's save the influence life one because that could be long and I would need to prep for that. Okay. Simon asks for cool control algorithms. Control algorithms. Okay. Well, that limits this. Kalman filters, PIDs are common. Have we found any that have worked for memorable applications?
Starting point is 00:44:06 What are they called and how do they work? And then Tom Anderson followed up on that with how much of embedded controls are derived and how much is ad hoc tuning? All right. Where derived involves fancy math. Okay, well my answers to this are kind of broken then because I missed the controls being a thing, but I have something to say about fancy math later, but I... No, no, go ahead and do the fancy math.
Starting point is 00:44:31 No, no, no, no. Let's talk about the controls part first because it makes more sense to. I have not done a lot of controls, so I'm out of my depth here answering any of that. Kalman and PID are still... Widely used, yeah. My go-to... I'm working on an inverted pendulum project (a piece of it has an inverted pendulum) and have run across the Segway algorithm.
Starting point is 00:45:01 They have a patent that very well describes how they do their algorithm that involves how far your pendulum is pitched over, your pendulum pitch rate, the distance your wheeled body has traveled, and the rate at which your wheeled body is traveling. And it's a neat algorithm. It's very effective for this. It's very well considered. There are all kinds of things for specific implementations, but it is a pretty canned algorithm that's used in a lot of places that I was kind of unfamiliar with a year ago. I'm a lot more familiar with it now.
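(A minimal sketch of that style of full-state feedback, for the curious: the wheel command is a weighted sum of the four states the patent describes. The gains and names below are invented for illustration; real gains come from the vehicle's physical model, not from the patent.)

    # Balance law in the style described above: the wheel command is a
    # weighted sum of pitch, pitch rate, wheel position, and wheel speed.
    # Gains k1..k4 are made-up placeholders, not Segway's actual values.
    def balance_command(pitch, pitch_rate, position, velocity):
        k1, k2, k3, k4 = 50.0, 12.0, 2.0, 4.0  # illustrative only
        return k1 * pitch + k2 * pitch_rate + k3 * position + k4 * velocity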
Starting point is 00:45:50 At the start of that, you start with the equations of motion, the differential equations that govern how an inverted pendulum behaves. I assume they use Lagrangian or something to derive those. Do they talk about the numerical methods they use to solve those? Because that's the thing in Tom's and Simon's, well, Tom's question anyway. There's lots of control algorithms. There's lots of things that are derived from physics and they tend to just come out of equations of motion or like differential equations. And the two tricks with that, of course, is that those are just models and there's other
Starting point is 00:46:33 things that come into those that are not in the equations: friction, heat, the way your physical materials change. Endpoints, boundary conditions, things like that, that change. But also a really big one is, even if you've got the math and it models the world perfectly, you've got to jam it into a computer. And computers are not things that solve differential equations very well. You need to do numerical methods to approximate solutions.
Starting point is 00:47:03 So that's where error comes in. Even if you've got perfect physics on one end, as soon as you put it in a computer, you need to adapt it to how computers work to solve them, and those solutions have errors because they're not solving it perfectly. Do the Segway people talk about their numerical methods? No. Okay. No, because their physics is nowhere near that complete.
Starting point is 00:47:29 Well, it has to be somewhat. It doesn't have to be complete. They still have to solve... I assume they're solving a differential equation. Numerical issues are so far down the list of things they need to solve. I mean, not just for the Segway thing. Friction is just something...
Starting point is 00:47:49 Right, right, but that's just another term. But it's not a term that's in the... Since it's a feedback loop, you can sometimes ignore those terms. But then you know you aren't really in the physics world, you're in the real world, and you just don't go as far, and that's okay, because you measured that you didn't go as far, and so you go a little further. And so the numerics just aren't... They're never on my radar.
Starting point is 00:48:20 Right, okay. But... interesting. Other things, like perturbations in a perfectly flat... Because what you don't want to have happen is... like when you're just doing integration. This is a well-known problem, right? If you're just trying to do integration on a computer, eventually things blow up. Integration of error terms. Tiny, tiny error terms. Because there's always error and it always accumulates. The smaller your error terms, the more likely it is that everything goes bad. And there's tuning when you're solving differential equations, because if you choose a time step that's too small, sometimes things don't work right. If you choose a time step that's too big, eventually things blow up, because you accumulate error, because to solve differential equations, you need to integrate.
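(A tiny Python illustration of that time-step trade-off, not from the show: forward-Euler integration of dx/dt = -x, whose exact solution is e^(-t). The step sizes here are arbitrary examples; watch the numerical answer drift from the exact one as dt grows.)

    import math

    # Integrate dx/dt = -x from t=0 to t=5 with fixed-step forward Euler.
    # Each step adds a little error; the total drift depends on dt.
    def euler_decay(dt, t_end=5.0):
        x, t = 1.0, 0.0
        while t < t_end:
            x += dt * (-x)  # one Euler step
            t += dt
        return x

    for dt in (0.5, 0.1, 0.01):
        print(f"dt={dt}: euler={euler_decay(dt):.5f}, exact={math.exp(-5.0):.5f}")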
Starting point is 00:48:55 So anyway, to numerically follow differential equations... That was just a question I had, because getting the fancy math right is one thing, but even if you get it perfectly, the computer's view of fancy math is not the human view of fancy math. Let me go on with what Tom wrote, since we're using fancy math in the way he did and I don't know that we've defined it well enough. Tom says: For example, I consider tuning a PID empirically to be ad hoc.
Starting point is 00:49:36 Deriving good PID parameters from physics and control theory is successful fancy math. So Bode plots and whatnot. A Kalman filter is good fancy math, but a Kalman filter with an if statement that bails out when some parameter goes out of limit is not so successful. Same with a PID: does it have an ad hoc limit on the integrator term? Then it's not so fancy, unless there's some mathematical analysis behind it. How often is fancy math attempted, and how often does it work cleanly? What are things that cause the fancy math to fail?
Starting point is 00:50:14 Backlash in motors is more than just- Physical reality. Physical reality that is really hard to model and small enough that you can usually just tweak to get it. Yeah, I mean, the big question is modeling. Is modeling useful? Yes. Is modeling accurate?
Starting point is 00:50:31 Modeling is super useful. What is the thing about modeling? All models are wrong, but some are useful. Exactly. Right. So that's where you come into this. I think I look at the things like the bailouts and the if statements a little differently because I don't trust the way computers do numeric work. Those are things I would
Starting point is 00:50:53 expect as kind of de rigueur to have fail-safes. But I understand where he's coming from with like shouldn't the math be analytically worked out such that you don't need those and these filters work within the bounds of the problem or the PID works within the bounds of the problem. But that's fine until your inputs are unexpected, right? Okay, if I'm tuning a PID, I have inputs and outputs. And I take the... So unit analysis to me is very important here. If your inputs are in milliliters per second and your output goal is in... Gallons. and your output goal is in gallons, you need to have these be on the same playing field.
Starting point is 00:51:51 You need to have these be on the same playing field. And each step that you go through, you should be thinking about the terms. Because if you're secretly changing from degrees to radians inside the PID, you're doing it wrong. Oh, sure, sure, yeah. And yet, it's really easy to make that sort of mistake. Yeah. And the Kalman filter too. You need to have every... Well, the Kalman filter especially, because you're taking disparate things sometimes.
Starting point is 00:52:16 Yes. Yeah. And you need them to be compared at the same, with the same... Scales. Scales. Yeah, yeah. And so, that level of math is more on the ad hoc side. That's preparing everything.
Starting point is 00:52:31 I don't think that's ad hoc. That's just. Well, let me finish. Okay. It's preparing everything to be tuned manually. I see. You can tune manually without doing that, but you really are just poking around in space. Once you have all of the terms kind of in the same units, now you look at your high and low points for input and output.
Starting point is 00:52:55 And if they're not translatable, then one of them needs to shift. Like, you need to limit your input. Maybe it's a physical limit on how much can come in. Anyway. And then, for me, I do proportional until it works a little bit and is a little out of control, and then I do a little bit of derivative until it's no longer out of control. But even the derivative... there are different formulations for how the derivative goes, whether you're taking
Starting point is 00:53:32 the derivative of the error, or of something else. Anyway, there are methodologies, and I am happy hand-tuning a PID as long as it is tunable, and as long as I understand that the input range and the output range can work together. Okay. That's it.
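(For reference, a bare-bones PID step like the one being hand-tuned might look like the sketch below. It assumes the error is already in consistent units, per the discussion above, and uses the derivative-of-the-error formulation; the gains are whatever you dial in.)

    # Minimal PID: output = kp*error + ki*integral(error) + kd*d(error)/dt
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            derivative = 0.0
            if self.prev_error is not None:
                derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)

Hand-tuning, as described: start with ki and kd at zero, raise kp until it is a little out of control, then add kd until it settles.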
Starting point is 00:54:23 With this inverted pendulum problem I have going, I have wandered around the parameter space and I cannot get it to work. I have the parameters, and I understand what each one does: what the proportional, integral, and derivative terms do, and, in the inverted pendulum, which ones increase stability and which ones increase speed. And yet, even though my input terms and my output power indicate that there should be a solution, there is no solution that I can find. Am I in a local minimum? Probably. But I have wandered around. So what I really should have done a week ago, and what I should do now, is go back to the videos describing the problem setup and how to find these parameters given that setup, even knowing that the setup won't take into account some of
Starting point is 00:55:14 the problems I have, like being on a ramp. Right. You're in a middle ground where you do need to do some math. I need to find the shape. Even if the model isn't perfect, you do need a model. And I have a model, but I am not connecting them well enough. Yeah. Eventually, Tom used the word ad hockery. I'm not a fan of the ad hoc framing. I think there's empirical, which he said, and there's analytical kind of stuff. And I think there's a place for empirical.
Starting point is 00:55:50 Oh, totally. Yeah. I just feel like ad hoc is a little dismissive. Like we're just winging it, when sometimes it's... Well, I mean, I had an Excel sheet, and I kept pushing the different parameters. You were winging it. Looking at the stability. You were winging it, but there's a continuum. And like a monkey, I would just try things
Starting point is 00:56:11 over and over again. That I would say is ad hoc, and is probably worth... But I knew what it was supposed to do. It just wasn't doing it. Yeah. And which means that I didn't actually know what I was supposed to do. I thought I knew what I was supposed to do.
Starting point is 00:56:23 Well, that's the key, right? The key is, you can be in a regime where you have this out-of-the-box thing and you don't understand it, but people say to use it, and you put stuff into it and turn things, and sometimes it works. That's ad hockery. Or there's: I understand this completely, I know how to set it up, and now that I have it, I need to tune, which is necessarily empirical, because I'm on a real system that has real limitations. So that all makes sense. I think the thing I was saying about the if statement that bails out and stuff... I don't personally think there's a lot of control systems that internally handle
Starting point is 00:57:11 all error conditions. Like if something breaks... if a sensor breaks and, instead of being in the zero-to-one range, says 50,000 or 10 to the eighth, and you use that as the input to your PID or to your Kalman filter, it's not the filter's or the PID's job to fix that. No, but you should have identified that before you called the PID. Yeah, it should. It's not an if statement in the PID. An if statement that bails out if some parameter goes out of limit. Well, he also
Starting point is 00:57:38 mentioned the integrator term. Well, sure. I mean, yeah, I don't know. I would say, by his definition, no, there's not a lot of pure fancy math happening in control systems, by necessity, because you need to deal with the reality of being on a computer. The people who are more analytical do it and have the problem that their math doesn't work in the real world, and the people who are more empirical do it and then run into problems that they can't solve. There's all this stuff like MATLAB and Simulink, where you take a model and it produces code and stuff, and I don't know what kind of error handling you can strap onto that, or whether it's part of that. I've never done that. You get parameters. But things like bad sensors should be handled
Starting point is 00:58:28 before you go into the Kalman filter. Yeah, exactly. But things like integrators... You might need to handle it on the output as well, right? You might get... Not if you have your tuning right? Because you shouldn't be able... No matter what input you have... Do you trust your code?
Starting point is 00:58:44 You shouldn't be able... no matter what input you have, you shouldn't... I mean, a PID filter, you can run it offline. Right. No, what I'm saying is, you have to trust that your input filtering is right, that the combination of input filtering is right. And it seems not that expensive to say, oh, my Kalman filter says turn to the right at 700 degrees per millisecond, and say that that's out of bounds. I mean, we did that with laser control. You know, we had a control system that had integration and stuff,
Starting point is 00:59:19 like not numerical integration, it was literally integrating power and things like that. But we had limits: if this comes out and says this, it's lying, stop. Yes. So maybe I'm confusing what he's saying. That's a safety check. Yeah, that's where my brain is going. This is more like when you have a PID system and your system gets really close to the answer but doesn't quite get to the perfect spot. So basically you're putting your finger on the inside of the control system to nudge it, because it's doing something wrong but you don't know why.
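(A similar sketch of the output-side safety check, the "it's lying, stop" limit. The turn-rate bound is invented for illustration:)

```python
# Invented physical limit for illustration: no sane command exceeds this.
MAX_TURN_RATE = 10.0  # degrees per millisecond

def checked_command(command):
    """Output-side safety check: if the Kalman filter or PID asks for
    something physically absurd (say, 700 deg/ms), it's lying.
    Stop rather than obey it.
    """
    if abs(command) > MAX_TURN_RATE:
        raise RuntimeError(f"command {command} deg/ms out of bounds; stopping")
    return command
```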
Starting point is 00:59:58 No, no. So you have this, it's almost perfect. Yeah. But because your motor takes a little bit of oomph to get started, it doesn't immediately move when you nudge it. Okay. It has to get to some level. Okay.
Starting point is 01:00:15 So you have an integrator saying, well, the error is still here, I'm gonna add up. The error is still here, I'm gonna add up. And once it gets past that level where it can move the motor, because it is right next to where it's supposed to be, it jumps over, because it can't move that small of a space. And now it oscillates back and forth, if it's lucky, and then it stops just a little bit off. And so you have this motor that's supposed to be stopped, and every once in a while it goes, bo bada-dada-dum, boing bada-dada-dum. Okay, so if you make your integrator, if you make it so that your
Starting point is 01:00:52 integral term is less than... less than the amount it takes to turn on the motor, then if you are close enough, it won't turn on the motor. Yeah, right. That's a difficult question. But, I mean, Tom's point of: do we usually empirically solve things or do we do the math? And I'm kind of sad to say I am an empiricist. I would like to do the math more often; I'm just not that good at it. That's the place of engineering that's downstream of theory. You're not inventing new control systems. That's not... you have existing control systems that...
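(A tiny sketch of that integrator-clamping idea, assuming a simple PID integral term; the stiction threshold, deadband, and gain are made-up numbers:)

```python
# Made-up numbers for illustration:
KI = 0.5         # integral gain
STICTION = 0.2   # output level below which the motor won't move
DEADBAND = 0.01  # error small enough to call "close enough"

def update_integral(integral, error, dt):
    """Accumulate the integral term, but stop it from winding up past
    the motor's stiction level when the error is already tiny, so the
    motor doesn't periodically lurch (boing bada-dada-dum) while
    parked almost exactly on target.
    """
    integral += error * dt
    if abs(error) < DEADBAND:
        limit = STICTION / KI  # keep KI * integral below stiction
        integral = max(-limit, min(limit, integral))
    return integral
```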
Starting point is 01:01:33 solve it with Bode plots and all of that. And I think sometimes my life would be easier if I did. I misunderstood the question and there was a few areas where fancy math is happening all the time. And I would just like to point out that fancy math does exist. Oh, yeah. I mean, Kalman filter alone is fancy math. Modeling physical aspects. Fancy math, we had optics in many of the systems I used and you model those on a computer and
Starting point is 01:02:04 they make optics and they work, and there's no putting the finger on the scale, because it's glass, and if you put a finger on the glass, it leaves a smudge. Heat transfer and stuff like that. All that computer science math stuff, like fancy lookup tables, digital design, how all semiconductors are built with simulation, shortest-path finding: all those algorithms are closed-form fancy math, and they don't need any of the stupid numerical things. They're not numerical. They're graph theory and stuff like that. So those are kind of a fun place of fancy math that actually does exist in computers, because, you know, Dijkstra's algorithm for
Starting point is 01:02:52 the shortest path does not do any approximations. It just works. Anyway, I wanted to put in a little plug for some fancy math that does exist. I've stunned you into silence. I just... I want to be better at fancy math. And I have spent time getting better at fancy math, and then, as soon as I don't use it for a little while, it all just falls away. That's not something you can just pick up and put down. I mean, it does make it easier to relearn it a second time and a third time and a twelfth time. I mean, it's really not your job.
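(For the fancy-math plug above: a minimal sketch of Dijkstra's algorithm, exact shortest paths with no approximation and no tuning; the toy graph is made up:)

```python
import heapq

def dijkstra(graph, start):
    """Exact shortest-path distances from start. No approximation,
    no iterate-until-it-looks-converged; it just works.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy graph, made up for illustration:
toy = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
assert dijkstra(toy, "a") == {"a": 0, "b": 1, "c": 3}
```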
Starting point is 01:03:34 So you don't do it all the time. Yeah. And sometimes I wish it was my job, but right now I don't feel like it can be because I'm kind of at a low point. At other places, you'd have someone doing the fancy math and then saying, here's the stuff, turn this into code. And then usually I can talk to that person about what does this mean and how do I do this?
Starting point is 01:03:54 You're doing hard stuff. First of all, you're trying to interpret a patent, which is never written so you can reproduce it. No, it really isn't. Or it's a good patent. I'll try to find a good link to it. You're doing relatively difficult graduate-level physics, mechanics, and then combining that with your control system and the numerical aspects
Starting point is 01:04:20 and the things that are probably not going right for other reasons. Yeah, it's a really difficult problem. So you should be easier on yourself, or ask a carbon-based life form for help. Yeah. Okay. Well, do you want to talk about William's question, or do you want to go get some lunch? No, William's question, like I said, can go for another time, when I have some time
Starting point is 01:04:49 to think about... It's about people who make an impact on your life. Yeah, that's what I'd want to... Everybody wants to think about those people and... Like to rummage through the dusty shelves and the cobwebs of my brain. Yeah, I think that's a show. All right. I apologize to everyone who disagrees with me, but you're wrong and that's your problem.
Starting point is 01:05:17 Thank you to Christopher for co-hosting and producing. Thank you for listening. Thank you to our Patreon listener Slack group for their questions and their excellent discussions. If you'd like to contact us, it is show at embedded.fm, or hit the contact link on the embedded.fm website, which is cleverly disguised as HTTP colon slash slash embedded dot FM. HTTPS.
Starting point is 01:05:48 Fine, whatever, just type it into Google or whatever. DuckDuckGo. Just Google for embedded dot FM. You'll find us. And then there's a contact link. Okay, let's talk about some Winnie the Pooh. There was the poem; we're not gonna do that again. They were having a party, and Kanga was there, and,
Starting point is 01:06:15 okay, so Kanga says, "Just one more jump, Roo. Then we must be going." Rabbit gave Pooh a hurrying-up sort of nudge. "Talking of poetry," said Pooh quickly, "have you ever noticed that tree right over there?" "Where?" said Kanga. "Now, Roo."
Starting point is 01:06:36 "Right over there," said Pooh, pointing behind Kanga's back. "No," said Kanga. "Now jump in, Roo, dear, and we'll go home." "You ought to look at that tree right there," said Rabbit. "Shall I lift you in, Roo?" And he picked Roo up in his paws. "I can see a bird in it from here," said Pooh. "Or is it a fish?"
Starting point is 01:07:00 "You ought to see that bird over there," said Rabbit, "unless it's a fish." "It isn't a fish, it's a bird," said Piglet. "So it is," said Rabbit. "Is it a starling or a blackbird?" said Pooh. "That's the whole question," said Rabbit. "Is it a starling or a blackbird?" And then at last Kanga did turn her head to look. At the moment that her head was turned,
Starting point is 01:07:23 Rabbit said in a loud voice, "In you go, Roo." And in jumped Piglet into Kanga's pocket, and off scampered Rabbit with Roo in his paws, as fast as he could. "Why, where's Rabbit?" said Kanga, turning right around again. "Are you all right, Roo, dear?" Piglet made a squeaky Roo noise from the bottom of Kanga's pocket. "Rabbit had to go away," said Pooh.
Starting point is 01:07:48 "I think he thought of something he had to go and see about suddenly." "And Piglet?" "I think Piglet thought of something at the same time. Suddenly." "Well, we must be going home," said Kanga. "Goodbye, Pooh." And then, in three large jumps, she was gone. Pooh looked after her as she went.
Starting point is 01:08:09 "I wish I could jump like that," he thought. "Some can and some can't. That's how it is."
