Python Bytes - #273 Getting dirty with __eq__(self, other)

Episode Date: March 4, 2022

Topics covered in this episode: Physics Breakthrough as AI Successfully Controls Plasma in Nuclear Fusion Experiment; PEP 680 -- tomllib: Support for Parsing TOML in the Standard Library; What is a generator function?; dirty-equals; Commitizen; Extras; Joke. See the full show notes for this episode on the website at pythonbytes.fm/273

Transcript
Starting point is 00:00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 273, recorded March 1st, 2022. And I'm Brian Okken. I'm Michael Kennedy. Well, welcome, Michael. It's good to have us here. It's great to see you as always. It feels like spring is almost here. It's March. I can't believe it. So, pretty awesome.
Starting point is 00:00:24 Yeah. Fun to be talking Python with you. Yeah. So should we kick it off with your first item? Let's do it. I'm a big fan of science, math, and all those things. And I came across this article because I was reading about science, not because I was reading about Python. But then I thought, oh, there has to be a Python story here. Let's get into it and see if I can track it down. And wow, is it not easy to find. So here's the deal.
Starting point is 00:00:47 I saw an article over on sciencealert.com called Physics Breakthrough as AI Successfully Controls Plasma in a Nuclear Fusion Experiment. That's so cool. That's amazing, right? So let me put a few things together here. Nuclear fusion, not fission. That's the kind of nuclear we want. That is harnessing the sun with no negative effects to like turn hydrogen into helium and so on, right? If we could harness that, that's like free, super easy energy forever. It's
Starting point is 00:01:18 incredible, right? So people have been working on this for a long time. The way that I understand, which is probably pretty, you know, piecemeal that it works is you put some kind of thing, some kind of material like hydrogen or something in the middle, and then you blast it with tons of energy, but then it creates this plasma and you've got to control with lasers and magnets on how you basically keep the pressure high enough in addition to just the heat to actually make the fusion work, right? So there's been some success like, hey, we got fusion to work for a while. It just took more energy than it put out. So, you know, it's not a super great power plant, but it did do the science thing, right? Yeah. So here's the deal. This article says they've used artificial intelligence
Starting point is 00:02:07 to teach it how to make instantaneous or near instantaneous adjustments to the magnetic field and the lasers in order to actually get better results with fusion, right? So take it farther along. And it says, in a joint effort, the Swiss Plasma Center and artificial intelligence research company DeepMind, they used deep reinforcement learning to study the nuances of plasma behavior
Starting point is 00:02:33 and control inside a fusion tokamak. That's the donut-shaped thing where the reaction happens. And they're able to make a bunch of small adjustments really quickly in order to get better results. And it's pretty wild that they did that with AI, isn't it? Yeah. There's definitely Python in there somewhere. You just know it. Exactly. So I'm like, all right, where is this? So I went through and they talk about the findings being in nature, some of the articles that they're referencing. So there's some like deep, as in not super engaging sort of scientific articles, they're referencing. So there's some like deep, as in not super engaging sort of scientific articles, like the traditional academic style of writing that you got to dive
Starting point is 00:03:10 into and then like follow a bunch of links. But eventually in there, you will find that there is some cool science stuff going on and Python is at the heart of it. So it's probably not worth going into too much of the details of how it's actually happening, but it's the Python side of things. But I just thought it was super cool that, look, here's one of the most exciting things happening in energy and for the climate and for all sorts of things.
Starting point is 00:03:36 Yeah. And AI and Python are pushing it forward. That's crazy. And that's what we need for a Mr. Fusion so that we can make flying cars and time traveling cars too. Exactly. I mean, Marty McFly and Doc, they go and they throw their banana peel in the back of the DeLorean, right? You've got to have one of these token mucks to make it roll and got to have Python in your car. Come on. Obviously.
Starting point is 00:03:59 Cool. Obviously. All right. Well, take us back to something more concrete. Well, okay. So I'm pretty excited about this. It's a minor thing, but maybe not too minor. PEP 680 has been accepted standards track for Python 3.11. PEP 680 is TOML lib support. So support for parsing TOML in the standard library. We haven't had it yet.
Starting point is 00:04:24 That's awesome. So we've got JSON. We've got CSV. Why not standard library. We haven't had it yet. That's awesome. So it, we've got Jason, we've got CSV. Why not? Right. XML. Well,
Starting point is 00:04:29 and one of the, and now that we, we've, Pip uses Toml for pyproject.toml, but, and anyway, so we kind of need, I think it'd be cool to have it in the standard library.
Starting point is 00:04:42 I think it's fine to have other outside supports. So what they're doing is, and if people don't, there's some rationale here, but just think it's easier than normal. So TOML is, I like TOML because it's just, I don't know, it's an easy format to read. It's better than any and some other stuff. And for people who don't know, it feels any like, like the dot I and I file style where you've kind of got like section headers and then key value bits. Yeah.
Starting point is 00:05:13 And it doesn't, and often it doesn't like you can use, you can use black and write a pipe project at Tomo file without even really knowing anything about Tomo. So it's pretty straightforward, but we didn't have a way built into the standard library to just use it. So this is this PEP. One of the things there, interesting bits about it is it's only reading.
Starting point is 00:05:36 So it's only adding support for reading Toml. So there's a load and a load S. So you can load a Toml file or you can load a string and that's it. And it outputs a dictionary. So that makes sense. You're just getting a Toml object and turning it into a dictionary so you can use it. But this is built on top of Tomly. So Tomly is being used as the library to basically,
Starting point is 00:06:12 there's an open source project called Tomly, which a lot of projects are using. I think this is the one that PyTest is using and quite a few projects have switched to this. It's really fast. It's nice. But it supports like writing as well. Yeah, writing and code and Dumbass It's really fast. It's nice. But it supports writing as well. Yeah, write and encode
Starting point is 00:06:27 and dump S and all those things. Yeah, right. But that's not the part that's going to get supported. And I think that's fine to just have reading built into. Sure. Some file formats like text and CSV and whatnot,
Starting point is 00:06:43 reading and writing is super common right but these are way more likely to be used as configuration files that drive app startup and like hide secrets you know you put your secrets in there and don't put in git or something like that whatever right those are the kind of uk use cases i i would see and so in that case reading reading seems fine you could always add writing. You just can't take it away if you add it too soon. Right, right. But, but also like, I don't, I don't, and I'm sure there are reasons to, to need to write it.
Starting point is 00:07:13 But I, I don't, you know, it's, it's mostly people write it and computers read it sort of thing. Yeah, exactly. Some kind of editor writes it and then you read it. Yeah. So. Fantastic. All right. Well, cool cool very nice to see that one coming along um alvaro out in the audience hello there says hummel just reached version 1.0 not so long ago so maybe that also has some kind of impact on the willingness like all right the file format is stable. Now we can actually start to support it in the library.
Starting point is 00:07:45 That's true. And we do support Python releases for a long time, so it probably needed to be v1 at least. And Sam also says there's a lot of stylistic choices for how you write TOML files. Like, we need a black for TOML. Not to configure black, but something that then goes against Toml files
Starting point is 00:08:07 and makes them consistent. Yeah, maybe. Yeah, but you could bake that in. All right, what have I got next here? I've got, sticking on the internals here, I want to talk about thread locals in Python. Okay. So last time we had Calvin on and I
Starting point is 00:08:26 spoke about this crazy async running thing that I had built and boy, is it working well. I, like I said, it is truly horrifying to think about what it's doing, but it actually works perfectly. So there it is. But one of the challenges that it has is it doesn't like it if you call back into it again. And I talked about the Nest AsyncIO project last time, which maybe will solve it. I tried those and it wasn't working, but it could have been like at a different iteration before I finally realized like, no, I have to go all in on this threading, like isolate all that execution into one place where we can control it. So maybe it would work, but I just wanted to talk about thread locals in Python, which I thought were pretty easy and pretty interesting.
Starting point is 00:09:13 So I've got this stuff running over there. And one thing that would be nice is each, there's different threads calling into the system to say, schedule some work for me, basically puts it on a queue. The queue runs it on this like controlled loop, and then it sends back the result. The problem is if one function calls that to put in work, and then as part of doing that work, the function itself somewhere deep down, like wraps that around,
Starting point is 00:09:36 it doesn't really like the recursion aspect very much. So what I thought is, well, how do I figure out, well, this thread has running work, and if it calls again, you know, raise an exception and say, like, you need to adjust the way you're calling this library. It's not working right. Instead of just like doing some weird thing. So what I think I might do, and I'm not totally sure it will work perfectly, but the idea is certainly useful for all sorts of things is to use a thread local variable. Now, when I thought about thread local variables, I've used them in other languages
Starting point is 00:10:05 and I had no idea how to do them in Python. It turns out to be incredibly easy. You just say, go to threading, the threading module, and you say local. That becomes like a dynamic class that you can just start assigning values to. So in the example that I'm linking to, it says you get a my data thing,
Starting point is 00:10:21 which is a thread local data blob, whatever. So you could say like my data thing which is a thread local data blob whatever so you could say like uh my data dot x equals one my data dot list equals whatever and then that will store that data but it will store it on a per thread basis so each thread has sees a different value so for example what i could do is say thread you know at the beginning of the call, like I have running work. Yes. At the end, you know, roll that back. And if I ever call in to schedule some work and the thread local says I'm doing, I have active work running. Well, there's that error case that I talked about. And I don't have to do weird things like put different IDs of threads into database, into like a dictionary and then like check that and then lock it. And like all sorts of, I can just say this thread
Starting point is 00:11:04 has like a running state for my little scenario. What do you think? I think that's great. I think it's interesting. Yeah, it is, right? Yeah. And it's, right, not too hard.
Starting point is 00:11:14 Just create one of these little local things, interact with it in a thread and each thread will have basically its own view into that data, which I think is pretty fantastic. So like a thread version namespace thing. Yes, exactly. It's a cool little isolation without doing locks and all sorts of weird stuff that can end up in deadlocks
Starting point is 00:11:34 or slowdowns or other stuff. Anyway, if you've got scenarios where you're doing threading and you're like, it would be really great if I could dedicate some data just to this particular run and not like a global thing, check this out. It's incredibly important to use. Oh, let me pull up one more thing before we move on, Brian. Okay. How about Datadog?
Starting point is 00:11:58 Yes. That's also something else that's extremely easy to use. Yep. Thank you, Datadog, for sponsoring this episode. Datadog is a real-time monitoring platform that unifies metrics, traces, and logs into one tightly integrated platform. Datadog APM empowers developer teams to identify anomalies, resolve issues, and improve application performance.
Starting point is 00:12:21 Begin collecting stack traces, visualize them as flame graphs, and organize them into profile types such as CPU, IO, and more. Teams can search for specific profiles, correlate them with distributed traces, and identify slow or underperforming code for analysis and optimization. Plus, with Datadog's APM live search, you can perform searches across all across the full stream of integrated traces generated by your application over the last 15 minutes. That's cool. Try Datadog APM free with a 14 day free trial and Datadog will send you a free T-shirt. Visit pythonbytes.fm slash Datadog or just click the link in your podcast player show notes to get started. Yes. Thank you, Datadog. I love all the visibility into what's going on. I was just
Starting point is 00:13:10 dealing with some crashes and other issues on something I was trying to roll out. And some libraries conflicting with some other library, they were fighting. And yeah, it's great to be able to just log in and see what's going on. Now, before we move off the thread locals, quick audience question. Sam out there says, it might be better to use context vars if you're also working with an invent loop. As far as I know, context vars
Starting point is 00:13:31 are the evolved version of thread locals that are aware of async too. That's very interesting. I haven't done anything with context vars, but the way async IO works is even though there's a bunch of stuff running from different locations, there's one thread. So thread local is useless for that.
Starting point is 00:13:48 So that's why Sam is suggesting context bars. The side that schedules the work has nothing to do with AsyncIO in my world. So that's why I was thinking thread local. It's a good highlight to say if you're using Async, you may need something different. Absolutely. Yeah. So thanks, Sam, for that. Yeah, so I'm not sure if we've really talked about it much,
Starting point is 00:14:09 but I came across that article from Trey Hunter called What is a Generator Function? And like Python, especially the two-to-three switch, even like dictionary, the items keyword function to three switch, even like a dictionary, the items keyword, you know, function to get all the dictionary elements out. It doesn't return a list anymore. It returns a generator and, um, and maybe it always did. I don't know. Uh, but there's a whole bunch of stuff that used to return lists that now return generators and it kind of, they look, they work great. You stick them in a for loop and you're off to the races but a
Starting point is 00:14:46 lot of people are a little timid at first to try to write their own because it's a yield statement instead of a instead of return and what do you how do you do it and so this is a great article by trey to just say here's what's going on it's not not that complicated. Generally, you just have a, you often might have a for loop within your code. And instead of returning all the items, you one by one yield the items. So trade goes through some of the more deep, some of the details of like how this all works. And it's, it's pretty interesting. It's, it's interesting for people to read through it and understand what kind of what's going on behind the scenes. So what happens is your function that has a yield in it, it will not return the item right away.
Starting point is 00:15:33 When somebody calls it, it returns a generator object. And that generator object has things like next. Mostly that's what we care about. And next returns the next item that you've returned. And then once you run out of items, it raises a stop iteration exception. And that's how it works. But generally, we just don't care about that stuff.
Starting point is 00:15:57 We just throw them in a for loop. But it is interesting to learn some of the details around it. Yeah, they do seem mysterious and tricky, but they're super powerful. The more data that you have, the way better idea it is to not load it all into memory at once. Yeah, and you can do some fun things like chunking.
Starting point is 00:16:16 If you're returning your caller, let's say, and these are fun things to do with this. So let's say you're reading from an API or from a file or from a device or something and um it has you read like a big chunk of things uh like 20 of them or 256 or something like that a whole bunch of data at once but then your caller item your caller really only wants one at a time within your function your generator function you can do fancy stuff like read a whole bunch and then just meter those out and when then that's empty you go and read some more and have intermittent reads and this will save time for especially when you're not you're not reading everything often sometimes the caller will break
Starting point is 00:17:01 and not utilize everything so that's definitely where they're very, they're a lot more efficient on memory too. So if you're, like you said, if it's huge amounts of things, it might be either for memory reasons or for speed reasons. These are great. Yeah. Even computational, like suppose you want a list of Pydantic objects back and you're like reading some massive CSV and picking each row and star star value in there somehow. That's the actual creation of the Pydantic object.
Starting point is 00:17:31 If there was like a million of them, forget memory, like even just the computation is expensive. So if you only want the first 20, like you can only pay the price of initializing the first 20. So there's all sorts of good reasons. Okay. I do want to just say one thing about generators that i wish there was like slightly maybe some kind of behavior
Starting point is 00:17:51 could be added which would be fantastic so generators can't be reused yeah right so if i get a result back from a function i try to and i want to ask a question like were there any items resolved in here and then loop over them if there were like you you kind of broke it right you pulled the first one off and then the next thing you work with is like index one through n rather than zero through n which is is a problem so sometimes you need to turn them to a list it'd be cool if there was like a dot two list on a generator instead of having to call this on it right just like a way as an expression to kind of like i'm calling this and it's sort of a data science flow i want all one expression and you know turn this generator into this other thing that i need to pass along that would be fun yeah so um a question out in the the audience that maybe they they returned that um the dictionary
Starting point is 00:18:41 items and keys returned something different but but Sam Morley says they return special generators, special kinds of generators. So thanks, Sam. Cool, indeed. All right, well, what have I got next? I think I just closed it. Now, would it really be an episode if we didn't talk about Will McGugan in some way or another?
Starting point is 00:19:02 So we got him on deck twice, but we're going to start with just something he recommended to us. That's actually by Sam Colvin, who is the creator of Pydantic. And I'm not sure if you're ready for this, Brian, but this is a little bit dirty. It's called Dirty Equals. And the idea is to abuse the Dunder EQ method, mostly around unit testing, to make test cases and assertions and other things you might want to test more declarative and less imperative. So that all sounds like fun, but how about an example?
Starting point is 00:19:39 So it starts out with a trivial example. It says, okay, from this library, you can import something called is positive. So then you could assert one or like some number and whatever. One equal equal is positive. That's true. That assert passes. Negative two equal equal is positive. Fails.
Starting point is 00:19:57 Okay. How does that strike you, Brian? We're building. These are building blocks. This is like a Lego piece piece not the whole um x wing okay fighter okay but anyway so that's the building block right like take something and instead of saying yes it's exactly equal implement the dunder equal method in the is positive class to like take the value make sure it's a number then check whether it's greater than zero right
Starting point is 00:20:20 that kind of thing i don't know if that includes zero but anyway but then you can get more interesting things. So you could go to a database, and if you do a query against the database, you get, I think in the case that's up there, I think you get a tuple back. It depends on what you set the row factory to be, I suppose. But anyway, you get a tuple back of results. It looks like maybe this is a dictionary anyway. So then you can create a dictionary that has attributes that are like the results you want. They can either be equal or they can be things like this is positive. So in this case, we're doing a query against the database. And then we're looks like there's maybe needs to be be like a first one. Anyway, it says, all right, what we're going to do is we're going to do equal, equal that the ID, so we'll create a dictionary, ID colon is positive int, username colon Sam Colvin.
Starting point is 00:21:15 So that's an actual equality. Like the username has to be Samuel here. Okay. Yeah. And then the avatar is a string that matches a regular expression. That's like a number slash PNG. The settings has to be a JSON thing where inside the settings, it's got some JSON values that you might test for.
Starting point is 00:21:33 And that is created now is now with some level of variation, like some level of precision that you're willing to work with. Right. Because obviously you run the database query and then you get the result, but it's like very near, nearly now, right? It's like the almost equals and float type of stuff. That's pretty cool, right? Do I need to answer?
Starting point is 00:21:57 I mean, I could see the utility. I'm sure your thoughts, yeah. But I don't know. It's the API is a little odd to me but okay yeah i think it's it's definitely an interesting idea it's definitely different um you know pydantic is often about i know it's not pydantic but it's by the creator pydantic is often about um given some data that kind of matches can it be made into that thing and i feel like this kind of testing is in the same vein as what you might get working with pydantic and data yeah all right well it's
Starting point is 00:22:32 definitely it's definitely terse and and uh and useful um so and and i i could totally get used to it if this is uh this is a pretty pretty uh condensed way to compare, to see if everything matches this protocol. Yeah. Yeah. So, Sergey, on the audience, has sort of the alternative perspective. Could be you could just write multiple assert statements. Instead of creating a dictionary that represents everything, you could say, like, get the record back and assert that, you know, get the first value out and assert on it, then get the username out and assert
Starting point is 00:23:07 and get the avatar and assert on it and so on. And it's sort of an intermediate view story where you use the testing libraries, the testing classes, but sort of more explicit. So, right. And one of the reasons why a lot of people, there's a couple of reasons why to not use more than one assert.
Starting point is 00:23:26 Because if you were to have multiple certs, the first one to fail stops the check. It's possible that this will tell you everything that's, that's wrong. Not just the first thing that's wrong. Yes, exactly. And, and then, you know, some people are just opposed to multiple certs per test. It's just for, you know, I don't know. A similar thing. So I have a plugin called PyTest Check, which is just, it uses checks instead of asserts so that you can have multiple checks per test.
Starting point is 00:24:00 But it does come up. So this is interesting. I'll definitely check it out and play with it. Yeah, another benefit of being able to construct one of these like prototypical documents or dictionaries that then represents the declarative behavior or state that you're supposed to be testing for
Starting point is 00:24:18 is you could create one of these and then use it in different locations. Like, okay, when I insert a record and then I get it back out, it should be like this. But also if I call the API and it gives me something back, it should also still pass the same test. Like you could have a different parts of my app. They all need to look like this.
Starting point is 00:24:35 Yeah. As opposed to having a bunch of tests over and over that are effectively the same. And Will is here who recommended this suggests one of the benefits of dirty dirty equals is that PyTest will generate useful diffs from it. Yeah, definitely. PyTest being a reason to use something, I'm on board then. Yeah, sure. Yeah, check it out.
Starting point is 00:24:58 If you do play with it, give us a report how you feel about it. One more question from Sam said Sam Morley PyTest already has something a bit like this with a prox except for it's for floats etc except for a prox is not etc it's just for floats so you can only use a prox with floats
Starting point is 00:25:17 Yeah so we have like approximate now and stuff like that So I'll try it especially you know if Will it, it's got to be good. Exactly. Awesome. All right. What's the final one you got for us here?
Starting point is 00:25:33 Okay. This is more of a question than a, I'm not like saying this is awesome, but I ran across this, actually this, I went, I clicked on a listicle, Mike. I think there's a self-help group for that. Yeah, well, we're definitely prone to clicking on the top listicles. Yeah, so my name is Brian, and I clicked on a listicle. So the listicle was top 10. Where are we at?
Starting point is 00:26:01 It was 10 tools I wish I knew when I started working with Python. And actually, it's a good list. I just knew about most of them as well. So we'll link to it anyway. It's a decent piece of list. It's got the sound of music. It's got Jackie Chan. It's got Office Space.
Starting point is 00:26:15 Come on. This is a pretty solid listicle. Then I got down to number seven and eight, and I'm like, what are these things? I've never heard of them. So Committison and Semantic Release. So the idea, so I tried to commit with this. So Commitison is a thing that you can say, if you install it, you can either brew install it
Starting point is 00:26:37 for your everything, or you can put it in a virtual environment. So that's cool. But it's, you, instead of just committing, you use this to commit and it asks you questions. Right. Instead of typing git space commit, you type cz space commit. on what depending on what you answered if you had a if you had a bug fix or a feature is it a breaking feature did you basically it's trying to it's doing a whole bunch of stuff but it's trying to do these uh conventional uh conventional commits and we've got a link to this too and
Starting point is 00:27:17 and then if you've got all this formatting so it ends up formatting your commit message to a consistent format so that when you're reading the history and stuff, you can do it can take this, um, uh, all this information from these and do some better control your semantic release notes or release. I don't know if it's the release notes or just the release version. I haven't got that far into it, but. Yeah. The commit is an ass. Is this like a change corresponding to semantic versions such that it should be a major change. So it'll like, it looks like it'll increment the version and stuff like that as well yeah yeah um but so uh the in the about uh for committison says command line utility to create commits with your rules and apparently you can you can specify some special rules which is good uh display information about your commits. Bump the version automatically.
Starting point is 00:28:26 And generate a change log. That's cool. That might be helpful. So my question's out to the audience and everybody listening. Have you used something like this? Is it useful? Is there something different than this
Starting point is 00:28:40 that you recommend? And also, what size of a project? Would this make sense for a small or medium project? That's cool. Yeah, let us know on Twitter or at the bottom of the youtube live stream it's probably the best place yep so yeah very cool now before you go on i also have a question out to you you can be the proxy for the audience here okay notice at the bottom it says requirements three six and above uh yeah and python that's not i don't feel like that's very controversial as three six is not even supported anymore right right so this is like every possibly supported version of python 3 this works for would what would you think if i said the requirement is this is python 3 not
Starting point is 00:29:18 python 3 just it requires python 3 knowing that like that means or implying that that means supported shipping real versions of python not python 3 1 right because obviously python 3 1 is no longer supported but neither is 3 5 even like could you say f strings are just in python 3 now without worrying about the version or do you need you still need to say 3 6 plus 3 6. Like, should this be updated to be three, seven? You know what I mean? You kind of have to. You think so? I don't know. I know when I say something is on three, Python 3, actually, I don't even say that anymore.
Starting point is 00:29:55 So, yeah. What do you think? Okay. Well, I used it in the sense like, yeah, you need Python 3 for this, thinking, well, any version that's supported these days and people like well there's older versions that don't support this thing like well you know obviously i'm not talking about the one that was not supported five years ago like at some point yeah python 3 is the the supported version of python i don't know oh that's true yeah okay
Starting point is 00:30:21 that's a bit of a diversion there but i went down that right hole and it on. It's like, I really don't know which way I should go, but I feel like there's, there's a case to be made that just like, when you talk about Python three, you're not talking about old unsupported versions. You're everything. That's like modern three, seven and above should be like an,
Starting point is 00:30:36 an, an alias for Python. I don't know when we were just saying Python three, what we meant was like three, one. So I know we got to get used to that there's no Python 2, really. Yeah. Don't worry about it.
Starting point is 00:30:49 All right. Well, that will definitely bring us to our extras, won't it? Yeah. Yeah. All right. You want me to kick it off since I got my screen up? Yeah, go ahead. All right.
Starting point is 00:30:58 So Will, like I said, he gets two appearances, and also his comment. So thank you for that. And this is in the same vein as what I was just talking about. Like, what is this convention that we want to have, right? So the walrus operator came out in 3.8, and it was kind of an interesting big deal, right? There was a lot of debate around whether or not that should be in the language. Honestly, I think it's a pretty minor thing; it's not a huge deal. But the idea is you can both test a variable and use its value in the same place that you create it.
Starting point is 00:31:32 So instead of saying u equals get_user, then if user is not None, or if user, you could just say if u colon-equals get_user, do the true thing; otherwise, it's not set, right? And so Will is suggesting that we pronounce the walrus operator as "becomes". So x colon-equals 7 reads as "x becomes 7". What do you think? Are you behind this? Okay, so you'd be like, when you're reading your code to yourself? Yes, I guess. How do you say it? Like, if you say the lambda expression, how do you define the variables of the lambda? There are terms around there that make it a little bit hard to say without just saying the syntax, right?
Starting point is 00:32:15 So he's proposing "becomes" as the saying, the way we verbalize the walrus operator. I like it. I'm going to give it a thumbs up. It's interesting, but how is that different from assignment, though? What do you say with assignment? I don't say like "x equals"... I don't know. Yeah, equals. Assign. Becomes works. All right, well, put it out there; people can think about it, and there's a nice
Starting point is 00:32:41 Twitter thread here with lots of comments, so folks can jump in. Or you can just say walrus. Just talk: "x walrus 5". Oh, yeah. Well, what do walruses do? I mean, is there like a cool action that is particular to walruses? Well, there probably is, but it doesn't apply to this.
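For readers following along, here is a runnable sketch of the pattern under discussion; the `get_user` lookup and the `users` dict are made-up examples, not something from the show:

```python
def get_user(users, name):
    # Hypothetical lookup: returns the user dict, or None if missing.
    return users.get(name)

users = {"brian": {"name": "Brian"}, "michael": {"name": "Michael"}}

# Pre-3.8 style: assign first, then test.
u = get_user(users, "brian")
if u is not None:
    print(u["name"])  # prints Brian

# Walrus style (PEP 572, Python 3.8+): bind and test in one
# expression. Reading ":=" as "becomes": "if u becomes get_user(...)".
if u := get_user(users, "michael"):
    print(u["name"])  # prints Michael
```

The walrus version shines most in loops and comprehensions, where it avoids either calling a function twice or splitting the loop condition from the body.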
Starting point is 00:33:00 It's not very colloquial, is it? Is it ex? Yeah. And then John Sheehan out in the audience says, in my brain, I use assigned to, and he must know what's coming because he's up next. Hey, John. So the other thing I want to talk about is,
Starting point is 00:33:19 did you know, I learned through John, that string startswith will take an iterable? It says tuple, but I suspect it might even be any iterable of substrings. And if any of them match, it'll test out to be true. So like "abcdef", you say starts with a tuple: "ab" or "cd" or "ef". I've never used this. I didn't know that that was a thing. I would always just do that as, like, x starts with "ab"
Starting point is 00:33:47 or x starts with "cd" or x starts with "ef". No, you apparently can do that all in one go. What's the 2 for? I have no idea. Oh, okay. I was just thinking that as well. There's a 2, and I don't know what it's for. Yeah, anyway, that's a super quick one,
Starting point is 00:34:04 but I thought it was pretty interesting. So that's all I got. How about you? I just have one thing; we don't need to put it up, but all right, my extra is this book. You have your physical 2.0 book in hand? Yes, I've got... oh yeah, and for the people not watching, I've got a stack of them. It's funny, my daughter uses my Amazon account too. So UPS said, hey, there's a package arriving yesterday. And I said, I didn't order anything. So I told my daughter, hey, you probably have a package showing up. She's like, I didn't order anything. And then this box arrives with five copies of my book, which is great.
Starting point is 00:34:50 That's awesome. Yeah. Yeah. Congratulations. Thanks. Very cool. Yeah. We abuse our Amazon account badly.
Starting point is 00:34:56 Like, there's a lot of people that log into the Amazon account. We end up getting stuff shipped to the wrong places because somebody shipped it to their house last time, and then we just hit reorder again. And like, why do you have our shampoo? I don't know. Yeah. So John adds that the 2 is the starting position. Yeah, I figured it had something to do with that; I wasn't sure if it was how many characters to compare, or whatever. Well, I also didn't know that you could pass a starting position for starts with. That's cool.
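For the curious, both behaviors mentioned here are easy to verify in a REPL. One note that settles the on-air speculation: in CPython the multi-prefix form requires a tuple specifically, and a list raises TypeError.

```python
s = "abcdef"

# Multiple prefixes in one call: True if any of them match.
print(s.startswith(("ab", "cd", "ef")))  # True, "ab" matches

# It must really be a tuple, not just any iterable: a list is rejected.
try:
    s.startswith(["ab", "cd"])
except TypeError:
    print("prefixes must be given as a tuple")

# Optional second argument: the starting position for the test.
print(s.startswith("cd", 2))  # True, s[2:] is "cdef"
print(s.startswith("ab", 2))  # False
```

There is also an optional third argument, an end position, and `str.endswith` mirrors all of this for suffixes.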
Starting point is 00:35:26 Yeah, there's a lot going on here. Almost starts with. Yeah, nearly starts with. Yeah, what's the right way? So I want to close this out with a joke as always, but there's the joke we talked about a while ago, where Sebastian Ramirez, creator of FastAPI, saw an ad hiring a FastAPI developer. And he said, oh, it looks like I can't apply for this job. It requires four years of experience with FastAPI, but I can't possibly have that because I only created it two years ago, right?
Starting point is 00:35:57 Yeah. So it's a little bit in that vein. So here we have somebody tweeting, and it's a conversation with a recruiter. The recruiter says, do you have a CS background? Yes, absolutely, my CS background: and this is a screenshot from the game Counter-Strike, which is often referred to as just CS. Yeah, of course I've got a CS background, are you kidding me? That's pretty good. I love it. Yeah, that's a good one. Well, just a question, though: if you did FastAPI, instead of eight hours a day, if you did it 16 hours a day for two years, would
Starting point is 00:36:35 that constitute, you know, four years? That probably is about the same amount of experience, yeah. So what a slacker that Sebastian is. Does he have to eat or something? Does he have a family? What's going on? Come on. Well, always fun hanging out with you and talking Python.
Starting point is 00:36:53 So, you bet. And thanks to everybody that listens to it on their podcast player or watches us on YouTube. Yeah, absolutely. See you later.
Starting point is 00:37:03 Bye.
