Python Bytes - #252 Jupyter is now a desktop app!

Episode Date: September 29, 2021

Topics covered in this episode: Changing themes to DIY, SQLFluff, JupyterLab Desktop, Requests Cache, pypi-rename, Django 4 coming with Redis Adapter, PEP 612, Extras, Joke. See the full show notes for this episode on the website at pythonbytes.fm/252

Transcript
Starting point is 00:00:00 Hey there, thanks for listening. Before we jump into this episode, I just want to remind you that this episode is brought to you by us over at TalkPython Training and Brian through his PyTest book. So if you want to get hands-on and learn something with Python, be sure to consider our courses over at TalkPython Training.
Starting point is 00:00:17 Visit them via pythonbytes.fm slash courses. And if you're looking to do testing and get better with pytest, check out Brian's book at pythonbytes.fm slash pytest. Enjoy the episode. Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 252, recorded September 29th, 2021. I'm Michael Kennedy. And I'm Brian Okken. And I'm Ethan Swan.
Starting point is 00:00:42 Ethan, welcome to Python Bytes. You've been over on TalkPython, where you talked about some really cool data science stuff. And now you're over here. So thanks for being here. Tell people a bit about yourself. Yeah, I was on TalkPython 236. So it was a while ago, but that was really cool. I work for a company called 84.51°.
Starting point is 00:00:59 It's the data science subsidiary of Kroger. And I'm a data scientist, but basically what I do is build tools, mostly which are in Python, for our data science subsidiary of Kroger. And I'm a data scientist, but basically what I do is build tools, mostly which are in Python for our data science department. So we have like 250 data scientists, pretty large department. And I build like packages and some dashboard sort of things, just like various technology helper stuff for data science. Yeah. It sounds really fun. And you all run what we were talking about before we hit record, one of the, probably one of the larger data science groups out there right i think of data science as being like there's a couple of folks that are embedded with like a marketing team or a product team or the software development team
Starting point is 00:01:33 a lot of times but you are a properly large group of data scientists i mean in theory that's what the whole company does um so it's a very cool experience and often i think that's nice for the team i'm on because you don't usually get so many customers on internal tools. You know, we're building stuff for literally hundreds of people to use. And it's a little bit like releasing software externally. So it's, yeah, it's a lot of fun. Yeah.
Starting point is 00:01:56 Fantastic. All right. Well, we're definitely looking forward to having your insights here for the show. Now, Brian, I do want to start off here. I want to talk about some deck staining. Thanks. Yeah. So because those of us who are very attentive on Twitter saw that Brian kindly responded to somebody who sent us a message and said, oh, I see you were talking about pallets. We should
Starting point is 00:02:17 also talk about deck stain and other DIY project resources. And maybe you could put that stupid article on your blog. You're like, we're not a blog. We talk about could put that stupid article on your blog. You're like, we're not a blog. We talk about pallets because it's on Flask. And then Twitter decided, oh, you are now classified under the home improvement category. So are we changing our theme or what?
Starting point is 00:02:35 Apparently just me. That's most people. I've got a few new followers now and most of them are people that like to make things. Well, it's fun to make things as well, but maybe we'll talk more about SQL and stuff like that. What do you think? Yeah, so this was sent to us by Dave Cochessa.
Starting point is 00:02:53 Thanks, Dave. I want to talk about SQL Fluff. So I had never heard of this, but it looks pretty cool. So SQL Fluff is a Python package that is basically uh, basically a linter for SQL. So that's how interesting I haven't really thought about linting SQL code, but it makes perfect sense. Yeah. Well, I mean, there, there is like, I don't really think about it too much either, but there's like things like, should you capitalize all the keywords? And some
Starting point is 00:03:22 people just like it like that. So there is style. there's like both style guides around uh sequel i assume there's style guides um and this lets you help helps you enforce it not not just style guides but just you know looking for mistakes and things um the uh the page looks really slick for i like the logo the fluff logo but um but the one of the things that's great about it is the documentation. So the documentation looks wonderful. And one of the neat things about this is there's different rules or different dialects set up so that it treats different things like ANSI and Postgres and MySQL different. And I'm not sure if these are style differences or what they're doing different, but it's kind of interesting that there is a difference there. Well, one of the things that comes to mind for me, if this reports errors, and I suspect it
Starting point is 00:04:20 probably does, one of the things that comes to mind for me is if you're using like Microsoft SQL Server and you're using a parameterized query because you don't want little Bobby tables in your school, you would say at parameter name, whereas in with like MySQL or Oracle, it'd be like a question mark, right? And I think one is illegal in the other syntax. So at least in that regard, I think,
Starting point is 00:04:40 I don't know for sure it's illegal, but I'm pretty sure like it may be, and it could be that you've got to say what type of parameterized specifications and other extensions are valid. And even I think there's some keywords, right? Aren't there some different keywords in some cases? So it would make sense to have to know the dialect. Yeah. Yeah.
Starting point is 00:04:57 And also, like you were saying, if there really are big differences or even minor differences, there might be some queries that you don't run all the time. And so you're not sure if you switch databases that they might be broken if you're trying to port. So kind of cool. There's a list of, so it has rules, like a lot of linters, rules for failure. And I like the rules page because it talks about the rules, but also shows you the anti-pattern and best practice.
Starting point is 00:05:23 I kind of like that style. I don't know if I like the terms anti-pattern and I practice. I kind of like that style. I don't know if I like the terms anti-pattern and I really don't like the term best practice, but nonetheless, what it's looking for and what you should do different is a good thing to have in the documentation. It's pretty cool. Yeah, I do like the anti-pattern aspect. Maybe pattern. You're going to have an anti-pattern to have the pattern. I don't know. I'm not sure. One of the things that's in the documentation, I can't remember where there is,
Starting point is 00:05:51 that people should be aware of. Supposedly, this even has like 1982. That's interesting. Stars. It's still in alpha phase. So there's a note here that says expect significant changes. So just be aware of that. Cool. It doesn't seem major because you're not doing runtime behavior on it, right?
Starting point is 00:06:10 It's a thing you run against your code and then you look at the output. I mean, maybe it's in your CI system or something. Yeah, but it's not in production. Right. Like you won't get called on a weekend because the site went down because this thing got automatically updated or something to that effect. I guess it could have broken your queries, but you know, whatever. It's good to have an audience because we did have Paul from the chat say, Ethan's correct. There are different keywords between different SQL dialects.
Starting point is 00:06:42 Yeah. We use, oh, sorry, Michael. No, go ahead. We use a lot of SQL as I Michael. No, go ahead. Go ahead. We use a lot of SQL as I would assume most data science shops do. But one, what this made me think of was one contentious topic
Starting point is 00:06:53 in people who write a lot of SQLs, especially when you have a bunch of column names and you're selecting regularly, you know, five to 10 columns, the comma first, I don't know if you've seen the approach where you do a new line, comma, column, comma, column. So it lines up really nicely
Starting point is 00:07:07 and it makes it easier to delete things. That's a very common thing that people feel strongly about. So I could imagine Linter as being very handy to at least enforce one style throughout the company because we don't have that. Yeah, nice. And then I was gonna add that Pam Phil Roy
Starting point is 00:07:22 on the audience says, it would be cool if there was a plugin for dBeaver. And Sam Morley asks, I wonder if it checks if inputs are sanitized. I don't know if it should. But Paul also asks if it validates for syntactical correctness beyond just style. He does say that it catches errors in bad SQL before it hits your database. So I'm going to go with yes. That's pretty cool.
Starting point is 00:07:44 Yeah. Ethan, I was thinking as I was watching Brian present this, that you probably do way more SQL than I do, even though I run in production websites that are backed by databases, not just because there's no SQL, but because I use ORMs and the data structure doesn't change.
Starting point is 00:08:00 But for data science, you're kind of in a more exploratory mode, right? Yeah, I think it's pretty interesting because, you know, like listening to this podcast, people talk about using ORMs a lot. But in data science, you don't really think of data in that relational model as much. I mean, you can, but like thinking of rows as objects is really not common. So I feel like my relationship with databases is totally different. My first couple of years, I was mostly writing SQL, but it was literally just asking questions
Starting point is 00:08:27 for analyses, which is such a different use case than what people use it for, for web development. Right. Yeah, absolutely. It's super different, super different. But if you were to explore data, wouldn't it be nice to have a desktop application instead of a web browser for doing so? So Jupyter, JupyterLab have got to be the most popular way
Starting point is 00:08:46 that people interact with data on the data science side. It's certainly an exploration stage anyway. So super big news, that is old news is new again, but better. JupyterLab desktop app is a thing. I can download JupyterLab. It's an icon on my dock or on my taskbar. I click it, it runs like an app, but inside of it is Jupyter Notebook,
Starting point is 00:09:07 like the whole Jupyter lab with terminal and Python consoles and kernels and all those things. That's cool. That's very nice. Yeah. Have you played with this yet, Ethan? No, so I don't know how common this is, but I think for us, at least,
Starting point is 00:09:22 mostly people aren't working on their local machines. They're really connecting to a session of Python on a remote server. So mostly what we do is we fire up Jupyter on a remote server. And then from our laptops, we hit that URL to actually look at the notebook. So I'm not sure a desktop app would work as well for us, although maybe it's definitely interesting. And I wonder if there's some native features of desktop apps that are available that are going to be a reason to switch. Well, what I would say right now is it's a really nice self-contained thing.
Starting point is 00:09:52 So I'll just read the description real quick. JupyterLab app is a cross-platform standalone application distribution of JupyterLab. It is a self-contained desktop application which bundles the Python environment and several popular libraries to use in scientific computing, like surely Pandas and NumPy and those kinds of things. So what you get is you get just an app that's ready to go, that you could just have somebody install
Starting point is 00:10:16 and you can say, here, open this notebook and run it. And long as you're using core libraries and stuff like that, you don't have to think, okay, go to the terminal, set up the environment and then type jupiter lab oh you need to activate the kernel and you got to do this and that you know it's just like it's it's a real simple here's here's the thing no nonsense type of app yes and you lost a whole bunch of people with just open the command line yeah that's so true
Starting point is 00:10:41 yeah yeah so you don't have to hear right you? You just, it's on your dock. You click it just like you would with Word or Firefox or whatever. And it's, you're there. It starts and manages the Jupyter server in the background. There may be a whole host of command line arguments you can give it to say like run, but use that server and other things along those lines or run and use this conda environment.
Starting point is 00:11:04 I didn't see any of those. And so from what I can tell is it's kind of a local version of Jupyter. So it might be super interesting for you all in your workflow. One place where I think this would be really handy is teaching beginners. So I actually teach some Python, especially for data science classes at the University of Cincinnati. And one thing that regularly is really confusing to people is that you can't double click on a notebook file and have it open because that's such a typical experience of files on a computer. You double click and there's an application that opens that file.
Starting point is 00:11:34 Oh, interesting. And there are workarounds. If you have Anaconda Navigator, it kind of works, although it's a little hitchy. But I would assume that if you have a desktop app, you'd be able to register that with the operating system, whatever that process is to say like when I click on dot I pi and B's open it. Because I find I have to teach students, no startup Jupyter, open your browser, navigate to that file in the browser. Were you in the wrong folder in the terminal when you ran Jupyter? Which is, well, sorry, you're now locked out of that tree, that part of the tree of the folders.
Starting point is 00:12:05 And then suddenly you're having a conversation about paths. Yeah, you go down. It really is like something I don't like to deal with. So maybe this is what I should recommend for people when I teach. What I would recommend is just check it out and try. So I do have a bit of a comment here from Dean out in the audience.
Starting point is 00:12:20 I like the concept of JupyterLab app, but I'm afraid it'll be a VE&V, virtual environment nightmare. So what I found interesting is it discovered, you know, when you're creating kernels for Jupiter, you have to run a command. I always forget it and always have to duck, duck, go or search this to figure out how to do it again. But I have to get the command to say, create this environment and then register that as so Jupyter finds that conda environment, that VNV, right? It's ipy kernel install. I have to do this all the time. Yes, exactly. And I know that it's basically that, but the exact command I always forget.
Starting point is 00:12:59 So that command, it seems like it picked up the ones that I had run previously for standalone terminal JupyterLab. So the virtual environment story is the same as Jupyter itself without that. I think all we're getting here is we're getting the libraries plus Python plus the server starting all bundled together. And it's basically the same as if you just run it on the command prompt. I think as long as, was it Dean? As long as Dean doesn't want to be starting Jupyter from the virtual environment, it should be fine. Like when you said, Michael, about the kernels,
Starting point is 00:13:32 that's the much more, I recommend people do it that way. Because some people do like to just install Jupyter in whatever environment they work in and launch it there. But I have a hard time imagining how that would work in this case. Yeah, I do as well. And Dean makes the point that once you have to go and register all that kind of stuff, like when you're down in the terminal doing this, you've kind of lost those same people and that may well be the case. But I can see, you know, this is sort of a first version of this. I can see that those are some of the desktop things it could add, right? It could add a settings section where you have a dialogue for managing these things
Starting point is 00:14:04 and creating new ones and so on. So it could be pretty neat. Definitely something to watch. All right. Before we move on, Paul out in the audience has a quick question for you, Ethan, a tangential one. Python has some really great SAST tools like Bandit, but I'm not able to find good options for R. And I know that you live in a world that does both R and Python. Yeah. Do you have any thoughts on this? I have no ideas. I'm going to come off as a fraud,
Starting point is 00:14:30 but I don't know what SAST is. I have to admit like what I do, I know I said I'm a data scientist, but in some ways that's nominal. Like really a lot of what I do is software development for the data scientists. Data scientists are your customer in a sense, or your, your target user. Yeah. Yeah. So I think a lot of what I hear from users is that there are certain measurement tools and
Starting point is 00:14:52 certain statistical tools that are available in R that take longer to get to Python. So I wouldn't be surprised if that really is what's happening here, but I don't personally have any suggestions. Yeah. Okay. Yeah. So bandit is um, is like a tool that will scan for known security vulnerabilities, like leaving debug settings on and Django stuff like that. I was wondering if that was okay. Then, then that I also don't know. That's a little different than what I was imagining. Yeah. Awesome. All right. Well, since you got the floor, tell us about your first item. Sure. So, um, I Sure. So I found this requests cache package in a newsletter recently. And this might be a little bit of a shorter one because unfortunately,
Starting point is 00:15:31 I haven't had a reason to use it yet. But basically, what this does, scrolling down here, is you can instantiate sessions just like you would with the traditional requests library. So probably request is one of the most commonly used Python packages, I would guess. For anybody who's not familiar, you use it to make HTTP requests, which is basically to bring anything back. And the tagline, I think, is HTTP for humans. But it's just known for being easy to use and you can access the internet. But one thing that I have found is that especially if I'm testing something in an interactive way, not mocking, but I really want to see if my code pulls back what I expect, sometimes I rerun the same request
Starting point is 00:16:10 over and over. And I say, go get this, go get this, go get this. Often the same data. And sometimes that data is large. And that takes a really long time. So requests cache is a way of creating a session object that looks and acts the same. But when you call a get or a post request on the same URL with the same data, what you get back is actually just the cached version of that data. So you're not waiting every time. The first time you incur the network latency
Starting point is 00:16:37 and if the server has to do anything to like compute the data or if it's enough data that it takes some time to get to you, you wait for that. But the second time everything runs instantly, which is really a big advantage. So I've done some things with web scraping where I'm building some kind of, I want to build like a function that pulls some things down and makes, uh, or pull some things out of that, but just waiting every time to run the function for it to pull from several different pages and,
Starting point is 00:17:01 you know, do some computation on that actually makes it pretty slow. But if you were able to cache it like this, that'd be a lot faster. Yeah, this is nice. I love the fact that it's just a stand in replacement for the request session itself. Yeah. And if you scroll down a little more, it actually shows a way to do that with the regular requests library. And this actually scares me a little bit. This is kind of kind of magical. What's going on here? You just run a one-liner with requests cache, and then suddenly the requests library itself works differently. So I wonder if that's some monkey patching or what's going on. It probably is. But it is really slick. So I would imagine I'll have a reason to use this soon,
Starting point is 00:17:37 but I haven't tested that yet. It does offer a lot of configuration options. And one thing I thought was a good idea to look at is an expiration date uh and that's sort of like when you invalidate the cache and actually pull again because you maybe should trust that the website is sending you all the same stuff today but if you rerun your code in a week make sure that it still responds the same way so it's got some nice options like that i really that's interesting so you could use it not even just for testing it could be for actual data but you know it's not getting updated very often. Yeah, for large data is what I was imagining.
Starting point is 00:18:10 So, yeah, like I said, there's been some times where I've like pulled things from APIs where they send back a lot of data and you don't want to be waiting for that. Yeah. Or even you just want to make sure that multiple calls to it are getting the same data, even if it does change. That's true. Yeah. So keep consistency. Interesting. This reminded me a little bit of the, I don't know if people are familiar with the at cache
Starting point is 00:18:31 or LRU cache. It used to be, and now there's a new one just called at cache in the func tools module built into Python. And that's very, very handy once you know it's there, because often you have a function that you don't want to recompute the work for. And this is almost like somebody rewrote requests with cache in it, which is pretty cool. Yeah, it's got a lot of nice features.
Starting point is 00:18:49 You know, I think a question from a handful out in the audience, can it cache to Redis? Because production and memory production caching, you could blow it up, right? Blow up the memory. So a couple of things that stood out to me that were interesting there was, yeah, you could throw a Functools LRU cache decorator onto an expensive thing, which is fine, but that's in memory, right?
Starting point is 00:19:11 And plus things have to be hashable and whatnot. But you could do that. But it's in memory. And a lot of times if you have scale out as you do on web apps, like in production, as in Brian was talking about, you have web farms, like five or 10 copies of micro whiskey or something running. So then there's still five times you got to do it before it really gets cached. And then also it goes to SQLite. So it gets stored to disk, right? So it's not even in memory, it's on disk. So like you said, there's other backends as well. But I think having just by default going to a SQLite file with an possible expiration means you could just turn this on and leave it. Expire after
Starting point is 00:19:49 a day, go. Tell us about the backends. There's more than just SQLite. Yeah, it does seem like you have some options. I mean, like I said, I haven't had a reason to use this, so I haven't toyed around with all these, but the way this is documented leads me to believe that it really is just a drop-in replacement that you can configure what you want to use as your backend. And I do wonder, so yeah, what you were saying, Michael, about having multiple instances, I do wonder how that would work. Would it check to see if any of the instances had cached this yet? Would it like proactively go reach out to the cache or? Yeah. Well, I think if you have the memory one, it's going to be a hassle, right? Like one of the options is memory. But all the other ones,
Starting point is 00:20:23 file system, gridFS, Redis, SQLite, those are all support, you know, concurrency. Yeah, exactly. So then it will scale across process seamlessly.
Starting point is 00:20:33 Yeah. So that could be actually really helpful for something like that where you have a distributed set of workers. Yeah. Yeah, for sure.
Starting point is 00:20:40 Let's see some fun stuff about your monkey patching comment. Dean says, monkey patching is like having a real monkey. It's very cool when other people have it, but having it in my house is scary. And yeah, Sam just, uh, has a too much experience at the zoo, I think, um, with that as well. So yeah, monkey patching is a little sketch. Oh, nice.
Starting point is 00:20:59 All right, Brian, you're up next. Okay. What do we got next? Um, I, so I, I did something I did something kind of dumb the other day. So I went ahead and I needed, I pushed a new package out on PyPI. Really, I was just trying to remember how to, the whole process,
Starting point is 00:21:20 because I wanted to just remind myself of like, if I have something new, something cool I wanted to share, how do I, if I have something new, something cool, I wanted to share, how do I get it out there to PyPI? So I was walking through that process and I was doing it for a plugin. You did your own typo squatting. Apparently. So I published PyTest Slow. And then, who was it? Brian Skin said, cool, but maybe PyTest Skip Slow would be better. And I'm like, oh man, that is a better name. Cause that's what it does. It skips the slow tests by default.
Starting point is 00:21:52 So, and this is totally lifted from the PyTest documentation about, they have this example, but nobody's written a plugin for it. So I did this, it's a little tiny thing, but so I renamed it, but how do you rename it? So I so I renamed it. But how do you rename it? So I went out and searched. So how do you rename something in PyPI? You can't really do it, but you can create another one. And then so this is nice. Well, who was it?
Starting point is 00:22:16 Simon Willison wrote this up. It's a PyPI renamed cookie cutter template. And I didn't actually use the template, but I did use these steps. So the steps really are create a renamed version of the package, which I did, then publish it to PyPI under the new name and create a final release for the old name that points to the new one and depends on it and have dependencies so that if somebody installed the old one, they'll really get the new one. It sounds more complicated than it is. It's just a few steps,
Starting point is 00:22:47 but there's a cookie cutter you can use. The cookie cutter uses setup tools and I didn't want to do that. But so I used, I did basically copied the entire thing and then he's got a demo. So if you look at it, so if you go to the old version, it'll just have a thing that says, hey, I'm going to the new one now.
Starting point is 00:23:08 So I did that. And it was neat. I really appreciate the steps. And it's all good. Yeah, that's cool. You can also use it for aliases. Like you can install BS4 or Beautiful Soup 4, right? And it's kind of the same.
Starting point is 00:23:24 Oh, is that how they do that? I'm guessing. I don't know. But's kind of the same oh is that how is that how they do that i i'm guessing i don't know but it sounds like the same i didn't know that so now so now when i go to pipe this if you go to the old one it just shows it's now a new name go to the other one instead so but if i install the old one it it kind of just pulls in the new one yes yeah nice yeah very cool yeah um brian you were you were refreshing on pi pi but i actually just pushed my first ever package to pi pi a couple weeks ago and so that was you know a bit of a trial but i was amazed at how straightforward it is the documentation is excellent it really is is pretty seamless actually for somebody who's never done it before so
Starting point is 00:23:58 who knows hopefully i don't make any mistakes on the one package i have and need to read yeah but the immutability of it is a little scary but yeah that's not too bad for me the hard part was just understanding that it really was pretty simple um and then also uh getting the hashes right so you have to like you have to get like uh you know signatures and stuff to make sure that you can push to the pi pi correctly so yeah they but even the documentation there it's a little intimidating but it actually turned out to be only a few minutes of work. So that was, that was pretty nice. Good for them. I guess PyPI is the people to praise for that. Yeah. So what was your package? Oh, I, uh, it, it's called pre-mark. It's, um, it's a spinoff of a JavaScript library for making
Starting point is 00:24:40 slides. And I just make a lot of slides for teaching. And I actually found an existing package by man. I want to, I mean, yeah, here it is. It is not ready. That's why it's a release candidate. Um, but I, uh, I, yeah, I based it on this existing package by at Tyler Dave on GitHub, uh, and talk to him a little bit about it. He had already built a really lightweight tool and I just expanded on it, but I like to write my slides in markdown which is really what this is for you write your slides markdown in a bunch of different files it stitches them together and creates a what's called remark js presentation uh so i use this for my own teaching nice i'll check it out but it really is largely a sample project to just like learn how to use pi pi and things like that okay docs
Starting point is 00:25:22 yeah very cool all right up, we have caching. Oh, wait, we just talked about caching. No, I have more caching. So Django, I have two pieces of news on Django. This one comes from Carlton Gibson, one of the Django guys, and also one of the hosts at Django Chat, the podcast. So they are adding a Redis cache backend to Django.
Starting point is 00:25:47 So traditionally, Django has shipped with Memcache, Memcache D, that cache backend with multiple implementations, I think even. So you can go there like Django has an ORM, it can talk to stuff. So it has a cache backend as well. And it could talk to Memcache, but it couldn't talk to Redis.
Starting point is 00:26:06 And they found that the vast majority of people are using Redis. And they said, well, why don't we have a backend for it? Well, guess what? It's going to. So this was merged. And this whole conversation here around the PR and the issue is pretty interesting. So it starts out and says, this PR aims to add support for Redis to be used as a caching backend with Django, as Redis is the most popular caching backend. Adding it to Django.core.cache module would be
Starting point is 00:26:33 a great addition for developers who previously had to rely on third-party packages. And check out how they've got this little checklist in progress. These are the things for this PR to come along and work. So create the RedisCache class, do a pickle serializer, et cetera, et cetera, waiting for this other task. Here's some open ended documentation. So I don't think I've seen this really before, like this project tracking in the PR. That's really cool. Yeah, I do too. The other thing to note that this came in on May 23rd and there's a large conversation. If you go there, there's 30 pages of conversation about it. And you can see it evolving.
Starting point is 00:27:10 Like, okay, we finally got the test pass and we finally got it implemented. Now let's move on to the documentation. Now, et cetera, et cetera. And then finally, boom, September 15th. That's three, three and a half months, something like that. It's closed. So you can actually sort of track what the Django team is doing for adding features, like core important features to Django. It's always so interesting to watch open source communities like this, especially on
Starting point is 00:27:33 somewhat contentious issues where people disagree in how they manage these things. I think it's really impressive because a lot of teams that even meet in person regularly and are small teams still struggle with that kind of stuff. But these huge open source projects manage it. Somehow they implemented the feature at the end. So pretty impressive. Yeah, absolutely.
Starting point is 00:27:50 It's very impressive. Also, I said this was from Carlton. He participated a lot. I'm not 100% sure that he was the originator. This might be Daniyal Abbasi. So sorry if I misattributed credit there, but for whoever did this,
Starting point is 00:28:07 the original issue, I think Carlton had put up. So I'm not sure who was really sort of the initiator there, but I think it's cool. And it's also neat how out in the open this whole thing is. Yeah.
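For reference, wiring the new backend up in settings.py should look roughly like this (the dotted path comes from the PR discussion; confirm it against the Django 4.0 release notes before relying on it):

```python
# Sketch of settings.py with the new built-in Redis cache backend.
# The backend path and URL scheme are assumptions based on the PR;
# double-check them against the release notes for your Django version.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379",
    }
}
```

Until this ships, the third-party django-redis package covers the same ground.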
Starting point is 00:28:17 Putting the open in open source. That's right. Hi, Brian, what you got? It's me again. Are we done with our things? No, I think I got one more. Oh no, sorry, I was for some reason in the wrong order. Yes, Ethan, you're up next. Sorry. Totally fine. So yeah, I wanted to highlight PEP 612. I happened upon this, I forget, there was some other PEP I was looking at and they linked off to this one.
Starting point is 00:28:44 But a little bit of background: a PEP is a Python Enhancement Proposal. It's basically how ideas are proposed in terms of what to do with Python as a community or as a language. And I recently have been really kind of diving into type hinting Python. So there's a surprising number of PEPs about type hinting. And what this one does is something I guess I didn't really realize I needed. It was a bit of an annoyance, but I didn't realize there was a fix coming. Basically, what it comes down to is quite often you write functions that take in a function and return another function. So there's this example, and, where's the first case where they use it? I think here, ParamSpec. I'll find it while I talk about this.
Starting point is 00:29:27 But basically- A lot of decorator time, yeah. Yeah, what you do with decorators is you write functions that take in other functions and return a function that has the same signature, which is to say it takes in the same parameters of the same types and returns the same return type. It may have some other modifications of the function,
Starting point is 00:29:44 but that's very frequent. And so sometimes what you want to say is: my decorator, if I want to type the decorator, say what types of things it takes in, it takes in something that is essentially a generic function type. Any kind of function is fine, that takes in any parameters and returns any return type, as long as it returns the same thing. So it's like generics, which you would do with TypeVars. But in this case, you create something called a ParamSpec, and then you pass that as the... Oh, man, I lost it, where it is in here. Oh, here we go. This is what I wanted. So you pass it as the type of the callable when you type the function that's taken in, and then you say you're returning a callable with the same parameter specification. This P is a parameter specification.
Starting point is 00:30:30 And you make essentially your callables generic on both this parameter specification and on the return value. So I know there's a lot to that. And I think for people who are typing everything every day, maybe this doesn't seem terribly pertinent. What I do, I said, I write a lot of Python packages for people to use.
Starting point is 00:30:49 And it's important, both for quality control and so people know what the return values are and what they should pass into functions, to have a lot of typing. But really, what this got me thinking about a little bit is just that the Python typing ecosystem is still really evolving. Like, for somebody who's not super close to following it, it appears that, like, this is how Python works now. And maybe it's always been this way, but it really hasn't. And there's a lot of holes in how it works. There was no way to do this before, and this isn't finished yet. This is a PEP, but it isn't implemented. And so right now you don't have a way to do typing for this particular feature.
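To make the ParamSpec discussion concrete, here's a minimal sketch of a signature-preserving decorator, assuming Python 3.10+ (or the typing_extensions backport on older versions); the decorator name is made up for illustration:

```python
import functools
from typing import Callable, TypeVar

try:
    from typing import ParamSpec             # Python 3.10+
except ImportError:
    from typing_extensions import ParamSpec  # backport for older Pythons

P = ParamSpec("P")
R = TypeVar("R")

def log_calls(func: Callable[P, R]) -> Callable[P, R]:
    # Thanks to ParamSpec, type checkers know the wrapper takes
    # exactly the same parameters (P) and returns the same type (R).
    @functools.wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(x: int, y: int) -> int:
    return x + y

print(add(2, 3))  # prints "calling add", then 5
```

Without ParamSpec, the usual fallback was `Callable[..., R]`, which throws away all the parameter information.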
Starting point is 00:31:23 And that, yeah. Flowing type information through different things, that is something we haven't done a lot of in Python. But as you called out, generics and templates, that's, like, all you do. That's the bread and butter of those things. Yeah, and it's the same idea, but features that aren't there yet. So it's just kind of interesting to remember that this stuff is still being added. Keeping an eye on when this stuff comes in can really make things easier. And in the meantime, don't lose too much sleep not being able to type certain things.
Starting point is 00:31:49 If you can't type it perfectly, that's okay. I've actually been reading Luciano Ramalho's book, Fluent Python, and he makes that point really well. That Python isn't a statically typed language and you shouldn't get too carried away trying to type things. As much as is possible and helps you is worth it, but you shouldn't be religious about it.
Starting point is 00:32:07 Right. But if you are building tools and you put this in there eventually, it might help other people who consume your libraries. It might help the editors give better autocomplete and error checking and stuff. And we catch bugs all the time. So as much as it's feasible, I think it's totally worth it. And actually, there's a couple other PEPs on that note of things still changing. There's a couple other PEPs that are worth looking at.
Starting point is 00:32:28 There's a new, more convenient way to write optional types. So right now you can say, I know. Oh my gosh. I've wanted this for so long. Yes. Yeah. So you have to say Optional, left bracket, then the thing that is optional.
Starting point is 00:32:41 And then right bracket. Like Optional bracket string, or Optional bracket user, or whatever. Yeah. And you got to import Optional. Don't forget that. Yeah. That's true.
Starting point is 00:32:47 You got to import it too. And so now there's a PEP proposing that you could just put a question mark, which I guess isn't a problem for the parser, which is pretty nice. This one also is in process. Maybe this was something that needed the PEG parser,
Starting point is 00:33:00 which recently went in. Oh, maybe. That was 3.9, right? Where it couldn't do it before, but maybe it can now. Yeah, maybe it can now. But yeah, you know, they have that in C sharp and they have that in Swift.
Starting point is 00:33:09 And I just love, like, this thing, question mark, right? Rather than a null check, or specifying int question mark rather than Optional bracket of int. It's just clear. I didn't know that was in other languages. That, okay, that makes a lot more sense. And it's phonetic, right? Like, if it's an int, you just say int. If it's a nullable int, it's int question mark, right? So you can even just, like, speak it out
Starting point is 00:33:29 really well. Like, and maybe, okay, that could be null, maybe it'd be None. That's not obvious to me, really. Oh, interesting. I feel like that's a nice syntax, but maybe it isn't, who knows. Maybe that PEP won't get approved, you know. Yeah, I think it may not, but I do hope it does. I mean, it's the question mark: there's an int, or is there, right? Like, is it there? You're not sure. Like, there's some subtle symbolism there. See, I prefer the int or None.
Starting point is 00:33:54 I like that as well. I agree. Yeah, that's not bad. Now that's more convenient to write, but that's, what, that's only three times. And the other languages that support this, and I don't know, I didn't read that PEP well enough to know, there's a runtime behavior, not just a type specification behavior. So I could say x equals, like, user question mark dot name. If the user is None, the name is None, or it'll follow down that path and say, okay, user's not None, so then I'll say dot name. Oh, that avoids the 'NoneType has no attribute' error. Yeah, yeah, exactly. That's really nice. Yeah. Wow, very cool.
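Since the int? spelling is only a proposal and doesn't run today, here's a sketch of the spellings that do work, plus the usual Python idiom that stands in for a null-safe ?. operator (function names are just for illustration):

```python
from __future__ import annotations  # lets "str | None" parse before 3.10

from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Classic spelling: Optional[str] means 'str or None'."""
    return f"user{user_id}" if user_id >= 0 else None

def find_user_union(user_id: int) -> str | None:
    """PEP 604 union spelling (Python 3.10+): same meaning, no import."""
    return f"user{user_id}" if user_id >= 0 else None

# Python has no ?. operator, so user?.name is usually written as:
user = find_user(-1)
name = user.upper() if user is not None else None
print(name)  # None
```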
Starting point is 00:34:25 So Will is in the chat and he's got, oh, I did the wrong one. I love that. Hey, Will. That was pretty good. All right, and then Ethan, you want to tell us about one more before we wrap it up? Oh, just another PEP.
Starting point is 00:34:42 Yeah, just another thing that has potentially changed how typing works. Right now, if you've used a TypedDict, which is to say a dictionary with some keys having certain types, there was no way to specify
Starting point is 00:34:55 what keys were optional and which ones weren't. You could either say they were all optional or they were all required, and there's nothing in between. But there's also a PEP to do that. So there's just a lot of stuff on the horizon to keep an eye out for, and these three PEPs, I think,
Starting point is 00:35:08 are a good reminder of that. Yeah, very cool. All right, now can we throw it to you, Brian? Yeah. Now, so this was a suggestion by John Hagan, and I just thought I'd throw it in as an extra. Just one extra. So we've talked about the effort at Microsoft and Guido and others to make Python faster. And there's a whole bunch of ideas up in the faster CPython ideas repo. And this links to a couple of slide decks talking about making Python faster.
Starting point is 00:35:45 And one of the things is a slide deck from Guido. In it, he mentions various other optimizations, like maybe zero-overhead exception handling. Well, that's neat, because that's already in 3.11. So in 3.11, we have Mark Shannon implementing zero-cost exceptions. So if you have a try statement that doesn't catch anything, there's no cost to it. So that's pretty cool. That is very cool. I did a little playing around with this idea, and I wrote a program here that calls string upper like 100 million times in a loop.
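A scaled-down sketch of the kind of experiment Michael describes, using timeit and far fewer iterations; absolute numbers will vary by machine and Python version:

```python
import timeit

def plain() -> None:
    "hello".upper()

def with_try() -> None:
    # The except clause never fires; 3.11's zero-cost exceptions
    # are meant to make entering this try block essentially free.
    try:
        "hello".upper()
    except ValueError:
        pass

n = 1_000_000
t_plain = timeit.timeit(plain, number=n)
t_try = timeit.timeit(with_try, number=n)
print(f"plain:    {t_plain:.3f}s")
print(f"with try: {t_try:.3f}s")
```

In his run, the two numbers came out nearly the same either way, which matches the expectation that the big win is in the bytecode setup work, not in this micro-benchmark.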
Starting point is 00:36:22 And it does that also in a try/except block with no errors. And so my understanding of this was that it will make entering the try block, in the case there's not an exception, cheaper. And I ran it a hundred million times, and I got, you know, not exactly the same, but it's really similar. But one of the other things, which I'm not doing in my example here (this is a gist, I'll put it in the show notes): looking into this comment, Brian, they talk about basically the size of the call stack and some of the other things that happen in there, about not pushing the exception onto the call stack or something unless it actually happens, and those kinds of things. So it's supposed to make function calls faster as well. So even if my little example wasn't necessarily faster, maybe something else,
Starting point is 00:37:09 there's maybe other situations where it is nice. Cool. Yeah. Ethan, anything else you want to just throw out there for people? Well, one thing I did want to mention real fast about the zero-cost exception handling is I think it's always tough to teach people about try/except blocks and then introduce to them that they're actually pretty slow, especially if you use them in a function that gets called many times. And to be honest, I don't know the reasons for the internals being like that. So it's really nice to feel like that might not be true anymore, because they're a good practice to have. To be able to say, like, yeah, be careful when you write code, especially for people like data scientists who aren't day-to-day programmers, to say, like, oh,
Starting point is 00:37:43 it's good practice to use these and you shouldn't have to worry about performance. So glad to see that. Yeah, absolutely. And just following up on that real quick. If you look at the issue underlying this, where was it? This is the right one. Yeah. There's an issue that's linked in the show notes, and it actually shows you the disassembly into bytecode of what it currently is and what it's going to be, and it's really similar. So you can see, currently, the first thing it does is set up a
Starting point is 00:38:11 finally, and then stuff right at the beginning. But now it'll just do, like, a no-op and then do a return value in the good case. Otherwise, it'll do a push exception and then work with it and so on. So it pushes off some of the bytecode operations that add to the call stack, like pushing things onto it and so on, at the ceval.c level of CPython. Yeah. Oh, that's very cool. Yeah. Well, the one thing I wanted to mention, I don't know if people have heard of Pedalboard. I think Spotify just announced this recently. It's basically a Python package that lets you do some things you might usually do using an audio editing tool.
Starting point is 00:38:48 And it's cool on its own. But I had just listened to, I forget if it was last week or the week before, the episode where Brett Cannon was on Python Bytes. And he talked about how, you know, anytime you see an issue with documentation, just put in a pull request. Most of the time it'll get accepted.
Starting point is 00:39:04 And he said he's contributed to like 200 or 300 repositories that way. So I found this last week. And then this week, I was thinking about what I wanted to talk about on the show. So I went back to this link. And lo and behold, the last commit was made by Brett Cannon. And it's removing a stray backtick in the readme. So he really practices what he preaches. So he seems to be very active.
Starting point is 00:39:24 He's one of only nine contributors to this, and probably the rest work at Spotify. So good for him. Nice. Yeah, that's fantastic. Nice to have a little bit of real-time follow-up there. And yeah, all right, so I have a few extras, and again, I have my banner for extras, extras, extras. So a couple things here. Let's talk about something that Kelly Schuster-Paredes talked about. She and Sean, doing the Teaching Python podcast, and they're doing great work over there. So one of the things that she found for teaching is this thing called EarSketch. You probably haven't heard of this, I'm guessing. So EarSketch is a project from Georgia Tech that teaches coding, but through like a DJ type of experience.
Starting point is 00:40:11 She's got a cool video up there. It says five minutes and four lines of code. And I got this up there going. So yeah, thanks, Tony, for pointing that out. So here, I'll just play what she created for everyone real quick. People are teaching. I want to get folks involved through music and Python. That's a real cool project, that EarSketch.
Starting point is 00:40:41 And I told you good stuff about Django before. Let me tell you some bad stuff. Oh, no. You might meet little Bobby Tables in the Django ORM if you're running QuerySet order_by and passing some piece of user input into what you might be ordering by. You might be ordering by backtick semicolon drop table dash dash, or something like that, which you wouldn't want to. So basically
Starting point is 00:41:06 there's a SQL injection vulnerability in Django. What is it? 3.2.0 up to 3.2.5, and 3.1.0 up to 3.1.13. But yeah, less than that, right? Less than 3.2.5 and less than 3.1.13. So if you have those, you definitely want to patch it straight away. That's a critical vulnerability. So check that out. And also, that's on untrusted input. Yes, that is untrusted input. Don't freak out if you're not taking, what would you like to sort by?
Starting point is 00:41:38 Please type here. But still, it's easy enough to just do a GitHub update, just an update to the requirements. Now, if your code is on GitHub and this is in the requirements and you pinned your version, you probably have already gotten this as a security announcement and an email sent to you. It is such a nice feature. But if you don't pin your version, they're like, well, you're on the latest version, you're good, right? You won't know. So it still may slip through. All right. Yeah.
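Besides upgrading, the standard defense is to allow-list anything user-supplied before it ever reaches order_by. A minimal sketch (the helper and field names are illustrative, not from Django itself):

```python
# Only these exact values may ever reach QuerySet.order_by().
ALLOWED_SORTS = {"name", "created", "-created"}

def safe_sort_field(user_input: str, default: str = "name") -> str:
    """Return the untrusted input only if it's on the allow-list."""
    return user_input if user_input in ALLOWED_SORTS else default

# In a view you'd then write something like:
#   Item.objects.order_by(safe_sort_field(request.GET.get("sort", "")))
print(safe_sort_field("-created"))                  # -created
print(safe_sort_field('"); DROP TABLE items; --'))  # name
```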
Starting point is 00:42:07 And Chris May on the live stream has some philosophical thoughts for us. He says sometimes he doesn't even trust his own input. Yes, we've all been there. Chris, don't inject yourself. All right. Shall we wrap this up with some laughs? Yes. Brian, this is going to take some role playing again,
Starting point is 00:42:25 a nice little cartoon for us. This is QA 101. Speaking of the CVE I just spoke about, and you know, if you fix a minor bug, you might get credit, like whatever. We fixed a little tiny bug, right? Formatted in a log file. You fixed a critical bug, like, wow, that seems super important. You've been doing good work this week, right? So here's two developers in an open office sort of space. Brian, you be the guy. I'll be the woman developer. Okay. Which priority should I give this bug?
Starting point is 00:42:51 Is it easy to fix? Yep. I'll fix it immediately. Critical. Critical. Finding the correct bug priority is key, they say. So very nice. I'll link to that little cartoon in the show notes.
Starting point is 00:43:04 I'll get it. Because you're going to get more credit for fixing critical bugs. And if you can fix it right away, that looks like you did way more work. You did so much more work, Brian, over there. It's like, medium bugs? Ethan and I took out the critical. Exactly. You do your T-shirt sizing after you finish. After you take all the work, you assume everything you took was a large. Yeah. So, yeah. Exactly. I keep asking people, so what are the points equal in hours?
Starting point is 00:43:32 No, we can't talk about that. Okay. Do I use powers of two? What do I do? Yeah. Cool. Well, thanks, Ethan, for coming on the show. It's fun. Yeah, this was great.
Starting point is 00:43:41 Thanks for having me. Yeah, it's been fantastic to have you here. Thanks for being here. Brian, thanks as always. Thanks. Bye, everyone. Thanks for listening to Python Bytes. Follow the show on Twitter via at Python Bytes. That's Python Bytes as in B-Y-T-E-S. Get the full show notes over at pythonbytes.fm. If you have a news item we should cover, just visit pythonbytes.fm and click submit in the nav bar. We're always on the lookout for sharing something cool. If you want to join us for the live recording, just visit the website and click live stream to get notified of when our next episode goes live. That's usually happening
Starting point is 00:44:14 at noon Pacific on Wednesdays over at YouTube. On behalf of myself and Brian Okken, this is Michael Kennedy. Thank you for listening and sharing this podcast with your friends and colleagues.
