Python Bytes - #217 Use your cloud SSD for fast, cross-process caching

Episode Date: January 19, 2021

Topics covered in this episode: diskcache, TOML is 1.0.0 now, pyqtgraph, Parler + Python = Insurrection in public, Best-of Web Development with Python, Assorted, Extras, Joke. See the full show notes for this episode on the website at pythonbytes.fm/217

Transcript
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 217, recorded, what is it, January 19, 2021? I'm Brian Okken. I'm Michael Kennedy. And I'm Ogi Moore. Welcome. Thanks for joining us. Thanks for having me. Yeah, thanks for coming.
Starting point is 00:00:19 Who's first? Michael's first. I'm first. You want to talk about caching? I got some cool stuff to talk about with caching. So I recently got a recommendation from Ian Maurer, who was talking about genetics and biology over on TalkPython, I think 154, so a while back. But he pointed out this project called Python Disk Cache. And it just seems like such a cool project to me. So one of the big problems or not problems, one of the trade-offs or the mix of resources we have to work with when we're running stuff in the cloud so often has to do with limited RAM, limited memory in that regard, and limited CPU, but usually have a ton of disk space. For example, on my server, I think I've got like using five gigs out of 25
Starting point is 00:01:02 gigs, but I've only got, you know, two or four gigs of RAM, right? But one of the things you can do to make your code incredibly fast is to cache stuff that's expensive, right? If you're going to do a complicated series of database queries, maybe just save the result and refresh it every so often, or something like that, right? Well, this library here is kind of the simplest version of one of these caches. Like people often recommend memcached, they talk about Redis. You might even store something in your database and then pull it back out. And all those things are fine. They just have extra complexity. Now I have a separate database server to talk to if I didn't have one before. I've got a Redis caching server now I got to share.
What if you just use that extra hard disk space to make your app faster? A lot of these cloud systems like Linux, for example, they have SSDs for the hard drive. So if you store something and then read it back, it's going to be blazing fast, right? So DiskCache is all about allowing you to put this thing in the cache and get it from the cache, but it actually stores it in the file system. That's pretty cool, right? Yeah. So it's super easy to use. You can just come up here and say import diskcache. To get an item, I just say cache, like a dictionary basically, and to put it back, same thing: you give it a key and a value. It's basically like a dictionary, but it persists across runs. It's multi-threaded, multi-process safe, and all those kinds of things. So incredibly, incredibly cool. It's pure Python, it runs in process, so there's not like a server to manage. It has a hundred percent test coverage, hours of stress testing. It's focused on performance. And actually, Django has a built-in caching API, and you can plug this into Django. So when people say cache with my thing, even third-party apps and stuff, you can automatically start using this, which is pretty awesome.
It has support for eviction policies, least recently used first and so on; you can tag things and say these can get evicted sooner and whatnot. So really, really nice, incredibly easy to use. I definitely recommend people check it out because it's very nice. It has different kinds of data structures that you can work with, like a fan-out cache, a Django cache, a regular cache, and so on. So if you want to work with some code, and it's possibly going to run in multiple processes, or it's going to start and then restart, start and stop, and then run again, and you want to not have to recompute everything: diskcache.
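For listeners following along at home, here is a minimal sketch of the dictionary-style usage being described; the cache directory and the slow_query function are made up for illustration, not part of the library.

    import diskcache

    # Point the cache at a directory on the local (ideally SSD-backed) disk.
    cache = diskcache.Cache("/tmp/demo-cache")

    # Dictionary-style set and get; values persist across restarts and are
    # safe to share between threads and processes on the same machine.
    cache["expensive-report"] = {"rows": 42}
    print(cache["expensive-report"])

    # Or memoize an expensive function and let results expire after an hour.
    @cache.memoize(expire=3600)
    def slow_query(user_id):
        # Imagine a pile of database work here.
        return {"user": user_id}

    print(slow_query(7))   # first call computes, later calls hit the disk cache

The Django integration mentioned here is, per the project's docs, a matter of pointing the CACHES setting at diskcache's DjangoCache backend.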
Are evictions on hold for 2020? Yeah, well, because of COVID, you're going to need more disk space. No, just kidding. No, this looks cool. So one of the things I was confused about is, it's called DiskCache, but what's the difference between that and just like a key-value store database? Well, a key-value store database in practice would be no difference. Okay. But you have a separate server. Like there is a server process that runs somewhere that you have to have like a connection string and stuff for, that you talk to in this way. This is like, I have a file. I use the same API to talk to it. So instead of having another server to manage, another place to run it, you just say, let me just put it on the SSD, and that's probably quite fast. Cool, yeah. And then we got a quick question here. Brandon asked, do they talk about any way to scale this out, say multiple servers behind a load balancer? I did not see anything. As far as I can tell, it's local, sort of a per-machine type of thing. It does go across processes, but I haven't seen anything talking about multi-machine. I guess you could set up a microservice, but at that point you might as well just have Redis.
Yeah, yeah. Redis is kind of on my list of things to try here pretty soon, too. Yeah, absolutely. Another thing I want to check out is some of the, well, I like TOML lately. Yeah, TOML's great. TOML's great. I heard that it reached 1.0. Yeah, so it is, it's at 1.0 now, and I think that they were kind of headed there anyway. So I was looking through the changelog, and it looks like they had several release candidates. Anyway, we'll talk about it a little bit. So it's at 1.0. I mean, a lot of us don't really understand. Maybe I'm speaking for myself. Don't really get what all the specification means. I just use it. It just works. It's easy. And one of the things I use it for is the pyproject.toml file.
It's mostly what I use it for. But pyproject.toml is taking off and this is at 1.0. So what does this mean? I'm hoping that this means that we have like a package built into Python that parses TOML. Yeah, now the language is stable, right? Yeah. Maybe it means I need to learn more about TOML. Maybe. But I think there's talk about it. I'm not sure what the state of it is. Maybe we could get Brett or somebody to talk about it. But in the meantime, if you want to play with 1.0 with Python, I think there might be limited choices. So I went out and looked. There's a page on the project page that shows,
it's like down at the bottom, it shows the different projects that implement the various versions of TOML. And there are a handful of C++ projects that support 1.0.0, the most recent version of TOML, and then various support levels for different other things. There's 1.0.0 release candidate 1, which is supported by tomlkit. So tomlkit is a Python project, and I think that might be sufficient to try out most of the new features. Nice. Then there's what I would think of as just the toml project in Python. That one only supports 0.5.0, so I'm not sure what's going on there. It'd be great if it would support the latest, but then I'm like, what does that mean, what's different between 0.5.0 and 1.0? So I went and looked at the changelog. There are three things that jump out that look like real changes. One of them is: leading zeros in the exponent parts of floats are permitted. Okay. Then: allowing raw tab characters in basic strings and multi-line basic strings; that seems reasonable. And then the difficult one might be allowing heterogeneous values in arrays, which, that's cool, and apparently it wasn't there before. Yeah, but none of those seem like super common stuff that's going to be a big breaking change,
like, oh well, of course we use heterogeneous types in here, we're just going to mix up random stuff in our array, right? It seems like the built-in one, or the pure Python one, is probably still decent, right? And I guess there's a whole bunch of changes that are listed as clarify, like clarify this or that, but it is a specification, so clarify might be very important; I'm not sure how important that is. It probably affects the implementations, but I'm putting this out because I'd like to hear from people who know more than I do about this, and how this affects Python, and if we should care about it. Yeah, yeah, for sure. It's very cool to see it coming along, and it definitely lends some support to the whole pyproject.toml stuff.
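To make the heterogeneous-array change concrete, here is a small sketch; the document contents are invented for illustration, and it assumes the tomlkit package is installed (tomlkit tracks the newer spec, while the older pure-Python toml package targets 0.5.0 as discussed above).

    import tomlkit

    doc = """
    [tool.example]
    # Mixed-type arrays like this are valid under TOML 1.0.0,
    # but were rejected under the 0.5.0 spec.
    mixed = [1, "two", 3.0]
    # Leading zeros in the exponent part of a float are now permitted.
    scale = 1.5e+02
    """

    parsed = tomlkit.parse(doc)
    print(parsed["tool"]["example"]["mixed"])   # prints the mixed-type array

If a parser only implements 0.5.0, the mixed array above should raise a parse error, which is an easy way to check what a given library actually supports.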
Yeah. Yeah. Hey, before we move on to Augie's first topic, Martin Boris asked, wondering, is this disk cache thing I mentioned, is it a simple way to share data between uvicorn and gunicorn workers? Yes, exactly. That's exactly why it matters, because it goes across the worker processes, across processes in general, and multiple worker processes are a consequence of how those servers run. Because normally you would either cache in, like, process memory, so you've got to do it like 10 times, you've got it all fanned out in the different processes running. So this will solve that for sure. And then one for you, Brian. Magnus Carlson: yeah, does, um,
what was that, does PEP 621, the TOML spec, whatever the PEP is for that, specify the version of TOML to use? I don't know, we'd have to ask Brett about that too. Yeah, I don't know either, sorry. All right, Augie, what have you got? Well, thank you for inviting me again. You actually have two consecutive weeks of hosting mechanical engineers as your guests on the podcast. Why not? So thanks for being inclusive. But I wanted to talk about PyQtGraph, which is not new, but it's- Yeah, people maybe don't know, though, so tell them about it.
Yeah, absolutely. So PyQtGraph is a plotting library, but it's a little different from the likes of matplotlib and the variants or derivatives of that, or Bokeh. PyQtGraph uses the Qt framework, and it's meant for embedding interactive plots within GUI applications. And as a consequence of using Qt, you can actually get some really high performance out of it. Matplotlib is absolutely phenomenal for generating plots for publications, or for static media on websites, but the moment you try and do anything with mouse interactions, you might be in for a bit of a tough time. But with this, with Qt, you're running natively on the OS, right? Absolutely. Yeah, you're running, yeah. There's no client-server relationship like you would get with Bokeh, which you might need in some certain situations. But anyway, part of the PyQtGraph library, which I guess I should identify that I am a maintainer of, is that we actually bundle an example application. So if you're ever curious about the library and its capabilities, and don't feel like reading through dozens of pages of documentation, you can just run this example app,
which I have on the screen share, and it shows you the list of various- This comes with PyQtGraph, right? Yes. Yeah, it's bundled in the library. So if you pip install pyqtgraph, you get this. And here's some of the basic, you know, plots. But as you can see, you get our mouse interactivity going and, you know, we can do zoom behavior.
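If you want to poke at that example browser yourself, it is a one-liner once pyqtgraph and a Qt binding such as PyQt5 are installed, and a minimal interactive plot is only a few more lines. A sketch, with random data standing in for anything real:

    # Launch the bundled example browser being demoed here:
    #   python -m pyqtgraph.examples

    import numpy as np
    import pyqtgraph as pg

    app = pg.mkQApp()                        # create (or reuse) the Qt application
    # pg.plot() opens a standalone plot window with pan/zoom built in.
    win = pg.plot(np.random.normal(size=1000), title="Noisy signal")
    app.exec_()                              # start the Qt event loop

Unlike a notebook-style plot, the window above is a native Qt widget, which is what makes the mouse interaction and high refresh rates discussed here practical.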
But what's really cool about this library is that that example there, basic plotting, is generated with this code right here. All those plots were in, I can't tell how many lines, maybe 70 lines total. But anyway, within this editor here, you can change any of the code and experiment with it yourself. And here on the tab, you see all these different items. It does 2D. We have some 3D capability, which you need the PyOpenGL library for. Another, this one is just maybe a dozen lines of code, but you have a couple of plots here. And then just with the mouse interactivity, right, we can subselect, or here you can get our crosshairs and get information about what the data points underneath the mouse are. So for an analysis tool, it can be incredibly powerful. And if you're generating tools for any kind of engineering or scientific analysis where you want the user to be able to interact with the data in some way, you know, zoom in, zoom out, things like that, PyQtGraph might be a really good option for you. Yeah, absolutely. Can you run the basic plotting thing real quick? Oh yeah, of course. So when I was looking at this,
the thing that stood out to me was, while it looks like the graphs are beautiful and they look good, you know, for the first couple it's like, I could probably do that in Bokeh or Plotly, you know, matplotlib, something like that, right? But the nice interaction between multiple graphs, as you zoom in on one, the other goes with it, or that super high frequency yellow one, that's, for people listening, it's like refreshing, you know, many, many times a second, right? Getting high frame rates out of those in, like, Jupyter notebooks sounds tricky. Yeah, and I'm actually really glad you brought up high frame rates. I'm actually on the verge of merging a pull request to integrate CuPy support, which is the CUDA NumPy arrays, for some of the image data. And on some of our benchmarks, we're showing being able to go from, you know, maybe 20 frames per second of images up to over 150 frames per second, which, you know, at that point monitors can't keep up, but you lessen the CPU load substantially. Yeah, that's fantastic. We got a comment question from Anthony Shaw: I use the built-in Grapher app in macOS. I do not know what the built-in Grapher
app is, so I'm afraid I don't know how to answer that. You don't know if it can replace it or not? I don't know either. But yeah, PyQtGraph has a couple of dependencies: you need some Qt bindings, and right now we support Qt 5.12 and newer. Up until very recently, PyQtGraph supported like virtually any Qt bindings you could install, even going back a decade, which eventually I had to put an ax to; that was just too much work. And so we support Qt 5.12 or newer. We don't support Qt 6 yet, although there is a pull request in to add support for PySide6, which was discussed on this show just two weeks ago.
It just came out, right? Right. It just came out, which, I'm really thankful for contributors that are submitting these pull requests. I often feel bad that I can't keep up with the rate that they're coming in, but it's still appreciated. Are you looking for contributors to the project? Yeah, absolutely. And not just contributors to the code,
Starting point is 00:14:27 but also people that are willing to look over pull requests or willing to test out pull requests manually. With the plotting library, sometimes testing can be really difficult because like visual artifacts, like how do I test for that, right? And so sometimes a big chunk of our testing is, well, does this break or does this look right? And being able to, you know,
having somebody else, you know, verify that kind of stuff is a really big help. So if you're interested in this, feel free to reach out to me directly or take a look at our issue tracker or pull request tracker. Yeah. Oh, and I guess the last thing I should say is it's primarily used in scientific and engineering applications. Periodically I go through the git log and I look at, like, the email addresses that people are contributing from, and, you know, NASA Ames Research Center and a bunch of places like that. But I get a kick out of that.
Starting point is 00:15:33 Yeah, that's super, super cool. Nice. Thanks for sharing that and good work on it. Well, I another cool thing is Linode and they're sponsoring this episode. Thank you, Linode. Simplify your infrastructure and cut your cloud bills in half with Linode's Linux virtual machines. Develop, deploy, and scale your modern applications faster and easier. Whether you're developing a personal project or managing larger workloads, you deserve simple, affordable, and accessible cloud computing solutions. As listeners of Python Bytes, you get a $100 free credit. You can find all about those details at pythonbytes.fm.
Starting point is 00:16:06 Linode also has data centers around the world with the same simple and consistent pricing regardless of location. Choose the data center nearest to your users. You also receive 24-7, 365-day human support with no tiers or handoffs regardless of your plan size. You can choose shared and dedicated compute instances, or you can use your $100 credit on S3-compatible object storage, managed Kubernetes, and more. If it runs on Linux, it runs on Linode. Visit pythonbytes.fm slash Linode and click on the create free account button to get started.
Starting point is 00:16:44 Awesome. Thanks for supporting the show, Linode. Okay, Brian, I want to cover something that comes linode and click on the create free account free account button to get started awesome thanks for supporting the show linode uh okay brian i want to cover something that comes to us from two listeners this comes from jim kring who pointed out some really interesting aspects how python is being used in this whole parlor social media kerfuffle and a great article by my good friend and fellow Portlander, Mark Little. So let's go over the article first. So you guys heard there was basically an attempt to overthrow the US government. Do you guys hear that? That was lovely. God, what idiots. So a lot of the people who were there got kicked off of official social media and they went to this site called Parler. So Parler, according to Wikipedia, is an American alt tech microblogging and social media networking service.
Starting point is 00:17:29 And it has a significant user base of Donald Trump supporters, conservatives, conservatives, conspiracy theorists, and right wing extremists. Not my words, that's Wikipedia. So a lot of the people who stormed the Capitol tried to get into Congress and stop the counting of the votes, they decided to live blog it on their personal accounts. But a lot of them were no longer on Twitter and whatnot, although some were. So they were on Parler. And they probably came to realize, you know, it's probably not a good idea of showing me charging into the Capitol as like hundreds of people are being arrested and charged with federal crimes, right? At the same time, Parler was getting kicked off of Apple's app store for the iOS. They were getting kicked off of the Google Play store. They were
Starting point is 00:18:14 getting banned in a lot of places. So there was this hacker's not the right person, the sort of data savior person, I guess you could say, who came along and realized it would be great if we could download all of that content and save it would be great if we could download all of that content and save it and hand it over to journalists at say like ProPublica, hand it over to the FBI and so on. It turns out it wasn't very hard to do. There was a couple of things. If you look through the Ars Technica article about how the code behind Parler was a coding mess. And I've tried to figure out what technology was used to implement it. I just couldn't find that anywhere.
Starting point is 00:18:47 Anyway, it says, the reason this woman was so successful at grabbing all this data, which she got like 1 million videos and a whole bunch of pictures. There's a whole host of mistakes. So the public API for it used no authentication. Let me rephrase that. Restate that. The public API used zero authentication. Let me rephrase that. The restate that the public API is zero authentication, no rate limiting, nothing. Just yeah, sure. We'll just go ahead. There you go. You have it all. Secondly, when a user deleted their post, the site didn't remove
Starting point is 00:19:16 it. It just flagged it as deleted. So it would show up in the feed, which in and of itself is not necessarily bad, but you pair that with every post was an auto-incrementing ID, which meant you could just enumerate. You're like, oh, I'm on post 500. Well, let's see what 501 is. It doesn't matter if it's deleted. Give me that.
Starting point is 00:19:33 That's crazy, right? So she wrote a script in Python to go download it. And you can actually see like, here's all the videos and all the stuff and their IDs and whatnot. And in here, this is the one that Jim sent over. If you look, there's a gist here that shows you how do you download a video from Parler? Let's go down and find, is it here?
Starting point is 00:19:55 No, maybe it's not there. I think it might be back. There's a part where it shows the how do you download it with Python and so on. So you just go through and like, you know, screen scrape it traditional Python right there. So apparently Python was used to free and capture all of this. Oh, another thing that they did in Parler that made it easy to get was when you upload videos and images to places like Twitter, they'll auto strip the exif, like the geolocation and whatnot from the images. Now they don't need it, just post it. Right. So like geolocation, camera name, all that kind of stuff is all in there. So there's just a bunch of badness.
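As an aside for anyone curious what that leftover metadata looks like, here is a hedged sketch of inspecting one of your own photos with Pillow; the file name is made up, and which tags appear varies by camera and app:

    from PIL import ExifTags, Image

    # Open a local photo (path is just an example) and read its EXIF tags.
    img = Image.open("my_photo.jpg")
    exif = img.getexif()

    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)   # map numeric tag ids to names
        print(name, value)                         # look for GPSInfo, Model, DateTime, ...

Services that scrub uploads typically strip exactly these fields; a site that re-serves the original file leaves them for anyone to read.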
Starting point is 00:20:33 They've been since kicked off of AWS because, you know, crimes. And now they're apparently trying to get hosted in a server in Russia. Is that right, Augie? Yeah, there was actually, I think there's an article on Ars Technica that went up this morning that they're somewhat partially online on some Russian infrastructure, which...
Starting point is 00:20:52 Yeah, they're only partially online because I looked and it says something like, well, we're trying to come back. Here's a couple of posts. It's not all the way back, right? They're experiencing technical difficulties, as in the world hates them and is trying to make them go away. So I'm not here to try to make this a political statement or anything like that. That's not why I covered the story.
Starting point is 00:21:15 I covered it because I thought it's very interesting, both the security side and how people were able to leverage Python to sort of grab this stuff before it's gone. Some of the journalists were asking, like, is there a more accessible way to get the data? They're like, yes, we're going to build, the woman who got it is like, we're going to build some better way for you to get it. But right now, it's like, I had to run into the burning building and grab the files before they were gone. Yeah. The other thing I sort of want to point out about this story is it's not like Parler was lacking funding to develop these tools. They had, from what I understand, they had significant financial backing. Yeah.
The other thing I sort of want to point out about this story is it's not like Parler was lacking funding to develop these tools. They had, from what I understand, they had significant financial backing. Yeah. And whether they did not have the technical expertise, the time, I don't know. But I'm really curious as more fallout comes from this. You know, there's going to be some good stories from a technical standpoint on here. Absolutely. Well, pretty, pretty insane. All right, Brian, let's move on to something more devy, developer, web devy. Well, you know, maybe if you want to scrape the web
or something else. Absolutely. Yeah, we've got a suggestion from Douglas Nichols, thanks Douglas: Best-of Web Development with Python. I would put Parler not in that list, yeah. So we've seen best-of lists like this before; I'm kind of a fan of them. Yeah, but one of the things I liked about this one is the icons are nice. There's a whole bunch of different icons that are used to help; you know, you can see the likes or the lows and stuff of different projects, and then there are icons so you can search for Flask projects or things like that. That's nice. But it's a pretty big, comprehensive list. We've got web frameworks, HTTP clients, servers, authorization tools, URL utilities, OpenAPI, GraphQL, which is nice to see. There's even web testing and Markdown listed, and how to access third-party APIs. But then near the end, I really liked seeing there's a bunch of utilities sections. So there's Flask utilities and FastAPI and Pyramid and Django utilities, which are really neat. And what I really was pleased to see was that even though FastAPI is, what, a couple years old now, there's a whole bunch of FastAPI projects that are there to make FastAPI easier, like using SQLAlchemy, or coming up with the contributions thing, or how to use React with it, things like that. So yeah, nice. If you want to look at the different tools that are available for web development with Python, this might be a good place to peruse. I feel like that's one of the big challenges in general, you know, with people coming into Python or getting into a new framework: there's 500 libraries to do a thing. Which one should I use? Not, can I find a library? But there's too many, right? Yeah.
Yeah. So do you have a suggestion for that? Well, I think these awesome lists are super good, right? Because they're somewhat vetted and whatnot. What I recommend, so like, for instance, if I was building a, well, it's harder now. But if I was building something new with web development, or a web interface or something, and I didn't know which framework to pick, that's like one of the starter things. It's the people I have around me as resources. So I know that you know about Pyramid, but you're also fairly knowledgeable about FastAPI. And I know some people that are Django friendly and know quite a bit about Django. So if the people, if you've got a couple of friends that already know one of these big hitters, I would go with that so that you can ask them questions. Well, maybe even you don't pick the same thing, but you could ask, like, you chose this one. Tell me, you looked at a lot of the other ones. Why did you pick that? Yeah. Oh, yeah.
Yeah, that's a good idea. Yeah, for sure. Like maybe FastAPI makes sense for me, it doesn't make sense for you, but you can then see why it made sense for me and not for you, or whatever. Yeah, yeah, absolutely. All right. Am I up now? Yeah, you're up. So, Mr. Shaw being in the audience here was a bit of a surprise, but one of the things I wanted to talk about is, I'm going to butcher this, I apologize, Pyjion. Pigeon. I think it's Pigeon.
Pigeon. Oh, my goodness. Okay, yes. What a wonderful name. And I've been fascinated by this. So what Pyjion is, and this feels so awkward to talk about somebody else's project when they're in the audience here, is a JIT extension of CPython that compiles Python code using the .NET 5 CLR. And what's been fascinating to me about this is this is like a whole area of software that I have absolutely no experience with. Like I know nothing about it, but I've been following what Anthony's been talking about on Twitter about it. And he's been explaining what he's doing, you know, along the way in these Twitter-size increments that I feel like I'm able to follow along with the intent. And I found this project absolutely fascinating. And I'm seeing like the rates of improvement over time.
And I've just been absolutely blown away. And so I think this has been absolutely amazing. And I really hope, I'm really curious. So one of the benchmarks that Anthony has been using is his own Python implementation of the n-body problem, which is sort of funny that's come up, because I've been wanting to do an n-body plotting example in PyQtGraph. And of course, this has been sort of on my to-do for some time. So now I'm curious if I should even attempt to, or if it's even remotely possible to, try and integrate those functionalities together. Yeah, that's cool.
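For the curious, Pyjion's usage is along these lines; this is a hedged sketch, assuming pyjion is installed on a supported CPython with the .NET 5 runtime available (check the project's docs for the current requirements):

    import pyjion

    def fib(n):
        # Plain Python; nothing special is needed for the JIT to pick it up.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    pyjion.enable()     # JIT-compile frames from here on via the .NET CLR
    print(fib(30))
    pyjion.disable()    # back to the regular CPython eval loop

The appeal is exactly what's described above: no code changes, just switch the JIT on and measure.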
And go ahead. No, no, go ahead. Oh, sorry. The other thing I've made use of, but is not particularly new, is NumPy's dunder array function protocol, __array_function__, which is specified in NEP 18. And what that allows for is using NumPy methods on not-necessarily-NumPy arrays. So, for example, with CuPy, you can use the NumPy methods that would operate on an ndarray, but use them on a CuPy array. And this is not limited to CuPy; there are other libraries that offer this functionality too. But this makes it so much easier to integrate various libraries together while having minimal code impact and near-identical APIs. And earlier I was talking about the pull request for adding CuPy support into PyQtGraph, and this functionality, which is implemented in CuPy, has made the integration so much easier. Nice, because you guys already implemented it with NumPy, and it's just like, we're just going to go through this layer, basically. Yeah, I mean, there are some other gotchas that you have to handle, you know, handing stuff off to the GPU and stuff like that, but the actual size of the diff was not that big. Well, and you think about what it means to run on a CPU or run on a GPU. Like that's a very different whole set of computing
Starting point is 00:28:48 and assumptions and environments and right, and so on. And to make that a very small merge is crazy. Right, yeah. No, it's fantastic. Yeah, as I said, it's nothing new. This functionality has existed,
has been enabled by default in NumPy since version 1.17, which I believe is almost coming up on two years old now. But this is the first time I've made use of this functionality, or been impacted by this functionality directly, and I'm so appreciative of it. Yeah. Fantastic. And that's super cool. I've not really found a reason for me to work with CuPy or anything like that, but I'm just really excited about the possibilities for the people for whom it does matter, you know? Absolutely. Yeah, I actually, every time I hear about it, I write a note down and say, oh, I've got to check this out. Looks neat. Absolutely.
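To make the __array_function__ point concrete: with NumPy 1.17 or newer, you can hand a CuPy array to a plain NumPy function and the call is dispatched to CuPy's GPU implementation. A small sketch, assuming a CUDA-capable GPU with cupy installed; the array contents are arbitrary:

    import numpy as np
    import cupy as cp

    gpu_data = cp.arange(1_000_000, dtype=cp.float32)   # lives on the GPU

    # np.mean sees a CuPy array, and NEP 18's __array_function__ protocol
    # hands the call to CuPy, so the reduction runs on the GPU and the
    # result comes back as a CuPy object rather than a NumPy scalar.
    result = np.mean(gpu_data)
    print(type(result))

This is the mechanism that lets a library keep calling plain NumPy functions while the protocol routes them to whichever array type it was handed, which is why the CuPy diff mentioned above could stay small.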
Well, there we go. Those are six items. Do you have anything extra for us, Michael? This almost could be an extra, extra, extra, hear all about it. So I'm just going to throw a few things out really quick. One, I got my new M1 not long ago and actually had to send in my old laptop. Its battery was dying. Its motherboard was dying, all sorts of things. So I had to put it in a box and send it away.
I'm like, I don't really want to put my data in here. So I just formatted that as well. So now I have two brand new computers. I'm trying to think like, all right, I've been getting kind of bugged by how much spying, monitoring, observation all these different companies are doing. So I've started running just Firefox, but also, you know, some things a lot of times, like, for example, StreamYard, I can't use a green screen on Firefox. It says I have to use Chrome. I'm like, I don't really want to use Chrome, but I want a green screen. So here I am. So I started using Brave. Whenever something says I have to have Chrome, I started using Brave, which is a more privacy-focused browser. So I thought that was interesting. And just turning on a VPN like all the time, just to limit people observing, not that I really need to keep anything super secret. Two conferences are coming out with calls for proposals that are due quite soon. So the Python Web Conf has got a call for proposals.
Starting point is 00:30:53 The conference is actually March 24th. That order is not quite right, is it? 22nd to 26th. If you look at their site, the days that it's on are sort of not in order. Anyway, end of March, there's a cool online conference. They did this last year, Six Feet Up did, and they're doing it again this year. I'm actually speaking here. Brian, are you speaking there? At the web conf? Yeah.
Well, there's a call for papers, so you could be. You too, Augie. Yeah, and I think they expanded it out to be like five days or something, so there'll be a lot of content, which is very cool. So I'll be giving a talk on a Python memory deep dive there, I believe. And then the big one, PyCon. PyCon is virtual again this year,
but the call for proposals has gone out, and they're due February 12th. So if you want to be part of PyCon, get out there and send something in. Are you going to submit something? I will probably do it, yeah. It means I've got more work to do, but yeah, I think I'll do it.
You got any plans? I'll probably submit something, maybe three, four, five, six, seven, eight, nine, ten proposals. The more you submit, the better chances you got. Augie, you going to submit to either? There's talk amongst us PyQtGraph maintainers about doing a tutorial session at SciPy. So I might, I know that's not listed here, but we're considering doing that, and SciPy is also virtual this year. That makes a lot of sense. Yeah, that's cool. Awesome. Then the final
hear, hear, hear all about it, extra stuff is: Apple's launching a racial equity and justice initiative, which I think is pretty cool. Basically they're setting up centers to teach programming and other entrepreneurship skills in underserved communities. Right. And I know there's, again, a lot of political stuff around all this, but to me, I would just love to be in a world where I look around the community and it looks representative of everybody, right? Like people feel included. Like tech is such a wonderful space. I think this is a cool initiative. Obviously, hopefully they deliver it in the right way. It's not just like, we're going to teach everyone how to build iPhone apps, that's what the world is, right? You know, it's a more broad sort of conversation. I could go any which way. And hopefully it's just a start. Like if you look, they're saying they're donating a hundred million dollars to this cause, which is a lot of money, but it's also only eight hours of profit to Apple. So, yeah, it's got room to grow, I suppose. Anyway, I just want to give a shout out to that as well. That seemed pretty cool. All right, Brian, how about you? More conference stuff? Well, PyCascades is actually, I don't remember when it is.
February, possibly. February, probably. Yep, February 20th it starts. And the schedule's up, so I wanted to announce that the schedule's there so you can check it out. There are still tickets available, and you can see what's going to happen. I had fun at the in-person PyCascades, and I think they did a good job for the online one in 2020. And we're going to be there. Yeah, we are. We're on a panel.
Yeah. Along with Ali Spittel. Yeah, should be fun. It'll definitely be fun. It's about podcasting, but there's like another panel about writing technical books that looks good. There's a bunch of cool talks that I'm looking forward to seeing. Yeah, me too. It looks great. I love that all these online conferences are pretty accessible to everybody. Last year, if we would have announced this, it'd be like, oh, well, I'm not in Portland, so it doesn't matter to me. Yeah.
Augie, I know you got some stuff to shout out real quick, but also a quick question, a follow-up from Anthony. NumPy uses AVX extensions for native matrix multiplication on supported CPUs. It'd be interesting if that extension supported the same for non-NumPy arrays. Thoughts? Ideas? Yes, I'm sure you can use those extensions on non-NumPy arrays. I mean, NumPy doesn't have a monopoly on AVX extensions. You know, it just needs, whatever library you use, I think it would just need to be compiled with the Intel MKL BLAS extension, which gets into build systems, which is way over my head. I mean, I used to live in the C++ world and whatnot, but I'm far from that world that you and Anthony are inhabiting these days. So, yeah, in short, I'm not sure. But in terms of the extras, a couple of things I wanted to bring attention to:
one is, I've been loving the Anthony Explains video series. And these are generated by, oh God, I'm probably going to mispronounce his last name, Anthony Sottile. He's been a guest on, I can't remember if he's been a guest here, but I think he's been a guest on Talk Python To Me. He maintains pre-commit, he's a pytest developer, and maintains- Anthony Sottile. Sottile. Sottile. Yeah. And I've been absolutely loving his Anthony Explains playlist series. The other resource that I've recently found myself having to make use of is Learn X in Y Minutes. You know, sometimes I have to write something in a tech stack or in a language I have absolutely no familiarity with, and that resource has been absolutely amazing for the five-minute overview of the real basic operations. And then the other one is this book I've been reading, Working in Public. I think Guido plugged it a while ago on his Twitter feed, but it talks about maintaining open source projects and some of the issues arising there. I'm still not done with it, but I think it's both helpful from a maintainer's point of view, you know, as a sanity check that your experiences might not be as isolated as you think, and I think it's helpful for new open source contributors to see what things might look like from the maintainer's perspective as well.
Starting point is 00:36:30 Have you read it, Brian? Um, I, it has an audio book version, so I listened to it and it, um, you wouldn't think like a book on open source would be good audio, but it was great. Yeah. Fantastic. Awesome. All right. Well, uh, Brian, should we do a joke?
Starting point is 00:36:43 Yes, we should. All right. So I put two jokes into the show notes. One of them is a rap song, which I know Brian is especially fond of. It's a rap song about working at the help desk. So if you're the help desk for your company or I guess public support as well, it's by a duo called Here to Help. And man, it is so funny. It's a video, uh, song, uh, you know, on YouTube. So it doesn't really make sense to cover it, but I thought I'd throw it in there as a pre, uh, pre-recommendation, what I'm going to actually talk about. Augie, what do you think?
I see you smiling. Oh, I have to say that that song was just a jam after jam after jam. It is. I need you to click your right mouse button. I only have one mouse. So here's the actual Python-related joke for us. And it's a tech support thing. Brian, why don't you be the person that needs some help? Okay. Hi. This is a chat, by the way. Tech support, how may I help you? Hi, I've got a problem. Your program is telling me to get a pet snake. I don't want one. Excuse me?
Starting point is 00:37:46 It's giving me a message telling me I need a snake to run it. Okay, read the message to me, please. Python required to run the script. That's terrible. That is terrible. Terribly good is what it is. Yeah. Hey, I wanted to add some humor as well. All right, do it. So I saw this on Twitter and it was a quote from, I don't know how to pronounce that name. Byron?
Brian? Byron. I don't know. A quote from Byrne Hobart: Running a successful open source project is just Good Will Hunting in reverse, where you start out as a respected genius and you end up being a janitor who gets into fights. Yeah, that's awesome. And it goes right along with the book recommendation as well.
Well, that's a good way to put a cap on it. Yep. All right. Well, thank you, Brian. Thank you. Thank you, Augie. Thank you for having me. Bye, everyone.
