Python Bytes - #16 Postmodern Python and Open-source Financial Awards

Episode Date: March 7, 2017

See the full show notes for this episode on the website at pythonbytes.fm/16...

Transcript
Starting point is 00:00:00 This is Python Bytes. Python headlines and news delivered directly to your earbuds. It's episode 16, recorded March 6th, 2017. And this is your host, Michael Kennedy, here with my co-host, Brian Okken. Hey, Brian. Hello. Hello, hello. Well, I'm excited to talk about another week's worth of Python news. But before I do, I want to say thank you.
Starting point is 00:00:20 Thanks to Rollbar for sponsoring this episode. Yes, thank you. Yeah. So you've got the first item, and it's kind of an artistic postmodern type of thing, but with error handling, huh? Yeah. So there's an article on Journal Panic, and I've got to say that's a cool name for a website. But it was called Postmodern Error Handling in Python 3.6. And it's actually, it's not really, I guess you could consider it error handling. It's mostly like how to structure your code so that you don't make mistakes and
Starting point is 00:00:51 let the language help you out. I highly recommend the read because the examples are hilarious, and it goes through some error prevention, looking at using enum classes and the NamedTuple class, which I have to admit I just learned about. I've used named tuples, but I've only learned about the class recently. And using type hints and mypy to help avoid errors in your code. Yeah, I think it's really nice. Some of these are 3.4 only, right? Like, enum support was added in Python 3.4,
Starting point is 00:01:28 which is pretty cool. Yeah, there's somebody who commented, basically the idea was they were talking on some message board like Core or something about, like, how would you handle this kind of error? Let's say it's not true or false. It's true or false or maybe, right? And they proposed, like, this three-state type of thing. And somebody said, well, if you were using Rust, you could have enums. And he says, well, I'm using Python 3, so I have enums, right? That was pretty funny.
Starting point is 00:01:55 Definitely. Okay. Oh, and yeah, it's nice. I only noticed this this morning: the tail end of this article references your Talk Python. So that's cool. Yeah, oh, that's awesome. Yeah, thanks for referencing that. That's really cool.
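To give a rough flavor of the features the article covers, here's a small sketch combining enums, the class-based NamedTuple, and type hints. The names here are illustrative, not the article's actual code:

```python
from enum import Enum
from typing import NamedTuple


class Answer(Enum):
    # A three-state "true/false/maybe" as an enum instead of magic values.
    YES = 1
    NO = 2
    MAYBE = 3


class Player(NamedTuple):
    # The class-based NamedTuple syntax, with type annotations (3.6+),
    # is a cleaner alternative to collections.namedtuple("Player", ...).
    name: str
    points: int


def describe(p: Player) -> str:
    # mypy can check calls against these annotations: `mypy example.py`
    return f"{p.name}: {p.points}"


print(describe(Player("Ada", 30)))
```

Running mypy over a file like this will flag, say, `describe("oops")` or `Player("Ada", "thirty")` before the code ever runs.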
Starting point is 00:02:08 Yeah, so I think named tuples are so excellent. I've really started using named tuples a lot. I think they're great. Type hints? Yeah, named tuples I've been using a lot because they're very convenient. But the syntax is a little ugly. And the NamedTuple class actually makes the syntax a
Starting point is 00:02:27 little bit cleaner, and I think that's cool. Yeah, very cool. And of course mypy, which is great for static type, well, type hinting, I guess, and type discovery. Yeah. And now, in the show notes, I'm also including a link to another article on mypy and type hints called Python Type Annotations, and that's from the Caktus Group, and it's another pretty good tutorial for type hints and mypy. Yeah, excellent, excellent. So the next one we're going to talk about, and this is sort of a sequence that I'm going to go through here.
Starting point is 00:02:59 One leads to another in a very cool way. So mid-last year, mid-2016, Mozilla awarded a little over half a million dollars to nine open source projects just in Q2 2016. That's really cool, right? That's very cool. Yeah, so they have these things they call tracks, foundational technology tracks and mission partner tracks.
Starting point is 00:03:21 And so what they do is they go out and they find open source projects where somehow the success of that open source project furthers the goals of Mozilla, you know, an open source free web that Firefox and other things run well on, right? And so why is this on the Python thing? Well, they chose a really cool project. So they said that they've already funded eight projects for $385,000 and they're still considering more, and they actually have applications available for what's called a mission partner. So if you have an open source project and you think the mission of your project lines up well with Mozilla, maybe that could be your job,
Starting point is 00:04:02 right? Like maybe Mozilla would pay you to work on your project. So first of all, just check this out. That's awesome. Yeah, definitely. That's cool. Yeah. They also have a track, Secure Open Source, for improving the security of open source software. And then the one that's really interesting to me, and I think to the audience, is the foundational technology track. And one of the things that was awarded there, one of the projects, was PyPy, the alternate implementation that's a JIT for Python. And it's been working with Python 2.7, but not really great with Python 3.
Starting point is 00:04:37 It's kind of sort of iffy. Some of the features are supported and so on. So they received $200,000 in donations to make that happen. That's really great. And that'll make PyPy a lot more compelling if we can get that updated to 3. Yeah, absolutely. I mean, it's one thing to say we're on 2.7 and then we'll move in a little bit, but it's another to say, and we're going to build on a technology that has no Python 3 story. Like that's a pretty serious blockade, but now they have an alpha version of PyPy 3.5 out, and it's looking really good.
Starting point is 00:05:08 So more on that in another section. Okay, cool. Speaking of announcements, I've just been sort of following what Intel's been doing. I think TalkPython had an Intel episode. Yeah, we had a great conversation about like how Intel's actually thinking about the architecture of their chips so that it lines up with the way that Python executes code. Yeah, and they're continuing working on this.
Starting point is 00:05:34 The announcement that we're linking to is an announcement from Intel that says, Intel Distribution for Python 2017 Update 2 accelerates five key areas for impressive performance gains. And so it looks like they're continuing on. And this version that they've got is a special release of Python that's compiled knowing that it's going to run on an Intel architecture. And it supports Windows, Linux, and OS X. And it's got 2.7 and 3.5, oddly 3.6 missing. What's going on, guys?
Starting point is 00:06:10 But the improvements are pretty cool. There was a comment about widespread optimizations for NumPy and SciPy FFTs, which it stated could sometimes improve 60x over Update 1. And so they really are hammering into trying to make this fast for some intensive stuff. Yeah, that's really cool. If you're doing any sort of computational stuff involving NumPy and SciPy,
Starting point is 00:06:39 that's a pretty amazing performance increase. And to make it basically as fast as native C and the Intel high-performance C libraries, that's really something. They also touched on some improvements in memory management. What I thought was cool was arithmetic and transcendental expressions from NumPy are able to use multiple cores now. And I just like that because I don't know what a transcendental expression is. I think that's like what your face looks like when you're meditating, maybe.
Starting point is 00:07:09 Yeah, that sounds awesome. It already sounds intriguing. What good naming and nomenclature. Yeah. But anyway, I think it's neat that they're working on it, and also that it's not just for paid people. There is a free standalone version. That's cool. Yeah, yeah. That's very cool. Okay. The next one I want to talk about is also about performance.
Starting point is 00:07:30 But before we do, I want to talk about errors. We don't like errors in our web apps, do we? Definitely not. No. So our sponsor this week, Rollbar, will totally help you take the pain out of errors and solve that problem.
Starting point is 00:07:41 So I use them on my own sites. I know many people out there do as well. So what you do is you basically install Rollbar, with just a few lines of code, into your Flask, Django, or Pyramid web apps. And if there's ever an error, you'll get a notification, you'll get a report containing all the details that you need. So here's some kind of crash, here is the traceback, here's the browser and the platform and everything that was possibly passed in, the whole web request right there for you. So you'll get notified straight away of any errors and you won't even probably have to debug it. You'll just
Starting point is 00:08:15 have to look and go, oh dear, we have to go fix this. So definitely check them out at rollbar.com. I use them, and I think they're great. Wonderful. Yeah, absolutely. And thanks for sponsoring the show, guys. All right. So let's talk about performance, and this is a follow-on from the Mozilla one. So the guys at PyPy released some performance graphs and stuff from their work on implementing Python 3.5. And they said, look, the core new thing that we need to work on is this whole asyncio story, right? That's sort of the kernel of the new stuff in Python 3.5. And so what they did is they said, well, let's put out some numbers on how we're doing in that area. I want to say thanks to Guy Fickhell, who actually sent this over to us to say, hey,
Starting point is 00:09:03 you should talk about this. They said, look, we're going to take things like Tornado, asyncio, Curio, gevent, and Twisted, and we're going to run them on PyPy 3.5 and see how they do. And they did pretty well, actually. They have a bunch of graphs, and they're basically five to ten times faster than CPython for all of those workloads. Wow. Yeah. So if you could run five times fewer servers because you don't need them
Starting point is 00:09:31 with pretty much the same code, that would be pretty handy, right? Definitely, yeah. That's cool. Yeah. So there's a lot of interesting things. I feel like this whole let's-leverage-asyncio-plus-something-else idea is really blowing up these days. We've had a lot of stuff happening in the web frameworks with Japronto, with aiohttp, with Sanic. There's a lot of stuff going on right now trying to do something with async and asyncio to make things faster.
Starting point is 00:10:03 So I think we'll continue to hear good stories around this. Yeah, it's cool that it keeps progressing. Yeah, absolutely. Definitely. Well, what's unclear is, like, which one of these is going to be the path, right? Like, is Japronto going to be the way to go? Is aiohttp the way to go?
Starting point is 00:10:17 Like Sanic, there's so many sort of flowers blooming. It's kind of tough to pick the right one because they all seem so promising in slightly different ways. Yeah. And for somebody that's maybe trying to set up a project and needs to pick one right now, I can see that that might be a little confusing. But I don't think these are terribly different things. I don't know. Maybe it is. I have no idea what it would be like if you went with, I don't know, Sanic or something, and then Sanic disappeared for some reason and you needed to switch, how difficult that would be. But at the very least, having a
Starting point is 00:10:54 lot of people look at it and try to make things faster is a good thing. Yeah, absolutely. Ned Batchelder is the guy that supports coverage.py, the tool most people use for looking at the coverage of their code. And he wrote a couple of articles called A Tale of Two Exceptions, in two parts. What was going on was he was trying to get all of his test suite to run on Jython. And there was some, I don't know the details of the problem, but there was an issue with Jython that made it so that the reporting mechanism doesn't quite work, or doesn't work. So it's not a crucial part of coverage, but those tests didn't work. So he wanted to skip the tests that depended on that while
Starting point is 00:11:43 running on Jython. And wow, it's an interesting tale. He kind of walks through the entire thought process of why he chose different attempts, maybe inheriting from the exception class, and picking another base for the exceptions that coverage uses, and ends up kind of leaving it not quite wrapped up at the end of the first post. And then, with some feedback from one of the readers on the first post, he came up with a way to use decorators and metaclasses, applying the decorators to be able to skip those tests more effectively. And it's a couple of pretty cool articles. So yeah, nice work, Ned. That's a nice write-up. So what do you have to do if you want to do coverage on Jython? You basically
Starting point is 00:12:31 run it and collect the report. And then you have to actually somehow process it with CPython. Well, I'm not exactly sure. I'm sure that there's enough support to understand your coverage, but the coverage package has a whole bunch of cool stuff, like generating HTML reports and a lot of other types of reports, and it's possible that those are the parts that are missing. Yeah. This is definitely an interesting use of metaclasses. There's some multiple inheritance thrown in there. There's a lot of stuff in this, actually. Yeah. And one of the things that I wanted to point out was, it's refreshing... I've seen a lot of articles with people saying, look, I've got this cool solution I came up with for this particular problem. And it's very refreshing to see somebody say, I've got a sticky problem with a solution that I'm still not quite happy with, and here it is. And it's nice.
Starting point is 00:13:26 And another thing, a good takeaway from it, is he didn't present all of the code that he could have. He boiled the code that he puts in the articles down so that you could understand the problem, but they're not huge. That's extra work on his part, but it makes it a nice quick read. Yeah. A lot of people ask me, how did you come up with this type of problem? Could you explain the thinking that got you here? Because I don't see how you got from A to B. This is a good example of laying that out. I think it's nice. Yeah. And also to kind of prevent people from saying, well, why didn't you try X? Because he already did, and he's showing it. So it's nice.
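To give a rough flavor of the decorator-plus-metaclass idea, here's a hypothetical sketch (not Ned's actual code from the articles): a metaclass that applies a skip decorator to every test method when the suite is running on Jython.

```python
import platform
import unittest

# Assumption for this sketch: we detect Jython via the interpreter name.
ON_JYTHON = platform.python_implementation() == "Jython"


class SkipOnJythonMeta(type):
    """Metaclass that wraps every test_* method in a skip decorator."""

    def __new__(mcls, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if attr.startswith("test") and callable(value):
                # skipIf is a no-op when the condition is false, so the
                # tests run normally everywhere except Jython.
                namespace[attr] = unittest.skipIf(
                    ON_JYTHON, "reporting not supported on Jython"
                )(value)
        return super().__new__(mcls, name, bases, namespace)


class ReportTests(unittest.TestCase, metaclass=SkipOnJythonMeta):
    def test_html_report(self):
        self.assertTrue(True)
```

The appeal is that test classes opt in once via the metaclass instead of repeating `@skipIf(...)` on every method.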
Starting point is 00:14:05 Yeah, nice. All right, check that one out. That's very cool. The last one is also about async and await, and it's also about performance. But this is a totally different story. So all this stuff that we've been talking about, the Mozilla thing, the PyPy tests that they ran,
Starting point is 00:14:19 the majority of those were testing web frameworks. So I want to write a web server. It's going to process some requests. Let's do it faster. This is totally different. This is about, I want to consume services really quickly. And the aiohttp library actually has some really cool stuff to do this that I just learned about. So I thought I'd share it with you guys. Wow. That's great. Yeah. So you've heard of requests, right? Like the most downloaded package ever? Definitely, yeah.
Starting point is 00:14:46 Yeah, so we all know about requests, right? It's downloaded like 7 million times a month or something insane. Well, aiohttp has a library similar to requests. It's actually very, very similar in the way that you use it and its features and so on. However, with requests itself, you can't await it with async and await, right? It doesn't use some sort of deferred IO callback in order to complete requests. It just blocks. So the big difference with aiohttp is you can await the responses at different levels.
Starting point is 00:15:20 You can do it on the network calls. You can do it on the parsing. It even has a file-based thing, so you can await writing to files. And so this person wrote up a cool little example, putting those both side by side. And the code is quite similar. If you didn't have async and await, it would be not so similar. It would be really nasty looking.
Starting point is 00:15:39 But because you do, it becomes real similar. So what we're going to do is, I want to get a bunch of stats about basketball players in the NBA. There's an API for that, apparently. So it's going to run some code and it's going to go collect all the stats. And it took 12 minutes with requests. Using aiohttp and aiofiles, it took 22 seconds. Wow. That's really awesome, right? That's 33 times faster. Definitely. And the code is virtually identical. That's pretty cool. So the difference is, like, basically, you begin a request to the API, and normally you're just waiting on the network, right?
Starting point is 00:16:13 You're waiting for a response, right? But you should be able to kick off a whole bunch more of those requests until one of them comes back and you have to process them. So it doesn't even use threads to pull this off. It just uses IO callback type things. Really? Okay. Yeah. Pretty awesome. So this is definitely one
Starting point is 00:16:31 of those things that shows the power of Python 3.5. Yeah, and it's nice to have it on the client side, too. We've got a lot of examples recently of async and await on the server side. Yeah, absolutely. Absolutely. So I want to squeeze one more piece of news in here. side. Yeah, absolutely, absolutely. So I want to squeeze one more piece of news in here.
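The kick-off-many-requests pattern described above looks roughly like this with aiohttp. This is a minimal sketch, not the NBA-stats code from the article; the URLs and names are placeholders.

```python
import asyncio

import aiohttp  # third-party: pip install aiohttp


async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    # Both the request and the body read are awaitable, so the event
    # loop can service other requests while this one waits on the network.
    async with session.get(url) as resp:
        return await resp.text()


async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        # Kick off all the requests at once instead of one at a time;
        # gather awaits them concurrently and returns results in order.
        return await asyncio.gather(*(fetch(session, u) for u in urls))


# Python 3.5/3.6-style driver:
# loop = asyncio.get_event_loop()
# pages = loop.run_until_complete(fetch_all(["https://example.com"]))
```

With blocking requests, the same loop would pay the full network latency for every URL in series; here the waits overlap, which is where the 12-minutes-to-22-seconds kind of difference comes from.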
Starting point is 00:16:47 Okay. Yeah, before we wrap it up. So I talk about PyPI a lot on Talk Python. We grab a lot of packages out of there and talk about them here on Python Bytes. There's quite a milestone that just passed there two days ago, three days ago, something like that. A very cool milestone.
Starting point is 00:17:04 So there are now over 100,000 packages on PyPI. How cool is that? It's very cool. I wonder if the guy that, or the, I shouldn't say guy, the person that got the 100,000th one in there, if they know about it. If they know that they are the one that put it over the top. Yeah, we need to find out. We need to contact Donald Stufft and see if he knows,
Starting point is 00:17:25 if he can find out. I bet there's some sort of query that can be done that'll answer that question. Yeah. That'd be cool. That'd be very cool. Okay. All right. That's it for me. Got any news to share with anyone? Just that I've been in the throes of trying to switch over. I switched over the Test and Code podcast to a new domain name at testandcode.com. And hopefully that should be seamless to anybody that's already subscribed. But yeah, if you see anything weird going on, let us know. Okay, awesome.
Starting point is 00:17:57 Test and Code. Very cool. So congrats. You have a whole new system driving it and everything, right? Like a whole new website. It's basically hands-off for me. I'm letting software as a service do most of the work for me. That sounds very relaxing. Now, I was going to switch the... I have test podcast as the Twitter thing with a bunch of followers. And I also have test and code as a Twitter, but there's only like four people
Starting point is 00:18:22 following. So I have no idea what to do with that. How about you? Got anything you want to announce? No, not too much. I'm just continuing to work on classes, creating more online classes. So I actually have a surprise one coming that I'm sure will be unexpected, but I think it'll be fun.
Starting point is 00:18:38 And maybe next week I can talk about it. I'll practice my surprised voice. Ah! Yeah. All right. Well, thank you, everyone, for listening. Brian, great to talk to you, as always. As always.
Starting point is 00:18:48 See you. Thank you for listening to Python Bytes. Follow the show on Twitter via @pythonbytes. That's Python Bytes, as in B-Y-T-E-S. And get the full show notes at pythonbytes.fm. If you have a news item you want featured, just visit pythonbytes.fm and send it our way. We're always on the lookout for sharing something cool. On behalf of myself and Brian Okken, this is Michael Kennedy. Thank you for listening and sharing this podcast with your
Starting point is 00:19:14 friends and colleagues.
