Python Bytes - #193 Break out the Django testing toolbox
Episode Date: August 6, 2020
Topics covered in this episode:
* Start using pip install --use-feature=2020-resolver if you aren't already
* Profiling Python import statements
* Django Testing Toolbox
* pandas-profiling
* Interfaces, Mixins and Building Powerful Custom Data Structures in Python
* Pickle's 9 flaws
* Extras
* Joke
See the full show notes for this episode on the website at pythonbytes.fm/193
Transcript
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.
This is episode 193, recorded July 29th, 2020.
I'm Michael Kennedy.
And I am Brian Okken.
And we've got a bunch of great stuff to tell you about.
This episode is brought to you by us.
We will share that information with you later.
But for now, I want to talk about something that I actually saw today.
And I think you're going to bring this up as well, Brian.
I'm going to let you talk about it.
But something I ran into updating my servers, with a big warning in red when I ran pip, saying,
your things are inconsistent.
There's no way pip is going to work soon.
So be ready.
And, of course, that just results in frustration for me because, you know,
Dependabot tells me I need these versions, but some things require... anyway, long story. You tell us about it.
Yeah, okay. So I was curious. I haven't actually seen this yet, so I'm glad that you've seen it and have some
experience. This was brought up to us by Matthew Feichard, and he says he was running pip and he
got this warning, and it's all in red, so I'm going to have to squint to read it. It says: after October 2020, you may experience errors when installing or updating packages.
This is because pip will change the way it resolves dependency conflicts. We recommend
that you use --use-feature=2020-resolver to test your packages. It shows up as an error, and I
think that's just so that people actually read it,
but I don't know if it's a real error or not.
It works fine, but it's going to be an error eventually.
Okay, so this is not a problem.
Do not adjust your sets.
Actually, do adjust your sets.
What you need to be aware of is the changes.
So I think we've covered it before,
but we've got a link in the show notes
to the pip dependency resolver changes, and these
are good things. But one of the things that Matthew pointed out, which is great (we're also
going to link to an article where he discusses how his problem showed up with this),
is around lock files. Some people use Poetry, and the other one, Pipenv, that do
things like lock files, but a lot of people just do that kind of manually. What
you often do is you have your original set of requirements, just the
handful of things that you immediately depend on, with no versions, or with minimal version
rules around them, and you say, pip install this stuff,
well, that actually ends up installing
a whole bunch of all of your immediate dependencies,
all of their dependencies and everything.
So if you want to lock that down
so that you're only installing the same things again and again,
you say pip freeze and then pipe that to a lock file.
And then you can use that, I guess, a common pattern.
It's not the same as pipenv's lock file and stuff,
but it can be similar.
Anyway, and then if you use that and pip install from that,
everything should be fine.
You're going to install all those dependencies.
The problem is if you don't use the --use-feature=2020-resolver flag
to generate your lock file,
then if you do use it to install from your lock file,
there may be incompatibilities with those.
So the resolver is actually,
there's good things going on here,
having pip do the resolver better.
But the quick note we want to say is don't panic
when you see that red thing.
You should just try --use-feature=2020-resolver.
But if you're using a lock file,
use it for the whole process.
Use the new resolver to generate your original lock file
from your original stuff,
and then use it when you're installing
the requirements lock file.
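The lock-file workflow described here boils down to a few commands; a sketch (the file name requirements.lock is just a convention, not something pip mandates):

```shell
# Generate the environment and the lock file with the new resolver...
pip install --use-feature=2020-resolver -r requirements.txt
pip freeze > requirements.lock

# ...then use the same resolver when installing from the lock file later,
# so both steps agree on how dependency conflicts are resolved.
pip install --use-feature=2020-resolver -r requirements.lock
```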
There's also information on the PyPA website.
They want to know if there are issues.
This is still... it's available, but there are still maybe kinks, but I think it's
pretty solid. Not enforced, but in a warning phase. Yeah, and I
actually really like this way of rolling out a new feature and a behavior
change: to have it be available as a flag
so that you can test it in, not a pre-release, but an
actual released release, and then
change the default behavior later. So the reason why we're bringing this up is October's
not that far away, and October's the date when that's going to change from an opt-in flag
to the default behavior. So yes, go out and make sure these things are happening. And if you
completely ignore us, when things break in October, the reason is probably that you need to regenerate your lock file.
Yep. So in principle, I'm all for this.
This is a great idea.
It's going to make sure that things are consistent by looking at the dependencies of your libraries. However, two things that are driving me bonkers right now are systems like
Dependabot or PyUp, which are critically important for making sure that your web apps get updated
with, say, security patches and stuff, right? So you would do this: you pip freeze
your dependencies, and then it has the version. What if, say,
you're using Django and there's a security release around something in there, right?
Unless you know to update that, it's always just going to install the one that you started with.
So you want to use a system like Dependabot or PyUp, where it's going to look at your requirements.
It's going to say, these are out of date. Let's update them. Here's the new one. However, those systems don't look at the entirety of what they potentially could set them to. It says, okay, you're using docopt. There's a docopt 0.16. Oh, except the thing before it actually requires docopt 0.14, or it's incompatible. And incompatible as in pip will not install that requirements.txt any longer,
but those systems still say, great, let's upgrade it.
And you're in this battle where those things
are upgrading it,
and then the older libraries are not upgrading,
or you get two libraries.
One requires docopt 0.16 or above,
one requires docopt 0.14 or lower.
You just can no longer use those libraries together. Now, it probably doesn't actually matter, like the feature
you're using probably is compatible with both, but you won't be able to install it anymore. And
my hope is what this means is that the people that have these weird old dependencies will either
loosen the requirements on their dependency structure, like we're talking about,
right, like this thing uses this older version or it's got to be a new version, or update it, or
something, because there are going to be packages that are just incompatible that
are not actually incompatible, because of this. Yeah, interesting. Yes, painful. I don't know what
to do about it, but it's like literally this morning I ran into this and I had to go back and undo what Dependabot was trying to do for me because certain things were no longer working right or something like that.
Interesting. Yeah. So does Dependabot, Dependabot, Dependabot?
Yeah, that's the thing that GitHub acquired that basically looks at your various package listings and says, there's a new version of this. Let's pin it to a higher version and it comes as a PR.
That was my question. It comes as a PR, so if you had
testing around in a
CI environment or something,
it could catch it before
it went through. Yes, you'll still get the PR,
it'll still be in your GitHub repo,
but the CI
presumably would fail because
the pip install step would fail, and then
it would just know that
it couldn't auto-merge it. But still, it's like, you know, you're constantly
trying to push the tide back, because you're like, stop doing this, it's driving me crazy.
And there are certain ways to limit it, to force it to certain boundaries,
but anyway, it's going to make it a little bit more complicated,
some of these things.
Hopefully it considers this.
Well, maybe Dependabot can update to do this.
Wouldn't that be great?
Yeah, that would be great.
Well, speaking of packages,
the way you use packages is you import them
once you've installed them, right?
Yes.
So Brandon Branner was talking on Twitter with me
saying, like, I have some imports that are slow.
Like, how can I figure out what's going on here?
And this led me over to something we may have covered a long time ago.
I don't think so, but possibly called import-profiler.
You know this?
No, this is cool.
Yeah, so one of the things that can actually be legitimately slow about Python startup code is
actually the imports. So, for example, if you import requests, it might be importing a ton
of different things: standard library modules as well as external packages, which are then themselves
importing standard library modules, etc., etc. Right? So you might want to know what's slow and what's not.
And it's also not just like a C include.
Imports actually run code.
Yes, exactly.
It's not something happening at compile time.
It's happening at runtime.
So every time you start your app, it goes through and it says,
okay, what we're going to do is we're going to execute the code
that defines the functions and defines the methods and potentially other code as well who knows what else
is going on. So there's a non-trivial amount of time to be spent doing that kind of stuff. For
example, I believe it takes something like half a second to import requests. Just requests. Interesting. I mean,
obviously that depends on the system, right? You do it on MicroPython versus on, like, a supercomputer, the time's going to vary. But nonetheless, there's a
non-trivial amount of time because of what's happening. So there's this cool thing called
import-profiler, where all you've got to do is say from import_profiler import profile_import.
Say that a bunch of times fast. Written, it's fine. Spoken, it's funky. But then
you just create a context manager around your import statements. You say with profile_import()
as context, put all your imports inside, and then you can print out, you say context.print_info(), and you get
a profile status report. That's cool. Now, I included a little tiny example of this for requests
and what was coming out of it. If you look at the documentation, it's actually much longer.
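For reference, the import-profiler incantation above looks like the commented lines here; since that's a third-party package, the runnable part of this sketch uses CPython's built-in -X importtime flag (3.7+) to get similar per-module timings from the standard library alone:

```python
import subprocess
import sys

# import-profiler usage, per its README (third-party: pip install import-profiler):
#
#   from import_profiler import profile_import
#   with profile_import() as context:
#       import requests
#   context.print_info()
#
# A standard-library alternative: CPython can time every import with
# -X importtime, writing one line per module to stderr.
result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import json"],
    capture_output=True, text=True,
)

# Each stderr line looks like: "import time: self [us] | cumulative | module"
for line in result.stderr.splitlines():
    print(line)
```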
So I'm looking here, I would say just eyeballing it, there's probably 30 different modules
being imported when you say import requests.
That's non-trivial, all right?
That's a lot of stuff.
So this will give you that output.
It'll say here, this module imported this module and then it has
like a hierarchy or a tree type of thing. So this module imported this module, which imported those
other two, and so you can sort of see the chain, or a tree, of, if I'm importing this, here's the
whole bunch of other stuff it takes with it. Okay. Yeah, and it gives you the overall time, I think maybe the time dedicated to just that operation
and then the inclusive time or something.
Actually, maybe it looks more like 83 milliseconds.
Sorry, I had my units wrong.
I said half a second.
But nonetheless, it's like you have a bunch of imports
and you're running code.
Where is that slow?
You can run this, and it basically takes three lines of code
to figure out how much time each part of that entire import stack.
I want to say call stack of that execution, but it's the series of imports that happen.
Like you time that whole thing and look at it.
So, yeah, that's it's pretty cool.
That's neat.
And also, I mean, there are times where you really want to get the startup time for something as fast as possible.
And this is part of it: the stuff you're importing at startup is sometimes non-trivial when you have something that you really want to run fast.
Right.
Like, let's say you're spending half a second on startup time because of the imports.
You might be able to take the slowest part of those and import that in a function that gets called, right?
Yeah, import it later.
Yes, you only pay for it if you're going to go down that branch
because maybe you're not going to call that part of the operation
or that part of the CLI or whatever.
Yeah, and it's definitely one of those fine-tuning things
that you want to make sure you don't do this too early.
But for people packaging and supporting large projects, I think
it's a good idea to pay attention to
your import time.
It'd be something that would be kind of fun
to throw in a test case for CI to
make sure that your import time doesn't
suddenly go slower
because something you depend on suddenly got
slower or something like that. Yeah, yeah, absolutely.
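That CI idea could look something like this pytest-style sketch (the json module and the one-second budget are placeholders, not a recommendation):

```python
import subprocess
import sys

def import_time_us(module: str) -> int:
    """Total microseconds to import `module`, measured in a fresh interpreter."""
    stderr = subprocess.run(
        [sys.executable, "-X", "importtime", "-c", f"import {module}"],
        capture_output=True, text=True,
    ).stderr
    # The last line reports the top-level module; the middle column is its
    # cumulative import time in microseconds.
    last_line = stderr.strip().splitlines()[-1]
    return int(last_line.split("|")[1].strip())

def test_import_budget():
    # Fail CI if a dependency suddenly makes this import slow.
    assert import_time_us("json") < 1_000_000  # under one second

if __name__ == "__main__":
    test_import_budget()
    print("import budget ok")
```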
And you don't necessarily know because the thing
it depends upon, that thing changed, right? It's not even the thing you
actually depend upon, right? It's very, it could be very down the line. Yeah. And maybe you're like,
we're going to use this other library. We barely use it, but you know, we already have dependencies.
Why not just throw this one in? Oh, wait, that's adding a quarter of a second. We could just vendor
that one file that we don't really need, you know, and make it much, much faster. So
there are a lot of interesting use cases here. A lot of the time you don't care. Like, for my web apps, I don't
care. For my CLI apps, I might care. Yeah, definitely. Yeah. So I've been on this bit of an exploration
lately, Brian, and that's because I'm working on a new course. Yeah, yeah, we're actually working
on a bunch of courses over at Talk Python. Some data science ones,
which are really awesome.
But the one that I'm working on is Python memory management
and profiling
and tips and tricks
and data structures
to make all those things
go better.
So I'm kind of on
this profiling bent.
And anyway,
so if people are interested
in that
or any of the other courses
that we're working on,
they can check them out
over at
training.talkpython.fm.
Helps bring you this podcast and others and books.
Thanks for that transition.
But I'm excited about that because the profiling and stuff is one of those things that often is considered kind of like a black art, something that you just learn on the job.
And how do you learn it?
I don't know.
You just have to know somebody that knows how to do it or something.
So having some courses around,
that's a really great idea.
Thanks.
Yeah, also like when does the GC run?
What is the cost of reference counting?
Can you turn off the GC?
What data structures are more efficient
or less efficient according to that?
And all that kind of stuff.
It'll be a lot of fun.
Cool.
Yeah, so I've got a book.
I actually want to highlight something.
I've got a link called,
it's pytestbook.com. So if you just go to
pytestbook.com, it actually goes to a landing page that's on a
blog that's kind of not really that active, but there is a landing page
there. The reason why I'm pointing this out is because some people
are transitioning. Some people are finally starting to use 3.8 more.
There are people starting to test
3.9 a lot, which is great. And pytest 6 just got released, not one of our items. I'm getting
a lot of questions of, is the book still relevant? And yes, the pytest book is still relevant, but
there are a couple of gotchas. I will list all of these on that landing page. So they're not there
yet, but they will be by the time this airs.
There's an errata page on
Pragmatic that I'll link to, but
the main, there's a few things.
There's a database that I use
in the examples, TinyDB,
and the API changed since I
wrote the book. There's a little note
to update the
setup to pin the database version.
And there's something with markers:
you used to be able to get away with
just throwing markers in anywhere.
Now you get a warning if you don't declare them.
There are a few minor things that have changed
that, for new pytest users,
might make it frustrating to walk through the book.
So I'm going to lay those out directly on that page
to have people get started really quickly.
So pytestbook.com is what that is.
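The marker gotcha works like this: newer pytest warns about markers you haven't declared. One way to declare them, sketched here with a made-up `slow` marker in conftest.py:

```python
# conftest.py
# Register custom markers so newer pytest versions don't warn about them.
# The "slow" marker name here is just an example.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: marks tests as slow (deselect with -m 'not slow')"
    )
```

The same registration can also live in pytest.ini or setup.cfg as a `markers =` entry.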
Awesome. Yeah, it's a great book.
And you might be on a testing bent as well
if I'm on my profiling one.
Yeah, actually.
So this is a Django Testing Toolbox.
It's an article by Matt Layman.
And I was actually going to think about having him on the show,
and I still might, on Testing Code to talk about some of this stuff.
But he threw together, I just wanted to cover it here
because it's a really great throw together of information.
It's a quick walkthrough of how Matt tests Django projects.
And he goes through some of the packages that he uses all the time
and some interesting techniques.
The packages that, there's a couple of them that I was familiar with.
pytest-django, which, like, of course, you should use that.
Factory Boy is one project, and there are a lot of
different projects to generate fake data,
but Factory Boy is the one Matt uses, so there's a highlight there.
And then one that I hadn't heard of before, Django Test Plus, which is a beefed up test case.
It maybe has other stuff too, but it has a whole bunch of helper utilities to make it easier to check commonly tested things in Django.
So that's pretty cool. One thing that some people trying to use pytest for Django get tripped up on is a lot of people think
of pytest as functions only, test functions only, and not test classes. But there are
some uses. Matt says he really likes to use test classes, and, I mean, pytest allows you to
use test classes, but you can also use these derived test cases like the Django Test Plus test case.
A couple of other things: using arrange-act-assert as a structure, and
in-memory SQLite databases when you can get away with it, to speed things up,
because in-memory databases are way faster than on-file-system databases.
Yeah, and you don't have to worry about dependencies or servers you got to run.
It's just colon memory, boom, you connect to it, and off it goes.
Nice.
Yeah.
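The ":memory:" connection mentioned here is a standard SQLite feature; a minimal standard-library sketch:

```python
import sqlite3

# ":memory:" keeps the whole database in RAM: no files, no server,
# and it vanishes when the connection closes, which is ideal for tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('brian')")

count = conn.execute("SELECT count(*) FROM users").fetchone()[0]
print(count)  # → 1
conn.close()
```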
One of the things I didn't get, I mean, I
kind of get the next one: disabling migrations
while testing. I don't know a lot
about database migrations,
or Django migrations or whatever those are,
but apparently disabling
them is a good idea. Makes sense.
Faster password hasher.
I have no idea what this is talking
about, but apparently you can speed up your testing
by having a faster password hasher.
Yeah, a lot of times they'll generate them
so they're explicitly slow, right?
So, like, over at Talk Python,
I use Passlib, not Django,
but Passlib is awesome.
But if you just do, say, an MD5,
it's like super fast, right?
So if you say, I want to take this and generate a hash from it,
it'll come up with the hashed output.
But because it's fast, people could look at that and say,
well, let me try like 100,000 words I know,
and see if any of them match that. Then that's the password, right?
You can use more complicated ones, and MD5 is not one you want.
Something like bcrypt or something,
which is a little bit slower and better,
harder to guess.
But what you should really do
is you should insert little bits of salt,
like extra text around it.
So even if the password matches, the hash is not exactly the same.
You can't do those guesses.
But then you should fold it,
which means take the output of the first round,
feed it back through,
take the output of the second round,
feed it back through, 100,000, 200,000, 300,000 times,
so that if they try to guess,
it's super computationally slow.
I'm sure that's what it's talking about.
So you don't want to do that
when you want your tests to run fast
because you don't care about hash security during tests.
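The salting-and-folding idea described here is what key-derivation functions like PBKDF2 do; a small standard-library sketch with illustrative iteration counts (in Django test settings, the analogous trick is pointing PASSWORD_HASHERS at its fast MD5 hasher):

```python
import hashlib
import os
import time

password = b"hunter2"
salt = os.urandom(16)  # random salt: identical passwords hash differently

# One round: fast to compute, and equally fast to brute-force (test-speed only)
t0 = time.perf_counter()
fast_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 1)
fast_time = time.perf_counter() - t0

# Many folded rounds: each round's output is fed back through, so every
# guess an attacker makes costs the same stretched amount of work
t0 = time.perf_counter()
slow_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
slow_time = time.perf_counter() - t0

print(f"1 round: {fast_time:.6f}s, 200,000 rounds: {slow_time:.6f}s")
```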
Oh yeah, that makes total sense.
That's my guess.
I don't know for sure,
but that's what I think that probably means. The last tip, which is always a good tip, is
figure out your editor so that you can run your tests from your editor because your cycle time of
flipping between code and test is going to be a lot faster if you can run them from your editor.
Yep. These are good tips. And if you're super intense, you have the auto-run,
which I don't do. I don't have auto-run on. I do it once in a while.
Yeah. Cool. Well, back to my rant. Let's talk about profiling. Okay, actually this is not
exactly the same type of profiling. It's more of a look inside of data than at performance. This
was recommended to us by one of our listeners named Oz. First name only is what we got. So thank you, Oz.
And he is a data scientist who goes around and spends a lot of time working on Python
and doing exploratory data analysis.
And the idea is like going to grab some data, open it up and explore it, right?
And just start looking around.
But it might be incomplete.
It might be malformed.
You don't necessarily know exactly what its structure is. And he used to do this by hand, but he found this project called
pandas-profiling, which automates all of this. So that sounds handy. I mentioned before
missingno, missing-N-O as in missing number, the missing data explorer, which is super cool. And
I still think that's awesome, but this is kind of in the same vein. The idea is, given a pandas DataFrame, you know, pandas has a describe function
that gives a little bit of detail about it, but this thing kind of takes that and supercharges
it, and you can say df.profile_report() and it gives you all sorts of stuff. It does type inference to
say the things in this column are integers or numbers, strings, date-times, whatever.
It talks about the unique values, the missing values, quartile statistics, descriptive statistics.
That's like mean, mode, standard deviation, a bunch of stuff.
Histograms, correlations, missing values.
There's the missingno thing I spoke about.
Text analysis of, like,
categories and whatnot, file and image analysis, like file sizes and creation dates and sizes of
images and all sorts of stuff. So the best way to see this is to look at an example. So
in our notes, Brian, do you see where it has nice examples, like the
NASA meteorites one? So there's an example for the U.S. census data,
a NASA meteorite one, some Dutch healthcare data, and so on. If you open that up, you see
what you get out of it: like, pages of reports of what was in that DataFrame. Oh, this is great.
Isn't that cool? It's tabbed and stuff. It's got warnings, it's got
pictures, it's got all kinds of analysis, it's got histogram graphs, and
you can hide and show details. I mean, this
is a massive dive into what the heck is going on with this data. Correlations, heat maps. I mean,
this is the business right here. So this is like one line of code to get this output.
This is great.
This replaces a couple of interns at least.
Sorry, interns.
But yeah, this is really cool.
So I totally recommend if this sounds interesting,
you do this kind of work,
just pull up the NASA meteorite data
and realize that it all came from, you know, importing the
thing and saying df.profile_report(), basically, and you get this. You can also click and run that in
Binder and Google Colab, so you can go and interact with it live if you want. Yeah, I love
the warnings on these, some of the things it can flag, like saying some of the variables that show up
are skewed, like too many values at one value,
or that some of them have missing values
or lots of zeros showing. It does quite a bit of
analysis for you about the data right away.
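As a small sketch of the kind of data this profiles (the DataFrame here is made up; pandas-profiling itself is a third-party package, shown commented out):

```python
import pandas as pd

# A small frame with the kinds of issues profiling flags:
# missing values, skew, and an inferable boolean column.
df = pd.DataFrame({
    "mass_g": [21.3, None, 4500.0, 0.9, 0.9],
    "fell": [True, False, True, True, True],
})

# pandas' built-in summary...
print(df.describe())
print(df.isna().sum())

# ...versus the one-liner from the show (with pandas-profiling installed):
# from pandas_profiling import ProfileReport
# ProfileReport(df).to_file("report.html")
```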
That's pretty great.
Yeah, the types is great because you can just
I mean, you can have hundreds
or thousands of data points.
It's not trivial to just
say, oh, yeah, all of them are true or false.
All of them are, I know they're Booleans.
You'd have to look at everything first.
Yeah.
It's one of those things that's like easy to adopt, but looks really useful.
And it's also beautiful.
So yeah, check it out.
It looks great.
I want to talk about object-oriented programming a little bit.
Oh, okay.
Actually, I mean, all of Python really is object-oriented,
because everything is an object, really.
Deep, deep down,
everything's a PyObject pointer.
Yeah.
There's an article
by Redowan Delowar
called Interfaces, Mixins,
and Building Powerful
Custom Data Structures in Python.
And I really liked it
because it's a Python-focused,
I mean, there's not a lot,
I've actually been disappointed with a lot of the object-oriented discussions around Python,
and a lot of them talk about basically,
I think they're lamenting that the system isn't the same as other languages,
but it's just not.
Get over it.
This is a Python-centric discussion talking about interfaces and abstract base classes, both informal and formally abstract
base classes using mixins. And it starts out
with the concept that there's a base amount of knowledge that people
have to have to discuss this sort of thing and of understanding
why they're useful and what are some of the downfalls and upfalls
or benefits and whatever.
And so he actually starts by, it's not too deep of a discussion, but it's an interesting
discussion and I think it's a good background to discuss it.
And then he talks about, like one of the things you kind of get into a little bit and you
go, well, what's really different about an abstract-based class and an interface, for
instance? And he writes, interfaces can be thought of as a special case of an abstract base class.
It's imperative that all methods of an interface are abstract methods
and that classes don't store any data or any state or instance variables.
However, in case of abstract base classes, the methods are generally abstract,
but there can also be methods that provide classes, the methods are generally abstract, but there can also be
methods that provide implementation, concrete methods, and also these classes can have instance
variables. So that's a nice distinction. Then mixins are where you have a parent class that
provides some functionality of a subclass, but it's not intended to be instantiated itself.
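That "behavior without being instantiated itself" idea can be sketched with a small, invented mixin:

```python
# A minimal mixin: it carries behavior (a generic __repr__) but has no
# __init__ or state of its own, and you'd never instantiate ReprMixin directly.
class ReprMixin:
    def __repr__(self):
        attrs = ", ".join(f"{k}={v!r}" for k, v in vars(self).items())
        return f"{type(self).__name__}({attrs})"

class Point(ReprMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y

print(Point(1, 2))  # → Point(x=1, y=2)
```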
That's why it's sort of similar to abstract base classes and other things. So having all this discussion from one person
in a good discussion, I think is a really great thing.
There are definitely times I don't pull into class hierarchies
and base classes that much, but there's times when you need them and
they're very handy. So this is cool. Yeah, this is super cool actually. I really
like this analysis.
I love that it's really Python-focused, because a lot of times the mechanics of the language just don't support some of the object-oriented programming ideas in the same way, right?
Like, the interface keyword doesn't exist, right?
So this distinction, you have to make it in a conventional sense. Like, we come up with a
convention that we don't have concrete methods or state with interfaces. But
there's not an interface keyword in Python. So I like it. I'm a big fan of object-oriented
programming. And I'm very aware that in Python, a lot of what people use classes for is
simply unneeded. And I know where
that comes from. And I want to make sure that people don't overuse it, right? If you come from
Java or C# or one of these OOP-only languages, everything's a class. And so you're just
going to start creating classes. But if what you really want is to group functions and a couple
of pieces of data that are shared, that's a module, right? You don't need a class.
You can still say module_name.thing and get at them, and it's like a static class or
something like that. But sometimes you want to model stuff with object-oriented programming,
and understanding the right way to do it in Python is really cool. This looks like a good one. Yeah, and
also, there is a built-in module called abc, for abstract base class, within Python. And it seems
like, for a lot of people,
it seems like a mystery thing that only
advanced people use. But it's really not that complicated.
And this article uses
that as well and talks about it.
So, it's good. You know one of my favorite things about
abstract base classes and
abstract methods is in
PyCharm, if I have a class
that derives from an abstract class,
all I have to write is class.
The thing I'm trying to create,
parentheses, abstract class name,
close parentheses, colon.
And then you just hit Alt+Enter,
and it'll pull up all the abstract methods.
You can highlight them, say implement.
It goes boom, and it'll just write the whole class for you.
But if it's not abstract,
obviously it won't do that, right?
So the abstractness will tell the editor to, like, write the stubs of all the
functions for you.
Oh, that's a cool reason to use them.
That's almost reason to have them in the first place.
Yeah.
Almost.
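A minimal sketch of an abstract base class and the concrete subclass an editor can stub out for you (the class names are invented for illustration):

```python
from abc import ABC, abstractmethod

class Serializer(ABC):
    @abstractmethod
    def dumps(self, obj) -> str:
        """Subclasses must implement this."""

class JsonSerializer(Serializer):
    # This is the kind of stub an editor like PyCharm can generate from the ABC
    def dumps(self, obj) -> str:
        import json
        return json.dumps(obj)

# The ABC itself refuses to instantiate; the concrete subclass works fine.
try:
    Serializer()
except TypeError as e:
    print("can't instantiate:", e)

print(JsonSerializer().dumps({"a": 1}))  # → {"a": 1}
```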
We've pickled before,
haven't we?
Yeah.
So yeah,
we have talked about pickle a few times.
Yes.
Have we talked about this article?
I don't remember.
I don't think so.
We have, apologies, but it's short and interesting. So Ned Batchelder
wrote this article called Pickle's nine flaws, and I want to talk about that. This comes to us via
pycoders.com, which is very cool. And we've talked about the drawbacks, we've talked about the
benefits, but what I liked about this article is it's concise, but it shows you all the trade-offs you're making. Right? So quickly,
I'll just go through the nine. One, it's insecure. And the reason that it's insecure is not because
pickles contain code, but because they create these objects by calling the constructors
named in the pickle. So any callable can be used in place of your class name
to construct objects. So basically, it runs potentially arbitrary code,
depending on where it got it from. Two, old pickles look like old code. So if your code changes
between the time you pickled it and whenever you get the old one recreated back to life,
like, if you added fields or other capabilities, those are not going to be there, or if you took
away fields, they're still going to be there. Yeah.
Three, it's implicit, so it will serialize whatever your object structure is. And four, it often over-serializes:
it'll serialize everything. So if you have cached data or pre-computed data that you
wouldn't ever normally save, well, that's getting saved. Yeah. One of the weird ones, and this has
caught me out before, and it's just, I don't know, weird, so there you go: dunder init, the constructor, is not called.
So your objects are recreated, but the dunder init is not called.
The values just have the values.
So that might set it up in some kind of weird state,
like maybe it would fail some validation or something.
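That dunder-init flaw is easy to demonstrate in a few lines (Greeter is a made-up class for illustration):

```python
import pickle

class Greeter:
    def __init__(self):
        print("__init__ ran")
        self.greeting = "hello"

g = Greeter()  # prints "__init__ ran"
restored = pickle.loads(pickle.dumps(g))

# No second "__init__ ran": pickle rebuilt the object from its attribute
# dict without calling the constructor, so any validation in there was skipped.
print(restored.greeting)  # → hello
```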
It's Python only. Like, you can't share it with other programs, because it's
a Python-only structure. They're not readable; they're binary. It will seem like it
will pickle code. So if you have, like, a function you're hanging on to, you pass it along, like
some kind of lambda function or whatever, or a class that's been passed over and you have a list
of them, or you're holding on to them, and you think that it's going to
save all that, all it really saves is basically the name of the function. So those are gone. And
I think one of the real big challenges is it's actually slower than things like JSON and whatnot.
So, you know, if you were willing to give up those trade-offs because it was super fast,
that's one thing, but it's not. And are you telling me that we covered it before?
We did cover it in 189, but I had forgotten forgotten so it was like a couple months ago right so yeah it's a while ago
anyway uh it's good to go over it again definitely be careful with your pickling all right how about
uh anything extra that was our our top six items what else we got i don't have anything extra do
you have anything extra pathlib speaking of stuff we covered before we talked about pathlib a couple
times you talked about chris may's article or whatever it was around Pathlib, which is cool.
And I said, basically, I'm still, I just got to get my mind around not using OS.path and just get into this.
Right?
Yeah.
And people sent me feedback like, Michael, you should get your mind into this.
Of course you should do this.
Right?
And I'm like, yeah, yeah, I know.
However, Brett Abel sent over a one-line tweet that may just, like, seal the deal for me. Like, this is sweet. So he said, how about this: text = Path(file).read_text(). Done. No context managers, no open, none of that. I'm like, oh, that's pretty awesome. Anyway, I just wanted to give a little shout out to that one-liner because that's pretty nice.
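For reference, here's that one-liner in action; the demo file name is made up and written to a temp directory just so the example is self-contained:

```python
import tempfile
from pathlib import Path

# A throwaway demo file (the name is arbitrary).
path = Path(tempfile.gettempdir()) / "pythonbytes_demo.txt"
path.write_text("no context managers, no open(), none of that\n")

# The one-liner: read a whole file with no open() and no with-block.
text = path.read_text()
print(text, end="")

path.unlink()  # clean up after the demo
```

`write_text` is the matching one-liner on the writing side, so simple read/write scripts can skip `open()` entirely.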
And then also, I was just a guest on a podcast out of the UK called A Question of Code, where the hosts Ed and Tom and I discussed why Python is fun, why it's good for beginners and for experts, why it might give you results, tangible programs, faster than, say, JavaScript, career stuff, all kinds of stuff. So anyway, I linked to that if people want to check that out.
That's cool. Yeah, it was a lot of fun. Those guys are running
a good show over there. Yeah, I think I'm
talking with them tomorrow. Right on.
How cool. One of the things
I like about it is the accents.
Just, you know, because accents are fun.
So I was going to ask you, would you consider
learning how to do a British accent?
Because that would be great for the show. I would love to.
I fear I would
just end up insulting
all the British people and
not coming across really well.
But I love British accents.
If we had enough Patreon supporters,
I would be more than happy
to volunteer to move to England to develop
an accent.
Maybe just live in London for a few
years. If they're going to fund that for you
that would be awesome. London's a great
town.
How about another joke?
I'd love another joke. This one
is by Caitlin
Haran but was pointed out
to us by Aaron Brown.
So she tweeted this on Twitter, and he's like,
hey, you guys should think about this.
So you ready?
Yeah.
Caitlin says, I have a Python joke,
but I don't think this is the right environment.
Yeah, so there's a ton of these "I have a joke, but..." type jokes. So this is a new thing, right?
I don't know.
It's probably going to be over by the time this airs.
Yeah, probably.
But I'm really amused by these types of jokes.
Yeah, I love it.
This kind of touches on the whole virtual environment,
package management, isolation, chaos.
I mean, there was that XKCD as well about that.
Yeah.
Okay, so while we're here,
I'm going to read some from Luciano.
Luciano Ramalho, he's a Python author,
and he's an awesome guy.
Here's a couple other
related ones. I have a Haskell joke, but it's not popular. I have a Scala joke, but nobody
understands it. I have a Ruby joke, but it's funnier in Elixir. And I have a Rust joke,
but I can't compile it. Yeah, those are all good. Nice. Cool. Nice, nice. All right. Well,
Brian, thanks for being here as always. Thank you. Talk to you later. Bye.
Thank you for listening to Python Bytes.
Follow the show on Twitter via at Python Bytes.
That's Python Bytes as in B-Y-T-E-S.
And get the full show notes at PythonBytes.fm.
If you have a news item you want featured, just visit PythonBytes.fm and send it our way.
We're always on the lookout for sharing something cool.
On behalf of myself and Brian Okken, this is Michael Kennedy.
Thank you for listening and sharing this podcast with your friends and colleagues.