Python Bytes - #347 The One About Context Managers
Episode Date: August 8, 2023. Topics covered in this episode: async-timeout, PyPI Project URLs Cheatsheet, httpx-sse, Creating a Context Manager in Python, Extras, Joke. See the full show notes for this episode on the website at pythonbytes.fm/347
Transcript
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.
This is episode 347, recorded August 8th, 2023.
And I'm Brian Okken.
And I'm Michael Kennedy.
Well, we have lots of great topics today. I'm pretty excited to get to them.
This episode, of course, is, well, not of course, but is sponsored by us.
So if you'd like to support the show,
you can support us on Patreon or check out one of Michael's many courses
or my other podcasts,
or you know how to support us.
So there you know the deal.
Brian, let me throw one more in there for people.
Okay.
If you work for a company
and that company is trying to spread the word
about a product or service,
send them to pythonbytes.fm/sponsor
and check that out as well.
Recommend that to their marketing team.
Definitely.
And if you are listening
and would like to join the show live sometimes,
just check out pythonbytes.fm slash live,
and there's info about it there.
Why don't you kick us off, Michael, with the first topic?
Here we go.
Let's talk.
I'm going to do a lead-in here to basically all of my things.
Freddie, I believe it was Freddie, and the folks behind Litestar. Litestar, L-I-T-E star, is an async framework for building APIs in Python. It's pretty interesting. Similar but not the same as FastAPI; they kind of share some of the same zen. Now, I'm not ready to talk about Litestar. This is not actually my thing; I will at some point, probably. It's pretty popular, 2.4 thousand stars, which is cool. But I'm like, huh, let me learn more about this, let me see what this is built on. And so I started poking through, what did I poke through? Not the requirements, but the poetry lock file and the pyproject and all that stuff, and I came across two projects that are not super well known,
I think. And I kind of want to shine a light on them by way of finding them through Litestar.
So the first one I want to talk about is async timeout. And I know you have some stuff you
want to talk about with context managers and this kind of lines right up there. So this
is an asyncio-compatible, as in async and await, timeout class. And it is itself a context manager,
not the only way you could possibly use it, I suppose, but it's a context manager. And the idea
is you say async with timeout, and then whatever you do inside of that block, that context manager,
that with block, if it's asynchronous and it takes longer than the timeout you specified,
it will cancel it and raise an
exception, say this took too long, right? Maybe you're trying to talk to a database and you're
not sure it's on, or you're trying to call an API and you don't know, you don't want to wait more
than two seconds for the API to respond or whatever it is you're after. That's what you do
is you just say async with timeout, and then it manages all of the nested asyncio calls.
And if something goes wrong there, it just raises an exception and cancels it.
That's really pretty cool.
Isn't that cool?
There are ways in which, in Python 3.11, I believe it was added,
where you can create a task group, and then you can do certain things.
But I believe you've got to use the task group itself to run the work. Okay, so I'm pretty sure that's how you do it; it's been a while since I thought about it. It'd be something like task_group.create_task and you await it, something along those lines, right? And there you've got to be really explicit, not just in the outer parts, but all the stuff that's doing async and await deep down in the guts, right?
They all kind of got to know about this task group deal,
I believe, if I'm remembering it correctly.
And this one, you don't have to do that, right?
You just run async stuff within this with block.
And if it takes too long, that's it.
So in the example here, it says we have an await inner.
This is like all the work that's happening.
I don't see why that has to be just one.
It could be multiple things.
And it says if it executes faster than the timeout,
it just runs as if nothing happened.
Otherwise, the inner work is canceled internally
by sending an asyncio.CancelledError into it.
But from the outside, that's transformed into a TimeoutError
that's raised outside the context manager scope.
Pretty cool, huh?
Yeah, that's handy.
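For listeners who want to see the shape of what's being described, here's a minimal sketch assuming async-timeout 4.x; the slow coroutine is just a stand-in for a real API or database call:

import asyncio
from async_timeout import timeout  # pip install async-timeout

async def call_slow_api():
    # Stand-in for a real network call that might hang.
    await asyncio.sleep(10)

async def main():
    try:
        async with timeout(1.5):
            await call_slow_api()
    except asyncio.TimeoutError:
        # The inner work was cancelled; the timeout surfaces out here.
        print("Gave up after 1.5 seconds")

asyncio.run(main())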
Yeah, there's another way you can specify. You can say timeout_at, like now plus 1.5 seconds, if you'd rather do that than just saying 1.5 seconds. So if you want to capture a time at some point, and then later you want to say that time plus some bit of time.
You can also access things like the expired property on the context manager, which tells you whether or not it expired or whether it ran successfully inside the context manager. You can ask for the deadline, so you know how long you have, and you can update the timeout as it runs. You're like, oh, this part took too long, or under some circumstance something happened, so we need to do more work. Maybe we're checking the API if there's a user, but actually there's not, so we've got to create the new user and we've got to send them an email, and that might take more time than in the other scenario. So you can shift the timeout by an amount or to a time. So you can say, hey, we need to add a second to the timeout within this context manager.
Interesting. So basically reschedule it, yeah.
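A rough sketch of those extras, under the assumption of a recent async-timeout release (older versions spelled the reschedule methods shift_by and shift_to):

import asyncio
from async_timeout import timeout, timeout_at

async def main():
    loop = asyncio.get_running_loop()

    # Absolute deadline: "now plus 1.5 seconds" rather than a relative delay.
    async with timeout_at(loop.time() + 1.5):
        await asyncio.sleep(0.1)

    async with timeout(1.5) as cm:
        cm.shift(1.0)            # give the block one more second
        await asyncio.sleep(0.1)
        print(cm.deadline)       # absolute loop time of the deadline
    print(cm.expired)            # False here, since the block finished in time

asyncio.run(main())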
That's pretty cool.
Yeah. So that's one thing.
Then there's one other bit in here, the wait_for. Right, it says this is useful when asyncio.wait_for is not suitable, but it's also faster than wait_for, because it doesn't create a separate task that is also scheduled, as asyncio.wait_for itself does. So it's not totally unique functionality in Python, but it's a neat way to look at it, and I think this is a nice little library.
Yeah, I like the interface too. It's pretty clean as well.
Yeah, a good little API there, because it's a context manager, huh?
Yeah. Well, let's reorder my topics a little bit. Let's talk about context managers.
Did I change your order?
That's all right.
So Trey Hunner has written an article called Creating a Context Manager in Python.
And as you've just described, a context manager is really the things that you use a with block with.
And there's a whole bunch of them.
Like there's open. If you say with open and then a file name, as f,
then the context manager automatically closes it afterwards.
So really, this article's about, this is pretty awesome,
but how do we do it ourselves?
And so he kind of walks through,
he's got a bunch of detail here, which is great.
It's not too long of an article though.
A useful one, which I thought was an awesome, good example, is having a context manager that changes an environment variable just within the with block.
And then it goes back to the way it was before.
And the code for this is just a class, not inheriting from anything. The context manager class is a class that has dunder init, dunder enter, and dunder exit methods, and then he talks about all the stuff you have to put in there. And then, in your example before, you said something like with the timeout as cm or something, so that you could access it to see values afterwards. So Trey talks about how you get the "as" functionality to work, and really it's just that you have to return something. And then there are the enter and exit functions, and, yeah, how do you deal with all of those? It's just a great little article.
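As a rough sketch of the idea (not necessarily Trey's exact code), a class with dunder init, enter, and exit that temporarily sets an environment variable might look like this:

import os

class set_env_var:
    """Temporarily set an environment variable, restoring it on exit."""

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def __enter__(self):
        self.original = os.environ.get(self.name)
        os.environ[self.name] = self.value
        return self  # returning something here is what makes "as" work

    def __exit__(self, exc_type, exc_value, traceback):
        # Put things back the way they were, even if the block raised.
        if self.original is None:
            os.environ.pop(self.name, None)
        else:
            os.environ[self.name] = self.original

# Usage: the variable only has the new value inside the with block.
with set_env_var("APP_MODE", "debug"):
    print(os.environ["APP_MODE"])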
I love using context managers and knowing how to,
I think it makes sense to practice a couple of these
because knowing how to use one in the context of your own code, there's
frequently times where you have to do something and you know you're going to have to clean up
or something or there's some final thing that you have to do. You don't really want to have that
littered all over your code, especially if there's multiple exit points or return points.
And a context manager is a great way to deal with that. I did want to shout out to pytest a little bit.
So the environment variable example is a great, useful one for normal code, if you ever want to change the environment outside of testing. But if you're doing it in testing, I recommend... oh, I scrolled to the wrong spot. There's a monkeypatch fixture within pytest. So if you use the monkeypatch fixture, there's a setenv method on it, so within a test, that's how you change an environment variable. But outside of a test, why not create your own context manager?
Oh, you're muted. So the environment variable only exists while you're in the context block, right? That's cool.
The with block.
Yeah.
Or you're changing it.
Like if you wanted to add a path, add something to the path or something.
Sure.
There's other ways to do the path, but let's say it's a, I don't know, some other Windows environment variable or something.
Yeah.
Yeah.
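For the testing side, a minimal sketch of the pytest approach Brian mentions, using the built-in monkeypatch fixture (the variable name is just for illustration):

import os

def test_reads_app_mode(monkeypatch):
    # pytest undoes this change automatically when the test finishes.
    monkeypatch.setenv("APP_MODE", "debug")
    assert os.environ["APP_MODE"] == "debug"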
These things are so cool. So if you ever find yourself writing, try finally, and the finally part is unwinding something like it's clearing some variable or deleting a temporary file or closing a connection.
That's a super good chance to be using a context manager instead, because you just say with the thing and then it goes.
I'll give two examples that I think were really fun, that people might connect with. So prior to SQLAlchemy 1.4, the session, which is the unit-of-work design pattern object in SQLAlchemy, the idea of those is: I start a session, I do some queries, updates, deletes, inserts, more work, and then I commit all of that work in one shot. That thing didn't used to be a context manager. And so what was really awesome was I would create a wrapper class that would say, in this block, create a session, do all the work. And then if you look at the dunder exit, it gets told whether or not there was an exception. And so with my context manager, you could say, when you create it, do you want to auto-commit the transaction if it succeeds and auto-roll it back if there's an error? And so in the exit you just say, is there an error? Roll back the session. No errors? Commit the session. And it's beautiful, right? You don't have to juggle that. There's no try/finally there. Awesome.
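Not Michael's actual code, but a minimal sketch of that wrapper pattern; session_factory is assumed to be a configured SQLAlchemy sessionmaker, or anything that returns a session-like object with commit, rollback, and close:

class session_scope:
    """Commit on success, roll back on error -- the pre-1.4 wrapper idea."""

    def __init__(self, session_factory):
        self.session_factory = session_factory

    def __enter__(self):
        self.session = self.session_factory()
        return self.session

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            self.session.commit()    # no exception: keep the work
        else:
            self.session.rollback()  # something failed: undo it
        self.session.close()
        return False  # don't swallow the exception

# with session_scope(session_factory) as session:
#     ... queries, inserts, updates ...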
Another one to put out there, something sort of out of the normal scope, maybe, for people, unlike the database one you might think of: colors. Yeah, Colorama. So if you're using something like Colorama, where you're like, I want to change the color of the text for this block, right? There are all sorts of colors and cool stuff; it's like a lightweight version of Rich, but just for colors. You can do things like print Fore.RED, and every bit of text that comes after that will be red or whatever. So you can create a context block that is like a color block of output. And then there's a reset, Style.RESET_ALL, you can do. So in the enter you pass in the new color settings, you do all your print statements and whatever deep down, and then on the exit you just print Style.RESET_ALL out of Colorama, and it's undone, like the color vanishes. Or you capture what it was and then you reset it to the way it was before, something along those lines. Anyway, I really like that kind of stuff, right? People maybe don't think about color as a context manager, but.
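A small sketch of that idea with Colorama; the colored_output class is made up for illustration, while Fore, Style, and init are the real Colorama names:

from colorama import Fore, Style, init  # pip install colorama

init()  # needed on Windows so the escape codes render

class colored_output:
    """Print in one color for the duration of the block, then reset."""

    def __init__(self, color):
        self.color = color

    def __enter__(self):
        print(self.color, end="")

    def __exit__(self, exc_type, exc_value, traceback):
        print(Style.RESET_ALL, end="")

with colored_output(Fore.RED):
    print("Everything in here comes out red")
print("Back to normal")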
But it kind of is, because you always have to do the thing afterwards. You always have to do the reset back. It's so annoying.
Yeah. Anything where you have to put it back. Other data
structures that you may
have dirtied, you've got
queues sitting around that you want to clean up
afterwards. Those are great for context
managers. Absolutely. Brandon
Brainerd notices
and points out that there's also
contextlib for making
them. I'm glad he brought that up.
I was going to bring that up.
Contextlib is great, especially for quickly doing context managers, and the documentation is pretty good. You can use the contextmanager decorator, and then you use a yield in it. But I really like the notion of, I guess you should understand both. I think people should understand how to write them with just dunder methods and how to write them with the contextmanager decorator from contextlib. I think both are useful. But to mentally understand how the enter, exit, all that stuff works, I think is important. So thanks, Brandon.
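For comparison, here's what the same environment variable helper might look like using contextlib's contextmanager decorator and a yield; everything before the yield plays the role of enter, everything after it plays the role of exit:

import os
from contextlib import contextmanager

@contextmanager
def set_env_var(name, value):
    original = os.environ.get(name)
    os.environ[name] = value
    try:
        yield value  # this is what "as" gives you
    finally:
        # Runs on normal exit or on an exception inside the with block.
        if original is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = original

with set_env_var("APP_MODE", "debug"):
    print(os.environ["APP_MODE"])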
Yes, and let's tie the thing that I opened with and this one a little bit tighter together, Brian.
There's a dunder aenter and a dunder aexit for async with blocks, right? So if you want an asynchronous-enabled version, you just create an async def aenter, then an async def aexit. And now you can do async and await stuff in your context manager, which is sort of the async equivalent of the enter and exit.
Okay.
And contextlib also has async context manager options, the aenter and aexit.
Cool. Yeah, perfect. Yeah, exactly.
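A small sketch of the async flavor, with a made-up timing example; note the dunder aenter and aexit methods, plus contextlib's asynccontextmanager as the decorator equivalent:

import asyncio
from contextlib import asynccontextmanager

class AsyncTimer:
    async def __aenter__(self):
        self.start = asyncio.get_running_loop().time()
        return self

    async def __aexit__(self, exc_type, exc_value, traceback):
        self.elapsed = asyncio.get_running_loop().time() - self.start

@asynccontextmanager
async def async_timer():
    loop = asyncio.get_running_loop()
    start = loop.time()
    try:
        yield
    finally:
        print(f"took {loop.time() - start:.3f}s")

async def main():
    async with AsyncTimer() as t:
        await asyncio.sleep(0.1)
    print(f"took {t.elapsed:.3f}s")
    async with async_timer():
        await asyncio.sleep(0.1)

asyncio.run(main())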
Very nice, very nice. All right, let's go to the next one, huh? Yeah. So, server-sent events.
Let's talk about server sent events.
Server sent events.
People probably, well, they certainly know what a request response is for the web because we do that in our browsers all the time.
I enter a URL.
The page comes back.
I click a button.
That's another request.
It pulls back a page.
Maybe I submit a form.
It posts it, and then it pulls back a page, right? That's traditional web interchange. But that is a stateless, one-time, who-knows-what-happens-after-that sort of experience for the web. And so there were a bunch of different styles of, what if the web server and the client could talk to each other, type of thing, right? In the early days, this is what's called long polling. This works, but it is bad on your server, because what you do is you make a
request and the server doesn't respond right away. It just says this request is going to time out in
five minutes and then it'll wait. And if it has any events to send during that time, it'll respond.
And then you start another long-polled event cycle, right? But the problem is, for everything that might be interested, you've got an open socket just waiting there, like in the processor's request queue sort of thing. It's not great. And then WebSockets were added, and WebSockets are cool because they create this connection that is bi-directional, like a binary bi-directional socket channel from the web server to the client, which is cool. But it's not great for IoT things, and mobile devices are not necessarily super good for WebSockets. It's kind of heavyweight, very sort of complex: we're going to have the client talk to the server, but also the server to the client, and they can respond to each other. A lighter-weight, simpler version of that would be server-sent events.
Okay.
Okay, so what server-sent events do is the same idea: I want to have the server, without the client's interaction, send messages to the client, so I could create like a dashboard or something, right? The difference with server-sent events is it's not bi-directional. Only the server can send information to the client.
But often for like dashboard type things,
that's all you want.
Like I want to pull up a bunch of pieces of information
and if any of them change,
let the server notify me, right?
Oh yeah.
I want to create a page that shows
the position of all the cars in F1,
their last pit stop, their tires,
like all of that stuff.
And like, if any of them change,
I want the server to be able to let the browser know,
but there's no reason the browser needs to like make a change, right?
It's a watching, right?
So if you have this watching scenario,
server sent events are like a simpler, more lightweight,
awesome way to do this.
Okay.
We all know what SSE, server sent events are.
Okay.
So if you want that in Python, there's this cool library, which is not super well known but is cool, built on HTTPX. So HTTPX is kind of like Requests, sort of maybe the modern-day version of Requests, because it has a really great async and await story going on. So there's this extension called httpx-sse for consuming server-sent events with HTTPX.
Oh, okay. Yeah. So if you want to be a client to one of these things in Python, to some server that's sending out these notifications and these updates, well, HTTPX is an awesome way to do it because you can do async and await. So it's just a great client in general. And then here you plug this in, and it has a really, really clean API to do it. So what you do is you would get connect_sse out of it. With HTTPX, you just create a client, and then you say connect_sse with that client to some place, which gives you an event source. And then you just iterate, just say for each event, and it just blocks until the server sends you an event.
And it'll, I think, raise an exception if the socket's closed is what happens.
So you just like loop over the events that the server's sending you when they happen.
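Roughly the client shape being described, assuming httpx-sse's connect_sse helper; the URL is just a placeholder for whatever SSE endpoint you're watching:

import httpx
from httpx_sse import connect_sse  # pip install httpx-sse

with httpx.Client() as client:
    with connect_sse(client, "GET", "http://localhost:8000/numbers") as event_source:
        for sse in event_source.iter_sse():
            # Blocks until the server pushes the next event.
            print(sse.event, sse.data, sse.id)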
Okay, cool.
Isn't that cool?
So yeah, so you could like in my F1 example, you could subscribe to the changes of the race.
And when anything happens, you would get like, there's a new tire event and here's the data about it.
And the ID of the event, the session, and all those different things, just streaming to you. And it's literally five lines of code, sorry, six lines of code with the import statement.
So what does it look like on the server then?
I guess that's not really what this is; it's not your problem. However, they do say you can create a server, sorry, a Starlette server, here, and they have an example below you can use. So it's cool, they've got a Python example for both ends. Yeah, so what you do on the server is you create an async function, and here's an async function that just yields a series of numbers. It's kind of a really cheesy example, but it sleeps for about an async second.
It's like a New York second, like a New York minute,
but 1 60th of it, and it doesn't block stuff.
So for an async second, you sleep,
and then it yields up the data, right?
And then you can just create one of these event source responses,
which comes out of sse-starlette,
which is not related to this, I believe,
but it's like kind of the server implementation.
And then you just set that as an endpoint.
So in order to do that, they just connect to that.
And then they just get these numbers
just streaming back every second.
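A sketch of the server side along the lines of the docs example, using Starlette plus sse-starlette's EventSourceResponse; the route path and the numbers generator are just illustrative:

import asyncio
from starlette.applications import Starlette
from starlette.routing import Route
from sse_starlette.sse import EventSourceResponse  # pip install sse-starlette

async def numbers():
    n = 0
    while True:
        await asyncio.sleep(1)      # the "async second" -- doesn't block the loop
        n += 1
        yield {"data": str(n)}      # each yield becomes one SSE message

async def sse_endpoint(request):
    return EventSourceResponse(numbers())

app = Starlette(routes=[Route("/numbers", sse_endpoint)])
# Run with something like: uvicorn server:app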
That's pretty cool.
Yeah, I mean, all of this,
like if I hit Command minus one time,
both the server and the client fit on one screen of code.
Yeah.
Yeah, that's pretty neat.
What else do I have to say about it?
It has an async way to call it and a synchronous way to call it
because that's HTTPX's style.
It shows how to do it with async.
Here's your async with block.
I mean, it's full of context managers this episode.
And it shows you all the different things that you can do. It talks about how you handle reconnects. And, you know,
all of these little projects and all these things we're talking about, there's sort of a breadcrumb trail through Python. So it says, look, if there's an error, here's what you might do about
that, like if you disconnect, you might want to just let it be disconnected, or you might want to try to reconnect or who knows, right? What you need to do is not really
known by this library. So it just says, they're just going to get an exception, but it does provide
a way to resume by holding onto the last event ID. So you can say like, Hey, you know, that
generator you were sending me before, like, let's keep doing that, which is kind of cool. And it'll
just pick up, but here's the breadcrumbs.
It says, here's how you might achieve this using Stamina. And it has the operations here; it gives a decorator, @retry on httpx.ReadError, and then how to redo it again and how often. So Stamina is a project by Hynek that allows you to do retries, including async retries, and all sorts of cool stuff.
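For a flavor of Stamina, here's a small sketch (the endpoint URL is hypothetical); the retry decorator re-runs the function with backoff whenever the named exception type is raised:

import httpx
import stamina  # pip install stamina

@stamina.retry(on=httpx.HTTPError, attempts=3)
def fetch_status():
    response = httpx.get("http://localhost:8000/status")
    response.raise_for_status()
    return response.json()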
So maybe something fun.
Have we talked about Stamina before?
I don't believe we have.
I don't think we have.
I don't remember it either.
But it's pretty cool.
So anyway, yeah, there's a lot of cool stuff in here, right?
Yeah.
And yeah, so people can go and check this out.
But here's the retrying version.
You can see an example of that where it just
automatically will continue to keep going. So pretty cool little library here, httpx-sse.
It has 51 GitHub stars. I feel like it deserves more, so people can give it a look.
Yeah. Well, speaking of cool projects in Python, you probably grab them from PyPI, right?
Of course.
Through a pip install.
And let's take a look at Stamina, for instance.
In a lot of projects, one of the things you can do, you can go down and on the left-hand side, there's project description, release history, download files.
Everybody has that.
All of them have that.
But then there's project links.
And these change; they're different on different projects. So Stamina's got a changelog and documentation and funding and source, and they all have icons associated with them. So, I don't know, if we go to source, it goes to GitHub, looks like. Funding, it's GitHub Sponsors, that's pretty cool. Documentation, I'm looking at the bottom of my screen, documentation links to stamina.hynek.me. Okay, interesting. Changelog. Anyway, these links are great on projects. Let's take a look at... but they're different. So Textual just has a homepage. Okay. HTTPX has changelog, homepage, documentation. pytest has a bunch also. It also has a tracker, that's kind of neat, and Twitter.
A little bug in there, yeah.
So how do you get these?
So if you have a project,
it's really helpful to put these in here.
And so Daniel Roy Greenfeld wrote a blog post called "PyPI Project URLs Cheatsheet." He basically figured all this stuff out. It's not really documented anywhere except for here, but it's in the Warehouse code. And Warehouse is the software that runs PyPI. And I'm not gonna dig through this too much, but basically it's trying to figure out, from the name that you put in for a link, which icon to use, if any. So there's a bunch of different icons that are available.
And anyway, we don't need to look at that too much because Daniel made a cheat sheet for us. So he shows a handful of them on his post, plus a link to where they all are. But then what it is, is you've got project URLs in your pyproject.toml file. And it just lists a bunch of them that you probably want, possibly like homepage, repository, changelog. Anyway, this is a really cool cheat sheet of things that you might want to use and what names to give them. So it's a name equals string with the URL.
And the names on the left can be anything.
But if they're special things, you get an icon.
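So in your pyproject.toml it ends up looking something like this (the URLs are placeholders); names like Homepage, Documentation, Repository, and Changelog are among the ones the cheat sheet says get recognized with icons:

[project.urls]
Homepage = "https://example.com"
Documentation = "https://example.com/docs"
Repository = "https://github.com/you/yourproject"
Changelog = "https://github.com/you/yourproject/blob/main/CHANGELOG.md"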
Nice.
Anyway, and there's even a Mastodon one now.
So that's cool.
Yay.
They're going to have to change the Twitter one.
Twitter.
Oh, it's Twitter or X.
Interesting.
Yeah.
Think how much math that's going to break. It has to be called X everywhere now. No more algebra for you. Yeah, what a dumpster fire. Okay.
My gosh, the audience points out the icons are courtesy of Font Awesome, and indeed they are.
If you're not familiar with Font Awesome, check that out. So we could come over here and search for, wait for it, GitHub, and you get all these icons here. One of them is the one that shows up. I don't remember which one of these it would be, but it shows you the code that you need. It's just fa-brands, space, fa-github for the icon there. But if for some reason you're like, what if there was a merge one? I want to merge, but there's no merge one there, like on your other project, right? Then there's, I don't know how many icons are in Font Awesome, like six thousand? Yeah, six thousand four hundred forty-four in total. And, no, I take that back, because there are twelve thousand new ones, so there's a lot. Let's just say there's a lot here.
Well, the top said 26,000, so there we go.
Yeah, awesome. Oh, there's a fire one. Oh, there's so many good ones. That'd be a good one for Twitter now. So by the way, if you go to Python Bytes and you go to the bottom, all these little icons, these are all Font Awesome, even the little heart about made in Portland.
Ah, is Font Awesome a free thing, or do you have to pay for it?
Yes and no.
So Font Awesome is, there's like, if you, if I search for GitHub again, you see that some say pro and some don't.
Yeah.
Oh, okay.
The ones that don't say pro are free.
The ones that say pro are pro.
They cost like a $100-a-year subscription, but I bought a subscription to it and just canceled it because...
You got the icons you need.
I got the icons. I'm just locked at version six for a good long while, and that's fine. Maybe someday I'll buy more. But yeah, so, okay, there you go. Nice. So yeah, that's awesome. But it's cool how you pointed out how Danny related that to the pyproject.toml. I had no idea that that's how this went together. It's cool. Nice. All right.
All right, well, I've got my screen up.
I'm off to the next one, huh?
Yeah.
We're done with them, aren't we?
That was, I have no more items.
No more items to cover other than extras.
Okay.
Well, I have a couple extras.
So I, a couple.
More people?
More people. More people.
More people on Python people.
What did I want to say?
Oh, just that I had some great feedback.
So I love starting something new.
It's good to provide feedback for people.
And I got some wonderful feedback.
The music that I stole from Test & Code is annoying on Python People because it's a completely different tone.
And fair enough.
So I'm going to go through and rip out all the intro music out of Python People.
And also the next episode is coming out this week.
It'll be Bob Belderbos from PyBytes.
It's a good episode.
So it should be out later this week.
Do you have any extras?
I do.
I do.
I do.
I have some cool announcements and some extras and all of those things.
First of all, physicists achieve fusion with net energy gain for the second time. So, you know, the holy grail of energy is fusion,
not fission, right? Just squishing stuff together like the sun does and getting heavier particles
and tons of energy with no waste, no negative waste, really. I mean, there's output, but
like helium or something, right? Oh no, we need more helium. Anyway, I don't know, Brian, if you knew, but there's a helium shortage, and potentially a helium crisis; we'll see that someday. Anyway, the big news is the folks over at the NIF repeated this big breakthrough that they had last year at the National Ignition Facility, so congrats to them.
and why am I covering this here
other than, hey, it's chemical science,
is last year after that,
or actually earlier this year,
I had Jay Salmonson on the show
and we talked about all the Python
that is behind that project at the NIF
and how they use Python
to help power up the whole
fusion breakthrough that they had.
So very cool.
If people want to learn more about that,
they can listen to episode 403 of Talk Python To Me.
And just congrats to Jay and team again.
That's very cool.
Do they have a 1.21 gigawatt one yet?
That would be good.
They can't go back in time yet.
No.
Okay.
No.
But if you actually look, there's a video demonstration down there. If you actually look at the project here, the machine that it goes through, this is like a warehouse-room-size machine of lasers and coolers and mirrors and insane stuff that it goes through, until it ends up at something like a dime-size or small-marble-size target somewhere. There's like an insane...
There is. It's not exactly what you're asking for, but there is something insane on the other side of the devices.
Yeah. We've got a ways to go to get this into a car.
Yeah.
I mean, Marty McFly has got to definitely wait.
Yeah.
Let's save his parents relationship.
Okay.
All right.
All right. I have another bit of positive news. I think this is positive. This is very positive news. Yeah, the other positive news is, you know, I've kind of knocked on Facebook and Google. Like last time, I think I was railing against Google and their DRM for websites, and their ongoing, persistent premise that we must track and retarget you, so how can we make the web better? Like, no, no. That's not the assumption; we need to start with no, it's not. So I would just point out, maybe a little credit to Facebook this time, maybe a positive shout-out. So there's a bunch of rules that I think are off target here. For example, there were a bunch of attempts,
there was an attempt to say,
if you're going to link
to a news organization,
you have to pay them.
Like, wait a minute.
So our big platform
is sending you free traffic.
And to do that,
we have to pay you,
you know, because newspapers
are having a hard time and they're important. But maybe that's a little bit off. Probably the most outrageous of
this category of them were somewhere in Europe. I can't remember if it was the EU in general or
a particular company, a country rather, sorry. They were trying to make companies like Netflix
and Google, via YouTube, pay for their broadband, because people consume a lot of their content, so it uses a lot of their traffic. It's like, wait a minute, we're already paying to get this to you, and then you're going to charge us to pay for your infrastructure? I don't
know. I just like, Oh, no, no, no. That seems really odd to say like, you know, Netflix should
pay for Europe's fiber because people watch Netflix. I don't know. That
just, it seems super backwards to me. So, okay. I'm going to be devil's advocate here. I think
that if Netflix, for example, if Netflix is taking half the bandwidth or something like that,
then all of the infrastructure costs, half of those costs are benefiting Netflix and they're
profiting off of it. I think that's sort of legitimate. It depends on the scale, right? I think like we are not
taking a ton of bandwidth from Europe, so it would be weird for us to have to pay something. But if
I'm taking a measurable percentage, that's probably maybe okay. The other side is like I read Google News still, even though I'm not a huge fan
of Google, but I read Google News. There's a lot of times where that's enough. I'm like, is there
anything important happening? I'm just reading the headlines. I'm not clicking on the link.
And that benefit then for Google wouldn't be there if the newspapers weren't there. So I would say
some money going to the newspapers that are providing those headlines, I think that's fair.
So I certainly hear what you're saying with the news on that. We still haven't gotten to the topic, anyway. Okay.
No, no, but I totally hear you. I think with the bandwidth, the customers decide. Netflix isn't projecting stuff onto the people in Europe who just receive it; they seek it out, right?
So I don't know. I feel like... but we can, yeah.
I appreciate the devil's advocate.
Yeah. Okay. What was the thing? Oh, and Google News.
So here's the news though.
Facebook, and more generally Meta, is protesting a new Canadian law obliging it to pay for news. So if my mom shares an article, say my mom was Canadian and she shared an article from some news outlet, the Canadian Post or whatever, on Facebook, then Facebook would have to pay the Canadian Post, because my mom put it there. So they're protesting it by no longer having news in Canada. News doesn't exist in Canada now.
On Facebook.
Yeah.
So my mom tried to post it.
They were just like, that can't be posted.
Oh, well, that's weird.
Isn't that weird?
So I actually kind of agree with you on the Google News bit, like where a good chunk of it is there and it becomes almost a reader type service.
But like Facebook doesn't do that.
It just says, well, here's the thumbnail and you could click on it, but also there's a lot of anger below it. But a lot of people get their news from people they follow sharing it on Facebook. Do they click it? Do they? Often not.
Yeah, possibly. And is it free? Is the bandwidth... if I share it with a million people and they don't click on it, does it cost the newspaper?
Possibly. They might be pulling it for the headline and the image and all that stuff.
They might, and they probably cache it, but they might not. So I'll put this out there for people to have their own opinions, but I think this is something Facebook should stand up to; and that's just me, not speaking for Brian. Well done, Facebook.
I don't think this makes any sense.
Like they're protesting this law that makes them pay if my mom were Canadian and put news into her feed.
Yeah.
And I'll just say, way to go, Canada.
I like it.
Awesome.
All right.
Cool.
That's it for all the items I got.
You covered yours, right?
Yes, I did.
So let's do something funny before we get into fisticuffs.
No, never.
So, well, you want to talk about fisticuffs.
So let's see the joke.
So this joke makes fun of a particular language.
The point is not to make fun of that language.
It's to make fun of AI.
Okay.
So people who want to support the AI, they can send me their angry messages.
People who are fans of the language I'm about to show you, please don't.
Not about that.
Okay.
So if you were working with a GitHub copilot, you know, a lot of times it tries to auto-suggest stuff for you, right?
That didn't zoom that.
So it tries to auto-suggest stuff for you.
Yeah.
And so, this is C#, which people know I've done C# before, and I like it and all, so I'm not making fun of it. But it's just a slash-slash day, and then there's an autocomplete statement that Copilot is trying to write. What does it say, right?
It says, day one of C# and I already hate it.
so like how many people have written this in their like online journals or something?
Yes, exactly.
What in the world is going on here?
So there's some fun comments, but they're not too great down here. But I just thought, you know, this weird autocomplete, we're going to get into this. This kind of stuff happens all the time, right? This is kind of the Google suggest. You know, let's see if I can get it to work here. We go to Google and type "Americans are." And then, you know, what does it say, right? Struggling. Entitled. Yeah, like "C# developers are," and then it'll give you a list. Or let's do it for Python, right? "Python developers: why are they paid so much, who hired these people," etc., right? So this is the AI equivalent, but it's going to be right where you work, all the time.
That's funny. And Joe out there says, I wonder what it says for day one of Python.
I have no idea, but somebody who has Copilot installed should let us know, and maybe we'll point it out next time. Yeah, interesting. I haven't turned it on.
No, I haven't either.
All right, all right. Well, apparently many people do, and really enjoy it; the usage numbers are kind of off the charts.
Well, so yeah, I'll just say,
a lot of people don't like maintaining software written by others,
and they mostly like writing green field code.
But with Copilot, you don't have to write your first draft.
You can just become a permanent maintainer
of software written by something else.
I wrote the bullet points, and now I maintain what the AI wrote.
Fantastic.
Exactly.
Hope you understand it.
Yeah,
exactly.
But anyway,
well,
thanks a lot for a great day again,
or a great episode.
Absolutely.
Thank you.
See y'all later.
Bye.