Python Bytes - #342 Don't Believe Those Old Blogging Myths

Episode Date: June 26, 2023

Topics covered in this episode:

- Plumbum: Shell Combinators and More
- Our plan for Python 3.13
- Some blogging myths
- Jupyter AI
- Extras
- Joke

See the full show notes for this episode on the website at pythonbytes.fm/342

Transcript
Starting point is 00:00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 342, recorded June 25th, 2023. I'm Michael Kennedy. And I am Brian Okken. And this episode is brought to you by Brian and me, us, our work. So support us, support the show, keep us doing what we're doing, by checking out our courses over at Talk Python Training. We have a bunch, including a really nice pytest course written by Brian. Check out the Test & Code podcast, become Patreon supporters. Brian's got a book as well on pytest. You may have heard of
Starting point is 00:00:33 this. So please, if you check those things out, share them with your friends, recommend them to your coworkers. It really makes a difference. You can also connect with us on Mastodon. You'll see that over on the show notes for every episode. And finally, you can join us over at pythonbytes.fm/live if you want to be part of the live recording, usually, usually, Tuesdays at 11 a.m. Pacific time, but not today. No, Brian, we're starting nice and early
Starting point is 00:00:58 because, well, it's vacation time. And, well, plum bum, I think we should just get right into it. Sure. Plum bum bomb let's do it it's a new saying it's an expression plum bomb let's just do it let's just do it yeah i have no idea where this comes from um but the uh well i do know where it comes from it was last week uh last week we talked about uh and Henry Schreiner said, hey, you should check out Plumbum. It's kind of like what you're talking about, but also neat. So I did.
Starting point is 00:01:33 We were talking about shh before. Oh, right. We were talking about shh. Don't tell anyone. So Plumbum, it's a little easier to search for, actually, than shh. So what is it? It's a Python library and it's got, it's shell combinations. It's for interacting with your environment.
Starting point is 00:01:52 And there we go. Henry Schreiner, one of the maintainers. So it's a tool that you can install so that you can interact with your operating system and file system and stuff like that and all sorts of other things and it's got a little bit a little bit different style than uh but it uh so i was taking a look at it's kind of like a local command for one the basics are you like import from plumbum import local and then you can run commands as if you were just running a shell but you do this um within your python code and there's also some convenience ones like sh has
Starting point is 00:02:32 like ls and grep and things like that but um but but you it generally looks like there's more stuff around how you operating operate with shell normally, things like piping. So you can pipe one like LS to grep to word count or something like that to count files. I mean, there's other ways to do it within Python, but if you're used to doing it in the shell, just wrapping the same work in a Python script, why not? Things like redirection,
Starting point is 00:03:04 manipulating your working directory just all sorts of fun stuff to do with your shell but through python you know the pipe overriding the you know the pipe operator in python overwrite sort of actually in the language being the same as in the shell it's a little bit like pathlib doing the divide aspect right like we're going to grab some operator and make it that it probably was never really imagined to be used for but we're going to make it use it to so it looks like what you would actually you know the abstraction you're representing which is pretty interesting yeah and they could um like this example they have an
Starting point is 00:03:38 example in the the readme of piping ls to grep to word count and they they like define that as a chain and if and it didn't even it doesn't even run it i don't think um it just defines this new sequence so you can chain together uh script commands and if you print it so it has a uh uh probably a stir or a ripper um implementation that shows you exactly what what all the pipe and the chaining was. So that's kind of a neat thing for debugging. And then when you actually run it, then you call that thing like a function and it runs it. That's pretty neat.
Starting point is 00:04:16 Yeah, it is. You can even do them inline, just put parentheses around them and kind of execute at the end. Yeah, pretty interesting. Yeah, anyway, just a fun little quick shout out to plumbum yeah if you thought sh was cool last time you might also check this out right they kind of play in similar spaces yeah just one of the things i like about python and the python community is um this variety of different different libraries that might solve the same space but um have a
Starting point is 00:04:43 different flavor uh you know some people like chocolate, some people like vanilla. Well, I'm a big fan of caramel. So how about we talk about faster C Python? Okay. So faster C Python, they're really starting to show some results, right? Python 3.11 was 40% faster, I believe, is, you know, roughly speaking, working with averages and all those things. And we've got 3.12 coming with more optimizations. And ultimately, the faster CPython plan was, you know, put together and laid out by Mark Shannon.
Starting point is 00:05:21 And the idea was, if we could make, you know, improvements like 40% faster, but over and over again, because of, you know, compounding sort of numbers there, we'll end up with a really fast CPython, a faster one, you might say, in five releases, five times faster in five releases. And so, you know, that started really with 3.10, 3.11, 3.12, not the one that's coming, but the one that's coming in a year and a few months, 3.11. They're laying out their work for that. And it's looking pretty ambitious. So in 3.12, they're coming up with ways to optimize blocks of code.
Starting point is 00:05:57 So in 3.11, stepping a little bit back, we've got the Adaptive Special interpreter or specializing adaptive interpreter. I don't have it pulled up in front of me. I wish I would have those words go in. But that will allow CPython to replace the bytecodes with more specific ones. So if it sees that you're doing a float plus a float operation, instead of just doing a word, we're doing an abstract plus. Is that a list plus a string? Is that an integer and a float? Is that actually a float and a float?
Starting point is 00:06:30 And if it's a float and a float, then we can specialize that to do more specific, more efficient types of math and that kind of stuff, right? 3.12 is supposed to have what they're calling the Tier 1 And so, which optimizes little blocks of code, but they're pretty small. And so one of the big things coming here in 3.13 is a tier two optimizer. So bigger blocks of code, something they're calling super blocks, which I'll talk about in just a second.
Starting point is 00:07:00 The other one that sounds really amazing is enabling sub-interpreters from Python code. So we know about PEP554. This has been quite the journey and massive amount of work done by Eric Snow. And the idea is if we have a GIL, then we have serious limits on concurrency, right? From a computational perspective, not from an IO one potentially. And, you know, I'm sitting here on my M2 Pro with 10 cores and no matter how much, you know, multi-threaded Python I write, if it's all computational, all running Python bytecode, I get, you know, one 10th of the capability of this machine, right? Because of the GIL. So the idea is, well, what if we could have each thread have its own GIL? So there's still, sure, a limit to how much work that can be done in that particular thread
Starting point is 00:07:50 concurrently, but it's one thread dedicated to one core, and the other core gets its own other sub-interpreter, right, that doesn't share objects in the same way, but they can pass them around through certain mechanisms. Anyway, so this thing has been a a journey like i said created 2017 and it has like all this history uh up until now and um statuses still says draft and now the python version i think the pep is approved but and work has been done but it's still in like pretty early stages so that's a pretty big deal is to add that that's supposed to show up in um 313 and in 313 and in python code and this is a big deal i think that in 312 the work has been
Starting point is 00:08:36 done so that it's internally possible it's internally done if i remember correctly but there's no way to use it from python right like? Like if you're a creator of interpreters, basically you can use it. So now the idea is like, let's make this possible for you to do things like start a thread and give it its own sub-interpreter, you know, copy its objects over, let it create its own
Starting point is 00:08:57 and really do computational parallelism, I guess in interaction with async and await and those kinds of things. And also more improved memory management. Let's see what else. Well, so I guess along with that, we're going to have to have some tutorials or something on how do the two subinterpreters share information. Yeah, exactly. Yeah, we will. What I would love to see is just on the thread object, give the thread object a use sub-isolating, isolate sub-interpreter or new sub-interpreter equals true.
Starting point is 00:09:30 And off it goes. That would be excellent. And then maybe pickles the object. I don't know. We can see how they come up with that. But this is good news. I think it's the kind of thing that's not that important, necessarily, for a lot of people.
Starting point is 00:09:43 But for those who it is, it's like, you know, really, we want this to go a lot faster. What can we do here? Right? Yeah. Yeah. That sounds complicated. Does it make it go faster? Yay, then do it. Well, and you know, compared to a lot of the other alternatives that we've had for, I have 10 cores. Why can I only use one of them on my python code without multi-processing this is one of those that doesn't affect single threaded performance it's one of those things that there's not a a cost to people who don't use it right whereas a lot of the other types of options are like well sure your code gets five percent slower but you could make it a lot faster if you did a bunch more
Starting point is 00:10:23 work yeah yeah and that's been a hard sell and also a hard line that you know put in the sand saying like look we can't make regular non-concurrent python slower for the sake of you know this more rare but sometimes specialized right concur stuff so they've done a bunch of foundational work and then the three main things are the tier two optimizer sub interpreters for Python and memory management. So the tier two optimizer, there's a lot of stuff that you kind of got to look around. So check out the detailed plan.
Starting point is 00:10:54 They have this thing called copy and patch. So you can generate like roughly these things called super blocks. And then you can implement, they're planning to implement basic super block management. And Brian, you may be thinking, what are these words you're saying, Michael? Duplo. They're not those little Legos. No, they're big, big duplos. But it's kind of true. So they were optimizing smaller pieces, like little tiny bits, but you can only have so much of an effect if you're working on small blocks of code that you're optimizing. So a super block is a linear piece of code with one entry and multiple exits. It differs from a basic block in that it
Starting point is 00:11:32 may duplicate some code. So they just talk about considering different types of things you might optimize. So I'll link over to this, but there's a big long discussion lots of lots of graphics people could go check out so yeah they're going to add support to deopt um i support for deoptimization of soup blocks enhance the code creation implement the specializer and use this algorithm called copy and patch so implement the copy and patch machine code generator. You don't normally hear about a machine code generator, do you? No. But either that sounds like a JIT compiler or something along those lines. Yeah.
Starting point is 00:12:11 Anyway, so that's the goal. And reduce the time spent in the interpreter by 50%. If they make that happen, that sounds all right to me, just for this one feature. That's pretty neat. Yeah. Wow. Pretty good. And I talked a whole bunch about subinterpreters.
Starting point is 00:12:24 Final thing, the profiling data shows that a large amount of time is actually spent in memory management and the cycle GC, right? And while when Python, I guess, if you do 40% a bunch of times, it was maybe half as fast before, because remember, we're a few years out working back on this plan in three, nine, three, eight, maybe it didn't matter as much because a percent as a percentage of where's C Python spending its time, it was not that much time on memory management. But as all this other stuff gets faster and faster, if they don't do stuff to make the memory management faster, it's gonna be like, well, half the time is memory management, what are we doing? So they say, as we get the VM faster,
Starting point is 00:13:09 this is only going to be a larger percent of our time. So what can we do? So do fewer allocations to improve data structures, for example, partial evaluation to reduce the number of temporary objects, which is part of the other section of their work, and spend less time doing cycle GCs. This could be as simple as doing fewer calculations or as complex as implementing a new incremental cycle finder. Either way, it sounds pretty cool. So that's the plan for a year and a couple of months. Pretty exciting. I'm really happy that these people are working on it.
Starting point is 00:13:37 I am too. So a team of, I think last time I counted, five or six people. There's a big group of them around guido at microsoft but then also outside yeah so for example this was written by mark shannon who's there but also michael drop boom who was at mozilla but i'm not i don't remember where he is right now cool last name yes indeed all right over to you brian well that heavy. I'm going to do kind of a light topic. We need more people to write blogs about Python. It would help us out a lot, really.
Starting point is 00:14:11 And one of the ways you could do that is to just head over and check out one of the recent articles from Julia Evans about some blogging myths. And I guess this is a pretty lighthearted topic, but also serious but we have some more fun fun stuff in the extras so don't worry about it anyway so there's a few blogging myths and I just wanted to highlight these because I think it's good to remember that you know these are just wrong so I'll just run through them
Starting point is 00:14:42 quickly you don't need to be original. You can write content that other people have covered before. That's fine. Uh, you don't need to be an expert. Um, posts don't need to be a hundred percent correct. Uh, writing boring posts, uh, is bad. So these are, Oh wait, the myths are the myth is you need to be original. That's not true. Uh, myth, you need to be an expert. Posts need to be 100% correct. Also, myth, all these are myths. Writing boring posts is bad. Boring posts are fine if they're informational.
Starting point is 00:15:15 You need to explain every concept. Actually, that will just kill your audience if you explain every little detail. Page views matter. More material is always better. Everyone should blog. These are all myths, according to Julia. And then she goes through a lot of the in detail into each one of them. And I kind of want to like hover on the first two a little bit of you need to be original and you need to be an expert. I think it's we when we're learning, we're learning about the software, a new library or new technique or something. Often I'm like
Starting point is 00:15:52 I'm reading Stack Overflow, I'm reading blog posts, I'm reading maybe books, who knows, reading a lot of stuff on it. And you you'll get all that stuff in, in your own perspective of how it really is. And then you can sort of like, like the cheating book report you did in a junior high, where you just like rewrote some of the encyclopedia, but changed it. Don't do that. Um, but it doesn't, you don't have to come up with a completely new technique or something. You can just say, Oh, all the stuff I learned, uh, I'm going to put it together and like, right. Like my, my workflow now, or the process, or just a little tiny bit, it doesn't have to be long. It can be a short thing of like, Oh, I finally got this. It's way easier than I thought it was. And writing little, little aha moments are great times to just write that down
Starting point is 00:16:41 as a little blog post. Um, the other thing of, uh of you don't need to be an expert is a lot of us got started blogging while we were learning stuff as a way to write that down. So you're definitely not an expert as you're learning stuff. So go ahead and write about it then. And it's a great way to and that ties into it doesn't need to be 100% correct. As you get more traction in your blog, people like let you know if you made a mistake and in the python community usually it's nice um they'll they'll like mention hey this isn't quite right anymore uh and i kind of love that about our community so um of the i want to go back to the original part is you don't even have to be original from your own perspective if you wrote about something like last year go ahead and write about it again.
Starting point is 00:17:28 If you think it's important and you sort of have a different way to explain it, you can write another blog post about a similar topic. Yeah, I totally agree. I also want to add a couple of things. Okay. I would like to add that your posts, the myth, your posts have to be long or like an article or you need to spend a lot of time on it.
Starting point is 00:17:45 Right. You know, the biggest example of this in terms of like success flying in the face of just really short stuff is, um, John Gruber's daring fireball, right? Like this is an incredibly popular site and the entire articles are, it starts out with him quoting often someone else. And that's like two paragraphs, which is half the article and say here's my thoughts on this and or here's something interesting let's let's highlight it or something right and my last blog post was four paragraphs in a picture maybe five if you count the bonus right um i don't not too many people paid attention to mine because the titles you can ignore this post so i'm i don't know i'm having a hard time getting traction with it but um i actually i like that you highlighted the john that good john gruber uh style there's a lot of different styles of blog posts and one of them is reacting to something instead of because
Starting point is 00:18:37 a lot of people have actually turned you can either comment on somebody's blog or talk about it on reddit or something or you can react to it on your own blog um and link to it still link to it on reddit or something yeah yeah not anymore because reddit went private out of protest but you know somewhere else if you find another place or maybe post on twitter no don't do that let's uh mastodon it's getting hard yeah funny um i had another one as well but oh yeah so there's not a myth but just another, you know, another source of inspiration is if you come across something that really surprised you, like if you're learning, right, kind of to add on, like, I'm not an expert. If you come across something like, wow, Python really broke my expectations. I thought this was going to work this way. And gosh, it's weird here. People, it seems like a lot of people think it works this way but it works in some completely other way you know that could be a cool little write-up um also you know people might be searching like why does python do this you know they they might find your
Starting point is 00:19:33 quote boring article and go that was really helpful right so yeah i i still remember way back um when i started writing about uh pi test and unit tests and stuff, there was a feature, a behavior of teardown functionality that behaved different. It was like sort of the same in nodes and unit tests and then different in PyTest. And I wrote a post that said, maybe unit test is broken because I kind of like this PyTest behavior.
Starting point is 00:20:03 And I got a reaction from some of the PyTest contributors that said, oh, no, we just forgot, didn't test that part. So that's wrong. We'll fix it. What a meta problem that PyTest didn't test a thing. Yeah. Well, I mean, it was a really corner case, but I'm kind of a fastidious
Starting point is 00:20:25 person when I'm looking at how things work. But the other thing I want to say is a lot of a lot of things written by you, other people are old enough that they don't work anymore. If you're if you're following along with like a little tutorial, and it doesn't work anymore, because you know, the language changed, or the library they're using is not supported anymore or something that's a great opportunity to go well i'll just kind of write it in my own language but or in my own style but also make it current and make it work this time so that's indeed as well anyway okay well Well, let's go back to something more meaty. Yeah, something like AI. So I want to tell you about Jupyter AI, Brian. Jupyter AI is a pretty interesting, pretty interesting project here. It's a generative AI extension for JupyterLab. I believe it also works in Jupyter and IPython as just IPython prompt as well.
Starting point is 00:21:27 And so here's the idea. There's a couple of things that you can do. So Jupyter has this thing called a magic, right? Where you put 2% in front of a command and it applies it to an extension to Jupyter, not trying to run Python code, but it says, let me find this thing. In this case, you say percent percent AI, and then you type some stuff. So that stuff you type afterwards, then, you know, turns on a certain behavior for that particular cell. And so this AI magic, literally, it's percent percent AI, and then they call it a magic, or it is a magic. So AI magic turns Jupyter notebooks into reproducible, it's the interesting aspect, generative AI. So think if you could have chat GPT or open AI type stuff clicked right into your notebook. So instead of going out to one of these AI chat systems and say, I'm trying to do this,
Starting point is 00:22:20 tell me how to do this, or could you explain that data you just say hey that cell above what happened here or i'm trying i have this data frame do you see it above okay good uh how do i visualize that in um a pie chart or some you know one of those donut graphs using plotly and it can just write it for you as the next cell interesting okay interesting right? Yeah. It runs anywhere the Python kernel works. So JupyterLab, Jupyter Notebooks, Google Colab, VS Code, probably PyCharm, although they don't call it out. And it has a native UI chat. So in JupyterLab, not Jupyter, there's like a left pane that has stuff. It has like your files and it has other things that you can do. And it will plug in another window on the left there that is like a chat GPT. So that's pretty cool. Another really
Starting point is 00:23:11 interesting difference is this thing supports its model or platform agnostic. So if you like AI 21 or Anthropic or OpenAI or SageMaker or HuggingFace, et cetera, et cetera. You just say, please use this model. And they have these integrations across these different things. So you, for example, you could be going along saying, I'm using OpenAI, I'm using OpenAI. That's a terrible answer. Let's see, let's ask Anthropic the same thing. And then right there below it, you can use these different models and different ai platforms and go actually it did really good on this one i'm just going to keep using that one
Starting point is 00:23:48 now for this this part of my data okay okay so how do you install it you pip install jupiter underscore ai and that's it it's good to go and then you plug in then you plug in um like your various api keys or whatever you need to as environment variables they give you an example here so you would say percent percent ai space chat gpt and then you type something like please generate the python code to solve the 2d laplace equation in the cartesian coordinates solve the equation on the square such and such with vanishing boundary conditions etc plot the solution in matplotlib. Also, please provide an explanation.
Starting point is 00:24:27 And then look at this. You go da-da-da-da-da, and down it goes. And you can see off it shows you how to implement it. And that's the only part of that's shown. You can also have it do graphics. Anything that those models will generate is HTML. Just show up. So you could say, create a square using SVG with a black border and white fill.
Starting point is 00:24:44 And then what shows up is not svg commands or like little definition you just get a square because it put it in html as a response and so that showed up you can even do latex like dash dash f is math generate a 2d heat equation and you get this uh partial differential equation thing in um in latex you can even ask it to write a poem whatever you do so that's one of the go back to the poem one yeah it says write a poem in the style of variable names so you can have commands with variable uh insert variable stuff so that's so you can also jupiter has inputs and outputs like along the left side there's like a nine and a ten and those are like the order they were executed you can say um using input of nine which might be the previous cell or something or output of nine go do you know
Starting point is 00:25:40 take that and go do other things right like kind of that's how i opened this conversation one of the really interesting examples that David Q pointed out, there's a nice talk that he gave in a link to in the show notes at high data, like a week ago was he had written some code, two examples. One, he had written some code, a bunch of calculations and pandas, and then he created a plot, but the plot wasn't showing because he forgot to call plot dot show he's and uh he asks one of the ais it depends you know you can ask a bunch depending which model you tell it to target he said why isn't hey in that previous cell why isn't my plot showing it said because you forgot to pull um call show so here's an example of your
Starting point is 00:26:23 code above but that works and shows the plot. That's pretty cool for help, right? Yeah. Jeez. Instead of going to stack overflow or even trying to copy that into one of these AIs, you just go, hey, that thing I just did, it didn't do what I expected. Why? Here's your answer. Not in a general sense, but like literally grabbing your data and your code. Two final things that are interesting here. The other one is he had some code that was crashing. I can't remember what it was doing, but it was throwing some exception and it wasn't working out. He said, why is this code crashing?
Starting point is 00:26:57 It explained what the problem was with the code and how to fix it. Super interesting here. I'll have to check that out. Yeah, we have that link in the show notes. Yeah, the talk is really, really interesting. I'm trying to think there's one other thing that was in that talk. It's like a 40 minute talk, so I don't remember all.
Starting point is 00:27:17 Anyway, there's more to it that goes on also beyond this. It looks pretty interesting. If you live in Jupiter and you think that these AI models have something to offer you, then this is definitely worth checking out. Alvaro says, as long as it doesn't hallucinate a non-existing package. Yeah, that's, I mean, that is the thing. What's kind of cool about this is like it puts it right into code, right? You just, you can run it and see if it's pretty good, if it does indeed work and do what it
Starting point is 00:27:49 says. So anyway, that's our last. Yeah, go ahead. Oh, before we could move away too much, I was listening to a NPR show about talking about AI and somebody did research. I think it was for the Times, york times um a research project and found out that like there were there were some sometimes they would ask like uh when what's the first instance of this phrase showing up in the newspaper or something and it would make up stuff
Starting point is 00:28:18 uh and even and they'd say well you know can you, can you, what are those, you know, show those examples. And it would show snippets of fake articles that actually never were there. It did that for, that's crazy. It did that for legal proceedings as well. And a lawyer cited those cases and got sanctioned or whatever lawyers get when they do it wrong. Those are wrong. Yeah. Don't do that. Also, the final thing that was interesting that I now remember that made me pause to think, Brian,
Starting point is 00:28:50 is you can point it at a directory of files like HTML files, Markdown files, CSV files, just like a bunch of files that happen to be part of your project and you wish it had knowledge of. So you can say slash learn and point it at a subdirectory of your project. It will go learn that stuff in those documents. And then you can say, okay, now I have questions, right? Like, you know,
Starting point is 00:29:20 if it learned some statistics about a CSV, the example that David gave was he had copied all the documentation for Jupiter AI over into there and it told it to go learn about itself. And then it did. And then you could talk to it about it based on the documentation. Oh, that's. So if you've got a whole bunch of research papers, for example,
Starting point is 00:29:38 like I learned those. Now I need to ask you questions about this astronomy study. Okay. Who, who, who studied this and what did, who found what, you know, whatever, right? Like these kinds of questions are pretty amazing. Yeah.
Starting point is 00:29:49 And actually some of the stuff would be super powerful, especially if you could make it not like keep all the information local, like, like, like, you know, internal company stuff. They don't want to like upload the, all of their source code into the cloud just so that they can ask it questions about it. Yeah, exactly. upload all of their source code into the cloud just so they can ask it questions about it yeah yeah exactly the other one um was to generate starter projects and code based on ideas so you can say generate me a jupiter notebook that explains how to use matplotlib okay okay and it'll come with a notebook and it'll do so here's a bunch of different examples and here's how you might apply a theme and it'll create things and one of the things that they actually have to do is they use Langchain and AI agents to in parallel, go break that into smaller things that are actually going to the notebook? It'll say, give me an outline of how somebody might learn this.
Starting point is 00:30:47 And then for each step in the outline, that's a section in the document that it'll go have the AIs generate those sections. And it's like a smaller problem that seemed to get better results. Anyway, this is a way bigger project than just like, maybe I can pipe some information to chat GPT. There's a lot of crazy stuff going on here. The people who live in Jupiter might want to check out. It is pretty neat.
Starting point is 00:31:11 I was not around the Jupiter stuff, but I was thinking that a lot of software work is the maintenance, not the writing it in the first place. So what we've done is like taken the fun part of making something new and giving it to a computer and we'll all be just like software maintainers at that afterwards exactly let's be plumbers sewer overflowed again call the flower no i don't want to go in there and also i'm just imagining like a whole bunch of new web apps showing up that are generated by like ideas and they kind of work but nobody knows how to fix them um but yeah sure i think that you're right and that that's going to be what's going to happen a lot but you technically
Starting point is 00:31:55 could come to an existing notebook and add a a cell below and go i don't really understand could you try to explain what is happening in the cell above? And it also has the possibility for making legacy code better. And if that's the reality, we'll see. Hopefully it's a good thing. So cool. All right. Well, those are all of our items.
Starting point is 00:32:15 That's the last one I brought. Any extras? I got a couple extras. Will McGugan and the gang at Textualize have started a YouTube channel, and I think it's a neat idea: so far they're just walking through some of the tutorials they already have, in video form. There are three up so far, a stopwatch intro and how to get set up and use Textual. And I like what they're doing over there, and it's kind of fun.
Starting point is 00:32:48 Another fun thing from... I like it too, because Rich is a visual thing, but Textual is a higher-level UI framework where you've got docking sections and all kinds of really interesting UI things. And so sometimes learning that in an animated, active video form is maybe better than reading the docs.
Starting point is 00:33:10 Yep. And then something else that they've done. So maybe watch that if you want to build your own command line or text user interface, a TUI, as it were. A TUI. Or you could take your command line interface and just use Trogon. Trogon, I don't know how you say that, T-R-O-G-O-N. It's by
Starting point is 00:33:35 Textualize also. It's a new project, and the idea is, I think you use it to wrap your own command line interface tool, and it makes a graphical, text-based user interface out of it. There's a little video showing an example of a Trogon app applied to sqlite-utils, which has a bunch of great stuff, and now you can interact with it through a GUI instead. And that's kind of fun. It works with Click, but apparently they will support other libraries and languages in the future. So, interesting. Yeah, it's like you can pop up the documentation for a parameter while you're working on it in a little modal window or something.
Starting point is 00:34:21 Looks interesting. Yeah, well, I was thinking along the lines of even internal stuff: it's fairly common that you're going to write a make script or a build script or some different utilitarian thing for your work group. If you use it all the time, the command line is fine. But if you only use it like once a month or every couple of weeks, it might be that you forget about some of the features. And yeah, there's help, but having it as a GUI, if you could easily write a GUI for it, that's kind of fun. So why not? The other thing I wanted to bring up, a completely
Starting point is 00:34:55 different topic, is that the June 2023 release of Visual Studio Code came out recently, and I hadn't taken a look at it. I've installed it, but I haven't played with it yet. And the reason why I want to play with it is they've revamped the test discovery and execution. Apparently there were some glitches with finding tests sometimes, so I'm looking forward to trying this out.
Starting point is 00:35:23 You have to turn it on, though. This new test discovery stuff, you have to opt into it with a flag, and I just put the little snippet in our show notes so you can copy that into your settings file to try it out. Yeah. Excellent.
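For reference, the opt-in looked like the snippet below at the time of the June 2023 release; double-check the show notes or the release announcement, since experiment flags like this tend to go away once the feature ships by default. It goes in VS Code's `settings.json`:

```json
{
  "python.experiments.optInto": ["pythonTestAdapter"]
}
```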
Starting point is 00:35:43 I guess that's all I got. Do you have any extras? I do. I do. I have a report, a report from the field, Brian. So I had my 16-inch MacBook Pro M1 Max as my laptop.
Starting point is 00:35:56 And I decided it's not really necessarily the thing for me. So I traded it in and got a new MacBook Air 15-inch, one of those big, really light ones. And I just want to compare the two if people are considering this. You know, I have my Mini that we're talking on now with my big screen and all that, which is an M2 Pro and super fast. And I found that thing was way faster than my much heavier,
Starting point is 00:36:23 more expensive laptop. Like, why am I dragging this thing around if it's not really faster, if it's heavy, has all these cores and stuff that are just burning through the battery, even though it says it lasts a long time? Four or five hours was a good day for that thing. I'm like, you know what?
Starting point is 00:36:41 I'm going to trade it in for the new, a little bit bigger, Air. And so far that thing is incredible. It's excellent for doing software development. The only thing is the screen's not quite as nice, but for me, I don't live on my laptop, right? I've got a big dedicated screen I'm normally at, and then sometimes I'm out somewhere, so small is better, and it lasts like twice as long on the battery. And I got the black one, which is weird for an Apple device, but very cool. People say it's a fingerprint magnet, and absolutely, but it's also a super, super cool machine. So if people are thinking about it, I'll give it like a 90%
Starting point is 00:37:18 thumbs up. The screen's not quite as nice. It's super clear, but it's kind of washed out, a little harder to see in light. But other than that, it's excellent. So there's my report. I traded my expensive MacBook for something incredibly light, thin, and often faster. When I'm doing stuff in Adobe Audition for audio or video work, the things I've got to do, like noise reduction and other sorts of stuff, are all single-threaded, so it's like 20% faster than my $3,500 MacBook Pro Max thing.
Starting point is 00:37:51 Wow. And lighter and smaller, you know, all the good things. But you're still using your Mini for some of your workload. I use my Mini for almost all my work. Yeah. If I'm not out or sitting on the couch, then it's all Mini, Mini, Mini, all the time. Okay. Yeah. Is it black on the outside also, then? Yeah, yeah, it's cool looking. You can throw a sticker on it to hide that it's Apple, and people might think you just have a Dell. They wouldn't know. That's right. Run Parallels, you can run
Starting point is 00:38:22 Linux on it. They're like, okay, Linux, got it. What is that thing? It's a weird... Yeah, you could disguise it pretty easily if you want. Or just your stickers stand out better. You never know. All right, so if people are thinking about that, it's a pretty cool device. But, Brian, if somebody were to send you a message
Starting point is 00:38:37 and trick you, like, hey, you won a MacBook? You want to get your MacBook for free? You don't want that, right? So, you know, companies will do tests. They'll test their people just to make sure, like, hey, we told you not to click on weird-looking links, but let's send out a test and see if they'll click on one. And there's this picture of a guy getting congratulated by the CEO: IT congratulated me for not failing the phishing test. And the guy's deer-in-the-headlights look is like, oh no.
Starting point is 00:39:12 Me, who doesn't open emails, is what the picture says. So you just ignore all your work email. You know, you won't get caught in the phishing test. How about that? Yeah. You've been out of the corporate world for a while; this happens. I've had some phishing tests. Have you gone through this? Yeah, yeah. Well, the email looks like it came from... So that's one of the problems: it looks like it's legit, and it has, you know, the right third-party company that we're using for some
Starting point is 00:39:46 service or something. And you're like, wait, what is this? And then the link doesn't match up with whatever it says it's going to, and things like that. But it actually is harder now, I think, to verify what's real and what's not, when more companies use third-party services for lots of stuff. So, yeah. Anyway. Yeah, you know, it's a joke, but it is serious.
Starting point is 00:40:12 I worked for a company where somebody got a message. I think it might've been through a hacked email account, or it was spoofed in a way that it looked like it came from a higher-up, saying, hey, there's a really big emergency. This vendor is super upset. We didn't pay them. They're going to sue us if we don't. Could you quick transfer this money over to this bank account?
Starting point is 00:40:36 And because it came from somebody who looked like they should be asking that, it almost happened. So it's not good. That's not good. Yeah. I get texts, too. The latest one was just this weekend. I had a text or something that said, hey, we need information about your shipping for an Amazon shipment or something. And it's like, copy and paste this link into your browser. And it's this bizarre link. And I'm like, no, it would be amazon.com-something. There's no way it's going to be Bob's Burgers or whatever.
Starting point is 00:41:13 Yeah, Amazon. Yeah, let's go to amazon.com. Anyway. Oh, well. Well, may everybody get through their day without clicking on phishing emails. That's right. Yeah, may you pass the test day without clicking on phishing emails. That's right. Yeah, may you pass the test. Or don't read email.
Starting point is 00:41:29 Just stop reading email. Yeah, think about how productive you'll be. Well, this was very productive, Brian. Yes, it was. Yeah. Well, thanks for hanging out with me this morning. So it was fun. Yeah, absolutely.
Starting point is 00:41:41 Thanks for being here, as always. And everyone, thank you for listening. It's been a lot of fun. See you next time. Bye.
