The Changelog: Software Development, Open Source - Efficient Linux at the CLI (Interview)
Episode Date: July 6, 2023

This week we're talking to Daniel J. Barrett, author of Efficient Linux at the Command Line as well as many other books. Daniel has a PhD and has been teaching and writing about Linux for more than 30 years (almost 40!). So we invited Dan to join us on the show to talk about efficient ways to use Linux. He teaches us about combining commands, re-running commands, $CDPATH hacks, and more.
Transcript
What's up friends, welcome back. This is The Changelog. Today Jared and I are talking to Daniel J. Barrett, author of Efficient Linux at the Command Line, as well as many other books.
Daniel has a PhD and has been teaching and writing about Linux for more than 30 years.
So we got him on the show to talk about efficient ways to use Linux. He teaches us everything from combining commands,
rerunning commands, and so much more.
This is an amazing show.
Make sure you take notes. Of course,
a big thank you goes out to our friends
and our partners at Fastly and Fly.
This pod got you fast because
Fastly is super fast
all over the world. Check them out at
Fastly.com. And our
good friends at Fly help us put our app and our database closer to users, and they'll do it for you too.
Check them out at fly.io.
Well, I'm here with Richard Moot,
the API design lead for all of Square.
And we're talking about the GraphQL API that is now in OpenAlpha looking for feedback.
So Richard, what's the story with this API?
So we announced this at Unbox last year, and we've been incrementally adding parts to our GraphQL API. It's been a big ask from developers within our community
because it makes using Square's platform
so much easier for particular things.
You're no longer having to, say,
call three or four different APIs
to pull together a bunch of different data.
And so we've just been trying to learn more and more
about how developers are planning on using this
and making sure that we get this right
before we actually transition to the next phase
and its release.
So you have the orders API out there,
the catalog API, the customers API,
the merchants API, the payments API,
the refunds API, and the inventory API out there.
And you also have the GraphQL Explorer out there.
Tell me, what are you expecting from
developers? What feedback do you want? What are your expectations? I think our expectation is
to find out all the different ways that you're using it so that we can make it better for you.
I mean, right now, you know, we've gotten really good feedback. We have, I mean, as soon as I
announced the update to our docs that we recently did, the very first question that I got on Twitter
from someone
was like, when is this going out of alpha? And so we're really happy to see that. But we also are
still wanting to hear from developers like, you know, you're implementing this, you're trying to
build something, what is causing you angst? Is it issues with
constraints around query depth, or the number of queries? Is it fast enough for you? Are you
trying to use it in a particular mobile app, an Electron app, or something else?
What issues are you coming across, and how can we make it better? And I
would definitely say that anything you come across when you come in and try it out,
whether it's in the GraphQL Explorer, on your command line, or in your app, we want you to reach out to us on our Slack or our forums.
Those would be great.
You can also tweet at us.
I will definitely be keeping an eye on that.
But I will probably still always say, hey, the forums are a great resource,
because a lot of questions have already been asked there.
And we really just want to funnel all that feedback to the team so that we can get
it in there in time to make this ready for
the next phase. Very cool. Okay. So if you want to check this API out yourself,
go to developer.squareup.com. Again, developer.squareup.com. It is an open alpha. They're
looking for feedback. Hit them up on Slack, head to the forums, whatever works for you.
Once again, developer.squareup.com. We're here with Daniel J. Barrett, author of the Linux Pocket Guide and more recently,
Efficient Linux at the Command Line.
Daniel, thanks so much for coming on the show.
Thanks for inviting me. It's fun to be here.
It's fun to have you. I was reading 35 plus years you've been using Linux. Is that correct?
It's almost 40 at this point. I don't recall exactly how long Linux has been around, but I worked with its predecessor, Unix,
definitely in the mid-80s.
And yeah, that is going back a ways and makes me feel old and creaky.
Experienced.
Yeah, seriously.
Experienced.
I'll remember that one for next time.
I'm experienced and creaky.
Do you still learn new things
or do you feel like you just have it
just down pat at this point?
Oh my goodness, always learning new things.
It's such a deep topic.
And even when you think you know the command line,
if, for example, you find it fun
to read the Bash man page,
which is only about 5 million screens long,
there's always something in there
that I'd never thought about or noticed before,
and I try it out,
and oh, I can see where that fits into my workflow.
So yeah, always new stuff happening.
I guess Linux has changed a lot too over these years, right? Like almost 40 years,
with its predecessor Unix. There are a lot of distros, I guess, and the distros have changed, especially
with Red Hat Enterprise Linux and Fedora and CentOS and all those changes there. I mean, that's
more recent, but there have been a lot of changes over the years. Yeah, for sure.
Distros come and go.
The ones that are popular change.
Absolutely right.
I come out of the Cardless and I call it CentOS.
I'm so sorry.
Ah.
CentOS.
They'll never forgive you.
What's your distro choice and has it changed over the years?
I use Ubuntu usually, the Kubuntu flavor of it with KDE Plasma.
For the most part, I'm ashamed to admit it,
the distro doesn't matter a heck of a lot to me.
When you're especially working at the command line,
the set of commands available to you are mostly the same
where you can quickly install them.
And as for window managers and so forth,
as long as I've got windows to move around,
I can adapt to whatever GUI is available.
So I don't have particular allegiances. I think I started off on Red Hat
and I moved over to CentOS for a while. Then I used
SUSE Linux for a while. But Ubuntu is a perfectly reasonable
distro and fine for use.
There's some concerns about snaps and stuff like that. Do you have those concerns?
Like the snaps packages and uninstalling snaps?
Like there's a lot of concerns with the direction, I suppose, of Ubuntu and where it might go
the next turn, you know, the next major changes for it. Yeah, yeah, that's an interesting point.
The snaps, I still feel like snap is fairly new to me. I haven't used it that much, but
I have noticed that some of the packages that I install using Snap are much, much slower as they run than the typical APT installs on Ubuntu. And so
that's kind of unsatisfying. At the same time, the process of installation and removal is
fairly simple and you can do it user by user, which is nice. That's very different from the APT package management.
So I can see advantages and disadvantages,
but generally, if you're running your own Linux machine,
you're the sysadmin, I don't see too much reason
to use Snap over the traditional package managers.
Do you ever find yourself looking at non-Linux?
I'm not going to say Windows, but maybe Mac OS, for example.
Do you look over the fence? Do you get any envy? I know I'm a daily Mac user and I also love Linux,
but I'm not a Linux desktop user. So how do you live in that Linux? Is it just like you're just
so comfortable there? So everything you have is there and Mac OS is just never have a place for
you? Oh gosh, I've used pretty much every OS out there at this point.
I used to be a big Commodore Amiga fan.
So I used that for many years, a lovely multitasking operating system in PC format.
So that was a lot of fun.
Before that, I was a Windows user and also after that.
Because, you know, when you work in industry, you don't always have the choice of which
OS is on your desk.
Today, I'm using Mac, Linux, and Chrome OS every day.
Cool.
So in your blog post, you're writing about why you wrote this new book, Efficient Linux at the Command Line.
And you mentioned how you were working with one of your colleagues there at Google on Python, and you were watching them kind of do Python code.
And just the way that they were going about it,
I think it was like quitting the editor, going back to the command line,
probably up-arrowing to some test command or something, executing that,
and then relaunching the editor, and then finding their spot again.
It was kind of driving you nuts, because there's just better ways to do it, right?
There are?
That resonated with me, because I cut my teeth on SSHing into a machine
and coding on that machine in Vim.
Not because I wanted to, but because my teacher in college made us, which I'm appreciative for now.
But I remember just being like, I don't even know what this is. It's just a prompt. And I'm like, I'll
learn a few commands. I learned how to cd. I learned how to nano back then.
Eventually he made us use Vim and I learned Vim,
but there's so much I didn't know.
And then I realized you could up arrow to old commands and I was like, okay, now we're talking, you know?
And I learned the history command.
So then I realized you could take history,
which is your history of commands you've typed before.
And then you could like pipe it to grep
and search for something.
You start to learn these things slowly,
but that was back in the early 2000s. And I'm telling
you, just a few months ago, I learned about Ctrl-R, which all nerds already know about,
but somehow I just never knew about it. And you can just start typing
and fuzzy match and hit Enter. Anyway, my efficiency's up, even after 20 years of doing this stuff.
In the last year, I just doubled. And so I was reading that and I was like, okay,
Daniel's onto something here. There's so much efficiency gains you can have if you just have
someone tell you, here's how to do it. You have totally hit the nail on the head there.
I got into efficient command line use largely because of the experiences like the one that you
just cited
about the engineer who was quitting the editor and restarting it and wasn't aware of job control,
where you can suspend and resume commands. And I should mention that was not at Google,
by the way. My Google colleagues are wonderfully adept at Linux. But it's amazing how many folks who use Linux have learned it through trial and
error. And it's a fun operating system. And if you're a hobbyist and you want to try
something out, that's great. But at a certain point, it will help to be a little more formal
so you can learn these capabilities like Control-R, the bash shell feature you just mentioned that searches
dynamically through your command history. It's so fast, right?
It's so much better. Yeah.
Yeah.
Why didn't anybody tell me about this for years?
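The history-plus-grep idea can be sketched like this; the history file contents here are invented just to make the demo self-contained:

```shell
# bash records past commands in $HISTFILE (~/.bash_history by default).
# A stand-in file keeps the demo self-contained:
histfile=$(mktemp)
printf '%s\n' 'ls -l' 'ssh build-server' 'git status' > "$histfile"

# The everyday interactive version of this is:  history | grep ssh
grep ssh "$histfile"

# Even faster, interactively: press Ctrl-R and start typing; bash
# searches backward through your history as you type. Enter runs the
# match; pressing Ctrl-R again jumps to an older one.
```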
Yeah. So there are probably, I can think of maybe three or four completely transformative things
that I learned at the command line that each one of them just made me so much more efficient and I
never looked back. And that's kind of why I wrote the book
Efficient Linux at the command line
because you can definitely double your speed
at the command line
and there are fairly straightforward ways to do it
if you're willing to put in five minutes or an hour
or whatever just to learn some new skills.
Well, let's start right there then.
I mean, if you've got three or four or five,
the big ones, the aha ones, where it's minimal learning curve but maximum productivity
boost, hit us with the top ones. Okay. All right. Before I jump into that,
let me just sort of set the context now. Okay. Command line interfaces. A lot of times people
think about them as kind of basic, you know, compared to these
wonderful windowing, icon-based OSes where you can click on something or tap and make things
happen. But actually, command line interfaces are the most interesting because every time you type
a command, you are solving a puzzle, maybe a little puzzle. You have a task that you want to do, like I want to list my files
or I want to edit this document or whatever it might be. And you have to invent on the spot a
command to do it. And sometimes it seems kind of trivial, like listing your files, you just type
ls and hit enter. But then there are other times where there is no Linux command for what you want to do. Like, let's say you're a Python programmer and you want to know how many Python files are in your current directory tree.
There is no single command that will count Python files.
So you have to take one command like ls or find and pipe it to a command like wc (word count) that counts lines in standard output.
And when you mash them together with a pipeline, you've invented a command that didn't exist before. And so this is one reason I
just find Linux in some way a really joyful user experience because you're constantly solving
puzzles. And who doesn't like puzzles in our community, right? It's fun. But the thing is,
it takes a little while to become a good puzzle solver. So when somebody asks you something a little more complicated, like,
what's the most common initial in the last names of the users in this system? There's absolutely
no command to find the most popular last initial. But with a pipeline of five or six commands,
you can do it. And if you can
instantly produce that pipeline, because you've been learning the concepts and so forth,
you can move really quickly and solve fairly complicated sounding challenges like puzzles
right at the command line. And those are the skills I try to teach in
Efficient Linux at the command line. But I do want to get to the question you asked too.
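The two puzzles above can be sketched as pipelines like these; the exact commands are one way of many, and the second pipeline assumes the fifth /etc/passwd field holds "First Last" full names, which varies a lot from system to system:

```shell
# Puzzle 1: how many Python files are under the current directory tree?
# find prints one matching path per line; wc -l counts those lines.
find . -name '*.py' | wc -l

# Puzzle 2: the most common first initial among users' last names.
cut -d: -f5 /etc/passwd |   # extract the full-name field
  awk '{print $NF}' |       # keep the last word (the last name)
  cut -c1 |                 # first character of each last name
  sort | uniq -c |          # tally each initial
  sort -rn | head -n1       # most frequent initial first
```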
Yeah, absolutely.
I think that's on point.
That's kind of the joy of it all,
is when you realize that you can actually send this to there
and now you've created this thing that does exactly what you wanted it to do
by just combining these three or four things together in novel ways.
It's joyful.
It's like you've put that last piece in the puzzle.
So I'm with you.
Cool.
So some of the techniques I've learned
that have really made a difference,
the first one's called command substitution.
And it is a way of taking the output of one command
and injecting it into the text of the command
that you are typing.
So it's not like a pipe
where you're sending standard output of command one
into standard input of command two.
But I'll give you an example.
Suppose you want to edit all the files
in the current directory that contain a particular word.
And that's not an uncommon problem.
You want to find a word and change it to something else
in all these files, and maybe you want to do it
interactively to make sure you're not having
any false positives on your matching.
So there is a command that will tell you all the names of the files that contain a particular word.
That would be grep -w for word matching and -l for just printing the file name.
So grep -wl.
And there's a command for editing files.
It could be nano or vim or emacs or what have you.
And you can combine these two pieces of information. Sometimes people use backticks
on the command line. So you would say the name of your editor, like emacs, and then backtick, grep
-wl and the word you're looking for, close backtick. And that will produce the list of names
as if you had typed them on the command line.
Everything between the backticks,
which is taken as a command,
is replaced by the output of that command.
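A minimal sketch of that grep-plus-editor combination; the file names and the search word are invented, and echo stands in for the editor so nothing interactive launches:

```shell
# Demo files (names and the word "deprecated" are made up):
cd "$(mktemp -d)"
printf 'this API is deprecated\n' > old.txt
printf 'all good here\n' > new.txt

# grep -w matches whole words; -l prints just the matching file names:
grep -wl deprecated *.txt

# Wrap that in backticks (or the nestable $(...) form) and the matching
# file names are substituted into the command line, so your editor opens
# exactly the files that contain the word:
#   vim `grep -wl deprecated *.txt`
#   vim $(grep -wl deprecated *.txt)
# echo makes the substitution visible without launching an editor:
echo editing: $(grep -wl deprecated *.txt)
```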
Yeah, that is really powerful.
I have one of those that I do all the time.
So I have various scripts in various folders
that are all in my path
that I've either written or whatever.
And sometimes I want to just edit and tweak a script
and then execute it.
But I don't really know where it lives
and I'm not anywhere near that.
And so I'll type which, W-H-I-C-H, right?
The which command and the script name.
And that will print out the entire path
to where that is on disk.
And then I'll take that
and I'll wrap it in backticks,
and I'll say vim, backtick, which, the script name, close backtick,
and it will then take the path and give it to vim.
And so that's an example of just quickly editing that file
without having to navigate to it or anything else
or even know where it is.
I just know that the output of which can be substituted in
to the command line for vim.
And that's cool.
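That trick, sketched with a hypothetical script named greet dropped onto the search path just for the demo:

```shell
# Hypothetical setup: a small script somewhere on $PATH.
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$dir/greet"
chmod +x "$dir/greet"
export PATH="$dir:$PATH"

# which prints the full path; command substitution hands it to any command:
cat `which greet`        # view the script without knowing where it lives
# vim `which greet`      # ...or open it in your editor the same way
```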
Yeah, that's exactly a perfect use case
for a command substitution.
And you can build some really complicated or complex computations using it.
And in fact, there are ways of nesting it so that you produce output that then produces
other output that then goes into your command line.
And we have lots of examples of that in my book.
And then there's another one that has a similar name to command substitution called process
substitution. And the first time I saw this one, I didn't know what the heck was going
on. I just didn't understand it. So I'll explain it and the understanding will come probably in a
minute as I'm speaking. Okay. So some commands don't work well in pipelines. An example would
be the move command. Like you don't pipe things into move, you're
moving files, right? Or the diff command, you're diffing the contents of two files. So these are
programs that really work just on disk files. But sometimes you want to use these kinds of commands
with the output of other programs or commands. I'll give you a specific example. Let's say you're
a Python programmer and you're trying to debug the flags that you're passing into your program. And so you run your
program twice, once with the flag and once without, and you want to compare the output.
So the slow way to do this is you run the command the first time, redirect the output to file one,
you run it a second time with your flag redirect to file two
and then you diff the two files and then you delete the two files that's how i would do it
So process substitution is this brilliant technique that I only learned
10 years ago. I guess with 40 years of Unix, I didn't know it for the first 30. Process substitution allows you to create
a sort of pretend file, a pretend disk file
that fits right into the command line for you.
So you wind up typing the word diff,
the first command that would have produced
your first output file,
the second command that would have produced
your second output file,
all in the same command line, but you're surrounding those two commands with a particular syntax.
It happens to be a less than sign, a left parenthesis, and at the other end, a right
parenthesis.
So it winds up looking like diff, less-than, left paren, first command, close paren, and then
a space, and then less-than, left paren, the second command, close paren.
And what that says is each of those two commands,
when they produce their output, that output will behave
as if it were in a disk file that doesn't have a name.
And diff will diff those two pieces of output
right there in a single command.
So inside the less-than-paren section,
like the substitution section, is your actual Python execution process,
right? In this case, in your example.
Yes. You'd say less-than, paren, python, whatever you want to run. Foo.py.
Right.
Close paren. And then less-than, paren, python foo.py dash-xyz, close paren.
Okay.
So it's effectively like taking the output from a process and like giving it some sort of virtual file thing
that only exists until the command ends kind of a thing?
Exactly, and that's why it's called process substitution.
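A runnable sketch of the diff scenario; the two runs are simulated with printf, and foo.py with its --xyz flag is hypothetical:

```shell
# Stand-ins for the two program runs being compared:
run1() { printf 'a\nb\n'; }   # stands in for: python foo.py
run2() { printf 'a\nc\n'; }   # stands in for: python foo.py --xyz

# Each <( ... ) runs its command and behaves like a (nameless) file
# containing that command's output. No temp files to create or delete:
diff <(run1) <(run2) || true  # non-zero exit just means "they differ"
```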
Okay, this is news to me.
This is cool.
Check it out, man.
Check it out, man.
I'm learning new stuff.
You can even do stuff like create a pretend file and copy it to a disk file.
You could say CP and then one of these funky process substitutions and then destination file.
And the initial file never formally exists on disk.
Okay.
But you copy it.
And that output can be whatever you want,
and it just winds up in this sort of pretend file.
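The copy variant can be sketched like this, assuming a Linux system where cp will read from a process substitution (GNU cp handles this):

```shell
# "Copy" the output of a command straight into a real disk file.
# The <( ... ) source never exists on disk under its own name:
out=$(mktemp)
cp <(printf 'pretend file contents\n') "$out"
cat "$out"
```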
I know that there are probably some Linux gurus out there
grimacing every time I say pretend file,
but for the purposes of our discussion here,
it's kind of a simplified description.
Really, it has to do more with file descriptors,
but we'll kind of skip over that.
Yeah.
Well, there's the practical knowledge of how to use the things,
and then there's the deeper knowledge of how all the things work.
And there's a place for both of them,
but you don't necessarily have to have the latter
in order to take advantage of the former.
So I'm happy just to learn the commands,
and then maybe eventually there's a reason why I dig deeper
and realize exactly what's going on.
But the beauty of it is when you don't necessarily have to.
That's like a good abstraction.
Okay, so process substitution. Okay, I've got one new thing to go try. Do you have any other
times where that might be useful? I mean, I think your example was a good one, of two Python
programs, comparing the output. Are there other ones that are kind of obvious, or does it
just wait until you're in that moment and you'll know it? I think what would be valuable to say here is kind of the concept.
When you are working with a command that requires a disk file,
and you want to send it information that's coming from standard output
rather than using a disk file,
this is the widget that connects those two things up.
Gotcha.
That's what I would say.
That's helpful.
So anytime you have a program that works
only with disk files,
this is a quick way
to make it work
with commands that
produce standard output.
And it would probably make sense with almost every command that follows the source file, destination file syntax.
You know, like MV
is like MV,
give me the source,
then give me the destination.
Diff, give me the left and give me the right, or whatever it is.
And they have that two-argument default versus just reading from standard in or something.
Or standard out, excuse me.
It'd probably be useful for all of those such commands.
All right, awesome.
So that's two.
We've got command substitution, process substitution.
I've got a question for you, Jared.
Yeah, go ahead, Adam.
This which command, Jared, and I guess to you as well, Dan. Speed draw, who answers first. When you type which and you're finding that file, you want
to find the path to that file, because command substitution, I believe, is the one
you're doing that with, right? Are you typing that file name out? Are you tabbing to discover that
file name? Because I'm thinking, if your file name is challenging and you have
to remember that file name, it's one more muscle memory to do in that process.
When you type that which command to find that file,
how are you doing that?
What's the exact keystrokes?
Yeah, the exact keystrokes.
So most of the time this is an executable file
because it's a script that I've written.
And so I know what it's called.
We have one called dbrecreate dev,
which just recreates the development environment.
And so that's in my path
somewhere and it's executable and so i can type db underscore tab right and it completely and
that will give me the full executable name and then i'll usually just ctrl a to go back to the
beginning of the line and type which space enter that would be my full i see move and then that
would give me the entire path plus the file name and then i wrap in back ticks and type them okay that makes a lot more sense i was thinking like if you're typing vim and then the would give me the entire path plus the file name. And then I wrap in backticks and type vim.
Okay.
That makes a lot more sense.
I was thinking like if you're typing vim and then the backtick
and then which, it's not going to complete that file name.
So then you have to remember it.
And then here you are, you know, hacking your time together well.
Like you're being efficient,
but then you're manually typing the name of this file name
and you have to remember it.
So you must just have like muscle memory of every file name you want to edit
this, you know, your pin folder, any executable file or whatever.
Yeah. These are executables that I use often or at least enough,
but the same thing would work with anything.
Like, the reason why you type which generally is because maybe there are multiple
ls's on your system, right? In my case,
there are, because I actually wrap the built-in in my own little function.
But in that case, you're kind of like, hey, which version of Postgres am I actually calling?
So like, which Postgres?
It's going to show you the full absolute path.
But if that executable happens to be a plain text file, which they almost always are, right?
Then you can just, in the case of my scripts, they are just text files.
Yeah.
Then you can just vim it, and there you go.
I love it.
That's a great little hack.
Now, Dan, is that how you would do it?
Is there a better way?
Given now you know his keystrokes to get there.
School us, Daniel. School us.
I like your trick.
In fact, one slight variation on it would be,
let's say you wanted to make a local copy of that script
that's somewhere out there in your search path.
So you could say cp backtick which name of script close backtick
and then a space and then a dot.
And that will produce the command line
cp full path to script dot
and make a copy for you
that you could then edit.
Copy in your local directory.
Yeah.
Yeah.
Good idea.
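Dan's variation, sketched the same way with a hypothetical greet script placed on the search path for the demo:

```shell
# Hypothetical setup: a small script somewhere on $PATH.
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$dir/greet"
chmod +x "$dir/greet"
export PATH="$dir:$PATH"

cd "$(mktemp -d)"
cp `which greet` .       # expands to: cp /full/path/to/greet .
ls greet                 # a local copy, ready to edit
```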
So ready for another one?
Yeah, man.
Let's give it some more.
Hit us.
So since you mentioned the search path, I want to talk about one that a lot of people
don't know about, but is a real game changer for navigating your file system.
And that is called the CD path.
I don't know if you're familiar with it, but think about what your search path is.
It's a list of directories where executable programs are kept.
So when you type just the bare name of a command at your command line, behind the scenes,
the shell searches that entire list of directories until it finds the first match and it runs
that program.
So ls is sitting in /usr/bin or your bin directory.
That's in your path.
You type ls and the shell finds the proper ls executable.
There is a second variable, not path, but cdpath,
that does the same thing for just the command cd.
Okay.
It searches a list of directories for a destination that you type.
So let's say that you've got a directory that you often visit of your own somewhere deep
in your home directory. Let's say it's your home directory and then you have a subdirectory called
finances. And under that, you have a subdirectory called bank. And you often go to the bank
subdirectory because you want to look stuff up about your finances, let's say. So it's home finances bank. If you are off
somewhere in the file system doing your work and you want to get to that deeply nested directory,
you have to type the full path. So cd tilde slash finances slash bank. And you can imagine
10 levels deep instead of two levels deep. The CD path is a shortcut that lets you say,
I've got a collection of directories that I often visit. So I want you to search for the
subdirectory I'm looking for in all of those directories till you find the first
match, just like you search for a command and find the first match. So I could set up a CD path that includes the finances directory.
And then no matter where I am in the file system, no matter where I am, I can type CD bank and it'll bring me to tilde slash finances slash bank.
Because the finances directory is in my CD path.
And if you are aware of this and you set up a CD path, let's
say, for all the first-level directories inside home, you can get to every directory two levels
deep that you own with a single cd, with no path typing.
I like that. Yeah, I know
there are third-party tools that are, you know, like cd on steroids, that provide that kind of thing.
I think there's one called Jump and one called zoxide.
But it's always like, teach yourself to type z instead of cd.
And it's like, well, I've been typing CD for 20 years, man.
I'm not going to stop typing it.
Yes, I know you can alias it.
But I didn't know that that was built in.
It's also nice, the portability, right? Like, is that just a standard thing in probably every Linux out there, versus, does this
machine have zoxide installed on it?
No, it doesn't.
Oh, now I've got to go get Rust or whatever it is.
I think it's probably precompiled, but you know what I'm saying.
So that's cool.
So CDPATH, you just set that in your environment.
Is it just similar to the path, a colon-separated list of names, or how do you do it?
Exactly, exactly.
And there are a couple of key items you can put into your CD path
that make it even more useful.
For example, you can put in the relative path dot dot.
And what that means is you can cd to any sibling directory,
because that's dot-dot first, and then back down to
a sibling. Oh. And so whereas the first explanation I gave you of CDPATH was about
getting to absolute paths quickly, you can also cd to any of your siblings immediately with
no path typing.
It feels like I'm going to get path overload or something at this point. Like, have I put so many...
No, it just works, unless you have identically named subdirectories in some of these other directories.
It can, right? You have a... not a race condition, but you can have, you know, the first one
wins. But the number of those, compared to the utility of doing this, is really small.
the dot dot example is really helpful for example when, when you're programming. Let's say you've
got a bin directory, a source directory, a lib directory, an et cetera directory, all local in
your current directory. And you can jump back and forth between them just by typing cd et cetera,
cd bin, cd lib. There's none of this dot dot stuff. It happens for you because it's in the CD path.
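A self-contained CDPATH sketch; the throwaway directory tree stands in for the finances and project examples above:

```shell
# A throwaway directory tree for the demo:
base=$(mktemp -d)
mkdir -p "$base/finances/bank" "$base/project/src" "$base/project/lib"

# Colon-separated, searched in order: "." keeps plain cd behavior,
# ".." reaches sibling directories by bare name, and the rest are
# frequently visited parent directories.
export CDPATH=".:..:$base:$base/finances"

cd "$base/project/src"
cd lib    # found via "..": a sibling, no path typing
cd bank   # found via $base/finances, from anywhere in the filesystem
pwd       # now in .../finances/bank
```

Note that when cd resolves a name through a CDPATH entry other than ".", it prints the directory it landed in, which is a handy confirmation.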
This is a ChangeLog Newsbreak.
Can you trust ChatGPT's package recommendations?
Not so much.
The team at Vulcan Cyber have published a new security threat vector they're calling AI Package Hallucination. It relies on the fact that ChatGPT sometimes
answers questions with hallucinated sources, links, blogs, and statistics. It'll even generate
questionable fixes to CVEs and offer links to libraries that
don't actually exist. Quote, when the attacker finds a recommendation for an unpublished package,
they can publish their own malicious package in its place. The next time a user asks a similar
question, they may receive a recommendation from ChatGPT to use the now existing malicious package.
End quote.
These AI tools like ChatGPT are a real boost to developer productivity, but be careful out there.
You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com slash news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com slash news.
I could see that biting me if I have a bunch of code-generated projects, you know,
for people who have lots of projects, especially like imagine a Next.js
programmer who uses Next.js on every project, they're going to code gen, right? They're going to
skeleton out that app like seven times. And so maybe they have seven of those lib directories.
I'm not sure if Next.js has a lib, but if they do, then you're like, well, which one am I getting
into? And it's really just the first one you put in your CD path, I guess. But well, in this case,
if they are all siblings, the bin, et cetera, lib, and so forth, you will only
go to your local sibling if dot dot is in your path.
Oh, I see. You put dot dot first, and then you're always going to go.
So it's isolated to your current directory, essentially.
Exactly.
Your working directory.
Well, it hits that one first and then goes beyond, right?
Right, because the meaning of dot dot changes depending on your position in the file system.
Okay, I'm back. You got me back.
You got me back.
I was skeptical, but I'm back.
When you learn these hacks to sort of hack your Linux together
the way you want to, isn't it a challenge, though,
when you move to a different installation?
Let's say you're SSH-ing to a remote server
that does not have all these niceties set up.
Is this primarily, like a lot of this advice,
one, just good knowledge to have,
but then two, what you would probably do
on your local Linux desktop,
like the thing you sort of tweak out.
And every time you move to a new machine,
you're tweaking it out.
This is not something you do on every single machine
because, I mean, I can't imagine you're setting
your CD path on every single machine you're going to, right?
Like it's going to be tedious.
Well, that's a great question.
And in some sense,
the practical answer is GitHub. Make a GitHub account, store your dotfiles there. And no matter where you go, just do a git clone and you're set. If you have write access to the machines that you're using, and you're allowed to make those changes. But if you can, it's great to have your dotfiles travel with you like that.
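Sketching that dotfiles workflow, with a local stand-in repository where the GitHub URL would normally go:

```shell
# Build a stand-in "dotfiles" repo; in real life this lives on GitHub.
mkdir -p /tmp/dotfiles-origin && cd /tmp/dotfiles-origin
git init -q
printf 'CDPATH=..\n' > bashrc.extra
git add bashrc.extra
git -c user.email=you@example.com -c user.name=you commit -qm 'my dotfiles'

# On any new machine: one clone and your settings travel with you.
git clone -q /tmp/dotfiles-origin /tmp/dotfiles-clone
cat /tmp/dotfiles-clone/bashrc.extra
```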
You become dependent on
these hacks to some degree.
They're not really hacks, of course, but they're like
fine-tuning your own
machine.
Right.
If you SSH into some machine
and you're like, oh, I'm so used to just
not having to do the dot dot dance,
and now I have to.
Well, then you just edit your Bash RC and add cdpath equals dot dot,
and then you're good to go on that machine.
If you don't have Daniel's GitHub set up.
I'm just saying, like, it's not like these things are difficult to replicate.
I'm thinking on the advice that we got from Gary Bernhardt
on your Vim episode, Jared.
Like, he keeps his Vim vanilla for a lot of reasons.
I'm like, this is almost the same story.
Keep your Linux vanilla to some degree because then every Linux you go to is Linux.
Sure.
This is the first one where it's actually a config though.
I think Daniel has been just giving us actual command line skills, and now he has one that's a config. And you can export an environment variable ad hoc right there in your current session and have it disappear afterwards if you want.
I think I would also maybe add to what you said, Adam,
which is it's important to know how to use vanilla Linux.
Right.
Now, whether you want your primary machine to be vanilla Linux or not,
that's another question.
But vanilla VI, vanilla Emacs, that kind of stuff is a really good skill to have
because you will always be winding up on a machine where you don't have your dotfiles.
Right.
All right, that's three.
I like that this is one that's new to me as well.
So you're two for three on new ones for me.
Do you have any other big wins?
Oh, yeah.
I could keep going all afternoon.
But another one that was really transformative to me was it's a little bit
inspired by the Lisp community, where in Lisp, code and data are fairly equivalent because you
can emit strings and execute them as code and so forth. You can do similar things on the command
line because your shell, I'm going to assume your shell is Bash, just for ease of conversation.
Bash reads from standard input.
When you launch it, it's just a regular old command.
Linux launches it for you when you log in, so it's kind of hidden in that way.
But you can run B-A-S-H and hit enter, and it will do something.
It'll start a shell, and then you hit Control-D and exit, and the shell is done.
It's just a plain old command. And if you know this,
you can use that command to your advantage. For example, we all know the echo command. It just prints its arguments. So you can echo hello world and hello world prints on the screen.
You can also echo ls. Think about that for a minute. You echo ls, and all that does is print the word ls on the screen. Not very useful. But you can also say echo ls pipe bash. What do you think that does?
Probably tells bash to execute that command.
It executes the ls command that it received on standard in. That's a trivial example.
What this means is you can use other Linux commands
to create sequences of commands
that you would like to execute
and ultimately pipe them to Bash for execution.
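The echo-pipe-bash example is runnable exactly as described:

```shell
# A string on standard input becomes a command when piped to bash.
echo ls | bash           # bash reads the word "ls" and runs the ls command
echo 'echo hi' | bash    # prints: hi
```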
Now you may have seen things like this.
There are a few software packages out there
that ask you to run a curl command to download them and pipe them into a shell to install. And that always feels a little
risky when folks do that. But that's an example of sending the output of a Linux command into Bash.
And I'll give you a practical example of something you can do on your own, because
when you're running a curl command, you're getting stuff from some third party. You don't know what's in it. But if it's your own stuff that you're generating, you're
generating your own lines of text, you can check them, you know, 100 times before you send them to
bash. So here's an example that I ran into actually as I was writing Efficient Linux at the command
line. At a certain point, I had 10 or 12 chapter files, and I needed to create
a new file and insert it into the middle of the book, a new chapter. So my chapters 5 through 10
had to become chapters 6 through 11. And I could sit there and type six or seven move commands
by hand to make that happen, and move those files into names with integers that are one higher, and then create my new file. But I wanted to do this all in one shot, just for the challenge of it. And so I wrote a small script that generated the move commands I wanted. So it output the lines mv chapter 10 chapter 11, mv chapter 9 chapter 10, and so forth, down to mv chapter 5 chapter 6. And that script was actually one command on the command line. It wasn't like a loop or anything. And with that output, I just put at the end of it pipe into bash, and they ran.
Instead of, like, saving that as a file and then executing the file, you just...
Exactly.
Or instead of writing a loop, like for I going from 5 to 10, do this move command.
And it's probably easier to understand when you see it in the book.
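Here's one runnable sketch of that renaming trick. The file names are invented, and the seq-plus-awk pipeline is just one guess at the kind of one-liner Daniel describes:

```shell
# Hypothetical chapter files to renumber upward.
mkdir -p /tmp/book && cd /tmp/book
touch ch05.md ch06.md ch07.md

# Generate the mv commands, highest chapter first so nothing is overwritten.
seq 7 -1 5 | awk '{ printf "mv ch%02d.md ch%02d.md\n", $1, $1 + 1 }'

# Inspect the output above; when it looks right, pipe the same thing to bash.
seq 7 -1 5 | awk '{ printf "mv ch%02d.md ch%02d.md\n", $1, $1 + 1 }' | bash
ls    # ch06.md ch07.md ch08.md
```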
But the idea that you can generate any text you want and feed it to Bash for execution
means that you can create, you can use all kinds of Linux tricks
to produce sequences of commands that you can then run in one shot. Actually, now that I've
talked about process substitution and command substitution and piping to bash, there are
probably 15 different ways you can run commands to produce various effects. You've got those three,
you've got pipes, you've got a whole bunch of others that I haven't gone into yet. And the key here is flexibility. If you know
how to do something in multiple ways, your toolbox is set up for you to work more efficiently.
Like here's a trivial example. Suppose you just wanted to list all the Python files in your current directory. ls star.py. That's what 99.9%
of people would write. But if you've got 100 million files in your directory, that ls command
is going to choke. Actually, the shell is going to choke because it's got a limited amount of
buffer space to hold those file names after it expands that wildcard before it can pass them to LS.
What do you do now?
Well, you could also just list the files, just LS straight, no wildcards, no anything,
and pipe that to grep to find file names that end with .py.
That has no length limitations because now we're talking about lines of text, not one
line of text. And so the fact that you know two ways to list the files in your directory
means that you can do things when you run into trouble and one of them doesn't work. It's
flexibility. And that's a skill that I try to communicate through a lot of conceptual examples in the book.
I actually show, I think, 15 ways to list files,
Python files in your current directory.
And some of them are absolutely wacko, okay?
Right, like you probably would never do this,
but here's another one.
But if you know these techniques,
you will at some point find a use for them.
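A few of those ways side by side, in a throwaway directory with invented file names:

```shell
mkdir -p /tmp/listing-demo && cd /tmp/listing-demo
touch alpha.py beta.py notes.txt

ls *.py                           # the usual way: the shell expands the wildcard
ls | grep '\.py$'                 # no wildcard expansion, so no argument-length limit
find . -maxdepth 1 -name '*.py'   # a third route entirely
```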
Right.
So stepping back a moment to curl a URL and pipe it into bash.
Is that something that you would never do? Is that something you might do? Just your own personal
like risk profile then? Are you against it completely? Have you done it before? Would
you do it again? What do you think? Yeah, well, fortunately, instead of piping it to bash,
you can redirect it to a text file, right? And then you can read what the commands are that would be piped to bash. And at that point, you're faced with a choice because sometimes
the commands are complicated and you have to make a decision whether the source is trustworthy,
how much your time is worth, how much you need the results of the execution. So yeah, it's a coin toss. I mean, I would never do it for a system
that I'd never heard of before. But for example, if you're a Mac user and you use the brew command
for installing software, I think brew installs by this technique initially. But brew has been
around a long time and is highly trusted. And I suppose it's possible that the Brew website could be hacked and somebody could replace the commands in their installer.
But it's not likely to happen the one day that you are, you know, running that command.
So, yeah, I'll do it sometimes.
I wouldn't do it at work. But for myself, you know, if I'm really worried, maybe I'll spin up a virtual machine and run it and see what happens.
But security, you know, it's layers.
And you have to decide how much trust you're going to have in the sources that you work with.
Right.
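The "redirect to a text file and read it first" pattern looks like this; the URL is a placeholder, and a harmless local script stands in for the download so the steps are runnable:

```shell
# In real life, step 0 would be something like:
#   curl -fsSL https://example.com/install.sh -o /tmp/install.sh
# Stand-in for the download:
printf 'echo installed\n' > /tmp/install.sh

cat /tmp/install.sh     # step 1: read exactly what would have been piped to bash
bash /tmp/install.sh    # step 2: run it only once you trust it; prints: installed
```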
I was actually at the homebrew site.
I was going to ask you about this.
I was going to say unpack exactly what this does for us since you mentioned.
I think it's pretty straightforward, though.
It's slash bin slash bash space dash C and then a dollar sign, obviously.
Open parentheses and then curl command.
Then it's dash, lowercase f, lowercase s, uppercase S, uppercase L.
And then obviously the string,
which is the URL to the .sh script that's on GitHub.
And I suppose, like, to your credit, it might be a lookalike domain versus the real githubusercontent.com.
Like, they could have, like, hacked the website
and redirected where this path might be
or where they just hacked the website
versus the actual repository potentially.
But the point is like unpack that command.
Like what does it do?
What is that command to install Homebrew doing? Which is, like, what most people do.
I almost, I just go there and copy it and run this.
You know, I've trusted it every single day.
I'm so glad you brought up that example because it has a couple of really nice techniques
in it from a command line standpoint.
One of them is I'll mention the dollar sign parenthesis that you saw.
That is actually also command substitution.
But instead of backticks, it's using a bash-specific syntax of dollar paren.
What's nice about the dollar paren syntax is it is nestable.
You can have command substitutions within command
substitutions, you know, if you want to, you know, have a real exciting day command line-wise. So
the inner part of what that command is doing is substituting into the command line the output of
some other set of commands. And that string that is being produced by command substitution is being handed as an argument to Bash with the dash little c option.
And bash dash c is a very interesting and helpful construction.
It tells Bash to execute whatever is in that string as a command. So you could say, as a trivial example, bash-c,
in quotes, echo hello world, and bash will execute echo hello world, and you see hello world, or
bash-c ls. That's another way of running the ls command. ls is just a string. It's being handed
to bash with the dash c option, meaning execute me.
And I'll give a really great example of using bash dash C in a minute that you may recognize.
But what that command is doing is saying, hello, you know, homebrew.
I'm taking a command that you're providing to me.
I'm using a command substitution to insert new text onto a command line.
And I'm handing that as a string to bash to execute.
So there's two levels of execution going on there.
There's within the $ parenthesis,
there's an execution happening to produce a string,
and then that string is being handed to the bash command explicitly.
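That two-level structure in miniature, with a harmless echo in place of Homebrew's curl:

```shell
# Level 1: $(...) runs a command and substitutes its output as a string.
# Level 2: bash -c executes that string as a command line.
/bin/bash -c "$(echo 'echo hello from a generated command')"

# Unlike backticks, $(...) nests cleanly.
echo "outer $(echo "inner $(echo deepest)")"    # prints: outer inner deepest
```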
So it's kind of like as if you would curl it, this URL,
and it prints it out as standard out, right?
It's as if that, because at that point it's a string,
and this program, or this .sh file, this executable, it's executable in the repository, but you grab it as a string, and you're saying to bash, just execute this string, which is why it's also beautiful as well as dangerous.
Yeah. It's, here Bash, blindly execute this string I haven't read. Whatever comes back from this URL, go ahead and run it.
Exactly.
Who wants to play a game of who-knows-their-curl-flags-best? Because there's lowercase f, lowercase s...
Not me.
I'm actually, I'm a Wget fan.
I really like Wget.
Yeah.
You don't like to have to specify the special flag to save it to a file.
You just want it to save to a file right away?
Is that your Wget stance?
Pretty much, yeah.
Just grab a file from a URL and save it locally.
That's what I like.
With curl, you have to use the dash O option
or redirect to a file or whatever.
I probably use Wget a lot more often,
but curl is extremely useful.
It's just I don't have the options memorized; I do a little better with Wget.
All right, I'll try to look them up real quick.
So the dash F is the fail flag,
which tells it to fail fast with no output.
So if the HTTP response doesn't come back,
it's not going to barf.
It's going to just fail quietly, it seems.
Lowercase s. Any guesses? It's probably silent. Don't print error messages. I'm now scrolling.
This is a long man page. Daniel Stenberg, you've been adding lots of flags. Lowercase s, silent.
Good call there, Dan. Capital S, show errors. So it's used with silent. It makes curl show an error message if it fails.
So fail fast,
show the errors,
but be quiet otherwise.
And then dash L is the location flag.
If the server reports that the requested page has moved to a different
location,
this option will make curl redo the request on the new place.
So I guess it's kind of like a follow redirects kind of thing.
Yeah.
Which is interesting, I guess. I wonder why it would have to specify that.
I thought curl would follow redirects by default,
but I'm getting a little bit upstream there.
Apparently the Homebrew folks like that flag enough to put it on their default installer.
Yeah, I dig this though.
I'm glad you broke this down because people do this every day
and you have to be like, okay, which of these options... If you had to leave one out, could you leave the L out, for example? And maybe you could, if you're like, I don't want to follow redirects. So I know the curl command enough, and its options enough, to say, okay, I trust this. But as I copy this from the brew.sh site and throw it into my terminal, I then edit that command to remove the L, because I just want to trust the single destination only, right?
Yeah.
Which gives you, to Dan's credit, like superpowers, right?
You understand your tools enough and their options so that when you do it at runtime,
you have choices on how to do it three, four different ways or not at all.
So since you were kind enough to create a curl options puzzle, I'd like to pose
a puzzle as well. Sweet. And this will be related to what we were just talking about. So suppose
you've created a wonderful program and you would like to have it send its output into a log file.
So you might type the name of your program, greater than slash var slash log slash like my
file dot log. And of course, that's going to fail because you don't have write permission in the log
directory. So you throw a sudo in front of it and you type sudo my program greater than path to the
log file. That should do it, right? Well. But it doesn't.
It fails.
It fails.
And why does it fail?
That's a good question.
You're escalating your privileges,
but you're actually,
I'll say you're not actually switching users.
I don't know why it would fail.
My intuition is that it would fail,
but I don't know why the intuition is that way.
Adam, do you have any guesses?
Maybe because the output,
it's maybe sudo on the first thing,
but not on the second thing.
You are close.
You're on the right track.
The sudo command,
let's make it a little more concrete.
I'll give my command a name.
Let's just say it's the who command,
which prints a list of users on the system.
So sudo who,
sounds like a Dr. Seuss book,
sudo who greater than var log my file dot log. sudo does escalate the privileges of the who
command, but the greater than symbol is a construct of the shell, not of the who command. So you have
to escalate, you have to give root privileges for the whole command line.
And the best way to do that is our friend bash-c.
Okay.
Just like in Homebrew.
So you say sudo bash-c, quote, who greater than var log myfile.log, close quote.
And that will work, because now the shell that is being invoked to run the string inside the quotes has root privileges.
Gotcha. So I've run into that inside of crontabs. Like, it works when I'm running it from my environment, but inside cron it doesn't work anymore. And so then you're like, well, just /bin/bash dash c and then do it, and it's going to work. I never knew why that would actually work. I'm just like, that's my fix, to just do that every time.
Yep. That makes sense. So now you have the concepts for that. And it's one of, you know, 15 or so ways I mentioned of running commands:
command substitution, process substitution, piping to bash, pipelines in general, plain old commands, bash -c,
and there's like seven or eight others that once you know them,
you have that flexibility that I was mentioning to you.
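The shape of that sudo fix, sketched with a safe stand-in (no actual sudo needed here, and the log path is an example):

```shell
# Broken form:   sudo who > /var/log/myfile.log
#   "who" runs as root, but ">" is opened by YOUR shell, as you.
# Working form:  sudo bash -c 'who > /var/log/myfile.log'
#   the whole command line, redirection included, runs in the escalated shell.

# Same structure without sudo, safe to run anywhere:
bash -c 'date > /tmp/sudo-demo.log'
cat /tmp/sudo-demo.log
```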
Well, it's funny that Jared's used this plenty of times.
I didn't know about it at all.
However, I've used it every time I've installed Homebrew, so there you go.
But he didn't understand, you know, why it worked, right? It's the why that's really important.
And I really hate to go this far into a show and not mention chat GPT.
Like chat GPT has taught me so much about bash, about, you know, the shell, whether
it's ZSH or a bash shell.
And like, that's just a cool thing.
Like now you can kind of learn if you were like, hey, I need to execute something inside
of a cron job
and I need to make the whole command use the sudo command.
Well, the ChatGPT LLM might tell you exactly what you just did, Dan.
But, you know, here we are having to wait to do this show for 14 years
and finally get you on to explain why bash-c does that.
I'm glad you brought up chat GPT
because I've seen a number of articles recently
about exactly what you were describing,
asking chat GPT for a Linux command.
So you give an English description,
a description in whatever your home language is,
and it tells you what command to run.
And in every single one of the articles that I've seen,
there's been at least one fatal, dangerous command produced by ChatGPT that is not noticed by the writer of the article.
I'll give you one example.
I saw one article where a writer asked for, I think, a chmod command to make all the files in their current directory read-only. And I don't remember exactly the way they phrased it to chat GPT,
but what it returned to the person was a recursive command
that changes the permissions on all files in the entire tree.
And the way the question was phrased,
it could have meant either one, just the current directory,
or current directory and all subdirectories.
And if you are just taking instruction from chat GPT and don't know what the options mean,
you can destroy the permissions on far more files than you meant to change with no way to restore them.
Yeah. I've been using it long enough now that I'm just getting more and more skeptical of its responses because it's been wrong enough.
I think it's best at giving you information
that you once knew because then you're like,
yep, that's it.
I couldn't remember it, but you got it right.
But then when it gives you the one that's wrong,
you're like, no, that's wrong.
Like you have to actually know that it's wrong
and able to be able to use it with confidence
because it's been wrong so many times now for me
that I'm just like, I'm not even gonna ask you anymore, because you get it wrong nine out of 10. It's like it started off much better. I don't know what's going on. Which reminds me, did you guys hear
about this whole chat GPT package hallucination security vulnerability going on right now?
So I put it in Changelog News this week.
I missed it. I didn't listen yet.
Yeah, this is kind of scary. So there's a team at Vulcan, which is like an InfoSec company, I believe. When you ask ChatGPT to recommend a third-party library, it will sometimes hallucinate fake libraries that don't actually
exist. And so malicious attackers will go out and they'll squat those libraries and they'll make
them exist and they'll put their own malicious code into it. It's called AI package hallucination
vulnerability. That's pretty bad. I mean, evil, evil, evil, like a whole new thing that didn't
exist. Yeah. I mean, this all kind of harkens back to something we talked about earlier, which is, you know,
having that conceptual knowledge, maybe you learned Linux through trial and error.
And at a certain point, it's very valuable to get those concepts because once you have
them, you're just so much better equipped to be able to evaluate the answers that you're
getting back from a bot or what have you,
in addition to the ability to create commands with more flexibility.
Hopefully they will get better from here, but potentially not. Time will tell.
So these hallucinations are well known with what they're going to hallucinate.
How is the attacker learning about the packages that are being hallucinated?
The attacker uses the tool in order to force it to hallucinate something.
And then it goes and puts something in that location
and just waits for somebody else to hit it
with the same line of questioning, basically.
Like that's their attack vector.
Wow.
Yeah.
What question would the NPM world have commonly
that I can leverage as an attack vector?
How would I left pad a string with a bunch of spaces?
Wow.
That's a little joke for those of us who remember left-pad.
No, that really is interesting.
And to my credit, I do recall listening to that part of it.
So I do recall the hallucination part.
Okay.
I didn't listen to the full episode yet, though, but I did listen to that part.
Regardless.
Yes.
I had to get my back.
I listen to our shows.
I'm a listener.
All good.
As well as a host.
So, Daniel, going back to the beginning of our show,
I brought up that example from your post
with the Python dev who doesn't work at Google,
but works on Python nonetheless,
who was doing kind of the dorky style back and forth.
And you mentioned there's a much better way.
And Adam said, there is?
And then we said, yes, there is.
But we never actually explained it.
Do you want to launch back around to that?
You were talking about job control.
I want to learn this too, so please explain it.
You want to learn this. This is good.
Yeah, so the use case we're talking about
is you are working in a single terminal,
so maybe you're over an SSH connection to a server.
Because if you're able to create multiple windows,
this whole problem goes away.
Because you can compile in one window and edit in another.
So what this
engineer was doing was in a single window, they would jump into their editor. They would fix
whatever the next thing they wanted to do in their code. They'd quit the editor. They would run the
program or compile or whatever they needed to do and see what happened. And then they would restart
their editor, find where they had been before and continue editing. So there was a lot of stopping and starting and reestablishing the context of where they had previously been.
But the Linux shells all have a feature called job control,
which allows you to temporarily suspend commands and bring them back into the foreground, as they're called.
So when you're in your editor,
for example, you can type a keystroke that will cause the shell to suspend the editor,
which will just give you your prompt back. The editor is still running in memory. I should say
it's still in memory, but it's been stopped. It has a different process state. It's stopped.
And then you can run your Python program or compile or
whatever you want. And when you want to go back to restore the state of where you'd been, you simply
type FG, short for foreground, and hit enter. And it brings back into the foreground the process
that had been suspended, which is your editor. And so that is a much quicker way to jump back
where you were when you're editing than quitting and restarting and trying to find where you were. And that particular individual
shaved hours off of their coding time from this. Oh my gosh. I can only imagine the fatigue.
Yeah. Right. So this one, Adam, you mentioned Gary Bernhardt on our Vim episode.
Right. Remember he did that Vim with me video and he uses this
extensively and his fingers
are so fast at it that he just
control Z's and then FG's
control Z run a test FG and he does it so
fast that you have to like stop him and say
would you just show me what you did there because he's like hopping
back and forth between Vim and the command line
and
it is the fastest thing, in terms of navigating in a single window.
So what's the command to get out of Vim, then, to do this? Control-Z?
Right, yeah. Control-Z, like zebra. That's right. And that will background it and send you back to the command line, and then fg to foreground it.
Wow. So I just had Vim open, I was prepared for this. I Control-Z'd just now, and it says, I guess, the process number, the PID, suspended Vim, and then the path to my file.
And then so to get back, it's just FG?
Yep, FG.
And if you happen to have multiple processes all suspended in the same shell, each of them has an integer job ID associated with it that you can refer to with the FG command. So if you want to resume
job number three, it would be FG space percent three. And that would bring job three back
into the foreground. Now, jobs and processes are different things. You're familiar with process
IDs. You type PS, you see the various process IDs, and those are known to Linux. Job IDs are only known to your running shell.
Linux operating system doesn't know about them.
So within a single shell instance
that you're running interactively,
every command you launch is a job as well.
And if you have a long running job
and you happen to control Z it,
to suspend it,
it will have a job ID that you can access
and put it in the foreground, throw it into the background,
do what you like.
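Interactively this is all Ctrl-Z, fg, and bg; the same suspend-and-resume can be watched from a script using the signals behind them, with sleep standing in for your editor:

```shell
# Start a stand-in long-running job.
sleep 30 &
pid=$!

kill -STOP "$pid"    # the signal Ctrl-Z sends: suspend the process
sleep 1
state=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "$state"        # begins with T: the process is stopped, not gone

kill -CONT "$pid"    # what fg/bg trigger behind the scenes: resume it
kill "$pid"          # clean up
```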
So this isn't useful in the case of losing a connection
to a remote server.
Because that job isn't going to sit around in memory
if your connection disappears.
Whereas you have tools like GNU Screen or Tmux
where you can attach and detach
those sessions and they persist between connections. Right. But this is more just like
more ephemeral than that. If you lose your shell, you lose your jobs, basically. Is that right?
Yes, that's pretty much it. The fact that you mentioned remote connections is also interesting,
because when you SSH to a remote server, sometimes you want to suspend
that SSH connection and do a bunch of stuff and then resume it. And you can also use job control
for that. But because you are running SSH on a remote server, you need to disambiguate whether
your Control-Z is for a process on the remote server, or if you want to actually suspend SSH. And for that, you need to type the SSH escape character, which is a tilde at the start of a line. So, like, Enter, then tilde, and the next character you type will be a command to the SSH process, and you can suspend it. So tilde Control-Z will suspend an SSH process,
and then ordinary FG will just resume it again.
And that's super useful, especially if you just got one
terminal in front of you or you're working on your phone or something
over an SSH connection.
That can be good.
I think this is also a good argument for something like Screen
or Tmux, because they provide, like, a suite of tools specifically for this, as well as cool stuff like collaborating on a single terminal, connecting two people to one. Cool stuff like that, which is kind of above and beyond what the foreground thing is. But this is just great, especially if you're just Vim-ing away and you want to do something real quick and then get right back to it super fast.
Yeah. Or Nano-ing away, or Emacs-ing away. I don't want to be particular here.
Vim, Jared. Vim. I've moved to Vim. I used to be a vi guy, or a Nano person. And then with all the Vim conversations we've had over the last couple years, I was like, I should just learn the basics really, really well and just force myself. And I did that. What do you use, Daniel?
I use Emacs.
I started on an editor called Jove,
which is short for Jonathan's own version of Emacs.
It's a really simplified Emacs clone.
At a certain point, it was no longer maintained.
I don't know what happened to Jonathan.
So I started using GNU Emacs.
And when I started using it,
CPUs were still fairly slow
and memory was still fairly limited.
So it was not a particularly pleasant experience.
But today, it's just as fast as any other editor.
And I really like the programming language built in.
I read my mail in Emacs, which is really nice.
Well, it gives you Emacs as a text editor for composing your replies and stuff.
So, do I use Vim? I certainly have used it.
And if you like that kind of mode switching model, then it's a wonderful editor.
And it is also very flexible and configurable.
But I've been using Emacs for so many years, the muscle movements are hard-coded into my fingers at this point.
And, you know, with my eyes closed, I can edit.
So I just keep using it.
Well, you're not alone in that. We come across Emacs users from time to time. They're less vocal and passionate than Vim folks, who like to talk about it a lot. I feel like Emacs users are just, like, the silent majority, you know? They're just out there getting the work done.
Yeah. But I have never really used Emacs. Like I said, back in college I was just forced into Vim by my teacher.
And once I got over that ridiculous learning curve,
it's like, hey, sunk cost fallacy.
I put so much work into learning this.
Why would I try to go do something else?
Like I already know it now.
Yeah, well, what's lovely about Vim
is that a lot of the keystrokes that you use
are also usable in other Linux commands,
like sed and ed.
Actually, I lied.
Ed was my first editor, which is a line editor where you don't see what you're working on.
And, you know, you get a prompt and you type some weird substitution command and it just works.
And when I was in college, I had to use ed for like writing assembly language.
And that was quite an experience. I'm sure I've forgotten all my assembly at this point.
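For anyone who has never seen it, here's a rough sketch of what editing in ed looks like, non-interactively, since you never see the file while you work (the file name and contents are made up for illustration):

```shell
# Create a throwaway file to edit.
printf 'hello world\n' > /tmp/ed-demo.txt

# ed reads commands from stdin; -s suppresses its byte-count chatter.
# s/old/new/ substitutes on the current line; w writes the file; q quits.
printf '%s\n' 's/world/there/' w q | ed -s /tmp/ed-demo.txt

cat /tmp/ed-demo.txt   # the file now reads: hello there
```

The weird-substitution-command-and-it-just-works experience he describes is exactly that `s/world/there/` line, typed blind.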
But if you know Vim, then using a stream editor like sed, S-E-D,
which is a fantastic command that's so flexible
for producing powerful effects on the command line,
then you're at an advantage if you use Vim
because you already know some of the syntax.
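As a rough sketch of that carryover, the s/old/new/ substitution you'd type in Vim's command mode works almost verbatim in sed, just applied to a stream instead of an open buffer (the sample text is invented):

```shell
# The familiar s/old/new/ from vim's :s command, applied to a pipeline:
echo 'the quick brown fox' | sed 's/quick/slow/'
# prints: the slow brown fox

# Vim-style addressing carries over too: restrict the substitution
# to lines matching a pattern, with g for every occurrence on the line:
printf 'aaa\nbbb\naaa\n' | sed '/aaa/s/a/x/g'
# prints three lines: xxx, bbb, xxx
```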
Well said. Do we have any more big wins?
I know we're going to get progressively smaller,
but they're still going to be interesting.
So I'll just keep going, Daniel. You'll have to tell us.
Yeah, sure.
Adam, we have to go to the bathroom. We'll just keep on going.
I've got one other win,
and this was also one of the transformative concepts.
And that was the use of what's called the directory stack for moving around the file system.
So these are the commands called pushd, popd, and dirs.
Have you run across those?
I'm aware of them.
I think mostly in reference with some of these other like third-party CD replacements because they kind of have that whole stack thing
and they're like, you don't have to use pushd and popd,
but please expand.
Well, this changed my whole use of Linux. I think I type pushd and popd more often than I type cd these days.
But before I geek out too much,
let's give a little bit of context.
So normally when you are working in Linux, you can cd by an absolute path, you know, starting with a slash, all the way from the root up. So you can say cd /a/b/c/d/e/f and move long distances in the file system.
The problem is when you do that and you need to get back where you were, you've got to keep typing these really long paths.
And honestly, one of the biggest obstacles, I think, to new Linux users is typing all these long paths.
And so the directory stack is a way to reduce your typing significantly when you are working in a collection of directories that you use frequently. So an
example I give in the book Efficient Linux at the command line goes like this. Let's say you're a
web developer and web developers frequently work in at least four different common directories
in a Linux system. You've got the directory where you're developing. Okay, that could be anywhere in
your home directory, let's say.
Then you've got the Apache directory where you've got to configure your web server.
That's /etc/apache2 or whatever it might be.
Then you've got a directory where SSL certificates are kept.
That's like /etc/ssl/certs.
And then you've got the directory where your web files get deployed, which is like /var/www/blah blah blah.
So you've got to keep moving between all these different directories while you're working.
And if you have four windows open, you know, one to each directory, that's great.
But if you're over an SSH connection or you just want to work in one window, you got a lot of CDing and a lot of slashes.
The directory stack gives you a quick way to say, bring me back to that place I was working a minute ago. It doesn't matter where it was or what it was called, just get me back. The simplest use of this is to type cd and then a hyphen, just a dash: cd -.
A lot of people know that one. That's a good trick. I didn't know that was part of this system, but yeah, I use that all the time.
I'm not sure that it is, but it's all part of the shell, so I suppose in some sense it's part of the system. But that's the easy one that says, take me back where I was. So if you were in your home work directory and you need to go to the Apache directory, you can type cd -, cd -, cd -, and bounce back and forth between those two. The problem is, if you cd anywhere else, now you've lost context of where you are and you can't do that anymore.
And now you're going to have to type some long path to get you back.
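A minimal sketch of that bounce (the directory names here are hypothetical):

```shell
# Two directories you might bounce between while working:
mkdir -p /tmp/demo/project /tmp/demo/deploy

cd /tmp/demo/project   # start in one place
cd /tmp/demo/deploy    # jump somewhere else to do some work
cd -                   # back to /tmp/demo/project (cd - also prints where it took you)
cd -                   # and back to /tmp/demo/deploy again
```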
The directory stack is not limited to two directories.
It can be as many as you want, and they're arranged like a traditional computer science stack: it's last in, first out. You can keep pushing directories onto this stack, piling them up, and then popping them off to go back where you were. And so the command pushd, p-u-s-h-d, is a substitute for cd that says, cd where I want to go, but also push a directory onto a stack. And you keep doing pushds, the stack keeps growing, and then you can pop them off. And this is a great way to move around the file system, because not only can you push and pop, you can also swap. So you can have the same effect as cd -, but not lose your context. It's so great. Once you start using these commands, you will be absolutely hooked, and you'll probably want to alias them to shorter names than pushd and popd.
I was about to say, they seem too long to be using that often.
Yeah. So my pushd is gd, for like get directory, and pd for popd. And so, yeah, all the time. And then you can examine your stack, you can manipulate the stack, you can take things out of the middle of the stack. So it's really not a stack, it's more like a linked list, but you can still call it a stack. And this completely changed my navigation.
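Here's a minimal sketch of the stack in action, including the swap and the short aliases he describes (the directory names are made up; gd and pd mirror the alias names Daniel mentions):

```shell
# Three directories a web developer might rotate through:
mkdir -p /tmp/site/src /tmp/site/conf /tmp/site/certs

cd /tmp/site/src
pushd /tmp/site/conf    # cd to conf, remembering src on the stack
pushd /tmp/site/certs   # cd to certs; the stack is now: certs conf src
dirs                    # print the stack (last in, first out)

pushd                   # no arguments: swap the top two entries, like cd - without losing context
popd                    # pop back to the previous directory
popd                    # ...and again, landing where we started: /tmp/site/src

# Short aliases make these practical to type all day:
alias gd=pushd pd=popd
```

Note that pushd, popd, and dirs are shell builtins (bash, zsh), not external commands, which is how they can change your current directory at all.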
That and the CDPATH I mentioned earlier,
it's so easy now to get anywhere I need to go in very little typing.
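And for reference, the CDPATH trick looks roughly like this: it's a search path that lets a bare cd name find frequently used directories from anywhere (the paths here are invented):

```shell
# Directories you visit constantly, gathered under one parent:
mkdir -p /tmp/work/website /tmp/work/scripts

# CDPATH lists places cd should search, starting with the current directory:
export CDPATH=.:/tmp/work

cd website   # resolves to /tmp/work/website from anywhere (cd prints the full path it chose)
cd scripts   # likewise resolves to /tmp/work/scripts
```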
Have you ever made any videos of you doing this kind of stuff,
like over-the-shoulder kind of situation
where you essentially, instead of explaining to us how it works,
you demonstrate to us how it works?
Obviously a podcast would not work for this in audio form only, but have you ever done that where it's like in video
over your shoulder and you're explaining exactly what you did, but not in theory,
but in actual practice? I would love to do some videos. I'm not particularly adept at video making.
So I'd probably, obviously I can set up a phone and just talk or a screen recorder. So at some
point I'd like to do that, but at present
it's all in books and a couple of articles here and there. But in the books, what I do is I show
the output of every single stage of the command as it's being built. So let's say it's a 10 pipe
command. You know, I'll show what happens with the initial command and then after one pipe,
after two, after three, after four, and highlight what's different.
So I try to make it very, very educational and easy to read.
But I have not done the video way of doing things.
So I'd love to when I can find the time.
Well, I've seen somebody called The Primeagen.
If you've seen him out there, he's on YouTube.
Essentially took his terminal from vanilla Vim to phenomenal Vim, and you watch him, similar to Gary Bernhardt, Jared, as you're saying, with, you know, Ctrl-Z, fg, so fast. You see him move just so fast. And you want to not so much just see somebody be that fast, but you want to learn what they've learned to, you know, fine-tune their tool to
enable them to be that efficient. Just like you're saying with your book title, you know, being
efficient. And I just wonder like people who are like you all who know this so deeply and it's
ingrained in you, how do you, aside from writing it down, demonstrate it in a way that is replicable?
Like, could I go and watch a 10 video series
of Daniel giving me the 10 ways to be most efficient every day with them or not with them,
but like with, with Linux at large, you know, that would be transformative for so many people
because this video, you see it being demonstrated and all you do is emulate it until you get
mostly good at it, if not expert level at it.
Yeah, that would be great.
I would love to do something like that.
It's pretty quick to put a video together, but then you have to go back and edit it and take out the parts where you've made mistakes and so forth. And at the moment, I'm working on two other books.
So it's kind of hard to partition my time in an efficient way so that all that stuff can get done.
So much efficiency.
The reason why I also asked you that question is because it seems that your career track, which we have not talked about really at all, is you've been head of education of some sort.
You've been the person who's been in charge of ensuring that other teams in the organization have someone on their side to help them learn better.
You know, essentially an educator to,
if I understand your career path to some degree,
I would imagine with that kind of title and that kind of responsibility,
you have resources that says, okay, Dan is not great at video,
but somebody else is.
And they can essentially be your support.
And you're just a talent, you know?
And not that just in a negative way, but like all you got to do is be who you are and exude
what you exude every day, like you're doing this podcast.
But in a form that's, you know, people are going to listen to the show and they're going
to buy your book, but it's going to go so far because if you don't see it in the speed
that you can do it at and see the command you're running, it doesn't quite get that full fidelity of learning.
Yeah, that's a really good point.
And you're right.
In the workplace, there's a video team, there are technical writers and so forth who can jump in and help out with these kinds of things. I will mention that in both Efficient Linux at the Command Line and also the previous
Linux pocket guide, there are downloadable examples, which in a lot of cases will mimic
exactly the directory structure that the commands in the book are running in. So you can type those
commands and see exactly the same thing happen. And so that can also be a way of educating. It's
not video,
but it's a, it's an additional way. Yeah. So these are both O'Reilly books, right?
Yes. They're both O'Reilly books. I've been working with O'Reilly since the early nineties
and probably have done about 11 or 12 books with them at this point. I would imagine O'Reilly would
be all over this, right? Like these are great marketing materials for the books. They're like add-on things.
Yeah.
They have asked me
a number of times
to do online Linux courses
and the timing
has just never worked out.
So never say never.
You know, it could happen.
It will be fun.
But, you know,
only so many hours in the day.
For sure.
And two books in the works.
Anything you want to tease out on that?
Are they too far away to talk about?
I'll say at this point that one is a Linux related book
and the other one is completely different
from anything I've ever written before.
I mean, previous books I've done,
there's one on SSH.
SSH, The Definitive Guide,
is one that people may be familiar with.
There's another one on MediaWiki.
I'm very, very into the software that drives Wikipedia.
And I built some Wikipedia-based software in the past.
There's one called the Linux Security Cookbook,
which is a little about keeping your system secure.
But that one is fairly old at this point.
So for concepts, it's good,
but not necessarily for being up to date.
There's a Mac Terminal Pocket Guide, which is just like the Linux Pocket Guide, but it's Mac-specific. But this new one is more about software practice
in general within the world today. It's about how to be a responsible software engineer,
given all of the controversies that are swirling today around software that
tracks you and stuff like that, or the fact that every piece of software you write has a climate footprint of some kind, because it runs in a data center. So this book on responsible software
engineering is a fairly new direction for me. And that's something that I'm working on right now.
You'll definitely have to come back when that one is ready to print. We'd love to talk to you
about those topics. Cool. Absolutely. I almost had a book title for you.
Oh yeah? A suggestion at least. Let me see if you like this.
Cold Ice Cream and Hot Kisses.
You know, I don't think I can beat that one. My Silicon Valley folks are totally laughing right now, because of Gavin Belson in the show Silicon Valley.
I'm sorry, Jared.
I have to do this.
You don't have to do this.
Oh, gosh.
Ring the bell.
Yep.
There was a period of my life in which I would have rooted for the failure of Richard Hendricks.
That was a different Gavin Belson.
That was tech icon Gavin Belson, not literary icon Gavin Belson.
Since leaving Hooli, I've co-authored 37 adult romance novels.
Fonley Margot.
The Lighthouse Dancer.
Cold Ice Cream and Hot Kisses.
Over here, The Prince of Puget Sound.
Lastly, His Hazel Glance.
All international bestsellers.
Gavin Belson is in the show Silicon Valley, which is quite famous in our culture.
And he was a tech icon.
He ran the equivalent of what Google is.
It was Hooli.
And I think Hooli in many ways was synonymous with Google to some degree, because it was a search engine.
And so he ejected himself after he was sort of done with tech, and he wrote the book, Cold Ice Cream and Hot Kisses, which was a romance novel.
So when you said not at all about Linux, I was thinking, I've got a title for you, Dan.
I've got a title for you. I think he's going way outside of this.
And so then when it was not at all about that, I had to bring it in as a joke, because Silicon Valley. We have a sticker right here where I say that. To be fully explanatory, Dan: I mention the show Silicon Valley often, and that's for the laughs.
So there you go. That's funny. I think there may be a novel in me somewhere,
but I'll save that for retirement. Okay. All right. Fair enough. Well, Daniel's website,
danieljbarrett.com, there you'll see the books available, Efficient Linux at the Command Line.
That's the new one. That's what we've been talking about pretty much the whole show.
Available, of course, on Amazon, O'Reilly,
all the places.
Where's the best place people can buy this book to give you the most personal money?
Daniel?
Gosh.
Don't know the answer to that.
Yeah, I mean, Amazon's fine.
There's also bookshop.org.
I like bookshop.org
because they support independent book dealers.
Cool.
Yeah, I'd say however people find it most convenient to get it, that's wonderful.
So buy it however you like, but if you want to buy it Daniel's way, check out bookshop.org.
Supporting the independent, something that we are definitely about here as independent media creators ourselves.
Yeah, for sure.
I just want to say thank you for coming on the show, man.
This is awesome.
I've learned a lot.
I'm sure our listeners learned a lot.
We got a Silicon Valley reference in so the show can actually finish now
without being incomplete.
Adam, anything else to say before we?
No, I think that's really it.
I love the idea of being more efficient with Linux.
I loved how you described the path for anybody with Linux.
I'm much younger than you are in terms of usage.
You've got a lot more experience than I do with Linux and I find myself learning as I go. And that's, I think to some degree, the best way
you learn how to use Linux and the command line of Linux, because you kind of use it and learn it
as you go. And it's almost like you learn it at just the right time when you've experienced just
the right amount of pain to finally be like, I'm not going to close this editor and then go run my program and then reopen
this editor and go back to the line of code I was editing.
Like you have someone come along with a book like yours,
with the knowledge you have and express things we do every day to be more
efficient. And that's,
I think that's the best time to learn is when you're ready to learn it.
Yeah, that's a great point. And I think I'll say we are all blessed right now to
have, you know, the web in front of us that we can look up anything we want, you know, in the
moment of need, but it doesn't really build conceptual knowledge that's going to help you
when you're like not right next to the web, or if you need to do something really quickly.
And I think that if people can take five minutes or an hour or whatever to really dig into the conceptual aspects of the command line, they'll save so much time in the future.
You'll definitely get your time back many times over.
So highly recommend people do that.
Very cool.
Well, Dan, thank you so much for being so efficient with us.
We appreciate that.
It's been lovely speaking with you. Thank
you again. Really fun. And thanks. I know that I learned a few things. Okay. A lot of things in
this show. A lot of fun sitting down with Daniel, going through his book, going through, just
thinking about how to construct Linux commands and being more efficient,
being more fun, really, with your day-to-day usage of Linux, whichever flavor you choose,
of course. Check the show notes. There is a link to the book and all Daniel's books
in the show notes. So check that out. And coming up next week, our new friend,
Jake Zimmerman, throws down the gauntlet. He says, types will win in the end.
Of course, he's a little biased.
He's one of the maintainers behind Sorbet, a fast, powerful type checker that's designed for Ruby and is built with love at Stripe.
So we go deep on that topic next week.
Big thanks to our friends and our partners at Fastly, Fly, and also TypeSense.
And of course, those banging beats from Breakmaster Cylinder.
Love them.
Oh, one more thing.
Did you like my joke?
Cold ice cream and hot kisses.
I loved bringing that into the show.
Hopefully you laughed out loud.
I know I did during the show.
So much fun bringing it in Silicon Valley.
I do it as often as I can.
But hey, that's it.
This show is done.
Thanks again for tuning in. We will see you on Monday.