The Changelog: Software Development, Open Source - Building a secure Operating System (Redox OS) with Rust (Interview)
Episode Date: January 19, 2018

We talked with Jeremy Soller, the BDFL of Redox OS, a Unix-like operating system written in Rust, aiming to bring the innovations of Rust to a modern microkernel and full set of applications. In this episode we talk about: OS design principles, Jeremy's goals for Redox, why Rust, the microkernel, the filesystem, how Linux isn't secure enough, how he's funding his development, and a coding style in Rust called safe Rust.
Transcript
Discussion (0)
Bandwidth for Changelog is provided by Fastly. Learn more at fastly.com.
Error monitoring is provided by Rollbar. Learn more at rollbar.com.
And we're hosted on Linode servers. Head to linode.com slash changelog.
This episode is brought to you by Command Line Heroes, a new podcast from Red Hat.
In this podcast series, you'll hear true epic tales of the developers, hackers,
and open source rebels
revolutionizing the tech landscape. Here's a preview of the first episode called OS Wars,
part one. I'm Saron Yitbarek, and you're listening to Command Line Heroes, an original podcast from
Red Hat. What is a command line hero, you ask? Well, if you would rather make something than just use it,
if you believe developers have the power to build a better future,
if you want a world where we all get a say in how our technologies shape our lives,
then you, my friend, are a command line hero.
In this series, we bring you stories from the developers among us who are transforming tech from the command line up.
And who am I to be guiding you on this track?
Who is Saron Yitbarek? Well, actually, I'm guessing I'm a lot like you. I'm a developer
for starters, and everything I do depends on open source software. It's my world. The stories we
tell in this podcast are a way for me to get above the daily grind of my work and see that big
picture. I hope it does the same thing for you, too.
Hear true tales of makers who are transforming tech from the command line up.
Subscribe where you get your podcasts or visit redhat.com slash command line heroes.
Welcome back, everybody. This is a new year and this is The Changelog. Thank you for tuning in. My name is Adam Stachowiak, editor-in-chief of Changelog, and today Jerod and I are talking to Jeremy Soller, the BDFL for Redox OS. Redox is a Unix-like operating system written in Rust,
aiming to bring the innovations of Rust to a modern microkernel and full set of applications.
In this episode, we talk about OS design principles, Jeremy's goals for Redox, why Rust, the microkernel, the file system,
how Linux isn't secure enough, how Jeremy is funding his development, and a coding style in Rust called safe Rust.
So Jeremy, the obvious first question when we speak with somebody, I don't know if we've spoken with too many people who are writing an operating system, but I always just ask
why?
Why this huge undertaking?
Why are you, and others, writing Redox?
Well, that in many ways is the question people will ask, especially in the early days when there's not much work that has gone into it.
And, you know, when it was first announced, I didn't even know it would be announced.
But someone announced it on Reddit.
The first user of the operating system called Tiki, who also set up the chat server,
he put up a Reddit post saying, hey, look at this operating system written in Rust.
I was not ready at all.
And to be honest, at that point, I'm not sure I really had a direction.
To get started into why I wanted to do this,
a long time ago, I was making operating systems in my free time using Assembler, just as a hobby, and to try to get to understand how computers work at a lower level.
And Rust, when I first encountered Rust, really struck me as a language that would make all of the headaches
that I had writing operating system level stuff in Assembler go away.
So I just started toying around with a Rust bootloader.
Then I wrote a little graphics stack, had mouse input, keyboard input,
and it kind of ballooned from there.
But now I think we do have a purpose. The purpose of Redox is not to replace Linux or to replace the desktop operating systems
that are currently out there, but to augment them.
It's to provide an alternative that is secure from the ground up, that's built in a language
that has some provable security aspects, and also has an architecture because it is a microkernel that's designed to
be a little more secure and reliable with the default settings. So most of it comes out of
security, at least for the official reason, but for the unofficial reason, it is a lot of fun
to write in Rust and it is a lot of fun to write stuff at a low level. It's interesting. I was thinking back when I said, I don't think we talked to too many operating
systems developers. The last one that we did, Adam,
the show hasn't aired yet because we have to re-record because we had
Steve Klabnik on talking about intermezzOS, which is another operating system in Rust,
probably with extremely different goals than what you're doing, but interesting
to see how much is happening in the Rust community around OSs.
And I want to go back to Tiki because I was perusing the Redox book today in preparation for this,
and Tiki's name is all over that.
So it sounds like he's been a part of the community since day one, huh?
That's right.
He started the community with that Reddit post.
And the first commit to the repository was on April 20th, 2015. And only a couple months later, Tiki posted. And it kind of exploded from there. He's worked on a bunch of things. He's got his code in the core utils, in additional utilities we have called
extra utils, inside the shell, in the kernel, and now he's working on the file system.
And he wrote some very popular Rust libraries like Termion that is a terminal control library
that quite a lot of people are using for outputting stuff to the terminal and
using control characters and making nice, pretty terminal interfaces.
So after that initial Reddit post and the interest flared up, did you have a sense of
dread or just joy that all these people were suddenly interested in this operating system,
which was fledgling, you know, at the
moment, not like you said, not ready to even be announced. Was that a, was there a sense of dread
or was it mostly just excitement and, and, uh, spurring you on to move forward? Definitely both
because what there ends up being is, um, you know, that there are so many problems that need to be
fixed because you're still working on it.
At the time, it was a unikernel, and everything was running in the kernel, including all of the programs. They were just hard-coded kernel functions. Because I was just literally
trying to figure out how to compile Rust at the kernel level and run it. That was it.
I hadn't figured out how to launch applications
in user space with Rust and all these other things hadn't been figured out. So the fact
that there was so much, even at that state in development, there was so much interest
in it really probably changed the course of the project from being simply a hobby to being
my second job.
Were you looking for a second job?
Not at the time, but that's what it became.
Like most programmers, it's hard to turn it off, to stop programming.
And so you have side projects.
And this ended up soaking up every single side project I was working on. If it wasn't Redox, then there really wasn't a place for it. So it had to fit into that paradigm. And yeah, it's been amazing how many people wanted a Rust operating system.
And so immediately from there, I started working on the things that it needed to have.
It needed to have separation where drivers and processes were running in user space.
It needed to have a kernel that wasn't a hack. So we actually had to rewrite the kernel
about a year and a half ago
because all the memory management
had been done incorrectly.
And once I started trying to introduce concurrency
where more than one core
would be running kernel code at the same time,
things started breaking all over the place.
So I scrapped the kernel code and
rewrote it. And a lot of that was, Steve Klabnik has actually been very helpful in some of these areas, as well as Philipp Oppermann, who wrote a blog about writing Rust operating systems.
So I actually ripped off quite a lot of his memory management code for Redox. Hopefully he ripped off a lot of the stuff I had been working on in the early days to figure out how to set up the build environment. But, uh, yeah, I'll say great artists steal.

Sounds like there's a lot of that going on in programming, especially when somebody trailblazes. There's no problem in, you know, walking down that hewn-down trail. Nothing wrong with that.

Yeah, there are very few communities that really believe in sharing as much as the Rust community. Most everything is free software. There are only a few proprietary programs written in Rust that I know of, and those are in-house programs that we use to manage firmware here at System76.
The majority of stuff out there, though, is open source and MIT licensed.
So it's permissively licensed.
So it started off, you were tinkering.
It was a hobby.
Still, I guess, you call it a hobby, although, like you said, it's a second job at this point.
It's a very large hobby, one that probably takes most of your free time.
Didn't really have a goal in mind until you realized that you had to have a goal.
Now I'll read a little bit from just the opening section of the book here, maybe Tiki's words, maybe yours, but a very nice statement here. Redox is an attempt to make
a complete, fully functioning, general purpose operating system with a focus on safety, freedom,
reliability, correctness, and pragmatism. So I think that encapsulates it very well.
That sounds like, you know, almost serious business, with like a capital S, capital B. It has the breadth of it, and the, not the design goals, but the principles, and this attempt to build this, um, has it made the scope of the project sprawl? Because that's why I always ask why build an
operating system? Because not only is there depth in the technical expertise required,
you know, I'll ask you to describe the microkernel and stuff like that here in a little bit,
but just the breadth of things needed when you're starting from scratch, so to speak,
just to me is an overwhelming thought.
Absolutely.
And there are probably 10,000 hours of work just from my time into things that I've written for Redox.
It's an immense amount of labor, but the undertaking of that labor does have an end
goal that all the code that's written, provided it follows the coding style that we use,
which is safe Rust, ends up being something we can verify more easily for security properties than other code.

Tell us about safe Rust as opposed to a different style.
The coding style is very important in Rust. And actually what people don't realize is that Rust enforces your coding style, whereas other languages don't. Rust comes in and
prevents certain things from happening to an extreme. Passing around mutable pointers is
trivial in every other language. In Java, you can crash a program very easily. ConcurrentModificationException, right? In Rust, this is not possible. In Rust, you have to structure things
the way the language forces you to. In some ways, that's a negative because that takes a lot more
effort to learn the language. But once you do, you start to get in the habit of writing things
with safe Rust and using abstractions that are performant and safe at the same time.
This especially is important in the kernel where things can be run at any time in the kernel
because interrupts can be triggered by hardware and they're serviced immediately by the kernel.
It's a lot like signals in a user space application.
Code gets run in the middle of other code.
So if you end up having to lock things, and then you hold that lock when code that also needs to hold that lock gets called, you have a deadlock.
Rust prevents these kinds of things from happening unless you try really hard.
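To illustrate the point about locks, here's a minimal sketch in safe Rust (not Redox kernel code; the `Counter` type is invented for the example) showing how a lock's critical section is an explicit, scoped value:

```rust
use std::sync::Mutex;

// A toy structure protected by a lock. In real kernel interrupt handlers
// the stakes are higher, but the ownership rules are the same.
struct Counter {
    inner: Mutex<u64>,
}

impl Counter {
    fn new() -> Self {
        Counter { inner: Mutex::new(0) }
    }

    fn increment(&self) -> u64 {
        // The lock guard is a value with a scope. When `guard` is dropped
        // at the end of this block, the lock is released. Holding it while
        // calling back into code that also locks `inner` would deadlock,
        // and the explicit guard makes that hazard visible in the source.
        let mut guard = self.inner.lock().unwrap();
        *guard += 1;
        *guard
    } // lock released here

    fn read(&self) -> u64 {
        *self.inner.lock().unwrap()
    }
}

fn main() {
    let c = Counter::new();
    c.increment();
    c.increment();
    // Each call acquires and releases the lock in turn.
    println!("count = {}", c.read());
}
```

Because the guard's lifetime is spelled out in the code, "who holds this lock, and for how long" stops being an invisible invariant.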
So people end up falling into a coding style that I will call the Rust coding style.
It's not as though that coding style can't be translated to other languages. In fact,
I would say my C now has more of a Rust coding style than it did before I learned Rust,
where that coding style is to check errors when they happen and to return from the function when they happen,
where that coding style is to check for the validity of pointers before using them, thinking
about how things are being aliased. All of these things enter into your thinking in other languages.
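As an illustration of that check-errors-and-return style, here's a small sketch in Rust (the `read_config` function and its path are invented for the example) showing early returns with the `?` operator:

```rust
use std::fs;
use std::io;

// Check errors where they happen and return early: read a config file,
// propagating any I/O error to the caller with `?` instead of ignoring it.
fn read_config(path: &str) -> io::Result<String> {
    let text = fs::read_to_string(path)?; // early return on error
    Ok(text.trim().to_string())
}

fn main() {
    // A missing file is an error value to handle, not a crash.
    match read_config("/nonexistent/config") {
        Ok(text) => println!("config: {}", text),
        Err(e) => println!("could not read config: {}", e),
    }
}
```

The same shape, check the result of each call and bail out immediately on failure, is exactly what carries over into C once you're used to writing Rust.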
So the only way that this could happen for an operating system is to rewrite significant parts of it in a language that had very strong static analysis like Rust does.
Other operating systems, like if you look at SEL4, they attempt to go even further with verification techniques, but that takes a tremendous amount of time. Rust is one of the
things that can bring a certain amount of verification into your program, but can be
done in polynomial time. You can actually write a program as opposed to having to write a program,
write a specification, learn a lot about formalism, and basically end up having to say, well,
this program is formally verified unless the hardware operates incorrectly, or the operating
system returns the wrong stuff from a system call, or the other libraries on the computer
are not operating correctly, or this or that. By writing a lot of things in Rust at the low level, we have a form
of guarantee that certain classes of errors are going to be thrown out. Those classes are double frees, invalid pointers, buffer overflows, things like that. So it was necessary to rewrite a
significant amount of things to get those advantages.
Yeah, especially when you say buffer overflows; that's, you know, probably the most common cause of exploit out there against many programs.
I guess unless you're talking about the web, then you probably have cross-site scripting as number one.
Yeah, absolutely.
Yeah. So getting rid of buffer overflows is hugely advantageous for security.
Yeah.
So let's talk about the, we stated the goal and by the way, before we get into
the design, cause that's probably where we'll camp out most of the time.
I always like to ask people what success looks like. And, you know, maybe Redox already is a success as it is, but for you personally, you've got so much time into it, um, you know, so many lines of code, so much thought and effort, a part of your life. When you say that the goal is an attempt to make a general purpose operating system, what would success look like?

Success to me looks like I and only myself
can run this operating system on my own machine without having to worry about features missing.
That would be success for me. I really am pretty selfish about this. I started developing an
operating system for myself based on the thoughts of what I wanted on my computer.
I can't really dictate what other people will want.
The lucky thing is, though, that most people in the Rust community want the exact same thing.
They want to run a secure and free operating system.
So if I can deliver on that goal and I can have something that runs on my machines, can build itself from source, so it needs to be self-hosting, which we're very close to doing, and has a browser, has internet access, has hardware access to all the hardware that I need to use, then that would be a success. Then from that success, I think it would grow to other people's forms of success,
which would be this is a widespread, widely used operating system in at least one sector.
If that was, you know, oh, well, 90% of IoT devices are running Redox,
that would be an example.
I'm not looking for the desktop or server market or any of that.
I'm not looking for any market.
I'm trying to write something for my own computer so that I can feel secure
and so that I can tinker with things and stuff like that.
It just happens that a lot of other people want the same thing.
The nice thing, what I like about that measure of success, is it's completely in your control. Whereas you can't control market share, you can't control traction and adoption and these other things that, you know, are required... um, that's what everyone else measures success by, right? But "I can run it on my own machine", that's a very... I'm not going to say it's an easily achieved goal, but it's a clear goal that you are in control of, you and the community, right? So I said one last question before we dive into the design, but now I have another one, so I lied. Uh, how close are you to that? You said you're almost self-hosting, but how close are you to getting to your vision of success?
So I already run Redox on all of my machines partially, right? It's not full-time. That's the issue. To get to full-time, I need to be able to compile Redox on Redox, and I need network
drivers for wireless hardware. Those would be the two things that I would need.
We would get a browser at some point.
I think the quality of that browser might be debatable.
But once the system is self-hosting, we should be able to work harder on porting software.
And I can always use my phone if I need to go to Facebook or whatever.
There you go.
I'm curious about the secure aspect of this.
What does that stem from?
What did you use prior to this, and how is it not secure enough for you?
So I've been using Linux for a very long time. I've been using Linux since
probably since I was 12 years old, maybe 10. And it has always been a stable and reliable
operating system. And it's always been thought of as a secure operating system. In many ways it is. Any Linux-based
operating system is more secure than Windows. That's just end of story, the truth. But there
are flaws in the way that most Linux distributions handle security and the way in which the Linux
kernel itself handles security. There are about 400 system calls in Linux, whereas there are about 50 in Redox.
Each one has, I would say, mathematical attributes around it and its use.
In specific cases, you use this specific syscall.
It will not be duplicated.
There won't be another syscall that does the same thing.
That kind of design already lends itself to more security because you have less surface area.
But then if you go further to some of the things we're trying to do with OS level virtualization
and with schemes, what you have is a file system that can be reliably contained.
Whereas with Linux, any process running at any user level can access certain hardware devices, either through ioctls or through the dev file system.
And most users have the ability to gain super user access,
at least what you run your browser as. In Redox, all of the drivers run in user space,
and all of the drivers run in a special container mode whereby they release privileges to access
any hardware after they've gained access to the hardware that they
need to control. What this means is that, for example, the disk driver will open the disk device
and then it will disable its ability to gain access to any other devices at any time in the
future. A vulnerability in the disk driver now went from a privilege escalation allowing any system to be accessed to a privilege
escalation allowing the disk to be read and written. Just as bad, perhaps, you could rewrite
things on disk, of course, you could rewrite the kernel, but you've contained that piece of
functionality. An even better example is the network stack.
A network stack that gets compromised on Redox
is only able to access the network device that it's operating on.
So it can send bad packets and it can lie about received packets.
That's it.
Whereas if you have a network stack that's compromised on Linux,
it gains access to the entire kernel, depending on how it's compromised, of course.
But there have been privilege escalations in the Linux kernel that have been remote vulnerabilities that have allowed remote attackers, through accessing the network devices of another machine, access to the kernel and running arbitrary code at a kernel level,
which is clearly a worse vulnerability. So firstly, the microkernel architecture divides
devices into separate spaces. Each one is in its own process space. Secondly, OS-level
virtualization prevents those processes from accessing any devices after they get into a working state.
They open the devices they need.
They disable their ability to access any more.
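As a rough sketch of that open-then-drop-privileges idea, not Redox's actual API, here's how the pattern can be modeled in Rust with move semantics, where a one-shot grant is consumed by its single use:

```rust
// A sketch (not Redox's real syscall interface) of "open what you need,
// then give up the right to open anything else", modeled with Rust move
// semantics: the grant is consumed by its one use, so no later code path
// in the driver can open a second device.
struct DeviceGrant; // permission to open hardware, usable exactly once

struct DeviceHandle {
    name: String,
}

impl DeviceGrant {
    // Takes `self` by value: after this call, the grant no longer exists.
    fn open(self, name: &str) -> DeviceHandle {
        DeviceHandle { name: name.to_string() }
    }
}

fn main() {
    let grant = DeviceGrant;
    let disk = grant.open("disk:0");
    // grant.open("net:0"); // would not compile: `grant` was moved above
    println!("driver holds only {}", disk.name);
}
```

In the real system the containment is enforced by the kernel at runtime rather than by the type system, but the effect is the same: a compromised disk driver can only touch the disk it already opened.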
Thirdly, everything's written in Rust.
I hate to have to go to that point because I think people who are on the fence about Rust,
when you say, well, you should rewrite it in Rust, they will look back
at you and they will ask, how will that improve my coding quality? How will that prevent logic
errors? How will that prevent programmer failures that happen in Rust anyways? And it won't.
They're right that Rust is not the magic bullet that people seem to keep pushing it as.
Rust is simply one part of the puzzle.
That's why we have to have a microkernel design.
Some people have asked, why not do a unikernel design?
If everything's written in Rust, it should be completely sane to have everything run at kernel space
because you have protection from a language level.
It's not true. It's not true at all. Rust is not perfect. Rust cannot protect against every
single vulnerability. And so in this method, you need to have different levels. You need to have
the microkernel for protecting device drivers and services, keeping them separate. You need to have OS-level containerization
so that processes run in an even more containerized form than by default.
Not only do they have memory access being prevented across process spaces,
but you also have file access being prevented across process spaces.
And finally, Rust, to protect against programmer error,
to prevent but not completely eliminate the possibility of buffer overflows, of bad pointers, and of double frees and things like that. Those three kind of together are the reason Redox is potentially more secure.

This episode is brought to you by DigitalOcean.
DigitalOcean recently announced new, highly competitive droplet pricing on their standard plans
on both the high and the low-end scale of their pricing.
They introduced a new, flexible $15 plan where you can mix and match resources like RAM and the number of CPUs.
And they're now leading the pack on CPU-optimized droplets, which are ideal for data analysis or CI pipelines, and
they're also working on per-second billing.
Here's a quote from the recent blog post on the new droplet plans.
Quote, we understand that price-to-performance ratios are of the utmost consideration when
you're choosing a hosting provider, and we're committed to being a price-to-performance leader in the market. As we continue to find ways to optimize our infrastructure, we plan to pass those savings on to you, our customers. End quote.

Head to do.co/changelog to get started. New accounts get $100 of hosting credit to use in your first 60 days. Once again, that's do.co/changelog.
Jeremy, let's pick back up with the microkernel you mentioned at the tail end of the why-Rust portion there. But can you describe that in detail, the rewrite, what a microkernel
exactly is? And then you mentioned the security benefits, but maybe if there are other pros and
cons to that sort of a design? Yeah, so the strictest definition of a microkernel is a kernel
that only does what is necessary in kernel space to make a functional user space. That is the strictest definition. By that definition,
Redox is not a microkernel. But the definition that's typically used is drivers and services run in user space. By that definition, Redox is a microkernel. It isn't a seven-system-call microkernel like the L4 microkernel, but it is
a microkernel. It's 10,000 lines of code. And what it does is provide a framework for file systems
that then user space can use to create file systems and to perform file system operations. That's essentially what it does.
The older kernel before the rewrite had drivers included,
so it wasn't a microkernel.
It was a monolithic kernel.
And even older than that,
if you go all the way back to the original write,
the original git commit, it was a unikernel.
So the first thing that happened was to be able to run processes in user space. The next thing that happened was moving drivers into user space.
In order to do that, we had to write some special system calls for drivers to hook into.
But the majority of system calls in the Redox kernel are file system related. Opening files, reading, writing, seeking,
closing, and duplicating.
That's pretty much it.
There are some timing system calls,
and there are some process control system calls,
like exec and sleep, things like that.
Well, actually, clock_nanosleep or something like that, whichever one has the highest resolution, is the one we implement. What this means is that in user space in Redox, you have disk drivers, file system drivers, the network stack, network drivers, the graphics stack, graphics drivers, the input... input drivers are all programs.
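That file-centric syscall surface has the same shape as ordinary file I/O. Here's a host-side sketch using Rust's standard library, which on a Unix-like system bottoms out in the same open/read/write/seek/close calls (the temp-file path is invented for the example):

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom, Write};

// Exercise the core file syscalls: open, write, seek, read, close.
fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("redox_demo.txt");

    let mut f = File::create(&path)?; // open (create)
    f.write_all(b"hello redox")?;     // write

    let mut f = File::open(&path)?;   // open (read)
    f.seek(SeekFrom::Start(6))?;      // seek past "hello "
    let mut buf = String::new();
    f.read_to_string(&mut buf)?;      // read
    println!("{}", buf);              // prints "redox"

    std::fs::remove_file(&path)?;     // cleanup; dropping `f` closes it
    Ok(())
}
```

On Redox, the same handful of operations, plus dup, is essentially the whole interface a process has to the system.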
This is very different from other operating systems, especially other operating systems
that are at this level of development. Because if you do look at other microkernels,
they have tended to lag behind, in part because it's difficult to write a microkernel, and also
in part because there isn't a lot of interest in microkernels.
But I think after realizing the security benefits,
again, there will be a renaissance in microkernels,
and Redox will be one of them.
So does that architecture put more strain
and slow down the development of these specific,
like the networking stack in user space?
Does it make writing that portion of the operating system
more difficult because it doesn't have kernel-level access?
Or once you get the microkernel set up,
it's really six of one, half a dozen of the other?
I think that was the most difficult part,
was choosing what needed to be in the kernel
so that it would be easy to write drivers. Because I think now drivers are actually really easy to write. I've written several drivers for networking devices; each one takes about a day of research and implementation effort, and most of that is looking at hardware-specific things like registers, because actually getting to the point where you're accessing hardware is not too difficult anymore. The reason why user space drivers are so hard... well, we could take Linux, for example, because there are user space drivers, especially for the USB stack. There's libusb, and lots of different programs use libusb and implement user space drivers for devices.
Things like fingerprint readers use user space drivers.
What you don't have in Linux that is available for a user space driver programmer is the ability to get hardware interrupts delivered to the process.
This does not exist.
So you can't write drivers for PCI Express devices, for example.
You can only write drivers at a higher level, like for USB.
This has been fixed in Redox by having a file system for interrupt delivery.
So a device driver simply opens a file for that interrupt
and then gets a file event
and can read the interrupt information when the interrupt occurs. So it's all event-based. They
get messages from the kernel indicating that an interrupt happened and they can handle certain
hardware operations. And this is fairly low latency. There has been a lot of optimization in modern x86 CPUs to handle context switching efficiently, which has also lent itself to making microkernels easier to develop in. Now I would say for a new driver, the things you have to do are create
a file system to access that device,
write hardware-specific code to access the registers of the device, and then link the two together so that you have something come in from user space from another process.
It tries to open, for example, disk:0. You give it back a handle. When it reads from that handle, you read from the disk.
And implementing this with the scheme mechanism in Redox
has been fairly simple and straightforward.
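A user-space simulation of that scheme dispatch, with invented types rather than Redox's real interfaces, might look like this: the text before the colon selects a handler, and the rest of the path is handed to it.

```rust
use std::collections::HashMap;

// A simulation of scheme dispatch. In Redox the kernel routes requests
// between processes; here the "kernel" is a simple map from scheme name
// to handler.
trait Scheme {
    fn open(&self, path: &str) -> String;
}

struct DiskScheme;
impl Scheme for DiskScheme {
    fn open(&self, path: &str) -> String {
        format!("disk handle for unit {}", path)
    }
}

// Split "disk:0" into scheme ("disk") and path ("0"), then route the
// open call to whichever handler registered that scheme.
fn resolve(schemes: &HashMap<&str, Box<dyn Scheme>>, url: &str) -> Option<String> {
    let (scheme, path) = url.split_once(':')?;
    Some(schemes.get(scheme)?.open(path))
}

fn main() {
    let mut schemes: HashMap<&str, Box<dyn Scheme>> = HashMap::new();
    schemes.insert("disk", Box::new(DiskScheme));

    println!("{:?}", resolve(&schemes, "disk:0"));
}
```

An unregistered scheme simply resolves to nothing, which is the sketch's version of an open failing.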
Another aspect of Redux, the design that interested me right away,
is this concept of everything is a URL,
which those of us well-versed in Linux remember
everything is a file or familiar with that.
Everything is a URL.
Seems like a take on that that seems more holistic or global.
Can you talk about that design decision and its implications in the OS?
So I would say that everything is a file is still the methodology.
Okay.
Unfortunately, these two things are conflated.
When people hear everything is a file in terms of Unix,
what the original meaning was is everything can be treated as a file handle,
not that everything had a path on the file system.
What that meant was that maybe you would open a network socket.
You would not open it from the file system in most Unixes.
You would open it with a socket call,
but you would read from it with a read call.
You would write to it with the write call, and you would close it with the close call. What Redox has done is unify all of this into the open call, and it's something very similar to what Plan 9 did. So first we have everything as a file descriptor; now we have everything is on the file system, like Plan 9, where even networking is accessed through the file system.
You create files, you read from them, you destroy them.
From the file system, you are able to do networking operations.
You are able to interact with the windowing system.
Everything you can do in Plan 9, you can do from the file system with a file path. So now everything is a file descriptor, and it's referenced by a file path.
Now in Redox, it goes one step further. Both of those are true, and we add on segmented file systems. What this means is that the beginning of a path identifies a file system
to interact with. This file system is implemented by either the kernel in the case of some of the
low-level file systems like interrupt handling, or, mostly, by a user space process. And in order to create a scheme, which is what this level is called, a user space process opens a file. It creates a file in what's called the root scheme.
So everything is file-based,
or at least file path-based.
The reason why I say everything is a URL,
the easiest way to conceptualize this for most people is the word URL.
Because you have at the beginning of the URL, you have an identifier of the protocol called the scheme.
And you have after that a path.
So what happens in Redox is that path gets sent to whatever handles that specific scheme.
And then it returns a file handle,
which the kernel holds onto.
And the kernel arbitrates between the two processes.
It passes all of the system calls
that utilize that file descriptor to the scheme handler,
which then passes the results
of each file descriptor operation back to
the kernel, which then forwards it to the process that started the system call.
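A toy model of that arbitration, using threads and channels instead of Redox's real kernel-managed packets, might look like the following (the `Request`/`Reply` types are invented for the sketch):

```rust
use std::sync::mpsc;
use std::thread;

// A "process" issues a read; the "kernel" forwards it to the scheme
// handler thread, then forwards the handler's reply back. Real Redox
// uses packets on kernel-held file descriptors, not std channels.
enum Request { Read { fd: u32 } }
enum Reply { Data { fd: u32, bytes: Vec<u8> } }

fn main() {
    let (to_handler, handler_rx) = mpsc::channel::<Request>();
    let (to_kernel, kernel_rx) = mpsc::channel::<Reply>();

    // The scheme handler: services requests for its file descriptors.
    let handler = thread::spawn(move || {
        if let Ok(Request::Read { fd }) = handler_rx.recv() {
            to_kernel
                .send(Reply::Data { fd, bytes: b"packet".to_vec() })
                .unwrap();
        }
    });

    // Kernel side: forward the process's read to the handler,
    // then forward the handler's reply back to the caller.
    to_handler.send(Request::Read { fd: 3 }).unwrap();
    let Reply::Data { fd, bytes } = kernel_rx.recv().unwrap();
    handler.join().unwrap();

    println!("fd {} returned {} bytes", fd, bytes.len());
}
```

The point of the detour through the "kernel" is that it sees, and can audit or deny, every operation on every file descriptor.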
So to do separate, like do specific user space programs register as scheme handlers
or something?
They say, I can handle this.
Exactly.
So each scheme is typically owned by one process, so one process for each scheme, and a process can have more than one scheme. Usually they don't, but sometimes they can. So, for example, the network stack, especially the new version of the network stack, has TCP and UDP and IP
all implemented by the same process. So any operations on those file systems get sent to
the same process. But what this means is that you can easily audit all of the open file descriptors and all of the available paths on the system.
Who supplies them? What is happening to them? All of that can be audited from the kernel level.
So by asking the kernel for all of the open file descriptors and for who created them and who is
using them and all of those things, you can also kind of design a more secure system.
You can say, well, that file descriptor shouldn't be open.
I'm going to modify this process to close that file descriptor.
Or this program should not have access to this file system.
I'm going to modify the permissions that it gets launched with.
Things like that.
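Because every descriptor goes through the kernel, the kind of audit Jeremy describes reduces to a query over one table. A toy sketch, with invented types, of asking which processes hold descriptors on a given scheme:

```rust
// One row per open descriptor the kernel is tracking: who opened it,
// and the full URL it refers to. (Invented types for illustration.)
#[derive(Debug)]
struct OpenFile {
    pid: u32,
    url: String,
}

// List every open descriptor whose URL lives under the given scheme.
fn audit<'a>(table: &'a [OpenFile], scheme: &str) -> Vec<&'a OpenFile> {
    let prefix = format!("{}:", scheme);
    table.iter().filter(|f| f.url.starts_with(prefix.as_str())).collect()
}

fn main() {
    let table = vec![
        OpenFile { pid: 10, url: "file:/etc/passwd".into() },
        OpenFile { pid: 11, url: "tcp:10.0.0.1:443".into() },
        OpenFile { pid: 10, url: "tcp:10.0.0.2:80".into() },
    ];
    // Which processes hold network descriptors right now?
    for f in audit(&table, "tcp") {
        println!("pid {} has {} open", f.pid, f.url);
    }
}
```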
Yeah, it's like everything gets passed through this one waypoint, this one
canal or something, and so you can enforce restrictions right there. And it makes it super
easy to do namespaces, which are a critical Redox concept for OS-level virtualization.
Okay, tell me more about namespaces.
So a namespace is created by a process, much like a chroot is created in other Unix operating systems.
Except in this case, since everything is going through the file system, a namespace allows a user space process to control other user space processes almost entirely. So if you can control every access to the file system,
you can redirect it to a different place.
That's what a chroot is.
But if you can control every access to the networking file system,
then you can firewall a process.
If you can control all the accesses to the graphics file system,
the windowing file system,
you can have remote windowing.
You can do windowing over the network, for example.
So VNC could simply be implemented as running a process
in a windowing container, basically.
Things like that.
Every single thing a process can do can be watched
by a process that spawned that process.
So we have a method of doing Docker, basically, a method of doing chroots or jails,
that is more general, because every single thing that can be done by a process goes through this
file system mechanism, so it can be intercepted by a higher-privilege process
and then redirected or modified.
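A rough sketch of that interception point, with invented names: a supervising process sees each URL its child tries to open and can rewrite it (the chroot case), deny it (the firewall case), or pass it through untouched:

```rust
// Decide what happens to a child's open() before it reaches the real
// scheme handler. Returning None denies the call. (Illustrative only;
// Redox expresses this through namespaces, not a function like this.)
fn supervise(url: &str) -> Option<String> {
    match url.split_once(':') {
        // chroot-like: redirect all file paths under a jail directory.
        Some(("file", path)) => Some(format!("file:/jail{}", path)),
        // firewall-like: deny all network access outright.
        Some(("tcp", _)) | Some(("udp", _)) => None,
        // everything else passes through unchanged.
        _ => Some(url.to_string()),
    }
}

fn main() {
    assert_eq!(
        supervise("file:/etc/hosts"),
        Some("file:/jail/etc/hosts".to_string())
    );
    assert_eq!(supervise("tcp:8.8.8.8:53"), None); // firewalled
    println!("file open redirected, network open denied");
}
```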
That sounds pretty cool.
So is this all, maybe hypothetical is not the best word,
but is there any of this tested out in practice
or are you speaking at the possibilities
given this architecture?
It is implemented. Okay. And there are usages of it.
Right now, we do have a chroot example that will do this
in order to enable chroots.
It will basically redirect file system calls to a different folder.
We also have a method of processes entering a restricted mode
where they can't open any file descriptors at all. This restricted mode is implemented using
namespaces. So if a process enters into the null namespace, they lose the ability to access any
file paths. And by doing so, if they're compromised,
their damage to the system that they can cause is limited. We do need to implement
a more complex example, which I think would be to run multiple versions of Redox at different IP addresses and then do virtual networking in a user space
process that then outputs them to the real network, basically doing LXC, which is Linux containers.
They are OS level, which means that it isn't virtualization. It is real 100% speed processes running on one kernel,
but certain processes have different rights,
and those rights are controlled by other processes.
So doing something like Docker would probably be the best proof of this system,
but we do have things using it already.
Very cool.
Anything else at the core to Redox's design that you want to touch on before you broaden
the conversation and talk about the ecosystem?
Because there's a lot of other stuff going on around and in Redox that is notable.
I think that's pretty much it.
We talked about microkernels.
We talked about the file system.
The kernel is the file system arbitrator. So I think that's most of it.
Yeah. Okay, so expanding
the conversation and looking beyond the kernel and the file system, there's
obviously a lot more that goes into a functional, you know, general purpose
operating system. And you have a lot of other things going on, many maintained by you, many maintained by other people in the community.
Where do we start? I mean, you've got the little editor, you've got the Ion shell, you've got, you know, utilities.
I mean, Orbital. There are so many things we could talk about. What's the best place to start there?
Well, the biggest and most important thing right now, I think, is Ion.
It's seen the most work.
It's not maintained by myself, although I was involved in the earlier implementation.
I have been involved in implementing Redox-related things for it.
It's maintained by Michael Murphy, who spends a lot of time and effort on updating Ion and making it work really well.
It has better performance than other shells for a lot of tasks.
The syntax is not insane, like bash syntax often ends up being.
And it's written in Rust.
What more could you ask for?
I don't know.
Is it ready to go? Is it usable?
What else can you ask for?
Are people using it?
For the most part, yeah.
Okay.
For the most part, it's usable.
I think the remaining thing is to verify
that things like Shellshock are not present in Ion,
to verify the syntax, and to formalize the syntax,
because right now the syntax has not been formalized.
It's been implemented, but there's not a document identifying exactly what the syntax is.
That's not to say that the syntax is difficult to learn.
We have tutorials. We have things identifying what the syntax is.
There's just not a formal specification of it. And so what we're working on right now is to fuzz Ion to
basically pass it valid syntax from a syntax generator that has different code, that has a
different implementation of the syntax, creates what should be valid syntax for Ion, feeds it into Ion, and then
validates the behavior and does this automatically and randomly. If we have something like that,
I think we could stamp it ready for general use. Is it familiar? Would it be familiar to people
who may be using Bash or Zsh or tcsh? Yeah, I think so. So much of it comes from the Bash syntax,
the way variables are handled,
the way you call functions and pipe
and pipe into files and pipe to other processes,
the way that you background things,
control C and control Z,
all of those things are the same in Ion.
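The round-trip fuzzing plan Jeremy mentioned a moment ago can be sketched with a toy grammar: a generator with its own independent model of the syntax emits programs that should all be valid, and each one must be accepted by the parser. Both functions below are illustrative stand-ins, not Ion's actual grammar or parser:

```rust
// Toy grammar: a pipeline is one to three command words joined by " | ".
// The generator is deterministic in the seed so failures are replayable.
fn generate(seed: u64) -> String {
    let words = ["echo", "cat", "wc", "sort"];
    let n = (seed % 3) as usize + 1;
    (0..n)
        .map(|i| words[((seed >> (i * 2)) as usize) % words.len()])
        .collect::<Vec<_>>()
        .join(" | ")
}

// Toy "parser": accepts any non-empty pipeline of non-empty words.
fn parses(input: &str) -> bool {
    !input.is_empty() && input.split(" | ").all(|w| !w.is_empty())
}

fn main() {
    // The property under test: everything the generator emits must parse.
    for seed in 0..1000u64 {
        let program = generate(seed);
        assert!(parses(&program), "rejected valid input: {}", program);
    }
    println!("1000 generated programs accepted");
}
```

A real harness would compare observed behavior, not just acceptance, and would drive Ion itself rather than a stand-in parser.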
The differences come out when you start doing if
statements and loops, where we've simplified the syntax of the POSIX specification, because it's
not following the POSIX specification. So scripts that do need to follow that can use Bash. And I've
been thinking about maybe implementing a compliant mode in Ion, where it can act as Bash
and it can run POSIX-syntax scripts, or Bash syntax, which is slightly different.
I guess that's a point that we haven't really put a finger
on very closely, but Redox is Unix-like, but it's not POSIX compliant.
And it doesn't want to be.
It's like close to POSIX compliant.
It's to the point where most things that are simple
will probably compile.
But the thing it is compliant with
is the Rust standard library.
And it is compliant with a number of things at the C level, but not all.
Not all.
Because some of those things would invalidate the design.
We don't have IOCTLs, for example, because that would violate the file system design.
Certain networking calls have to be done differently. We've implemented
BSD sockets, but the way it's implemented in the C library is not like other Unixes.
And a lot of things are different, but those things that are different are usually for much
more complicated use cases. We've ported Bash, for example, so all of the terminal control stuff is different at a lower level
but we have access to those
interfaces at the C library level. So I
think the reason why we don't say POSIX compatible, mostly it's because some design decisions
force it to not be POSIX compatible. The file system can never be POSIX
compatible if it uses schemes. It has to be a single file system hierarchy for it to be
compatible. Things like that. They throw out the possibility. So Unix-like because it implements
most of the things you expect on a Unix. Not POSIX compatible because we can't possibly be. Although we don't break
POSIX compatibility on purpose. We try to follow it as much as possible so that porting software is easy. This episode is brought to you by our friends at GoCD.
GoCD is an open source continuous delivery server built by ThoughtWorks.
GoCD provides continuous delivery out of the box
with its built-in pipelines, advanced traceability, and value stream visualization. With GoCD,
you can easily model, orchestrate, and visualize complex workflows from end to end. It supports
modern infrastructure with elastic on-demand agents and cloud deployments, and their plug-in
ecosystem ensures GoCD will work well in your unique environment.
To learn more about GoCD, visit gocd.org slash changelog.
It's free to use and has professional support for enterprise add-ons available from ThoughtWorks.
Once again, gocd.org slash changelog.
So, Jeremy, we talked earlier about security, and you talked about, you know, why you went that route
with Rust, why it was important to you. And you'd mentioned essentially how Linux didn't fit your bill.
And I'm kind of curious, why not just contribute to Linux,
BSD, the other operating systems out there?
Why not just do that versus essentially go it alone,
accidentally in some cases, and we're here where we're at now
with Redox. Why not go that route?
The major reason is because the design goals don't align.
And at the very beginning, the design goal didn't exist.
But when it did gain popularity, when this idea of a Rust operating system gained popularity,
the design goals started to become more permanent, those being a microkernel and Rust. Rust is possible
in the Linux kernel and in BSD: you can link to Rust, you can compile Rust, you can use it.
The issue is that the microkernel architecture is not one that will be accepted. Those are monolithic
by design. I don't want to change their design. I want to live alongside it. I want to attempt
to develop something that may or may not work for a secure future based on a microkernel,
based on Rust. There are, however, microkernels that maybe I should have been contributing to,
like Minix 3, that do have aligned design goals. And I think the major reason is that the microkernels like
Minix 3 and Hurd don't feel very professional and very developed at this point. There haven't
been a lot of people working on them, and they've been achieving things very slowly.
And it kind of begs the question, it kind of asks the question: why?
Why are these other microkernels not doing so well?
Why are they developing slowly?
Why are they not being utilized?
Why does it look like a ghost town when you try to find people who are working on Minix or working on Hurd?
And I think it's because the interest died because these projects
in one way or another kept hitting problems with microkernels that perhaps were solved by hardware,
perhaps were solved by software, but at the time were not solved. Hurd for 20 years stagnated. Very recently, Debian has had a version of Debian that runs
on the Hurd kernel, which is a great achievement. It's probably more usable than Redox in terms of
all of the software that's available, but it's not as promising as Redox. The reason is that the architecture of Redox, I believe, had to be designed for the time.
For the software at the time, the hardware at the time, the architecture had to be designed from the ground up to produce a platform that could then burst into a fully fledged general purpose operating system.
So we talked about what you're up to and how you have the second job now. What we haven't mentioned is that you do have a Patreon campaign.
So you're giving it a go at, you know, people supporting your work on Redox.
Not doing half bad, by the way: 123 patrons, you know,
giving you $1,085 a month to work on this, but you haven't quite reached your
goals. Tell us about your decision to hop on Patreon, you know, how it's going and some of
your goals for your personal, you know, sustainability on this project.
So before I started working at System76, I left the job that I was working at before.
I was working at a very small company at a startup, self-employed basically.
And we were working on computer vision.
It didn't end up doing so well, so I left.
At that point, I started investigating whether or not Redox could be my full-time job and what it would take to be
my full-time job, the Patreon was created out of that. And I posted in the Rust Reddit asking for
people to donate, and they did, en masse. It got up to about a thousand at the same time that I had found another job. That money is used to significantly improve the amount of time
that I can dedicate to Redox.
I dedicate about 20 hours a week to Redox.
Every single night, almost every hour of the night,
I spend working on Redox.
And I feel like the more that the Patreon grows, the more of that
time will be Redox related. And eventually I may be able to dedicate all of my time to Redox if it
becomes large enough. The Patreon is a way for people to give back to the project what they feel
they want to. There's no obligations whatsoever. You can leave at any
time. You can join at any time. But I do have a couple goals that I'm working on achieving anyway.
But as I stated before, my personal goal is to make it run on my personal machine. When that happens, these other use cases will be available. Number
one in the Patreon is to run it on virtualized hardware. We're very close to doing that. The
only thing left is self-hosting and a better network stack. The better network stack is already,
I would say, 90% of the way there. I'm using it in my own builds of Redox.
The only reason I'm not pushing it out is because it lacks DHCP support.
And self-hosting is incredibly close.
I've gotten to the point where I can utilize Cargo,
which is the Rust compilation and packaging system
from within Redox.
So very few things are left before Redox will be able to compile itself,
and potentially the build system will move to Redox. After that, the first goal I think will
have been achieved. Running on virtualized hardware is already, it already works very well.
Delivering an image to different cloud providers would be part of that.
And that's something that I would see about doing once we have self-hosting.
The final one, the final goal, is the $4,000 a month goal.
Those numbers, by the way, I had to put numbers there.
I'm going to work towards those goals no matter what.
That was one of my questions: is the dollar per month contingent, or are the goals being reached contingent
on the dollars per month? Because it seems like you've got momentum. Based on your recent posts and
the feedback, like, wow, the releases are coming in thick and fast, great to see project momentum.
It seems like you're going there no matter what.
I'm going no matter what. If people leave the Patreon, I'm going no matter what.
If I get six thousand dollars a month from the Patreon, I'm going no matter what.
If it's successful or unsuccessful, doesn't matter.
Wow.
It's extra money that goes
into funding development of Redox directly. What I did with the amount of money I made on Patreon, up until the point where
Google Summer of Code happened, I took $6,500 and I funded the development of our ACPI stack
in Redox as a separate Google Summer of Code project, not affiliated with Google Summer of
Code. So we actually had two students working during the summer, one of them funded this way. And I didn't tell anybody publicly
that I've done this. The idea is, if you give any money through the Patreon, it goes back into
Redox development as soon as possible and as efficiently as possible. So we now have an AML and ACPI subsystem developed from the funds that
were raised with Patreon. And the second project was self-hosting, and it made significant progress.
Both of these projects ended up being extremely beneficial for Redox.
That's awesome. When those Google Summer of Codes happen and the student works on a project, I guess in your case you had two of them, do they continue with the community? I've read, though, that other projects have had issues
with Google Summer of Code because these are students. They go back to school at the end of
summer. It's just a fact. They have other things to work on, and they're trying to start their
career and get their degree and things like that. But they remain in the Redox community.
We haven't lost anybody, but we have had school interfere in one way
or the other. It's very important that Ticki, for example, is pursuing his degree right now.
And he's been pretty quiet, but he still checks in from time to time. I don't think we lose anybody, which is another
indicator of where this project is going. We've been gaining contributors and very few have dropped
out. The two that participated in Google Summer of Code are still working on things in Redox.
I had even more contributions to the ACPI stack after the Summer of Code had ended. I had more contributions to the self-hosting.
We're almost done. And now Ian Douglas Scott, one of the Google Summer of Code people,
well, the one, because the other one was unofficial, funded from the Redox Patreon,
he is working on porting the cookbook to Ion so that it no longer needs Bash as a dependency.
Things like that.
There are a lot of students working on Redox, actually, which has been surprising.
I don't think you would see that with other projects.
I think if you had a new C project, you would see older contributors because the more experienced contributors are going to be older in age.
They're going to be out of school.
They're going to have careers and backgrounds.
But with Redox and with Rust, we've seen a new language bring with it a lot of new talent.
People who are in school who are really looking for something new and exciting
to learn. And I think Rust has really been that language more than anything else.
That's pretty awesome. And I think a great use of those funds coming in through your Patreon
is redistributing them back and really investing back into the project with that money. So
I guess props on doing that.
And it sounds like it's paying dividends already.
Yep.
Let's talk about getting started, getting involved.
For a potential contributor,
what does their roadmap or their onboarding look like?
Oh, yeah.
The best place to get started is to go to the GitHub repo. The GitHub repo is the link to everything else. The thing that you should do, though, if you really want to become involved in development is ask for an invite to the chat. It's invite only simply because it prevents spam. If you send me an email at info at redox-os.org, then I will send you an invite back.
If you send me other things, I may respond. I may not. I may forget it. I have like 100 emails
that I haven't replied to yet, unfortunately. But if you get on the chat,
you'll have access to the, I think,
250 people who are already there
and probably a dozen people
who are usually online responding to things.
If you want to develop,
you don't have to download all of Redox,
because obviously building an operating system from source
is not something that, you know,
it takes a lot of bandwidth, it takes a lot of time.
I would estimate it takes about two gigabytes
of disk space and network downloads,
and about 30 minutes of time to build it from source,
the entire operating system.
To contribute to a single project, though,
you can check out Ion, for example. It's a very small code base, very easy to get into,
and very easy to start contributing to. Other projects are similar. Documentation varies by
project. Some projects are one-offs that had to be written and probably don't have good documentation.
An example is ransid, which is the ANSI driver.
I have not documented it yet.
So we need help in terms of documentation.
We need help in terms of coding.
And we need help in terms of just utilizing the system and telling us what you want as a contributor.
All of that you can do by joining the chat.
So it's interesting.
I'm curious about how you balance your focus.
One, you've got development of Redox, right?
But then you've also got the contributing community
coming into play.
You've got chat. You've got discourse. You've got a book.
You've got various resources that people can tap into.
It seems like in Discourse, there's some people mentioning some Trello boards being created to kind of give people things to do, like help-wanted tags, for example.
Where do you centralize that?
How do you balance your focus? How do you map out how the community communicates?
Well, for the most part, Redox has ADHD.
What that means is... I was reading into that a little bit. I don't want to say
it, but it seems like that, because it seems like it's not centralized.
No. And that's why being on the chat is so important.
We plan things in a what should I work on next kind of way.
And especially what I do, it ends up being holes that I find in the system or something that I want to improve.
And when a new contributor comes in and they ask what should I work on, I respond, what do you want to work on? What feels nice? This is an
operating system. There's a whole world of opportunity. Any piece of the system you can
pick and choose to improve or to change, what do you want to do? And I think that yields better
results than planning. You do have to plan, obviously. We planned the system calls and we continue to
iterate on what should be the stable version of the syscall ABI, because we will have to
stabilize it at some point in the very near future. And we have to plan certain things,
but usually that planning process takes place in the chat at some point in time. And we
reference GitHub issues. Discourse is not very active. That's why, at the top of the Discourse page,
I say, the chat is more active. Send an email to this address to get invited. The Trello is
something that someone made, but probably will not be kept up to date with what we have in the chat and what we
have in GitHub. It seems sparse, the Trello that's being made. It's not official. And in order to
make it official, we'd have to develop a process around how to keep it up to date. So I think the
best thing for a new contributor to do is to get into the chat and to start reading through the
source code and start utilizing the system and see what they think.
Because in most cases, I think if you drive people to specific things, you can make mistakes by giving the wrong task to the wrong person.
Whereas if they're self-motivated and a task really appeals to them and a piece of code really appeals to them,
then they're much more likely to have good results.
I mean, the reason why I asked that was because I see a question around Redox on public clouds
that aligns with one of your funding goals on Patreon, and it's unanswered.
And chat is sort of real-time.
You have to sort of pay attention to it all the time.
And I was just curious, you know,
how difficult it is to go from where you were at before wanting to work on this
full-time getting a, you know, a full-time job.
And then now you're at 20 hours a week, and you're a
by-any-means-necessary type of person.
So you're going to get there.
And I'm just curious how difficult it is to pull the community along with you or establish community.
Because this is a pretty important question.
August 6th from Anxious Modern Man about public clouds and it's unanswered.
What's the question?
Essentially, how can we demonstrate to cloud providers the safety of deploying Redox?
And maybe it's answered somewhere, but here's a contributor.
If you answer it now, then we can just send the person a link and say,
you need to listen to the last few minutes of this episode of this podcast,
and you'll have your answer.
Yeah.
Well, it's less on the answer, more on community management and just nurturing.
And I'm not trying to call you out.
I'm just trying to figure out where your pain points are and how people can
step in.
The pain point is definitely keeping these things up to date,
especially Discourse.
I don't visit Discourse.
I would probably, if I was able to, take it down, because we have a Reddit,
we have GitHub issues, and we have the chat.
So we already have real-time
communication, and with the GitHub issues we have non-real-time communication. And that works better, because
GitHub is tied directly to the source.
So, yeah, the Discourse is a poor example of the Redox community. In my opinion, it was a mistake, because we had a duplicate forum already with GitHub issues.
So I do have issues answering things on Discourse in a reasonable time frame.
But if something gets on the GitHub issues,
it will be answered very quickly.
Why can't you change the main navigation to drop forum?
What do you mean?
You can't change it or is it out of your hands?
Change the what?
Well, so here's how I got there.
You know, this is a pattern we see.
This isn't your fault.
This isn't like a thing you've done wrong
but this is a pattern I see happening across
any open source that's
garnering or gaining a community
and doing its best to drive forward
and sustain and build a community
around it at the same time
you've got all these different waypoints
you as an individual trying to
create Redox
and get Patreon support.
So you have that.
And then on Patreon, you have this community tab, which is basically blank.
But then you go to your homepage, which is great, but you have no community tab or no community navigation.
So I was thinking, if I'm at redox-os.org, how do I onboard?
How do I find community if I want to join? Is it on Twitter?
That's a great point.
You know, so I was like, well, the forum's the next best thing. So I went
there, and like you said, it's a ghost town, because you're not hanging out there and important
questions aren't getting answered. And it's not exactly your fault; it's just a fractured community
system. You've got GitHub issues, you've got Patreon, you've got Twitter, you've got
real-time chat. How about this for a plan? How about the discourse forum goes away, there is a
community tab on the home page, and underneath that community tab, it says how to get to the
subreddit, the GitHub issues, the chat, and the Patreon page.
Yeah. And then the Patreon community page links back to the website,
or Patreon links back to the website, because I think it is pretty fractured right now.
And especially, there are so many different ways in. From the website, we have links to documentation, the
book, and the forum, but not to the chat.
Yeah, it's non-existent on the site.
And then the forum simply has this banner at the top to try and get people to go to the chat,
but it doesn't always work. And there are a lot more people signed up for the forum actually
than the chat because like we have, I think 5,000 people in the forum because it's so easy to sign
up. They just hit login, GitHub login.
I'm sure a lot of them left and probably won't come back.
But with the chat, we have 250.
So if every single person who had went to the forum had signed up in the chat,
well, it'd be pretty busy, but I'm sure it would be more useful.
Right, it would be more involved.
It's less around its busyness and just more encompassing
of truly what the community is.
And you've got the developer chat, which is invite only,
which is fine, except for it's like,
well, how do I get the invite?
Where's the secret password?
Where's the door at?
And as you're trying to do what you're doing
by any means necessary, you're trying to build
a thriving community along with it or maybe not.
And if you are, then, you know, you've got to give people better waypoints to get involved.
Sure.
And it's not saying you're doing it wrong.
It's just saying that's the plan of any sustainable open source project; that's a thing they have to do.
And every project has their own challenges to do that.
I appreciate that feedback because I think you're absolutely right that it needs to be improved.
So with the invites, though, the way they work, I get the invites and I screen.
Sometimes, for some reason, I may not send back an invite to the chat.
I think that's probably a negative if that was the only communication mechanism that people could use.
Because I don't want spam in the developer chat.
And also, the developer chat is not really tuned to every single user.
So I guess the problem with the forum is it was set up because we wanted somewhere for the general community to be,
but then the developers don't use it.
You can't have a segmented community though. You know,
I would actually flip that in reverse, because I think what you're trying to do
with the invite system is for a good purpose, but I think you kind of have it backwards.
I would let everyone in and have a code of conduct that you can point back to and say, hey, if you're
involved in this, you adhere to this conduct, which means no spamming; you're part of the community;
you respect all these things that are common when it comes to that. And then if they step out of those
bounds, rather than put the barrier up at first, you give them guidance, you set an expectation,
and let them fail at that, and then say, hey, you've got to go, because you've failed at meeting the
community expectation.
That's probably a better way to go.
Then, yeah, we could probably do open invites.
And it's probably putting a lot of burden on you, too, to be that, you know, gatekeeper, and that's the last thing you want to be right now.
You want to be open and welcoming to anybody that wants to step in and get involved.
Well, in general, if they send an email to me, they will get an invite
back almost immediately.
Yeah. The only times I've done screening are when it goes into my spam
folder, which is a problem for whoever sent it, or when it looks weird.
Yeah, which sometimes it does.
I think that process scares people away. I think the process of sending an email, rather than
simply going to the site and setting things up, scares people away from it,
because it's an asynchronous process. It's a confrontation to wait.
And yeah, somebody's got to confront you personally to get involved.
Sure. And, you know, that can be a scary first step.
That's why I wanted to have it, actually.
I thought that having the confrontation would
prove, and it does.
There has been no spam in the developer
chat, ever. Not a single person
has sent a message
that I've seen that has been
something that made me want to ban that person.
Very few
chats for open
source projects work that way. Like, if you hang
around the GNOME IRC, since you can just join it, there are regularly, like, really horrific spam
messages that get sent out there, or in the Ubuntu IRC. And then they have to kick that person and ban them.
But I think if we had the right process, probably, because the chat we're using for the System76 operating system is Mattermost, just like the chat for Redox, but it's open invite.
And we haven't had any issues with that. So we could probably set it up similarly and maybe make a community page where it says for real-time chat, you go here.
This is where the developers hang out. If you want any issues solved, you should go talk to them here.
For issues, go to GitHub issues. For a forum-like structure, go to the subreddit.
And that way there's less fragmentation.
Everything is available from the website.
The invite system is fixed for the chat.
Does that sound good?
I like it.
Yeah.
I just think that clarity, like having clear signage up, you know,
like these people go here, you go there.
And then people just know.
Right. I mean, even just that clarity will go a long way for you.
Yeah. So, in closing, if folks are listening, when you get involved, the best way is to email you at the email address you mentioned to get into the private chat, and that's to prevent spam. So what's the email address again?
It is info@redox-os.org.
Okay.
So email that if you want in the private chat,
which is private to prevent spam. And then also this is where most of the real time chat is happening so that
you can organize and plan out what's coming next and ask questions and all
that good stuff.
Any closing thoughts for those listening?
Could be, you know, going to the Patreon page, which will link up in the show notes, of course.
What's the best way people can support you? Not just money support, but support in general, to keep Redox going and, you know, keep you on your mission.
Well, I would actually throw something a little weird out there.
Okay.
Redox is going where it's going no matter what.
I'm not going to stop working on it, and neither is the community built around it.
If you want to be part of that, I love that.
Come join us.
Come join the chat.
Don't feel like you have to give anything.
It is a free software project. I hope that people download parts of it and enjoy it.
I hope eventually people can install it and enjoy it.
I hope that we can work to make it better.
But no matter what, it's going forward.
And I hope you can join. Join me and be a part of that.
Awesome. Jeremy, thank you so much, man, for your initiative and your tenacity.
You definitely have a drive and that's to be appreciated.
And thank you so much for your time today.
Thanks, guys. Thanks for having me on.
All right. Thank you for tuning in to the very first episode of The Changelog back in 2018.
It's been a nice break.
It's been a very healthy break.
We're planning for the future.
We got some new stuff coming out very soon.
When I say go to changelog.com slash weekly and subscribe, do that right now if you want to keep up.
We have something huge coming up.
You're going to love it.
Thank you to our sponsors. Command Line Heroes, what an awesome podcast. Great job, Saron, love this show. Great job, Red Hat and all the team there.
Also, thanks to Digital Ocean and GoCD.
And as you know, bandwidth for ChangeLog is provided by Fastly.
Head to fastly.com to learn more.
We use Rollbar to monitor all of our errors.
Head to rollbar.com to learn more.
And we host everything we do on Linode cloud servers.
And at linode.com slash changelog,
check them out, support the show.
This show is hosted by myself, Adam Stacoviak,
and Jerod Santo.
It's edited by Jonathan Youngblood.
And all the music for this show is produced
by Breakmaster Cylinder.
You can find more shows just like this at changelog.com
or by going to Apple Podcasts or Overcast
or anywhere else you
subscribe to podcasts. Thanks for listening. We'll see you next week.