Tech Over Tea - Chatting PopOS, COSMIC And RedoxOS | Jeremy Soller
Episode Date: August 11, 2023
Today we have the one, the only, Jeremy Soller of System76 and the BDFL of Redox OS on the show. He's had his fair share of drama with the libadwaita stuff, but he's been around in the FOSS world for a long time, so I was very curious to hear his take on certain things, especially involving Pop!OS.
==========Guest Links==========
Website: https://soller.dev/
Twitter: https://twitter.com/jeremy_soller
Mastodon: https://fosstodon.org/@soller
PopOS Website: https://pop.system76.com/
Redox OS Website: https://www.redox-os.org/
==========Support The Show==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson
=========Video Platforms==========
🎥 YouTube: https://www.youtube.com/channel/UCBq5p-xOla8xhnrbhu8AIAg
=========Audio Release=========
🎵 RSS: https://anchor.fm/s/149fd51c/podcast/rss
🎵 Apple Podcast: https://podcasts.apple.com/us/podcast/tech-over-tea/id1501727953
🎵 Spotify: https://open.spotify.com/show/3IfFpfzlLo7OPsEnl4gbdM
🎵 Google Podcast: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xNDlmZDUxYy9wb2RjYXN0L3Jzcw==
🎵 Anchor: https://anchor.fm/tech-over-tea
==========Social Media==========
🎤 Discord: https://discord.gg/PkMRVn9
🐦 Twitter: https://twitter.com/TechOverTeaShow
📷 Instagram: https://www.instagram.com/techovertea/
🌐 Mastodon: https://mastodon.social/web/accounts/1093345
==========Credits==========
🎨 Channel Art: All my art was created by Supercozman
https://twitter.com/Supercozman
https://www.instagram.com/supercozman_draws/
DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase we may receive a small commission or other compensation.
Transcript
We are recording?
Good morning, good day, and good evening.
Welcome to episode 180 of Tech Over Tea.
Today, we have a really interesting guest.
Welcome to the show, Jeremy Soller,
the benevolent dictator for life of Redox?
Redox? Redox? How do you say the name?
Redox.
Redox. And also a PopOS maintainer.
Welcome to the show.
How's it going?
Thanks for having me.
After dealing with that mess of your three different microphone setups,
I'm happy we got to something that sounds good.
I don't know what was happening with one of those,
where it just sounded absolutely horrible.
Oh, I have more microphones, if you want.
Yeah, I have more microphones as well,
but this is the one that I'm going to use.
Too many. Yeah, yeah, yeah. Like, you know, you have one that works, just stick with that one. The rest of them can just sit wherever they are.
Yep.
Okay. Yeah, so I guess the best place to start, before we get into obviously the Pop!OS stuff and the Redox stuff,
is to just talk about your origin into computing and sort of work our way up from there.
So how long have you been, you know, not even just in the FOSS world,
just how long have you been playing around with computers?
How long have you been doing programming stuff?
Like what's your origin?
How'd you get into this basically?
I've been messing with computers for a very long time.
I don't even remember when I started.
But I do remember the first computer I had. I was around 10 years old, and my dad bought me a $25 laptop. And that was $25 at that time, so you can imagine how terrible it was. It had probably four megabytes of RAM. It was running DOS, some version of DOS. And I just played some games on it and started out there. But I didn't
really get into programming until I had better equipment.
So after I turned, I think, 12, I started programming in Visual Basic.
At the time, we didn't have internet.
So it was basically whatever programming language you could get an IDE for from the store.
And so we were in some bookstore and there was a CD for Visual Basic 6.
And we just bought it and I installed it. And that's when I started. And then a few months later, we got internet. And at that point, I started
to branch out. I downloaded Linux. I started playing around with different things and I learned my second language after basic.
I can't really say I learned Visual Basic.
I mean, you can't really learn something like that.
It was so terrible.
Microsoft kind of killed the whole thing.
Technically, there's Visual Basic .NET, but I don't know anybody that uses it.
So yeah, I started learning x86 assembler and I was interested in doing operating system
stuff.
So at that time, the only two languages I knew were assembler and visual basic.
What a list.
That's actually...
That's such a great list
So how did you get yourself into doing assembler stuff then?
Yeah, it was just finding documentation online, and just trying to learn how to write some DOS programs. And then, branching out from there, I wrote a bootloader.
And then I was writing a kernel.
And this first operating system I attempted had a very creative name.
It was SollerOS, which is just my last name plus OS.
And after probably three years or so messing around with that, I finally published it online.
And it's just, I don't know.
I was working on it as a hobby while at the same time learning more about Linux.
And not really programming in any other languages until later.
I learned some Java.
I learned some C.
But it wasn't really until I started working that I got into more languages than that.
So when you say you were using Linux, what were you using at the time?
The first version of Linux I ever used was Turbo Linux 6, which is not something a lot of people will have heard of.
But again, we didn't have internet at the time, and my dad would just take me to computer stores and stuff, and whatever they had that was cheap is what we would get. And Red Hat was too expensive, and that was pretty much the only other thing that they sold as a CD in the store.
So at some point, he got his hands on Turbo Linux 6, and I installed that. And it really sucked in every way, shape, and form. Actually, I rebuilt my first desktop computer, so I have it sitting
shape and form. Like I had, I actually rebuilt my first desktop computer. So I have it sitting
right next to me. It's a Pentium 2 with an Asus P2B motherboard.
And the motherboard, I think, is from 1997. So I actually built a computer around that in like 2000, for 400 bucks total, including the case, everything. So it was already out of date.
And Turbo Linux 6 was also out of date by a few years, and they managed to barely work together in a way that might make sense.
I had to hack it a lot, because I had SCSI drives, and Turbo Linux 6 did not really like that very much, so getting the drives to actually work was a pain. And then getting audio to work in any form, shape, or factor was a pain.
And then, after about a year of that, we got internet. I think it was in 2003, and it was like the opening of the whole universe. Before then, I mean, I had dial-up, and I came up with some creative ways to get the dial-up provider to give me more minutes, and tried to keep working off of free trials.
But yeah, we had dial-up, and once we finally got actual broadband internet, back when they called it broadband and it was still only like one megabit per second, I managed to download Fedora Core 4 and install it.
And that was the first real Linux install I had, because without internet, it's kind of a dead system.
But once I had that installed, I could upgrade it. And I do remember getting Wi-Fi to work was a pain, but it was still a lot easier than Turbo Linux 6, which is a distribution made by a Japanese company that no longer exists. It was not good in any shape, form, or factor, but it was available.
Well,
you know,
available is always better than not available.
And considering, as you say, how much was Red Hat at that point?
I don't know, it was a hundred bucks or something. That was too much.
Yeah.
Okay.
Especially back then.
Like, that's a lot more.
I think my dad got the Turbo Linux CDs from work, so they were free.
Oh.
Yeah, I had no money.
Yeah, no, that's fair.
So you made your way onto Fedora Core, and then I guess the rest is pretty much history.
Well, I stayed there for a while, just going back and forth between GNOME and KDE. And eventually Ubuntu came out, and I switched to it, and I was running GNOME 2 for a couple of releases, and then switched to KDE.
And I really, really liked KDE 3 for what felt like a very long time, although it was probably only like a year and a half. But it felt like a really long time because I was so young.
And then KDE 4 came out, and I'm like, what is this? I'm going to switch to GNOME. And then GNOME 3 came out, and, like, what is this? And then Unity came out, and I switched to Unity, and I stayed there for a little bit.
And then finally I warmed up to the GNOME design and went back to GNOME 3. And I was running Ubuntu GNOME at the time that System76 hired me.
And I floated the idea to Carl, who is the CEO of System76: hey, we should try to do something like the GNOME spin of Ubuntu, but include a later kernel, and include the NVIDIA drivers, include newer Mesa, things like that, so that we don't have to have as many problems shipping hardware. And that's where Pop came from. That was Carl's decision, too.
What year was that?
That was 2017, and then we released 18.04 in 2018. Well, we had a 17.10 release, Ubuntu 17.10, and then we made Pop!OS 17.10, but I would not qualify that as a real release, because it was kind of more like a beta release.
And then we had a bunch of features we built up for the 18.04 release. So in 2018, we released it with the recovery partition, with the new installer, with the upgrade feature. So a bunch of things were built in then that now are critical to Pop!OS.
So with 17.10, that was sort of just getting it mostly pieced together? It wasn't exactly ready at that point, you would say?
Yeah, at 17.10 we were using Ubiquity as the installer,
which was the Ubuntu installer at the time. And we were basically pre-filling it with driver support, like the NVIDIA driver for the NVIDIA ISO.
And we were also doing some modifications to packages running on this. Yeah, new drivers for things.
Your audio just got way loud for some reason.
I might have to move.
Like there's someone mowing outside.
Ah.
Yeah.
If you wanna take a quick break, we can, unless...
We might have to, yeah.
I'm going to have to go upstairs.
Okay, totally fine.
So, you're talking about the 17.10 release.
You're saying you're getting, like, driver modifications in with Ubiquity,
and, yeah, that's pretty much where we were just at.
Yeah, it was a rebuild of the Ubuntu ISO.
So we downloaded it, extracted it, replaced some files,
and then recreated it.
And it wasn't really Pop! OS yet.
It was more like a beta release.
And then 18.04, we were creating it entirely from scratch, so a very different process.
So why did you want to do Ubuntu with, like, a slightly newer kernel, newer drivers, and things like that? Why was that the direction to go?
Well, we shipped Ubuntu systems at the time, and we couldn't ship Ubuntu because Ubuntu wouldn't boot on a lot of NVIDIA systems, at least at the time.
And Ubuntu would not work with a lot of the newer CPUs we were getting.
We would get the hardware within days of it becoming public,
that the hardware even existed,
and then Ubuntu's kernel would not change for the whole release cycle.
So we would have to, no matter what, inject stuff in.
And the way we were doing it before was to install Ubuntu
and then install a PPA that contained a driver pack
that updated some things like Mesa, the Linux kernel,
and other related things.
Right, I'd never really considered that from the perspective of a system integrator.
Because, you know, most of the time you're going to be able to get Linux running pretty early.
But when we're talking about a distro like Ubuntu,
usually Intel will put out a big wiki page.
Be like, this is what you need to do.
These are the extra packages you need to install.
Huh, now that actually makes a lot of sense then. So that sort of was the origin of Pop!OS. And then from there, did you ever expect Pop!OS to sort of grow into what it is today? Or was it just intended to be that sort of, we're making this to make the system integration a lot easier?
It started with, we want to have something where a user can download the installation media
and it will boot on all of our computers.
Right, right.
And that was it.
And the answer was, we have to update it more regularly.
And Ubuntu does a lot of wonderful things, and they're very good at supporting hardware.
But the update cycle for their ISOs is too slow to support new hardware.
So if you want to support the new hardware, you get the Ubuntu ISO, you add something like nomodeset to the kernel command line, and then you boot it, and then you update it, and then you install the NVIDIA driver, and then you reboot and you remove nomodeset from the command line, and then finally you have a good working system.
And that's also basically what we have to do with our Ubuntu install still, but we've streamlined the process, because our installer can install our PPA along with Ubuntu for all of our Ubuntu installs. And then for all of our Pop installs, it's all out of the box; everything works.
And so this was really a, how do we make the reinstall process so simple that any system integrator could take Pop!OS and install it on their hardware? And that also led to some things like the HP Dev One, where they could take Pop!OS and modify it the way they wanted to, and then every Pop!OS ISO installs on that piece of hardware the way that team wanted it to. So we've integrated all those things in,
whereas if you have something like Windows, Ubuntu, Fedora, I mean, those are like,
it's a generic image. And it may be missing some things for a particular piece of hardware. So
Pop!OS was to integrate things that were required for our hardware. But it turns out that our hardware is so heterogeneous that there are so many different pieces to it. Like, we have laptops that have AMD CPUs, Intel CPUs. We have laptops with NVIDIA GPUs. We sell desktops with AMD, Intel, and NVIDIA CPUs and GPUs. It's a very wide spectrum.
And on the desktop side, we have different motherboard manufacturers we work with.
And so with this wide spectrum of hardware and us coming in and saying,
we're making a distribution that works on brand new hardware,
that seemed to bring in a lot of people who are interested in buying new hardware,
even if they weren't buying
it from us.
So that's a market that especially tends towards gamers, where they're upgrading their hardware, and they need to know: if I buy a new NVIDIA GPU, is it going to work?
Or am I going to buy it, put it in, the system doesn't even boot,
I have to take it out and wait, wait a few months for something to be released.
We're selling that as soon as we can. So Pop!OS has to support it.
Right. Like, the way I would recommend upgrading systems for a normal person is just, don't buy the new stuff. Obviously, you guys are selling it, but if you're upgrading your own system, I always wait. My current cards are a generation behind, because I just don't want to deal with those early adoption issues.
Even though Arch is going to have those fixes pretty quickly, there are going to be issues along the way. The drivers might be a bit buggy, and I just personally don't want to deal with that. But
if you're going to be selling hardware,
you need to make sure that process is going to be as smooth as possible,
especially for those people who do want to buy the latest system that is available.
Yeah, it's important that somebody does it.
I think if you're going to go out and buy an old laptop
and put any distribution on it,
it will work just fine. And the release cycles of other distributions tend towards the stability of older hardware. And I think it's great that users have both options.
So if you go out and buy a brand new piece of hardware, you know there's a distribution that's tending towards that.
And if you don't, you have more options of different distributions
that are not trying to do that.
And there's a lot of flexibility if you're not trying to do that.
You don't have to go with the latest Linux kernel.
And there are regressions in the Linux kernel that happen all the time.
So sometimes we've had to make calls where we're like, well, this new kernel is absolutely required for the new AMD GPU.
What do we do if it has a regression on an older GPU? And then we have to make a decision.
And I'm hoping that the fact that someone is actually in this space makes that less of a likely occurrence.
The fact that someone is tracking every new kernel release, and we do track every kernel release, even if we don't release every new kernel release.
We don't release the ones that don't pass testing.
Right, right, right.
And so that means there are eyeballs on how is Linux working on brand new hardware.
And we submit bug reports upstream as necessary, and we submit fixes.
And we have regularly interacted with hardware companies as well
to tell them when things are broken.
So it's been a beneficial space for someone to be in.
And I'm happy that we were able to create a distribution
with this model of trying to support the new hardware.
So what you're saying is even if it's not directly released in Pop!OS, we can thank
System76 for making sure hardware works.
In some way. I mean, mostly you can thank
and you can curse the hardware vendors themselves.
When an NVIDIA GPU has a problem,
it's an NVIDIA problem.
And not a Linux problem,
not a Pop!OS or Ubuntu problem
or anything like that.
As long as the distribution is doing
what it should be doing to keep updated,
then I think generally we should blame the hardware vendors and hope that they
see our market as important. And that's why System76 is important because we sell directly
from those hardware vendors and have direct communications with them, where we will bother them if these things are not working, and they will lose money if they do not get them working.
So we have, multiple times, especially with NVIDIA and with AMD, had to say, hey, this new hardware is broken. I have to give Intel props; they seem like they actually test their stuff regularly. But we've also had
issues where we've had to say like, hey, Raptor Lake CPUs aren't operating correctly on the new
kernel. Will you check it out? And that has led to updates that go through the entire ecosystem.
So without somebody trying to actually sell hardware running Linux, there's no interest from Intel, AMD, NVIDIA on making sure it runs.
Wait, what am I hearing now?
Oh, it's the motor again.
It's on this side of the house now.
It's a lot quieter, though. I was in the basement before, and the fence is right there, and they were on the other side. So now I'm on the top floor.
Let's hope they don't have that much grass, though. They've got to be finished soon.
All right, I think they'll be done in a couple of seconds.
Okay. Okay.
So with PopOS.
So PopOS is a downstream of Ubuntu.
At what point did the PopShell come along?
PopShell we created for 20.04.
And the purpose of that was a lot of us had not been using GNOME for a while at that point. I had switched to
i3. I know some other people had switched to different tiling window managers. And the question
was, well, if our engineers aren't using what we're shipping as the default, what can we do to
make it interesting? And the answer ended up being tiling for at least for our engineering
team. And it was very important that the tiling would be automatic. That I've seen some other
designs where you create windows and you choose where they go, kind of like the Windows 11 design.
Windows 11, now, that's basically the only feature they introduced beyond what Windows 10 had, which they could have just introduced to Windows 10, so I don't really know what the point was. But anyways, the only company I'll flame regularly is Microsoft. Everyone else is okay.
But yeah, they released Windows 11, and everyone had to pay for it and update, and you don't get to use hybrid CPUs like Alder Lake properly, because somehow the new scheduler or something wouldn't go back to Windows 10. I have no idea how that stuff works. Like, why couldn't they just release an update to Windows 10 that makes the new scheduler work?
But anyway, they have a little selection utility to select where
windows go. And that's what I would call manual tiling: you select a window, and you choose how to tile all the windows together.
Automatic tiling in Pop!OS is a little different. You create a window, and it positions it automatically. And it positions it based on what window you have focused right now, and it tries to preserve an aspect ratio.
So if you create a new window and you have a very wide window focused, it will split in half vertically; and if you have a very skinny window, it will split in half horizontally.
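The split rule Jeremy describes can be sketched in a few lines of Rust. This is an illustrative sketch only; the enum and function names here are made up for the example and are not taken from PopShell or cosmic-comp:

```rust
// Hypothetical sketch of the automatic-tiling split rule described
// above: a new window splits the focused window along whichever
// direction best preserves the aspect ratio of the two halves.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Split {
    Vertical,   // a vertical cut: the two windows end up side by side
    Horizontal, // a horizontal cut: the two windows end up stacked
}

/// A wide focused window is split in half vertically; a tall,
/// skinny focused window is split in half horizontally.
fn choose_split(focused_width: u32, focused_height: u32) -> Split {
    if focused_width >= focused_height {
        Split::Vertical
    } else {
        Split::Horizontal
    }
}

fn main() {
    // A 1600x900 window is wide, so a new window goes beside it.
    assert_eq!(choose_split(1600, 900), Split::Vertical);
    // A 500x1000 window is skinny, so a new window goes below it.
    assert_eq!(choose_split(500, 1000), Split::Horizontal);
}
```

Splitting along the longer dimension keeps both halves close to square, which is the aspect-ratio-preserving behavior being described.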
That automatic tiling was something that i3 did not have
without some extensions.
There is an i3 autotiling script that I use that works really well.
Is it the one just called, like, autotiling or something? i3 autotiling?
Yeah. I use the same link for my dotfiles, and I've set up basically every desktop environment that I try with the same kind of settings in my dotfiles.
So I have KDE and I think I have a KWin tiling script
that does essentially the same thing with all the configuration
to do the same thing.
I just like the way it works.
And we're porting the same system over to Cosmic.
So with some improvements, some major
improvements like the ability to tile things in three wide or four wide, whereas before
the PopShell script would always divide things in half.
Right, right. So you're sort of just giving the user more options in the way they want to lay out their windows.
Right. And a lot more tight integration into the system.
A big reason behind Cosmic is that we just grew tired of writing JavaScript that runs inside of a single environment.
Wait, wait. Hold on a second. Is that what GNOME plugins are written in?
Yes. PopShell is written in TypeScript and transpiled to JavaScript, and then the JavaScript gets inserted into the runtime of GNOME Shell.
So GNOME Shell plugins are all hot-loaded into the same process space, all running as one JavaScript unit. No multi-threading or anything. Every extension is running in one JavaScript context.
Wait, so if one of the plugins were to crash,
would that take everything down?
Yes.
Okay.
And it happens.
It happens.
Right.
Okay.
So, yeah, it happens a lot, I'll have to say.
And that's a major reason behind the designs of Cosmic,
which are to attempt to modularize this so extensions can be added as separate processes.
Imagine that.
Yeah.
Which is, I mean, it's something that's been going away because even KDE is doing a similar thing where they're injecting scripts directly into the same process.
And often it's the same process that's the compositor, either the Wayland compositor or the X11 server.
So it goes down.
Everything goes away.
So you download your Make My Desktop Pretty script or extension, install it, and it crashes.
Everything disappears and every process dies and you have to restart the
compositor. I don't want to flame them because I understand the reason behind it and the rationale
behind it. It is a simple mechanism to understand. It is relatively easy to write extensions,
it is relatively easy to write extensions.
But because there's no separation between extensions,
it's also relatively easy for extensions to have bugs that can affect the rest of the shell.
And so there are pros and cons to this design.
And with a more modular system, it is more difficult to write each individual extension, and more difficult to design an API that they interact through. But we will have each extension running in its own process space.
So if it crashes, it's not going to bring down the whole shell. It can be restarted.
None of the windows go away.
Yeah, that's a much more sensible design. Because a lot of the plugins are going to be made by, what, like, one guy probably, and it's probably not going to have a test suite. It's probably going to be written well enough that it doesn't crash most of the time on one version of GNOME Shell.
On one version of GNOME Shell, yeah.
The way that they update, of course, is to change the JavaScript inside GNOME Shell itself, because the majority of GNOME Shell is inside that JavaScript context. So when they are making modifications to the internals of GNOME Shell's JavaScript context, the things that run inside that context have to adapt to the new context, to the new code.
Because if you want to, for example,
put an indicator in the top panel in Gnome Shell,
the only way to do that is to modify
the JavaScript object that is the panel.
And if they change the way the panel is named, or any element of it changes,
you may have to rewrite that extension. And so there are some very creative ways that we have
had to adapt to new GNOME shell updates, because they don't change the internals based on the way
extensions work. They change the internals based on what their goals are, and then the extensions have to adapt.
And that model, it came to the point where porting all of our extensions took up the majority of the six-month release cycle.
Right.
And then, well, porting them and then, you know,
testing them as well and making sure they're actually
stable.
Yeah, exactly.
Right, so at what point did Cosmic become an idea? Because the name Cosmic, I know you guys have been using since before, like, the new Rust Cosmic thing. So when did you guys start referring to the Pop!OS thing as Cosmic?
Well, when GNOME 40 came around, we wanted to preserve some elements of GNOME 3.38, particularly the ability to have vertical workspaces. And this created a design that we needed to name, and that name was Cosmic.
And over time, that design
has eaten up a lot more things
and led to the development of a new desktop
environment. But for the very first
release,
it was just to change a few
of the elements of GNOME Shell
to better mimic
what we wanted to do.
Now we've expanded so we'll have both horizontal
and vertical workspaces selectable per monitor.
And then we will have a lot of other configuration options,
but the primary change in GNOME 40 that led to the creation of the Cosmic GNOME extension, to try and preserve the 3.38 design, was the switch of the workspaces and the view, too, where it zooms out all of the workspaces at the same time.
So, yeah, it was mainly our start at creating a desktop environment through any means possible.
Cosmic was the name given to that.
And the first version of it was just to continue down this path of trying to modify GNOME Shell to do what we wanted to do.
Right.
And then libadwaita came out.
Or at least information about it came out.
And I'm not going to talk about
it, because it's been talked to death. But the issue simply was, we did not want to adapt to their concepts about theming, and they did not want to adapt to ours. And this led to
the natural conclusion that they should be free to go and do what they want to do.
And we should be free to go and do what we want to do.
And that involved us choosing a different toolkit.
And at that point, we started investigating.
And we've been deep into Rust for all of our other projects.
Every new piece we add to Pop! OS,
if possible, is usually written in Rust.
So we had an installer backend called distinst
that was written for 18.04.
It was written in Rust.
We have an upgrade daemon, pop-upgrade.
It was written in Rust.
We have some other elements of the system written in Rust.
So we wanted to see, okay, what can we do with Rust in the UI space?
And this led us to evaluate a ton of different toolkits.
And at the time, we settled on Iced.
We settled on particularly making a layer on top of Iced that would come with some simplifications for creating
Cosmic applications.
So we have a libcosmic that integrates directly with Iced and is completely optional, but
it provides automatic loading of the themes that a user can create, which are quite extensive
and customizable, as well as integrated UX ideas. So we have widgets there that are created by our UX team, that are then placed into libcosmic, and then we use that to create the elements of Cosmic.
And again, all those elements are separated into modular pieces. So you have cosmic-comp, which actually is the compositor,
the Wayland compositor. And the only piece we haven't really figured out how to modularize yet
is tiling. And this is another thing that I've told everyone who's asked, like,
can you collaborate on tiling? And the answer is, it really should live inside the compositor.
We found so many problems with PopShell that would be solved if we could just modify Mutter. But modifying Mutter was not on the table.
What do you mean, collaborate on tiling? I'm not sure what you're getting at with that.
Yeah, we've been told, well, why don't you just work on GNOME to bring this idea into GNOME Shell? And there's a lot of different ideas floating
around. And technically, there's no technical reason it couldn't be done. But from a pragmatic
perspective, to integrate everything into an already existing compositor, and have all the people who use it be happy with that, is much more difficult.
And so I'm glad that they're going off and going to try to integrate tiling. I don't know what that will look like. It probably won't look like PopShell, but it may be a good base. I don't know. There are a lot of reasons why we need to go down our own route.
I don't know if you'd seen anything of the talk that was happening.
No.
Okay, so it's not tiling exactly in the same way that you'd want it to be for Pop!OS.
So Tobias talked about the idea of just making windows be tiles.
That's not exactly what they're going for. They sort of acknowledge that some applications don't play nicely with having their aspect ratio changed, and it's more like, I guess, sort of putting a grid of windows together, but not necessarily making use of the entire desktop, if that makes any sense.
I'll send you the talk afterwards if you want to have a look.
Okay.
But right now it's still a mock-up, so I don't know how customizable that tiling would be if you guys wanted to work with it anyway. Like, whether you're going to be able to write plugins that are going to be able to interface with that, or whether it's going to be exactly the way they want it to be.
At this point, we've already completed the tiling implementation in cosmic-comp.
Right.
Yeah, and it's Rust, and it's great, and it's fresh, and it's modularized, and we don't ever want to go back. It's just how it is.
And I tried, and they took my attempts at trying as an offense. And now I don't want to talk about GNOME anymore. And I've had a wonderful history with using GNOME. But I'd like to alternate GNOME and GNOME, because I'm still not sure which one to say.
Please do.
That history is at an end.
I'm now a cosmic man.
So yeah, there's Cosmic Comp.
The compositor is at the center of the whole thing.
And the modular design means that the panel is a different process.
Not just the panel, but every applet inside of the panel is created by a different process.
So even the network applet, if it crashes for some reason, which it probably won't because
it's written in Rust, but I mean, I don't want to say Rust is a silver bullet for every
single problem.
It kind of is, but it's as close to one as we can produce. But yeah, if it crashes, which it could, it won't bring down the whole panel.
The panel will just reinstantiate it.
And if the panel crashes for whatever reason, then the panel will be reinstantiated.
It will be respawned by cosmic-session, which is a handler that basically says, well, I need to run cosmic-comp, and I need to run cosmic-panel, and if either of them crashes, I'm going to restart it.
Now, if cosmic-session crashes, then my belief that your computer has not been compromised is very low. I believe it has been compromised. It's a very simple program. It only spawns two things, and if either of them crashes, it respawns them.
That design is so that we do have a single point of failure, cosmic-session, but something has to be a single point of failure. And so we make that thing as minimal as possible.
And then the things that are not single points of failure, like the applets, we spawn those in different processes so they can't kill each other. And if one of them does die, we can recreate it.
Sorry, I was going to ask you: at this stage, how big do you reckon cosmic-session is?
Cosmic-session? In terms of what?
Lines of code.
Lines of code, that's the easiest thing to go with. I mean, it would probably be... there is one thing in there that's a little strange, which is the notifications, because notifications in the freedesktop specification have to be handled by a D-Bus server, and there has to be only one of them.
Basically, a socket is created by cosmic-session and then handed off to cosmic-panel, and also to a separate server, the cosmic-notifications server. So it's probably about 200 lines of code, and the majority of that is creating that socket and sharing it. Then, on the cosmic-notifications side, that server creates the D-Bus interface, of which there has to be only one. But the panel can communicate over the socket, so there can be multiple consumers of the notification data. So it's multiplexing the notification data: it comes in through D-Bus, and then it goes out to each panel instance of the notification widget, or applet. That way, each applet can read the notifications. And by having permissions defined, we can prevent any other applets from reading notifications.
So each applet can be sandboxed.
And this concept is primarily important
because we want to have third-party applets be available.
So they're going to be sandboxed
such that they have permissions, like: it can read the screen, or it can communicate with audio devices, like changing the volume, or it can read notifications, things like that.
Okay, that makes sense. Well, there are a couple of things I want to unpack from that. I was going to ask, but you pretty much answered it anyway. I'm sure people have asked you why make your own whole separate thing and not just fork off of GNOME, but I think that's been made pretty clear through some of the stuff you've said. But I do want to go back to the whole using-Rust thing. Why is Rust the language that you guys were drawn to? What is the value in this language? And, you know, just go with that, I guess.
Yeah, sure.
Well, if we're starting from scratch for any project,
we don't really have anything to bind us to any other programming language.
And in that kind of free field,
it's very hard to rule out Rust as the language to start with.
Okay.
It is statically typed, which is important for preventing errors that happen all the time in dynamically typed programming.
We love duck typing. Duck typing is great, isn't it?
It's the reason why Pop Shell is using TypeScript: so that we can add a layer of static typing on top of a dynamic language.
Because it just means if you pass the wrong object, the compiler complains.
You don't have the whole system crash three weeks after the user boots into a session because something was wrong in one specific tiny little use case, because of a mistake. Which happens all the time in Python and JavaScript: oops, at that one point, in that one rare case, I passed something in, ValueError, where did that come from? Let me just wrap the whole thing in try-catch and hopefully everything's fine, right? Try-catch solves everything.
Try-catch the whole entire piece of code, the main function. Just nest the try-catches down.
Exactly, every function has a try-catch. So yeah, it's easy to write dynamically typed programs, because you don't have to worry about the types matching. But when type matching is required, you run into cases where, in JavaScript, you have things interpret themselves as true or false in a completely unpredictable way, you would think. And you have to learn the entire ethos of the creator of JavaScript to figure out why square brackets are true and curly brackets are false. And it's like, I don't want to make fun of JavaScript, but I feel like out of all the programming languages I could make fun of, that's probably the main one, just because so many things in it don't make any sense to me.
So, yeah, static typing was important.
And then you have to rule out, well, why not do it in C?
Why not do it in C++?
And so another thing about Rust is it's not really just, like, about memory safety.
Memory safety comes out of the borrow checking system. And the borrow
checking system is kind of an extension of static typing, where now the typing system is also
evaluating, have you mutably aliased an object when you shouldn't have. And this extension,
this this borrow checker, just rules out a ton of other bugs.
And another thing is the non-existence of null.
Null doesn't exist in Rust at all.
You have to use the Option type, which is a strong type. It either has a variant Some or a variant None. You can also use a Result type, which has a variant Ok and a variant Err. And you have to match on the variant. You have to handle both cases. You can't just have a function that takes a pointer, where the pointer could be null or not, and then the programmer could forget to check for null. Every function that takes a pointer to something usually does so through a borrow, and the borrow is statically checked by the compiler to always be a valid pointer. And if you don't want it to be a valid pointer, you wrap it in an Option type, and then the programmer has to actually unwrap that Option type; they have to handle both cases. In C and C++, often the mechanism by which you
pass errors back is to return negative one. Well, what if negative one needs to be a real thing?
How does the user of the function know by just looking at the function if the error type is zero,
negative one, 100? They don't know. They just know it returns int.
What does int mean?
And in C++, pointers are used all over the place,
and it's very easy to pass in pointers when they're already being used.
For example, iterating over an array while you're removing elements from the array.
In Rust, you are not allowed to do this unless you do it the correct way.
In C++, it is very easy to get seg faults because you've removed an element at the same time that
you're iterating an array. So, these are all things that a stronger type system prevents. So,
I'm very into types, and I would have gotten into Haskell if I had a PhD in applied mathematics, but I don't.
So I'm not into Haskell.
So I got into Rust, which is like the next best thing.
As you were going through those examples, I was thinking back to my software engineering classes, and on every single time I did exactly what you were saying.
Yeah. And the apologists will say, well, you just have to get better. And I don't think that's an answer, because there is a limit to the ability to understand a system. When you drop somebody into the Linux kernel, and it has, you know, six million lines of code or whatever it is at now, you can't expect them to understand every other part that they're interacting with. And all those parts have to work for the whole system to not have bugs.
It's not just that. If you can get the computer to do that checking for you, why would you bother doing it yourself? Go wash your clothes by hand. Why would you use a washing machine? There's no point; you can just do it manually.
Oh, well, washing machines, you know, they have the 5G signals that communicate with your COVID implant and blah, blah, blah.
There are some really crazy people out there.
You were joking about the 5G thing,
but I guarantee there are actually washing machines
that are like...
There have to be washing machines with 5G.
There are so many smart...
Why does anybody need a smart washing machine?
I don't know, but they are there.
They exist.
A smart refrigerator.
Yep.
Why? So it can fail. So it can just...
So it can play Doom.
There's a software bug, and then all your food rots because the software bug turned off the refrigerator. It's the same reason why cars may have a ton of things going through the entertainment system, but usually they try to segment out a few of the critical things. Do the brakes work through the entertainment system? I don't think so. Maybe on a Tesla.
I was going to say maybe on a Tesla, but probably not on most cars.
We got a little sidetracked there. We were talking about Rust. Right, on the topic of Rust: so you guys are going with the Iced toolkit. I don't know anything about Rust GUI toolkits, but what were the other options at the time you guys were considering?
It is a very nascent field, and there is a lot of interest in creating things that actually work. And when I got into it, there was absolutely no text rendering being done in Rust. It was all wrapping other libraries, at best. And at worst, it was basically rendering text without any handling of complex text items like shaping or anti-aliasing or things like that. And so Iced was doing it that way.
Mm-hmm.
But so was everyone else.
Accessibility is something that we're still working on building into LibCosmic.
So it was really hard to come in and evaluate any of these because so many things would have to be done in-house for it to be finished.
The first evaluation was: do we want to use a pre-existing toolkit not written in Rust, like GTK or Qt? Do we want to use one of those? And we eventually decided not to, and it was a very tough decision. On the GTK front, it was primarily due to the attitudes of the GTK developers towards how we were doing theming on Pop!_OS. And although technically it's probably still possible to theme libadwaita and to theme GTK 4, they have made it tougher on purpose, and that does not give us confidence in the future of the platform. For Qt, our main issue was the difficulty of integrating it with Rust, because we still wanted to write the business-level code that controls the GUI in Rust, and we wanted to integrate that as tightly as possible. And gtk-rs is really good; it was easy to use the GTK wrappers for Rust, but we just didn't trust the platform. And for Qt, it was difficult to use the wrappers for Rust. So that led us to the thinking: we'd have to work on one of these either way. Either we'd have to bring ourselves to trust GTK to continue working for us (and our use case is definitely strange, because we will be loading custom theming into the application based on the user's config files, and that was something we weren't sure was going to last in GTK), or, on the Qt side, we would have to do work to improve their bindings for Rust. So instead, let's look at the work we'd have to do for a Rust toolkit. That led us to several different toolkits, Iced and Slint being the major ones. And I still recommend everyone to take a
look at Slint. Okay. I think for a lot of applications, Slint is a better choice.
Iced integrates directly into Rust. Slint has a layout language. And I think depending on your
preferences, one is going to be easier than the other. And for us, we decided to go with Iced. At the time, it seemed to be a more flexible option, but Slint has done a lot of work recently to improve. So now, if I was going to recommend a toolkit, it would be either Iced or Slint, and we had to flip a coin and decide to go with one. And we went with Iced.
And did you guys do any prototyping early on, just to mess around with the languages?
We did prototyping. We did prototyping in GTK, and we did prototyping in Iced and in Slint. And the ability to wrap Iced in another library was particularly impressive. Just the way that it exposes things, and it's composable at a code level. I think it makes it a little more difficult to use at a higher level for someone who wants to draw up a design in a design editor, something like Glade or Qt Creator, which I've used a lot. I used to do a ton of C++ stuff. So, anybody from the C++ world who's mad about
the stuff I say, just realize I used to not understand, just like you. But now I know the glory of Rust. So yeah, I'm aware of those, and I think Slint provides, like, a website editor to mock things up. And they did some work to integrate our Cosmic theme; that was recently something they published, and it looks really great. Wrapping it into Rust and creating custom widgets the way we wanted to do, it's just that Iced was something that appealed to us. It also follows an Elm-like structure, which a number of people on my team were interested in. We had been experimenting with Relm4, which is a wrapper for GTK 4 that has an Elm-like structure as well. And Iced was a little cleaner in terms of following this model, because it was able to separate things out that, using GTK, would be difficult to separate. Yeah, in the end, we had a lot of work to put into Iced. For example, I had to spend about a month just re-implementing all of the pieces of shit about text layout that nobody thinks about, but that have to work before you have a working toolkit. And so that became Cosmic Text. And now Cosmic Text is integrated into Iced. And Slint is planning integration of Cosmic Text as well.
Okay, wow.
So that was something created by System76.
The first pure Rust shaping and rendering solution.
It integrates some other libraries that were very important.
There is a Rust port of... I forget the shaping library's name now. I can find it very quickly; I'll just go to the Cosmic Text GitHub page.
Easy.
Because I don't want to throw out that we did everything. It was an integration of multiple different efforts; it did require a lot of System76 work to get everything working.
Shaping. Who does shaping?
Shaping is provided by RustyBuzz. It is RustyBuzz, yes.
Render is provided by Swash.
Right.
And RustyBuzz is itself a Rust port of HarfBuzz,
which is the name I was looking for.
And that solution, basically, Swash was able to render everything.
RustyBuzz was able to shape everything.
But combining the two and actually doing layout
is an incredibly complicated task
that no one had really attempted yet in Rust.
And so a lot of Rust libraries were either not handling international text correctly, or they were wrapping very large C libraries to do the same thing.
Right, right.
And you can use GTK and just do that with gtk-rs, but then the majority of your code is actually C, and you're using Pango and Cairo and HarfBuzz and FreeType, those C libraries, to handle shaping and layout. And font fallback is another thing that people usually don't think about, but it's something completely integrated into Cosmic Text and not from an external library.
So you have to scan the system for all fonts, and you have to have a list of fonts that are preferred for each script.
Right.
Like, if you have Hindi script, you need a set of fonts for every operating system that are preferred, and you need to find which font has the right character, and you need to do this for every single character. So it ends up adding up quite a lot and being a very difficult performance problem. So caching the results of that, and ensuring that every single group of characters that could be ligatured together got the right font, is a lot of the parts of Cosmic Text.
And yeah, so in the end, we were able to add international text handling to Iced.
Slint is going to have it as well using Cosmic Text.
Accessibility is something we're working on. Those are things we had to take on because we wanted to do a Rust toolkit; it was a very nascent field, as I said, and still is in many ways.
So whilst your goal is improving Iced for the use of Cosmic, it is having a wider effect on the general Rust GUI space.
Yeah, Cosmic Text is now being evaluated by a ton of different toolkits, if it's not already being used by them. So not just Iced and Slint, but also Bevy, the game engine, is looking at integrating Cosmic Text. I think egui as well, if I remember correctly. There are, like, two choices if you're making a toolkit.
You either wrap another toolkit or you have to do text layout in the language that you're in.
And different Rust solutions do it different ways.
So, Iced was interesting to me because it wasn't wrapping any other toolkit.
It was handling the rendering all inside of Rust. So it was
using wgpu and going directly to the GPU for every operating system. And I helped to fix up this crate called Softbuffer, so that Iced could provide software rendering on every single major operating system, including Redox. So now Iced works completely, 100%, on Redox. And so does Slint, because there's a Slint team member who is basically tasked with ensuring that Slint works with a whole bunch of different operating systems, and Redox is one of them. And providing the Cosmic theme for Slint is also one of those tasks.
So we'd probably be, and I don't want to throw this out there and be wrong, I could be wrong, but we might be the first desktop environment where you have multiple toolkits that implement the UX style of that desktop environment.
Hmm.
I can't think
of another one. Yeah?
Yeah, I was saying I can't think of another one. You have KDE and Qt. You have GTK and GNOME. Yeah. You have XFCE and GTK.
GTK 3.
GTK 3, which, I mean, I prefer in some ways.
Yeah.
But yeah, it's usually a one-to-one ratio. And now we're talking about multiple different toolkits, and they're all going to load the Cosmic theme. And the concept is that we want to have an overall set of GUI libraries that any Rust GUI developer can use,
and they can be assured that the entire stack is Rust: all the way from their code, down through the toolkit, down through the rendering libraries, to finally the interface with the operating system. And that's where it cuts off. Unless they're on Redox; then it keeps going all the way down.
So whilst we're getting this consistency within major Rust libraries, one concern I have seen people have is: well, what's going to happen if I want to run, you know, a Qt application? A lot of people are probably going to have something like Kdenlive installed. Are you guys going to be handling, or at least trying to handle, some sort of consistent theming there as well? Or what's going to be the go-ahead there?
There is a desire to.
And the thing is, Pop! OS,
when it releases with the Rust desktop environment,
it will be still mostly GTK apps.
Okay.
It has to be,
because we're not going to be able to rewrite
every single application.
And it's just going to include those.
And our process is going to be to generate themes for GTK
and potentially for Qt as well.
And I hope the KDE team is interested in this, because we are interested in how their theme can be ported over into Cosmic's theme configuration whenever the user loads up KDE, so that our apps fit in.
Right.
And when KDE apps are inside of Cosmic, how can we port our configuration into KDE's theme configuration?
I think it's totally possible.
It just is something we will have to work on.
And I'm very interested in doing that.
Hmm.
No, that is really good, because I know a lot of people are really concerned, because the Linux desktop for the longest time has just been Qt, or "cute," whatever you want to call it, and GTK. So bringing a third player, or I guess a fourth if you're going to include Slint as well, bringing an extra player into this space has a lot of people worried about how it's going to integrate and what's...
I don't personally care about consistent theming; anyone who's seen my desktop knows it's a disaster. But I know a lot of people out there really do care about it. So it is good to know that you guys have at least that in mind, whether or not it's going to be doable on a wide scale. It's going to be doable on the GTK side, and it's good to know that you guys at least have it in mind as something you want to work on.
Yeah, it's something... I don't see it as a requirement, absolutely, because the Linux app ecosystem already contains so many different applications with different styles, and different Electron apps in there. It's a mess.
Yeah, yeah. Um,
the likelihood that a user is actually going to have a set of applications that all fit together
is very low. And that's not just a Linux problem. That's also a Windows problem.
It's also an Android problem. That's a problem anywhere where there is so much freedom in the
creation of applications that each app developer is going to generate
their own style.
And I think that's acceptable so long as we are taking whatever steps we can do to make
very common applications fit in and be styled by the desktop environment.
So I would hope that a set of similar applications,
file manager, terminal,
would live in any desktop environment and would fit in there.
I am skeptical that it will actually be possible
to the level that these users seem to want it to be.
Again, there are lots of different
applications with lots of different design concepts and there's no way you're going to
make all of them fit together. But to the extent where we can, I would like to be able to port
themes from KDE into Cosmic and vice versa.
And there are some attempts to standardize this at a free desktop level as well.
It's kind of minimal right now with accent colors being considered.
We can probably go a little further than that with Qt and Cosmic integration. Because the accent color proposal really just makes the colors of the application match; it doesn't particularly make the sizes and shapes of things match, just the accent color. So there's still a lot that would not be matching across toolkits. I don't think adding more players really changes the landscape that much.
It's not really fair to say there's only GTK and Qt.
There are so many applications that are using custom toolkits
or using different toolkits that I think it's...
And a lot of them are pre-installed.
I guess what I meant there is the major desktops,
that's what they're using.
Sure.
Yeah, and so I feel like it's the default set of apps that should be most concerning, and usually those are bundled with the desktop environment.
If you install Kdenlive,
you don't really want the interface to change
such that it looks like a GNOME application
because a GNOME application that does video editing will look very different. It will have
different UX design decisions. That's fair.
But matching, at the very least, dark and light mode is a guaranteed thing. Matching accent colors is very close to being standardized. Going beyond that is something that has to be done on a per-toolkit level. From this toolkit to that toolkit, what do we want to synchronize? Do we want Cosmic to output a KDE theme file for every KDE application that runs inside of Cosmic? I'm not sure. I think it's something we should try, but I would easily go back on that if it ends up not looking right, or not working right, or being too much of a hassle.
Right. No, I think it's a good answer. Well, let's switch gears a bit. I've got a couple of things people brought up when I asked on, like, Mastodon. Someone wanted to know your take on this recent wave of immutable distros coming out. You've got Vanilla OS, you have Blend OS, you have the new Snap desktop thing. Do you think this is an interesting direction for the Linux desktop to be going? Do you think there's a place for Pop!_OS here? I know you guys had your immutable core thing that happened, or was being discussed, earlier in the year. But yeah, what do you sort of think about this?
I'm very interested in immutable systems, but I still think that the research on them has not been completed. There are so many pitfalls.
There are so many ways that you run into walls, and the only way to really fix it is to build your own image. And I feel like, as a user, you can work your way through those only if you're really experienced with Linux, because the guides that are out there for modifying Linux usually assume you don't have an immutable distribution.
Right, right.
And if that's the case, then we are producing something that's supposed to be better for beginners but requires an expert level if you want to go beyond. So there's almost a gap in the abilities the user has. How do you jump the gap from "I'm an inexperienced user who's okay with immutable distributions because I have no use case where I would have to do something hard to work with them" to "I'm a very experienced user who knows how to get past those hurdles"? There is a gap there that I feel has not been filled yet. And I'm happy that there are so many other people willing to explore that and try to figure it out. I feel like, with most of our customers being business customers and being in
spaces where they customize the OS at a low level, there are very few distributions that
are immutable that actually meet their needs. NixOS might be the closest thing because it's
just so easy to configure changes to NixOS
compared to other immutable systems. But that comes at the cost, I think, of it being difficult
to define the packaging for packages. And you have to learn the Nix language and learn a lot
of other things before you become proficient enough to create new packages. If the package
already exists and all you want to do is configure it, great. But if it doesn't exist, I say this
because we did have to recently see how hard would it be for Cosmic to be ported to Nix.
It was difficult to port. So I say it depends on who you are, and a lot of people could live inside of an immutable distribution. But there are still gaps, specifically for users who want to modify the system at a low level but don't have an easy, well-documented way to do that. On Silverblue, you may have to reproduce the image, which is actually what one of our developers does. Victoria, the primary developer of cosmic-comp, uses Silverblue often. She produces a brand-new image; she's not using rpm-ostree to layer packages in, she is recreating a brand-new image when she wants to update. And this custom image is then put into the OSTree as a new commit. And I feel like that's a very difficult thing to do. It works technically, but it's far more difficult than installing Cosmic on a mutable system.
And so there's a gap there that needs to be bridged. How do we bring in moderate-level users, users who have enough technical ability that they need to do something, but not expert-level users who want to go learn how to package things for an immutable distribution? That can only really be bridged by immutable distributions themselves actually trying to handle these use cases, instead of just saying Flatpak is enough or Snap is enough. It really isn't enough, especially when you consider that Flatpak is devoid of any CLI provisions. So if someone wants to install MySQL on Silverblue, they can add it with OSTree. But how do they configure it? They have to go through a special Silverblue-specific mechanism, which means Silverblue has to produce documentation specific to Silverblue. Whereas for all the mutable distributions, guides like those on the Arch Wiki are usually portable to every single other mutable distribution.
So, all I'm saying is, it's a very, very good thing for producing absolute truth in terms of the operating system image. But there are some very difficult gaps, with specific use cases that are far more difficult to handle on current immutable systems. And what I feel a user would want is to just be able to run sudo apt install something, straight out of a guide, and have it just work. And I went down this path with PopCore
to try and figure out how to make an immutable system
on top of a Debian-based system.
And I did not make enough progress.
I feel like there are some very big issues
with the state of packaging in
Debian and Ubuntu that assume a mutable system and are very hard to move into an immutable system.
Namely, the presence of distribution packaging scripts. Those scripts are Turing-complete, right? They can run any command they want at any time. They can try to change, for example, the bootloader on the disk, which the grub package absolutely does. And so if you have a package that does this, then how do you integrate that into an immutable system? What Fedora did for Silverblue is they took all the RPMs that they had and filtered out the ones that would fit in. They started with that set and converted them so that they could fit into rpm-ostree. But there's still a wide set of packages that are unavailable with rpm-ostree, and that set of packages will be difficult to port over.
And then there's everything in the COPR repos, which are third-party things. That's another thing that irritates me, and another reason why I love Ubuntu and Debian: so many things are in the main repository. With Fedora, with Arch, you have to use third-party repositories, and the AUR is absolutely a third-party repository. Anybody that says it's part of Arch: no, it's really not. Somebody else uploaded those scripts. Does any of this make sense?
No, I get exactly what you're saying.
So basically,
you need
firstly, good documentation on how
to do it.
Absolutely.
But also some sort of more automated configuration tool for generating these new images.
So you wouldn't have to go through that process of,
okay, I need to read this like 10-page wiki on how to build my own image.
It's more like, okay, I can run this command.
It will just automatically make the new image for me.
It just works.
And NixOS, for the most part, does exactly that.
I feel like the difficulty there is just,
it takes a lot of effort to add new software into NixOS.
And they would probably disagree with me on that. But again, I do not have a PhD in applied mathematics. So I don't understand
Haskell and I don't understand Nix. Those are the two things I don't understand. But once I get my
PhD, I'll be able to get into functional programming. And when I get deep into functional
programming, I'll be able to understand Nix. I did package some things for Nix for Cosmic.
I did package some things for Nix for Cosmic, and then another member of our team, Ashley, repackaged them or created flakes for them.
And I don't understand flakes, but the basic concept is that eventually, at some point, NixOS will support flakes, and then everything will be easier.
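For context, a flake is just a Nix file with a fixed schema of inputs and outputs. A minimal, purely illustrative one (the packaged app here is simply nixpkgs' `hello`) looks like this:

```nix
{
  description = "Illustrative flake packaging a single application";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix build .#default` would build this package.
      packages.x86_64-linux.default = pkgs.hello;
    };
}
```

Flakes pin their inputs in a lock file, which is the property that is supposed to make builds reproducible across machines.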
And I'm waiting for that to happen with bated breath.
Well, another thing I wanted to ask about: you guys have got your COSMIC thing, but a lot of other distros out there will provide other official desktops as well. I had someone ask me, is there any chance at some point that there could be an official KDE version of Pop!_OS, or an official some-other-desktop, things like that?
Well, it's an easy question to answer if you ask whether there could be a chance: yes, there could be a chance. Okay, how much of a chance? It depends on how well COSMIC hits, and if people are still asking the same question. And if they are and they're really interested in COSMIC, we've done some changes to our packaging that should make it easier to add more desktop environments in the future. I don't know if they would be official or unofficial, like community spins, but they would be there and they would be fairly well supported.
Okay, easy enough. Yeah, I know you guys are very focused on COSMIC right now, so I
didn't think there would be anything you could specifically say on that. But I guess we should probably at some point talk about the other project you're involved in.
It's fun.
Redox. I had a surprising number of people ask about it. I didn't know there were this many people that were interested in it. I really didn't.
Oh yeah, of course.
I personally haven't done a ton of research on it myself, so why don't we just start with the basics: what is this project?
So Redox OS is a microkernel and a complete, full user space that is written primarily in Rust. That's the basics of it. And the history is that I needed a first project to do in Rust.
And I didn't know how to write a Hello World, but I knew how to write an operating system.
So I started there.
For the first project I wrote in Rust, I took somebody's bootloader sample and ported over all the code from my other operating system into it. It had a simple GUI, a mouse cursor moving around, a keyboard, but it was basically a unikernel; everything was inside of the bootloader. And then I rewrote the whole thing to split it up. I decided to go from the absolute insane side of unikernel to the absolute insane side of microkernel. So I split it up so that everything was in separate processes, and it grew from there.
Well, I was going to ask you why you made it,
but I guess why is a hard question. Why does anybody do any programming when we all know that humanity is doomed and probably only has another 60 years before we're cooked off the planet, either via climate change or nuclear explosions or something else? But why?
Well, it was fun. I needed to learn Rust. Why not write an operating system in Rust? My friends say it like it's just a casual thing: why don't we write an operating system in Rust? You know, I mean, yeah. Somebody else writes a tic-tac-toe program in Rust. Why not write a calculator in Rust? I don't know how to do that.
How do you even...
Tic-tac-toe? Game's too complicated.
It's too complicated.
Yeah, yeah, okay, sure.
I feel like the major thing that drew me to it was that, at the time, the only thing that was running in very low-level Rust was that demo program that I found called rustboot.
Right.
And all it did was basically act as a bootloader and print some text on the screen.
And I'm like, okay, well, that's interesting.
I'm doing that in Assembler.
Why don't I try this new language that my friend recommended? He's a Haskell fan, because he has a PhD in mathematics and can understand it, and he recommended I look into Rust. At the time, I was very deep into C++, and I was trying to figure out
what kind of rules would I need to add to C++ to make it operate the way I wanted. So I
wrote a specification and a set of scripts called the Safe Object Language. Boy, was I wrong. It wasn't that safe, but it was safer. It should have been the Safer Object Language. What it was, was basically a set of lints that would run on top of C++ code and say when you did some stupid stuff. And I was very short into this project when he sent the link to Rust, which was still in like alpha stage. And what I was really having fun with was operating systems at the time.
So like, well, where is all the operating system code written in Rust?
Oh, nobody's doing it?
This irritates me often.
It is a very empty space.
There are very few people actually interested in low-level stuff.
And they're all employed. They're not doing it as a hobby, usually.
So it's like you can find a lot of stuff in the open source world but when you start to
look for operating system stuff, it is just a handful.
It is a handful of operating systems and they're all written in C. So, I was like, okay, this presents an opening.
What if I try to use this language that supposedly has all these wonderful properties
and it was in my opinion even higher level than C++. Although a lot of people would
disagree with me, it felt like it was higher level. So I was like, how? This is wrong. How can you even use this at such a low level? It just felt wrong to me. It felt like I was cheating if I were to write all this stuff in Rust and not have to do it in Assembler line by line, where every time I add a single instruction I have to recompile the whole thing and test it, because God knows it's impossible to make changes to Assembler without making mistakes.
And sure enough, it was easy and worked well.
And I'm like, well, this will scale.
I can start to write more things with this.
And yeah, that's where it came from.
I just went straight from Assembler. Visual Basic, Assembler, with some C++ on the side, but I hated it. Then Rust. And that was it.
I still do a bunch of Assembler because you can't really do operating systems without it.
But yeah, doing so much in Rust and making it a microkernel, it just scaled very easily. I could write a driver for Redox in half a day. I had a network card that was in one of my old laptops. This is another thing I love doing with Redox: seeing how old I can get stuff to work. I had a driver for the RTL8169, but this laptop had an RTL8139. So I'm like, okay, I'll write a driver for it. I download the data sheet, and in about four hours the driver was done and working on the real hardware. It was a new process, a new modular component, running in user space, no changes to the kernel required, and a new piece of hardware is supported.
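As a rough sketch of that workflow (this is hypothetical code, not the real Redox scheme API): in a microkernel design, a driver is a plain user-space process that answers read/write requests over a well-defined interface, so supporting new hardware means adding a process like this, with no kernel changes.

```rust
// Hypothetical sketch, NOT the real Redox API: a microkernel driver is a
// user-space server speaking a small, well-defined interface.

/// The interface every driver process speaks (stand-in for a Redox scheme).
trait Scheme {
    fn read(&mut self, buf: &mut [u8]) -> usize;
    fn write(&mut self, buf: &[u8]) -> usize;
}

/// A toy stand-in for an RTL8139-style NIC driver. Instead of touching
/// hardware, it loops written frames back into a receive queue.
struct ToyNic {
    rx_queue: Vec<Vec<u8>>,
}

impl Scheme for ToyNic {
    fn read(&mut self, buf: &mut [u8]) -> usize {
        match self.rx_queue.pop() {
            Some(frame) => {
                let n = frame.len().min(buf.len());
                buf[..n].copy_from_slice(&frame[..n]);
                n
            }
            None => 0, // nothing received
        }
    }

    fn write(&mut self, buf: &[u8]) -> usize {
        // A real driver would DMA the frame to the card here.
        self.rx_queue.push(buf.to_vec());
        buf.len()
    }
}

fn main() {
    let mut nic = ToyNic { rx_queue: Vec::new() };
    nic.write(b"hello");
    let mut buf = [0u8; 16];
    let n = nic.read(&mut buf);
    assert_eq!(&buf[..n], b"hello");
    println!("driver handled {} bytes entirely in user space", n);
}
```

Because the kernel only sees the interface, a crashing or buggy driver process can be restarted or swapped out without taking the system down.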
And yeah, it feels to me like microkernels are still the right way to do things.
But nobody has really invested that much time into them outside of research.
And that also irritates me.
Just the lack of resources at the lower level and the heavy investment of resources into
high level things and how can we get Rust to run web server stuff and run inside of
Wasm and things like that.
And that's got like 99% of the entire Rust community working on it.
And I'm over here like, hey, you guys realize we can use this anywhere.
It can be at low level stuff too.
Yeah, it's been interesting.
So correct me if I'm wrong here on the dates with Wikipedia,
but Redox started before the first stable release of Rust.
Yes.
About a month before the first stable release.
Jeez, okay.
So you've been like there from the start, basically.
It was a little earlier than that
because the first public commit I have was probably,
it was like a few weeks after I had started the project. But yeah, it was somewhere around April 20th of 2015. And that's exactly... 420 is just a great time to release stuff, on 4/20. And it also means that the anniversary is right around April Fools' time too. We've had a few April Fools' jokes, until everyone decided April Fools' wasn't cool anymore.
Yeah, it's no fun. Just stop that. I'm the only one who makes April Fools' Linux videos. What do you got, guys? Join in on the fun.
Nobody wants to have fun anymore. No fun. I just want to complain.
Yeah, exactly. So I guess that explains why OrbTK was a thing then.
Yeah, OrbTK was a Rust toolkit, pure Rust.
Because there wouldn't have been a toolkit like that when you started.
Yeah, when I started there was very little. I don't even think Iced was around. Slint may have been around as SixtyFPS, which it was known as before, but I don't even know.
At that time, I wasn't sure if we could even port other toolkits to Redox. It was so different of a system and had so few things filled in. So I had a window manager called Orbital, and I had a client library called orbclient, and all the apps at that time were just rendering directly to a framebuffer. And then OrbTK was to try and wrap that up in a way that was a little easier to interact with.
And we used it for a couple applications.
But in the end, I just, I don't know.
I think I filled in enough of the GUI stuff that I kind of didn't have interest in it anymore.
And so I worked on other parts of the system, especially on driver support,
on making sure Redox booted up on all hardware, on the file system, things like that,
porting to new architectures. The first microkernel version was 32-bit x86.
And then it really sucked. I wiped it all away and rewrote it for 64-bit x86. And then I got so bored with new computers that I added 32-bit x86 support to it again. So it's been a cycle.
Why did you get rid of the 32-bit in the first place?
Well, it was terrible code.
So the kernel had to be rewritten.
Fair enough.
Especially the memory management.
It was just too many problems with the memory management.
Right.
And then that was fixed.
And then it was written for x86-64.
And then there was a port for ARM, and then there was a port for 32-bit x86, which is now my favorite platform to work on, because I can work with really old stuff that Linux doesn't support at all. You try to run even the most lightweight Linux distribution on a Pentium II computer, it will absolutely not boot. It won't have enough memory to even load up the installer. And then you try to run Redox, and it runs just fine. And the same code runs on a 128-thread Threadripper processor with 128 gigs of RAM or whatever.
So, yeah, it's scalable.
I really like having that aspect that I can write a piece of code and it will run across so many different pieces of hardware.
And now I have Iced programs running on that Pentium II. It's kind of nuts, because the whole stack has been integrated correctly, where we have a software rendering fallback, and cosmic-text is there, and it's optimized for memory usage, so it fits well on such a low-end system.
So I know that you have sunset OrbTK. Why did you decide to do
that?
Well, like I said, I was not working on GUI stuff for a really long time, and there was some work by some other folks to try to modify it, to use it. Basically, this company ErgoDocs hired someone named Florian Blasius, and Florian wanted to modify OrbTK to use in their company's products.
Wait, I'm sorry, ErgoDox? The keyboard company? Or are we talking another ErgoDocs?
It's a different one.
Okay, I was going to say, I was very confused there.
Yeah, it is really confusing. I think it might be spelled differently, it might be ErgoDoc or something, I don't even remember. He works at Slint now.
Okay. Yeah, so what happened was he left that company.
He worked at Slint. Nobody was working on OrbTK. So I'm like, okay, you're working at Slint; we already chose Iced as a toolkit because OrbTK wasn't up to the task; now Redox is going to target these two toolkits. So I made sure softbuffer and winit, which are two libraries used in COSMIC, were working on Redox. And by doing that, I was able to ensure that Iced would run on Redox. Slint could also reuse the same libraries, and now it's running on Redox.
So now we have the opportunity to port COSMIC applications directly to Redox, so I don't have to write applications specific to Redox. And that means I can actually get paid to write applications that would be ported to Redox, rather than being interested in other things. For Redox, I'm usually interested in low-level things, because I feel like we should be bringing in the GUI from a third-party source.
It's not a critical aspect to Redox to prove that there can be a Rust GUI.
It's a critical aspect to prove there can be a Rust kernel and Rust drivers and the low-level parts of Rust.
And if we have a Rust thing being developed and being paid for by another company, why not just bring that in? So now the plan is to try to bring in as many elements as we can from COSMIC, written in Iced, written in Slint. And people are confused about that, but really the two options are both equally viable; it's whichever one you prefer. Yeah, I know there was a question on Fosstodon, or on Twitter: for Redox, which one are we supposed to use, Iced or Slint? It's like, well, both. That's the nature of finally having enough toolkits: you're not tied to one, and hopefully Redox will have many more toolkits to work with in the future.
Well, let's shift gears to the microkernel design, because you sort of were talking about this in another sense when you were talking about GNOME.
With GNOME, obviously it's not a kernel, but it's that same monolithic design where all of the plugins are running in that single process.
And if something goes down, it all goes down.
Linux is the same way where you have your drivers in the kernel, the driver crashes,
goodbye Linux. So why is it that you feel this way about the microkernel design? Obviously,
that is one aspect of it, but I'm sure there's more to it than just that.
Yeah, the reliability concerns are very important. It is possible for monolithic kernels to try and sandbox their drivers. But at the end of the day, when you're compiling all these things into the same process, the same program, there are limits to what you can do. And so having a well-defined interface
between different driver processes, that means that they're not relying on each other in any way,
shape or form. And we can swap them out very easily. We're not trying to define something
inside of a space where any driver could decide to go around those definitions. And I feel like
there is a lot more work that has to be done up front to define an interface like that.
And this also continuously has to happen as we come up with new problems that can't be solved
without having new functionality. But that was a very short period of time. Now we're in the place
where we can just pump out drivers, so long as we have data sheets for them. Which is really the hardest thing: to get hardware companies to actually describe what their hardware does. They will dump a huge file into the Linux kernel, and okay, here's this big-ass file. It works, but it's not really portable. It's specific to the Linux kernel, and it means the Linux kernel gets support for hardware preferentially. And they don't send those things to the BSDs often, FreeBSD, OpenBSD. They're not sending their hardware patches there. There are customers that are saying, hey, we're using this on a Linux server, can you support it on Linux? And it's like, okay, here's this huge-ass driver file, yep, just put it into Linux. The monolithic model is easier for vendors to work in, in that manner, because they can just dump huge pieces of code into the Linux kernel and do literally anything they have to to make it work.
But in the long run, what that means is that these huge pieces of untrusted code are all living together in the same space.
And because the information about the hardware is never public and is written by the vendor,
like Realtek will write a driver for something
where the data sheet is NDA.
You can't read it.
You can't distribute it unless you contact them directly
for every instance, every copy of it.
And that means that most Linux kernel developers are never going to see information about the hardware being handled by that driver. But that driver coexists in the same process space that they're coexisting in. It's a messy situation, and bugs happen repeatedly in this kind of situation, where drivers operate in ways that aren't expected. And this can especially be seen with the NVIDIA driver, which kind of still lives in the same space. It's inserted into the process space, but it comes from separate code. This is a very, very strange model. It's almost like a microkernel, but doing the opposite.
You have a process that is running third-party code. Nobody who created the Linux kernel is reading all the code that goes into the NVIDIA driver,
unless you're talking about the new open-source driver they have,
which is still using a huge 500-megabyte blob in user space
to control things, so it's not really open-source.
They put this huge DKMS module into the Linux kernel,
and it runs in the same code space.
You have some modularity at the source code level, and I like improving modularity, but you don't have any review of that code that goes into the same code space.
So having each process be independent and sandboxed, where each driver, each vendor, and whatever weird stuff they need to do has to go through a unified interface, it just really appeals to me.
So do you see the monolithic design of the Linux kernel as sort of a failing of the project, or more just a symptom of the circumstances it was made under,
where, you know, Linus just made it
because he had a CPU lying around
and the monolithic design was the easier way to approach it
as opposed to doing a microkernel design.
Yeah, absolutely.
There were, of course, the debates between Tanenbaum and Linus.
They're very fun.
And to be fair to both of them, I don't think either of them really understood the other one's position enough to talk intelligently about it.
Because Tanenbaum wanted to create a research operating system that would embody the microkernel spirit.
And Minix is definitely that.
Minix 3, amazing piece of work.
And Linus created a...
I think Linus is working more from a pragmatic standpoint
of everyone else is doing monolithic kernels.
So I'm going to do one too.
But it's going to be open source.
And that was the primary concern.
So now it's built up so much momentum, and there's so much involvement with the Linux kernel,
it's impossible to change.
And it's been that way for quite a long time.
It would be impossible.
And I don't think Redox is necessarily going to succeed in drawing in any kind of vendor integration.
Like the vendors love this model because you can create Android devices and put a huge blob into the Linux kernel.
And it almost absolves them of any open source responsibility, because if you look at the blobs that these vendors, especially GPU vendors, dump into the Linux kernel, it is some of the worst code I've ever seen. There is no way you can look at that code and actually create a specification for the device from that code. It is too nasty, in many ways idiotic, and it often doesn't work.
And so they're constantly going back to the board.
All three of them, Intel, AMD, NVIDIA, plus all the ARM ones that are even worse. And I don't know what the ARM GPU situation is, but like Mali or whatever, and Broadcom, these companies are absolutely terrible about it, but it works for them.
They are not going to switch to a different kernel or a different model.
They're invested in Linux.
And as long as people buy their hardware, that's what's going to dominate.
And I don't blame Linus for that.
He did it for pragmatic reasons, and Tanenbaum didn't have anything to demonstrate that had the same
kind of force. So if you're going to say, hey, guys, look at Minix, it's possible to make a
microkernel. Well, someone can just say, well, look at Linux. It has so many more devices supported.
It has so much more software available. So maybe you're wrong. And in the end, I think microkernels definitely are a better design, for reliability, for safety, and so many different reasons. But if nobody is invested directly into creating microkernels, especially for the desktop market, then it's not going to happen. I think the best part about the Linus and Tanenbaum debates
was that every single comment pretty much ended with, and this won't matter when GNU Hurd is done. Everyone was like, GNU Hurd, this is going to be the microkernel that replaces everything. And then that project, we see where it is today. It's, you know, not in the best of states.
Well, yeah, it's again a failing of economic forces. If you can't find a business case for a microkernel, then nobody's going to have the time
if you if you can't find a business case for a microkernel then nobody's going to have the time
to make a microkernel because it takes significant investment to actually make one. And I say that as someone who is making one,
it's a significant investment of resources. And as much as I can want to have NVIDIA drivers on Redox, I think there are very few ways to actually make that possible without directly involving Linux.
Right, right.
Which is actually something someone working on Redox right now is working on.
Mm-hmm.
What, like having some sort of like compatibility layer or?
It's a little more than that. It is a virtual machine running Linux that gets the GPU passed into it, and only the GPU. So it doesn't get access to any other hardware, and it runs as a microkernel driver. That driver then can basically take in commands through the Mesa library on Redox, send those commands to the virtual machine running Linux that has access to the GPU, whichever GPU it is, and then that turns them into commands to the actual card. It's a way to kind of bypass the problem that GPU vendors in particular are very bad at releasing documentation.
The only one that I've found real documentation for that's public is Intel. And even then,
it's a very difficult chip to control. And I want to say Intel is going leaps and bounds above
the competitors. AMD does a huge code dump into the Linux kernel, and NVIDIA, of course, does a huge proprietary blob that's not part of any open source project. So if we have to, like, kind of grade them in that form and fashion: Intel gets a B, AMD gets a C, NVIDIA gets an F. And Intel could improve if they
did some work to kind of make it simpler to understand how to integrate an Intel GPU into
an operating system. But again, they have engineers that they pay and those engineers
produce the Linux kernel driver. So there's not really any business case for them to make the documentation any better.
They do release a ton of documentation, but it's kind of all over the place and hard to digest.
So this running a Linux kernel... Let me just understand. So you're basically running, or at least the idea would be to run, the entire Linux kernel as a driver, just to basically bootstrap the NVIDIA driver, AMD driver, something like that?
Yeah, it would be a customized Linux kernel. So it would be essentially Linux plus... it's almost like what Windows is doing with WSL to pass through graphics, but they're doing it in the opposite direction. They have a driver that they insert into their Linux kernel that runs on WSL, and that driver takes in all the stuff
from the Mesa library going through EGL or whatever,
and converts it into DirectX commands that are then sent to the Windows driver.
And this is kind of a similar mechanism, but opposite, where we have Redox running as the hypervisor. Linux is running as a VM. Linux gets the GPU. Linux has drivers for the GPU. We are sending the EGL commands through to Linux, and the Linux Mesa library handles them. So our Mesa library would just forward those to the library running in the VM. And that potentially would give us
perfect graphics acceleration on all the GPUs that Linux can support, including on NVIDIA GPUs,
because we could run the NVIDIA binary driver on top of this Linux kernel.
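The forwarding design he describes can be sketched in ordinary Rust. Everything here is a hypothetical mock (no real EGL, Mesa, or VM is involved): just the shape of a client-side stub relaying commands to a backend that owns the GPU.

```rust
// Toy sketch of the proxy design: a client-side "Mesa" stub serializes
// commands, and a backend (standing in for the Linux VM that owns the GPU)
// executes them. All names are hypothetical; no real EGL/Mesa APIs here.

#[derive(Debug, Clone, PartialEq)]
enum GpuCommand {
    Clear { r: f32, g: f32, b: f32 },
    Draw { vertex_count: u32 },
}

/// The side that actually has the GPU (in the real design, Linux in a VM).
struct VmBackend {
    log: Vec<GpuCommand>,
}

impl VmBackend {
    fn execute(&mut self, cmd: GpuCommand) {
        // A real backend would hand this to the Linux Mesa driver stack.
        self.log.push(cmd);
    }
}

/// The client-side stub on the Redox side: it never touches hardware,
/// it only forwards.
struct MesaStub<'a> {
    backend: &'a mut VmBackend,
}

impl<'a> MesaStub<'a> {
    fn submit(&mut self, cmd: GpuCommand) {
        self.backend.execute(cmd);
    }
}

fn main() {
    let mut vm = VmBackend { log: Vec::new() };
    let mut stub = MesaStub { backend: &mut vm };
    stub.submit(GpuCommand::Clear { r: 0.0, g: 0.0, b: 0.0 });
    stub.submit(GpuCommand::Draw { vertex_count: 3 });
    assert_eq!(vm.log.len(), 2);
    println!("backend executed {} forwarded commands", vm.log.len());
}
```

The design choice is the same one WSL makes in reverse: keep the vendor's driver stack where it already works, and move only a thin command-forwarding layer to the other side of the boundary.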
Huh, that's really cool. So what sort of response have you had with Redox? I know a lot of people were excited for you to be here to talk about this, but from your personal experience, has the feedback been relatively positive? Just, how's it all going?
Yeah, very positive.
But at the same time, it's really hard to find people who are interested in collaborating
on this space because it is a low level thing and it is a hobby thing.
But luckily we've gotten a few very good contributors, and we've gotten enough donations that now we can do a regular program every summer. This summer we're doing three different people. We have people apply, and then we accept them, and we pay them a pretty good amount to spend the entire summer working on Redox.
And yeah, I feel like right now it's at this kind of stage where we can perpetually advance on maybe one use case, like using it on a very simple laptop platform or desktop platform, and ignoring GPU acceleration for the meantime, just doing software rendering, which is still pretty quick if you're not talking about games. And even if you are, there are a few games that run pretty well on Redox with software rendering. I don't know if I can legally talk about some of them. I made the joke that every time I emulate Super Mario, I take my completely 100%-owned-by-me cartridge and just put it on top of the computer.
I don't actually do that, but I do own the game.
But Nintendo doesn't agree that that's enough.
You have to own the game, and Nintendo has to bless it with their magic fucking, I don't know, process. Like anoint with oil and tap with the fucking sword on both shoulders the console that you're using to actually run the game.
Right, yeah, yeah.
But Super Mario runs really well inside of Redox, the ported version. It's like SM64EX.
Ah, that one. The we-don't-like-that-to-exist version.
Yeah, but they can't do anything about it for some reason, which I think is great.
I just cannot ever bring myself to give Nintendo any more money. I'm not going to buy a Switch. I'm not going to buy any games for a Switch. If there's a game on Switch that I want to have, I'm just going to find another one of the fucking hundred thousand games that run on PC to play.
Right. How many games run on the Switch, like a thousand maximum?
Consoles have been getting worse and worse in terms of the number of exclusives they have too. Which is fine, but there's so many releases on PC.
Yeah, I'm so happy that PlayStation has gotten into releasing almost all of their exclusives on PC.
Yeah, and they run on Linux. Microsoft realized that they finally own Windows and they can release games on Windows. Now, exclusives for them are just releasing on Windows and on Xbox. And besides the anti-cheat issue, which sometimes occurs, most games work. If it's published by Microsoft, I can run it on Linux almost all the time. There are very few games I haven't been able to run on Linux, mostly VR stuff.
And then there's Sony. Like, hey guys, let's have Final Fantasy 16 and fund that and have it only be on PS5.
They've brought in everything. Just late. They'll learn. It'll happen eventually.
They did Spider-Man.
I was really... I didn't expect them to do that, but then they did.
Then they brought in Uncharted, The Last of Us, although the port for that sucked, but whatever, they tried. Yeah, God of War. There's a lot of games.
Yeah, Sony has basically made it their method to port all of their popular titles to PC a few months after, or six months or a year after, so they can saturate with sales.
Except Bloodborne.
Oh man, it has to be coming.
Yeah, I don't know what they're doing. Surely FromSoftware wants to do Bloodborne on PC. Maybe like a Bloodborne Prepare to Die edition,
or just something. Like, how have we not... How is Bloodborne... Or at least, like, a PS5 60 FPS version. How are we still stuck with 30 FPS Bloodborne? I don't understand.
So there's a Twitter
account.
It's just, is Bloodborne on PC?
It has 52,000 followers.
And as of 15 hours ago, they say, no, it's not been announced for PC.
But there's plenty of people asking for it.
It's not Twitter anymore, it's X.
Yeah, I think if it's a Sony developed title or a studio that they interact with a lot,
they'll have more say.
But for that, it would be up to FromSoftware to actually port it, which...
You know, I would throw my money at the screen
any time any of these things come,
and they work on Linux.
If they don't work on Linux, I return the fucking game.
There's a two-hour return time on Steam,
so I just return it.
With Bloodborne on PC, I wouldn't even complain
if they just did what happened with Demon's Souls,
where it was handed off to another studio.
Because Demon's Souls was handled perfectly fine.
Yeah, usually it's...
The thing they don't want you to know
is that all these consoles are just PCs.
At the end of the day, it's just a PC with static hardware, right?
It's just an AMD CPU with RDNA 2 graphics.
It's like...
It's a fucking Steam Deck, right?
It's the same hardware.
Well, even more so nowadays.
Early on with the PS5, there was this big deal about the PS5's SSD, and how it makes so many games possible. I hated that. You can literally take that SSD out, stick it in your computer, reformat it, and it will work. It's just off-the-shelf consumer hardware. I hated the fucking fanboys who were saying, PC is so slow, look, PlayStation has a faster disk than PC now. I'm like, no, because PlayStation is just a PC.
Yeah, that's all it is.
Yeah, so whatever is there is available on PC. Maybe AMD got their custom chip into it a little bit before, but that wasn't even the case. It's ridiculous.
Yeah, and what it was, was that they were claiming the compressed speed was 10 gigs per second or something. I'm like, well, you can compress stuff on PC too. It's not against the law or anything, and a lot of games do that already.
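The arithmetic behind those marketing numbers is simple: the headline "compressed" throughput is just the drive's raw read speed multiplied by the compression ratio of the assets, and nothing about that is console-specific. A minimal sketch (the figures below are illustrative assumptions, not any vendor's published specs):

```python
def effective_read_speed(raw_gb_per_s: float, compression_ratio: float) -> float:
    """Usable asset data delivered per second when assets are stored
    compressed on disk and decompressed after reading."""
    return raw_gb_per_s * compression_ratio

# Illustrative only: a ~5.5 GB/s raw NVMe read with ~2:1 asset
# compression delivers ~11 GB/s of decompressed game data.
print(effective_read_speed(5.5, 2.0))  # 11.0
```

The same multiplication applies to any off-the-shelf PC NVMe drive, which is the point being made above.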
So, yeah, it was built-in compression, which then Microsoft did with the DirectStorage thing, where the concept is that you have the NVMe drive directly feeding data to the graphics card over the PCIe bus.
And that's something that might work.
But, again, that was available on PC too, and Linux can support that kind of thing too with SR-IOV. I think the one major difference you have with the consoles is the big shared memory cache, rather than having your separate GPU memory and then system memory. It's all just one big pool. That does make certain things work a little bit
differently, but
that's the only major difference.
Yeah, still doesn't make the PS5
any faster.
I guess just
the laws of physics can't be...
The thing I hate about
the state of, like,
gaming right now is
we're at a point now where the
consoles are so similar
to a PC, yet the
ports we are getting are the worst
they've been in so long. Like,
we had the PS2, which was
this dumb Emotion Engine thing.
Then we had the PS3, which is this
stupid Cell processor.
Right, completely different architectures.
Yeah, they're these nonsense,
like, the architectures are just
completely ridiculous. The PS3 was being
used as, like, in...
There's this, like,
there's this story from years ago where,
I don't remember what organization
it was, but they bought a bunch of PS3s and
turned them into a supercomputer, because it was a
ridiculous architecture that made no
sense for game design. But now that we just have PCs, the ports are garbage. I don't know how this has happened.
Well, I will tell you exactly why, because I know.
Okay.
Because they're so similar, right? You literally click a fucking button and most engines will spit out a PC game.
Yeah, that's fair.
Will the PC game be good? No, because you've locked down, like, the frame rate and resolution for the console, or you have one of the dynamic resolution changing things that they're using so that they can hit 30 FPS, but they change the resolution, right? You go from a very static ecosystem to one where technically the same code is really close to working on PC, but then you have so many things to account for. Like, you have users who want to run at 144 hertz, which I do. You have users that are using weird aspect ratios, like a widescreen monitor, or an ultrawide screen, or super widescreen, where it's like, nobody...
So I guess five.
So game developers are pretty much targeting 1080p and 4K.
And often they're using like the same code for both.
So it's like, well, we'll just do the 4K path all the time
and then downscale it to 1080p.
Although really 4K on most consoles is upscaled from like some resolution in between 1080p and 4K.
But the point is that they hard code those things.
And then to actually make a good PC port, you need to decouple those things and actually make them configurable.
And so I know I'm in for a good ride when I launch a game and it doesn't even
ask what resolution to run at. And then I check and it's not running at the resolution of my
monitor. I know I'm in for a good, good ride. Yeah, that's fun. When I go into the settings
and there's no settings for frame rate, I know I'm in for a good fun ride, because I have a 144 hertz monitor, and I'm, like, 90% sure that you can't divide 144 by 60 or by 30, right? It divides by 72. The point is that they're running at 60, and we're running at 30. I'm just not happy.
No. Yeah, I can tolerate 60. Like, 60 is... it's not... Like, I have a 165 hertz display. 60 is not good, but it's tolerable. 30 is rough. Like, unless it's some game where, like, it's a grand strategy, where the only thing that's moving is your cursor, that's the only case where 30 is remotely tolerable.
It's painful, yeah. And, um, some people believe that 30 FPS on a console is somehow magically better than 30 FPS on a PC.
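The divisibility complaint above is easy to check: a capped frame rate paces evenly on a fixed-refresh display only if it divides the refresh rate, so every frame is held on screen for the same whole number of refresh cycles. A quick sketch:

```python
def even_pacing_rates(refresh_hz: int) -> list[int]:
    """Frame rates that divide the refresh rate evenly, meaning each
    frame is displayed for the same whole number of refresh cycles."""
    return [refresh_hz // n for n in range(1, refresh_hz + 1)
            if refresh_hz % n == 0]

rates = even_pacing_rates(144)
print(72 in rates, 48 in rates)   # True True  -- these pace evenly on 144 Hz
print(60 in rates, 30 in rates)   # False False -- 60 and 30 FPS judder on 144 Hz
```

Many 144 Hz panels also offer variable refresh rate or a 120 Hz mode, either of which sidesteps the problem, but that's a separate setting.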
And that's just not true at all because, again, they are the same thing.
There are still people that think that 30 FPS is fine.
Like, there was a game recently, I don't remember what the game was,
someone's going to tell me, where, like, the Xbox One version,
whatever the Xbox, whatever the current Xbox is called,
they're, like,
they were making a big deal early on
that it was going to run,
I think they were saying it was going to run at least 60,
and then for the launch,
it was actually going to be running at 30.
And there was all of these people in the comments
being like, 30's fine,
I play games at 30, it's great.
Like, stop.
That's probably, like, every single game
that's been released for Xbox One in the past
I think Cyberpunk...
Cyberpunk was barely running at 30
at launch.
Exactly. That wasn't even...
They had to recall the
PS4 and
Xbox One editions because they were so
shitty.
I can't blame them, because PS4, man,
it's, like... You look at PC hardware, that's, like,
10-year-old hardware by now. So it's really difficult to fit into that.
I do like that the PS5 is backwards compatible. Right now my PS5 is basically a PS4 emulator.
Exactly. The PS5 is just a PC that you can run less software on. The game I've been playing most
is Final Fantasy 12, which
is an HD remaster of a PS2
game.
Yeah, I've been doing the Final Fantasy 7
remake. Ah, nice.
I run it on Steam Deck, I run
it on my big-ass PC,
and it runs
fine. I think most of the time those ports
are actually okay. I would like to have a Steam
Deck. Valve doesn't sell it in
Australia. I would have to go through some
importer or do
some other... Why don't you have a Steam Deck?
You have a PS5, but not a Steam Deck?
Yeah, but if Valve started
selling it in my country, I would buy it.
Everybody complains
they can sell this here, sell this there.
You can get it, I'm sure.
I don't know where I can get it.
Just send me 600 bucks and I will send you the 300-buck version in an envelope. It'll get there in, like, two weeks.
I don't trust the money's gonna get there.
Hey, if it's lost, that's on USPS. That's on the postal service. That's not on me. I swear I sent it.
Everything I've heard about USPS, I don't want to trust them with six hundred dollars in an envelope.
Yeah, but, like, compared to the other carriers... I feel like it's a requirement. Any time you want to handle somebody else's stuff and move it to somewhere else, you have to drop it at least five times, and if there's an arrow, it has to be dropped the opposite way. Fragile, keep this way? Nope, drop it on that side.
drop it on that side i did hear this funny story about this um this bike company uh they were
having like a ton of uh a ton of like breakages and a lot of customers were really annoyed when they started shipping
to the us so what they did is change the box to make it look like a flat screen tv and put a
picture of a tv on it all of a sudden their reports of damages went down by like 80 oh yeah yeah we
have to do a lot of creative things to ship Thelio, to ship our desktop line, because we ship them with the GPUs
installed. So, we actually have a shipping brace that is extra super duper, and we didn't used to
have this. So, there were issues where customers had GPUs that were pulled out of the slot, and,
like, how does this happen? Because we did drop tests, and the same things were not happening in
normal drop tests. Like, the only way you can do this is if you drop it from 12 feet, or drop it
down a flight of stairs, or something ridiculous. And then we came up with a GPU
brace, and so that screws in, and the GPU is held in by, like, eight different points,
just, like, completely grabbing it.
It doesn't have anywhere to go.
And then now it seems like it ships okay.
But yeah, they see the fragile sign and they,
I guess we don't have a logo of it being a desktop computer,
but I feel like if we did, it would still be dropped just because,
just out of spite. Like, some PlayStation gamer would drop this
PC. Brodie would be handling it like, stupid thing doesn't have Bloodborne yet, I'm dropping it.
I play most of my games... I'm just, like, this thing's a PS4 emulator. I play most of my games on PC.
Yeah. I thought about getting a PS3 or something, because it can run PS2,
and if you get the right version... You have to get one of the fat ones, that was the first one, and it
actually has the hardware for a PS2 and a PS1, I believe, included in the same package. Then you can
run PS1 games and PS2 games at native everything, off of a hard drive.
So I thought about that, but, yeah, I don't think I need yet another platform like that.
Well, on that note, we've gone way off topic of the main stuff, but it is what it is. Tends to happen. Um, let's end off the show. So let the people know where they can find you, where they can find Pop!_OS and Redox, and everything else you want to mention.
I have a website that links everything. It's s-o-l-l-e-r dot d-e-v. soller.dev.
Here we go.
And yeah, so that links everything.
Links to Redox,
System76, Pop!_OS, and all
of my personal links.
Please give me money on Patreon.
On my Patreon.
Please follow my SoundCloud.
The money...
On Patreon.
No, there's not a SoundCloud there.
There's no SoundCloud page. Not yet. If I could make a little bit of extra money for Redox... The Patreon is 100% used by Redox, so money there goes into new developers, which goes into fun stuff that probably won't compete with Linux,
but, you know,
it's fun.
Well,
this was a lot of fun.
I really enjoyed this.
For anyone who noticed that I might've been like coughing a lot throughout,
I did mute my mic most of the time.
I am currently recovering.
I made the mistake of going to an anime convention, and when you do
that, 99% of
the time, you're going to get sick.
And I got sick, so...
You're just not going enough. That's a good point.
You need to go every
time there's an anime convention. Well,
I would, but they were shut down because
of, you know, the whole... Yeah, then you'll get
your immunity up. The world being shut down thing.
You just gotta get your immunity up.
I don't go to conventions, man. I just... I can't handle it. Like you say, I get sick every time.
Yeah, I hope you feel better.
Yeah, I'm pretty much most of the way back. It's just a slight cough. Yesterday was way worse. But I guess it's 2:30 in the morning right now, so the day before. Whatever, it doesn't matter. Anything else you want to mention, or is that it?
Uh, keep an eye out for Virgo. We're making our own laptop in-house at System76. It is open source. GP... uh, not GPL 3 anymore. We settled on the CERN Open Hardware License, which is basically GPL for hardware. And it is, as far as I know, the first open-source x86 motherboard. So we will see.
Wow,
that's really cool. Uh, yeah. Um, as for me, my gaming channel is Brodie On Games. Right now I've probably finished off Black
Mesa, so I reckon I'll be playing Portal with Ren, so come check that out. I'm terrible
at puzzle games, Ren's terrible at video games, so that'll be a mess. Uh, also probably still playing
through Final Fantasy 16. It's a really good game, highly recommend it. Release it on PC already. Uh, yeah. Uh, main channel, Brodie Robertson, I do Linux videos there
six-ish days a week, I have no idea what'll be out by the time this comes out, I'm way ahead in my,
like, my podcast recordings. I need to take a break from the podcast and bring things up,
like, you know, closer to when things release. Uh, but there'll be something there. Hopefully
not more Red Hat stuff,
but there'll probably be more Red Hat stuff
because that never stops.
And this channel,
if you're listening to the audio version,
you can find the video version
on YouTube at Tech Over Tea.
And you can find the audio version
pretty much anywhere.
You can find an audio podcast.
There's an RSS feed.
Stick it in your favorite app.
I like AntennaPod.
It's great.
I'll give you the final word. What do you want to say? I want to say that we need more open source in this world
and things are not moving in the right direction. If you want to support it, you have to pay for it.
You have to buy it. You have to show your love for it. You have to show everyone everywhere that
you're using Linux until they are so annoyed with you that they use it just to get you to shut up.
So please keep doing that.
I don't think you need to tell people to do it anyway.
Especially let them know about Arch Linux.
Exactly.
Yep, exactly.
Awesome.
Well, I'm out.