LINUX Unplugged - 665: Patch Me If You Can
Episode Date: May 4, 2026

We dig into the Copy Fail vulnerability and test a proof-of-concept against our own box. Plus, Jon Seager, VP of Engineering at Canonical, joins us, and we kick off the BSD Challenge!

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:
💥 Gets Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
Copy Fail — CVE-2026-31431 — "An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root." — Theori
Copy Fail: 732 Bytes to Root - Xint — "A single 732-byte Python script can edit a setuid binary and obtain root on essentially all Linux distributions shipped since 2017." — Xint
Linux Kernel Bug Explained - Jorijn — "CopyFail is more portable. One script, every distro, no offsets. Dirty Pipe needed kernel ≥ 5.8; Copy Fail covers 2017–2026." — Jorijn
"Kubernetes Pod Security Standards (Restricted) and default seccomp do NOT block the syscall used." — Jorijn
Ars: Most Severe Linux Threat in Years — "The most severe Linux threat to surface in years catches the world flat-footed." — Ars Technica
Sysdig: CVE-2026-31431 Analysis — "The flaw was introduced in 2017 via commit 72548b093ee3, which switched AEAD operations to in-place processing." — Sysdig
CERT-EU Advisory
Ubuntu Security Tracker
The Register: Crypto Flaw
Kernel Patch (reverts 2017 optimization) — "This mostly reverts commit 72548b093ee3 except for the copying of the associated data." — Kernel Commit
Buggy Commit: 72548b093ee3 (2017)
DeepWiki: AF_ALG Internals
oss-security Disclosure
PSA + GRUB Mitigation - Jan Wildeboer
Ubuntu 26.04 LTS (Resolute Raccoon) Released — "Ubuntu 26.04 LTS sets the example for providing best-in-class resilience while simultaneously embracing innovation and the advancement of open source." — Jon Seager, VP Ubuntu Engineering
The Future of AI in Ubuntu - Jon Seager — "Throughout 2026 we'll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values." — Jon Seager
Ubuntu 26.04 Release Notes
Ubuntu AI Features Throughout 2026 - Phoronix
Canonical AI Approach - ZDNet — "Canonical's approach to AI is refreshingly thoughtful — Microsoft should take note." — ZDNet
Canonical DDoS Attack Update — "Canonical's web infrastructure is under a sustained, cross-border attack and we are working to address it." — arcticp, Canonical
Ubuntu Weekly Newsletter #942
9to5Linux: Opt-In LLM Tools
uutils/coreutils: Cross-platform Rust rewrite of the GNU coreutils
LINUX Unplugged 636: Engineering the Future
LiveCD fails to start X session on QEMU · Issue #354 · ghostbsd/issues
Monty's "rescue" drive NixOS config
Magnolia Mayhem's BSD Challenge Report
Pick: NASty — NASty is a NAS operating system built on NixOS and bcachefs. It turns commodity hardware into a storage appliance serving NFS, SMB, iSCSI, and NVMe-oF — managed from a single web UI, updated atomically, and rolled back when things go sideways.
Pick: Defuse — Defuse is a GTK4 application for removing image backgrounds locally.
Defuse on Flathub
Transcript
Manufacturing is never simple, but Epicor makes it easier.
Our industry-built ERP and AI tools help you increase throughput,
reduce downtime and improve cash flow without adding complexity.
If you're ready to run a smarter, more efficient factory, visit epicor.com.
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen.
Coming up on the show today, we'll cover the copy-fail vulnerability tearing through Linux distributions out there.
plus Ubuntu 26.04, the Resolute Raccoon, is here,
and Jon Seager will dig into the details with us.
Then we'll round out the show with some great boosts and picks
and a heck of a lot more.
So before we get into that, this is like three big shows in one.
We've got to bring in our virtual lug.
Time-appropriate greetings to the room.
Hello, Bruce.
Hello, hello.
And shout out to everybody up there in the quiet listening,
and everybody on the live stream,
Appreciate you.
We see you, you hear us.
something like that.
A version of that, somewhere in there.
You boost? I don't know.
I don't know.
Also, good morning to our friends over at Defined Networking.
Go check out Managed Nebula from Defined Networking.
It gives you a decentralized VPN built on the open source Nebula platform that we just love.
And what I really like is the flexibility.
You can build the network you want and the way you actually want it,
from maybe your home lab to a full enterprise setup.
And you have the option to run your own lighthouse nodes,
so you own the stack end-to-end,
but you don't have to start the hard way.
Define gives you a full managed experience
so that way you can get up and running fast
with speed, security, and resilience baked in from day one.
No big tech login required.
Try it for free, 100 hosts, no credit card,
at defined.net slash unplugged.
That is defined.net slash unplugged.
You go over there, you support the unplugged program.
Defined.net slash unplugged. A big thank you
to the defined folks over there.
and the fine, fine folks at Define for sponsoring this here program.
Let's get right into it, gentlemen.
We have ourselves quite the vulnerability this week,
copy fail, which is an unprivileged local attack
that allows, say, even just a generic Brent user
with no admin rights to pop your box.
Yeah, that's not great.
No.
And it turns out it's been baked into most Linux distros since 2017-ish.
Is that right?
Yeah.
So that's a while, and just about everything that's shipping right now.
and some distros are still working very hard to get it patched.
Ars writes, it's the, quote, most severe Linux threat to surface in years,
and it has caught the world flat-footed.
And my tongue.
What do you think there, Wes Payne?
I want to get your take on this first because you've actually been playing around with the exploit.
Yeah, it is worth noting, right?
You do need some sort of access, right?
So if you don't have a user account on the box, you need some other chain, some other vulnerability.
Maybe it's a, you know, some kind of injection in a web app, whatever it is.
but once you have that user access,
then yeah, pretty much any system,
because it's a kernel logic issue,
the particular, like, the first POC that was released
was like a Python 3 thing
and sort of made some assumptions
about particular set UID binaries like SU.
But those are all particular implementation details,
so it's important to realize that, like,
the core thing is this kernel flaw,
which we could get into.
Because it's kind of fascinating
because it, as often is the case.
Well, one, I guess we should,
should note it was an AI-assisted finding, but began with insight from human researchers at
Theori, Tay Yang Lee, who's studying how the Linux crypto subsystem interacts with page cache-backed
data, and we'll get into that. So there's a few layers. The first is the VFS layer. There's this
call called Splice that kind of lets you combine pipes and file descriptors. So you can, like, open a file
for reading, and then combine that with a pipe, and then pipe it into other things that you're using
when you're calling kernel APIs.
And in particular, there's this AF_ALG API,
and it lets user space programs take advantage of all the cryptographic stuff
or a lot of the cryptographic stuff that exists in the kernel,
which is good, both because, like, you don't have to re-implement it,
but also the kernel has access to hardware stuff.
Like, there's various reasons the kernel might be able to do it faster
or more securely or better than the random user space program
that needs to handle encrypted data.
The problem is that in 2017, there was an in-place optimization,
made. So basically to avoid
allocating duplicate memory
during decryption processes, you
have some encrypted data, you're calling the kernel to say,
hey, please decrypt this for me.
The kernel tries this in-place operation.
Basically, you need to
pass the data into the kernel for what you
want to decrypt, and there's various parts. There's like
some of the cryptographic primitives,
such as the actual encrypted data, and there's
the authentication tag.
And it builds this buffer that it's going
to pass into the kernel, and it copies
some of the first parts in there.
But it doesn't actually copy the tag.
Instead, it basically passes a reference to the tag's memory at the end there instead of, like, allocating new memory and putting the tag in there.
And unfortunately, later on in the cryptographic algorithms, this spot, the RSGL, the destination scatter-gather list, is inherently treated as writable.
So in this stage, like kind of when you have the splice side of it, it's all fine.
It's like read only.
You're just kind of like splicing on this read only reference.
And we'll get more into that.
But it's really the problem where we did this optimization in the mechanism that lets user space programs call into the kernel.
And then you need another piece, which is there's a particular encryption mechanism for IPsec, authenc with ESN, for extended sequence numbers.
And it's basically the 64-bit numbers that they need to do stuff and rearrange some of the bits for.
And it kind of cheats.
it uses the, yeah, it uses the caller's destination buffer,
which is the thing we were just talking about,
as a temporary scratch space.
That's okay, though.
And specifically it uses scatterwalk_map_and_copy
to write four bytes past the end of the legitimate plain text data
precisely at an offset.
So you kind of put this all together,
and this is what the actual, like, exploit does,
is the attacker.
So basically you need some sort of file
that you have read access to.
That's important.
You open that file for reading.
We should just, I mean, that should be pretty easy.
As long as you can get on the box.
Yeah, yeah.
So there's some file that the user can read.
We'll get into that.
But could that even be like a web server process?
I mean, that's pretty generic.
Okay.
Yeah, it doesn't need special permissions or anything.
Yeah, yeah.
So it basically opens that file, which loads it into the page cache, right?
Which was the memory cache that sort of like you put things in so that you don't have to go fetch them from disk all the time in the kernel.
And this ends up being system-wide, by the way.
And so it has that available.
and then it makes this splice call to sort of set up this pipe
that gives it basically a memory reference to that data.
So the attacker aligns the splice offset
so that what it is thinking is this tag reference,
right? It's trying to pass a tag into the crypto API.
That actually points exactly over where it's trying to write
in whatever the target is.
So it opens, say, the SU binary, right, with this,
opens it just for reading.
And then it kind of aligns things,
then so it passes a memory reference
to wherever it's trying to overwrite.
and then it calls into the crypto API
and then it just blindly changed
that page cache reference because the actual
reference it gets is the page cache.
It's the page cache for the binary.
That's the key part of it, right?
So it's like when it opens it for reading,
the kernel happily goes and reads it
and then gives it a reference to the memory
that corresponds to the actual page cache entry.
So then when the crypto stuff happens,
it just changes that tag thing,
which ends up being the page cache reference
onto like the actual final buffer
that it's going to be used.
and then the particular IPSEC encryption algorithm does its byte shuffling stuff and writes those
four bytes, which are now attacker-controlled, past the normal sort of plain-text stuff it's
supposed to be using, and then that, because it was aligned from our first thing with the splice,
then writes not to the file on disk, but it writes four bytes to whatever you like
in the page cache version of the file. They have a very clever little payload. It's like 158 bytes,
and people have been golfing this further,
but it's basically a super clever,
tiny little minimal elf thing,
that all it really does is look,
so instead of having to figure out,
like, target a particular binary
to patch in a particular way,
they just override it from the start
with a super minimal,
little tiny custom binary.
It's clever to be,
like, as position independent as can be
and, like, work as in many places.
But it basically just,
it calls set UID zero
to, like, really sort of sink in
and, like, make sure it has full root permissions.
It's already running here
in, like, a context of a set UID binary,
but it makes sure that sync to all the kernels places.
And then it just has like a nice clever way to call /bin/sh.
But it could do anything here that you wanted.
This was just a slick way to spawn a root shell.
Now what's interesting is like if you just do it on say like an affected Ubuntu instance,
it'll just work.
But the script that was first sort of put out there, hard-coded user bin, SU.
So like on a NixOS box when I was trying to play with it, at first that didn't work, just for the reason that that's not the right path for where su lives on a Nix system.
but then also it turns out that on NixOS a lot of SUID binaries
are the wrappers for them are configured as execute only
so you don't actually have read permissions so that's another way it could fail
and you could think that you're not actually impacted but those are all just
implementation details because you could also like there are a bunch of stuff that you
kind of have to be able to read, like shared libraries, libc,
pam_unix. You could also target something like /etc/passwd, say. Right, there's
like a lot of stuff that you could override that you have to have read
access to to be able to do. And then what's so tricky about this, right, is then it's
poisoned the page cache, but the kernel in all of this stuff that's happening, it never,
there's like an error, but it never sort of undoes anything. And it never then marks that
as dirty. So it doesn't know that it needs to be re-read, like anything's wrong. And so if you
go try to hash it, like the hashing command will actually go get the
bytes on disk and it'll be fine. But if you make an exec call, like the execve
system call,
the kernel is just going to use the page cache.
That's the whole thing. That's the point.
Oh, man. And then so, like, it doesn't...
And you're not checking that.
Right. And so you have to be much more clever
around how to detect it because you can't just, like,
do the hash. Now, it does mean it's not persistent
because if you reboot, then the page cache is gone.
Okay. So that's a small blessing here.
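Since the poisoned copy lives only in the page cache, it helps to see how the cache sits between a file's on-disk bytes and what readers get. A hedged sketch of that mechanism: POSIX_FADV_DONTNEED asks the kernel to drop its (clean) cached pages for a file so the next read comes from disk. This is only a demonstration of the page-cache behavior being discussed, not a vetted remediation for the vulnerability.

```python
# Populate the page cache for a file, then hint the kernel to drop the
# cached pages so a re-read goes back to disk. The advice is advisory:
# the kernel may keep pages, and only clean pages are dropped.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"contents that end up in the page cache")
    path = f.name

fd = os.open(path, os.O_RDONLY)
os.read(fd, 4096)                                    # read populates the cache
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # hint: drop cached pages
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 4096)                             # re-read (from disk if dropped)

os.close(fd)
os.unlink(path)
assert data == b"contents that end up in the page cache"
```

In the benign case the two reads match; the whole problem with CopyFail is that the cached copy and the disk copy can silently disagree until a reboot (or a cache eviction) throws the poisoned pages away.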
Yeah, all right. But it's, um, yeah, it's widespread,
kind of nasty. It does need some chaining in a lot of cases,
but...
Containerization doesn't solve it because it's still using these same primitives.
It does seem like systems that have strict
SELinux and perhaps AppArmor profiles might be better off.
Or like you said, if you have it where the binary is execute-only and read isn't an option.
So key takeaways are this is a bad one.
And it's been on machines for a while.
And we're going to have a lot of patching to do.
And the bug was found by an AI-assisted code analysis tool in roughly an hour.
So expect the cadence of deep kernel disclosures to pick up.
Yeah, I guess the folks at Xint.com.
and they've got some various
setups and harnesses to kind of go poke around.
So they had some hypotheses,
partly from human researchers,
exploring various places that might have bugs.
And then I guess they threw some AI at it.
It turned up a bunch of stuff,
and this is what it rated is like the highest severity issue.
Mm-hmm.
Mm-hmm.
So 26.04, not currently affected?
No, I believe not.
That's good.
And, of course, the Debian Security Channel has a patch.
Alma Linux has it patched.
So the patch is getting out there.
NixOS has something, even though, like you said, your box wasn't.
But all the big distros are going to get it patched.
There is a way people can tell, right?
If they just look at their kernel version, if they have, well, basically anything since 2017.
And you can also, like, there are very safe, like, little test exploits you can run.
Yeah.
You can also check, like, you do need some of these modules.
Like, some kernels have them built in.
Some are as, like, loadable modules.
So you could sort of remove them and prevent them from being loaded.
So there's various mitigations per distro, sort of, depending on how your kernel is set up.
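A quick local check along the lines Wes describes — seeing whether the AF_ALG AEAD pieces are present on your kernel, via Linux's /proc interfaces. The module name (algif_aead) and the /proc parsing here are assumptions for illustration; your distro's advisory is the authoritative guidance for mitigation.

```python
# Hedged sketch: is the AF_ALG AEAD interface plausibly exposed on this box?
# Checks /proc/modules for the loadable module and /proc/crypto for
# registered AEAD ciphers. Note: on kernels with algif_aead built in,
# /proc/modules will not list it, so this is indicative only.
from pathlib import Path

def read_proc(name: str) -> str:
    p = Path("/proc") / name
    return p.read_text() if p.exists() else ""

status = {
    "algif_aead_loaded_as_module": "algif_aead" in read_proc("modules"),
    "aead_ciphers_registered": ": aead" in read_proc("crypto"),
}
print(status)
```

On distros where the module is loadable rather than built in, blacklisting it (e.g. via a modprobe config) is the kind of per-distro mitigation being discussed.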
I think we're going to have to, as a community, look at this as an opportunity, not as a burden, even though it is absolutely going to be a massive workload.
But as a community, we have always championed the idea that more eyes and shallower bugs.
And now we are getting dramatically more eyes.
We are getting exponentially more eyes.
More AIs means shallower bugs.
More AIs means, yeah, exactly.
And the upshot is, our software will get more secure.
Yeah, so when they fix this,
they didn't actually fix the IPSEC part
where it was kind of cheating
and using that little scratch part.
They fixed the in-place optimization
from 2017 so that it never passes
this reference anymore.
So there's no longer some sort of coupling
between the input and output
and reusing some of those stuff.
And the part that's so funny
is in the commit message,
they note there's actually no benefit
in operating in place in this way
since the source and destination
come from different mappings.
So like we didn't even really need to be doing this.
I have a question for you, Wes.
It's more of maybe your opinion.
Given this has come out now
and is somewhat obscure for the kernel,
any thoughts on, given the kernel's complexity,
like how many of these little things are just hiding in there?
That's a good question.
Yeah.
It's hard to estimate.
And think about every library,
every service on the internet that listens remotely,
but we probably have been needing to do this for a long time.
We probably should have had a lot more humans focused on this,
but we just weren't doing it.
It's a hard problem.
It is also right, like that where, you know, and maybe using assistance where you can, whatever, to try and up your posture and do things more by default because, like, if you do things like, you know, okay, fewer permissions on set UID binaries or you try to take as much advantage of all the hardening that system D services offers by default so that like it sees less of the system and has less access to things even rate only.
Like there are tools we have and we need to grow better ones.
But I think it just means defense in depth will become even more important.
I do think when we see these, a lot of times folks that have taken the time to actually have a solid SELinux setup and actually use it have been validated over and over again that it's worth the effort, because they end up protected from these kinds of things.
I want to thank our members for supporting this here podcast. It really has made quite the difference recently as we're very lean on the advertising. And we're trying to turn that around. But in the meantime, the members and the boosters are really keeping us going. If you sign up to,
you get quite the bootleg this week.
We had a chance to go to Valve,
and we tell that story in the bootleg.
So go to linuxunplugged.com
slash membership and sign up.
You can get the bootleg edition
or the ad-free edition,
whichever fits your schedule better,
because the bootleg is kind of long.
And, of course,
you can support all the shows,
including The Launch,
This Week in Bitcoin, and more
at jupiter.party.
And you get special access to all of them,
including the bootleg for this here show.
You can also boost an episode
with Fountain FM,
and that gives a signal
on what you thought about that.
particular topic, how we did, the value, et cetera. And it also goes to each one of us directly and
editor Drew, as well as a developer, and the podcast index. So it's a nice way to kind of put it all
around automatically. Support all the great things. And it's nice, too, because it's all transparent,
it's open source, free software stack. And I like, I like to say, the contract is in the RSS feed.
So you as an audience get to see exactly where everything goes. But we appreciate the memberships
as our foundation and the boosts as our signal. Well, this week we had a chance to chat with
Jon Seager. He is the VP of Engineering over at Canonical for Ubuntu. And of course, the big news is
26.04 is out. And this is one of the LTS releases, the 11th LTS release. Wow. Wow. Yeah. And this,
as they often are, was the focus on stability, but a few bits of innovation worked in there.
We have a couple of highlights in here, like TPM backed full disc encryption. Wayland's now the
default. Obviously, they're now, like, they're shipped in the interim, but now they're shipping,
And they're actually shipping some of that Rust stuff in the userland and the core utilities, which is exciting.
And something I want to chat with Jon about, too, is they've done that thing they do in Ubuntu where they one-click something, kind of.
And for now it's CUDA.
And ROCm. Not to hate on AMD.
No, I love ROCm.
But it's just that.
There's so much demand out there.
And you combine it with new hardware.
It's just that's a really nice thing, especially for an LTS release.
And then Jon also made a post recently about an AI strategy
that they're taking on at Canonical, which made a lot of news.
So I think that's something we could chat with him about as well.
So Jon is joining us on the Unplugged Program.
We didn't scare him away last time returning to the show,
Jon Seager from Canonical.
Jon, welcome back to the Unplugged Program.
Hello there. Thanks for having me.
Hello, and congratulations on the LTS release,
which, rumor has it, is also the first LTS under your watch.
That's right, yeah.
So the questing release was my first kind of interim that I had.
I guess a full cycle
and then this is the first LTS yeah.
Okay. Is it a little different this time?
I mean, is it feel different?
Being on the inside.
Does it hit different, John?
I mean, from the perspective
of like the release and planning and
the whole, we do this sprint in London
where the release team come along and we get everything together,
it all felt kind of similar, but just with
a little bit more pressure to, you know,
it kind of has to fly a little bit quicker and
be a bit less or even
more bug-free, I suppose, than an interim.
but we also decelerate slightly the pace of change
and we make the call slightly differently
as we get close to the release date,
whether or not someone's going to make it
based on our view on whether it's going to cause
any instability or issues,
whereas the early interims,
we would maybe be a little bit more risky.
So I do hear that a lot,
but is there still something in the LTS release
that is like the thing that you're excited about releasing?
I'm sure that happens with the interim releases
where you're like,
this is the thing we're really looking at. But does that happen with an LTS release or is it all old by
them? It's all known. There's nothing new that we've never done in this release, aside from
the shipping of ROCm and CUDA on the part of Canonical. Sure. But of course, it is also the first time
that everybody, or 90% of our users, will get the Rust Core Utils and the Rust Sudo. And so
as much as I'm confident in those changes, we've done testing, we've had lots of feedback,
There are going to be a whole bunch more people getting that over the next few weeks.
Right. Now it's really getting out to a whole new group of users, the real base.
Yeah, I can kind of tell in some of your writing that, like, you know,
there's obviously the regular sort of professional pride in releasing a nice product
and running a good team and all of that.
But it feels like you all get the, you know, just how much of the Internet and the cloud
sort of is underpinned, especially by these LTS releases.
It's one of, honestly, one of the most exciting things about working on Ubuntu in my view.
I quite like that sort of sense of impending doom if you get it wrong.
Keeps you sharp.
That means your work matters, clearly.
Okay, let's keep talking about the Rust stuff for a minute,
because I saw a recent, I think it was a post on the community Discourse, and I may be getting some of the details wrong.
So, Jon, please fill in the details: you hired a third-party audit firm to go through some of the Rust core utilities.
They found some stuff.
and now you guys are working through
bug fixing that. Can you work me through? Because obviously
I'm a little vague on the details. Can you work me through that?
So when we first
committed to doing this, our security team
which is pretty large at this point, as you
would imagine, we're keen to take a look and
double check. They found a few issues themselves
and as a kind of abundance of caution
we decided to fund a third party
security audit with a company called Zellic.
They found a bunch of stuff, fixed a bunch of stuff,
worked with Sylvestre, who runs the uutils
project. We were pretty happy,
we thought we would go again, do another round of security audit, and also get some assistance
from Zellic on patching in some cases. They found a bunch more stuff. They were pretty great to work
with. Sylvestre, I have to say, did a phenomenal job. I think we piled a lot on his plate. We
gave some funding to the projects, and we tried to be as careful as possible, but we found a lot
of issues. There was a lot of bug reports from our users, et cetera, and he handled it superbly. And so
where we've landed is we patched the vast majority of the vulnerabilities that we've found or the
issues that we found. There are three utilities which are still affected, which is CP, move, and
RM. And so we chose not to make those the default in the LTS, just sort of out of an abundance
of caution. So this is a time of check, time of use error. They're all linked to kind of the same
problem and we'll get patched over the coming weeks and we will then switch those utilities out for
the next interim. I wonder if it's, you mentioned that,
specific vulnerability. I wonder if these rounds have been informative around sort of the things that
like the Rust improvements can address and the things that, you know, still just need to be
addressed via, you know, more traditional software techniques. Right, that's it. Writing Rust code
does not mean bug-free. It means a much lower likelihood of memory safety violations when used
correctly. And this is kind of proof of that. Although not all of the issues we found are
exclusive to the Rust versions. We found issues in the GNU versions. And if you actually look at
the latest GNU core utilities release,
Sylvestre is one of the most prolific contributors to that release.
So it's nice.
There's nowhere near as much animosity as perhaps people might suggest.
It's quite collaborative.
The game here is not to discredit GNU core utilities.
If we find things that can benefit them, that work goes upstream too.
That's great.
I mean, especially because having two implementations is just all the better for the whole ecosystem.
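The time-of-check/time-of-use (TOCTOU) class Jon mentioned is worth seeing in miniature. This is a generic Python illustration of the pattern, not the actual uutils code — the real bug's details differ.

```python
# Generic TOCTOU illustration: the permission check and the actual open are
# two separate syscalls, and the filesystem can change between them.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"data")
os.close(fd)

if os.access(path, os.R_OK):       # time of check
    # Race window: between the access() check above and the open() below,
    # another process could swap `path` for a symlink to a file this
    # program shouldn't be touching.
    with open(path, "rb") as f:    # time of use
        contents = f.read()        # "data" in the benign case

os.unlink(path)

# The safer pattern skips the separate check entirely: just open, handle
# the failure, and do any further checks on the open descriptor
# (fstat on the fd rather than stat on the path).
```

That last comment is the usual fix for this class: operate on file descriptors you already hold, so nothing can be swapped out from under you.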
It did strike me, too, you touched on working upstream,
and just, especially with the Rust version,
just it's one thing to get funding and doing all the support that y'all are offering,
which is wonderful.
But it's kind of another to then, I don't know,
implicitly be willing to accept some of your priorities,
if in like an upstream way of, you know,
being able to prioritize those things and keep the working relationship.
And this was, and I'll be honest,
it got tough a couple of times.
You know, there was a point midway through this cycle where I think we didn't quite get
that balance right or our communication wasn't perfect
and we had a bit of a shaky moment with Sylvestre
and we'd gone on a call.
and we made it better.
We found a little bit more funding to help.
But it is hard, and this is why when we started this project
before we ever announced it,
we started with conversations for these projects to say,
we'd love to do this, but it could get kind of intense.
What do you think?
Is the project in a position to support this?
How could we help?
What funding, what support?
Could we try and extend where we have the resources
to kind of make it as successful
as possible and also not bury the project. It's no good for us to switch it in the LTS if the project then flames out and disappears.
Sure. Yeah. Okay. I know we've touched on this before, but somebody listening maybe that hasn't heard
our previous conversation, they've got to be asking themselves, why go through the trouble? You've already
got these great utils. They've been around for 30 years. Why not just use those? John, why are you going
through all this trouble for this buggy software? It is, there are a lot of answers to that question.
So one is it's a bit of a statement of intent.
So some 90% of vulnerabilities in the software world are due to memory safety violations.
And so I think if we move to a language where that becomes very difficult or impossible, that's great.
Do you know what I mean?
And you could argue that starting with the core utilities maybe isn't the best target,
but starting with the core utilities is kind of the statement in a sense.
There's also loads of them.
Yeah.
And I think it is a way.
way of us getting more people engaged with open source development. There's lots of new graduates
who are learning Rust at university and I think Rust is very exciting and we need to keep thinking
about ways to keep people involved in open source and having them learn or work in languages
they're interested in with modern tooling with a vibrant community is one way. There's the cynical
angle which was kind of highlighted to me after we committed to it, if I'm honest. The cynical angle is
we get paid for fixing security vulnerabilities in code. We long-term support and security management
is how canonical makes its money. And one assumes that over the next 15 years, we will have to
address fewer with this change. Though we bluntly have made quite a large upfront investment in this,
so I think it'll be a few years. But it is seen as a large, it's seen as an upfront investment
in perhaps a long-term payoff for support a decade down the road. I think it will be personally.
I, you know, that wasn't particularly front of my mind in the decision-making calculus.
What was front of my mind in the decision-making was, I want to ship the most resilient operating system
I can. And the fewer things, or the more things that are written in a memory safe language
that are high performance that are well tested, the better. So core utilities is one of those
projects where there was quite high conformance with the original test suite. SuduRS was another
of those where it was quite a high quality, quite a mature project already. The next one will
be ntpd-rs, which I'm actually really excited about because I think that'll be the first time
we get a single binary that can handle NTP, NTS, and
PTP, all in one utility, that is both the client and the server.
Whereas previously it has been a bit of a dance.
So that's part of the work that we're funding on the way to rusty time-syncing in Ubuntu.
Rusty time-syncing.
Well, and that seems exactly kind of what you said, right?
You're interested in a resilient, robust, well-functioning operating system.
And, you know, most of us don't usually have to think about it, but especially in distributed systems,
keeping the time is of critical importance.
Right.
And another target is going to be compression libraries.
And the bit that genuinely gets me fired up here is, okay, compression could be a little faster, maybe.
What's even more exciting is the energy usage.
You think about the scale of Ubuntu and how many machines are running Ubuntu. Imagine if we took
1% off the energy usage of every single one of those machines on the planet.
Now, we're not going to get there by changing cp and rm and mv.
And we maybe even won't get there with compression, or a single
compression algorithm, but cumulatively over the space of five years, we could genuinely make
a meaningful difference to the idle consumption of a machine anywhere on the planet, which I think is
an interesting goal.
These are the kind of goals that you don't necessarily start an operating system having in mind,
but something like Ubuntu being around so long, you can start to have the luxury of having
these greater goals in mind.
So I like hearing that.
I do have a question about Rust.
You mentioned an obvious reason to adopt Rust, the memory safety aspect of it.
I'm curious how that has affected the team in adopting Rust and also why Rust if you ignore the memory safety.
There are other potential languages out there.
How is it going?
Maybe is the general way of putting all that?
It's going.
I would say mixed.
Like with any other push for new tooling, there are people who are really excited by it.
and people who are less excited by it.
Our foundations team have really leaned in here.
We're doing some work on boot at the moment, which will be in Rust.
I am trying to get us away, in as many places as possible,
from things that are becoming more antiquated.
And so I have asked the team to stop writing new C code.
We still have to keep maintaining old C code.
That's going to happen for a long time.
We need to maintain apt, which is still written in C++, for example,
although we'll start to introduce Rust code,
I suspect, in the next year or so.
But when we're writing new code
and when we're looking at bits of tooling
that we use for building the distro,
I would really prefer it if we stopped using C.
And ideally we didn't use Bash and Python.
Bash is great for small scripts,
but as they get bigger and bigger and bigger,
they get harder to maintain and test.
Python the language is nice,
but it's a bit of a packaging and distribution nightmare.
And so I'm steering my teams at the moment
towards Go, generally, for things that are
very networky or very concurrent,
because it's a programming language designed for doing networking and concurrency,
and for things that are kind of low-level systems programming, towards Rust.
I think Rust is the best option we have right now as a replacement or a successor to
the C and C++ ecosystem.
So we don't tend to adopt loads and loads and loads of programming languages
at Canonical.
We're not that big.
It wouldn't be very helpful if we had one team doing Haskell and another team doing Erlang and one
doing Rust and one doing Zig and one doing Java.
We try to be quite deliberate,
and generally those languages at Canonical are Python, Go, and Rust.
Well, it makes sense. Rust is one of those languages now
that can target a lot of things, with a modern toolchain, and, you know,
I'm sure Canonical has a lot of experience trying to find the limited pool
of developers who are up to date on the sort of esoteric desktop Linux, you know, how you put
together a distro.
So you have that wider pool, but you also get, right, like, abstractions that don't have
as much runtime cost, and you can have security benefits.
So you kind of get this all in a package, and there's just not that many other languages
right now that compete with that.
Yeah, and I think the Rust Foundation, the core team take it very seriously.
So we recently joined the Rust Foundation as gold members, and that was partially to support
the ecosystem and the folks who develop the language, but also within a bit of an agenda
of our own, which is to try and work with them on things like the crates.io security story,
on things like hopefully enhancing the standard library.
I have some opinions about where Rust could go with that.
and potentially some of the mechanics around things like async I/O. Or async, sorry.
So we joined the foundation to give funding,
but also to try and contribute expertise from Canonical,
where we have them, in the right discussions, that kind of thing.
And they've been, we only joined formally in,
when was it, whenever KubeCon was February, I think, February or March.
But they've been great to work with so far.
And I'm looking forward to seeing where that goes.
Well, there's a lot in there.
I want to bring us back to Ubuntu a bit, because just before we got on the horn, yesterday or so,
you posted a post on the Discourse that was titled The Future of AI in Ubuntu.
And it's a rundown of Canonical's approach, your thoughts around integrating this tooling,
how to get the balance right, and all of that.
And this, of course, is a huge topic.
AI is such a huge encompassing term for a bunch of different technologies that users are going to
want to use on top of Ubuntu.
So, John, can you kind of walk us through what the announcement is here and what the plan is?
Yeah, so, I mean, I'll preface this by saying, like, I knew this was going to be spicy.
Yeah, I imagine, right?
If you thought the rust stuff was spicy, right?
Right, like, I figured I'd annoyed all of the people I probably could with that, so it's time
to shift to something even more explicit.
So I think the point here is
any time there's a change like this
and I see people reacting,
I always think I try to understand
where they're coming from and the thing that I would
try to remind people of is
whatever your feeling is, it's valid.
If you really don't want AI in your operating system,
that's a perfectly acceptable position.
But what I try to
articulate in a way that isn't too brash
is like Ubuntu is not for me.
It's not for you.
Ubuntu is for millions of people.
And for everyone who is desperately trying to avoid AI, who is an Ubuntu user, there are probably
as many people who can't get it quick enough.
And so the challenge that we have is always like, how do we walk that line balancing,
you know, either two sides of a feature like AI, but more broadly making an operating system
that is appealing to educators and students, to two-man startups, to Fortune 500 companies.
it's a difficult line to walk.
And so we haven't pounced on this too quickly.
And really this is the first post to open the conversation about how AI will play a part in Ubuntu's future.
It will play a part in Ubuntu's future, partly because I truly believe there is some value in the technology when it's applied correctly.
And partly because it's kind of difficult not to in 2026.
So customers, partners are asking us what our plan is.
So we've thought about this quite a lot.
We've taken what I think is a really measured approach at Canonical.
You see lots of, frankly, quite scary things on the internet about companies setting
token quotas for people and measuring the percentage of code they write with AI.
And I don't really believe in that.
That doesn't seem like the right approach.
We're taking a more careful approach.
We are heavily, as of this year, heavily encouraging and incentivizing our folks, at
the team level, to go
pick a vendor and a tool,
ideally an open source harness
if possible, but if a team really wants to use Claude,
we'll let them use Claude. Understand it,
you know, get to know it.
And that way
we can get a sense of which are the tools
that work for Canonical, etc.
And we'll kind of ramp up our expectations.
It'll start with: experiment with something;
then demonstrate that you've built a bit of a habit
around it; perhaps demonstrate that you've been able to
accelerate a roadmap feature with it;
and then demonstrate that there is rigor around it
in terms of running evals and really understanding how it can be embedded into automation workflows,
potentially things like Claude, which I know you guys have been having fun with.
These are all possibilities, but not things to be taken lightly.
There was news this week of, I forget the name of the company, an AI bot that supposedly went rogue
and took out production infrastructure.
The AI bot didn't go rogue.
The AI bot was given far too broad permissions, right?
That's what happened there.
And probably vague instructions.
Right. So our challenge is how do we, like if we're going to, how should we integrate AI into Ubuntu? And I see this in two camps. I laid this out in the post as kind of implicit features and explicit features. And the way I would think about this is implicit features are enhancements to things the OS already did. So this could be screen reading, could be speech to text or text to speech, could be follow focus on a camera, things that people have kind of become accustomed to being enhanced by ML.
And I wouldn't necessarily call those AI features; even as we add models to those features,
we wouldn't necessarily be decorating them as AI features.
But think about, you know, from the perspective of a user who is hard of hearing or visually impaired in some way,
this could be a huge game changer.
Right.
Screen readers are pretty tough to use.
And imagine you could point a camera at the screen instead and ask, at another level, what's going on.
And it's an area in particular that Linux could use some help with, right?
It's an area that...
For sure.
Yeah.
for sure. So then the explicit side is a little bit harder to quantify because I don't want to tell you what we're planning yet because we're still planning it.
The explicit is much more like this is an AI feature.
Like how, and I would describe this as features that introduce a new mental model
or a new way of working with your machine that you didn't have before.
Like you guys have already explored this.
You're sending telegram messages and matrix messages to a bot that is doing things on your behalf.
That is like you have not been able to telegram message your computer before in such a rich way.
That's a new mental model for interacting with the machine.
Standing up infrastructure via telegram, basically.
Right.
But also, like, Linux is so wildly powerful, but also kind of vexing for people who aren't experienced.
And imagine if, you know, you could bring up a box and say, my Wi-Fi's not working.
Why isn't my Wi-Fi working?
Can you help me fix it?
Or I don't know.
I'd like to run a Postgres container.
Can you help me with that?
Right.
Right.
And interestingly, lots of the things we've been working on over
time... I don't think we could have necessarily predicted this much of a fit. But things like snaps turn
out to be kind of a boon here: individual tools or models confined with individual profiles
of confinement that say this thing is allowed to read these directories, access the camera,
you know, do this on the system. And we can have a bunch of them on the machine with very,
very tightly scoped permissions, using a mechanism that we trust, that is in the kernel, that is AppArmor.
And one of the questions that got asked a lot on that thread, and I posted a follow-up, was about would we do an AI kill switch in Ubuntu?
Which I think controversially, I answered no to.
And I answered no to because I don't think we can hand on heart honestly do that.
There are so many ways which you can consume software on a machine.
What happens if I say, we're going to ship a kill switch, you turn the kill switch on, and then Mozilla ship a package update in their official deb that uses an LLM?
Yeah, or a driver even.
I mean, it could happen at any level.
Or driver.
Yeah, it could be.
Unless you're proxying every request that any system makes.
Like, how could you even have that?
Yeah, it could just sneak in.
Yeah, that's where we're at now.
And for better or worse.
Yeah.
So how do you address that?
Because it does seem like a user, there is some sort of user demand there for that.
Either for performance or for privacy or variety of things.
We have seen Mozilla try to offer some kind of Kill Switch in Firefox.
Yeah.
And I think in something like the browser, it makes a bit more sense.
as a kind of product where you can,
it's a bit more isolated.
I know browsers are huge now,
but it's a bit more isolated than a whole OS.
So my approach is,
firstly,
for all of the distaste
people have for snaps,
this is an area where it's actually
going to be really beneficial.
So we can't ship LLM models
in the installer because our ISO will be...
It's already a little hefty.
It's already,
it carries a little timber these days,
so I don't want to make that decision.
So my plan is that we will
as part of the first-run onboarding wizard,
you will get the opportunity, you know,
we'll say, hey, we have this thing
(thing to be defined), and it uses AI.
Do you want in or out?
And if you are in, then it will go off
and get the correctly sized model
to run locally on your machine, right?
And so the kind of,
the irony here is lots of the same people,
I think, who have displayed some distaste for snaps
are now displaying distaste for AI,
but it is the snaps
that are going to allow them to remove the AI
from their machine very cleanly.
Oh, that is ironic.
It does seem like the,
you've mentioned a few things
that snaps help here.
It does seem like the sort of
architecture awareness
that snaps have
is probably pretty helpful here
considering all of the AI models
and custom silicon and all that.
Yeah, really.
We did some work a few months ago
called inference snaps.
I talked about this at a meetup,
and if you search for inference snaps,
you'll find the details.
But this is essentially,
we are packaging models like Gemma 3,
DeepSeek,
Qwen, Nemotron from NVIDIA,
and then, say, you can snap install Gemma 3,
you can snap install Nemotron,
you can snap install DeepSeek.
But the work we're doing that's actually interesting
is we then work with all the silicon vendors,
like AMD, NVIDIA, Qualcomm, MediaTek,
and we work with them, where they want to,
on particular models to get, how do I describe it,
silicon-optimized versions of those models
precisely for your hardware.
So there's like a manifest, your machine goes,
hey, this is what I've got, talks to our store,
and our store goes, ha ha, we know all about that GPU.
So does AMD, here's a model that works just great
on that GPU. Just the tensors for you.
Wow. That's amazing.
Right. So it saves you having to do this,
go to hugging face, hit search, and then sit there scratching
your head for a few minutes trying to work out this model's going to fit on your
machine. You just go, I've heard of Gemma 4,
I want Gemma 4, let me install it.
And so the foundation for AI in Ubuntu will be these snaps.
So local first, local inference,
with models that we distribute,
having worked with the silicon vendors,
delivered in the most efficient form that we can.
Yeah.
With some confinement around it as well, right?
So does part of this process work when, you know, you're looking at the roadmap for
Ubuntu and hardware partners come to you or come to Canonical and they say, in the next two,
three years, we're going to be building these inference chips into our laptops and desktops.
We'd really like your desktop to take advantage of this.
And then so you're looking at the plan, you go, okay, this is some ways we can do that.
Is that part of the calculation here?
Yeah, absolutely.
It's actually quite interesting to me that I hadn't really appreciated this until I stepped into this role, even though I'd been at Canonical for some time.
The Silicon Partnership side of our business is increasingly one of our strongest assets.
Oh, okay.
You think about the work we just did to ship CUDA, so like apt install cuda, ROCm, apt install rocm. That's huge.
It is.
From the perspective of a developer getting up and running and getting the right version that works with their kernel, you don't end up with loads of DKMS modules building every time.
100%. It's a huge deal.
Even just renting a GPU, it probably spins up an Ubuntu VPS, so the better that gets...
Yeah, really.
Right. And so the same is true of other kinds of hardware enablements.
So one of the things we're shipping is the DOCA OFED stack,
which is the accelerated networking stack, like the data center networking stack
that NVIDIA... well, that's the SDK that NVIDIA distribute.
So I think it is really important. Things like AI in Ubuntu,
and being able, with some confidence, to tell you that that would be plausible in a local-first way,
is only really possible
if we work with the people who are building the chips
really closely. And it's quite a symbiotic thing, right?
They want to build the best silicon possible.
They don't want to concern themselves with Linux distribution
packaging and,
because they have their focus, right?
And we have ours. And that partnership
worked out really nicely for us with Nvidia with the
DGX Spark. We sort of went on this
journey with Nvidia where they used to take Ubuntu,
you know, with an agreement with us,
repackage it into a thing called DGXOS.
and then put some extra stuff on top of it
and ship it with their DGX machines.
The DGX Spark was the first time
Nvidia went,
do you know what?
We're just going to ship Ubuntu.
And so the DGX Spark,
which is like a $4,000 AI workstation,
went out the door where the only supported OS was Ubuntu.
And it was just like, not special NVIDIA Ubuntu,
not like some weird Franken-Ubuntu.
It was like, just go download Ubuntu,
put it on a USB stick, off you go.
And I think it's a nice experience.
It's really great.
It's the perfect positioning at the right time.
This could have gone a different direction where all of this was done on Windows or Macs or something like that.
And, you know, whatever people say about AI and how they feel about it,
I am very grateful that, you know, Linux is very much part of this,
and people that are deploying all this infrastructure are deploying it on Linux.
And there's been a lot of great open source work here, like just with, you know, llama.cpp,
just all kinds of stuff in the space.
But there are some things that the open source community side is less well
situated for, which is things like partnerships with, you know, companies making
hardware. Yeah, yeah, that's, yeah, that's very true. And it's interesting. So one of the things
I would argue that has been complicated for Linux's desktop adoption is the fragmentation. And I think
fragmentation in the desktop space is simultaneously Linux's biggest strength and also weakness. It's
strength in the sense that there have been like thousands of really bright people who have
scratched an itch that they've had over time and done amazing things. The drawback is,
they're not always necessarily motivated
to make it work seamlessly
with other people's stuff, which is why if you look at
the modern Linux desktop, it's like
so many different things kind of stitched together
and every time something breaks on my
Linux machine, I'm simultaneously kind of annoyed
and also kind of stunned it works at all.
Yeah, I agree.
I agree.
But I think in the world of agents
and think about what I was saying about
perhaps an experience where you could
ask your machine to do something or troubleshoot
itself, like all of a sudden that fragmentation
problem isn't such a problem if you've got a thing that already knows all the things, right,
or knows how to go and get information about all of the particular parts of the system that
you have. In reality, I don't know anybody, even the best Linux admins I've ever met do not
know everything about every package on their machine. Yeah. But now we have something that can
pull the actual source and read it and teach itself the lay of the land in, you know, a few minutes.
I think we're going to end up with a lot more Linux usage. Yes. Don't you think we're just going to see
even more free software, more Ubuntu, more Linux
deployed because of this? I do. And I totally recognize people's skepticism. I have a lot of empathy
for the people who are replying to my posts a little hot under the collar. And I guess it is
our responsibility to demonstrate to our users that we will keep privacy in mind. We will try to pick
models that are licensed in such a way that it feels aligned with the values of open source.
Because I think even when you talk about things like open weight and open source, they just carry a
different meaning in this space. It's not the same thing that open source people have been used to.
And so we have to work out and navigate that in a way that is useful to the people who are all
in and want to play and provides a nice on-ramp, but not offensive to the people who just want out
at the end of the day. And my goal is absolutely not to start shipping a Clippy or a co-pilot button
on everyone's dock, forcing you to use it and keep, do you know what I mean? That's not the model.
No doubt. I can almost hear people typing about the Amazon
affiliate link in Ubuntu from like 15 years ago.
It's not going to be like that.
Now we're introducing Debbie.
Yeah.
Yeah.
Right.
So we are going to build and layer features in, I hope, you know, as an experiment, but I'm
quite committed to it.
It's not an experiment that I think will fail.
It's just that, you know, we have a few ideas.
We'll try them out.
You know, I'm excited because, John, it has a lot of potential, especially when you're
saying the solve my Wi-Fi, why won't my printer connect, my second monitor isn't turning on,
because you have an opportunity to focus something that knows the system well.
It knows the version of Ubuntu it's on.
It knows the hardware it's on.
There are these things that the agent, or whatever it'll be that's running on the system, will just be aware of,
that a user would have to spend a lot of time, if they just opened up OpenCode or something for
the first time, trying to get the same results out of.
So I think that has a ton of potential there.
That's exciting.
I'm curious if you, John, think that that will make us Linux users less aware of our systems
and how they're built.
Because part of the joy, I think early on in probably each of our Linux journey
is like breaking all the things and learning how it's all put together
and then being able to customize it in such a way that makes it our own
or makes you understand some users' challenges and solve them
if you have that kind of position at somewhere like Canonical.
And so is using some of these tools going to take us away from understanding what's under the hood?
I don't think so if you have a...
an interest in understanding what's under the hood.
But I think if you are someone who wants your computer to work and you don't care how,
it's a huge level up.
So I have, you know, a year ago, I was very much in the skeptic category.
I have completely immersed myself in Claude Code and played around with Claude and all this
stuff.
I've gone really deep on it and tried to learn as much as I can and use it as kind of natively
as possible.
And I have found it the most unbelievable accelerator for learning some things I've always
wanted to learn for trying out, perhaps new architecture patterns that maybe I'd never
have had the time to do. So of course, one can poke the machine, blindly accept what it produces,
and ship it. And actually, for little personal projects, why not? Do you know what I mean?
Like you want a tool that's going to do something for you. Do the thing. But an example,
like I built this coffee tracking app. I'm an insufferable filter coffee nerd. I built this
thing. And I think you guys picked up on the book thing, which was actually a fork of the coffee thing.
There was a bunch of stuff in there that I had never done.
before and it took me a while to build it, but like it was really interesting to be able to
go through that process. You know, I was telling it, this is how I want this application to be
structured, I want to use domain-driven design, there are some rules I don't want you to break,
and it was able to assist with the bits I didn't know. And it felt more like a long-lived
pair programmer than someone who was just doing the work for me. It wasn't a vending machine
for an app, you know what I mean? I was heavily involved in it. It's a fascinating journey
I think people take. I had a similar one myself: very skeptical, it's just auto-complete, what's the
point, to finding it extremely useful and an accelerator myself, and realizing it's a very powerful Linux
tool as well.
It does make me think we have an opportunity for the show, just in that, like, to Brent's
point, like, you learn a lot when you have to constantly fix things.
The tradeoff is you don't always get to choose, right?
Sometimes you have to fix it when you'd much rather be using your computer for something, right?
So then the danger is maybe you never stop to ask.
If you don't have to fix it, you never ask.
But I think that's maybe an opportunity for us to make sure people who want to be curious,
know that there are questions they can ask.
And I think, John, I don't know about, I don't know what for you, but it reminds me, too, of some of the arguments we're still having to this day about cloud computing versus spinning up your own Linux system, or serverless computing. It's essentially abstracting away part of the stack. If you do a one-click app deployment on DigitalOcean, or if you deploy something on AWS with serverless technology, you're not really learning Linux either. True.
You don't even know NTPD needs to be a thing.
I see lots of the arguments. And this is, I don't know, like this is maybe a whole other
tangent, but lots of the arguments sound exactly like the arguments people made when we first got
compilers. Yes. Well, I don't trust that to write code. And package managers as well. I'm not going to let
that install stuff in my Linux box. Are you crazy? Yeah. And so what I say is like to people who
perhaps have been skeptical, I've been there. I feel it like I really get it. But space moves so
fast that if your opinion is even six months old, it's worth just playing around and seeing what happens.
I think that's so true. I think that's it. I've also seen people who have bounced
off it, where they've said, okay, well, I've heard about this vibe coding thing, and they've gone away
and tweaked their Vim configuration and, like, tried to get an LSP. And like, okay, cool, you can
kind of make it work with Vim, but like, just spend a day with Antigravity or VS Code and Claude.
Do you know what? Spend a day in an environment that was designed to be used in this way, and just
like, just poke around a bit, see how it feels, like, understand. You know, my, my feeling is that
this really isn't going anywhere. And I think there are two ways we could
try and stop this or try and shape this.
One is stamp our feet and say we're not doing it.
We don't like it.
It's not open source.
It's big tech.
And be petulant about it.
We're not going to win.
The other way is to educate ourselves as much as we possibly can be part of the conversation
and influence it so that it isn't a burden on open source.
It is a positive force.
So right now, lots of projects are absolutely suffering
because people are irresponsibly hurling commits at them that they
haven't reviewed. I think it is the responsibility of us all to basically try to work with those
people and say, hey, this isn't quite what we're looking for. Like, can we work with you
to get this into a state that we can review? And over
time, we'll have a generation of people who really understand how to wield these
tools in a way that gets great results. Right. Yeah, we don't have a lot of culture yet, you know? Yeah.
We don't know how to use these. We're constantly discovering what we can even do, let alone how we should
do it with each other. However, I think that's the right mindset to start building a culture around
this tooling in Ubuntu. I think you have the right recipe there to build something responsible
in Ubuntu. So I'm looking forward to see where you take it. It's an exciting time. I personally
have gone from being, like I said, very skeptical to feeling like I'm more excited about
coming to work and working on tech than I've been in a really long time. Yeah. There is like something
unlocked in my mind and I am building a side project at an alarming rate.
And just, love it.
It's just, it's been... I also have sympathy for the "it's taking my craft" reaction, and I can see how people would have the other reaction.
My experience has been the opposite.
And like, all of a sudden, there's all this stuff I can build that I've been thinking about for years.
Yeah, we've been saying it's the most fun we've had with computers in years.
It feels like finding Linux again in a way.
It really does.
And to your point, too, you're right.
There is a bit of a craft and art, and I see Wes wince when, you know, I produce some
slop thing. But at the same time, it's a comparison that's a little cliche, but I was just thinking
when you were talking, it's very much like digital photography. You know, everybody now has a camera
in their pocket. And because of that, I have incredible pictures of my children that I wouldn't
have had otherwise. So I'm glad the digital photography and cameras came along, even though it's sort
of, sort of wrecked the art of photography a little bit for everybody trying to get that perfect
golden hour, sunset shot, right? It was a tradeoff, but now I have these keepsakes that I'll treasure forever
that are extremely valuable to me.
And I think it's kind of a similar trade-off with, yes,
some of the craft and the art of programming will be lost.
That's not going away.
They're still photographers.
Exactly.
But I also will have these keepsakes and these personal things that are extremely valuable to me.
And it makes me very excited.
And I'm glad that Ubuntu isn't shying away from it and that you seem to have a very
responsible and practical, pragmatic take for it.
So I think it's great.
John, I mean, this has been a great week.
It's been a great chat.
Is there anything else you want to touch on before we scoot?
No, other than we're going to need help.
So if this sounds interesting, then hit us up.
We are hiring like crazy, which is a little unusual at the moment in tech.
But we have a lot of openings and a very famous hiring process.
If you'd like to come play, then I would recommend it.
But otherwise, I think the next exciting thing is, let's make the interims crazy again.
I promised it when I took over Ubuntu.
So the next release is going to be the stonking stingray.
Good name.
I like that.
Very excited about.
And so, yeah, we'll start to see the first of these new features landing.
We'll see where it goes.
We'll keep an eye.
John, thank you so much.
I hope we can chat again soon.
Likewise.
Thanks very much.
Well, dear listeners and distinguished host, you may have noticed this week is Linux unplugged 665.
Oh, yeah.
And we've been teasing that, well, this week, this coming week, is the BSD Challenge Week.
We officially are kicking off the BSD challenge.
This is my stupid stinger.
Ooh.
Yes.
Is that what it sounds like when you boot BSD?
Yeah, that's my, that's BSD in a song, in a stinger.
So you've mentioned BSD a great number of times this week compared to, I don't know, every other week this year.
So I'm wondering, have you...
Been messing around?
Yeah.
Yes, I have.
Yes, yes, I have.
Because I wanted to hit the
ground running like we do with these challenges.
There's no rule that says you can't poke around a little bit before the starting line.
Oh, of course not.
You know, like if you're going to race a car, you take it on the track a few times.
So I wanted to have the best experience possible to flip my impression of BSD as a desktop
operating system.
Oh, what's your impression currently?
That it's for masochists.
It's for people that like to hurt themselves and just want to struggle the entire time they're using computers or trying to get software running or anything like that.
Okay, great.
And so I thought GhostBSD would be a great way to kind of get a modern take on FreeBSD designed for the desktop to kind of smooth over some of those rough edges and give me a good shot of changing my impression.
And that may be the case, but I wanted to test the car out around the track a few times.
So I downloaded the latest release and tried to get it going on my machine in QEMU/KVM.
and it just wouldn't start up.
It started to boot and would fail.
Started to boot and fail.
And I looked into it,
and it turns out that,
gosh darn it,
wouldn't you know it
for the most recent release
of GhostBSD,
there is a currently open bug
where the live session
fails to start X under QEMU.
And so...
Just your luck.
So I'm like, oh, okay,
okay, before I saw this bug,
I'm like, I'll go get the community ISO,
which uses Xfce instead of MATE.
Sure.
Same problem.
Same exact problem.
Come on.
And then I found this open bug report that exactly is my issue, which doesn't mean I couldn't use it on a desktop, and I still might. It's still a candidate.
You couldn't easily try it.
Yeah, I couldn't easily try it. So I decided to pivot to FreeBSD 15.1, because the beta just came out this week, and I like me some fresh stuff.
And this version of FreeBSD is supposed to offer, in the TUI installer, a Plasma desktop option.
Oh, and I'm like, oh, imagine if I could get myself a modern plasma desktop on BSD.
That's pretty good.
I'd have Kate, Konsole, I'd have all the stuff I like.
I think I could make that work, right?
This feels unfair already.
So I download this morning before the show, thinking I'm going to get this in and I'm going to get a sense of it, so I have an answer for this segment.
And I boot it up in the old VM, and it starts.
And the installer, you know, the classic FreeBSD text-based installer, TUI, whatever, doesn't have the Plasma option.
It's not in there.
They talked about it being in there.
It's not in there.
It's not in there.
So what I got was a headless FreeBSD install.
Well,
that's always what you were going to get really, right?
Didn't we know that?
Didn't we know that?
Good try, though.
But you could add it later probably.
Well, I tried that.
I tried that.
And I do get SDDM working,
and I can log in.
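For anyone following along at home, getting that far typically means the display-manager services were enabled. A hedged sketch of the rc.conf side, based on common FreeBSD convention rather than anything verified on the show — you'd install something like `pkg install kde sddm` first, and package names vary by release:

```shell
# /etc/rc.conf additions commonly needed before SDDM will come up on
# FreeBSD (illustrative sketch; knobs can vary by release):
dbus_enable="YES"
sddm_enable="YES"
```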
Okay.
And then I get a blank session, because there's some kind of bug that's preventing X11 from working under QEMU on FreeBSD.
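A generic QEMU troubleshooting habit for this situation — not a fix confirmed on air — is to cycle through the emulated display adapters, since BSD guests can be picky about which one their X driver supports. A sketch that just prints the variants worth trying (the ISO name is a placeholder):

```shell
# Print QEMU invocations with different emulated GPUs to try against a
# BSD live ISO that fails to start X. The ISO filename is a placeholder.
iso="ghostbsd.iso"
tried=""
for vga in std vmware virtio; do
  tried="$tried$vga "
  echo "qemu-system-x86_64 -enable-kvm -m 4G -cpu host -vga $vga -cdrom $iso"
done
```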
See, I've got that working, though.
I don't have 15.1.
Oh, which one do you have? 15?
Yeah.
Maybe we should trade notes.
Maybe I should try the 15 ISO.
Although I did then end up just for convenience.
I started using a VNC session.
Okay.
You could also try that.
So you've been kicking tires.
Yeah, I got i3 going on FreeBSD.
That seems like a good choice.
I took the cheating route, though, because I noticed that, props to FreeBSD,
they provide a bunch of pre-built images and stuff ready to go,
like cloud-init minimal ones and, like, more full ones,
including with ZFS set up in a pool already, just as a thing.
Yeah, I did do ZFS on root.
Why not?
So actually, I need to go play with the installer, because this meant I haven't actually tried the installer yet; I was able to just sort of dd that right into memory and then boot it in QEMU and start mucking around.
You know, get my rc.conf.
You and RAM disk all the way.
Yeah.
Nice.
Just because it was an exploratory setup, right?
I do.
So, like, I was trying to, I did have some issues.
I do think there probably are some things we could figure out or work around perhaps maybe.
I mean, look into the QEMU stuff specifically, especially on the graphics side.
Also, at this point, I could just give up.
I mean, ultimately, for the week,
I'm going to run it on hardware.
Yeah.
So I could just, I wanted to just try out a few options to see which one I wanted to commit to hardware.
I guess silly me.
But yeah, all right.
I might, I mean, I don't know.
So which one are you going to, which one are you going with officially?
15?
FreeBSD 15.
Yeah, well, I wanted to try 15 one.
I was just having some issues.
Maybe I needed to do some setup because I think they've made some changes to how some of the like package and user land stuff is getting shipped.
So I was having, but I was using it slightly before the beta was officially out, so I don't know.
Brentley, have you picked a BSD that you're going to roll?
I think my choice may be less responsible than both of yours.
I was under a car for most of the week, and then I had this brilliant thought while I was under there, as you do, that I might give NixBSD a try.
Whoa.
I can't believe we didn't think of that.
I know.
I know.
I was waiting to see if either of you.
So you're going to have a real advantage possibly here.
Or disadvantage.
We're going to find out.
Yeah, you might have some compiling to do.
Oh, I hope he does.
I think I also probably need a backup because I'm not sure.
So I would love to hear from the audience.
You vote, and I will honor this, vote for which BSD Brent needs to try.
You better get in quick.
If NixBSD doesn't work.
Yeah.
So we need a boost.
We will read them ahead of time, because I'll probably need it.
We'll desperately need an option B.
or send us an email, LinuxUnplugged.com slash contact,
or even if you're on Matrix, we've got the Linux Unplugged feedback room,
so I'll keep an eye on all three of those.
And I will honor whatever crazy BSD choice you guys send out there.
Okay, I'm thinking, for me, I might go retro hardware too,
which may increase the suffering.
Now, I have different degrees of retro.
We got the whole museum over there.
What are you choosing?
PJ, I don't know if you remember,
but is that Dell, that prototype Dell laptop,
is that in working shape at the moment,
or did we have to harvest from that to make the ODROID work?
It should work fine.
It just needs a drive, actually.
Okay.
Okay.
So I may try...
And power.
Right, right.
And it takes a lot of power.
I may try running BSD.
Oh, there we go.
I mean, this laptop is...
Whoa, this is a chunker.
It's a Dell prototype that was gifted to us.
when I toured Dell way back, I don't know, four, five, six, seven years ago.
It needs 180 watts.
It takes 180 watts.
That's going to be the biggest issue.
Because the reason is it has two Xeons in it.
What?
It has two Xeons.
It can have up to something like three or four drives, an insane amount of RAM.
Although it doesn't have an insane amount of RAM in it.
Oh, it's got that old docking connector of theirs.
The old classic docking connector.
Obviously came with Windows Pro.
It's beautiful on the inside.
When you open it up, it is absolutely beautiful.
It's huge, too.
Open up that, open up, just go ahead and...
How would you describe the size of that, Wes?
It's larger than any laptop, probably on the market.
I feel like I'm sinking into it.
Monstrous.
Yeah.
I mean, it was a big one, and I don't think they were planning to ship a lot of them, so they just went crazy.
Small track pad.
Yeah, tiny track pad.
Well, actually, it's a big track pad.
It does have a track point, though.
It is actually a big track pad.
It's just a huge laptop.
That's what's going on.
The perspective is all shifted.
Yeah.
Because look, it's got a full 10 key and a full QWERTY keyboard.
I have a question.
When's the last time this thing booted?
Just a couple of days ago.
Really?
Jeff got it powered up.
Whoa.
What did he power it with?
Some USB-C battery thing.
Some high power battery.
And he didn't leave that for you.
Well, it's his toy.
It was pulling nearly 100 watts from that, by the way.
Oh, this thing weighs a lot.
Yeah, it's very heavy, too.
Holy.
Yeah.
You've got HDMI, though.
Yeah.
Huh.
It's USB-C.
This thing is...
It does have USB-C; however, it's sort of an early implementation of USB-C due to the era, and it does not pull enough power.
So you have to use the barrel connector to actually properly power it.
Wow.
And I don't know if I'm going to find that.
Okay.
Good luck.
So that's my leading candidate for hardware, just because it'd be a lot of fun to get that old thing running again.
It's been on the shelf for a long time.
And it's a one of one.
However, I don't know.
I may have to go a different direction.
So it all kicks off after the show.
We have to officially start knocking off the points.
We do have the details.
We'd love you to participate and let us know how it goes.
LinuxUnplugged.com slash BSD.
And it will give you the details on the LINUX Unplugged 666 BSD Challenge.
Join us, won't you?
Oh, so, oh, ho, ho, ho, episode 666.
Please send in, you know, your experiences because we want to know how it went for you as well by that episode.
you've got one week.
Mm-hmm.
Good luck.
The scoring system is on the website.
And now it is time for
the boost.
Ooh, Congaroo Paradox kicks us off
with a baller boost.
177,000 sats.
Hey, Rich Lobster!
Mr. Paradox, right?
It's been a while since I boosted,
so here's some value back
for all the value you provide each week.
I think you're also getting the right balance
of your AI coverage. Keep it up.
Nice.
Woo-hoo! Make it show.
Thank you.
very much.
Did you mention how much the boost was for?
Yeah, 177,000.
Okay, great.
I missed that.
That's unbelievable.
It is.
I just was in a state of, yeah.
But you know what else is unbelievable?
Oh, Derivation Dingus is coming in with 102,767.
What?
Oh, my, good.
Wow.
All right.
Also, we just got to see Derivation Dingus.
Yeah, so one of these is a live boost.
Great seeing you guys.
Thanks! Fest was a blast. I'm writing this while sitting directly in front of you.
Oh, amazing, at our live show.
Right, right, right.
And then also props to Dingus for sending us a really nice breakdown of some of the Copy Fail stuff,
including some neat disassembly visualization there.
A little pre-value, because he saw the LIT pending item, saw we were going to be talking
a little Copy Fail, and hooked us up with some 4-1-1.
Great.
Thank you, Derivation, for those sats.
That's a double-layer value this episode.
Very nice.
And it was indeed great seeing you at LinuxFest.
Yes, indeed.
You know what else is unbelievable?
What's that?
A dude trying stuff is also a booster with 100,000 sats.
What?
Rich lifestyle!
Oh my goodness!
What is going on?
Boosting in to celebrate getting a new job.
Hey, congratulations, buddy.
It is rough out there, gents.
Been applying for over six months.
Wow.
Way to stick with it.
Thank you for doing the most to keep us updated on the happenings in the community
and helping me keep my passion and remember how awesome
software really can be.
Cheers.
Cheers to you and congratulations.
Indeed.
Thanks for sending some value our way, dude.
Nice to hear from you too.
Keep trying stuff.
Yeah, keep trying stuff.
The dude abides comes in
with 65,432 satoshis.
I hoard that which your kind covets.
It's quite nice as well.
Hey, yo, I just realized
the last time only a portion
of my boost got through, y'all, so here's a little
bit more.
Live boost!
Thank you.
Very nice.
Amunday boosts in big ducks, 22,000 sats.
Looking up for all but duck
That's about 22 cents.
A live show LinuxFest Northwest boost.
Very nice, thank you
boosting right there from the audience
How about that?
That was fun last week
Mm-hmm
Mm-hmm
Well, Tomato, or tomahto, boosts in with 4,444 sats.
You say tomato
Foy!
Love the Linux Fest Northwest
coverage
I've got DragonFly BSD
and OpenBSD both downloaded.
I've never run either of them before, so let's see how this goes.
666, devil horns, et cetera, et cetera.
Ah, yes.
DragonFly BSD, surprised that didn't come up.
I don't know if you saw, Brent, but I started a little poll for you in the Matrix chat.
Oh, thank you.
That's very kind of you.
Yes, good.
Let's get to voting.
And you know the mission.
You know what the mission is here.
NetBSD all the way.
So good.
Hey, it's our buddy Odyssey.
Odyssey Wester from Spokane
comes in with 5,151 sats.
comes in with 5,151 sats.
You make me want to be a better man.
Great to see you all live.
Odyssey, it is always great to see you live.
I was saying to the guys and to Angela,
it's like it's not a Linux fest unless Albert shows up.
Oh, and I got a little 3D printed gift from...
Yeah.
Albert as well.
These little tiny, really impressively printed penguins.
Super smooth 3D printed penguin, little Tux penguin.
And he gave me 3D gifts to give to the kids, which I did.
I handed them out all over.
Yeah.
Also, Odyssey Wester is in the live chat right now saying,
I'm just trying to get GhostBSD to boot on this damn Chromebook.
Can't get it to boot, though.
Can't mount the UFS mounts.
Well, it is famous for its wide variety of Chromebook support.
Thanks for the value, Albert. We appreciate it.
Moon and I boosts in with 2,000 sats.
I've been using Ventoy for a while,
and I'm curious what baggage and edge cases you all are referring to in this episode.
I don't know.
These two guys don't like Ventoy.
Zero.
I love it.
I have never had success with Ventoy.
I don't know what I'm doing wrong.
I have tried several times, but I always run into an issue where it can't boot the specific
ISO I want to be booting.
And I don't know if it's a hardware issue.
I don't know.
We should try, like, maybe one that we make.
Yeah, maybe.
I mean, it's been a long time.
No, I have.
He has one on a, like, really fancy USB drive with SSD.
doesn't work. Still using that, by the way,
from the very first time we covered it, I'm still using
that same thing. That's true. NVMe in there, and it
rocks. It's got C on one end, A on the other end.
It works on every machine ever. Not at all.
Wow. Yeah. So let us know what you think about Ventoy.
I will add: some people struggle with it. It seems like
maybe some firmwares or UEFI setups
just don't like it. So mileage may vary.
But then separately, there's some concern around
binary blobs that are present in the codebase.
Yeah. And so it's been brought up,
but it hasn't ever really been fully addressed.
There's been, like, more tension about it over time.
So some folks have sort of provenance and trust issues
with the delivery of how you get Ventoy.
You can pivot to that now.
Okay, that makes me feel slightly better.
But thank you for that question.
Clearly, Moon and I, it needs addressing.
And I'd like to hear what people think about Ventoy.
I guess in the last year, the dev did have some response,
saying that the blobs come from other open source projects,
and proposed building them from GitHub CI.
I don't know if any of that's actually really happened, yeah, so.
Maybe instead of blobs.
Make your own judgment, I don't know.
It kind of depends on what you trust.
What if we call them magic boxes?
Oh, instead of blobs.
And then it's not so bad.
You know, it's got some magic boxes.
It ships with a few magic boxes.
Ah, I mean, I'd like to know how it works, but it's magic.
Okay, I'll accept that.
And then we don't call them blobs, right?
Blobs is something you fight in a video game.
Distro Stu comes in with 3,300.
And he says, you're doing a great job.
Well, thank you, Distro.
you're doing a great job.
Should we,
you want to play a little,
I got,
that's not possible.
Nothing can do that.
I do like the Leonard Nimoy clips
from time to time.
Live long and prosper.
Superior ability.
Breeds superior ambition.
That's a good one.
And then, as Distro Stu requested.
You're doing a good job.
There you go.
I just need to
let that steep for a bit.
Well,
Monty comes in with a row of ducks.
Thanks for the push
to get my rescue drive system updated and in place.
Nice.
I added a rescue Nix config with boot to RAM now to my flake
and have a USB drive plugged into my Proxmox host.
From my laptop, I can update the config,
build it, and flash it over the network to the USB drive.
Nice.
With a Justfile, so I don't forget.
And I can then boot a VM on Proxmox that has the USB drive
passed through to test it.
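That workflow can be captured in a tiny script. Here's a hypothetical sketch in the spirit of Monty's setup — the flake attribute, host name, and device path are all invented, not from his actual config — with a dry-run mode so the plan is printed before anything gets flashed:

```shell
#!/bin/sh
# Hypothetical "build here, flash there" loop for a NixOS rescue stick.
# Host, flake attribute, and device path are placeholders for illustration.
set -eu
DRY_RUN="${DRY_RUN:-1}"
# In dry-run mode, echo each step instead of executing it.
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }
run nix build .#nixosConfigurations.rescue.config.system.build.isoImage
run scp result/iso/*.iso proxmox:/tmp/rescue.iso
run ssh proxmox "dd if=/tmp/rescue.iso of=/dev/disk/by-id/usb-rescue bs=4M conv=fsync"
```

Set `DRY_RUN=0` only once the printed plan looks right for your own paths.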
Oh, that's using your noggin.
That's fancy.
I can pull the rescue drive out whenever I need it.
And then we get a link to Monty's config.
Oh, Monty.
You know we love to convicts.
Thank you.
Oh, yeah.
Oh, I'm just saying, I like up front.
He's got the structure listed in the ReadMe,
a real quick blurb for 30 plus hosts.
My goodness.
And then even MIT licensing on there.
Well done.
It's a clean, lean machine.
Whomever Whiz boosts in 10,011 sats.
Eric here, I had the best time at Linux Fest Northwest.
The brilliant and creative members of this community are intellectually inspiring, funny, astute, inclusive, and generous.
I even managed to hook up on Matrix before I left this year so I can stay connected and keep that conversation going.
Oh, great.
Thank you so much for showing me the way to find my people.
Fun will now commence.
Fun will now commence.
Yeah, it really is great, isn't it, Eric?
It's more than you can even imagine from afar,
and I'm really glad that you had a chance to share that with us.
Thank you for the value, too.
Mr. Mayhem is here with 6,660 sats.
Week 1 is done, and he posted a full write-up.
So far, he's started, and he is going for maximum points.
The Madman is repurposing what he calls a bad luck NixOS machine
into a fresh FreeBSD setup.
He's avoiding past Ghost BSD concerns.
He's going for full graphical desktop.
He's got a browser.
He's got his user accounts and mounts.
Audio is already working.
He's got that done.
System administration tasks include package updates, OS updates, SSH, services, and scripts.
And apparently he has a BSD jail with nginx running inside of it.
Apparently.
We're going to see the submission.
Yeah, I just think it's sounding like mayhem is winning.
Yeah, so far.
Stretch goals include a PF firewall rule, bhyve snapshots, and DragonFly BSD.
Oh, yeah.
So mayhem
I mean that's a good playbook right there
If you just want to join the challenge
You want to just set mine up too?
If you want to do my challenge
Yeah send us a disc image
Boosted in
That would be great
Thank you everybody
Who boosted in
Also, thank you everybody who streamed sats
18 of you streamed sats
As you listened, collectively
You stacked 25,350 sats
Coming in hot with the boost
When you combine that with our baller boosters
And everybody who boosted
And we had some great ballers this week
It turned out to be a tremendous
this episode. And this is the interesting thing about value for value. About three days ago,
it looked like it was going to be a rather low episode. And then just a couple of members
in the community stepped up. And now it's one of our better episodes. And it's funny how that can
happen sometimes. And we just ride the wave. We're so grateful. So thank you, everybody. We
stacked a grand total of 526,592 sats. Thank you. That is very, very great. We really do appreciate
that. And if you would like to boost in, Fountain FM makes it really easy these days with
fiat or sats, including multiple ways to do that and connect into your own Alby Hub. Now, if you've got an
Alby Hub, you can integrate with lots of different applications, including the Podcast
Index, and you can just boost from the web. It's a great way to support the show, or you can
become a member and put your support on autopilot. Thank you everybody who supported episode 665,
and we look forward to hearing from you when you boost in on the BSD Challenge. Let us know how
it goes with a boost or the contact page.
B-S-D boost.
We should have, yeah, 6,666.
Is that the B-S-D boost?
Any number of sixes we'll do.
All right, so we got a different kind of pick for you this week.
This is kind of, we're going to ask you to try it and report back.
Since we're busy with the B-S-D challenge.
Ask not what your show can pick for you, something like that.
I like that, yeah. Ask what you can pick for your podcast.
There we go.
We'd like you to get nasty.
N-A-S-T-Y.
It is a NAS operating system built on NixOS and bcachefs.
It turns your hardware into a storage appliance that serves NFS, Samba, iSCSI, and NVMe over Fabrics, all managed from a nice web UI, updated atomically, with rollback support.
New version just came out that integrates a complete backup system using Rustic Core so you can go to anything basically Rustic supports, which is a lot of the things.
There's a new log viewer in the UI, and a services page with unified services configuration for NFS, SMB,
iSCSI, NVMe over Fabrics, networking, UPS stuff, SSH, Docker, backup server, et cetera.
And ARM support. It's GPLv3.
So if you are not participating in the BSD challenge and have some time.
An alternate B-based challenge.
Try out nasty.
And report in because what we're trying to essentially get to is if it's worth us giving it a full go.
I feel...
I feel...
I feel...
Wes, is this your project?
I am interested.
I will also just sneak in here: bcachefs, a day ago, had v1382 come out, with a bunch of performance stuff, cycle detector and SIX locks improvements, B-tree write buffer multi-threading, and B-tree node merge attempt thrashing fixed.
So now Kent writes: if you've got a workload where we're slower than btrfs or ZFS, let me know.
Oh, is that one?
Uh-huh.
All right.
We'll ride that wave with Nasty and let us know how it goes, and if we should try it out.
Now, we do have a pick that I think is very handy.
It's one of those...
A legacy pick here.
You know, a standard pick.
But it's one of those you need it when you need it.
It's called Defuse,
and it allows you to remove backgrounds from images locally on your desktop.
It's a GTK4-based application written in Python using libadwaita,
and it uses GPU acceleration on x86-64 systems.
So it's using the GPU to do accelerated removal of the background
in just a little, simple, purpose-built application.
You don't need to go to a website.
You don't need to go to a service.
You can just use Defuse.
Processing is performed using the ISNet general-use model through ONNX Runtime.
And it is also GPLv3.
Neat.
Have you tried it?
How does it work?
It's pretty good.
It does have some challenges on hair, but for some of the stuff, I had a funny picture of
Brent to make a sticker.
I wanted to use it to make a sticker.
Oh, good.
Yeah, we need Brent stickers.
And then there was also, I have this picture of Jeff when he's holding a long pole, and that made for a great sticker.
I have them for when I need them.
If you get a moment, you make a sticker.
That's right. That's responsible sticker.
It's also available on FlatHub.
The stickers? No, no, Defuse.
We'll put a link to that in the show notes.
Link to NASty. Links to everything we talked about today will be in the show notes.
You can find those over at LinuxUnplugged.com slash 665.
And of course, you know what next week is.
It is the results of the BSD Challenge.
We'd love to hear how it went for you, too.
And Wes, some pro tips for people before we get out of here.
You got any?
Yeah.
I'm hooked on structured metadata.
Maybe you are too.
So we have an XML file.
And in that XML file, we have a JSON file.
Actually, maybe several.
What?
Yeah.
And that has chapters.
Oh, nice.
Yeah, which is metadata about the show.
Uh-huh.
And we have even more metadata if you want, like...
We do?
Yeah.
Well, if you want like an S-R-T file.
Mm.
Yeah.
Or a VTT file.
And then that's like, who said what?
When?
In there?
Right?
Right.
Embedded.
I mean, it's like a...
You go to the XML, and then that points you to the SRT.
Uh-huh.
And then you got the data.
You got it.
Yeah, I got it.
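As a toy illustration of what Wes is describing — the feed snippet below is made up, though `podcast:chapters` is the real Podcasting 2.0 element — this is roughly the first step an app performs: find the chapters pointer in the RSS item, then fetch the JSON it names.

```shell
# Pull the chapters JSON URL out of an RSS <item>, the way a
# podcasting 2.0 app would (the feed content here is a fake example).
item='<item><podcast:chapters url="https://example.com/665/chapters.json" type="application/json+chapters"/></item>'
chapters_url=$(printf '%s' "$item" | sed -n 's/.*podcast:chapters url="\([^"]*\)".*/\1/p')
echo "$chapters_url"   # → https://example.com/665/chapters.json
```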
All right.
But there's these things called podcast apps.
Yeah.
And a lot of them do that for you.
Oh, what a great idea.
Yeah.
It's quite the ecosystem.
They call it podcasting 2.0.
They also support live streams.
See you next week.
Same bat time.
Same bat station.
Yeah, that's right.
We are live on a Sunday.
make it a Tuesday by joining us over at jblive.tv or jblive.fm. Or, like Wes said, in your
podcasting 2.0 app of choice. A lot of them just support the live streaming in there. We
have it pending, so you know, like a day before the show, when it's going to be. Boom, you hit the
button. You're listening. It's incredible. It's amazing. It's podcasting. We also got the website,
LinuxUnplugged.com. That uses HTML and CSS. It looks pretty good and it gives you links to stuff.
Check it out. You're going to love it. But thank you so much for joining us on this week's episode of
your unplug program.
Hope you enjoyed it. Let us know what you thought.
It was a big episode.
And we'll see you right back here next Tuesday.
As in Sunday.
