Tech Over Tea - Save Linux From 2038 Bugs Before It's Too Late | Bernhard Wiedemann
Episode Date: March 6, 2026

Today we have the maintainer of openSUSE Slowroll on, not mainly to talk about that project but instead his other endeavour: identifying and resolving Year 2038 bugs.

==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

==========Guest Links==========
T-12 Years: https://www.reddit.com/r/linux/comments/1qfw17a/today_is_y2k38_commemoration_day_t12/
Slowroll: https://en.opensuse.org/Portal:Slowroll

=========Video Platforms==========
🎥 YouTube: https://www.youtube.com/channel/UCBq5p-xOla8xhnrbhu8AIAg

=========Audio Release=========
🎵 RSS: https://anchor.fm/s/149fd51c/podcast/rss
🎵 Apple Podcast: https://podcasts.apple.com/us/podcast/tech-over-tea/id1501727953
🎵 Spotify: https://open.spotify.com/show/3IfFpfzlLo7OPsEnl4gbdM
🎵 Google Podcast: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xNDlmZDUxYy9wb2RjYXN0L3Jzcw==
🎵 Anchor: https://anchor.fm/tech-over-tea

==========Social Media==========
🎤 Discord: https://discord.gg/PkMRVn9
🐦 Twitter: https://twitter.com/TechOverTeaShow
📷 Instagram: https://www.instagram.com/techovertea/
🌐 Mastodon: https://mastodon.social/web/accounts/1093345

==========Credits==========
🎨 Channel Art: All my art was created by Supercozman
https://twitter.com/Supercozman
https://www.instagram.com/supercozman_draws/

DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase we may receive a small commission or other compensation.
Transcript
Three...
Two, one.
Good morning, good day, and good evening.
We're not starting again this time, so welcome to the show.
You never saw what happened 10 seconds ago.
It doesn't matter.
So, I've done a video in the past couple of years talking about the 2038 problem, and today we have on the show the person who has been writing these posts and doing a lot of this work. I was told I pronounce your name incorrectly, so how about you introduce yourself and we'll go from there?
Yeah, so when you pronounce it in German, I'm Bernhard Wiedemann, but you might go with the English version, Bernard or something. That's fine.
Anyway, so yeah, introduce yourself, tell us what you do.
tell us what you do I work for Sousa since 2010 that's over 15 years by now
So quite some time and I work on OpenSuse and a lot of other stuff.
At some point I did the OpenQA test automation.
That's very important for Tumblewilling release distribution now,
but other people take care of that now.
So my baby got adopted.
It's strange feeling.
And then I started to work on reproducer builds and that means testing that you can build
software and build it later and still get the same results.
There I found the problem we want to talk about today, but maybe want to go with more basic.
Yeah, I think a good place to start is just explaining what the 2038 problem is for
anyone who's completely out of the loop.
Yeah, you know, computers track time. For example, if you start a download, you want to know how long it will take, and have a percentage meter based on the previous download speed. For that you need a clock, and calculating differences on the clock is very easy if you have an integer for the start and an integer for the end: you just subtract one from the other and have a difference. But for that, your integer needs to start somewhere, and that somewhere was 1970 for the Unix folks.
Midnight, January 1st, 1970.
Exactly.
And for some reason, they made it a signed integer.
So they could represent dates from 1901, because maybe you import old files from older machines
that were not Unix and whatever.
I have Git repos with Git commits from the '90s, and back then Git wasn't around; it's because I imported old stuff.
So yeah, that might be the reason. We don't know.
That would make the most sense.
It's just how they did it.
Yeah, and the original counter was 32-bit, because that was more than enough for the far future, back in the '70s.
Yeah, if you're writing code in the 70s, it's a reasonable assumption that nobody's going to be using that code 60 years later.
Yeah, and not only that. Compilers back then were much simpler as well. They didn't even have a 64-bit integer type built in. So if you wanted to work with 64-bit numbers, then you would have to do some more involved arithmetic somewhere. So everything was complicated if you had 64 bits. And of course, it's slower. You want the difference of two numbers, and if you have to do more operations, then it's slower. And memory was very much smaller then, too. So yeah, they saved some space and time.
The problem is these numbers have been in use for a little bit longer than anyone would have anticipated.
Yeah, I'm not sure what they thought about how long computers would be around, but they're still around, and there are even more of them than back then.
There's a famous quote from some IBM guy who said, yeah, maybe there will be six computers somewhere in the world.
There's at least six.
He wasn't too far off.
Yeah, but back then, computers were very big and very expensive.
And these days, that's not the case anymore.
Yeah, so you have these counters of seconds since 1970, and they keep ticking, another second and another second, plus one and plus one. We already passed a billion seconds, and shortly after 2 billion we will reach this number which, when you write it in hexadecimal, is 0x7FFFFFFF. So 7FFFFFFF, that's 32 bits, no, 31 bits, all ones. And when that gets a plus one, it rolls over and becomes a negative number again, representing 1901.
Yeah.
And that can cause trouble for systems, because if you have your download running and want to calculate the difference between two numbers, and suddenly the difference is not three seconds but four billion seconds, oops. That will be a minor inconvenience to you, but there might be other machines that will have stronger failure modes. If nobody tests or checks the code, things can fail.
The specific point this runs out is 03:14:07 on the 19th of January, 2038. For anyone who really cares about it, it's 2.147-ish billion seconds. You can just look up the 2038 problem; I don't want to read out the entire number. It's around about that point where things run out.
Yeah, or you can take a calculator and calculate two to the power of 31.
Well, you could do that, yes.
Then put that into the date command, with date -d, an @ sign and the number. And yeah.
Now, the 2038 problem is often compared with Y2K, also known as the year 2000 problem.
And I feel like a lot of people, in the discussions I've seen about the year 2038 problem, undervalue it because of what the general public saw with Y2K: people only saw the end result, after a lot of work had gone into fixing it, so they now assume that nothing ever actually happened there.
Yeah, that's a sort of paradox, huh? You spend so much effort going into all the code to review it and fix things, and after you spend all of this effort, everything keeps working. Very nice. It's like every day you take out your toothbrush and brush your teeth, and what's this effort for? Your teeth are fine. So nothing ever happened to your teeth. So yeah, I think it would have been much worse if no effort had ever been spent.
You know, 1999 was a different time. There was no iPhone and no Raspberry Pi. I did have a laptop, but it was like three-and-something kilograms, it had 32 megabytes of RAM and a small hard drive, and you could hardly run Firefox on it. Real computers were these big, heavy boxes. So there were a lot fewer computers, and fewer servers, and smaller machines with less RAM and less flash storage. So you had smaller programs with less code, and even fewer coders. There are over twice the number of software developers these days. And all that means there's now a lot more code, more involved code, and more code that you didn't write yourself, because other people wrote it, and the whole open source thing took off since the 2000s.
Right, you're relying on libraries that rely on libraries that rely on libraries. The JavaScript environment is really bad for this. You'll see dependency chains that go 30 packages deep.
Yeah, if you've ever done a Hello World npm thing, that's like hundreds of megabytes. There's a lot of code from other people in there, and you didn't review that, so you don't know if there's an issue.
So even if your code is compliant, there might be somewhere, somewhere down the dependency line, where somebody did something, and how are you going to find that?
Yeah, exactly. So you could take all the source code and somehow look through it, and there might be some automatic ways, because it's made to be machine-readable. That's the nice thing about source code. So there's hope that we can get some linters or compiler warnings about these things. And there's one other thing: oh yeah, why do we count seconds?
You know, there was a story where people wanted to issue certificates for their cloud environment, and they wanted them to be valid for one year. So what did they do? They didn't use the seconds; they just incremented the year by one, year plus one. And that worked for a year, and another. And one day it failed, and you know which day it was: it was February 29th. Because when you do year plus one, there's no February 29th in the next year. Suddenly, yeah. These kinds of things you can avoid if you just use seconds, because then you do 60 times 60, times 24, times 365, and then you have the number of seconds you want to add.
I know someone is going to comment in the comment section about leap seconds.
We're not talking about leap seconds right now.
Those are also a thing.
Yeah, yeah, that was also a thing. I remember an issue in the Linux kernel, when a hosting provider noticed, oh, our measured power usage went up at this moment, because all our servers had the leap second issue. You needed to reboot them all at once to get back to normal. That was a decade ago or something.
One of the other problems that didn't exist when Y2K rolled around is that devices were far less interconnected. So it's not just that there were fewer devices. You might have had a device, but that device was offline; that device was just its own thing. Now, you might have a laptop that's connecting out to, you know, 20 or 30 different servers all at once, and all of those are also important to make sure things are still working as well.
Exactly.
In 2000, I had a dial-up modem with 56 kilobits, and I dialed in, fetched my mail, and went offline again to not incur per-minute charges. So that was a very different time. Only the happy ones had DSL.
I think part of why people
undervalue this is a problem
is because of how
sort of extravagant
some of the reporting on what
would have happened was, you know,
planes were going to fall out of the sky,
nuclear bombs were going to be launched
because the computing systems
would just malfunction, like
things that
just were not possible with how those systems were built.
Like, a plane wasn't going to fall out of the sky if the GPS system broke. Though maybe now it would be different. But people assume that because the worst things didn't happen, nothing happened. But things did actually happen in systems that weren't patched.
There are some small video rental stores, for example,
that sent out late fees that were hundreds of thousands of dollars,
because, you know, you didn't return it since 1901.
Yeah, so something did happen, but machines were less interconnected. And maybe there were more humans in the loop. Now you have AI, and it's not trained on these kinds of things, because why would it be?
Yeah.
But you don't know. There's so much electronics these days, in all the places: in cars, in smart TVs, in home routers, and Raspberry Pis embedded everywhere.
It's so much stuff.
So one thing I often hear,
which is obviously incorrect,
if you know what's going on,
but I often hear people say,
this isn't a problem anymore
because we're not using 32-bit systems.
That's obviously not correct, but why is that not correct?
Yeah, so there's one part that is improved for 64-bit systems, and that's in glibc. You have this type called time_t, and this time_t is 64 bits on 64-bit systems. So that part is fine. Everywhere your programs use time_t, correctly, everywhere, that's fine.
But then you start sending stuff over the network.
And your network packet has a defined binary structure.
And maybe that defined binary structure was made at the time when people thought,
32 bits is enough for this kind of thing.
SNMP did it like that, for example. Or there's XML-RPC, and SOAP. And those have types for characters, and for strings, and for 32-bit ints; I think there was no type for a 64-bit int. So if you transmit a time and just use the trivial 32-bit int to transfer it, you might run into trouble.
So there's these transfer formats or file storage formats.
MariaDB had this problem where this time stamp somewhere and they're fixed binary format.
And they didn't want to change it to 64 bit because that would break back for
compatibility, so they did a different fix, which was to redefine it as unsigned it.
So it runs for another 68 years.
Yeah, that's fine.
That's a long time in the future, you know.
Right.
Surely no one's going to be using it in 64 years from now.
Oh.
I think they can figure out some other solution.
So I was going to ask: there are temporary stopgaps that can be done, like dropping the signed integer.
Yeah, replacing it with an unsigned integer. Another project did just that last week. They also had some internal bookkeeping stuff, but it was limited to 32 bits, and they didn't want to mess with backward compatibility. So they just said, okay, it's not signed anymore, now it's unsigned. And all the previous values keep working, because the top bit is still zero at the moment, and once it's a one bit, hopefully everyone will have upgraded. It's still nearly 12 years.
Yeah, every year on January the 19th I say, let's remind people: next year it's only 11 years left.
Maybe at some point people will start, you know, caring about the problem.
Maybe.
No, they're already starting. I see patches being done. When I did the post, someone said, oh, there was a patch in GRUB, in a new release recently. GRUB does its own calculation for converting timestamps to an actual date, because they don't have an operating system to rely on at that point. And they fixed their calculations.
Now, 64-bit might not sound like it's that much larger than 32-bit, but it is, because you're doing powers of two, right? Just how long is this? For anyone who might not really know: how little of a problem is it if you go with a 64-bit number?
Yeah, let's say long enough. The sun will be burned out, and the Big Bang will long be forgotten. And you can even reference times before the Big Bang, which nobody ever needs. But the zero is still at 1970.
Basically, you have more than enough numbers at that point.
Look, if we outlive the current universe, move to another universe, and somehow are still running the same software that hasn't been updated in billions of years, I guess they can worry about it then.
Yeah, I think we will run out of IPv6 addresses before that.
Maybe we'll finally make IPv6 the standard by then.
Yeah, it's at 50% adoption or something.
I'm still waiting for someone to run an IPv6-only server for something that people really want, like, you know, that adult stuff. And then people will be incentivized to finally adopt IPv6.
Yeah, I guess that's what it would take.
So, okay.
Why did you start getting interested in identifying these problems?
Yeah.
Let's say I like things to be correct. I did this whole operating-system automated testing thing to ensure that all those releases are correct and working. And then I tested for reproducible builds, where I found actual bugs in the build process. But sometimes, oh, there was an error and we didn't check for it. So, oh no, 40 man pages are missing from the binaries in your package and you never noticed, because it only happens randomly, 10% of the time or something. And I found these things.
And for reproducible builds to be tested, you build in the future and should still get the same binaries as today. Because sometimes there's a year embedded in there, like a "Copyright 2027", because you put the current year into the generated file. It's rare, but people do these kinds of things, or just timestamps. And I noticed that some packages failed when built with a date past 2038. I looked into that and found actual instances of this issue, even on x86_64. There's a large time_t, but people convert this time_t to integer and do things with it, because int is the default type in C programs.
And is that a 32-bit int?
Yes.
Right.
Standard int and C is 32-bit.
There's a long type and on 64-bit systems long is 64-bit, but on 32-bit systems long as 32-bit, so it's
not enough for tracking time.
And if you really want to be sure you use long long, then it's definitely 64-bit.
Why are they doing those time_t to int conversions?
Maybe they just didn't think about the type. Or there are the standard C functions, for example. There's one called atoi, ASCII to integer, and that returns an integer from an ASCII string. Maybe you have your config file, or your HTTP request with some number, and you get this number in and throw it at your atoi function, and it gives you back your integer that got converted. And it's fine now, and after 2038 it doesn't work. Right. So you need to use the right function; atoll is the long long version of it. And I did some patches in that direction as well. There's a whole set of functions like that.
So what are the... you mentioned the time_t to int conversions. What are other common areas you tend to run into where mistakes crop up?
Yeah, there's also the other direction. So let's say we have a network protocol: you have a server and a client, and you want to communicate timestamps between them. Your server has this time_t, and it needs to convert it to something it can send over the network, which is either an ASCII string of your number or a binary representation of your number. And then the client has to do the opposite conversion, of this binary or ASCII, into its time_t. And at every point of this conversion, on one side or the other side, you need to get it right for the transfer to work. So you have this time_t to int, and int to time_t, and both have to actually use 64-bit integers in their conversions, in their transfer protocol. Only then does it work. So there are a lot of things you need to get right, and it fails if even one of these transfer steps is wrong.
And maybe the client writes a file, for storage, for logging, whatever. There's the printf function, where you can put in %i, and %i expects an integer argument. And yeah, there are these language defaults, and a lot of these language defaults say int. And this default is easy to use, but wrong for times.
So it's not necessarily directly storing the numbers that you tend to see problems with; it's more the conversion between types, for moving that number around between functions and moving that number across the network.
That's the usual reason for these, yes.
So as you've been finding these problems
and reporting into upstream projects,
what has been the response you've gotten
from developers in some of these projects?
Yeah, it varies. Some just say, oh, here's a nice merge request, let's merge it. Boom, done. If they're nice, they even send an exclamation mark and a smiley or something. And from others I don't see anything back, because it's an orphaned project and they don't care about the software anymore. And in a few cases they said, oh, there's a time traveler, why is this clock set to the year 2039? Something is wrong with this, and the tester should check their system instead. I have some copypasta that I throw in there about why I'm testing with the clock set to the future, and that helps avoid some of this confusion.
Right, because I could imagine, especially for things that involve, like, you know, SSL certs, they're like: okay, you've got the time set wrong, obviously that's not going to work, right?
Yes, and that's a completely different part that plays into testing, because there are these certificates that can expire next year. Usually it's in the test part of the software. So when I do these build tests for reproducible builds, there's a build part where it unpacks a tarball, runs a build, installs the binaries, and then it runs tests. And these tests use pre-generated certificates. And these certificates always have an expiry date. And if it's 2027, then it stops working next year when we run these tests of the software itself. And we find that we can't test, because if we set the date to 2038, there will still be the expired certificate, and it sort of blocks the view on the other issues that might be hidden behind it.
Well, how do you deal with that problem?
It's not solved yet. I file issues for upstream. And then maybe they generate these certificates on the fly at runtime, so they're always valid for yet another year, or they generate certificates that are valid until the year 3000, and it's someone else's problem then. So there are different solutions, and sometimes I send them a pull request, or let them decide how they want to solve it. And some accept it, some don't.
That's like a mix of outcomes.
Yeah, but that's the problem. There's this one idea that maybe we could patch our OpenSSL and GnuTLS and Mozilla NSS and everyone who does these kinds of certificate expiry checks, and have some common way to say: okay, here's a certificate, but please evaluate its expiry as if today was this other date. Then they virtually never expire, and then you can test for these other issues.
You mentioned some people have been confused about why your date is set to some time in the future. Have you seen anyone confused about why you're even looking for these problems, and why it even matters to fix this now? Because I'm sure some people will think, oh, the problem's X number of years away, 12 years as of recording, however many as of when you're watching this. Like, that's a problem for later.
Yeah, I've had one or two people reply like: yeah, I acknowledge it's a problem, but I have much more urgent things to do right now. Because maybe there's an embargoed security bug and they have to solve it urgently and hit a deadline, or their company runs out of money. So there are practical concerns, and people don't always prioritize fixing things that only become a problem 10 years from now.
Now obviously, you can really only test stuff that is open source. There's a whole world of proprietary software that could very well have all of these same problems, which I would hope the companies involved are dealing with and testing, but we really have no way to know what's actually going on there.
Yes, but of course you could do some black box testing, where you have your Windows and your proprietary software on top of it, and you tell your Windows: okay, NTP is disabled, and now your date is set to 2039, and just see how it goes.
you're not going to be able to do that, yeah.
Yeah, yeah, because you don't build it.
You don't even have the source to build it.
That's for other vendors to figure out,
because they make it proprietary, they get all the problems from that.
So you're, like, kind of one person doing this. If somebody else wanted to help out and find problems, maybe in software they make themselves or software they use, what's the easiest way someone can go about actually, you know, doing a test like this?
Yeah, you could test with your system clock set forward, or in a VM. That's even easier, because there's a KVM option where you say -rtc base= and give it a timestamp. Then the VM starts running at that time. Or you just do it interactively in the VM: set the time, and then you can play around with the stuff. That's how I do it for my build tests; they run in KVM anyway, to have some isolation. That would be one way. Or just review code: you can grep for time_t and for functions using time. There are not that many.
But it should hopefully become easier soon, when we get some compiler patches. There's work happening on GCC. We want to get warnings for the time_t to int conversion and for the int to time_t conversion. And when we have both of these, then you can compile your server source code and your client source code and get warnings for the actual issues, and maybe even declare these warnings as errors. So it's thrown in your face when you try to compile your program: okay, in this place you do this conversion, and you need to be very sure that it's actually what you meant and that it's safe for 2039.
I did see some discussion on, I don't remember which one it was, whether it was time_t to int or int to time_t, where one of the maintainers was saying it would make more sense to have that warning closer to the language front end rather than where it currently is.
Yeah, let's call it an implementation detail, because these compilers have internal structures: they have a front end that parses the actual language constructs, and then an intermediate language. And from that intermediate language they construct the machine code output, and optimize, and whatever. They discuss where in the stack, within the compiler, they put the warnings. But in the end, I want a warning, and wherever it ends up is fine.
Hopefully that can happen.
Because, you know, at least having a warning there can give you some indication that you're doing something wrong. You might have assumed, especially in a large code base that you may not be the only developer on, that there are none of these mistakes. You may personally have not made the mistake, but if you're dealing with a, you know, million-line code base that has 30 other people working on it, it's very easy for somebody else to have made it, with nobody having reviewed that part of the code yet.
Yeah, there are different review cultures in different parts of the software world, let's say. For reproducible builds, I've done a lot of patches, over 2,000. And in one instance, I noticed that I had sent a pull request with a typo that meant there was a syntax error when you tried to compile it, and they just merged it. Only half a year later I noticed that it didn't compile, and then I sent a fix-up for that thing, and they also merged it. There can be very little review and looking at the things that go in. Other projects are better: they actually discuss things, go back and forth, improve things and make things really nice, but that's not the standard everywhere.
So we kind of touched on this earlier, but there is the possibility that if this is not taken seriously, this could be a much larger problem than we saw with Y2K. And especially, like, we can talk about how many computers there are now; that's not slowing down. There are parts of the world that didn't really have internet connections until very recently that are now coming online, places like India, that have massive populations. And over the next 10 years there's going to be a massive boom of development there, more computers there, and that's going to bring a lot more people that are going to be affected by this. There's only going to be more computers. Then you have all of the data center buildouts that are happening. By 2038, who knows how many computers there will be, let alone how much more connectivity there is between these devices as well.
If you say computers, you also need to consider smartphones, which are computers by all means, because there's a Linux kernel running on them, and some userspace. So there are billions of these devices. And when you think about the physical world: our city got new streetcars, or trams. And the previous ones that still go out in the busy hours, they were from the '70s. You can see how long this stuff keeps around. And yes, they got some electronic upgrades over time. But these days they have screens showing the next station and when they will be there, so they do have a clock. And I'm not sure if anyone ever tested whether they will keep working after 2038, because they will definitely still be going after that date, and you can expect that of these things. So there's long-lived hardware out there, and that will have trouble. And these small devices get replaced maybe slightly more often, but some stuff just keeps running, and you have it somewhere. Not you, but institutions. Institutions have this kind of thing, where it gets set up by someone, it gets put there, and it runs and runs, and nobody remembers how it was done, but it keeps working and doing its stuff.
Right. Somebody has a Debian server sitting in a closet somewhere. They know it's powering something. They don't know who put it there, but it's very important.
Yeah, and these days it's not Debian servers but Raspberry Pis, say as VPN servers. And they keep running. It's this kind of thing: you just have them internally, not exposed to the internet, so it's not even that big of a threat security-wise. I can imagine things keep running for another 12 years and get forgotten about. And at some point, yeah, they break, stuff like that.
But the closer we get to this, the more likely it is that there is going to be a system still in deployment that has one of these currently unpatched problems, and that will still be in deployment then, when it becomes an issue.
Yeah, last week I talked to someone named Troy, and he set up a more official initiative with the ITU, I think, the International Telecommunication Union, and another organization called FIRST, which cares more about security issues. But if you have these SSL certificates that are supposed to be expired, and then suddenly they're not expired, that can be a security issue as well. And safety can definitely be involved too. So they want to get more people involved in these things, and ensure that more large organizations get involved, like governments and enterprises. If you have an enterprise and you rely on IT somewhere, as every enterprise does, then you want to ensure that it just keeps working.
And for that you want to go to your vendors and say: hey, vendor, do you know if your software is safe for running past 2038? And then the vendor has to go to its own vendors and sub-vendors and figure out if they are sure about what's in there, or maybe they also have source code, because it's an open source product. But there can be complicated supply chains. If you've ever looked at Android, there are these chip makers and board makers and different vendors for different parts of the system, and there can be complicated supply chains for some things.
One of the things that you included in the list of topics
was whether or not governments, enterprises,
should mandate that vendors,
certify 2038 compliance. I'm assuming that you're possibly in some sort of favor of that?
Yeah, I think they need to at some point. Because would you buy a device where you don't know if it keeps working until next year?
So it's like on food, you have your best-before date.
So you know, okay, I can still use it tomorrow or next year,
and you want to know how long your device is good for use. And for some, there's
these Pixel phones where they say, okay, we'll supply updates for this many years, and that's good to know,
because others don't supply this information. They just give you: oh, here's stuff, it's already
outdated when they give it to you, and they never update it.
They had that with stuff running up to the 2000s. I don't know about Europe, but I know the US government was
throwing money at that problem, to try to fix the year 2000 problem, to basically make sure it didn't
become something that was that big of a deal, right? I assume things maybe happened in Europe too. I actually
have no idea about that.
Not yet... well, with the year 2000 problem, maybe. I don't know about the current problems.
Okay, I'm not sure.
So I was in university back then,
so I only saw it on the side when people made a big deal about it.
Your video froze there.
Oh, yeah, it's done that a couple of times.
I could still hear you though.
Yeah, works fine.
Yeah.
So I think, I expect, there will be mandates
to ensure that the software they procure will be good to use for the next years.
And especially, you can't always rely on vendors to stay around. Except if this other initiative takes hold, where you want open source for what you buy.
That's called Public Money, Public Code.
You've heard of this initiative?
It vaguely rings a bell.
That's from the Free Software Foundation Europe, I think.
They say: oh, these governments, they spend so much money on software,
public money
they spend on software.
So why not say, if something gets built for us,
then please give the source code to us.
So we can have it if the vendor goes down.
Or if the vendor says, oh, that's a hard problem,
we can't fix that,
then they can take the source code and have someone else
have a try at fixing this thing and throw money at it.
And that's a really good initiative.
And that can also help with issues with the year 2038.
Because then at least you have the sources, you can recompile it.
And, yeah, even if it's just a binary and it runs on your libc,
then maybe you can just patch your libc
and be fine.
So some things might be fixable even without the source,
but it's nicer to have the source for it.
I think the first step to getting
any sort of change happening here is to at least
get people knowing about the
problem and, you know, talking
about this issue. This is why
I'm kind of glad that like every year you put out
that post, hey, here are the problems
that exist, here are the ones that have been dealt with
and why I've been covering it
for the past couple of years as well, because I do
think that this is a
problem that is really important.
And maybe there's like some developers here and there that are fixing problems as they spot them.
But it's not something that there is this sort of wide initiative to fix, wide initiative to discuss.
It's kind of just there's people here and there doing things.
And that's basically it.
Yeah, but it's a good first step.
Because people are more and more aware that this problem exists.
and that they need to not throw their time_t into an int and send it around.
So yeah, every bit helps.
So maybe that's already enough to get half of the issues fixed.
So when it comes to spending a lot of money, then we only have to deal with half the number of issues.
So it helps.
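The failure he's describing, a 64-bit time_t shoved into a plain 32-bit int, can be sketched numerically. This is a minimal Python model added for illustration; the `to_int32` helper simply simulates C's signed 32-bit wraparound:

```python
import datetime

UTC = datetime.timezone.utc

def to_int32(seconds: int) -> int:
    """Simulate storing a Unix timestamp in a signed 32-bit int,
    the way a C program with a 32-bit time_t (or a careless cast) would."""
    return (seconds + 2**31) % 2**32 - 2**31

# 2038-01-19 03:14:07 UTC is the last second that still fits
last_ok = int(datetime.datetime(2038, 1, 19, 3, 14, 7, tzinfo=UTC).timestamp())
assert last_ok == 2**31 - 1

# one second later, the stored value wraps to December 1901
wrapped = to_int32(last_ok + 1)
print(datetime.datetime.fromtimestamp(wrapped, tz=UTC))
# → 1901-12-13 20:45:52+00:00
```

Anything that round-trips its timestamps through such a field silently jumps back 136 years at that moment.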
I'm a bit nervous...
I was going to say, have you considered taking the topic to FOSDEM, to do a talk there or something like that?
Troy did. He ran a BoF at FOSDEM, last Saturday, I think. I didn't hear back from him.
But I assume he was there, and people attended, and they discussed all these things that we discussed right now.
I discussed it before with him, and I collected all these reasons why 2038 will not be a big problem.
And one reason that appeared in some of these threads in the past was: oh,
2038, maybe there will be a nuclear apocalypse until then, and we don't have to care about it.
Yeah, I saw some of this,
so many of these comments, like, oh, you know, very well, there might be. Or, you know,
we can talk climate change.
Like, you can talk about any of these world-destroying problems.
Like, sure, fine, okay.
Yes, there's other important problems.
That's true, but we can't rely on the nuclear apocalypse to solve this.
Right, right.
It's not a good solution.
A more recent one I've heard is: oh, by then, we're going to have AI
that's going to identify all of these problems,
and it's just going to magically fix them.
And sure, maybe it will.
Yeah, maybe, if you have the source code
and can recompile it.
And then, yeah, it's like a good linter
that can review all your source code.
So maybe it actually can help, but, yeah, not sure.
It may miss things,
it may have false positives, false negatives.
And one other
funny response was: yeah, I will be retired by then.
Fair enough.
And do you expect to have a working society when you are retired?
Do you have kids, do you have friends?
I have no doubt.
That was a big part of Y2K as well.
Oh, I wrote it,
we only store dates with two digits,
I'm going to be retired by then,
I don't actually care.
Yeah.
But it's not good logic.
It's not a good argument, no.
We need to go and fix these things if they are broken.
Maybe it's not that much, actually.
So I counted, and it was around 60 things that I reported so far.
So maybe it's not as broken.
At least for the things I found, I think fixes were already submitted,
but other things we don't know.
There are unknowns.
Well, let alone
software deployments that were written years ago.
You know, we see all of this,
there's COBOL jobs that are out there, right?
Like, there's these really old code bases,
and who knows what those code bases are doing at this point?
This sort of goes more into the proprietary stuff.
There's really old deployments out there.
There's software out there that hasn't really been updated in over 20 years.
I know someone who works in a call center that uses a DOS VM because they're still using the call center software from when everyone was on DOS.
Like this is a thing that exists right now.
Yeah, if it works, it works,
so I keep using it.
Because why spend effort on it?
Even if you deploy a modern stack, you know, you can't be sure either
that it keeps working.
So maybe you're just better off, I don't know.
But our own company, SUSE, is in the business of enterprise Linux,
which means we have this base version, and some aspects of the base version remain constant
over the lifecycle of the whole version, which can be 10 years, 12 years, 15 years.
And the current release is SLES 16,
and the previous release SLES 15.
You know, there's extended support and extended extended support.
And the longest, longest version of that ends in 2037,
for some reason. I don't know why.
What is the longest support you can get with a SUSE version,
if you pay as much money as SUSE wants for it?
There's this long-term extended support, I don't know the exact name...
Whatever it is.
there's a website for it
and the pricing
depends on what's the use case
and how much of the software you actually need
and that's around 15 years
or something.
So it varies from version
to version. Usually the newer versions get
longer support because customers demand
it for some workloads
but that also means that there are
versions of SUSE now that people are still going to be running
post 2038.
Yes, the 16 version will be supported and running beyond that date.
So that's also one reason to go in and fix these things in the software we ship.
Because it's our stuff; even if we don't make all of it, we ship it.
And customers rely on us to ensure that it keeps working.
And if it doesn't, they call us and hold us liable.
That's why there's money to be made in prevention here.
That's good, because that, you know, reduces the number of calls.
So there's always this footnote:
when we release a new service pack, then the new service pack will be supported, and it's sort of compatible with the previous service pack.
And the old service pack will then go out of support before 2038.
So we still have time
to actually fix things.
But yeah, maybe we can't do major version upgrades
of some parts, because it would upset customers too much,
because they compiled their stuff against our stuff,
and it needs to keep this binary compatibility,
the ABI, application binary interface.
That's also a concern with, for example, Intel 32-bit support.
People just keep the old 32-bit libraries around,
because the old programs were compiled
against this binary interface.
So if they call the time functions,
they expect the time function to return a 32-bit integer.
So you somehow need to keep that interface as is.
You can't just change it underneath the programs
without them noticing.
So it's tricky.
It's a tricky thing.
I think the Debian people had a big discussion about it.
They said: yeah, for Arm, we just change it, we switch it around, and we say on our old
32-bit platforms, that Armv7, which only has 32-bit time_t, we just change it, and now it's 64-bit, and we recompile everything.
But for the old Intel, there's so much legacy software around that they just leave it.
And all the new CPUs can run 64-bit code anyway.
So even if you still run 32-bit code on these now,
you don't want to do that beyond 2038, hopefully.
Or maybe you need to find another solution until then.
There's this idea to redefine it as unsigned in the glibc.
So for the software that sees this time_t and hopefully doesn't do too much processing with it:
maybe it just takes it, stores it, retrieves it and uses it,
and then uses the libc function to say, here's my time_t, please give me the date and time.
Then the libc can return a year of 2039, because the libc interprets it as unsigned.
That could be a way out there.
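That unsigned reinterpretation can be illustrated with a quick sketch (mine, not glibc's actual implementation): the same 32-bit pattern that a signed time_t reads as 1901 becomes 2038 again when read as unsigned, which buys time until 2106:

```python
import datetime

UTC = datetime.timezone.utc

def as_unsigned32(v: int) -> int:
    """Reinterpret a 32-bit bit pattern as unsigned, as the proposed libc hack would."""
    return v % 2**32

# the bit pattern a signed 32-bit time_t reads as 1901-12-13 ...
signed_wrap = -2**31
# ... is 2038-01-19 03:14:08 again when the libc treats it as unsigned
print(datetime.datetime.fromtimestamp(as_unsigned32(signed_wrap), tz=UTC))
# → 2038-01-19 03:14:08+00:00

# and the unsigned range itself only runs out in 2106
print(datetime.datetime.fromtimestamp(2**32 - 1, tz=UTC))
# → 2106-02-07 06:28:15+00:00
```

The catch, as the guest notes, is that this only works for programs that treat the value opaquely: anything that compares or subtracts timestamps as signed ints still misbehaves.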
So what are the
like main kinds of problems
you run into when you run
into software that has problems? Is it just
they're not building or are the
other things you tend to run into?
Yeah, mostly it's not building.
There's one case
I remember where
the reproducible builds really
showed an issue, and showed:
okay, if I compile
this software after that date,
then suddenly I get different binaries for some reason.
And that was software compiled with the bjam tool from the Boost collection.
But it seems to have been fixed in newer versions.
So it's not that big an issue right now.
But mostly, I build it, I run the tests, and tests fail.
That's where I found most of these issues,
but some software does not have good tests.
For example, the MariaDB issues I didn't find by building the MariaDB package,
but by building the Perl MySQL bindings package, which is bindings for the server.
So it runs a server at build time to do the tests on top.
And that threw the error message and said:
oh, you can't run this MariaDB server after 2038.
So they actually put in a helpful message about it.
They knew it's limited.
But, yeah.
Wait, wait, sorry.
Sorry, they had an error message for it, but the problem was still there?
You know what?
I love that.
Okay.
So I will at least say they identified where the problem was, but they're like,
that's someone else's problem.
We're just going to document it.
Yeah, sort of.
But that's not the only case.
There was this other case of
CMake, where a CMake test failed, but it wasn't in CMake itself.
CMake uses libarchive, which handles tar files
and ZIP files, and libarchive had a place where it said:
okay, I have this timestamp here for my files in the archive,
and if it's greater than 2038, then I throw an error,
because it's not supported.
So people know there's limitations,
and they test for these limitations
and throw some sort of error, and that's one failure mode.
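That kind of explicit limit check, refusing what a 32-bit consumer can't represent, looks roughly like this. This is my own sketch of the pattern, not libarchive's actual code or API:

```python
INT32_MAX = 2**31 - 1

def check_mtime_fits(mtime: int) -> None:
    """Hypothetical guard in the style the guest describes:
    error out on timestamps that would overflow a signed 32-bit field."""
    if mtime > INT32_MAX:
        raise ValueError("file timestamp past 2038 not supported")

check_mtime_fits(1700000000)   # fine: well before 2038
# check_mtime_fits(2**31)      # would raise: first second past the limit
```

Failing loudly like this is a deliberate choice: better an error at pack time than a silently wrong date at extract time.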
I've got the Wikipedia page for the issue up
and there is a fun one in here where
AOLserver knew they had the 2038 problem,
so instead of actually fixing the problem,
they just set the maximum time to be
the 32-bit integer maximum minus a billion seconds,
so it became a problem in 2006 instead.
So I guess that's not a good solution,
but they had time to fix it.
Yeah, good enough for now.
And, let's say, that still left a lot of time
to actually fix things, in another round of format changes,
protocol changes and these things.
So, yeah, even 10 years is still some time to get things fixed.
It's not like we need to rush and be busy doing nothing else now.
But it's not plenty of time either.
At what point do you think this is like a critical problem
where this should be first priority to fix?
I think things will start failing already in 2037,
when you have these cookies that expire one year from now,
or SSL certificates that expire X years from now.
Browsers limited those for security reasons,
but there is really stuff that fails before,
likely, and we can't know how early.
But if you ship software, why not fix it now?
Because you have to fix it at some point anyway,
and if you have some spare time,
go in and start fixing.
It could help.
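The 2037 failure mode he mentions, "now plus a one-year lifetime" crossing the 2038 boundary, can be sketched like this. The helper is a hypothetical illustration, not real certificate or cookie code:

```python
import datetime

INT32_MAX = 2**31 - 1
ONE_YEAR = 365 * 24 * 3600
UTC = datetime.timezone.utc

def expiry_fits_int32(now: int, lifetime: int) -> bool:
    """Can 'now + lifetime' still be stored in a signed 32-bit time_t?"""
    return now + lifetime <= INT32_MAX

# in 2026, a one-year expiry is fine
now_2026 = int(datetime.datetime(2026, 3, 1, tzinfo=UTC).timestamp())
assert expiry_fits_int32(now_2026, ONE_YEAR)

# but by March 2037, issuing a one-year cookie or certificate already overflows
now_2037 = int(datetime.datetime(2037, 3, 1, tzinfo=UTC).timestamp())
assert not expiry_fits_int32(now_2037, ONE_YEAR)
```

That is why software computing future dates breaks well before the epoch limit itself arrives.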
In most cases,
is it, like, not that difficult of a problem that you've run into?
It's usually just handling numbers incorrectly,
like what you sort of mentioned earlier.
But is it usually a pretty simple fix?
Most of the time, yes.
When there's file formats involved,
we want to keep backward compatibility,
and it can be more tricky, like the MariaDB case, where I think the patch was bigger.
But other cases, it's like this atoi becomes atol, in multiple places, and then you run the test again and find another place and iterate, and at some point it's fine.
So if you really have good tests, that helps.
And that's a problem: writing tests, a lot of people don't do that.
Yeah.
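The atoi-to-atol class of fix he describes can be modeled in Python. Strictly, overflowing atoi is undefined behavior in C, but on common implementations the value wraps, which is what the 32-bit helper below simulates; both helpers are illustrative, not real C functions:

```python
def parse_as_int32(s: str) -> int:
    """Model atoi with a 32-bit int: a post-2038 epoch string wraps negative
    (on common C implementations; strictly it's undefined behavior)."""
    return (int(s) + 2**31) % 2**32 - 2**31

def parse_as_int64(s: str) -> int:
    """Model atol/strtoll with a 64-bit long: the value survives intact."""
    return int(s)

stamp = "2147483648"  # first second past the signed 32-bit range
print(parse_as_int32(stamp))  # → -2147483648 (wrapped)
print(parse_as_int64(stamp))  # → 2147483648
```

The fix really is that mechanical in many places: widen the parse, rerun the tests, find the next spot.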
At some point I also want to add this to the openQA testing. For every Tumbleweed release,
for the openSUSE Tumbleweed distribution, we run a lot of tests, where we run the install in KVM and go through the installer and partitioning.
After the install, we run Firefox and LibreOffice,
and, on top, a lot of applications and tests.
And if we just change the date and have these workarounds
for expired SSL certificates,
then we could check that all the software still
runs at a different time,
and that could really unearth some problems
that we don't see right now.
Right, yeah, doing that at a distribution level,
for something like Tumbleweed which ships relatively up-to-date things,
that's going to... I have no doubt it's going to identify a bunch of things
which you probably haven't even remotely looked at yet.
Yeah, exactly.
So even if it says all fine, that's a good thing,
because then you know that you can still run Firefox and LibreOffice in 2039.
And, so, yeah, there was this famous quote,
I think it was Dijkstra,
who said: testing can only prove the presence of bugs, not the absence.
Yeah.
I don't know if you've been paying attention to the work
being done with uutils,
the Rust rewrite of the GNU coreutils,
but they're at, I think,
96 or 97% compliance
with the GNU tests.
The problem with that is all of the
undocumented bugs
that became features
that they also need to worry about.
Yeah, that can
maybe happen in some place,
but I'm not sure if someone wants to rely
on time overflow.
Sure, yeah.
I'm sure there's got to be,
I'm sure there's got to be some weird setup out there
that somebody is using where something relies on it.
It makes me think of an xkcd.
There's an xkcd on...
Spacebar heating?
Spacebar heating.
Yeah.
Exactly.
That one.
There's just so often a relevant xkcd.
And of course, the other one about supply chains, where you have this one unpaid developer holding up the whole stack of software on top.
And that is also sort of relevant here, because there's a lot of software with a lot of unpaid developers somewhere.
And yeah, at least we have their source code, and we can review the source code, and we can send them patches, and test these patches beforehand,
and have all this reasoning why it's a good change, and make it really easy to press
the merge button, and hopefully not have some bugs at our end there.
Yeah.
Yeah.
Like, I'm sure there's going to be weird issues that are identified further along that,
especially like, as you're saying,
with doing this at the distro level.
I think even just past
building right if the software
doesn't build that's a problem that you can
immediately identify
the issues are the
runtime issues that may not necessarily
be as easy to spot
that require
actually interacting
within a certain way
that's going to
I think take a bit longer to
properly identify
Yeah, I think so.
So my testing quality is really limited.
I found issues, and it's good to fix these issues, but it's by no means complete.
So I think really getting these warnings and fixing these warnings can get us a long way.
Because everything that really explicitly uses time_t will be fine.
And maybe there's places that handle timestamps that never touch a time_t;
they will not pop up in this warning, but I hope it's not too many.
Right, and most of the discussion here has been about C,
but this is a problem that can happen in other languages as well.
And, you know, people handle things in all manner of weird ways.
Yeah, Python, for example.
When you have your Python code, it creates a cache file,
and this cache file can have a 32-bit timestamp
in there, and that part was buggy for a while, and that blocked me from testing other things
past 2038. And they still have a 32-bit timestamp in there, but now it just rolls over and they don't care,
because they just match for identity, and for them it's fine. So they have a good spec, and the spec
says it needs to be handled in this way, and if everyone follows the spec, then everything is fine even
after 2106.
Very good,
Python people.
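For context on the .pyc detail: CPython's bytecode cache header stores the source file's mtime in four little-endian bytes and, in the default mode, only compares that field for equality, so a post-2038 rollover is harmless. A small sketch, where the helper is mine and models just that header field:

```python
import struct

def pyc_mtime_field(mtime: int) -> bytes:
    """Pack an mtime the way the .pyc header does: four little-endian
    bytes, so only the low 32 bits survive."""
    return struct.pack("<I", mtime & 0xFFFFFFFF)

# a post-2038 mtime and its wrapped twin produce the same header field,
# and since the field is only compared for equality, the cache still works
assert pyc_mtime_field(2**32 + 100) == pyc_mtime_field(100)
```

This is the "good spec" point: equality comparison makes truncation a non-issue, where an ordering comparison would not.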
Actually, a lot of these issues I found
were in tests for Python packages,
even if the actual bug was not in Python,
because the Python people have such good test coverage
of their Python bindings for C code
that it did find runtime issues in the C code.
And that's really nice.
That's really nice. So, good tests.
Are there any other areas that you sort of, or any other languages, any other areas that you tend to see these pop up in?
Not so much. So I think in Perl there were only one or two.
In Python there were a couple, but not always directly from Python.
And yes, C really is the
most used one.
I'm not sure about Rust and Go,
but I can't really remember seeing issues there.
I have no doubt people in the web space are doing weird things, though.
Web developers just do weird things.
NPM and Node.js and stuff.
Yeah.
And the problem is that stuff kind of creeps into the desktop,
with how much electron stuff there is as well.
Yeah.
Let's bundle a browser and send it out as an app.
At some point I counted in our openSUSE distribution how many Chromium instances we have in there,
because we have an Electron package and another Electron package, and Qt WebEngine for Qt 5 and for Qt 6.
And so I think there were at least six of them.
The, what is it, Chromium Embedded Framework, which is what OBS uses for its browser docks.
Yeah, so that's a couple.
So you have a lot of code duplication.
And if you fix an issue in one place, then you have to ensure that the fix makes it through the chain to all the other places where there's a copy of this code.
Right.
let alone the applications that bundle it, and then you've got that problem as well.
Yeah, vendored tarballs.
I can tell stories about vendored tarballs.
There's this bug, and I fixed it upstream in the original sources, and then we have vendored tarballs of the things that come with other Rust packages and Go packages.
And I think npm also does vendored tarballs, and they have multiple copies of the same code in different versions and throw it all into one.
Because, yes, they can do this thing with their package manager.
So why not do that?
Yeah, sure.
Yeah, fun things.
So let's assume there's a lot of old software out there, and it keeps getting bundled for a while.
And it might be not only one year behind, but two, and five.
And that can get dangerous at some point.
Right.
It sounds like you've had a lot of experience here.
You don't sound very excited to talk about this topic.
Oh, I'm very excited to get things fixed.
Because you can find these things.
And not only for the 2038 problem, but for other things.
I looked into a really old codebase with Lisp and Tcl,
"tickle", as some pronounce it.
And it's fun doing patches there,
because it's
understanding yet another language,
and it's so different, with all these
brackets and
a different mode of operation.
How long have you
been involved in
not just at SUSE, but just
in Linux stuff in general?
I got my first Linux
running in 1999.
You remember what that was?
SuSE Linux.
It was SuSE Linux before it was openSUSE.
And from there it's always been upgraded and upgraded.
Yeah, it's quite some time.
And my first open source stuff I published in 2001.
I did some kernel hacking to get an overlay file system;
there wasn't an official overlay file system yet.
And at some point it broke, because it was a really dirty hack.
Yeah, it sort of bit-rotted.
Bit-rotted, yeah.
There's so much bit rot.
And in December I looked at old code,
where we have these old games.
We have a whole collection of 700-something
game packages, where people
have these old Rocks'n'Diamonds and Pacman and Breakout and Tetris, and open source ports of these.
And these were developed for very old systems, with Xorg and SDL 1 and
old stuff. And at some point compilation broke, and I looked into it,
and I got a lot of those building and working again, and it's nice.
Bit rot is another one of those problems that
people kind of just ignore as well.
They think, oh, if I write something once,
then it just works forever. And in some places,
I think X11 gave people a lot of security there,
because Xorg has changed so little over the past 20 years
that you can run some ancient X11 application
and it's generally fine.
But that's not how most other things
work.
Yeah. Wayland is coming up, and some places are already dropping the Xorg
servers. So then you have to go with XWayland and have these compatibility layers, and some
things don't really work, with screen sharing or whatever. But hopefully it will all work out
in the end. What are you personally running on your system?
Not that much. There's this distribution called Slowroll, where I'm the maintainer, and it's a
Tumbleweed where you take the base Tumbleweed version and it keeps stable for a month, and then
it gets updated to the next month's Tumbleweed version. In between, there's tiny patches for security
issues and other bugs that hopefully don't break too much. And usually it works well.
I didn't realize you were the maintainer of Slowroll. I mean, when Slowroll was first announced,
I'd meant to reach out to someone from the project to talk about Slowroll.
So, you know what?
We talked enough about 2038 for a bit.
Let's segue to Slowroll, because you happen to be here anyway.
Yeah.
How did that project get started?
It got started because at some point
there was uncertainty about SLES 16 and Leap 16.
Leap is using SLES 16 binaries and sources
as a base, like the kernel, bash, glibc, that's all inherited, and then we add stuff on top. Like, there's no KDE in SLES, so we add KDE on top from Tumbleweed. And there was uncertainty about SLES 16: when will it be, how much will it contain, will it have a desktop? And this uncertainty meant that there were two competing concepts, and one concept was this Slowroll,
where we said: okay, we have this working Tumbleweed, and it's really stable with this openQA, so
yeah, why not get it going a bit slower? And in the beginning I started with cycles of eight weeks,
maybe, that's two months or so, where I kept the base, and it ran and it ran. But Tumbleweed is moving all the time,
and when there's new major versions, I don't want to take these major versions yet. And then you get
minor patches on these major versions, and more minor patches on the major versions, and I
can't take them, because there's already this major version difference.
So the amount of patches I can automatically pick from Tumbleweed
gets less and less over time.
So at some point I have to sync up and get this major version bump, as I call it, to get
all these new major versions and be in sync with Tumbleweed again.
And currently, I do it in a way where I take this Tumbleweed snapshot.
Which reminds me:
it's the start of the month, so I need to take another snapshot.
And so I do that to get in sync.
And then I have some more days to let Tumbleweed accumulate
small patches.
And then I say, okay, now I publish it first.
So I'm always some days behind,
behind Tumbleweed, intentionally.
It also gives me some time to get the install DVD built.
So, yeah.
So, like, how has that project been going?
Whenever I hear about openSUSE, most of the time I hear people talking about Tumbleweed.
Maybe that's just because I use Arch and I have, like, an audience of people that, you know, like rolling stuff.
But what does adoption of Slowroll look like?
How have people been liking it?
Yeah, how's it generally going?
Yeah, it's, let's say, mixed.
So there's some people who say:
I prefer Tumbleweed, because then if a bug gets fixed,
it gets fixed pretty quickly.
And for Slowroll, it depends.
Right.
So if it's really a minor patch and there was no major version update before,
then I get the minor patch pretty quickly.
And for some bugs,
you might remember the XZ backdoor event, that never went into Slowroll, because it happened in the middle of the month, and Slowroll still had the old version. Tumbleweed did already have the new version for two weeks, but it wasn't yet time for another version bump.
So I still had the old version and was never affected.
And there were a handful of other major issues.
And you know, we have this automated testing for Tumbleweed that runs in KVM,
so there's things we can't test. Like, Intel Wi-Fi broke,
AMD graphics broke at some point, and then it's good if Tumbleweed
hits these issues first, gets a fix in, and then Slowroll users
maybe never experience this breakage. So it can have advantages. But yeah,
sometimes there's also bugs that are really hard to fix, where it's really hard to understand
where it comes from, how do I fix it. So the easiest would be to just update everything from
Tumbleweed, because we know it's working there. So in the early times, I sometimes did that: I just did
another version bump in the middle of the month, because it was the easiest way out of some mess I created.
But in the last two years I didn't need that. So at least it was safe enough. But yeah, sometimes
there's this issue with dependencies. So I have these automated scripts that look at version numbers and at
the changelog saying, oh, there's a CVE fix, and I say, okay, I want that immediately,
that thing fixes the bug, and I let it rest for a day or three and then I pick it over.
And so people need to have good changelogs.
Sometimes CVEs are only published later and the fixes are already in there, and then it doesn't really match up with the reality of the code.
Or sometimes there was a dependency thing, like people
using a library even if it's marked as deprecated.
There was this dependency on Ruby,
and it doesn't always declare this dependency correctly with its version,
and then you have a version mismatch.
Tumbleweed already had the newer Ruby,
and I still had the older Ruby, and I pick the new package version and it says:
I need Ruby. And I say: okay, I have Ruby.
And then it says: I don't work with this Ruby version of yours.
So it happens.
And then I had to downgrade it and block it for this cycle.
Yeah, but it was only broken for a day, and people reported it.
And that was a pretty easy fix.
Yeah, and there's this idea that it could become
an official distribution.
So we have Leap, we have Tumbleweed,
and I think MicroOS is also pretty official,
but I'm not sure how many
users it sees.
What is it considered right now?
I would say beta.
So it works.
Sometimes...
You just froze.
No?
I actually did disconnect that time.
Okay.
I heard you.
Basically, you'd just started talking about Slowroll.
That's where it cut out.
Like, what Slowroll is considered right now.
You said it was a beta version,
and then it basically cut out.
Yeah.
Yeah, so I say it's beta.
So in the automation, we don't have enough tests, let's say.
So I check that packages build for the right versions, and then they go out to users.
So it's not very thorough, but it happens to work, because Tumbleweed has a lot of testing and I take these packages all from Tumbleweed.
The only issues that I might face are integration issues, where these versions don't match up and misalign.
Yeah, and it would be good to get more test coverage for that,
but I didn't find the time and energy to do that.
For the version bumps, we got some testing from a colleague of mine who said, okay, I am doing that.
And then I throw in the new DVD for the month, and it runs a test and says:
it gets a normal KDE, right, and it looks good. And sometimes we identified issues, and then I could
already push the fixes for these issues before users even saw the new DVD. So that already helps.
But still, maybe I'm just too much of a perfectionist; I could say, here, it's done, go use it.
Sometimes I look at the statistics on our download servers, and they say there
are around 5,000 users.
Tumbleweed is much larger, in the 200,000s maybe.
And then you need to consider that we have mirror servers,
and everyone who uses mirror servers to download all their packages
doesn't appear in our statistics.
And then we have a CDN,
and everyone who goes to the CDN and hits a hot cache
gets a reply from the hot cache
and never goes to our download servers.
That's another uncounted group, no statistics.
So, yeah.
There's a chance these numbers are actually a factor too small or something.
Yeah.
The sort of distinction between having Tumbleweed and Leap
always seemed kind of weird to me.
It really did feel like there needed to be something in the middle.
And, like, I don't know what the numbers are like,
I assume you have better numbers here,
but I never really heard anyone talking about Leap.
Like, again, whenever I heard about openSUSE, it was only ever discussions regarding Tumbleweed.
So having something which is somewhere in the middle, I think, does make sense.
Yeah.
The part that's tricky is Leap.
Leap is a complicated matter, let's say.
Because the Leap 15 series became very old.
There's 15.6, still under support for some months,
and that's based on Leap 15.0 as a base version.
And that was like seven years ago.
And seven years on, it still has the same bash base version,
still the same whatever.
A lot of basic tools that are inherited from our enterprise distribution
are still on the same base version, with patches on top
to account for problems of one kind or another.
So that becomes a problem.
We had that with Ruby and Python:
there was Python 3.6, and that's unsupported upstream.
So we added a Python 3.11 at some point,
but maybe only the interpreter and not the thousand modules,
so that wasn't very nice either.
So at some point it really becomes a problem that the base is so old.
That's the problem with Leap.
Now we have Leap 16, and that's very new,
because 16.0 is very recent.
So currently it doesn't have the problem.
But is it going to have that problem as things progress?
Like, is it going to get to the point where it has a seven-year-old base?
Maybe it will have a seven-year-old code base as a
base at some point, and it could face similar problems. But there's a chance: we are managing
the sources in Git now, for the packaging parts, for the spec files for RPM, and that can
maybe help make it easier to manage things, and maybe do more upgrades. And I'm not sure
about the business side and the enterprise users, because
expectations in the enterprise can be complicated where they say, okay, I have the systems and I want to set it up now and I want to keep it running unchanged for five years, for 10 years.
And maybe there is this one component I care about, this specific stack, this whatever stack.
And that stack needs to be on the most modern version, and it needs to work on this very old base.
And that's hard work, getting these things to
match up, and that's why we have hundreds of engineers doing some of this
hard work.
Well, that's one of those areas where containers generally make a lot of sense.
Partially, but if you have a container, do you want your container base to be five years old, ten years old?
Maybe not, because then you have security issues and all that.
Does it really help with something?
Maybe it helps that you can decouple your application from the host operating system, and you can mix and match these things.
So for the container, there's only the host kernel that really matters.
And what we do is usually get a major version update for the kernel every two years in the enterprise space.
And if that even aligns with the upstream LTS kernel,
that's even better.
Then we can share the maintenance burden with others.
It really helps to align that.
Yeah, and for Slowroll, I think one,
let's say, problem is that it really needs to stay
very close to Tumbleweed.
So at some point you can say, what's the difference?
Why use Slowroll over Tumbleweed, when Tumbleweed is so stable, and there's so little difference,
and it's just one month behind?
It depends.
So maybe it's not that much of an advantage.
Maybe, yeah, there's this aspect of updates.
So if you only get minor patch updates, you can be more certain that it doesn't break your workflows this month.
And you can say, okay, you have your stuff on top,
like your custom scripts, and these custom scripts
expect behavior from stuff underneath,
so that only breaks once a month at most.
That helps.
And some people have limited bandwidth,
and they can say,
within the month, okay, they get the most important fixes,
and that's less bandwidth than a Tumbleweed install.
It's 30% of the updates.
I think you made a fairly good argument before
that problems tend to
get identified on Tumbleweed
before they make their way to Slowroll,
like with XZ — that never made its way on there.
So,
in a lot of cases,
maybe it's not even that bad,
maybe it's just some sort of
GPU driver regression,
something like that.
Things like that can be spotted
on the faster-moving system
before they make their way down.
Like, you know,
I run Arch, right?
A lot of problems get identified on Arch
before they make their way
to something like Fedora or Ubuntu
or something like that.
And there is some value in,
not wanting to run something that new
and letting other people
that are willing to do that
sort of deal with those problems first.
Yeah, that helps.
But for that, of course,
you need users on the leading edge distribution.
There's people that are crazy enough.
So, as you said,
there's 200,000 people at least on Tumbleweed.
So clearly there's someone willing to do it.
Yeah, right now,
but imagine Slowroll becoming so
good and so useful
that everyone moves from
Tumbleweed to Slowroll.
That couldn't work, so there always needs to be a balance.
Right, right.
Slowroll doesn't work without Tumbleweed,
because it doesn't make any sense anymore
if there's no Tumbleweed users who report issues
and get them fixed.
I think you're always going to have people
that enjoy living on the edge.
They enjoy, like, things —
look, right,
there's a lot of people that aren't
using Linux for production, they're using it purely as a hobby. They like to play around with
things, they like to break things, they like to use things that they know can go a little bit
wrong, right? There's people that use Gentoo. I don't have the time to do that, but there's people
that do. And there's always going to be at least some group of people
that are willing to, like, be on something they know can be a little bit less stable.
Tumbleweed seems to have a pretty good reputation just because they know that they
can get the latest thing, and they don't really like waiting.
Yeah, Tumbleweed is really stable.
So if you're unhappy with your Arch, I invite you to try it.
I have — I have Fedora people bugging me to use Fedora all of the time.
I don't need someone from openSUSE telling me to use openSUSE.
If you're fine with Arch, then keep using Arch.
You're not forced to join our plushy geckos.
Look, we'll see what I do in the future.
I've just been running the same system for seven years now, probably.
So I just — it hasn't broken entirely yet, so I haven't done anything with it.
So did it break, but you were able to recover it?
I had a faulty drive at one.
point, that's the only problem I had.
Yeah, that's nothing Arch or anything else can fix, so...
Yeah.
Yeah, so it happens when you have a lot of power outages and you're running on ext4.
No, ext4 is fine with power outages.
Usually. But if you have too many of them, things go wrong eventually.
Really? Okay.
Yeah, I might have some systems where I power them off.
hard sometimes.
Usually.
Usually it's fine.
Usually it's fine.
It has this journal thing, and that is supposed to keep things good.
Yeah.
Yeah, so making distributions is really fun.
So.
It sounds like you do enjoy it.
It sounds like this is something that, like, you're really passionate about.
Yeah, it's useful.
So even if there were no users, I would
do it for my own systems, because it helps knowing what it runs, and fixing things when they break,
and understanding the components and interactions more deeply.
So when did you actually find yourself working at SUSE?
Like how did that happen?
That was in 2010.
I would say some people in SUSE got aware of my work on openQA, because I started to test
the thing before Tumbleweed — it was called Factory, and Factory just got the latest stuff thrown in,
and it broke so often.
I joined the testing team, and in the testing team we split our tasks, and I said, okay, I want to test the installer. And testing the installer meant downloading
a 4-gigabyte ISO over a line that back then was at most 50 megabit, and with at most 50 megabit, it takes at least half an hour or something.
I wish I had 50 megabit in 2010.
I think I was running something like that.
Yeah, Australia didn't have good internet until very recently.
Yeah, exactly.
So imagine you have two megabit, and then you start to download your ISO, and when you're done,
you find that your rolling-release distribution released an ISO that doesn't even install.
Yeah, fun.
So you can't even test the installer.
Probably.
So then I...
And then I started this automated testing of operating systems: put the ISOs on a server that has a gigabit connection, and then it could run these tests at night with a scheduler.
So in the morning I looked at the videos that it produced and saw, oh, that one worked, the next one didn't.
So I could see, okay, here was a breakage exactly on that date,
and that helped in bisecting problems.
So you can see only these five packages got updated
and four of them are harmless.
So you can see this one package had a change log of,
we changed exactly the thing that broke.
And that helps a lot in pinpointing packages.
And later on, when others took over this openQA from me,
they added pre-integration testing.
So before things get merged into Factory,
they run a staging project, and in a staging project
there's only a small set of packages that somehow
belong together, and these get built as an ISO
and run through these tests, and when the tests show green,
then they can get merged.
And after a lot of stagings get merged,
then there's another big round of testing on the big ISO again.
That covers more things and might show more issues.
So that really
helps in ensuring quality.
Modulo the Dijkstra quote from earlier about only being able to prove the presence of bugs.
But still, we can show that certain things work a certain way we expect.
And whenever there's a test showing something didn't work as expected,
it might be that actually the test needs updating, because there was a Firefox version update
and now it's supposed to behave differently,
but sometimes you find actual bugs in the software.
And we have found a lot of bugs in the software,
oftentimes before we even integrated it, or before we shipped it to users.
So then, in the latter case,
these got integrated and then we saw, oh, it's broken now,
so we don't ship it, and then
Tumbleweed doesn't get updated for a day or two, or maybe five.
I think the longest run was a week or two.
That's two weeks with no updates.
That's really, really long for Tumbleweed.
Most of the time we have like five snapshots a week.
That's a very good number.
And then it keeps rolling, and Slowroll depends on Tumbleweed
rolling, because I want to keep some distance from Tumbleweed.
And that's really hard if Tumbleweed has these big gaps already,
and then I can't have these small incremental updates.
So how does the Tumbleweed update model actually work?
Well, it's rolling, but is it just...
You said there were snapshots there, so it's not just bringing in packages as they are ready?
Or how does this work?
I'm not very involved.
I know the two people who do the merging.
And I think they do merging rounds.
Like you have all these stagings,
and the stagings have their tests run,
and the tests show green.
And that means it's ready to be merged.
And then they have multiple of these
and they say, OK, merge this collection,
merge this collection, and this other.
And that fifth one, that's not yet ready,
that waits for later until it's ready.
And I think they do batches that get integrated into this project called openSUSE:Factory in the Open Build Service.
And that gets built, ISOs fall out, and these get thrown into the automated testing.
And there's one big ISO once a day.
So when that shows up
as not working, then it waits another day until there can be a new good ISO.
So how far of a gap do you try to keep with Slowroll?
Some days — so, yeah, it really depends. For CVE updates, I usually get updates on the same day.
Sometimes even an hour before Tumbleweed, because for Tumbleweed, we have a
mechanism where everything is ready.
It's on our staging server,
the mirrors get synced, but it takes some time for
mirrors to sync.
So we delay an hour or two, and for really big updates, some more hours.
And for Slowroll, I don't have this delay,
so I pull the update, because I know it passed testing.
I see in the version that it has a CVE fix, I pull it over,
I build it for the packages I have in Slowroll,
I publish it, and it's done.
It can go out an hour before.
That's fair enough.
Yeah, that's the very fast path for CVEs;
for other bug fixes, minor bug fixes,
that's one day, three days.
I'm not sure what the script says now,
but it seems to work.
And then the really big versions,
they wait for the monthly update.
For the monthly thing,
we really get everything.
Right.
Yeah, because sometimes there are these big changes,
like when you have a new glibc and you compile that and you want
everything to use that new thing, you have to rebuild everything, and that's not something I can do for Slowroll.
So that's a known limitation.
And at one point that has bitten me a bit in the behind, when I tried to update the KDE stack,
because there was some Qt update needed, and then I had to update the whole Qt stack, and there's a lot of Qt applications,
and they have these private symbols that are —
I'm not sure if they're even supposed to be used, but some programs use these private symbols,
and they're very specific to a specific version of Qt, and then you can't run this old binary with a new Qt, or a new binary with the old Qt.
So you really have to match versions, and that was a mess, and I was happy when I could do the version bump and have everything really in sync again.
I feel like we talked about all the 2038 stuff like 40 minutes ago. So, uh,
is there anything else you wanted to really add to that, or anything else you wanted to add about Slowroll?
There are so many fun things.
But yeah, it's in the technical details.
I'm not sure how technical you are.
If you want to get technical or anything, yeah, go right ahead.
Not sure.
I'm a bit exhausted.
That's fair. That's fair.
Most people aren't used to doing a long-form discussion.
So I guess if there's nothing else you really want to touch on,
we can start to wrap this up.
Yeah, that's fine.
Okay.
I guess, do you have, like, a blogger or anything?
Or do you just want to point people over to openSUSE?
I don't really have a blog.
There's just news.opensuse.org,
where we published one article about the 2038 thing, but usually it's about other things.
Sure, sure. Can we expect to see another post next year regarding 2038?
Of course. I already have it in my calendar as a yearly reminder.
Otherwise, I would have forgotten about this, too.
I have so many reminders about things.
Hopefully.
Hopefully, the Tumbleweed stuff can happen,
so these things get found at a larger rate.
That would be very nice.
And hopefully we don't suddenly discover
there is a lot of bugs.
That would be good.
Maybe it's not good.
Maybe it means the testing isn't great,
because maybe there's bugs that we're not seeing.
But hopefully it's good.
Yeah, oh, that reminds me.
There's a way to find out how many bugs are in something:
that's by introducing intentional bugs and looking at how many of these are found in review, for example.
Let's say you add 10 intentional bugs and 5 are found in review.
Then it means, if they found two or three other bugs,
there are actually around three more bugs that they didn't find
in review, because that's around the ratio of things that get missed in review.
Huh.
Okay.
I'd never thought about that.
Yeah, but of course, we don't want to start
adding intentional bugs throughout our software.
So, yeah, I guess
check out
Slowroll then.
Can people just get that on the
openSUSE website?
Yeah, it's a bit hidden, because it's not yet official,
but on the wiki we have
en.opensuse.org
slash Portal, colon, Slowroll,
with a capital P and a capital S at the end.
Very, very good list of my stuff.
Yeah, maybe you can just Google it.
Sure, we have this
get.opensuse.org thing.
I'm not sure if it's on there.
We'll find out.
But at least you can use Tumbleweed.
It's nearly the same and...
No, it is not there.
Yeah, so we hide it a bit because it's still in progress
and we don't have good testing for these updates.
But if you'd like to use it, and you'd use the advantage of having fewer
downloads within the month,
that can be a good match.
Okay.
Yeah, for the year 2038 problem: if you're a developer, you want to look in your code for
time_t occurrences, and if you're a government official, you might want to look into
requiring software to keep running in the future.
That would be nice.
Yeah.
I don't know why my camera keeps freezing.
I've been doing this the entire time.
The audio's been fine, but Jitsi's just not happy with my camera today.
I don't know.
Let's tell you.
Yeah.
It doesn't even want you to have 2038 yet.
They're trying to shut us down.
It's a conspiracy to shut us down.
See, see — they don't want you to know.
They don't want you to know about 2038.
So spread the word.
We will.
Nothing else you want to direct people to?
Just openSUSE — worry about 2038 problems, fix them if you find them.
I'm not sure if there's a good write-up yet.
But we have the Reddit post.
So that's pretty good.
It has some example patches and bug reports.
And some of these are still open.
So I feel like you can ping the memcached maintainer
about the thing that's still open since 2023 —
he said he wants to take a look
and maybe just forgot.
Hopefully I won't be the only person making videos about this.
Maybe — I don't know, look, people like round numbers —
maybe at 10 years out I can convince other people to talk about this as well,
and we can get some noise about this.
Yes, let's make a big anniversary.
in 2028.
We schedule it for the 19th of January, and then we do a big discussion round with all the
YouTubers that do anything about tech.
I like this plan.
Yeah, I think it's enough time to plan ahead.
We can do a small round next year for January the 19th to spread awareness, and then
the next year we go really big and everyone talks about it.
Hopefully by then we have the GCC patches, and it becomes much easier to just see the warnings and work through them.
That would be nice.
That would be nice, yeah.
Make everyone's life a little bit easier.
So, yeah, I guess I'll do my outro and then we'll sign off.
Okay.
Okay, let's do that.
My main channel is Brodie Robertson.
I do Linux videos there, six days a week.
Sometimes I stream.
I haven't been streaming lately, but I'll get around to it when I get around to it.
I've got the gaming channel, Brodie On Games.
Right now I'm playing through Shenmue, and maybe I'm done with Silksong by now.
I don't know — whatever's going on in that slot, check it out.
There's things happening there.
If you're watching the video version of this, you can find the audio version wherever you find audio podcasts.
There is an RSS feed — search Tech Over Tea on whatever your favorite podcast platform is, and you will find it.
The video is on YouTube. Also, we have video on Spotify, if you like Spotify video for some reason.
I don't know why you'd want it, but it's there. I'll give you the final word. How do you want
to sign us off? Yeah. Be good. Have a lot of fun. That's the openSUSE motto. So, do that.
Fair enough.
