LINUX Unplugged - Episode 273: International Hat Machines
Episode Date: October 31, 2018
We speculate about a future where IBM owns Red Hat, and review the latest Fedora 29 release that promises a new game-changing feature. Plus Chris returns from MeetBSD with his review, and we get the inside scoop on System76's Thelio hardware. Special Guests: Alan Pope and Martin Wimpress.
Transcript
Well, Wes and I were having this conversation before the show started, and it went something like this.
It was, in some ways, Linux has suffered greatly by having its focus scattered across so many different initiatives and so many different efforts.
You know, you have so many different takes even just on GTK desktops, let alone GNOME versus Plasma.
Even in the Plasma space, you've got LXQt.
There's all these different iterations, and you look at, like, the Apple model or the Windows model and you go, if we just had a team that focused on a singular application, things would be a lot better.
And the example you went to was GParted, right?
Oh yeah, right.
GParted's fine, but it really hasn't gotten a ton better.
And there's other partition managers, like kpart or whatever it is, that are fine,
but they're not like the definitive be-all tools.
And if we were developing one partition manager, it would probably be pretty great at this point.
And I think all of us have always, you know, we've thought about that.
But in light of the news of IBM buying Red Hat, you kind of have to appreciate the, I
don't know what to call it.
There's like upsides of this diversity of thought, right?
Yeah.
We have less net things per unit, but we have a lot of redundancy.
It takes us way longer to get there in some cases,
but we also are not victim to the bus factor as much.
Linux could be hurt if, say, all of Red Hat's open source desktop initiatives went away in a year.
Desktop Linux would be hurt,
but that code would still be free software.
It wouldn't end desktop Linux.
And those contributors that worked at Red Hat could still decide to contribute.
And new people could pick it up.
Like, yeah, there's a lot of options.
So it is both a blessing and a curse in a sense
because sometimes it means we can't have the shiniest,
but it also means you really can't take it away from us.
Yeah, I think that's why you have to be careful.
I mean, you just said, if it goes away, I mean, it won't go away.
It's free software.
It's out on the internet.
None of this stuff is going to go away.
And I'm a glass half full kind of guy, and I don't have any inside knowledge of the IBM
Red Hat deal, but I can't see them throwing away that giant body of good free software.
I just can't see that happening.
And even if it did, the software is still out there.
People will coalesce around it who are passionate about it and will keep it going,
just like every other free software project that gets dumped by a corporation. This is Linux Unplugged, episode 273 for October 30th, 2018.
Welcome to Linux Unplugged, your weekly Linux talk show that's sort of wishing it was a stock advice show right now.
My name is Chris.
My name is Wes.
Oh, some of that Red Hat stock would be looking pretty good.
We have a big show for you this week.
I'm back from the BSD den. Went down, hung out at MeetBSD at the Intel campus.
I'll give you my book report on how it went at MeetBSD.
Then we'll get into some community news.
Well, actually, we'll start with some community news,
including that massive IBM buying Red Hat story.
But there's other things, too, to talk about.
In fact, there's quite a bit of community news that I would still consider this a packed news show.
Oh, yeah, even if that didn't happen, it's a giant show.
We've been doing less news, but there's just so much we've got to talk about, so much going on.
The community keeps making news.
It's great. It's a good time to be a Linux user.
And then today, as we record the latest and greatest Fedora, Fedora 29 is out.
We'll give you our first impressions of Fedora 29, and I'll tell you about my favorite feature
that I am so excited to see finally land in Fedora.
Also, my report, good or bad, on how my Fedora Cloud upgrade went.
This is the one that I've been upgrading for a while now.
And it's now also the home to my in-production NextCloud instance.
So this, it mattered more than...
This upgrade counts.
It was, yeah.
This upgrade really counted.
So I'll give you my report on how that upgrade went as well.
Then some follow-up items from the past couple of episodes,
as well as a huge announcement for a friend of the show.
And then a couple of workarounds if you're sticking with Dropbox and don't feel like formatting your disk,
give you a couple of workarounds for that.
I can tell you're thinking about this a lot.
Who, me?
Yeah, you.
But before we go any further, we've got to bring in that virtual lug.
Time.
Appropriate greetings, Mumble Room.
Hey, people.
Hello.
Good morning.
Hello.
Hello, everybody.
We've got Eric in there, Mini-Mac, Popey, Sean.
Is that two Seans?
No, C.
I gotta look, I'm an old man.
And Wimpy.
It's good to have you guys in there.
Rumor has it a few others may be trickling in as the show goes along.
Speaking, speaking, speaking, speaking, Wes.
Speaking-ling of trickling in.
Mr. Linus Torvalds is trickling back into command of the Linux kernel.
The will be, you'll be.
This is actually kind of old news now by the time we're talking about it in this show.
I mean, the Ubuntu podcast guys talked about it on last Thursday.
It was fresh for them.
They got, the Ubuntu podcast guys were the first, I believe, to air with the story.
Breaking news.
Yeah.
Look at you guys over there, Mr. Breaking News over there at the Ubuntu podcast.
This is CNN Breaking News.
But you don't have to watch CNN.
You can just listen to the great Ubuntu podcast instead.
I like that more.
Way better.
It's way better.
Yeah.
So this is kind of old news,
but I just wanted to kind of close the loop on this story.
Greg KH posted in the kernel 4.19 release announcement,
and it's a long post.
In there, he writes a couple of things that I thought were powerful statements that we should read here on the show.
He says, these past few months have been a tough one for our community, as it is our community that is fighting from within itself, with prodding from others outside of it.
Don't fall into the cycle of arguing about those others.
That is the trap that countless communities have fallen into over the centuries.
We all share the same goal.
Let us never lose sight of that.
It's good.
That's a good message.
I mean, it's true, right?
I mean, we're all here for the same reason,
to support the same project, to help the project grow.
The thing that resonates there with me, Mr. West,
the real synergy I got with that, you know,
the top-level thing there.
Oh, stop. Oh, stop.
Okay, sorry.
It was, this is something that has affected
countless communities over the centuries.
Like, it's a human nature thing.
It's not a Linux kernel mailing list thing.
And that's true, right?
I mean, this is a human activity done by lots of humans
with different motivations and goals,
and that's what makes it so difficult.
He wraps it up with, and with all of that,
Linus, I'm handing the kernel tree
back to you. You can have the joy of dealing
with the next merge window. I'm out of here.
That's good. It's good to see Linus back.
Everybody's been watching
to see how Linus
behaves.
The spotlight
was on him before, but holy
crap, now it's like everything
he's writing is being analyzed by the news media.
Have you seen this?
Like people are just copying and pasting like mail his entire post.
Yeah, he writes something and then it's news now.
Oh, that's annoying.
Let the man work.
I don't know, Mumbaroom, any take on Linus being back?
Was that a long enough break for him to really make an actual substantive change?
What do you guys think?
It's long enough for him to put an email filter
on his email system. So yeah, probably.
Is that the takeaway here? Is that
the lesson that Linus learned? Because that's what
everybody's talking about. Well, it looks like you put a filter on.
It will be the most
popular email filter in history
and everyone will be using it this time
next year. There will be whole businesses
built on top of it.
And IBM will buy them.
There it is.
I would hope the willingness to implement a filter.
I don't think any of this is a thing you learn in a month.
Hopefully it just means that he's thinking about it.
These are things he's aware of
and that there's a little more focus on the health of the community.
Yeah, it's an ongoing thing.
Right, it has to be.
That's a good point, Wes.
That's very wise.
That was sage.
I like that.
This is something I could use some sage wisdom for.
Now, Michael Larabel over at Phoronix,
and I haven't seen,
I don't think I've seen confirmation of this anywhere else.
So keep that in mind.
Yeah, you're right.
Because this seems like big news.
He's reporting that Samsung has apparently shut down
the Samsung open source group,
commonly known as Samsung OSG, which you'll see like in their kernel contributions and whatnot.
And it appears that they are disbanding this division,
which is responsible for quite a bit of contributions.
They've been around since 2012.
They've employed a dozen developers for a number of years now.
Yeah, and I mean, they've contributed to things like, well, Wayland, X.org, Cairo,
Clang, GStreamer, FFmpeg, and of course, a whole bunch to that there Linux kernel we were just talking about.
Yeah.
Yeah, so it would kind of be a big deal, especially you got to wonder about Wayland if they were to go away.
They're usually in the top five kernel contributors, like total, in the top five.
I mean, and this just feels very timely, right,
where corporations play a big part
and they are not obligated to in open source development.
So we just, right, we're lucky to have them.
We benefit from this.
Why are they closing this?
Maybe they're done.
Like, hey, we got everything we needed.
All right, well, Google says we're going to have to use
this kernel for the next X amount of years
so we don't have to develop any new drivers for a while.
We're out. See ya.
But that's not how it works.
That's not how software development works.
It could be a hundred things.
It could be just a restructuring.
Right. There'll still be open source work. It's just not under one umbrella anymore. It's a good question.
One hopes, Wes. One certainly
hopes because it is... Funny, I'm not
Samsung's biggest fan.
But it seems like they've made genuine
contributions that
really have made a difference. I mean, you look
in the top five, that matters.
So hopefully it's not a total
abandonment. It also makes me think about just
you know, we look at like Linux kernel contributors
but when a company contributes across
so many projects, we don't often get a view
into just all the things they're doing. You might see
pockets of that, oh, they support Krita or whatever.
But a lot of these companies are pretty broad support.
You know, you're touching on something there
that really grinds my gears, Wes.
And that is that a lot of times these companies
are contributing to open source and they do it kind of quietly.
Like you're not even aware of what they're doing.
And there's a real resistance to them talking about it.
They don't, and I say this from somebody who tries to talk to them about it,
and it's, they'll talk to you
in like
a non-formal setting, off air,
but they won't talk to you in any
official capacity. And when you
go to events, in a lot of cases,
they won't even talk to me.
I'm not allowed to speak to an engineer.
If the engineer realizes
that I'm from the media,
then they go grab the PR person, and I have to talk to the PR person.
And then if I have questions, I have to schedule it.
And, like, I just completely lose all access.
They don't want to talk about it, and I think it's for liability.
Is this still, like, a business enterprise sort of uncomfortability with open source?
Like, it's okay to do the lawyer's vet things,
but it's not our business line.
It's not our core thing.
We don't want to talk about it.
Or it's a liability thing, like from the patent wars.
Or it could be simple things like, you know,
companies like Samsung work on, yes, stuff like Wayland,
but you look at where they put it.
They may be developing IVI systems
for large motor vehicle manufacturers.
And those people, those motor vehicle manufacturers,
are probably the kind of people who don't want this kind of information revealed
because it's so competitive, such a cutthroat market.
They don't want some engineer at the bottom of the stack at Samsung
revealing that there's some deal going on for the next generation infotainment system
in their new car that's not even out yet.
Have you run into this working through Canonical?
Yeah, totally.
It's frustrating, isn't it?
Yeah.
All right.
Well, I've said my piece.
And so, yeah, what you said that made me think about this is we don't really have a great
window into what their contributions were.
So how do we quantify what the loss is?
Yeah, exactly.
We have an idea by, oh, well, here's a percentage
of code contributions
to the Linux kernel.
That seems valuable,
but it doesn't really tell me what.
Is it support for some stupid vibrator
in a Samsung phone somewhere?
I mean, is that what that
kernel contribution was?
Or was it to make it possible
for Wayland to render
a Firefox pop-up on my...
Very different scales, yeah.
There's way more development that happens outside
of the Linux kernel as well that
revolves around hardware enablement
and what have you.
Boy, we could do a whole episode on that, really.
It's a fascinating world.
I got some insight
into that when I was down at Dell
and I was watching how they work upstream
with different hardware OEMs
and yeah, it's fascinating stuff.
But I'd like to take a moment and talk about some rapid-paced development around Proton.
You can now play 2,600 Windows games on Linux via that Steam Play,
which is using Proton underneath.
That is massive.
That is a huge accomplishment.
And along with this, kind of an addendum to this,
is the launch of ProtonDB at protondb.com.
I'd like you guys just to know about this.
You may remember when we first talked about Steam Play on this show,
I mentioned there was a Google Docs spreadsheet
that was keeping track of the games that was compatible.
Oh, yeah, right.
Well, that's now developed into ProtonDB.com.
And it got so popular, they basically broke Google Docs.
For a while, they were just breaking Google Docs.
It was pretty funny.
So they have all kinds of compatibility reports and whatnot.
And to celebrate the.com launch,
they're also redesigning the look of the overall database.
Like, they're making it look really nice, get the initial data reports in there so it's easier for people to see what the compatibility expectations are.
And it's sort of remarkable to me.
Oh, there's some good games in here too.
I mean, gosh, classics to modern stuff.
It's all over the map.
It's remarkable to me that they've gone to 2,500 games
and this community that looks almost like it's a commercial venture.
Like, this site, this ProtonDB site.
Oh, it's well-made, yeah.
It's branded pretty nicely.
It doesn't look bad.
18,459 compatibility reports in this database for those 2,000 games.
So it's the year of the Linux gaming desktop?
And just to circle back again on another topic,
is I've now gotten multiple confirmations.
And aren't you one of them, actually?
People in the Google Stream beta,
the streaming video games under GNU slash Linux.
Hey, it's happening.
15 megabits is like the absolute minimum
that's going to work with something around there.
Yeah, right.
And you have to have like less than 5% packet loss,
and less than 1% is recommended.
For a while, it wanted you to have an external controller to plug in as well.
I don't know if that's still true.
I don't know.
I don't think so, actually.
It sounds like it's not.
So you combine that streaming system with Proton.
It's a really good time to be a hardcore gamer like Wimpy.
I mean, Wimpy's a hardcore gamer.
Hardcore!
I'm just pretty excited.
Have you tried any of these, Wimpy,
with all the travels you've been doing?
Have you had a chance to try the Proton thing out?
Not while I've been traveling, but at home I have.
I've invested in probably more than a dozen racing games
that were originally launched for Windows.
And some of the ones I enjoyed on the PS3 years ago,
some of the Need for Speed series,
and I'm playing those quite happily now on my Linux desktop.
Very happy.
I have a question for you.
I thought I saw you on Twitter posting pictures of a ThinkPad,
but I thought you got a new XPS or a new Dell 15.
I went with ThinkPad in the end.
No, really?
That monster P1, gosh.
Yeah, and how has it been? Have you had much of a chance to
play with it? I'm using it right now.
And you upgraded
what, was it the RAM and the drive in that thing?
Well, I couldn't upgrade the RAM aftermarket, because only major OEMs can actually get the 32-gig DIMMs. So that delayed the order, in fact.
Oh, wow.
So I think I may have been the first person to actually order one of these with 64 gigs of RAM.
What are you doing to us?
Yeah. But so I couldn't source those aftermarket,
so I had to get it ordered with that.
But I ordered it with the cheapest SSD option,
which was a 256 gig SSD,
and then purchased some 970 EVOs, a 512 and a 2 terabyte.
So two drives.
Yeah, so I've left Windows on the drive it came with.
Sure.
Put the two new drives in, and I've got a 500-gig boot partition and a two-terabyte home partition.
Oh my gosh, what do you need two terabytes for? Is that for VMs? For home drive?
Think about how many Electron apps he runs.
It's for all the code that I work on and all of the VMs, just, you know,
just copies of data, just stuff.
This is a five-year machine at least.
Yeah, I think so.
In fact, I've had it for a few days now because I obviously got back at the weekend
and it had arrived whilst I'd been away.
And I've decided to go all in with this.
So I've actually ordered the Thunderbolt docking station for it.
And I'm going to set that up and actually use this
as not just my laptop, but my main workstation for work.
So I will plug it into the Thunderbolt dock,
which connects to the screens and everything else.
Have you decided which Thunderbolt dock you're going with?
Because I've been considering doing the same thing.
Yeah, there's a new one that Lenovo have released.
Oh, really?
Yeah, it has an adapter that connects alongside the power, so the power adapter in the laptop is next to one of the Thunderbolt ports, and they've made a cable which, like, connects to both of those in one connector.
Do you have a link? I'd love... that's it right there, that's what I want.
Yeah, I can look one up and send it over to you.
This is great. So does this mean, if you're switching away from the Dell to the ThinkPad, I'm not going to be looking up your nose whenever we're having video chats?
This is true.
In fact, it sounds ridiculous, right? But the placement of the webcam was actually a serious consideration in all of this, and I would say was actually the clinching feature that took me over the edge. Because everyone talks about how small the Dell bezels are, but the bezels on this are very small too.
Yeah.
Apple announced a new MacBook with small bezels today,
and one of the things they specifically took time in the announcement to call out
was that they had placed the camera at the top of the screen
because people care about that kind of thing.
And I completely agree doing meetings.
I do Zoom meetings a lot now, a lot of Zoom meetings.
And they're all video.
Everybody's looking at my face.
So that was an issue when I was reviewing the Precision.
Or I'm sorry, when I was using
the XPS and the Precision has the
same thing.
So you wouldn't want to just carry around like a little
C920 or whatever?
No, no way. No, I don't think so.
But what if you already have to live the dongle lifestyle?
You've already got your little go-bag.
No, it's the cord.
The cord's not good.
It's long.
Well, congratulations, Wimpy.
This one's got a mini RJ45 port in it,
which is a proper gigabit Ethernet port,
and it has a little stub that expands that into a regular RJ45,
so it has proper Ethernet on it as well,
and it's got USB-A and USB-C ports on it, and HDMI ports and a headphone jack.
It's got plenty of I.O.
This is a serious dock.
This is not cheap, but it's really nice.
Yeah.
There's a version of that that's more expensive
that's got an NVIDIA graphics card in it as well.
Oh.
Was I not just talking about that?
You just were.
Before we started going on there, I was like, you know what would be
really nice is to do an eGPU Thunderbolt
dock. Yeah, well I don't need that because
I've already got the eGPU.
So I'm just getting the one that
connects all of your stuff up.
Yeah.
Well, I'll be curious
to hear your thoughts on how the dock works out because I think
I want to pick one up too.
I may live vicariously through you
for a bit.
Well, speaking of new
hardware, our friends
at System76 have been
teasing this Thelio
system for a while.
Oh, yes, they have.
And we've got some
details.
Yeah, okay, so they're
not actually going to
announce it until, well,
November 1st.
So it's soon, but you
don't actually have to wait
that long. They're calling them open hardware systems, and in theory, they're shipping in
December. Yeah. And so I had the good fortune of Carl being willing to stay up late and exchange email with me last night to answer some of my questions about, what do they mean by open hardware?
Where's the innovation here? What's,
you know, what's different about this? And so I got some insight into that. And, you know, I think
the first thing I want to clarify is what makes Thelio open hardware? Because that's, I think,
a big question here. And according to them, Thelio's design that they've been working on for
more than three years now is completely open source. Anyone can study, modify, distribute,
make, and sell the design
of the hardware.
I'll get into that again in a second. I'll come back to that.
But another part that I think is maybe the part that's
more interesting to the audience
is,
he writes in a blog post,
to further our open computer ambition,
we are working to remove functionality
from the proprietary main board,
like the motherboard.
Okay.
To that end, we've designed Thelio IO, a daughter board that manages thermal and chassis control, also providing storage backplane for the drives in Thelio.
So they've got this daughter board system where they're going to be moving the proprietary functionality.
I see.
So all of your little proprietary blobs or chipsets that need them, black boxes, that's on one board.
And then just standard more open components can be on the actual main board.
Yeah, exactly.
So I had an email thread back and forth with Carl.
And this machine, first of all, they're going to have three SKUs of it essentially, like big, bigger, and massive.
And they're going to be desktop x86 workstations
with a ton of horsepower.
Like beefy.
The thing they're doing
that it's hard to describe
in audio,
but when you see it in pictures
looks really sharp,
is they've worked on this
custom case design
that even the back end
of it's beautiful.
Like it's just really clean.
But it is a combination
of brushed metal with like a metal powder-coated finishing.
Oh, yeah, premium.
Really nice.
And wood.
Wait, wood?
And wood.
So like maple's in there and it looks really good.
It's got this beautiful contrast when the black and the maple wood.
And there's no seams, like no bumps.
This sounds like a premium desktop.
It looks really good.
So this one goes on the desk, not under the desk.
Yeah, it looks really good.
I won't share all the details,
but what I did get to see and what I did learn about it,
it looks like a killer Linux desktop.
Just something that they've put probably more effort into
than I think they've ever put into any other product.
And it's a big commitment,
because this is the first thing they're making in this new factory,
is this.
Is that right? Okay.
And then he says long-term.
So this is the testing ground.
Yeah, long-term their goal is to work,
to try to open the rest of the hardware in the machine.
Right, because right now it sounds like open hardware
kind of means like open specs.
Like here's how you put all these proprietary parts together,
and with some effort made to use less of them.
I can tell Carl's really excited about it,
and he got me excited about it.
I would say after I read his description of the machine,
it exceeded what my expectations were
of what they would be working on.
We have pretty soon, November 1st.
That's pretty soon.
Right around the corner.
So we'll find out more about it.
And I may get a chance to fly out there and see it in person, too, before it's shipping.
But if I was in the market for a Linux desktop right now, I would 100% wait until November 1st.
Yeah, to keep your eyes on.
Because just the cases alone look gorgeous.
And then the hardware they're putting in this thing is pretty top-end. It's pretty top-end hardware, and the prices are pretty reasonable.
So if you're in the market for a Linux desktop right now, I would definitely be waiting.
I love that idea of a really powerful Linux desktop and a nice, light, portable laptop to go with it.
Yeah, I did get a little peek. I got a little peek at the machine.
She's baking. She's baking in the chair.
I'm asking if I got to see a little. And yeah, this is going to be big for them. And Carl's, you know, you can tell that he's really passionate about it because he's taking the time, Colorado time, to chat with me at night where I'm in the Pacific time zone.
It's some real user, real creator evangelism.
Yeah, and he's talking about it in ways you can really tell he's very proud of it.
And it does – pictures I got to see are really sharp.
I think when you guys get to see it, you're going to get to see even more than I got to see when it does go public.
I think you're going to be pretty impressed.
So, yeah, the Thelio hype is real.
It isn't. Some people are hoping it's a RISC-V system that would be totally open.
It isn't like that.
But I think their innovation here is this daughter board system.
I mean, it's probably a good thing, right?
Like we don't want them to bite off more than they can chew and fail.
It's probably better to go in small incremental steps.
They'll get their new facilities up and running.
They'll get this guy out the door, and it'll provide more revenue for them to work on, you know, slowly building up these relationships and hopefully freeing more of it.
Make for a great Plasma workstation. Oh, yeah.
Frequent virtual lug
contributor and friend of the show,
Alex, had reached a major
milestone himself recently.
His project, LinuxServer.io,
his project and the team over there, just
passed one billion
total pulls from Docker Hub.
One billion pulls, so congratulations to LinuxServer.io.
I have actually, before I even knew Alex,
I bumped into LinuxServer.io a couple of times
just pulling down, I think it was like an NZB client
or something I was pulling off of Docker Hub.
There's a lot of popular software packaged up by them.
Yeah, a billion, you know, that really shows you
the size and scale of Docker Hub.
When you can start a project like this
and you package up applications that have been packaged
many a time before in various formats,
you put it up on Docker Hub
and it gets a billion, a billion downloads.
Over a billion.
So congratulations to them.
Yeah, there's a lot of hard work,
some good insights there.
Go check that out and then go use some of their,
you know, some cool applications you can run easily.
It's a great mix, you know,
because you can throw it up on your own machine,
a laptop or a VPS, and try out something that's all packaged up and ready to go.
And you can at least see if you like it and then go build it yourself if you want
or stick with the image.
All right.
We got to do something because this next story.
How do we get there?
We've got to talk about the inescapable story of the week.
I don't think I'd ever be ready.
I think that's it.
I will.
There's never going to be a point in which I am ready to talk about it.
We're still in shock.
All right, I got one.
I got something.
OK, Wes, I'm going to give you a little quiz.
Are you ready?
Oh, oh.
OK.
OK.
Mr. Payne, see if you can guess the project.
With 80 components tracked, 55 of those components at 100% completion,
and many other components with significant progress,
what beloved
open source project
is near 80% done
for its next major version?
Well, I was about to guess Linux Mint,
but it sounds like, I don't know if that's there.
Spoiler!
No, you can't
do that one. Okay, well, you said open source,
so it's not Duke Nukem Forever, despite all of our hopes.
You ready? You got a guess?
What could it be? There's just so many good options.
Okay, I'm just going to say that it's got to be the next GNOME release.
Oh, you're close. You're close. So with 80 components tracked, 55 at 100% completion,
and many others with significant progress,
XFCE 4.14 is 80% done.
And they've just updated the roadmap.
It's time for one last big push.
And then you're going to get new XFCE goodness.
Hey-o!
After this long, long wait.
That story was really for one member of our audience, and that's our editor.
And now it is time for us to actually get... We got to keep him happy.
Yeah, we're contractually obligated to talk about XFCE once every quarter.
All right, so on Sunday, IBM and Red Hat announced that IBM would be purchasing Red Hat and becoming
the world's, as they put it, number one hybrid cloud provider, which is pretty easy when you're making up a term.
Red Hat is being purchased for $34 billion.
That's over 60% of their value on Friday when the stock market closed.
And today, after the news has had time to hit the market,
depending on when you check it, obviously,
these things are going to come and go.
But when we checked it, Red Hat stock was up, I think, 34% on the news.
No, 45% on the news.
Damn.
What?
45%.
Damn.
This is one of those stories that you didn't ever really expect.
In all fairness...
Well, one, you probably don't think that much about IBM,
like, unless you're at Enterprise, right?
Fair point.
You don't talk about it.
Fair, fair point.
And two, I just sort of assumed Red Hat wasn't for sale.
I guess I've been struggling with this in a way that felt really personal to me because I'm
not a Red Hat employee. I'm not even a Red Hat user. In fact, I don't even like using RHEL. I
just, for some reason though, I still felt a really personal struggle with this one. And part of that is I feel like the great wars of the late 90s, early 2000s were for nothing.
Like what was it all for?
Because in 2018, Microsoft is one of the key open source contributors and now owns GitHub officially.
That's gone through as of Monday.
And the star rebel, the shining open source revenue generator, Red Hat, is now owned by an establishment company, or is going to be, IBM. Like, all of our
trailblazing and cause fighting. Like, we went through so much. In the 90s, Microsoft was so
dominant, and by the late 90s and early 2000s, we had all of these projects.
We had Wine and Mono and all of these attempts to just coexist in a world that was dominated by Microsoft.
And we finally get through all of that.
And we look back at it.
What was it all for?
And I feel the same way with Red Hat being bought by IBM.
I thought they would be the ones buying IBM.
I thought they would be the ones buying Amazon.
Not the other way around.
That's not the way I saw it going down.
So I just, I don't know,
the old man in me just really sort of
took it hard a little bit.
Isn't this something though
that we always had to be aware of?
I mean, it's kind of like using a proprietary service, like you've said many times, right?
Like, you use it, but you know that it could go away, it could change, that business could
fold, or they could just stop supporting however you like using it.
And, I mean, for as long as Red Hat's been public, you know, it's a for-profit corporation.
They may have a culture that really likes open source, but that's not why it exists.
Right, and as Red Hat will tell you,
they don't have any intellectual property.
They have staff,
and they give away all their software.
They have been a services company
for a very long time.
They were one of the early services companies.
That's what they make their money on,
is the services around
the open source software that they sell,
and in some cases,
the closed source software they sell.
They have been more like
IBM for a very
long time than we'd like to
acknowledge, because we like to look at
the things that they do that feel
like
the open source fairy delivering
us great software. You know, we get things like
Network Manager, and we get things like Pulse Audio,
and SystemD, and Pipewire,
which we just recently talked about, and so many other fundamental technologies, Flatpak, et cetera, et cetera,
that don't really seem to have a direct cloud angle, don't have a direct services industry
angle, but yet are extremely useful and beneficial to all Linux distributions.
And that's the thing that we like to talk about.
That's how we like to look at Red Hat in that light.
But the reality is Red Hat is a for-profit public company
who makes money selling services.
Much more like IBM than we'd like to admit.
But it still was shocking to see it.
Like I said, I think I always expected that Red Hat wasn't for sale
and that they would be the ones doing the buying.
Right, they've been our juggernaut.
They're the ones like, look, Linux can make money.
The services open source model, it can make money,
it makes sense, these are good philosophies
to build a company on.
And obviously that's still true,
but they obviously had a little bit of a special culture too.
And how much of that can stay, will stay, we just don't know.
But the big difference between IBM and Red Hat, I think,
is, I mean, how much does IBM really give back to the community,
whereas Red Hat seriously has multiple full-time employees. They have two or three that work for
the Gnome Foundation. They have others that work on other projects. Some people have been hired
by Red Hat just to keep working on those essential applications that we need.
Yeah, it's hard to really think of what major contributions IBM has made
to the industry in general recently.
Well, I think it is another case that we kind of talked about before
where they do have a lot of
contributions to open source in their history
in a broad manner, but they don't
talk about it, right? Red Hat talked
about it. It was a big thing they did. The people
they employed were very prominent in the community.
So it is right. They have both
a decent history of open source contribution,
but they don't do it in the same way. They don't
exactly have the same attitude, and they don't spend all their
time talking about it, especially from
the business executive side.
Right. That's a big part of Red Hat's branding
is Jim Whitehurst, open culture, open source.
They wrote a book about it. They talk about it at the very
top, whereas IBM, it's like an implementation
detail. Exactly. Okay, so
you know, when I first saw this news too, the first question that I tried to frame all
of this in was, is it possible that the new owner could be a better owner of what Red Hat
has?
Like Red Hat's properties.
Can they be, could IBM be a better owner of those things?
Can IBM be a better owner?
Can they own this stack better than
Red Hat can? I don't know. I don't know. But I think the one thing that I did recall is that a
long time ago, like back in 2000, 2001, IBM made a hard play into Linux. They just made this abrupt turn, and they announced a billion dollar investment in Linux,
and they hired
a whole crew to film
these Linux commercials.
And one of my favorite had
Captain Sisko in it.
1991 Helsinki. A 21-year-old student
named Linus Torvalds writes a new computer
operating system. He calls it Linux,
then does something revolutionary.
He gives it away,
free, over the internet. The powers that be dismiss him as an eccentric, a freak,
but everywhere coders and freethinkers embrace Linux, improve and refine it.
Now the forces of openness have a powerful and unexpected new ally.
It's a different kind of world. You need a different kind of software.
That IBM went away for a while,
and they became a services sales company.
And I wonder if we're seeing IBM return to that.
That is what I hope is happening.
I mean, they've done a lot of good stuff, right?
A lot of good engineering work has come out of them.
And you're right, they haven't really kept up.
Maybe a question is, can you do that? How much can new blood be injected into a company?
Is this really going to change things?
What is their capacity for change?
I have heard from more than five, less than ten,
I'll just put it that way,
Red Hat employees now who are very not happy.
They are very not happy.
I would say panic and despair is the tone.
Interestingly enough, I've also heard from people who do not work at Red Hat, but know people
really well who work at Red Hat, and they say they're pumped. So there are some people inside
that are very excited. But the people I heard about, they're primarily concerned about things
like they have this memo list internally that's a very expose kind of thing. They're worried that's going to get shut down.
They have a very unique culture. They're worried that that's
going to get shut down. Right, that seems like a lot that can be difficult.
Yeah. So there's
internally, the
theme sort of is, I work at Red Hat
because this is Red Hat
and I am passionate
about open source and I wanted to work somewhere
where my open source software
would ship to end users
and make a difference.
And that's why I'm at Red Hat.
Because it's Red Hat, not because it's IBM.
And the thing that Jim Whitehurst knows very well
is Red Hat's real value is the software developers.
That's the talent.
Their engineers are their most valuable asset
because everything else is free, for the most part.
You know, all of their source code stuff,
it's free.
The source RPMs that they still,
you know, that is free.
And so the developers are their real talent.
And if they leave
because it's no longer Red Hat,
then this deal loses a ton of its value.
A couple of other interesting things
I have grokked from my conversations
with friends is
the way this is being sold to the
staff right now is the reason Red Hat won't change. The reason why Red Hat will remain its own
independent entity within IBM is because that is the true value of Red Hat. The way they phrase it
is Red Hat needs to remain Switzerland. Red Hat has to be this independent, non-vendor lock-in neutral platform
that IBM sales teams can go to these high-end clients and sell a solution that doesn't have,
quote, vendor lock-in. So the way IBM CEO frames this, she says that essentially they have clients
that are so big it would blow your mind, the greatest clients ever, they're the best clients,
they have clients that are so big it would blow your mind.
The greatest clients ever.
They're the best clients.
And that 80% of their workload is not yet in the cloud.
See, that first 20% that's in the cloud was all the easy stuff,
the stuff that's, quote, cloud native.
But now, now it's time for hybrid cloud.
And this is what she says.
Hybrid cloud is going to bridge the on-premises stuff with the things that normally, traditionally didn't work in the cloud
that are now being moved into the cloud.
And Jim Whitehurst says that what IBM and Red Hat will do together
is they will create the one unifying platform
that bridges on-premises and cloud,
and when you're going to do this in a large capacity,
this is who you go to.
And these clients, the number one thing they're afraid of these days,
according to IBM's CEO,
vendor lock-in.
And so Red Hat must remain independent.
Their, quote,
go-to-market strategy must remain independent, end quote.
And the reason is because they want to sell into competitors that IBM is currently competing with in similar markets, and they want Red Hat to still sell to them, and they want to be able to sell to large clients who, quote unquote, are afraid about vendor lock-in.
So IBM can go, no, this isn't a vendor-specific solution.
This is Red Hat.
It's totally our solution.
You're totally getting locked into us.
Go ahead.
Deploy Red Hat.
It's got Kubernetes.
It's got OpenStack.
It's going to be fine.
It's an interesting question, too,
because I think we've seen this from other things,
either like larger companies like Alphabet
or just different sections of Amazon,
where you're like, you're selling to one hand
and buying from the other.
How does this work?
How well does it actually work?
And sometimes it seems like business is pragmatic enough
that it's fine.
You know, people are like, all right, yeah,
install me my OpenShift cluster managed by IBM's managed services
and away I go.
Today, October 30th, Mark Shuttleworth released a statement
on the IBM acquisition of Red Hat.
And this is great.
This is, I didn't expect this.
Canonical usually plays it quiet.
They usually tend to default to not saying anything.
Like, they just don't say it.
They don't get in the mix.
But this time, Mark wasn't standing still.
And this isn't the first time we've seen this.
I think you guys will recall almost a year ago,
we had a ZDNet article where Mark was taking some shots
at Red Hat's open...
Oh, yeah.
Yeah, it was good.
It was a good read.
I love it.
This, though, this is even better.
I love this. He writes in here,
public sources of data on Linux trends show that we've had a clear move.
And he's setting up that, essentially,
Red Hat has been losing market shares is his position,
and that they were pushed into this.
He says, we salute Red Hat for the role it played in framing
open source as a familiar, shrink-wrapped replacement for traditional Unix on Wintel.
In that sense, RHEL was critical in the open source movement.
Nevertheless, the world has moved on, and replacing Unix is no longer sufficient.
The decline in RHEL growth, contrasted with the acceleration in Linux more broadly, is a strong market indicator of the next wave of open source.
So what he's saying is RHEL market share decreasing, but overall Linux deployments were going up,
market indicator.
And I think he's right.
And that's what's so great about it.
And then he goes into containers and whatnot, essentially kind of implying that IBM was able to scoop them up because of this situation that Red Hat found itself in.
The time of RHEL is past.
Maybe true.
Red Hat's explanation is that we just simply
couldn't sell into the clients
at the scale we wanted to anymore.
Like we had reached
the end of our sales capacity
and to get to the next big fish,
we had to get even bigger ourselves.
That's their version of the story.
Besides the fact that,
you know,
all the shareholders
make buckets.
Right.
Which fine,
that's a legit enough, you know,
that's, who would say no to that?
Well, having interviewed
with their consulting
department recently, I can actually
verify their statements about
they've got customers
lining up through the door, and they just don't have the
consulting staff to actually
do the on-site
implementations that they're wanting to do.
So there may be some validity to that.
Wall Street seems to think so.
I mean, up 45%.
Jeez, that is really something.
I don't know.
I'd be curious to know, any old-timers, what your thoughts are just in terms of,
I mean, this is Red Hat.
Like, this is Red Hat we're talking about right here.
And for us old-timers, this, to me, feels huge.
Go beyond market strategies and things like this.
Besides Microsoft's complacency in working with open source,
is there any other major, major, major indicator
we've seen of how successful free software
and open source has become in the industry?
$34 billion makes this the largest software acquisition
in the history of software acquisitions.
It's the largest, and it was an open source one.
It was a company making free software.
I don't think we see the same focus.
Like obviously
AWS has built
a lot of free software.
It's not marketed that way.
That's what Red Hat
was so special about, right?
It was,
I guess it was not
technically an
open source company
but it sure felt like it.
Well here's what gets me
is
that we talk about
companies
that
have a smaller impact on the market
than Canonical does.
Canonical has this weird problem
where their deployments of Ubuntu
are like shadow deployments.
There literally could be, by a safe margin, more Ubuntu deployments
in production workloads in the world
than there are RHEL deployments
when you consider VPSs and containers and VMs,
which the vast majority, not all,
but a lot of them are Ubuntu-based.
And I have had specific information given to me
by AWS engineers who tell me
that the majority of the Linux instances on AWS
are also Ubuntu.
And I've heard the same thing from Microsoft employees about Azure.
They don't like to release this information publicly,
so you just have to trust me.
I'm sorry I can't link to anything, but this is what they've told me,
is that they're Ubuntu-based.
And yet, we talk about Ubuntu as if it is in another tier from Red Hat,
as if it's along the same deployment scale of Fedora or Mint or elementary OS.
But in reality, it has
tens of millions more
deployments because of the container
and VPS craze. Well, I think part of it
too is like, RHEL means Red Hat, right?
Ubuntu kind of just
means Linux. Like, it's just
Ubuntu Linux and you got it because it was
easy to use, it was already there, it might have been the default,
but you didn't have to talk to Canonical.
You might not even know who Canonical is.
You just know about Ubuntu.
Unless you're buying managed services.
I was having a conversation with some of the leadership at Linux Academy
talking about beefing up the Ubuntu courseware at Linux Academy.
And it's something they're interested in doing,
but they have zero student data showing,
I shouldn't say zero,
but they have less student demand for Ubuntu
than they have for Kali Linux right now.
And I know that just simply is incongruent
with the market share of Ubuntu.
And the way we talk about Ubuntu
is incongruent with the market share of Ubuntu.
And the way the tech industry,
we talk more about Fitbit and Netflix
than we talk about Canonical.
Meanwhile, Canonical's running all of this stuff
that these companies are using.
It's really, it's a bizarre thing.
And so the other thing that really sort of landed hard on me is,
does this mean Canonical is next?
Is Microsoft now going, shit, well,
Azure is powered by, I mean, like 30%, 40% of the rigs on here
are Ubuntu-based.
Right, can we afford for one of our competitors to control this?
Right.
Right.
And I wonder if there wasn't a bidding war going on for Red Hat. The way
this news came out, the way it came out on a Sunday, it was leaked and then they had to confirm
it before they could even tell their own staff. It makes you wonder if there was another company
involved in the bidding process for Red Hat. And this is all my way of saying we have major
shifts still ahead of us. Like when you look, there are certain shifts happening in the industry because now of the success of open source.
And this is the result of that success.
Right.
We can't ignore that like for it to be successful means it's used by big companies and they have different ideas of what they want to do with that technology.
of what they want to do with that technology.
To me, it also underscores, like, it's still important that we have some non-commercial distributions,
like things like Debian or Arch,
that, like, maybe I don't run them on my server.
I mean, I have certainly in both cases,
but they're there, and that is a thing that keeps,
I think, like, keeps the community focused a little bit,
or at least, like, that's our fallback.
So, Brandon just joined, but I don't know if he's at his mic,
and I'm going to steal his thunder a little bit maybe.
I want to propose kind of an alternate reality to this merger.
Brandon and I were talking the other day when this happened about an article that was posted about maybe IBM bought Red Hat and Red Hat takes over IBM.
So, instead of this being the corporatizing of Red
Hat, I'm not sure if that's a word, but it is now. Instead, Jim becomes the new CEO of IBM in the
next couple of years. And Red Hat's culture overwhelms that of IBM. And now, you know,
open source takes over from the inside. I mean, I like that idea.
300,000 employees, though, it feels like a drop in the ocean.
It's one department within a massive entity.
I think that in reality, it'll be somewhere in between.
I'm hoping for a best case scenario that it'll be like Microsoft and LinkedIn.
If you didn't know the business behind it, you wouldn't know that Microsoft owns LinkedIn. And I think that's kind of based on
the articles that I've read. I think that's kind of what IBM wants to do with Red Hat,
because they're supposed to be an autonomous subsidiary of IBM.
That's what they say. You know, but you think about this. I mean, really, IBM has been around in one form or another since the early 1900s, 1911. They weren't called International Business Machines until 1924.
To think that a division of a company that has been around for nearly 100 years will change the culture is optimistic.
It is definitely optimistic.
I'm hopeful, though.
If nothing else, maybe they could use them as a steward.
Like IBM trying to navigate a new world and a new technology landscape
could defer to Red Hat's expertise in certain areas.
Yeah, it's interesting.
We kind of assume that the IBM culture will crush Red Hat,
or that's a lot of the fears.
But hey, maybe it'll go the reverse,
and IBM opens up and some of this new blood distributes.
But you're right, it's a numbers game.
Brandon, did you want to jump in there with some thoughts?
Yeah, just IBM's contributed open source for a very long time.
They have an open source culture.
Yeah, I'm not,
I'm a former Red Hatter,
so I have a different perspective,
but I don't think it's going to be all the doom and gloom.
But that's just my personal opinion.
Good. Do you feel it is because IBM
has been a good steward to open source in the past,
or what is it that's making you feel?
It's been a good steward.
A few people in the chat have mentioned
some of the things that they've done.
They have a lot of projects between Red Hat and IBM.
I think they've contributed more to open source
than anyone, period.
So it'll be interesting.
If not, if all else fails,
everything Red Hat does is open source, just go fork it.
Yeah. Right. And I think that is a nice safety hatch we have here:
it's free software, and the original contributors still own the copyrights of it. So that's impressive, and that is good to note.
I hope so.
I think in some ways it could supercharge their sales engine,
and that could be really good.
And in the meantime, business continues as normal.
Today, Red Hat Enterprise Linux 7.6 shipped,
and Fedora 29 shipped.
Now this I am very excited about.
There's a big feature in Fedora 29 that I've been looking forward to trying.
And in just one week as we record this episode,
it will be 15 years since they announced the release of Fedora Core 1.
Yeah, it's big.
That's big.
That's a big deal.
And now they have not just like the core Fedora,
but they have workstation, they have Server, they have Atomic Host.
They have all these different like little spins and stuff that are semi-popular.
Of course, they have the cloud version that I use.
They have support for ARM devices.
And if this wasn't maybe thinking ahead a little bit, they even have a System 390 spin.
So they're good to go.
So they're good to go.
This release, though, I think is probably, from an end user perspective, the most appealing because it ships GNOME 3.30, which has some performance improvements.
They've got better support for ARM images, including ZRAM on those, Vagrant images for Fedora Scientific.
But there's something else in this one that we wanted to talk about.
that Wes got a chance to kick the tires on,
and so we kind of took two tracks.
I have been managing a Fedora Cloud instance for years now,
and I've been dutifully updating, updating, updating, updating.
And then in the last release,
after having zero issues of updating
after release, after release, after release,
I decided I wanted to rebuild
and build a bigger,
stronger Fedora box and use
Cockpit to manage it and
move everything. I had multiple systems
and I moved everything to one system and then put
it in containers on that one system.
I wanted to experiment with that and I wanted to use Fedora
Cloud to do it.
I ended up using my NextCloud
instance in production. I attached
an extra 250 gigs of DigitalOcean storage to that droplet,
put my Nextcloud folder on there, and that's mounted inside the container.
It's all very nice, right?
So I sit down.
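The setup described above, an external volume attached to the droplet with the Nextcloud data folder living on it and bind-mounted into the container, can be sketched roughly like this. The device path, mount point, and container names are hypothetical stand-ins, not the actual configuration from the show.

```shell
# Hypothetical sketch of the volume-backed Nextcloud data setup.
# DigitalOcean volumes appear under /dev/disk/by-id/ as scsi-0DO_Volume_<name>.
sudo mkfs.ext4 /dev/disk/by-id/scsi-0DO_Volume_nextcloud-data   # one-time format
sudo mkdir -p /mnt/nextcloud-data
sudo mount /dev/disk/by-id/scsi-0DO_Volume_nextcloud-data /mnt/nextcloud-data

# Bind-mount the volume-backed folder into the Nextcloud container
docker run -d --name nextcloud \
  -v /mnt/nextcloud-data:/var/www/html/data \
  nextcloud
```

An `/etc/fstab` entry for the volume would make the mount survive the reboots that come later in this story.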
I'm like, okay, Self, you're going to take this Fedora 28 system up to Fedora 29.
We're going to start this upgrade all over again.
You got your tool belts on.
You're at the console.
I was.
It was good.
So the first thing I did is I logged into Cockpit
and just did all the security and system updates.
I couldn't do a distribution upgrade in there, though.
So I had to.
So I SSH into the box.
Oh, old school.
Yeah.
And, you know, the DNF commands are well documented.
Like, first you do, like, this update thing,
and then you install, like, an upgrade plug-in.
And then there's like three commands.
And then the final command is do a DNF reboot.
And this is neat because it ties in with systemd.
Reboots your system, and while it's in the boot up state, clean, installs the packages, checks it, and then reboots the system once more, and you boot up with all the fresh stuff.
Oh, that is nice.
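For reference, the "three commands" plus the final reboot map onto DNF's system-upgrade plug-in. A sketch of the Fedora 28-to-29 flow being described, per the Fedora documentation:

```shell
# Fully update the current release first
sudo dnf upgrade --refresh

# Install the system-upgrade plug-in
sudo dnf install dnf-plugin-system-upgrade

# Download the Fedora 29 packages, then reboot into the offline,
# systemd-driven upgrade; the box reboots once more when it finishes
sudo dnf system-upgrade download --releasever=29
sudo dnf system-upgrade reboot
```

The download step runs while the system is still live; only the actual package installation happens during that offline boot-up window.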
So I do that once with cockpit.
So that way I have a 28 update that's totally current,
all up to date.
Everything's working.
It just feels like the best way to do it.
Then I SSH in, and I do all that DNF stuff,
get the 29 upgrade going, does the same process.
It reboots, you know, does the install, upgrades it all.
So now I have a full 29 install that's up to date.
Just like that.
Boots up.
Feels like it takes forever,
because when it's doing that systemd upgrade
in the background, it's offline.
And so you're just sitting there waiting
for your box to come back.
And hoping that it pings again
or your SSH connection resumes.
I have been remotely upgrading servers
in one form or another
literally since the 90s.
And it's still just as anxiety-inducing during that moment as it ever has been.
It doesn't matter if I have an IPKVM or an HTML5 console
that I can use in any browser I want via the Ting dashboard for days.
It doesn't matter.
I still am anxious during that time when I can't ping the box.
I mean, it's just natural, right?
Nothing else is happening.
I'm just sitting there staring at the terminal.
That's all that's happening.
Well, even with the best of intentions, all the snapshots,
immutable file systems, like, things can still break.
It's still just, I don't know, man. I just don't know.
And of course, you know what? Now I think about it.
I didn't do a snapshot beforehand.
I think I have backups for that box,
but I didn't actually. Oh, you reckless
admin. I just went for it. I just went for it.
You can tell I'm from a day before.
I'm pre-VPS, Wes.
Anyways, so it boots up finally.
And I type in the Nextcloud URL.
Hit enter on the old box.
Page cannot be found.
Oh, no.
This is it.
After like years of doing these upgrades, like this is the one that broke.
This is it.
And I'm like, oh, all right.
What about my cockpit URL?
Because cockpit, by the way, it's just great.
So I log in, cockpit's working.
I can SSH into the box.
Okay, so it's up.
I can ping it.
Cockpit's fine.
I'm digging around.
I go to the container section
and I see that my containers are restarting.
All of them are just stuck in this restarting loop.
And it's hitting my CPU because I've got a database container.
I've got a NextCloud container.
Excuse me.
I've got an MB container.
And they're all just rebooting constantly, rebooting, rebooting.
As fast as the system can reboot a container, these things are cycling.
And I'm like, okay, what?
I knew I've seen this before.
How did I fix this before?
And I'm still looking at it.
And I look at the, because in Cockpit
it puts the console log right there, so you can click on
the container, you can actually see the console log in Cockpit.
And I see permission denied,
permission denied, it's just permission denied scrolling
past my screen as fast as it can possibly.
I know I've seen this
after an upgrade before.
Think about it for a second. Oh, of course.
It was SELinux. Of course it's SELinux.
So I had to get that, and that just got reset as part of the upgrade.
Right, it's because you enabled a lot of custom stuff, right?
You really tuned SELinux perfectly for your use case.
Oh, yeah.
You know me.
Not that off button at all.
No, not for me.
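For anyone hitting the same wall: the usual fix for container bind mounts spewing "permission denied" under SELinux is to restore or relabel the context on the host directory. The paths here are hypothetical examples, not the actual setup from the show.

```shell
# Restore the expected SELinux labels on the data directory (recursive, verbose)
sudo restorecon -Rv /srv/containers/nextcloud

# Or have the container engine relabel the volume for container access
# at run time by appending :Z to the bind mount
docker run -d -v /srv/containers/nextcloud:/var/www/html/data:Z nextcloud
```

The `:Z` suffix gives the directory a private container label, so it keeps working across upgrades that would otherwise reset a hand-applied context.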
I got a question, though.
So that's not the thing I was most excited about.
That was just my standard test I do with Fedora 29.
But the feature that I am the most excited about,
we had talked to Matthew Miller about it ages ago.
We've been teasing it for a long time.
It's finally landing in Fedora 29, and it's modularity.
This release is particularly exciting
because it is the first to include Fedora modularity,
which features a system that allows you to install a version.
Boy, this is hard to explain. It allows you to install, say, a version of Node.js that is not in the main package repository of the version
of Fedora you're using. So say Fedora 30 ships and you want to use a newer version of Node.js,
or you need to use the version that was in Fedora 28. You can now mix and match the versions with
these modular repositories. You no longer need to upgrade your entire OS just to get the new Node.js.
And what I love about this is it's sort of solving the same problem that snaps and flatpaks solve, but from a different perspective.
Snaps and flatpaks are very developer-driven.
That way, the creator of the software can directly distribute to the end user.
And I think that's perfect for a lot of types of applications, even server-side applications.
But package management, that's a sysadmin system. That's software distribution designed by system
administrators for system administrators. And there's still valid uses for the package management system to manage server software.
And the only real downside has been these version issues you run into when you're on, say, like, RHEL or CentOS or something that doesn't update very often, but all of a sudden you need a newer version of a package that isn't in the repo.
It starts to fall apart.
That's the perfect thing.
You're like, oh, I have safety.
I know nothing's going to break.
And then you're like, oh, wait, no,
but I need one more version
because there's a bug fix that only affects me.
And right now, one of our solutions,
Flatpaks and Snaps,
if you're dealing with server software,
it's really only Snaps.
And that works great.
Like for something like,
especially for like Plex Media Server,
it's the perfect solution.
But for like low-level stuff,
like your Apache server,
or maybe Postgres,
or Node.js,
do you really need to have that as a snap when you could maybe just have modular repositories
that allow you to pull in that newer version on the older version of Fedora?
It seems like magic, but it works.
Oh, yeah, it does.
I mean, we've been playing with it here the past couple of days.
The way to think about it is before, you know,
a Fedora release comes along and they sort of pick,
like, okay, here's the version of Node
that we're going to ship with this,
and there would be a different one for 28 versus 29.
Now they're just taking whatever list of Node versions they're going to support
and building them for all the Fedora versions.
So it doesn't matter which one; you can pick and match your version of Node and your version of Fedora totally independently.
So this isn't some crazy container system they're doing.
And you're not running multiple of those.
What you get to do is this.
You get one Node.js.
You pick your version, but you get one Node.js.
Yes.
And DNF now has this module system that you can tie in.
There's commands for it and you can see.
And you basically choose like, all right, well, I know that my app is targeted at the older Node 8.
So I'm going to make sure that that module is what I pick.
And then when you go and use DNF to install Node,
it knows which dependencies and which systems it actually needs to use.
And then it just gets updated as part of your DNF upgrade process. It's just one of the many packages installed.
That's the nice part, is that you can get some newer software,
but it's still tied in with the distribution.
And this is as much of a DNF change as it is a repo change.
They had to make changes in how they keep track of versions,
how they build them for the different versions of Fedora.
Yeah, a lot of build system changes.
It's like a multi-component change that they've made to enable this modularity.
That's also an impressive project-wide effort.
And you've got to imagine by the time 30 lands, this is going to really be dialed in.
Some of the things I like about this is,
obviously, as you say, there are a ton of third-party options
for getting software, even just building it yourself if you have to.
But distributions do a lot of hard work.
Fedora does a lot of hard work getting the stuff packaged,
having updates for the software that needs to be there,
security updates, whatever else that you have in mind.
With a system like this, you still get all those things.
You can just get newer versions, or as you say, older versions.
I'll say the story now,
as they call it, of how you get software
on Fedora is getting a little
complex. So you've got DNF
in the repositories, you've got
Flatpaks, you've got Copr,
and you've got modular Fedora
repositories, and I think I'm actually
missing... I mean, there's things like RPM Fusion,
which it just plugs in, but it's a separate project.
It's true. There is that, too.
So there's a lot of ways to get software.
When we were trying out Fedora,
I said,
I'll tell you what I feel like, because I feel like Fedora
is really great if you're just doing
get down, start working, you're doing a terminal,
doing a web browser. It's so great for that.
But what if you need it as your daily driver, Wes?
Can you get Slack and Telegram
and these things on there? How did that go for you?
How did it go trying to get these Electron apps
or these proprietary applications?
With FlatHub, honestly, it was
pretty good. Now, it went
I'd say 85% of the
way to being pretty seamless where I could
just go to the website, follow the
installation instructions, and click through on
the FlatHub.org website and install stuff.
That didn't quite work with GNOME software,
but doing those same installs on the command line worked just fine.
GNOME software just kicks an error up.
Yeah, so that was a little cryptic.
I didn't find an easy solution within the first two Google results.
I didn't try any harder than that.
I'm sure it's easy to fix, but I just wanted to play with it.
But I wasn't sure, so that's interesting that you ran into that,
because I wasn't sure if I was doing something.
So if this is an actual bug,
I might try to reproduce
the process and file a bug
for this because I thought
maybe when I ran into it,
I thought I was just being dumb.
Yeah, we'll have to play
with that a little more
because it was almost
really slick.
Yeah.
I should stress too
that just running it
on the command line
and installing the name
of the flat pack
works totally fine.
Slack, Dropbox, Telegram,
they're all there.
Once you add the FlatHub repo,
you have to do that first.
That part was easy.
That was easy.
That was like you just go to FlatHub,
they have a thing you can click,
and just if you're on Fedora already,
it's all integrated so that it calls up all the right applications,
it launches GNOME software, it installs correctly.
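The command-line route that worked reliably looks roughly like this. The application IDs below are the ones Flathub publishes for these apps, but treat them as examples and confirm with `flatpak search` before installing:

```shell
# Add the Flathub remote (one-time setup).
flatpak remote-add --if-not-exists flathub \
  https://flathub.org/repo/flathub.flatpakrepo

# Install the proprietary apps by their Flathub IDs.
flatpak install flathub com.slack.Slack
flatpak install flathub org.telegram.desktop

# Launch one of them.
flatpak run com.slack.Slack
```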
Yeah.
And then you're able to get Telegram and Slack
and all of the proprietary goodies
that you use to communicate during the day.
And it works on the Fedora desktop.
Any other kind of impressions you had when using it?
Any other takeaways?
Like there's that installer.
I mean, I'm just not a fan of Anaconda.
I will say that their newer, what is it, Blivet or something?
Their new version of their disk manager in the installer.
That's actually gotten better.
I like it a lot.
I hate the original version.
The new version is pretty easy. It's going
to work. And I will say that
once we got it all set up on our JB test
machine, it boots really fast.
Once it's all installed,
it's really nice. Yeah, it's
probably one of the fastest booting Linux distros
out there right now. I mean, so we've recently
tried Ubuntu 18.10,
18.04, obviously I'm very familiar with.
Elementary OS, we've recently
booted on there. I guess
it's about as fast
as Haiku was.
Right? Yeah, you're right.
It's about as fast. Which is really fast.
Which was pretty darn fast. Yeah, and then
so the test machine
is Intel-based. And so
it took both Wes and me a while.
There was a bit of a glitch.
You saw it in Firefox.
Oh, yeah.
But it took us
a little bit
before we realized
our test machine
was on Wayland.
Totally on Wayland, yeah.
We got a far way
into the workday
before we realized that.
And it was a small glitch
that went away
in seconds
and the rest of it
was pretty darn smooth.
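If you're ever unsure which display server a session landed on, one quick check on a systemd-based desktop is the session type environment variable (the `unknown` fallback here is just so the snippet runs anywhere):

```shell
# Prints "wayland" or "x11" inside a graphical session.
echo "${XDG_SESSION_TYPE:-unknown}"

# Alternative via systemd-logind, when a session ID is set:
if [ -n "${XDG_SESSION_ID:-}" ]; then
  loginctl show-session "$XDG_SESSION_ID" -p Type
fi
```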
Yeah.
Yeah.
I will say, too,
the setup,
like when you've got everything
installed and then you boot up into Fedora
then it has you like set up your first account.
Yeah, that welcome wizard thing. Pretty smooth.
Like it just does that and then it reloads the GNOME session
and logs you into your new account that you've just made.
I'm surprised more distros don't use that.
But a lot of them just get
it done during the installation process.
Which Fedora, some of those questions that they ask
post-installation, they've removed from the Anaconda installer.
There's obviously upsides to both, but I was glad that it worked
and, you know, it didn't crash. Wayland didn't crash.
Yeah, that's true. It didn't crash.
Oh, that's a good point.
Huh. In fact, I'd have to use it harder.
I bet you I could get it to crash.
You know, I'm pretty good at getting up shell to crash.
You are.
I get it to crash.
You have a gift.
I do have a gift for it.
Yeah, so this modularity, I think, long-term is the bigger deal
because you look at Fedora,
it's got essentially like a 13-month support cycle per release.
And that sort of means you're going to need to update.
If you were going to actually deploy this in a production server environment,
about every 13 months, probably just about right there on the nose, you're going to
need to go log into your servers and upgrade them. And one of the reasons I've been doing this
experiment with this droplet now for ages is because I wanted to know how far I could push it.
Is that really possible? I have now experienced five or six flawless upgrades, I've lost track of exactly how many.
It just keeps working.
Without issue.
Part of it helps that the applications
are containerized.
So they're isolated
from the system.
And I have to keep
those containers updated too.
But it has made
the risk surface
of the upgrade
a lot smaller.
And so I have
a very small
core Fedora install
that can just be
pretty standard.
Yeah.
And when you combine
this software repository modularity,
it means that when 30 comes around
and I'm rather compelled to upgrade,
if something I need to do my job
still depends on 29's version of something,
modularity will allow me
to upgrade to Fedora 30
but still run that software
dependent on Fedora 29.
When you combine it with RHEL, too,
it seems like a really good story
because, like, let's say you're developing on Fedora
and a new one comes out and you're not ready to update Node
because your whole production stack's based on this one version.
Well, you can keep it, and then RHEL's released,
and, of course, it's going to have an older version,
but if both of them have the modularity set up,
well, it doesn't matter.
You just pick and choose and match them.
Right. This modularity is clearly going to be
way more important when it hits Red Hat Enterprise
Linux. And that's
in light of this acquisition.
You have this, this is the first
release since we know about
IBM wanting to buy Red Hat. And
here's the Fedora project. And the thing
that we are probably, collectively, the
most worried about is, what the
hell's going to happen to Fedora long-term?
Because they're becoming a hybrid cloud company.
And I don't really know what a hybrid cloud is.
I know what the cloud is.
I know what hybrids are.
Those are cars.
So I don't know what a hybrid cloud is,
but whatever Red Hat's going to do
to be the dominant platform of a hybrid cloud
probably doesn't include a lot of Fedora on the desktop.
Yes, right?
Like, obviously there's lots of stuff
that isn't just desktop-specific
that gets put in and tested in Fedora,
but we don't talk very much or hear very much
about the Fedora server story.
So maybe even internally, maybe there's ways
that they test all of that, but 13 months,
you're not going to run your five-year
production application on it.
It's a weird future.
And that's just really the beginning of it.
I mean, Gnome Shell is primarily a Red Hat project,
in a big way, GTK.
And so using Fedora now, in light of that,
you sit here and you realize this is an important project.
This is a pretty important project.
And I don't want anything happening to Fedora.
I want it to stick around.
I want to see 30 ship because when 30 lands, there's some –
I think we're going to see more of Project Stratis, or Stratis Storage, I think it's called now, land in 30.
I think we're going to see a more complete version of Pipewire land in Fedora 30.
Plus you're going to see another go at this modularity stuff in Fedora 30.
And aren't they also saying
that that's when like Silverblue is going to be...
There's hopes there that Silverblue
will be pretty prominent.
Maybe even the preferred workstation choice
by Fedora 30.
I don't know if they'll get there,
but lots of neat stuff there
with OS tree and immutable system packages.
Yeah.
Yeah.
So overall, if you're a Fedora user,
this is just another easy slam dunk.
And with Flathub and Flatpaks now and containers for server-side applications,
the risks for upgrading between Fedora releases are getting really low.
You've got to figure, if you kept that installation around for a while,
when Fedora 30 came out, those Flatpaks are still going to work.
Just fine.
And they've done some things, you know, like it's easy to get H.264.
They've even got the write-in GNOME software.
You can enable the repo for Google Chrome.
Really?
Yeah, I was impressed with that.
I didn't do that.
I've been using Firefox.
And that's totally fine, too.
You've got a browser, you've got a great desktop and a good shell and all of that,
and then you've got all the flat packs for your proprietary Electron apps.
That's it. That's what I need.
That's my production machine for just day-to-day use.
Yeah.
And DNF, every time I use that tool, I'm like, damn, this is great.
It's come a long way from the days of Yum.
It's the package manager you'd make if you were going to make one:
if today you were going to start over and build a package manager,
DNF is probably what you'd come up with.
There's still some things, like there are times where it's weirdly slow
or some of the things don't feel very optimized.
But you're right, like the principles behind it,
the concepts, the abstractions there,
they're really solid.
Yeah, so you can get it at getfedora.org
and go grab an image.
They don't have the upgrade posts out yet,
but they almost always have those.
Oh, nope, still
the old ones. But actually the commands
still work, I think.
You just have to change the release from 28 to 29.
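For reference, the standard Fedora release-upgrade flow with the release number bumped as Chris describes. These commands are current as of Fedora 29, but double-check the official upgrade post before running them:

```shell
# Get the current release fully updated first.
sudo dnf upgrade --refresh

# Install the system-upgrade plugin if it isn't already there.
sudo dnf install dnf-plugin-system-upgrade

# Download the new release's packages; this is the number you
# change, here from 28 to 29.
sudo dnf system-upgrade download --releasever=29

# Reboot into the offline upgrade.
sudo dnf system-upgrade reboot
```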
Just don't blame Chris
when your machine doesn't boot up.
Yeah, so if this modularity thing sounds confusing
and odd to you,
trust me, it is.
And we have a link in the show notes
where you can read more about it at linuxunplugged.com slash 273.
And they have a diagram up there on how modularity kind of works.
It really just comes down to making all the different components work: making the repository descriptions work correctly, making DNF understand what you're asking for, all of that.
And then the application is being linked to the right libraries.
I think it's a pretty cool system.
I know they've been working on it for a long time.
I'm surprised it took so long.
I mean, this feels like something we should have had ages ago.
Yeah.
Well, I had an opportunity last week, was it,
that we did our Pipewire episode?
Oh, yeah.
When that episode came out, I was down in Santa Clara
at the Intel campus at MeetBSD.
Look at you.
They have these, boy, these BSD guys, they have a lot of conferences. But MeetBSD is
only like twice a year in California, like the one down here in the States.
Oh, holy.
That's, or once every two years, I'm sorry.
Oh, yeah, okay.
No, yeah, the other way around. They have plenty of other ones going on all the freaking time.
But this one, the one that's put on by iX Systems is once every two years.
And so I went to the last one, and that was at the Berkeley campus.
Oh, yeah, right.
Yeah.
And so I got a chance to go to this one.
And I didn't know what to expect exactly because not only was this post-Linux Academy acquisition, so I wasn't sure if that
would be awkward or not, but also
at the time, it was when Linus
was getting a whole bunch of, you know,
all the attention, and he was taking his break.
Like, I just wasn't sure what to expect.
And it must be interesting, too. I mean, you're coming into
a community, obviously, you know
and like that the BSD is, but it's not something you use
in your day-to-day life. Yeah. And I know some of the
people there. And the other thing is, too, it's like going to learn something about something that you
kind of are tangentially familiar with, but not like crazy familiar with.
So it's not like the kind of event I would normally go to, but I always find it to be
worth the time.
So I knew when I had the opportunity to go to this next one, I knew I was going to go.
There was just no question.
But I thought what I would do, because I have such a bad memory, especially when it's a week later,
I took an audio recorder with me to capture some of my thoughts there at the conference. And I
wanted to kind of recreate some of the conference experience for you. So this was me arriving at the
Intel campus, which, holy crap, the Intel campus is multiple blocks with multiple sky bridges
and multiple parking lots.
That's what matters, the sky bridge count.
When it's more than one sky bridge, it does matter.
Like, this is a big place.
So, like, the first 15 minutes of me getting to MeetBSD were figuring out how to get from the parking lot to where I was allowed to go.
But it was fascinating to be at their home office.
So this is day one of MeetBSD California 2018.
I've said my hi's to, hey, Alan.
There's Alan.
Hi, people.
There's Alan.
I said my hi's to Alan.
And we're getting seated.
They're showing a montage of BSD people.
I don't recognize any of them.
I just keep waiting to see Alan up there.
And you know what else I found?
It's mostly just them eating food.
but we're getting ready, I think it's going to be a good day
because we're at the Intel campus
in one of their
convention rooms I guess
what do you call this room?
gorgeous
yeah it's a big room
and they got a big screen, they got lights, they got professional audio
and they're getting everybody settled for what is clearly going to be a great day. And I
got an Intel badge for the day. They gave everybody an Intel badge so you can get to
certain areas of the building and steal all their secrets.
Hey, there's a picture of Alan. Lots of pictures of Alan. He's a bit of a celebrity around
there. So the first talk was good. It was a really good talk. It was more of a fundamentals talk to get us going in the morning. Producer Q5Sys did a great Q&A in
the morning, got everybody talking. It's a crowd of like 80, 90 that first day. So it was big enough
that you felt like you're an event, but small enough that we could pass the microphone around
the whole room and everybody did introductions. Everybody talked about what project they work on.
That is kind of intimate.
That's neat.
Yeah, and lots of joking about Linux because a lot of people were there because they're passionate about BSD, but they make their day money programming or developing on Linux.
A lot of device driver guys there.
Network performance was a big focus for a lot of them.
I have a blog post where I go into some of the details of the companies that were there,
linuxunplugged.com, if you click on the blog link at the top.
Somebody from eBay was there, Juniper Networks, Cisco, Groupon, obviously Intel, just lots of companies that are interested in that low-level stuff.
And so it was fascinating to hear what they're working on as they pass the mic around. But the second talk, I think, was really symbolic because it was from an individual
in Intel's open source group
who has been a graphics driver,
a low-level graphics driver developer
for Linux for years.
Like, we owe this guy a lot of thanks
for enabling tons and tons of low-level graphics capabilities in Linux, not just on Intel hardware, but in general.
He now works on enabling general BSD technologies.
I don't know what that means.
But there was a clear tone to his talk.
He had gotten that presentation at this event for 80 or 90 people super scrutinized by the Intel legal and brass in a way that his talks never get scrutinized because there was an area in there that they felt very delicate about.
This is in my opinion.
And that's the area where he apologizes for how bad Intel screwed the BSD community with Meltdown and Spectre.
And that's got to be, right?
That has to be on everyone's mind.
And in there, and I have a picture in the blog, and I have the quote.
In there, they adopt a security first pledge that he makes.
And you can see that in the blog post.
But here's my thoughts after that talk.
It's lunchtime, my favorite time of day one at MeetBSD 2018.
I just wanted to share some initial impressions of the Intel campus,
because that's one of the coolest things about going to this event,
is getting access to the Intel campus with our security badges and our whatnots.
But it is only a small area.
We're confined to a specific area of the Intel campus.
But even there, you can still glean a few things about their corporate culture.
Of course, you get to see how they do security, and you get to meet some engineers.
In fact, I've already met a couple of Intel engineers who are big Linux fans, and a couple of them are BSD fans too.
But don't tell the MeetBSD coordinators that. They wouldn't want to know.
And it's unlike going to an event
center that is designed to hold these events. You are seeing a corporate building be adapted
for an event. So we're using their corporate auditorium that they must give big product
presentations and Apple style events to their own employees in this room that we're sitting in.
They're talking about BSD now.
And they've had to give us access to their network so we can get on the Wi-Fi.
There's facilities that have to come in there to provide food, like catering.
And Intel has to facilitate all of that.
And I've gotten the impression that it started a little broader.
They were a little more concerned.
They wanted everyone to have an escort when they left the room.
They didn't want to provide internet access.
They wanted to have reviews of what was going to be discussed.
And over time, as they worked with the MeetBSD coordinators, Intel loosened up.
They kind of became a little more hip to what was going on, the idea of it, the unconference fact of it.
And now we just have, like, full run of this area of the Intel campus.
So day one has been really about everybody kind of getting to know each other.
One of the first talks out of the gate was Kris Moore, and he talked about people building future BSD projects off of the TrueOS build system that they're creating.
Don't call them distributions,
but the word BSD distribution did sneak in there once.
The second talk was from an Intel employee who, in part, wanted to apologize for the fiasco of Meltdown and Spectre to the BSD community, but also was talking about some of the current things
they're struggling with and trying to fix, things they were able to solve in Linux that they're now
trying to solve in BSD. And that was a lot more interesting than I expected,
to hear an Intel insider talk about the things they're working on day to day
for open source that just generally doesn't get the light of day.
Now it's NetBSD after lunch,
and then the one I'm looking forward to, a ZFS discussion panel.
Yeah, I mean, when you've got the room full of folks that you do,
you've got to talk about ZFS.
And they did.
It's so funny.
The ZFS panel now,
they've done this before,
and they don't bother going up there
and having a talk anymore
because they really just have it down to a Q&A
because everybody just has all of these questions.
How do I solve this problem?
It's a room full of people trying to solve storage
problems. And the thing about the BSD community
is they don't have this debate
about what file system to use.
There's only one file system.
I mean, there's two. There's UFS.
Yes, yes.
But you know which one they're for, right?
You know what you're doing, where you're using it.
Yeah, the original developer of UFS
gives a talk that I cover here in a second.
So the ZFS panel, they go up there.
Alan Jude, of course, up there.
Dan Langill up there as well.
It was great.
It was really good.
I got to hang out with Benedict, Alan, and Dan.
That sounds like a fun trip.
It was a lot of fun.
So here was my reactions after getting to hear a little Alan Jude ZFS talk.
It's the end of day here at Meet BSD, for at least the first day.
And the conference had a pretty strict schedule, but a very relaxed feel.
I mean, every 45 minutes or so, something's happening, for the most part.
There were a few areas here and there where the conversation got pretty in-depth,
and things maybe ran a little long.
It's a very enthusiastic crowd.
Everybody here is very passionate about this particular technology area.
And they just wrapped up the ZFS panel, which got a bunch of questions from the audience.
The entire time, the panel did nothing but take questions from the audience.
Now that we're all done with day one, it's time to go have a pizza social.
The BSD people are serious about their food, just about as serious as I am.
Where they had this pizza social, they had IT vending machines.
Did I show you a picture of these?
Oh, yeah.
That is neat.
Touchscreens that run the whole length of the vending machine,
and it's got a companion storage box next to it that's also the size of a vending machine.
And inside it is Lenovo ThinkPad batteries, MacBook dongles, headsets with microphones,
like your average everyday.
HDMI adapter.
Yeah, a mouse, you know, that kind of stuff that you just IT accessories.
And you quote unquote purchase it with your company badge.
So you run your Intel badge and then out comes a mouse.
It's the craziest thing.
So that pizza social was
pretty fun, because I just got to chat with everybody. And I left early because I've
done enough of these events. If it's my core crew,
sometimes I'll go out and I'll party late, but if it's not my core crew, I'm like,
well, it was great seeing you guys. You guys have a great night. I'm going to bed. And I go to bed
like at nine o'clock and I get sleep, and I don't get sick.
You see, the other folks, though, they didn't really follow that.
And I got there early, and I thought I was in bad shape, but like the whole crew was hungover.
Good morning.
It's day two at Meet BSD in Santa Clara, California.
Today is October 20th, 2018, and the BSD folk are trickling in. You feed them,
and they show up. Some of them were out quite late last night, 1 a.m. or so, being very geeky. I,
on the other hand, being the seasoned fest goer that I am, was in bed by 10 o'clock. You see,
I got these systems I keep in place. Keep hydrated, watch the hands, and go to bed by 10 o'clock.
And then you don't get any con crud.
But here, for these people, it's a rare opportunity to actually get together and talk in person.
They do all of their communication primarily over IRC and email.
So to actually get here in person and chat is a nice opportunity.
So a lot of them show up an hour before the thing even starts.
So think about that.
They've been up until 1 a.m., and then they show up at the Intel campus at 8 a.m.
First talk doesn't even start until 9 a.m., but they get a chance to catch up with someone else, to chat with somebody else.
And some of them are just pretty hardcore.
Plus the free pastries, coffee, juice, and fruit doesn't hurt either.
Today they have a very special guest joining us.
We'll get to hear from him, and it'll be a nice, steady pace.
But I'm looking forward to getting a few more opportunities
to have a few conversations with people.
Now this special guest was sort of teased.
Like we weren't sure if he's going to make it.
He might make it.
And I asked Denise, who was organizing the event,
how do you not know if your star speaker is going to make it or not?
Like that's sort of –
That's rough.
She's like, trust me.
I know.
It's very, very rough.
We'll just vamp.
It'll be fine.
But they had been kind of teasing this might happen,
but they didn't put it on the official schedule because they weren't sure.
But in the BSD land, you have your royalty.
There are people in BSD that are real royalty that the group has a reverence for. If they speak,
everybody stops talking to listen. When they enter a room, all the heads turn. There's
real respect. And they're not snooty. They're really down to earth and they're nice. In fact,
if you weren't part of the BSD community at first and not maybe paying attention to the social cues, you wouldn't know they were anybody special.
Because, you know, they're just another person hanging out with the BSD people.
But when you know what to look for, you can tell how respected these individuals are.
And maybe one of the most respected is the creator of that file system, UFS.
And he was able to make it.
He walks up on stage,
and the moment he's up there,
he has all eyes.
Nobody's looking at their laptops.
Everybody's paying attention.
Even though he's probably told the story
they've all heard a hundred times before.
Everybody's paying attention
because, I don't know,
it's sort of like when you're that family member
that you love the stories the family member tells.
He goes up there, he tells one of his famous stories.
Well, that was a great history talk by Dr. Kirk McKusick.
The room really seemed to enjoy that.
And that was pretty neat because they weren't sure
if Kirk was going to be able to make that talk
because of jury duty, and at the last minute, he was able to make it.
So they popped out some lightning talk rounds.
They had a batch of lightning talk rounds and popped in Dr. McKusick in there.
He went a little long, though.
I don't think anybody's going to mind.
Yeah, they follow the schedule pretty closely until Dr. McKusick's up on stage.
And then he can take as long as he likes.
There was sort of a different tone to the second day,
and I go into this a little bit more in the blog post,
but in the second day, it was a smaller crowd.
I thought Saturday would be bigger, but the second day was a smaller crowd.
Oh, that is weird, right?
You think more people would go on a weekend.
But everybody, so you'd have that first day where everybody had done introductions
and sort of broken the ice.
Then everybody hung out until like 1 a.m.,
getting drunk and having a good time.
And so the second day had a really casual, friendly vibe.
Just like that, day two of being inside Intel comes to a close.
They're doing the breakout sessions right now,
and then they're going to wrap things up.
The end of the day actually is the beginning of the party for FreeBSD's 25th birthday.
So that's what we do next.
Day two, though, is the one to go to.
If you could only make it to one day, I'd almost say day two.
Because by day two, everyone's guard's down a little bit.
Everybody's cracking jokes.
It's more friendly.
It's more laid back. It's more natural.
It's a great, fun scene.
It's like hanging out with friends day two.
Day one is great, but everybody's still kind of
getting to know each other and letting their guard down.
It takes that dinner, that hanging out,
and that next day where things have thinned down
and it's just a little bit smaller of a crowd.
All those things add up to everybody being a little more relaxed, and it makes it a really fun environment. It's been pretty cool to hang out inside Intel. They've limited where we
could go, but where we've been able to go has been very nice, and they've been very gracious.
You combine that with the constant supply of fruits and snacks and coffee. It's just been a
really good educational event.
And now I'm going to go eat a whole bunch of pizza in the name of FreeBSD's birthday.
They had a cake.
They had, actually, it ended up being a taco bar.
That was the other thing.
I was there with listener Ryan,
and friend Ryan, really,
and he and I are Linux admins.
I used to be, and he is currently.
And so we kind of shared like a common bond
over like processing this BSD information
because we both have like a Linux-esque admin background.
And he's working on tons of systems.
He's got a whole bunch of BSD systems
he's responsible for too.
So he's coming at it from two different angles.
So I asked him afterwards, I said,
what do you think, was this valuable?
He was like, every talk I learned something,
every single talk.
And because it's not huge,
you didn't have to go to multiple rooms.
You stayed in the auditorium the entire time.
You weren't scrambling around trying to figure out which thing you were going to listen to.
Yeah, you just basically had the hallway track or you had the auditorium.
And the hallway track was constantly, Intel provided like this area where you could hang out and chat.
And there was always like, you know.
It was easy to do.
You weren't in the way.
It was just a space for that.
Yep, exactly.
Or you could go sit in the main auditorium and catch a talk.
And so you always were able to catch all the talks.
Or if there was one that just didn't really appeal to you, you know, you could take a break.
It was really nicely done.
A lot of the Linux events that we go to now are so big that there's multiple tracks happening at the same time.
And you get there, and this is, I think, a common problem that newcomers to conferences have: you get completely overwhelmed because you can't attend it all.
And there will often be multiple talks going on that you're interested in at the same time.
And it's really overwhelming.
Whereas with MeetBSD, it's all in one room because of the size and the scale of the event.
And it was a good size for me.
I really enjoyed it.
It seems like from what you've said, there's a bit more
of a cohesiveness
or, I don't know, at Linux events
there's obviously lots of people there with the same mindset.
They're almost always wonderful, right?
Like-minded people, people interested in the same
stuff that you nerd out about. But you're
interested in maybe different things, like maybe you care
about Linux or you just care about open source desktops
and Linux is an implementation detail. Or you
really love GIMP and you don't care at all about containers, right?
Was there anything different in this in that, like, if you go here, you probably use a BSD stack, right?
You use OpenBSD on your firewall.
You use FreeBSD for a whole bunch of your desktops.
Was it different?
You know, there is a cohesiveness and a singularness of thought that we don't have in Linux, for sure.
More so than we have in Linux.
But at the same time, more than I've ever seen, I saw a lot of fragmentation.
I was really surprised.
So behind the scenes, I think one of the things that I learned in this event that the BSDs are really going to struggle with is modernization.
A lot of people like to use BSD because it's how things have been done for a long time,
which means when you need to modernize, people don't like it.
Before the general MeetBSD people attended, there was a developer summit.
And in there, some of the developers were trying to make a case to the other developers that we need to move to Git off of SVN.
And that wasn't going over.
I mean, the sell is, well, okay, well, we'll build a bridge
so you can keep doing SVN, and on the back end,
we'll have that going up to Git.
So there's a resistance to change, which could be challenging for them.
Yeah, I mean, I think that's both, like many things,
that's a positive and negative, right?
Yeah, it means that it's more carefully crafted.
But it means you have certain developers who are like,
come on, man, we need to start doing this in a better way, in a newer way.
And then you have other people like,
this has worked fine for 20, 30 years.
We don't need to change this.
Yeah.
And that's a hard argument that they get stuck at, I think.
But they're managing it, and they're trying to walk that line
while still being respectful to the old way of doing things.
So they're trying to walk the line.
The other thing is, I was really surprised.
Like, they asked how many people in the room are using Trident,
you know, the new desktop?
Like, only the developers of Trident were using it.
Interesting.
A lot of them were just using the command line.
Like, there's a lot of differences in how they use it,
but I'd say they have more in common
than the average group of Linux users does.
That was my over...
I was surprised where they differed.
And I realized, damn, that could be a drag on the project in some cases.
But I felt like they differed in less places overall.
I mean, yeah, it's impressive.
BSDs, I mean, obviously there's separate communities and there's separate BSD projects when we say the BSDs.
But in general, it seems like they just have a—they've grown a really good community that's supportive and interesting.
And they all do a good job of just making cool shit.
Professional.
I felt like it was a professional event.
It was well run.
iX Systems does a great job.
And it was really gracious of Intel to host it there.
So anyways, that's my take on MeetBSD.
It was totally worth going even as a Linux user.
I would go again to the next one.
I agree.
I don't think every talk I learned something, but absolutely every day I learned several things that I thought were valuable.
Is there anything you think Linux conferences or conventions could learn from the way MeetBSD happened?
It's really hard to say because one of the things that made it so great was the size.
Wasn't so big.
And that's really hard to deal with because we just have a lot of people that come to these events.
But that's something worth, I think it's, I do think it's something worth thinking about.
And I think it's something that organizers of Linux
events should consider doing is attending some of these
BSD events. Because
it feels like you're not overworked,
but you're getting stuff done, you're learning stuff.
It's worth your time, but it's not
like you don't feel like you're at work.
And you can feel like maybe you don't have so much
FOMO, you feel like you understood the whole
conference.
I want to do a follow-up to last week's episode
where we finally got that review of the Dell Precision 5530,
that monster of a system.
It looks like a bigger version of the XPS 13.
But I wanted to follow up and say
we've posted a companion blog post of the review
so that way you can see what it actually looks like
because it is a beautiful machine.
And I posted some additional
thoughts about the review and links to
the benchmarks that I mentioned in that
review. So if you go to linuxunplugged.com,
look for the blog link at the top of the site,
and then you'll find the post for both MeetBSD
and the Dell machine.
It's a good-looking machine, so it's like we had
to show some pictures of it.
But be honest, the real reason was just to show off that cute picture of Dylan.
I did debate putting that picture in, but I thought it was good for scale. You know, here's what it looks
like with the kid in there. It's a good looking laptop
and he's a good looking kid. So you put them both in there. I mean, the thing I like about that too is it's the
perfect example of like, you might not bring it with you to the beach every time, but
if you need to, you definitely can. Yeah. Actually, this makes me sound like a hipster,
but I actually walked like a mile to a coffee shop carrying that laptop. And I thought,
you know what, if I can carry this laptop a mile, that means it's light enough.
And so my review of the weight that I put in the blog there was,
it's heavy enough that you notice it's in your bag,
but light enough that you can carry it a mile to the coffee shop.
So, you know, good enough, right?
It's good enough.
Anyways, I don't know if we're going to keep doing this blog stuff.
So I did one for MeetBSD and did one for this Dell Precision 5530.
I'd like to know your thoughts.
Tweet me at ChrisLAS or go to the contact page.
Is it useful?
I think for like
the hardware it would be
to see a picture
of what we're talking about
and then you can
check it back later on.
Like if you get closer
to time to make
like a purchase,
you could always come back
and check this.
But at the same time,
it means I'm out there
taking pictures,
we're writing up blogs
and stuff.
It's a fair investment in time.
Not negligible,
but if people are going
to look at it,
if this would be
a helpful addendum
or extra reference for the show,
it'd be interesting to do.
We're exploring other ways
of just sort of, you know,
increasing our coverage
and the ways we do that.
And this is one way we could do it.
You know, there's lots of ways
we can make that happen.
And boy,
speaking of expanding coverage,
Crazy Noah is blowing out the doors.
Crazy Noah is right.
It's just, when I saw this, I thought, we cannot keep up with that guy.
My God, man.
My God.
So Ask Noah is near 100 freaking episodes now.
And it's massive because not only is he doing like a daily show push to get to episode 100,
so he's doing like episodes on like nearly every day
this week, next week.
But for episode 100,
he's going to have a party
that looks awesome.
It's at the Tamarack,
I think it is,
Tamarack Taproom.
And man, does that look good.
And I think Noah's
going to be buying too.
So Ask Noah 100 is coming up.
He's got a lot planned for it, so we wanted to give him
a shout. So, if I show up, I can ask him
Linux questions, and he'll give me beer?
That said that word? I wonder. I mean,
so it's going to be, I think it's
November, I think the party's going to be November 7th.
And I'll have a link to
the taproom in the show notes, if
you want to check it out. It is in
Minnesota, so just keep that in mind.
I'll tell you what, though.
If I could make it, I would so.
I called him up.
I'm like, Noah, Noah, when's your party?
Can I make it?
He's like, it's on the 7th.
I'm like, damn it, Noah.
I'm going to be traveling on that day.
So I can't make it, but it looks really great.
And 100 episodes.
That is not an easy thing to do.
That is a lot of content.
A lot of it?
I mean.
Yeah, he's going to be doing, with these daily episodes,
he's going to be doing like format experiments to see what people like.
Try switching up the format of the Ask Noah program and see what sticks.
So he's going to be looking for feedback and stuff.
So if you've listened to the show before, maybe it didn't click for you,
now's an opportunity to listen while he's doing this
and let him know what did click.
Yeah, exactly.
And he'll do more of it. He's pretty keen on that.
That's the hardest thing, I think, right?
There's a lot of ways to present some of this stuff
and not everyone likes all of it,
so your feedback goes a long way.
Now, just a couple of quick links
before we get out of here.
A couple of workarounds.
If you're still using Dropbox out there
and you don't have ext4,
there is now DBXFS.
You can mount Dropbox folders
locally as a virtual file system
on XFS. There's one.
It's using Fuse, as you probably guessed.
And then I'll link to another. I knew, I love
the internet. I knew this would happen. I knew this.
I said, well, the first thing I said, you know, you could just make a loopback.
Just make a loopback.
Put a loopback image. Make a loopback.
And put that right there on your XFS.
And make that ext4
and you're good to go. That's why I said that, right?
You said exactly that. Make a loopback. Loopback.
Well, it's easy
to do, but I just didn't really
get around to doing it yet. So
Ange P wrote
up a rough how-to
on Reddit to do this loopback thing and then
mount it and make it your Dropbox folder
etc. So I'll link to that in there too if that's something
you want to do. But you know,
I'm just saying,
I'm just saying this could be
with the effort you put into that, that could be like
half the effort it would take to move to Nextcloud. I'm just saying.
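The loopback trick described above can be sketched roughly like this. The paths and image size are illustrative, not from the show or the Reddit post:

```shell
# A rough sketch of the loopback idea: keep an ext4 image file on a
# non-ext4 filesystem and mount it as your Dropbox folder.
PATH="$PATH:/sbin:/usr/sbin"        # mkfs tools live in sbin on some distros
IMG="$HOME/dropbox.img"
dd if=/dev/zero of="$IMG" bs=1M count=64 status=none  # small demo image
mkfs.ext4 -q -F "$IMG"              # format the plain file as ext4
mkdir -p "$HOME/Dropbox"
# Attaching a loop device needs root, so the final step is:
#   sudo mount -o loop "$IMG" "$HOME/Dropbox"
# then point the Dropbox client at ~/Dropbox as usual.
```

In practice you'd size the image to your Dropbox quota and add the mount to fstab so it survives reboots.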
Yeah, and I think one of the problems
is there's a lot of things going on in Dropbox.
It can be hard to emulate with something like dbxfs.
there's a lot of things you give up
because they have all that magic with the file system.
A lot of these either require connectivity
or you're going to have to do manual syncing.
Let's just go the FOSS way.
Yeah, you know, the thing is,
NextCloud has been really,
so this blog post I did for MeetBSD,
I use NextCloud to, so I'm taking the pictures on the phone.
Yep.
And then I'm writing the blog post on the laptop.
Well, how do you think I'm getting the pictures from the phone to ReText, which is what I'm writing up the blog post in?
I think you're sending them to yourself one by one on Telegram.
No, that'd be crazy.
I'm using Nextcloud.
So I take the pictures, they sync up to Nextcloud.
I've got a specific MeetBSD folder I created beforehand to have them sync to.
Oh, nice.
And I am, so what I'm doing is I take the pictures.
I go to the ones I like in the photo album.
I share those to the NextCloud app, which is set already to go to the MeetBSD folder.
I upload them, sit down at the old laptop.
They're already there right there on the file system.
Bada bing, bada boom.
I'm working.
Bada bing, upload them up, and done.
So it's a nice system.
I miss Dropbox still in a few ways,
but it is working for me.
And my Fedora upgrade didn't break it.
So all in all, I'm calling it a win.
Calling it a win.
All right, well, they haven't said much for a little bit.
So I encourage you to go get more Popey and Wimpy
because you're probably missing them right now.
I know I do.
I know.
And they had a really good episode on Thursday where they were first with the Linus News.
And they had a fill-in for Wimpy and Jesse, and he did great.
It was great to hear Jesse again, and he was on there.
And I miss Wimpy, though.
I'm not going to lie.
I'm not going to lie.
But it was a good show nonetheless.
Another excellent edition of the Ubuntu Podcast.
Go check that out.
Go get more Wes Payne,
techsnap.systems,
and at Wes Payne
on the Twitters,
P-A-Y-N-E.
Not P-A-I-N.
Damn right.
Get it straight.
No, it's classier with the Y.
Classier.
I got to say.
And I'm at ChrisLAS.
The network is
at Jupiter Signal.
And of course,
everything we talked about today is linked at linuxunplugged.com slash 273.
So if you want to find something we talked about, grab a tool.
We got all the links this week.
So, I mean, it's like last week.
We take all the time to add them.
So you got to go click on them.
Somebody better click them.
If you have got a great app pick, too, we didn't have one this week, let us know.
Let us know.
Yeah, just take all the cool software that you find and hurl it at us.
I would.
Rapid fire style.
I would.
And we'll sort through it.
Jeez, that would be great.
That would be great.
Also, call back to last week's episode.
Used our Jack audio setup on the road
to record Linux Action News.
It worked out great.
We were crushing it,
so those scripts,
checked out in production.
Good work, Mr. Payne.
All right, thanks so much for tuning in to this week's episode of The Unplugged Program.
And we'll see you right back here next Tuesday!
Get it out of here.
...make to-dos in plain text. Well, I found this really cool one.
There's this little-known Twitter account that you can follow that will sometimes tweet new applications.
Nobody knows about it.
It's my little secret.
It's the at SnapcraftIO Twitter account.
And it tweeted about TaskBook.
Some mysterious individual has discovered this and let us all know.
So you can Snap install TaskBook, and it is a text-based to-do manager.
You get a chance to actually play with it
before you tweet about it, Popey,
because it looks really good.
No, I didn't actually.
I did the bare bones of installing it.
Oh, that looks good.
Yeah, it does.
It does actually look quite good.
It's sort of that perfect blend
of syntax highlighting, text-looking,
kind of console-based looking
kind of text editor.
And with like a rich command line interface,
which are two of my favorite things.
I know.
I did switch.
I'm going to look out for your post about Dropbox
because I've switched away from Dropbox to Syncthing.
So I'm using Syncthing again across all my machines,
and I love it.
So why Syncthing and not something like Seafile or Nextcloud?
Because Syncthing is kind of complicated.
Not really.
All I did with SyncThing was snap install SyncThing, run it,
and then it discovered the other machines on my LAN.
It said, do you want to add this one?
I went, yep.
What about this one?
Yep.
And I just added them all, and they all synchronized with each other,
and I just dropped files in a folder, and I'm done.
So it's not that much harder than Dropbox, to be honest.
I've got to try that again.
I always had problems with the discovery not working here at the studio.
Ah.
Yeah.
And it's a snap now, so that also is nice,
because the other issue I had was my sync thing systems
would lose version sync, if you'll allow the terminology.
Yes.
Yeah, they get out of sync with each other, don't they?
But that seems like with snaps, that wouldn't really be as much of a problem.
No.
Hmm.
All right,
I'll give it a go again.
I will.
That actually is very encouraging
because SyncThing's pretty nice.
Yeah, there was a long time
where SyncThing was like
half of my personal infrastructure
of getting things around.
I should do that again.
It feels like it.
It worked really well.
I just stopped using it,
but it wasn't because
it wasn't working.
No, it's like a real
network file system.
It's like everything's available
on all your machines.
Just put it somewhere
and there it is over there.
They sync to each other, peer-to-peer.
If I remember right, you can also do like proxying.
So if you had like a SyncThing in a DigitalOcean droplet,
it might get around some network issues.
See, I did try that for a while, but then it just, that also stopped working.
And this is what made me lose faith.
I guess I wouldn't say lose faith,
but this is what made me reconsider using SyncThing,
was I had these issues where randomly they wouldn't discover each other,
or maybe I could see a local system, but I couldn't see the one on the droplet.
Actually, the problem I'd often have is I could see the droplet, funny enough,
but I couldn't see my local boxes.
Yeah, I don't know.
But again, this was ages ago.
The only problem I've had with it, with machines not discovering each other, is because the machine that couldn't see the others had an IPv6 address.
Ah, I was going to say it wasn't using the same DNS server.
Well, it was using a different protocol to talk,
and so it was looking for the others on IPv6,
and none of my machines have IPv6,
so I just disabled IPv6 everywhere, and now everything's working sweet.
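Popey's fix was disabling IPv6 system-wide. A lighter-touch alternative, sketched here rather than taken from his setup, is pinning Syncthing itself to IPv4 via the listen address in its config:

```xml
<!-- Fragment of Syncthing's config.xml (typically under
     ~/.config/syncthing/). The tcp4:// scheme restricts listening to
     IPv4 only; the default "default" value includes IPv6.
     This is a sketch, not Popey's actual configuration. -->
<options>
    <listenAddress>tcp4://0.0.0.0:22000</listenAddress>
</options>
```

That keeps IPv6 available for everything else on the machine while the Syncthing devices discover each other over IPv4.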