LINUX Unplugged - Episode 274: Open Source by Default
Episode Date: November 7, 2018
Have the revolutionaries won the war against proprietary software? That's the argument being made. And we argue, what else did you expect? Plus some performance improvements inbound to Linux, and the perfectly proportioned open source project we've recently discovered. Special Guests: Alan Pope and Brent Gervais.
Transcript
So I have something that I thought we'd share.
We kind of teased this last week, but it's official this week.
Fedora has turned 15.
On November 6, 2003, Red Hat announced Fedora Core 1,
which people are still upset about, the whole core split thing.
But that's a huge milestone.
15 years.
Wes and I were just down at the Portland Linux Users Group,
and I kid you not, there are people
that are still upset that Red Hat
split Fedora off
into its own thing because they were
building their own Red Hat desktop for a while.
Isn't that wild? I thought we'd all
moved past it. I mean, Fedora's kind of become
its own thing now, right?
It stands alone. It's great now.
It is great now. We're getting it back though.
Fedora Core OS.
Oh, yeah.
That's true.
Yeah, we're getting the Core back.
Yeah, I like that.
That makes it feel retro.
But I think Noah might have been one of those guys once
that was a little upset that they went the Core route,
they went the Fedora Core route and broke it off from main Red Hat.
Were you one of those guys?
I don't know if I was upset about it.
I didn't entirely understand it at first.
After speaking with some of the folks from Red Hat,
I think I got behind it pretty quickly.
Yeah, you can kind of see where they were going at the time.
And now you see how Fedora sort of fits into the lifecycle
of even Red Hat Enterprise Linux.
But back then it was like, are they abandoning the desktop?
What's happening?
And here we are 15 years later, and it kind of makes sense now.
This is Linux Unplugged, episode 274.
Welcome to Linux Unplugged, your weekly Linux talk show that's live from all over the world.
My name is Chris.
My name is Wes.
Hello, Mr. Payne. We have a great show.
I want to tell everybody about a little bit of my travel adventures, but then really we're going to get into some community news, including some new software, a couple of things that have gotten a lot of attention in the community, and some really great news for anyone who uses a Fuse file system.
Anything that's in that Userland file system category, I've got great news for you. And then
we'll open it up to the virtual lug. Is the open source revolution over? And maybe we won? A couple
of points there and some looking-forward conversations in that section of the show. And then, towards the end of the show, I am so excited to talk about an open source project that we have begun using at Jupiter Broadcasting to automate many tasks.
And it's very close to a clone of, but more powerful than, an If This Then That type service that you can self-host
and you can build all kinds of neat event-based things around.
And we'll tell you how we're using it,
maybe give you a few ideas how you could use it
and talk about the project.
And then, of course, maybe we'll have a few suggestions
from the virtual lug on alternatives that we could try.
So let's go no further, Mr. Payne.
We must, at this point, bring in that virtual lug.
Time appropriate greetings, Mamba Room.
Pip, pip, pip.
Hello.
Hey.
Hello.
Hello, guys.
What a friendly bunch over there.
We got a great crowd, too.
We got Brent, Charlie, Eric, and another Eric.
Fun.
I'm going to say it like fun-a-tills, but it looks like it should be fun-u-tills.
Fundatulous.
Mini-Mex in there.
Nunex is in there.
Popey Rotten.
Ryan.
Sean.
And Simi.
All rocking in there.
Plus we've got a few other stragglers hanging out in the off-air quiet listening.
You know, we never really mentioned that.
But you could join the mumble room.
And if you're just getting used to the idea, you could just listen in the quiet listening room.
You don't even have to come on air.
And then if something piques your interest, you could pop in.
I'm just saying.
Just saying. Go Google Jupiter Colony Mumble. Join us. It's open. That's how you feel really close to us. You know, you're right there next to us. You can
almost talk. And when you're ready, you can. It's like being right there without having to
smell our breath. That's a big bonus. I don't know why. I don't know. Hey, so let's talk new
software. Let's kick things off with the community news. Let's talk, actually, let's talk all kinds of new releases.
So I guess first off, before I go too far,
if I sound a little different or the show sounds a little different,
I'm on location this week.
I'm down in Texas doing some meetings and conversations here at Linux Academy,
touching base with the stakeholders, Wes, you know?
That's important.
The stakeholders love to be informed.
They want to give you their opinions.
They want check-ins.
So it's very important that you're doing this.
Oh, man, the worst trip yet.
It went all downhill as soon as I got to Texas.
In Seattle, it went fine.
Traffic was typical.
Got over to the airport with plenty of time.
Got myself the TSA pre-check.
So I just blast through security now.
Oh, nice! Yeah.
Yeah. Got myself some nice
fast Wi-Fi so I could get connected while I
waited for the plane. You know, I was able
to pre-sauce before the flight. No complaints
there. Good flight.
Land in Texas and discover
that there's no cars available.
I tried to book
the day before. Like, in all, there's just
no cars. They don't have cars there anymore
in all of Dallas, Fort Worth area
nobody, at least in my reach, has a car
not Hertz
not Avis, not Turo
not the cheapo place
that I can't remember the name, that's a local mom pop
nobody had cars because
there was a
football game and a NASCAR event
and some major industry down here
is having an all hands meeting. And you're not, you don't get to go to any of those fun events.
You got to go work. Right. Right. And you know what else that means? No hotel for me. No hotel.
I apparently, um, yeah, well apparently my administrative assistant, uh, me made a mistake.
Oh, that guy's he's the worst. He is the worst. And this is a
mistake that he doesn't normally make, but apparently he made. So I got to fire that guy.
He, I guess, only booked the hotel for the day he's leaving town, not for the whole date range
that I'm going to be here. So I get in. And at this point, I've been traveling since 7 a.m.
and it's now about 7 p.m. And, you know, when I start getting tired,
I don't really feel like being really social. I just kind of want to be left alone, you know.
And Uber rides, not to be a complainer, but they can be a little rough in that because I just,
I don't know, I just, I don't really feel like making pleasant conversation because I'm kind
of exhausted from 12 hours of traveling. And, you know, so you make your pleasant conversation for 30 minutes or whatever.
You get to the hotel and I'm ready to just lay down.
Haven't had food yet because all I've had is booze and airplane snacks.
And the lady says, we don't have a room.
And I said, what do you mean?
They got to have a room.
I booked.
She's like, oh, well, Mr. Fisher, I see that you're booked for the 8th for one day.
Yes.
You have no rooms available right now.
And this is where it's starting to hit me like I'm in Texas.
I can't get a car, so I can't get around very well.
And now I don't have anywhere to stay.
And it's getting close to my bedtime because I'm an old man.
And so she very nicely called around and found a hotel that's about 30 minutes further away.
And they got me in there.
And the hotel's fine.
I got in.
That's very nice of her.
Yeah, it was.
It was one of those moments where she calls around, first hotel, nothing.
Second hotel, nothing.
So now I'm three hotels, bubkis.
She calls the last hotel and has a conversation.
They have a room. It's got two beds. You know, it's not perfect, but, and I'm like, I don't care. You know, my neighbor has
a shack out back. It doesn't, it doesn't get too cold until the winter. So you'll be fine.
So it's like on some golf course and I show up there and the guy's just given me this
unbelievable attitude. At this point I'm frazzled because I don't have
a vehicle. So I'm waiting 15 minutes for Uber drivers to show up because it's Texas and it's
big. And they don't know where we're going. I don't know where we're going. It's dark.
I've been traveling since 7 a.m. and it's 8 p.m. And I just want to go flip through the
television in the hotel room and just chill out. It's your ritual, man. It's your ritual.
It is. Flipping through TV channels is my only time with live TV. And I enjoy it. I get there and the guy's like,
we don't have a reservation for you. I'm like, no, no, it was a transfer. They called and they
said there was a, nobody called me. No, I called. I was standing there. Nope. I did not receive a
call, sir. Well, they talked to somebody because I was standing there. Well, they didn't call her and they didn't call me. So they didn't talk to anybody here.
And I'm like, whoa, dude. Okay. Well, do you have a room open? Oh yeah, we have a room. Sure. Yeah,
that's no problem. Okay. Can I have that room please? All of that just to like, I want to pay
you money for the service
that supposedly you offer to the public.
I was so done at that point.
I was so done.
But yeah, I'm down here
at the Linux Academy headquarters
to plan for 2019.
Big plans.
Got some ambitious goals.
And they're also doing,
speaking of ambitious goals,
they're doing a content launch
this last part of the year.
They're launching over 200 pieces of content, like library courses and just tons of stuff, right? I mean, when you think what
it takes together to put like a how-to, and these are so much bigger with labs and interactive
diagrams and of course, video and audio voiceovers and tons of things you could do to actually try
it on production systems. Like it's just, it's so much bigger than a how-to and they're launching
so much stuff. So they're doing a whole bunch of live streams and I've got some experience with
that. So now that they have somebody on staff that's got quite a bit of experience with that,
they have me come down and I'm more than happy to do it. And we co-anchored the first live stream
this morning, which also apparently my Google calendar is set to not update its time zone.
So you're stuck in PST.
Yeah, well, I can fix it, but I hadn't.
And so I wake up and, oh, crap, I got a live stream at 8.30.
I better get over to the office.
I get here and it's not until 10.30.
So, yeah, yeah.
But just as an aside, I got a little peek behind the curtain,
and I didn't even see the whole thing.
But by my count, 20,000 Linux servers are running this place.
Let that sink in for a second.
20,000 Linux servers.
Yeah.
That's some power right there.
That's a lot of Linux servers.
And students can then individually spin up to nine VMs themselves,
which are all Linux.
That's per student, right?
So there's a lot of students going on there.
Yeah, that is, I mean,
it's just mind-blowing when I heard that number.
And I also, just as an aside,
talked to a couple of Fedora users,
speaking of Fedora,
and they're all on 28.
They have not upgraded.
None of them have upgraded to 29 yet.
I think anybody who uses Fedora
as a workstation does that.
They wait.
Isn't that the secret?
You're like, well...
Yeah.
I mean, even, honestly, on Ubuntu, I'll sometimes do that too.
If you don't need to update, there's not like one critical update you're really waiting to come through.
Better to just play it safe.
Yeah, I suppose so.
It seems like that's the...
And, you know, I asked them, and none of them are worried about their extensions breaking.
They're like, oh, no, it's fine.
It's fine. Because they're all... Oh, I should say they're all on GNOME Shell.
All right, so speaking of GNOME, let's talk about KDE.
See what I did there?
KDE Connect has got some new stuff.
So first of all, version 1.10 of KDE Connect has shipped,
and that is for the Android version.
That's the Android app, and it's not only got a new UI, but they're now targeting Android 8, which has several implications for you active users out
there. Targeting Oreo comes with an updated support library, which forces the project to drop support
for Android 4 and below. Well, there are still people out there, even in this audience, that are using Android 4,
surprisingly, and probably for very, very good reasons. And according to the KDE Connect stats,
it works out to be about 400 of their users that will be affected by this.
This is kind of surprisingly a big update because with, you know, moving things to Oreo,
there's more restrictions,
things like to be able to run in the background all the time, which is sort of a core feature
of what KDE Connect does, you have to show a persistent notification. Now, the good news is
Android has decent controls here, so you can hide the notification. The bad news is they can't do it
by default. Yeah-hmm. Yeah.
Now, it doesn't affect notifications in general for KDE Connect,
just the persistent notification for the application itself.
I hadn't gotten enough experience with KDE Connect yet
to appreciate this particular issue,
but one of the things it's fixed in 1.10
is mouse input now works at the same speed
independent from what the phone's pixel density is,
which I didn't realize was even a thing,
that that would be a problem.
But I guess.
Wow.
I mean, that's something to think of,
because if I was having that issue,
it would be really confusing.
Also, very nice,
the media controller now allows stopping of playback
of what's on your machine.
Yeah, that's actually super handy.
And I've used that one already.
The other thing I saw that was really neat is they've now registered kdeconnect:// as a URL scheme.
So if you're integrating with NFC tags, QR codes, third-party applications
that want to launch something in KDE Connect, you've got a way to do it.
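As a rough sketch of what that URL handler makes possible: a third-party script could just hand a kdeconnect:// URL to the desktop's scheme handler via xdg-open. The path after the scheme below is made up for illustration, since the episode only mentions the scheme itself.

```python
import subprocess

# Hypothetical: hand a kdeconnect:// URL to whatever handler the desktop has
# registered for that scheme (xdg-open dispatches URLs by scheme). The part
# after "kdeconnect://" is invented here purely for illustration.
subprocess.run(["xdg-open", "kdeconnect://example-action"])
```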
That's so awesome.
That is so great. I have a lot of respect for the project. I've met some of the individuals involved with it, and I just thought they were great people. I was at a KDE
meetup of some kind that was in Seattle. And one of the developers, I think it was one of
the main developers, had just moved to Seattle at the time. Yeah, I was there with you, Chris.
Oh yeah, that's right. Can you fill in the details, Eric? Because I'm fuzzy on them at
this point. Was it like a design meetup? I can't remember.
It was the guy who invented KDE Connect.
Right. I know that was the guy that was there, but what was the event? Why were we even there?
It was a celebration of some sort, but it was also just a KDE meetup that was happening in the Seattle area at the time. Yeah, it was good, I enjoyed
it. And he was there, and at the time I had talked to him about the project, and I'd asked him,
you know, where are you going to go with it? And even back then, I think some of the things
we see now, like GS Connect and whatnot, were kind of on the horizon.
But now KDE Connect is a mature project with tons of users.
GS Connect has just recently seen some nice improvements, which is the Gnome Shell side of the camp.
So you can get some of the similar functionality that KDE Connect provides.
I should say, if you're not familiar with KDE Connect,
it is a companion app and a piece of software that runs on the Plasma desktop that has a Plasmoid
and some settings that you configure.
You launch the Android app.
You get a shared pin code.
You enter that code.
The two things are connected over Wi-Fi.
They're talking.
It's encrypted.
It's beautiful.
You get a shared keyboard.
You get shared media controls.
You get shared notifications.
You even transfer files back and forth.
It's really cool.
So KDE Connect's a great project,
and GS Connect is worth checking out.
Anybody else have anything to add
on the KDE Connect or GS Connect topic?
And I saw that recently, I can't remember,
I think it was on the Ubuntu podcast,
that I think someone mentioned that GS Connect
is in the repo for 18.10,
if you want to get the latest and greatest.
Yep, that's right.
There you go.
Ooh, that is nice.
GNOME users suffer no more.
Unless you want to run that GNOME desktop on a new Mac.
This is the story that has gotten the most attention this week so far.
And it's had an interesting life cycle
because Phoronix ran a piece that I have been
wondering about that said Apple's new hardware with the T2 security chip will currently block
Linux from booting. And everybody kind of got upset. And then it sort of ducked down when
everybody said, oh, hold up, hold on. If you disable secure boot, everything's fine. And then
the story kind of died down. And then it came back.
Even if you disable secure boot, Linux is still screwed.
And now the story is blown up again.
So at least until further notice, the new Apple systems that they've been recently announcing,
like the iMac Pro and the Mac Mini, that use the T2 chip, will not be able to boot Linux on them. If you're not like Chris, who follows the Apple news just, you know, ravenously,
the T2 is a security chip Apple's made or, you know, got made for them custom that's embedded
into all of their newest products. And it's a secure enclave. It does APFS storage encryption,
UEFI secure boot validation, touch ID handling, a hardware microphone disconnect for all you
privacy lovers, and a whole bunch of other little security functions. But as a result, and you probably guessed this with UEFI in there, the T2 restricts
the boot process and verifies each step using crypto keys signed by Apple.
Now, on its own, this would probably be a pretty good thing, right?
I mean, this is a lot of security functionality.
If you're concerned about that, there's a lot of controls here.
Unfortunately, there's not a lot of ways for the user to actually control it.
Yeah, there's a few settings they expose,
but as the Phoronix article notes in its second update,
apparently, reportedly, it's still blocking Linux.
There is a way to use Windows 10 on there right now,
so I'm sure eventually somebody will figure it out.
Right. The difficulty there is that they've used the key
that Microsoft uses for Windows,
but they have a separate key that they use for all the third-party things like, you know,
Fedora or Ubuntu went through all that work to go get Microsoft to sign their bootloaders.
Right.
None of that gets us anywhere here.
Right. Oh, my gosh. That's so frustrating.
And, you know, to be fair, Apple's own documentation makes it pretty clear.
It even mentions Linux.
It says, and I'll read it just directly,
there is currently no trust provided
for the Microsoft Corporation
UEFI CA 2011,
which would allow verification of code signed by
Microsoft partners. The UEFI
CA is commonly used to verify
the authenticity of bootloaders
for other operating systems
such as Linux variants.
And since they don't have support for it,
no love for you. So no Mac mini running the latest and greatest
GNOME Shell or Plasma for you.
There was a lot of confusion because for a while
it seemed like you could boot up into recovery mode
and disable secure boot entirely
and then not have to worry about this
and be able to boot into Linux.
But the latest reports say no dice.
Yep.
All right.
So there they just crushed your dreams for that Mac.
So better go to your backup plan.
You know, like that new Thelio hardware is looking pretty good,
although it's much bigger than a Mac mini.
Now, see, that was probably the biggest outrage story of the week.
Every unplugged needs one, right?
We got to have one.
I sometimes think that's what fuels our Linux.
So, you know.
The next story, though, I think has people going, what?
Really?
Huh.
Okay.
And that is that Microsoft's making some more open source code.
ProcDump is coming to Linux, which is sort of a reimagining of the classic ProcDump
tool from the Sysinternals suite of tools for Windows, which actually
is a pretty legit suite of tools. And ProcDump provides a convenient way for Linux developers
to create core dumps of their application based on performance triggers. It's pretty sweet, and they
have it up on GitHub right now. You can do things like set the CPU threshold at which
to create a dump of the process, or below which. There's all kinds of things, like memory limits.
You can, when a memory limit is triggered, begin to do a proc dump and get a
snapshot of what's going on.
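To make the trigger idea concrete, here's a rough Python sketch of a threshold-triggered dump. This is not ProcDump itself (its actual command-line options differ): it just polls a process's CPU with psutil (a third-party library, assumed installed) and grabs a core snapshot with gcore, which ships with gdb. The PID and threshold are placeholders.

```python
import subprocess
import sys

import psutil  # third-party library, assumed installed for this sketch


def dump_when_cpu_exceeds(pid: int, threshold: float, poll_seconds: float = 1.0) -> None:
    """Poll a process and grab a core snapshot once CPU use crosses a threshold.

    Only a sketch of the idea behind threshold-triggered dumps; the real
    ProcDump tool has its own options and does much more.
    """
    proc = psutil.Process(pid)
    while True:
        cpu = proc.cpu_percent(interval=poll_seconds)  # % CPU over the poll window
        if cpu >= threshold:
            # gcore ships with gdb and writes core.<pid> without killing the process.
            subprocess.run(["gcore", "-o", "core", str(pid)], check=True)
            break


if __name__ == "__main__":
    # Placeholder arguments: a PID from the command line, dump at 80% CPU.
    dump_when_cpu_exceeds(int(sys.argv[1]), threshold=80.0)
```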
This is pretty neat, eh, Mr. Payne?
Oh yeah, actually it's really easy.
While you were describing it, I installed it up here in the studio and no problems.
Even on a newer operating system, the 16.04 DEBs work just fine.
I'll also note that this isn't some super weird port.
If you look at the code, it's really minimal, pretty clean C.
So it seems sort of like they've been not necessarily porting
the actual sysinternals stuff, but being inspired by
the user experience of sysinternals,
and then bringing similar tools to Linux.
Because as everyone has pointed out over on r/Linux already,
like, look, we don't need this, right? We have all the information we want.
We know how to scrape slash proc. We have
a bunch of command line tools already.
But we have to admit that
on the non-open source side of things,
Microsoft has spent a long time
either buying or making, or both in this
case, refining administrative
tooling. So if we can get a little bit of that perspective
on Linux, hey, that's not a bad
thing. Yeah, and I think also there's
is brand
the right word? Some brand recognition with that name?
You know, you're coming
over from Windows. Now you've got
SSH. You've got
the Windows subsystem for Linux. You've
got ProcDump. You've got all these tools
that are sort of like industry known.
And even if there's
already ways to do it, I think that's perfectly valid. I was digging around, and I could be way off
because I'm no expert, but I was digging around their, um, project page on GitHub, and it
looks like maybe they've been kind of tooling away at this for nearly a year, and, um, only a few
days ago, I think, did people start discovering it. And I was checking out to see what the license was,
which was, that was like one of the,
I think that was the very first commit they made,
was the license, which is an MIT license.
And that commit was made on November 10th, 2017.
So this has been coming for a little bit.
That's what's, I think, underappreciated
about these Microsoft open source announcements
is they, in a lot of cases, I come to discover,
have been in the works for a really long time.
You're absolutely right.
Yeah, we don't see it until they actually dump it or release it.
You don't appreciate that that took one internal buy-off, right?
That actual developers are getting paid to work on their work time.
And that eventually it did get open source because it's so easy to have those projects
be created internally and then just wither and die.
Yeah, I think it's hard to appreciate from the outside
just because we don't know what their processes are.
We don't know at what speed they move in general.
And so we don't have really any tool sets
to recognize when they're moving fast versus moving slow.
But every time I dig into it, it seems like it's a multi-year process.
We just talked about recently on Coder Radio that .NET Core is now becoming like the de facto .NET.
And that's a pretty big deal because .NET Core has only been around for a couple of years, a few years.
Whereas .NET, the thing that's been Windows only,
well, sort of, for so long,
it's been around for like, what, 15, 20 years?
I don't know, it's somewhere in that range.
It's surprisingly old now, which makes me feel old.
And when I had a conversation at Microsoft
with Jeffrey and a couple of other folks,
I asked them, I said, with .NET Core,
why do you have .NET and .NET Core?
I don't understand why you have two things.
It seems like a mixed message,
and I didn't understand
who .NET Core was for
versus who .NET was for.
And why is one proprietary
and closed source and one's open source?
I just didn't understand any of it.
And the way they explained it to me was kind of a bombshell in a way
because if you parse what they said,
it was sort of intention signaling like a while ago.
They said .NET Core is their current focus
and .NET is going to be for legacy platform support.
Well, the platform that .NET runs on is Windows,
and they called it a legacy platform.
Now, maybe, you know, maybe that was just the words they chose to use,
but that's an interesting mindset, I think.
So are we coming to the agreement that maybe I was right all these years?
well does Azure Sphere OS count? Because that is technically a
Linux distribution that Microsoft is shipping. No, no. I said that they would have a Linux
distribution with a Windows desktop that gets shipped by default in all machines
like currently Windows does. That's a thing. It's going to
happen. Mark my words. And I have another view on this thing
of the Core. It is just that if you have
any open source project that has already everything defined, people generally don't come around and
jump in to contribute, because everything is defined. So this is a good strategy for people
to want to contribute, to actually have community involvement, while they still keep bringing the
other stuff. It also is a great refactor, because they really have found a hard time hiring people that, you know,
understand the real old code base. So it's an opportunity to refactor, make it decoupled,
and at the same time, gain the momentum of open source. They love this, they're doing the right
things, but it's just gonna take a while. 2020, come on, it's getting closer.
You know, you know, as it gets closer, I start to think more and more you're right.
Initially, when you said it, I'm like, that's the craziest crap I've ever heard.
But now, I'm like, I could almost see it.
They just reversed the subsystem.
And so you run a Linux-type desktop as the main desktop,
and then you have a Windows subsystem that runs Windows on Linux.
Pretty much.
That's exactly what I said.
If you go on the backlog of the shows, I said exactly that would happen.
Well, you make sure we never forget, too,
so I don't have to go to the back catalog, Dar.
But, you know, it will be the ultimate prediction when it happens.
Five-year prediction.
Well, you don't have to go five years out for good news about Linux kernel 4.20.
Everybody's excited for the 4.20 kernel.
And this one is going to improve your Fuse, man.
While everybody loves the user space file system,
nobody really considers it to be very performant.
And over time, over time, things have gotten better with Fuse.
But it looks like there are more performance optimizations to be had.
Oh, yeah.
That includes symlink caching, a new hash table optimization, and copy file range support.
Plus, the maximum IO size for Fuse has been increased from 128K up to an actual, a full, a full meg.
What?
And there's like a whole bunch of other improvements that are working on this.
I thought it was interesting too because some
of this stuff, like actual users
are submitting it.
Some users saw that they were having problems where
symlinks just weren't cached the same as normal files
and that was a bottleneck. So they're like,
Facebook actually in this case, Facebook said like,
look, this is a big problem. We're using symlinks a lot.
It's slower than using regular files.
Their fix adds a 10% improvement on top of that.
The other thing I think that's neat about this
is there are some eBPF fixes
in particular for that copy file range support.
And that is actually pretty important
if you want to do server-side copying,
which I'm sure you've run into, right?
You're trying to copy a file,
two folders on your NAS somewhere.
You don't want to be in the position where you have to copy it
all the way to your workstation and then copy it back to the server.
Thanks to some of the eBPF changes that they've got,
some of the changes in Fuse, that's going to be a lot easier.
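For a sense of what that enables, here's a minimal Python sketch (Python 3.8+ on Linux) using os.copy_file_range, the wrapper around the syscall behind this kind of server-side copy. The paths are placeholders, and whether the copy actually stays server-side depends on the filesystem underneath supporting it.

```python
import os


def server_side_copy(src_path: str, dst_path: str) -> None:
    """Copy a file with copy_file_range(2), so a network or FUSE filesystem
    that supports it can move the bytes on the server instead of round-tripping
    them through this machine. Python 3.8+ on Linux; paths are placeholders."""
    size = os.stat(src_path).st_size
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        copied = 0
        while copied < size:
            # The kernel (and, as of 4.20, FUSE filesystems) can satisfy this
            # without the data ever being read into this process.
            n = os.copy_file_range(src, dst, size - copied)
            if n == 0:
                break
            copied += n
    finally:
        os.close(src)
        os.close(dst)


# Hypothetical usage on a NAS/FUSE mount:
# server_side_copy("/mnt/nas/shows/raw.wav", "/mnt/nas/archive/raw.wav")
```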
Oh, so have we given our whole eBPF spiel on this show?
I can't remember because the Portland Linux users group thing
where we talked about it too has kind of thrown it off.
Wes and I kind of did like a mini show
down at the Portland Linux users group last Thursday, was it?
Yeah, it was last Thursday.
And a ton of fun.
Jumped in my golf and we drove down to Portland,
which is like a three, three and a half, four hour drive.
And we hung out with Michael Dexter,
who pretty much runs the Portland Linux users group.
Ironically, Michael Dexter is sort of famously a BSD guy.
And he runs it with one of the FreeBSD co-founders.
I mean, Rod doesn't really run it, but he's there with Dexter all the time.
So it's a couple of...
Moral support.
Yeah, I asked him about it.
I said, so Michael, I've been meaning to ask you, but as a BSD user, why are you so involved with the Linux Users Group?
And he said, well, it's really a Unix users group.
It's just mostly Linux.
And it's kind of true.
And you know what?
They have a good open discussion around it, too.
It's like everything's on the table.
And it was just, I don't know, I just really enjoyed it.
It was nice to just chat casually, you know what I mean,
about this kind of stuff that we read about all the time.
You know, even though we do shows about it, you know what I mean?
It was still fun.
It's a little more high pressure, right, in the shows.
We're trying to get things right.
We've got interviews or we've got, you know, guests with heated opinions, both for or against or good or bad.
And this was more of just a group of people who were all interested in this for their own reasons, coming together, having a chat. Yeah, and it really drove home that whole concept that we have here on the show
that Linux user groups
are like one of the best,
best things
about the Linux community.
And you don't have to be
in a town that has a user group
to be part of a Linux user group.
They had,
I'm blanking on the gentleman's name,
but they had a member
that is essentially from New York that was in town and so participated in the Lug in person
that time, but normally watches on the live stream and participates on the mailing list and is an
active member of the Portland Linux users group from New York. And I think that really demonstrates.
I asked him, I said,
so why are you a member of a Portland Linux,
that's on the other side of the country?
He said, well, you know, the nearest one to me is about 200 miles away.
And I just really liked this group.
And it's the Portland Linux users group
has been around for like 20 years.
So it's pretty well established on the West Coast.
And so he's a member of it from that far away. And I think that's something everybody should just
give some thought if you want to get more involved in the Linux community, if you want to talk to
more like-minded technical people. And I'll say, you know, we have a virtual Lug too. And it is a
group of people that get together and we hang out every week for a couple of hours. We meet more
often than a regular LUG does.
Right, yeah.
I mean, that was what surprised me is, you know,
Michael Dexter does a lot of work to keep that going,
but you get surprising results.
You know, they have a live stream, they have an IRC room.
I'm sure there are times where, you know,
no one really actively joins the live stream or participates.
But if you leave something out, if you set that up so people can communicate,
I think you're often surprised by how many people reach out.
Yeah.
I kind of want to go again.
I should totally say that there was this time
that the Portland Linux Group used JB a long time ago,
not sure if you guys remembered,
to broadcast some event they had.
And since then, there was actually a Linux IRC channel
of this Portland Linux Group
that essentially dragged lots of people to
actually keep visiting that IRC.
And since they have such a regular publication of their actual events online in the stream,
lots of people can actually attend remotely, which is something that maybe other groups
don't have as much, which helps.
Yeah, you know, that was when Linus visited the Portland Linux user group that we streamed in.
That was a special.
Yes.
That was a special, yeah.
He was there for an anniversary, and he just took questions from the plug.
And I don't know, after I went to it, I felt like I could do this more often,
especially if I took the train or something, because we drove,
and it was a little exhausting to drive four hours in each direction. Yeah, I think next time we'll make a
whole day of it. We'll stay overnight. We'll get a nice Portland brunch in the morning. Yeah. Yeah.
Although Wes and I did have a nice romantic evening. That was, that was lovely. It was quite,
not only was it geeky and we did some work on the project that we're going to be talking about in a
little moment on the drive down. So Wes got his ThinkPad and his laptop while I'm driving with the MiFi going.
But then we couldn't really find a place to eat.
But we decided we were going to set out and travel around Portland and just find something.
We started walking around.
Everything looked a little too fancy.
But, you know, time was getting tight.
It was almost time to go to the meetup.
So we walk into this place.
And it didn't seem – it was like just on the line, like it had cloth tables and all that stuff and cloth napkins,
but it seemed like they, you know, they would also take somebody like me who was a bit of a
slob. And she's like, oh, well, would you like to sit in the dining area or would you like to sit
in the bar? And both Wes and I look over and they have, you know, they have a fine bar. It's just
a small little bar that's part of a restaurant. And we're like, well, I guess the bar. She's like,
all right, well, that's around the corner, down the hallway to the left, and then up the stairs.
Like, what?
Okay.
Stairs?
What are you talking about?
This place was great.
It was, that was where we wanted to be.
It was a happening place.
What was the theme of the restaurant?
Was it a, it wasn't an Irish bar.
It's a British-Irish-inspired comfort food sort of thing.
Yeah, it was kind of like a mix.
It's called Raven and Rose.
Yeah, there you go.
You can look it up if you're ever in Portland.
And then ask to go to the bar.
Don't go to the main restaurant.
And then you get to go to the Secret Upstairs Bar.
That was pretty good.
So we went down there.
We chatted with them.
And the Fedora things came up.
But also, a lot of people had the whole IBM acquisition still in their mind.
That was something that we talked about quite a bit there.
And I just really enjoyed it.
So I just want to take a moment to share and encourage everybody, give it a consideration.
Even if you have to participate from half a country away or more, still give it some consideration.
Plus, you know, it always gives you a place to visit.
Yes.
And I mean, to that end, too, I think it's just a nice reason to meet people.
You have a common interest, but you don't have to talk about that.
So while we were there, we got a surprise lecture about carnivorous plants. And that was just a neat thing to learn, a way to engage people.
Didn't have to do anything with your regular interests or your job or anything else.
Just an excuse to chat about something fun. Yeah, it was definitely, it's an example of
something I wouldn't look up on YouTube. I wouldn't attend a talk on it if I was at a conference.
But if you're at a plug for a couple of hours, you're at the plug and somebody brings it up and
you're there and it's actually interesting. It was
pretty neat.
So check it out. Anyways, get involved.
I wanted to bring up, speaking
of the whole IBM acquisition and
Microsoft and all of that, some bigger
picture topics for this week's episode.
I thought we could zoom out a little bit,
get away from some of the specifics, and
talk big picture. And Mumble Room, I want you guys to get
in on this because I miss you.
I want to hear from you.
So I thought we'd start with this article that was over at lightreading.com.
And it's in their open source section, so you know it's got to be legit
because they've got an open source section.
And the author, Mitch Wagner, proposes that the open source revolution is over and the revolutionaries have won.
And he writes, what happens when the revolutionaries win?
After they've stormed the castle and they've tried on the king's clothes, they've slept in his bed and they've drunk the royal wine, then what?
Well, I think you actually have to figure out how to run things.
But while we've seen like a ton of changes just recently,
probably this has been happening for a while.
Now, of course, there was Microsoft buying GitHub and just last week, IBM purchasing Red Hat.
Those are huge moves, both just focused right around open source.
But as we've talked about on the program for several years now,
open source is the mainstream way
of working for big
enterprises to small startups.
Yeah. I think
it's a bit of a developer-driven
revolution, too. Developers want
to be able to grab a library
that's been written that solves a problem for them
or they want to have somebody
help them problem-solve and communicate and collaborate.
If you do everything in closed-source silos, that's a lot more difficult.
But Wagner here, the author, reached out to Jim Zemlin of the Linux Foundation, the executive
director over there at the Linux Foundation.
Oh, that guy.
Yeah.
And Jim says he sees multiple factors contributing to the ascent of the new paradigm.
He says open source won the support of mainstream technology leaders such as IBM and Oracle
because Linux became a standard server OS
and it was embraced by non-technology companies
such as Toyota
and the entertainment companies
and the telcos like AT&T
and those other big ones.
And of course you can't forget people like Google.
I mean, Google's been leveraging open source
for a long time,
especially projects like Android. And mean, Google's been leveraging open source for a long time, especially projects like
Android.
And with that, Android has become basically the largest operating system in the world, right?
Think of all of the handsets out there running Linux in the form of Android.
So I wanted to ask, so let's pause there.
I wanted to ask the mumble room.
So far, we have, you know, it became the default server OS, but they go on here in a moment to say that really it's Android that brought this open source revolution.
It's because of Android and the mainstream enterprise companies that watched Google use open source successfully that open source has blossomed so much.
Thank you, Android, they say.
I don't think so at all.
Yeah, I also disagree. Developers wanted the whole system as a whole, to be able to optimize for it and to take the best. So that meant that
developers didn't want the friction of having to use the changing Microsoft feeling or the Mac stuff.
Docker is now being used in companies as, oh, you have it installed on your machine
and you run some stuff in there. And then the question becomes like, I'm already running all
of those Linux things.
Why don't I just use Linux?
And since companies tend to have some Linux available in there, you start using the desktop and you start developing there.
Which is why every other player is basically trying to have some Linux tools available for their desktop.
Because otherwise, people will just use Linux.
And as a consequence, you get Linux being,
I mean, you get exposed, you get to use it,
you get to use it even more.
That's it.
It's not Android.
Android is like the end user, like the consumer.
All right, well, so I think I could totally follow you there.
They make the case that it's not so much
that Android itself made people use open source,
but that Google using open source gave the CIOs and the CTOs the permission
to also use it because Google was doing it.
And Google made it cool, safe, and hip for the enterprise.
I know we had somebody else in there that disagreed with that.
Go ahead.
Yeah, that just sounds like a flimsy premise to me because, I mean,
like a flimsy premise to me because Linux is not
nearly the only open source project being used
in large deployments.
Before that we had FreeBSD and Apache
and how many people
are totally lost on the command line without using
Bash? This open source
revolution as the author is framing it has
been going on since before Linux was barely even a Minix ripoff. Yes, Linux is the most prominent, but
that's partly because people just won't shut up about how much they love Linux. And that's not necessarily a bad thing.
It's just this is the most visible, but it has been there for years,
long before people made a big deal out of DevOps and Docker
and all these new old technologies being retrofitted onto Linux.
FreeBSD has been powering our networks for ages now.
And I almost forgot that we need to remember Linux now with its close to, it's what, 20 years now,
right? So for the longest time, universities are very slow to adapt and universities still teach
Windows for a large degree. But the fact that also universities are now like for a decade
teaching Linux, using Linux as the research material, means that the job market now is like, oh, you can just take that person,
they know how to use it, they know how to deal with that stuff. And that's what they grew up with. And
we need to remember what is the turn rate of producing new brains on any technology, right?
So it first it was the shift from universities
actually producing people with Linux skills
to also them getting to the market.
So that takes a while.
And if you look at the time span, it just fits right.
I'd just like to say I think open source
has a huge influence over the years.
I found, especially in the Windows world,
often to solve problems in the Windows world,
the solution was use open source.
Like use Firefox instead of Internet Explorer.
Use LibreOffice instead of Microsoft Office.
You know, use a huge range of software that is available now
and commonly acceptable to use in Windows. Because I think a lot of the companies
are changing the software and doing such practices that it's leading everyone to open source
And especially with Windows 10 and the changes from software being a program you install to cloud,
there is a lot of kickback
and people want that old-fashioned program that you install.
And with the last couple of years of free speech being an issue,
people are looking into software
that isn't from a traditional million-dollar company
and open source is getting more popular that way as well.
An investor nowadays looks at what your costs are. And, for example, if you are a competitor
of Amazon and you run your services in AWS, what if Amazon raises your prices, right?
So all of that kind of stuff. The big companies are so entrenched in so many things, you really
have no option but to be in the place that is neutral, you know, economically speaking.
And yes, it is true that there is some weight
on using industry best practices
as an excuse, right, to use Linux.
And Google does give some authorization
to teams that don't know what they're doing
because if you use that as a way to decide your technology,
you know, but that's a personal argument.
Yeah, yeah, I see your point.
I think, too, this is what was supposed to happen.
This is what we wanted to happen.
We wanted open source to kind of be the default,
and that's the point we're reaching.
It's not universal, but I was just in a meeting today
where they start with, okay, well, what components of this
can we start with as open source?
That's what the conference, that's, when I talk about what Linux Academy is building
next year, that's where they start the conversation. What component of these can we open source? At
Jupiter Broadcasting now, we're doing the same thing. When we're working on the system we're
about to talk about, the conversation Wes and I consistently have is, what of this do we open
source? What open source tools can we already take advantage of so that way we can save time in building it? It just makes sense. It also helps reduce bus factor. I mean, there's tons
of reasons to use open source. And I just, I reiterate, like, what else did you expect?
Did you want open source to be successful, but not the de facto? I don't know. So these articles
that try to pin it on one thing or another, they seem too narrow.
Right. I mean, it's been a slow evolution this whole time, of course.
And what other outcome would we want?
I would also say that this is the hard work of a lot of advocates,
because even if you have FreeBSD servers running somewhere in the basement,
that doesn't mean that the people at the top actually get it.
And I think most of the time it means that people at the top have no care,
don't know what's happening in the tech below.
And finally, we saw companies where the management and the parties in control are willing to embrace that and not be scared of open source and realize like, hey, this could be better for our bottom line.
We can share and interrupt without having to be scared of it.
We can still keep the stuff we need to keep secret.
You get the lawyers on board, you get the board on board, and here we are today. Open source makes sense as the only way forward
because it does provide the faster, better way of development.
But the people at the top rarely even care what their phone is running.
They don't know or necessarily care.
They care about the bottom line.
Open source saves them money.
Without making this an endless discussion, I think that is also changing.
We are actually having more technically savvy leaders because people don't buy into just the person.
The Steve Jobs are not popping up and they're available everywhere for people to pick up.
So technical leaders are now going on.
If you look at all of the current major tech companies,
the leaders are actually more tech savvy
or come from a more tech background.
But I digress.
And I should say that whenever we're looking into Linux becoming the norm,
well, we did a good job.
Yeah.
We should give us credit.
I mean, this show is a good credit to that
right? Just the amount of people I know that actually... Actually, I was surprised. I
am in Germany now, and I have some co-workers that I was never expecting, and they actually
are JB listeners. And it was like, yeah, that's new, great. You know, that's actually
something that I didn't use to find frequently.
Like I would say, hey, have you heard about this Jupiter Broadcasting?
And people are like, I have no idea what you're talking about.
And this is becoming also a normalization thing.
It's like the culture, we're finally actually being able to, some projects, they go big, they're still open source, they're still friendly to new people, but they actually are stable.
And whenever we're not stable, you can just fork it and stabilize it at your pace.
I would bring another thing to the discussion.
There was this article, the cathedral and the bazaar, and I think that principle has won.
So companies start to realize that closed developer groups have their limits, and open development is often faster and more creative.
And in that regard, I also see the acquisition of Red Hat by IBM.
I see that in that context, that that bazaar spirit starts to change the world, in my opinion.
And since open source is the standard now,
it seemed like that was the direction for Jupiter Broadcasting to go with its new projects.
And we've been trying to launch an automation system
that will programmatically publish the shows.
So that way, when they're finished
and it goes in the master RSS feed,
it goes out into all of the various other types of feeds.
It gets published to the website,
maybe even eventually completely automating the encoding pipeline.
So, you know, Joe drops a WAV file in a folder,
and the next thing you know, videos and MP3s and RSS feeds
and website posts have all been automatically created.
And we were kicking around how to solve this particular problem for a while. Like, is this
a series of cron jobs that sit around and check RSS feeds? Is this something that we have like
an admin click a button and then it executes all the various tasks like from a dashboard?
And Wes came across a pretty cool project that in a sense lets you build your own if this than that, but like way
more powerful. And it's called Huggin. At least that's how we're pronouncing it. It's, you know,
one of those H-U-G-I-N-N. So it's kind of a vague, could be Huggin, but we really like Huggin
because it's like Huggin all of our tasks together and our agents are on standby. It's really a
system for building agents that perform automated tasks for you online.
It could read the web, for example.
It could watch for specific events.
It can take an action on your behalf.
And agents create and consume events,
which then propagate them along a directed graph.
So think of it as a hackable version of Zapier
or If This Then That, that you run on your own server.
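To make the agents-and-events idea a bit more concrete, here's a toy sketch in Python of a directed graph of agents passing JSON-ish events along. It is not Huginn code (Huginn is a Ruby app configured through its web UI), and all of the agent names here are made up for illustration.

```python
from typing import Callable, Dict, List

Event = Dict[str, object]            # Huginn-style events are basically JSON blobs
Agent = Callable[[Event], List[Event]]


def rss_watcher(event: Event) -> List[Event]:
    # Pretend we noticed a new item in a feed and emit a derived event.
    return [{"show": "Unplugged", "title": event.get("title", ""), "published": True}]


def notifier(event: Event) -> List[Event]:
    print(f"New episode: {event['title']}")
    return []  # terminal agent, emits nothing


# Directed graph: which agents receive the events each agent emits.
GRAPH: Dict[str, List[str]] = {"rss_watcher": ["notifier"], "notifier": []}
AGENTS: Dict[str, Agent] = {"rss_watcher": rss_watcher, "notifier": notifier}


def propagate(source: str, event: Event) -> None:
    """Push an event through the graph, breadth-first, agent by agent."""
    queue = [(source, event)]
    while queue:
        name, ev = queue.pop(0)
        for emitted in AGENTS[name](ev):
            for receiver in GRAPH[name]:
                queue.append((receiver, emitted))


propagate("rss_watcher", {"title": "Episode 274: Open Source by Default"})
```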
And Wes has set it up to
publish shows automatically. And you wrote some custom code in there and things like that. But I
don't know, Wes, just kind of wanted to have you share your thoughts with the class on and Huggin
and building something like this and why this versus why not just a series of cron jobs?
You know, that's a really good question. And it's one of these tasks that's sort of
trivial enough that probably anything can work.
It could be an entirely custom application.
It could be a series of cron jobs that pass just arbitrary text files
or JSON blobs around.
Some of the things I liked about Huggin, Huggin, whatever you want to call it,
I will say it's based off of Norse mythology.
Huggin and Munin are two ravens that sit on Odin's shoulders.
Yeah, there you go.
But I really like Hugin because it's like bringing all of our tasks together.
You know, it's hugging it out.
Yeah, exactly.
Hugin, I guess, is the official pronunciation.
It makes sense.
If you dig in here, look, they even have a picture illustration.
So I guess that's pretty clear.
Yeah, that is pretty clear.
The parts I really liked about it was that it did have a decently fleshed out event
model for you, and it has a lot of things
that we wanted to use already, right? So things like
Slack integrations or a whole
bunch of other stuff, right? They've got Mattermost,
a lot of other agents, things to
automatically follow RSS feeds.
And while I'm not always a
huge Ruby fan for a variety
of reasons, the project has done a
good job of packaging things
up pretty nicely. They've got multiple Docker files already ready to go, easy to change,
easy to customize if you need to. And if you want to write a new plugin, well, you just make a new
Ruby gem, load it on up, and away you go. And because it has a pretty flexible event system
already tied up with this directed graph, which is probably honestly what I would end up implementing if I was going to write this all myself without all the nice UI and the
extensibility on top. So it was kind of pre-plugged. Now it doesn't do everything we want it to do.
That was kind of a hassle. But because all we have to do is write a gem to say, you know,
update so that we can work with one of the partners that we use to re-host our feeds, or
we're working on a plugin right now to go do better video encoding and manage some of our uploads for us.
That's all easy to do.
All it has to do is consume and emit these events,
which you can kind of think of as just sort of JSON blobs,
arbitrary lists and maps.
You know, you had some custom code in there to properly parse the feeds
and get the episode title information so you could repurpose it for other things.
But one of the things you're able to do, which is really cool,
is I now can go to the dashboard, the webpage that we have.
We've hosted this on one of our droplets.
I log in and I can see all of the agents,
which are essentially the things that are like monitoring the different show feeds and whatnot.
And the thing that's cool is some of the parameters are changeable.
Like they're all broken out and there's a clear spot
where I can change some of the settings.
Like I can set it to a test publish
instead of a live publish.
I could change a few parameters
that Wes has been able to call out
specifically for the user
and it gives it a specific area in the GUI
for me to see those changeable variables essentially.
Right, that's a nice thing.
I don't have to make some custom UI here
or, you know, have a config file
that's easy for regular humans to understand.
Now, of course, you can go do all of that if you want to,
but Huggin's got a pretty
decent UI right on top, and they have
this concept of, like, options. So,
for any plugin that you write, you can expose user-facing
options, they can customize it, they can load custom
data.
Some of the places we post shows, for instance, have a lot of weird oddities.
None of the names are exactly the same pattern, so each one has to be custom.
Instead of me having to pre-program that into the plugin, I can just let the user enter that in once they use my plugin.
So it makes it a lot more extensible and a lot more reusable.
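As a toy illustration of that options idea, and not Huginn's actual Ruby agent API: a plugin declares the knobs a user can fill in, and per-show values get merged in at run time. Every name and value in this sketch is hypothetical.

```python
from typing import Dict

# Hypothetical plugin: it declares user-facing options with defaults, so each
# show can supply its own quirks (feed names, publish mode) from the UI instead
# of those being hard-coded in the plugin.
DEFAULT_OPTIONS: Dict[str, str] = {
    "feed_slug": "",           # must be filled in; every show names feeds differently
    "publish_mode": "test",    # "test" or "live", flippable without touching code
}


def run(user_options: Dict[str, str], event: Dict[str, str]) -> Dict[str, str]:
    opts = {**DEFAULT_OPTIONS, **user_options}
    return {
        "action": "publish",
        "feed": opts["feed_slug"],
        "mode": opts["publish_mode"],
        "title": event.get("title", ""),
    }


# Per-show configuration entered through the UI, not baked into the plugin:
print(run({"feed_slug": "linux-unplugged", "publish_mode": "live"},
          {"title": "Episode 274"}))
```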
The way you've set it up is it's ingesting from the RSS feed,
then it's making decisions based on the show,
and then triggering other events,
which then sort of go through the same process.
And for us, it also then gives us a spot to audit what's happened.
So one of the things about building an automated system is,
especially when you're just building your own scripts,
is you're maybe not as inclined to really put in a ton of logging
in your own bash script or whatever, you know?
So this system has pretty extensive logging built in.
So if something doesn't successfully publish
or there's an error somewhere in the process,
it's fairly straightforward for us to track that down.
Yeah, it kind of seemed to be just enough structure
because it's not a huge project.
It's not going to do every single thing for JB.
But I think, you know, I think there will be a
few odds and ends, probably a number of ones that we can
throw in here. But it wasn't as big.
It's not like, it's not
Jenkins and it's not Rundeck. Those are both
good tools. But for the things we wanted, we didn't, you know,
we're not using it to build software. We don't have
a lot of complicated, stateful stuff.
It's really just chaining things together.
Right. I mean, there's going to be times where this
thing's just, you know,
kicking off a bash script to do a bunch of work on the back end
where maybe something more complicated is happening.
But we can trigger that based on a previous event, which is where Huggin will come in.
Yeah. The other nice part is they have decently fleshed out, you know,
just arbitrary sort of web request plugins as well as connections.
So you can do, you know, pushes to Huggin.
So you can stand a server up there, get an
HTTP push of things that you want,
and then spawn other events from there. So pretty flexible.
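Pushing an event in from the outside really is just an HTTP POST to a webhook-style endpoint. A minimal sketch, assuming the requests library and a made-up URL and secret rather than a real Huginn install:

```python
import requests  # assumes the requests library is available

# Placeholder endpoint; a real Huginn webhook URL has its own per-agent path and secret.
WEBHOOK_URL = "https://automation.example.com/web_requests/42/some-secret"

def push_event(payload: dict) -> None:
    """POST a JSON payload so the receiving agent can spawn follow-on events."""
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()

push_event({"title": "Episode 274: Open Source by Default", "publish_mode": "test"})
```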
Hey Wes, I have a question for you.
Oh yeah, Brent.
How do you feel about the future proof of this
system? I know you've put a lot of work into it,
but how long do you think
it'll be useful to JB?
You know, that's a good question. That is
something I've been thinking about.
Part of the hope was that with the developed UI,
I mean, next time JB launches the next exciting JB show,
stay tuned, everyone.
Then it'll be easy.
It won't have to be me that's involved.
It'll be straightforward enough
that someone else on the network,
someone involved or just administrative staff
or whatever the case may be,
could go in and update it.
Yeah, that's the nice part.
That is really, you define events for a show
and then those are exportable and we can re-import them,
change the names to protect the innocent
and in a sense duplicate the work
and it doesn't have to be Wes creating a bunch of custom code
for a new app for a new show.
Yeah, exactly.
The other thing is it's been relatively active,
not super active, but relatively
active, and it's not that complicated.
One thing I do like to audit
for things like this is, you know,
can I grok the source code? Can I scroll through it
and figure out at least the high level
of what's going on? You know, oh, here's the piece I need to go.
Here's the piece over there that I need to find.
And in this case, yes. So even if it
stagnates terribly
and no new features get developed,
I think we can still use it.
It's already packaged up with a nice little Dockerfile.
And it's really just a bunch of Ruby dependencies.
So as long as those don't have super major breaking API changes,
we'll be fine.
The other thing is we're just using it internally.
So it won't have any regular users interfacing with it.
So we can tighten up the security on that side,
make sure that only the people that need to have access
have access to it.
And then even if we do have an old dependency,
well, I'm less worried about it.
Looks like some minor stuff in the documentation
was updated six days ago.
So it's not under heavy development,
but the team is still alive,
or the individual, whoever's behind it,
is still active.
So that's part one.
So that's Huginn, or Hugin, or Hogan, or Hogan's Heroes,
whatever you want to call it.
That's the open source by default that we went with.
But now to make it really hip and modern,
and actually for a very practical reason,
it's all packaged up inside a container.
So we have it running on the Fedora 29 host
that I've talked about just on the last episode.
And it is now one of the containers on this box.
Why did we go with a Docker container, Wes?
I mean, we could have just installed it on the Fedora machine.
You know, we definitely could have.
But it meant that we got to skip a lot of the little details.
For one, the main Docker container it's based on and a lot of the developer environment seems to be on Ubuntu.
Now, that's not a huge deal,
and we could certainly bootstrap all the gems
that we needed right there on the Fedora host.
But with the Docker workflow, you get a lot of nice things, right?
So right now we have our own fork of the project.
Later we'll be pushing everything upstream
as we solidify our design choices, et cetera.
And that means when I'm ready to,
I can just go build a new Docker container,
push it over to our server, reload with the new container.
It's got a separate database kept off somewhere else statefully on the machine,
so we don't have to worry about that.
Reload the container and the latest release, all the changes are right there.
I know that there's no weird local state that, you know,
you edited something and forgot to change it,
or I didn't notify you about changing something,
because it always builds fresh.
It always comes from our Docker source.
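For the curious, the build, push, and reload loop Wes describes can be driven by a handful of Docker commands. Here's a hedged sketch wrapped in Python, where the image name, server, and volume paths are placeholders and none of this is JB's actual deployment tooling:

```python
import subprocess

IMAGE = "registry.example.com/jb/huginn:latest"  # placeholder image name, not a real registry
SERVER = "deploy@droplet.example.com"            # placeholder host

def sh(*cmd: str) -> None:
    """Run a command and fail loudly if it exits non-zero."""
    subprocess.run(cmd, check=True)

# Build fresh from the fork so no stray local state gets baked in, then push it up.
sh("docker", "build", "-t", IMAGE, ".")
sh("docker", "push", IMAGE)

# On the server: pull the new image and swap the running container.
# The database lives in a host volume (paths here are placeholders), so the swap keeps its state.
sh("ssh", SERVER, f"docker pull {IMAGE}")
sh("ssh", SERVER, "docker rm -f huginn || true")  # ignore 'no such container' on first deploy
sh("ssh", SERVER,
   f"docker run -d --name huginn --env-file /opt/huginn/env "
   f"-v /opt/huginn/data:/var/lib/huginn-data {IMAGE}")
```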
Yeah, that's pretty neat
because the other part of that is down the road,
it could be deployed to a laptop.
So when I'm on the road, I'm offline,
I could, in theory, have the entire automation
and encoding system in a container on my laptop.
Exactly.
It would be the same exact one that we run
on our production server.
And that's pretty neat
because you can start to modularize
that a little bit.
So say we later add video components to this.
Well, video encoding could be its own container
that perhaps while I'm on the road
is unnecessary.
I won't be doing any video jobs,
so I don't need the video encoding container.
But on the server,
you would deploy both the main Huginn container
and the video container so that we could do all the jobs.
And that modularity
and that flexibility is pretty sweet, and by
having it all tied back upstream,
that means that we're always, when we're
deploying, we're always deploying the same
consistent image to both my
production laptop and the production server.
Yeah, and Huginn has some pretty good support
for both dry runs and injecting
manual events. So even if you're in a case where
you wanted to run it yourself, you had a one-off, you just did
a special telling the JB audience about
a cool new project you were working on, or you had
a neat little vlog post that you decided to get back
into doing, you released that to the stream,
you could do it all locally and you wouldn't have to
jumpstart, you wouldn't have to make sure you have all the web
hooks wired up correctly, you could just inject a
manual event, push it through the system.
Yeah, isn't that just so neat? It's going
to be so neat. And if
we can down the road, we may even just publish
the container up on Docker Hub. So if you
want to do the same and you want to use
our encoding and publishing system, you'd have to
change the configs and stuff.
We have to go through and make sure that we
don't have anything embarrassing in there like our passwords.
But we've actually been trying to build it from the beginning in a way where we could always do that.
So we have it in mind.
We just have to double check.
And the idea could be, I mean, one day what we're going to try to get to is you have everything you need to really get one Linux box or a fleet of Linux boxes as a podcast production system.
Because when you combine that with our GitJacked scripts that we released two episodes ago,
which I used today even to do this show,
I just used those scripts that we've released as open source
to completely pre-configure my JACK environment and route all my audio.
And I'm sitting here talking to you right now from a laptop that really is,
it's amazing how quiet and well-performing it is when you consider
it's an entire studio being routed inside my machine right now with multiple ins and outs
and physical hardware and software sound devices and multiple applications that are feeding it.
And we released all of that as open source. And there is hopefully going to eventually be sort
of this complete picture where the setting up of the machine, the publishing of the podcast, the encoding of the podcast, more and more of it, as we expand our process, will become open source that people could use to start their own podcasts.
And then eventually we'll get into like actual like hardware and that'll be just documentation that we do. But that's all down the road and stuff that
I won't even fully commit to yet because we have
also big ambitious show goals.
But the idea, hopefully,
we'll be able to, the more we can automate
with this system, the more time we have to work
on open sourcing and building the rest of that stuff.
That's the idea.
Just for those of you that are curious
what kind of change this makes for us in-house:
last time I tracked it, and this may actually have increased since then,
it's 175 individual steps, individual actions,
that a human being must take to publish an episode.
You figure we've got one or two sometimes going out a day.
And that's not even the editing, right?
That's just from the edited final WAV file to pushing it out to all you guys.
Best time, mind you, and I've been doing this for about 12 years,
my best time is usually about 22 minutes to get an episode published.
And that's if I pre-stage things.
Like I pre-stage all my tabs.
And I have it down to a science.
Like I know the WordPress install so well.
I know it so well that I know like what things take a while to load.
So I go to the other tab and I take an action in that tab.
I know how long that takes to load.
So I switch back to another tab and then I do that.
I mean,
I'm telling you, there's no downtime here.
Everything's preloaded.
The artwork's done.
Everything's ready to go.
And it's still 22 minutes is like my best time.
And that's after doing it for 12 years.
And so it's about 175 actions.
I haven't counted for a while because it's so tedious.
So that is huge because that takes up a part of my day every day, a part of Angela's day every day.
And it also means frequently that people are sitting around waiting for episodes to get published.
And it's just a matter of like the show's done, but nobody's awake to push all 175 buttons.
And it's just manual labor because until now,
there wasn't a really clear way to tie it all together.
And that's where Huginn, or Hugin, or Hugin's Heroes,
has really come in and given us sort of this cohesive piece
that we can build off of to tie all of these things together.
And we're just beginning, and we're trying to build it in a way
where we can share it with everybody,
but we wanted to give you sort of a bit of an update
so that way as we come across a couple of open-source tools
that just are like, holy crap,
this is going to be such a game-changer for us,
like, got to share it with you guys
because maybe if you just want to monitor the weather
and have it trigger an event, you know, this will do that.
Like, it doesn't have to be like
you're trying to publish podcasts.
Like, there's all kinds of things you could use it for. So we'll have a link to it in the show notes.
I'm going to check it out. And also we'll have a link in the show notes to our submission.
I guess it's a Google form. I don't know what they call these things. It goes into a sheet
and we're trying to collect suggestions for what to name the entire
system when we're all done, because Huginn is just part of it. There's other components to it.
And we want to just have a name so we can say,
go get the latest version of,
and it will be encompassing of the entire thing.
And we've gotten a bunch of really funny submissions,
really good ones so far, and some great, like,
ones that I'm surprised I just didn't think of.
Like, there's some really obvious ones in there,
but there's a whole range.
And what we're going to do is we're going to try to distill it down
to, like, four or five of them, and then open it up
to a vote. And then the community will
name our new baby.
So find the link in the
show notes at linuxunplugged.com
slash 274
and look for the automation
system naming Google form.
Mr. Payne, is there anything else we want to mention about
Huginn there, or Hugin?
I just think it's kind of new ground we're exploring here.
So if you have similar systems, or technology you like for automating sort of responsive, event-driven workflows, keep in mind that we're not a giant enterprise; there's going to be maybe four people who touch this for the time being.
So if you have a tool that's right-sized for that, I would love to hear about it.
Yeah, that's a great point.
There are a lot of tools, but we need one that's appropriate for our scale.
Here's a tool for you.
You know, why not Electron app all the things, I always say?
You know, why not?
Please don't.
Just get 64 gigs of RAM and Electron all the things.
No, here's our app pick for the day.
I kid, but at the same time, it kind of looks legit.
It's called CPod.
It's an open source, cross
platform podcast app, and
yes, as you guessed it, it is Electron,
but it's FOSS.
It's FOSS has what I would say is the best write-up
on the internet about this, hands down,
with some beautiful Late Night Linux album
artwork there in the demo screenshot,
and later on some Linux
Action News. That may be making me slightly biased towards their post,
but either way, they do a great write-up here.
Simple, clean podcasting app
that does a really decent job
of listing out the podcasts on your computer
and lets you listen to them.
It has a dark mode.
It uses the iTunes podcast directory,
which a lot of these apps do,
and it has, of course, all of the essentials
like the speed-up and slow-down playback modes, auto-fetching of new episodes, multiple language support, all kinds of sorting options.
But I thought this one was a good one that I think more apps should implement.
You can sync your podcast subscriptions with gpodder.net.
And that right there.
Yeah, when I saw that, I just, I love that.
I don't listen a lot.
I'll either use the Pocket Cast web player or the Overcast web player if I'm listening on the desktop.
But you'd be surprised how many times we get people writing into the show asking,
hey, how can I manage podcasts on the desktop?
There's not like a great, there's like not a clear way.
So CPod could be one of them.
It says here in the article that you can snap install CPod.
I tried to do that, and it did not work for me.
I tried that as well.
It didn't work great.
But I do have it running thanks to AppImage right here on the studio machine.
It's up on the live stream, and it's pretty easy.
I mean, I've already got, I'm listening to Late Night Linux and Linux Action News right now.
Nice.
How's it look?
Do you think it's a good, I thought it was a good UI.
What do you think?
Oh, yeah, it's pretty clear and easy.
It didn't take me very long to understand.
You know, it has the normal sort of stuff,
like a place to go find new podcasts.
And then once you've subscribed,
a home location where all of your subscribed podcasts
show up in a sort of feed style.
Like you'd expect from a podcast app.
And now you got one for your desktop.
All the good luck to them.
Maybe reconsider the name
because a search for their name
also shows chronic obstructive pulmonary disease.
Oh, no.
I'm serious.
Yeah, all right.
Well, okay.
That's my advice.
All the luck, open source.
It's Electron, but, you know,
pulmonary disease and Electron.
I say, why not own it?
Call it like Electron Potter or something.
Just own it.
Because in the name, it also says it's cross-platform.
So if you've got several desktops
of different OS's and you want to sync your
podcast across all of them. If somebody had
a bad experience with Electron in the
past and they read Electron, well, it
shouldn't really matter what the technology is
if the tool is good, right?
Yeah, yeah. So check
it out. We'll have a link in the show notes to
the It's FOSS write-up so that way you can see the advantages.
It's also available in many of your local
repos. Surprise, surprise, including
the Arch user repository. I know,
shocker on that one. Didn't see that one coming, did you?
I thought, though, the podcast
audience might appreciate a
podcast app. So that's up there if you want it.
Linuxunplugged.com slash 274.
Go grab that. And if you've got an app
pick or a topic suggestion or something like
that, let us know. Linuxunplugged.com slash contact.
Or you can tweet me.
I'm at ChrisLAS.
Mr. Payne is at Wes Payne.
That's Payne with a Y.
And you are more than welcome to come hang out in our virtual lug on Tuesdays.
We do it at 2 p.m. Pacific, which turns out it's like 4 p.m. Central Time.
Who knew?
Nobody knew until I arrived.
Nobody knew that.
No, not a single person in that time zone had figured it out,
which is they're lucky they have you.
Yeah, I know.
Man, if I didn't travel around the world,
how could I inform people of my wisdoms?
It's a good thing.
It's a good thing.
And also, it gets converted to your local time
at jupiterbroadcasting.com slash calendar.
One more plug.
Go help us name that automation system because it's something internally that we'll be referring to for the rest of time, likely.
And it'll be something that we say in meetings.
It'll be something that we say when we're introducing new people to how we do things.
It'll be potentially something that people deploy on their own systems.
So help us name this thing.
And I think what we'll do is we'll leave the votes open for one more week
because I really want to get a name for it.
I'm very excited about this.
So we'll probably do one more week or so.
After that, we'll probably have the vote.
So if you're listening, say, two, three weeks on delay,
the vote may already be happening.
So you might want to check the latest show notes.
But again, that's linked to it, linuxunplugged.com
slash 274. But that
wraps us up for this week's episode. Thanks so much
for joining us, and we will see you next
Tuesday! Hey Chris, I have a question for you.
Yes, sir.
Today is November 6th,
and November 7th marks the Dropbox switchover.
I just wondered if you had an update,
and if anyone else had any updates.
Oh, man.
Turns out Chris has been embracing the EXT4 lifestyle.
He's all about it now.
I have actually loaded a couple, like my new ThinkPad is EXT4,
just in case I had to kick this can for a little bit.
I got this, see, here's the problem.
I got this network effect going on that has amplified recently.
The crew here at Linux Academy is using Dropbox.
They use it fairly extensively.
The other problem is that several of the publishing services
that we currently use are integrated with Dropbox.
Like, I actually, like, queue the files up for encoding
by moving them to a Dropbox folder,
and then they get uploaded to the service,
which then ingests them,
and then it outputs the results to another Dropbox folder.
So it's pretty integrated right now.
And then on top of that, I have not been fully happy
with any individual replacement for various reasons.
Like, for example,
NextCloud is pretty much set now.
I'm pretty happy with NextCloud.
And their mobile app is sufficient
where I can upload images like when I'm at MeetBSD or whatever to start writing a blog. That's all
working fantastic. And the web interface has been tops. The performance is where I want it at now,
which has been my primary complaint from years past. Performance is where I want it now.
They've got undeletion of files. Desktop sync integrates beautifully with Plasma Desktop,
way better than Dropbox does.
The only issue I have is that, no, it's that network effect.
I mean, the performance, I believe, is better with Dropbox still,
but they have massive CDNs.
And I'm uploading to a single droplet, so that doesn't surprise me.
I just, I don't have enough time to admin the storage. And so what happens is I get going on a project and I fill up what I've
allocated and my next cloud quits working. And like I sit down to go to my computer to start like
to go into production mode and my files aren't there. And that's how I discover,
oh yeah, that's right,
I did see a critical error message pop up.
It's very verbose.
I get a lot of, your Nextcloud instance needs your attention,
critical error. I get errors and then I go there and there's no error.
So I guess there's one question.
Would it help you, because even though you would probably
still want to read the message at some point,
if it just automatically scaled your droplet,
that's what the droplet API is for.
Right.
But then I get into a cost range where it's not really economical.
Yeah, yeah, yeah, exactly.
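For reference, DigitalOcean does expose this kind of thing over its v2 API. If the Nextcloud data sat on an attached block-storage volume, a resize request would look roughly like the sketch below; treat the endpoint, payload shape, and volume ID as assumptions to verify against the current API docs rather than a known-good recipe.

```python
import os
import requests

API = "https://api.digitalocean.com/v2"
TOKEN = os.environ["DO_TOKEN"]              # personal access token, never hard-coded
VOLUME_ID = "replace-with-your-volume-id"   # placeholder

def grow_volume(new_size_gb: int, region: str = "sfo2") -> None:
    """Ask DigitalOcean to resize a block-storage volume (sizes only grow, never shrink)."""
    resp = requests.post(
        f"{API}/volumes/{VOLUME_ID}/actions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": "resize", "size_gigabytes": new_size_gb, "region": region},
        timeout=30,
    )
    resp.raise_for_status()
```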
Then do you have any other place that you currently have,
you probably do, like for cold storage locally?
Oh yeah.
Oh yeah.
We have a big old FreeNAS in the studio.
All right.
So I can see this for your pipeline, right?
There's something that just triggers.
If for some reason you reach the limit, it scales it for a little bit.
Yeah.
We could use Huginn.
It triggers the other process.
It triggers the other process to offload from it
because you don't need it there anymore after all.
You have the cold storage in the NAS.
That could be a system.
And then you're just there.
Actually, the Huginn, I would be very interested to see Huginn doing this.
And at least you can actually not succumb to the proprietary world, Chris.
You can do this.
I trust you.
Maybe.
There might be a way.
It's just, it's very, you know, I've got to
build a solution to do it, and so that
You have Huginn. Really. That's the whole thing.
Let's put it to the test. I would
say you have Wes. You got Wes to do it.
Well, that's what I would do. I'd ask Wes to do it.
I would ask Wes to build a Huginn connection.
Just rsync it for me, Wes,
please. No, I hadn't really
you know, I hadn't really landed on a way
to solve that.
I bet you there'd have to be some reorganization to start offlining stuff,
but I bet you there's a lot of stuff we have in there
that could be accessed once a month, once every quarter even,
that could totally be sitting on the FreeNAS rig.
So, yeah, that might be a pretty clever solution to this problem.
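And the "just rsync it" idea could be as simple as an event-triggered script like this sketch, where every path, host, and age threshold is a placeholder; note that after moving files out from under Nextcloud you would still want it to rescan its data directory.

```python
import subprocess
import time
from pathlib import Path

HOT_DIR = Path("/srv/nextcloud/data/shared")     # placeholder Nextcloud data path
NAS_TARGET = "freenas.local:/mnt/tank/archive/"  # placeholder FreeNAS target
MAX_AGE_DAYS = 90                                # anything untouched this long gets offloaded

def offload_stale_files() -> None:
    """Move files nobody has touched in MAX_AGE_DAYS to the NAS, freeing droplet storage."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for path in HOT_DIR.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            # --remove-source-files deletes the local copy only after a verified transfer
            subprocess.run(
                ["rsync", "-a", "--remove-source-files", str(path), NAS_TARGET],
                check=True,
            )
    # Afterward, Nextcloud still needs to rescan its data directory (e.g. with occ files:scan).
```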
I don't
know when I would get it done. That's the issue,
is I ran up to the deadline now. So there's
probably machines in the studio that I guess won't
sync anymore. So I got to solve
it, I guess, when I get back. It's probably one of the things
on the top of my list.
Thanks, Brent.
I have one more little itty-bitty
question about that. I know a few years ago
you guys were looking,
you and Noah were looking at Nextcloud when it was much younger,
and you were running into some file syncing issues with,
I think it was a large number of files.
Are you seeing any of that now?
It was not called Nextcloud back then.
Come on.
Yeah, back then it was OwnCloud.
Sorry.
No, you know, I guess I haven't put like the two or three terabytes that I have in Dropbox in there, but I've put 100 gigs in there and I haven't yet had an issue.
The thing that helps a lot is I have been a little more selective about what goes into Nextcloud.
So it's a little more current.
And so it's, I don't know how to put it.
Like, it's not just like, I haven't just been doing like big data dumps
like we did early on.
So it's been a little more like selective
about what goes in there.
So I haven't done like a four gig file
that's landed in there.
But the other thing that I really like in Nextcloud
that I hadn't played with in the past,
which makes a big difference,
is they have an ignored files list that you can use.
And when you go in there and you edit this,
you can actually leave out an entire directory.
It's not just individual files.
And so, like, on my laptop with a small SSD,
I just won't even sync those files now,
which also alleviated the problem a bit.
Are you thinking about Nextcloud?
So I've been playing with it for the last few weeks,
and it's been going really, really well.
And so a few of those questions were at the back of my mind,
like, hmm, am I getting into a trap
here? But so far it's been quite
amazing. And I don't generate a huge amount
of data, but there's...
It's only a trap if you try to consume the content
directly from it. I'm sorry, I
shouldn't say these things. It's true.