LINUX Unplugged - Episode 255: Fedora to the Core
Episode Date: June 27, 2018
Big changes are coming to Fedora with the merger of CoreOS. We chat with a couple of project members to get the inside scoop about what the future of Fedora looks like. Plus the big feature of the new GitLab release, how Pocket might be Firefox's secret sauce, and why Chris is really excited by PeerTube. Special Guests: Dusty Mabe and Eric Hendricks.
Transcript
No one can fully replace the Fedora Project.
Absolutely not.
Exactly.
Don't worry, Ian.
It'll be just a pale comparison.
But you know what?
Rumor has it you have some experience with Fedora Atomic, which is kind of topic irrelevant right now.
Relevant, not irrelevant.
So I think it's going to slide.
Absolutely.
And Dusty does even more than I do.
So that'll be great.
All right.
There we go.
All right, gentlemen.
Well, let's start the show.
This is Linux Unplugged, episode 255 for June 26, 2018.
Welcome to Linux Unplugged, your weekly Linux talk show that's deploying containers with cockpit like a madman.
My name is Chris.
And my name is Brent.
Hello, Brent. Yeah, Wes is out on vacay this week.
I think he's on some kind of road trip.
Who even knows? But we'll get into that.
Coming up on this week's episode of the Unplugged program,
we have many community announcements to make for you, including some great news for the Blender project,
a little bit of a background Skunk Works project that I'm working on, a Flatpak history,
and then a couple, perhaps three, maybe two gentlemen from the Fedora Project.
It's all still getting ironed out.
We'll be joining us to talk about Fedora Core OS and many other things that are happening in the Fedora community.
Plus, GTK3's got a big update.
Neon's going for the 18.04 push and looking for testing.
But be careful before you make the plunge.
Firefox is back, and they've got a lot of attention.
And they think perhaps local processing, not cloud processing,
might be the future of their success.
And are you ready to replace MP3 at least for your podcast?
We'll tell you about a
new, yes, a new open source codec. It's not Opus. It's not Theora. It's a brand, it's not Vorbis.
It's a brand new codec called Codec 2. And it's designed to fit an entire long arse podcast
on a floppy disk. Yeah, I said a floppy disk. I'll tell you about that. Plus, we got a new release of GitLab,
and I got a small, sweet,
but really kind of to-the-point interview
with the GitLab CEO
about what the hell Auto DevOps is
and why it's landing in GitLab 11.
But, Brent, let's not go any further
without addressing the elephant in the room.
You're not Wes.
I think I sound a little different.
He and I, he's probably the Seattle version of me.
Or maybe I'm the Northern Ontario version of him.
I wasn't going to say it.
I wasn't, but that is so true.
So I'm really glad you said it.
Because you guys are like,
if I ever travel to Ontario,
you're my guy, Brent.
You're like my guy.
And if anybody ever travels to Seattle,
like Wes should be their guy. Not me, but Wes. But he's out like driving around the Pacific Northwest right now.
And you hit me up and you said, hey, I'd love to join you on the show. And it was just the
timing worked out perfectly. And the audience might remember Brent because he was on the show
ages ago during Linux Fest Northwest, talking about his photography workflow on Linux. And
Brent, since you've done that
episode, I've probably gotten an email a week asking, how can I make that switch? So it seems
to be an area, pro photography, where we're picking up a lot of users on the Linux side.
Is that all still, you're still rocking Linux to do all your pro photography, still using Darktable
and all that goodness? Absolutely. I don't see that changing anytime soon,
and the software is just getting better and better and better.
So no looking back.
Yeah, I'm surprised that so many people are giving it a go.
I'd really love to get more insights into that.
In fact, if you're a pro photographer
that has switched,
I'd love to hear why.
Go to linuxunplugged.com slash contact.
But before we go any further, Brent,
we've got to bring in that virtual lug.
Time-appropriate greetings, Mumble Room.
Hello.
What's up?
Hello.
Hey, guys.
So in the Mumble Room right now,
we have Dusty joining us from the...
Dusty, are you with technically Red Hat
or do you consider yourself with the Fedora project?
What's your delineation there?
Both.
Oh, well, that's not confusing.
No, I work for Red Hat, but a lot of what I do is related to the Fedora project. And if I were
to leave Red Hat, I imagine I would try to still stay involved. So nothing that I do there is really
tied or specific to Red Hat specifically. I mean, I could do that job for another company if I
wanted to. Fascinating. And Ian was going to be joining us too, but it looks like he just stepped
out for a moment. He might come back. I'll keep an eye out for him. Oh, where are you, Ian? Where
are you? Oh yeah, there you are. Geez. I'm sorry. I'm sorry. The screen's across. Ian, you're also
over, again, same question to you. Do you consider yourself Red Hat? Do you consider yourself
primarily Fedora?
Explain that to me.
I thought Dusty gave a great answer.
I think a lot of people in Red Hat would say the same thing.
We can shift hats, sometimes strategically, depending on what we're trying to argue for.
Nice.
So you can chew and walk at the same time of that bubble gum.
I like it.
Well, guys, we're going to get to the big news surrounding the Fedora project here in just a little bit,
but I wanted to start with an update around Blender because this is personally extremely interesting to me.
As you recall from last week's episode, YouTube had blocked all of the Blender videos,
the Blender Project's videos, the open source Blender Project's videos, worldwide.
Now, on Thursday after the show, YouTube turned their channel back on with the
exception of one video from Andrew Price, which is still blocked in the USA. And apparently,
they are requesting that the Blender project signs a new monetization agreement,
which sounds really bad just in general. I mean, Brent, is there no longer a space
just for uploading your stuff for free
and sharing it with people?
Like, isn't that still something that's worth pursuing?
Well, maybe it just means you don't do it on these platforms.
Hmm.
Yeah, yeah, maybe it does, doesn't it?
Maybe it means we need things like PeerTube.
PeerTube is what Blender turned to
when YouTube shut down their channel,
and we covered it a little bit last week, and I'm really happy to say this week that they were undergoing a fundraiser and they reached their goal.
In fact, they've exceeded their goal by just a bit.
They still have nine days left to go.
And I think maybe it'd be worth kind of pushing this just a little bit further.
Right now, they've raised 23,000 euros and their goal was 20,000.
With nine days left to go, they've reached 118% of funding.
But there's still work to be done.
They talk about their next milestone.
They say there are important features that they want to get in version one,
and those are video redundancy, so you can share bandwidth between instances
or as a fallback of the original instance if it goes down,
and the ability to subscribe to users and channels
throughout the entire federation of PeerTube. So for those of you that maybe aren't familiar
with PeerTube, I did a little deep dive this week. I loaded it up on a Fedora 28 cloud instance
using Cockpit. And as you know, I love it. And you just go right into Cockpit when you have all
the Docker stuff installed and you go to containers and you just search in there and it will search
the entire Docker hub and it will pull down PeerTube. And
you can get started with PeerTube in five minutes using Fedora 28 and Cockpit. It's great. And I
have that running on top of a DO droplet and I gave it a go and it's a little rough at this point,
but holy crap, if this doesn't have some potential, as you probably guessed, it's using
BitTorrent as the backend with a web seed. So when you're the first person to watch it, you get a web seed and it plays pretty fast.
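For listeners who'd rather skip the Cockpit UI, the same deployment can be sketched from a shell. This assumes the community chocobozzz/peertube image on Docker Hub; the tag defaults, port, and volume path here are illustrative, so check the image's documentation before relying on them:

```shell
# Pull the PeerTube image (the same one Cockpit finds when searching Docker Hub)
docker pull chocobozzz/peertube

# Run it detached, exposing the web UI and persisting data on the host
docker run -d --name peertube \
  -p 9000:9000 \
  -v /srv/peertube/data:/data \
  chocobozzz/peertube
```

From there, Cockpit's Containers page shows the running instance, and the web UI should answer on the mapped port.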
But when you're the second person to watch it, you get the web seed and you start streaming it
from the other person watching it. And I think this has massive potential for Jupiter Broadcasting
content. So I wanted to load it up and give it a go. And you know, it's early days. It's early
days. But like, when do you decide to
make a jump to something like this? Like if take, take you Brent, when did you make the decision?
When did you say, all right, I can make a jump to a totally different technology stack, even if it
doesn't have 100% everything I need. How do you make that decision? Especially when it's like a
creative endeavor. Yeah. I mean, it's most certainly going to be a personal choice as far as how much risk
you're willing to take. Yeah, exactly. But for me personally, at least in the, in the photography
sphere, it was keeping an eye on the tools and knowing that, okay, they're in their infancy,
maybe six months from now, they're going to have the very basic of the tool set that I need. So
for my transition, it was like, okay, this has 70% of what I use on a daily basis.
Is that enough?
And for me at the time, yeah, okay, 70% is enough.
And maybe I can fill in with that extra,
you know, that remainder
with some of the other proprietary tools for the moment
as the open source
or the alternative tools develop further.
Yeah, so what I like about PeerTube right now
is that I could self-host it.
And then when I upload a video
to my own little lonely self-hosted instance,
it's in development.
It's kind of working now.
It has an announcement feature
where it can announce it
to the rest of the PeerTube network,
which are a bunch of federated servers.
And so you could be
a member of one server. And if you have everything turned on right, and that server has everything
enabled, you can cross-pollinate the content options, which sort of addresses this issue
when you go self-hosted about discoverability. That's one of the things that's really driven
GitHub, is that when you move to GitHub, you automatically get discoverability. People can
search and find you in one central location. And when you switch to GitLab or your own self-hosted
repo, you kind of go off the grid a little bit. And that's why Jupiter Broadcasting has always
published to YouTube and to Google Play and to iTunes and Stitcher and all the other ones,
because you got to go where the audience is. But this sort of squares those two things.
PeerTube is letting you self-host
while also spreading the word
when you have new content available.
And I'm kind of all in.
I kind of, I'm thinking about ways
maybe I can contribute to the project financially.
Jupiter Broadcasting as a company, I mean,
not me personally.
And I've played around on a container, and it is at a stage right now where it needs more of my attention than I have time to give it.
It's a little rough, and it's going to have more updates.
But I will say this. If somebody in the community
wants to reach out and work with me to set up a PeerTube-hosted solution,
I would be more than on board with importing all of our YouTube catalog and setting up PeerTube as an option for the Jupiter Broadcasting audience
to consume our content.
100% would be on board with that.
I would even consider doing the main embeds on the JB website using PeerTube
if the experience is 70% there,
because I think the audience would be with me too.
So if this sounds
appealing to you, setting up a PeerTube server on a DigitalOcean droplet or something like that,
and working with me over Telegram, I put this out there to the community right here and right now.
Let's be early adopters, Jupiter Broadcasting. Let's be early adopters of PeerTube. Let's push
this forward. Let's help collapse the YouTube monopoly on online video. And let's do it with,
in my opinion, some of the best Linux and open source content out there. And if you want to work
with me, contact me on Telegram at Chris LAS. Let's set it up because I think this could be a
great opportunity. It's so close. I would almost do it all myself. It just needs a bit more
babysitting than I have time to give. And I recognize that fact and I want it to be successful.
So I want somebody in the community to work with me. But this, this is, this is the solution. It'll be this and a million
other things. Instagram has just recently launched IGTV. Facebook has their YouTube competitor,
but I feel like PeerTube for our audience could be the future. It won't necessarily replace YouTube,
but it could be one day the predominant way people watch Jupiter Broadcasting videos.
I'm pretty excited about it, Brent.
Do you think it would isolate anyone at all?
I don't know.
I mean, I like their federation spreading the information.
It definitely is not going to reach people on the proprietary commercial platforms.
And so I feel like for a period of time, I'm going to have to publish in both locations.
You know what I mean?
Like a transition period? Yeah, yeah.
Like a transition period, and then
once people know about it
and the word of mouth is starting to spread,
kind of make it our primary platform.
And how awesome would it be if I could self-host that stuff
and use BitTorrent to absorb
the primary bulk of the bandwidth. Isn't that
great? It's fabulous, yeah, yeah. Fabulous.
Makes me want to do all the things over Torrent.
I've always wanted to.
I was ready to use BitTorrent Sync as a way to
distribute shows, but that didn't work out either.
So we'll see. We'll see where that goes.
Do you think Torrent can handle the bandwidth?
You know, if we get like, you know,
thousands of these servers going, can it
do all that? I think it gets even better.
That's the thing.
Have you ever used any of those apps
like Popcorn Hour or whatever it was
that would live stream torrent files?
No.
Yeah, it works.
Because if you think about it, you do segmented downloads.
And if you synchronize the download with the player,
like if the download mechanism is aware of the player position in the video,
it can start trying to chunk up
the section of the video you're watching.
And so it actually lends itself pretty well to video streaming.
It's a little bit slower in some cases,
but the nature of BitTorrent,
where it can download different chunks of the file
at different times from different locations,
happens to fit pretty well with the nature of video streaming.
So I actually think it could be a pretty good experience.
I guess we'll have to wait and see.
In some ways, it seems like a meant-to-be transition, right?
It's just about time that this comes along, isn't it?
I hope so. I really hope so,
because I've had many concerns about YouTube now for quite a while.
Let's shift gears for a moment,
and there was some great progress that came out of London.
Over on the Ubuntu blog,
there is a report from Will Cook about the GNOME software design sprint. And he writes,
a couple of weeks ago, representatives from across Canonical met in London to talk about
ideas to improve the user experience of GNOME software. We had people from the store team,
Snap Advocacy, SnapD, design, and the desktop team. But we also were fortunate enough to be joined by Richard Hughes,
representing Upstream GNOME Software.
Now, Richard's name may ring a bell when we've talked about
the Upstream Firmware project that I'm a huge fan of.
And Will writes,
We started off this week by making a very quick,
a.k.a. long, list of features or improvements
we'd like to see in GNOME Software.
They broke it all down,
did a big whiteboard session, and some of the things they talked about were adding a verified publisher's identity to GNOME software to show where a publisher has undergone some kind of
formal checking so you can be assured that they are who they say they are. Also, they thought
about adding a way to report problems within applications right there in GNOME software.
Maybe it's a link to submit a bug to their bug tracker or something like that.
Or maybe it goes back to some central repository and then gets sorted out from there.
They also had whiteboarded some better layouts and the ability for publishers to add extra
theming and artwork to their application page.
So, for example, FileZilla could have some slick branding when you click
on the FileZilla icon.
Also, they worked on a refreshed home screen for GNOME software to surface new applications,
application add-ons that are related to things you already have installed,
and news and updates for your favorite applications from the software developers.
Their initial designs are going to be added to GNOME's community design pages on the GitLab
just to get feedback and things like that.
And then hopefully
they'll begin engineering work, and it'll be done in a branch of upstream GNOME Software and
be proposed for upstream and general inclusion if all involved seem to like it. Interesting that
these sprints are still going on now that we're done with 17.10 and 18.04. It's an interesting thing
that happens is different commercial interests,
in this case, Canonical, who has a pretty wide interest in GNOME software, will bring folks in
to sort of forward the things that they see are critical. And then it's kind of up to the rest of
the community to adopt their changes. And this is sort of the groundwork where forks begin,
not necessarily where forks happen, but this is sort of the groundwork
where forks could begin.
Say Canonical really gets invested
in some of these changes,
like publisher verification,
which is a pretty good one,
especially after the Snap Store stuff
and the Docker Hub stuff we've talked about recently.
That's a pretty good one.
But maybe for some reason,
let's say GNOME Upstream doesn't like it.
Now Canonical will have a decision on their hands.
Do we fork our version of GNOME software,
which will be superior,
or do we go with Upstream, stay Upstream,
keep it simple, but lack the features we want?
It's kind of an interesting,
semi-uncomfortable situation
that crops up in open source development.
You agree, Brent?
I do, but in a way,
it also allows us
a whole bunch of different options, right?
If there's two camps that are in this stuff
every single day
who feel differently about a project,
certainly there are other users
out in the world who do as well.
So to have two options sometimes
is the best.
It is sort of an organic way
to let things work out.
Sort of an evolution versus intelligent design, if you will.
And whichever one's stronger in the marketplace sort of wins out.
I know myself, I want to see continued improvements in the way I can ingest Flatpaks and Flatpak
repositories and snaps.
They've done a lot of work there.
But just recently, I was trying to add, I can't remember if it was
Atom or Visual Studio Code, but there was a Flatpak repo for it. And within step two, I'm
at the command line, I'm adding repos, I'm then installing software. And it's a little bit of a,
honestly, I just wish I could click a link in my browser and have it launch GNOME software
and do all of the work for me, maybe prompt for my password once, and we're just not there yet.
So that's where I'd like to see improvements: with these universal package formats.
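For reference, the command-line dance Chris is describing usually comes down to two or three commands. The Flathub URL is the standard one; the application ID is just an example, so substitute whichever app the repo actually ships:

```shell
# One-time: add the Flathub remote (skipped if it already exists)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install an application from it (app ID is an example)
flatpak install flathub com.visualstudio.code

# Launch the installed Flatpak
flatpak run com.visualstudio.code
```

The click-a-link experience Chris wants is what .flatpakref files aim at: GNOME Software can register as the handler for them, so the browser hands the file off and the repo setup happens behind a single password prompt.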
Speaking of Flatpak, it's been almost four years since we started talking about it.
Can you believe that?
You might not actually think that's true, because when we first started talking about Flatpak, we were talking about xdg-app. You might remember your humble host here talking
about xdg-app and how I was excited about it being a universal installation format. It used
OSTree to store and deduplicate applications. It used kernel
namespaces via a helper called xdg-app-helper to do unprivileged
containers. It had a split between applications and runtimes. It was very much like what we see
in Flatpak today. But then there was a great renaming, from xdg-app to Flatpak. And that's
when, on this show, we could tell things were going to the next level. And Flatpak, the name itself, has a great history that goes back way, way, way, way, way back to the ancient Egyptians.
I'll link to the post in the show notes that explains that at linuxunplugged.com slash 255.
Go check that out.
So in 2016, we got the new name of Flatpak.
That was version 0.6.0.
Version 0.8.0 was released in December of 2016,
and that was the first long-term stable edition
that landed in Debian Stretch and RHEL 7.
And we've had another stable release since then, version 0.10.x.
It's always been a decentralized system
that anyone could host their own applications.
But since then, they've realized it is kind of beneficial to have a centralized hub called
Flathub.
So the Flathub project was launched where you can find most applications these days
that are available in Flathub and Flatpak.
But it's not finished.
And it's not necessarily bug-free either.
In fact, Alexander writes in his blog that they have a lot of things on the list for Flatpak 1.0.
And there's a few core things they want to have done before they announce Flatpak 1.0.
So if you keep your eye out, in this next week,
you'll probably see the first release candidate for Flatpak 1.0 called 0.99.1.
It'll probably be later on as you're hearing this episode,
a couple of days afterwards.
And then later this summer,
we're going to finally get version 1.0 of Flatpak.
So we've got Snap and Flatpak 1.0s.
And actually, Snap is further along in their development.
A little bit,
I won't say much faster, but a noticeably faster development pace there. But Flatpak, since xdg-app,
since 2016, has been really kind of progressing in this nice, orderly fashion.
And the summer is going to be the summer of Flatpak. This is going to be a big summer for
both of these formats.
And it's just a couple of months away.
And soon, some of your favorite applications will be packaged up if they haven't already.
As I wrote these notes for this show, I realized I was using a snap of Telegram.
I'm using a Flatpak of my editor.
I can't remember when I even installed that.
I'm using an AppImage of Etcher.
I have all of these different applications I use on a daily basis
that are installed using these universal package formats
and what we've ended up with
is like three instead of one.
Really, it's three,
because you've got to include AppImage
in this conversation.
Is three manageable, Brent?
What do you think?
Is that too much?
Or if we just landed on three solids
for application delivery,
is that good enough?
I guess let me throw a question
right back at you.
Do you see anyone pulling ahead
or is it kind of everyone's keeping the same?
I kind of feel like it's different use cases are emerging.
So you're answering your own question.
Like snaps are emerging as a great solution for complicated commercial applications or
applications where you really want to have a high availability of discovery.
Flatpaks have been extremely useful. For example, on some systems I run a development build of gHeaded,
and I use a Flatpak to do that.
That's really nice.
It's like Flatpaks, to me, are in a way what PPAs should have been all along.
Like Canonical tried out PPAs, and it kind of sucked as far as I was concerned.
It was great for a lot of users.
I didn't like it a lot.
Whereas Flatpak is like the perfect PPA.
If I could have had PPAs as this,
I'd have been a lot happier with it.
Whereas Snaps feel like XYZ vendor
has just released an application for Linux
and now it's available.
And JJ, you're pointing out like a lot of these,
both Flatpak and Snaps also have like
this wine compatibility layer now.
Yeah, and it's pretty interesting.
I actually got to test out the GOG Galaxy app on my system
and it works relatively well,
although there seems to be some focus issues
on my particular system
that I don't know if anybody else is having
with regards to if I go away from the game
that's currently playing,
I cannot go into the,
put my mouse inside of the game again. So that's a bug
that I might need to report. But it sort of is interesting because I wouldn't need to have any
sort of complicated setup of Wine in order to run all these applications. And it sort of gives me a
little bit of hope for Linux for users such as myself who have
infrastructure slash
universities that
refuse to quote unquote
officially support Linux.
I agree. Yeah, it's been nice,
the bundling
of, like, a compatibility layer.
In the case of Skype, one day it's going to be
to fix audio issues; in the case of some
others, it's to load a Wine library.
It's very useful.
And I've also seen CM2 in the chat room is saying
that it's a good way to have GNOME stuff,
the newest GNOME stuff in CentOS 7.
So you can have like a RHEL stable base,
but you can have the newest user land applications.
That obviously appeals to me too.
Maybe beta software as well.
I think Firefox beta is also available
in these package formats.
Yes, and Tenbit, you're pointing out
that it is getting easier for folks
that like to use these GUI installers
like the GNOME Software Center or KDE Discover, yeah?
Yeah, it seems to me like if you're on the desktop anyway,
you've got some options for not really caring
which one you're using.
It makes it way simpler, doesn't it,
when you don't do it by the command line
and you just use like Discover or GNOME software?
Yeah, I think so.
Yeah.
Now, I want to talk about Fedora Core OS.
Yeah, I said Fedora Core OS.
Matthew Miller on June 20th posted over at fedoramagazine.org,
welcome to Fedora CoreOS.
Everything you know is changing.
Now it's time for us to figure out how we can welcome and include container Linux into the circle of Fedora friends.
The Fedora additions strategy intentionally makes space for exploring emerging areas in operating system distributions.
Core OS will help us push even further and bring new ways of doing things as a project.
Now, Fedora Core OS is going to be built from Fedora content
rather than the way it's made now.
It won't necessarily be made the same way
we make Fedora OS deliverables today, though.
No matter what, we absolutely want Core OS's user experience
of container cluster host OS that keeps itself up to date
and you just don't have to worry about it.
That's the experience they're going for.
Fedora being a community-driven project, though, obviously,
this is your chance to get involved.
Now, I read that.
I just quoted you word for word.
I just read it from Matt's post, and it barely makes any sense to me.
Barely, barely makes any sense.
From what I grok, Fedora is becoming something completely different,
but it sounds like it's something that's even more awesome than it currently is.
So I've invited Dusty and Ian on the show here to help me sort this out.
They're both involved in the Fedora project, involved with Red Hat, and they have a better insight into this than your humble host here does.
So, guys, welcome into the show.
And, Dusty, because you were here first, I'll start with you. Where do you want to begin the conversation around Fedora Core OS? Are we
talking about sort of a refactoring of Fedora here? Are we talking about a new Fedora in a sense?
I think what we're talking about here is kind of taking the use cases that Container Linux and
Atomic Host have been targeting and trying to, you trying to bring the best of both of those worlds,
so the best from Container Linux and the best from Atomic Host,
and to also try to bring those two communities.
Obviously, Container Linux has a large user base,
and Atomic Host has a very engaged community.
Trying to bring those two together and make something, you know, better,
more focused. If we combine our efforts, you know, hopefully it'll be better and it'll be more rock
solid. Now, Ian, you kind of come at this from the Atomic side of the house. And I got to ask you,
is this kind of where the puck was going to skate regardless of the CoreOS merger?
Because to me, I've always kind of wanted Fedora to go more atomic.
Yeah, well, I think, you know, it's funny to me to hear you say that that almost makes sense to you
because, you know, I saw Matt shared that announcement with me and I saw it go out and I thought,
oh, yes, this is exactly what I wanted to say.
There's something that occurs to me when you did say that.
You know, I think a realization that we came to, if you've been in the Fedora project for a long time,
I've always thought of it as something that was very nimble and very quick.
And certainly, as someone in Red Hat who's also trying to manage an OS that we want to have a 3-5-10-year lifecycle,
it seems extremely nimble.
But some of what Fedora does now seems a little bit less nimble than, for example,
something like Container Linux was able to do.
So I think a big element of what Matt was trying to get across in that announcement is that we're going to be open to slightly different ways of perhaps constructing this subset or this sub distribution of Fedora that maybe we weren't necessarily before we actually talked about marrying up with the Container Linux community.
Because looking at some of the ways that they do things,
how they put stuff together,
the sort of rolling update model and things like that,
it is really quite nimble.
And it's something that I think we aspire to do more of as we sort of merge those two upstreams.
I see. I see.
So in your perspective,
this isn't necessarily a huge shift for Fedora, the main project itself, but maybe for
one of the additions of Fedora? Am I tracking? Yeah, and I think Matthew also used this concept.
We've had a concept in Fedora called spins, which is meant to be another place where people perhaps
experimented a little bit more with the way that the distribution was composed. Again, it doesn't
mean that we're going to go completely off-piste, as it were, and start pulling in content that comes from nowhere else.
But we do have a fairly – and Fedora is very – this is one of the things that makes Fedora great as a full-fledged distribution that's released every six months or so.
They have a very rigorous process for composing the entire thing together.
It's something like 20,000 source RPMs.
They experiment with all sorts of different ways to do interactive installs,
all sorts of different combinations of install configurations. And they do that quite well,
but we don't necessarily need to be constrained by that when we release an update of our minimal
container host. It's meant to support an entirely different use case, right? So these are the kind
of things that we're hoping to create a little bit more space for now that we've created Fedora
Core OS and have sort of announced some of the guardrails for that community.
I see. So if I today am a Fedora Atomic user, and I'm just a minimal installation,
but I found it to be a pretty competitive, pretty great server OS, what should I expect now as a
Fedora Atomic user? Like long term, what kind of shift should I expect here? Because it sounds like
Core OS and Fedora Atomic had a lot of the same goals in mind.
So as a user, essentially what I'm trying to get out of the project sounds like it's staying the same,
but some of the technical implementations might be shifting.
This is where I get to sign Dusty up for whatever I want, since I've told you what to expect.
I love it. Dusty will be... I'm going to give him a second to try and compose an answer to that too before I write my wish list.
Yeah. What do you think? As a Fedora Atomic user, should I be a little concerned that the thing
I've just started to fall in love with is about to change from what I've come to expect?
I don't think so. I guess it depends
on how you look at it. But like I said earlier, we're trying to take some of the best technology
from both. So one thing that Container Linux users really liked is Ignition, which is kind of like a
early bootstrapping tool that runs in the initramfs. So if you've ever tried to add a new systemd
unit to your cloud instance using cloud-init, you realize that adding a systemd unit
via another systemd unit gets pretty hairy pretty quick. So running and
modifying the system in the initramfs, before you switch root, is actually pretty powerful.
So that's one thing that we're kind of carrying forward.
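To make that concrete: Ignition consumes a declarative config, applied from the initramfs before the switch to the real root. Here's a sketch of such a config built in Python, with field names following the Ignition v2 spec as best I understand it; the unit itself is a made-up example:

```python
# Sketch of an Ignition-style config declaring a systemd unit. Applied in
# the initramfs, so the unit exists before the real system ever boots.
# Field names follow the Ignition v2 spec as I understand it; the unit
# contents are a hypothetical example.
import json

unit = """[Unit]
Description=Hello from Ignition
[Service]
ExecStart=/usr/bin/echo hello
[Install]
WantedBy=multi-user.target
"""

config = {
    "ignition": {"version": "2.2.0"},
    "systemd": {
        "units": [
            {"name": "hello.service", "enabled": True, "contents": unit}
        ]
    },
}
print(json.dumps(config, indent=2))
```

Compare that to cloud-init, where you'd be writing a unit file out of another unit's shell script, which is exactly the hairiness Dusty describes.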
Another thing is some of the update technology behind Container Linux was forked off of Chrome OS.
And it's something that had not been maintained very well.
And it was kind of something that was a little more scary for some of the container Linux
maintainers.
However, with rpm-ostree and OSTree, we have some of the core developers for that technology in-house
at Red Hat.
So if there are things that we need from those technologies in the future, new features or
whatnot, we can easily add them because we have the people, the expertise there.
Oh, yeah.
So as far as Fedora Atomic goes, it will actually continue.
There will be a Fedora 29 release of Fedora Atomic.
After that, Fedora Core OS will take over.
But, you know, in the long run, it's not going to be much different than
what you have today, if you know rpm-ostree already. Okay. I'm going to put Dusty
on the spot here and say, Dusty, would you anticipate that there's going to be a continuous
upgrade path between say Fedora 29 atomic host and whatever comes out of Fedora Core OS? Is that
already something we're thinking of as a goal?
I think it might be possible.
I just don't want to commit to that.
Interesting.
Very clever.
Yeah, probably a good move.
Yeah, what we're doing as part of this, you know,
merging of the two communities as well is looking at it as an opportunity
to get rid of bad design decisions
that we may have had early on. And if we get to a point where we decide, hey, there's this
change that we want to make that means that we couldn't, you know, in-place upgrade Fedora
Atomic Host to Fedora Core OS, now's the best time to do it, right? So it's very early days in the development of Fedora Core OS,
but we're building off of pretty stable technologies.
Obviously, there's a community behind Container Linux
and a community behind Fedora Atomic Host today.
Those technologies work.
So we're starting from a good position.
We're kind of throwing everything in the middle there and then coming out with something better, I believe. Yeah, you get to pick from the best,
really, in this situation. It's sort of a, you have more good things to pick from than you can
really kind of make sense of. It kind of sounds like an ideal situation in a way. Now, all right,
so that's all fine and good. But as a Fedora user, I've got to wonder, eventually, does this become main Fedora?
Is it too soon to even say that?
But it seems like to me,
this would really be a standout differentiator for Fedora
if this wasn't just one of the editions of Fedora,
if this was Fedora.
Right.
So I think, at least the way I think about it,
so for example, we have Fedora Atomic Host, right?
We have Fedora Workstation.
We have a new Fedora IoT edition.
We have Fedora Server.
The technology behind Atomic Host
and the technology behind upcoming Fedora Core OS
is already starting to branch out into some of those other editions, right?
So, for example, Atomic Workstation, which was rebranded as Fedora Silverblue,
uses rpm-ostree and OSTree in the background.
So you've got your Git-for-your-operating-system-like approach
with the transactional updates,
but it's just a different package set, right? So, to answer your question, you know, is Fedora
Core OS the future of Fedora? I mean, I think Fedora Core OS is more or less an edition of
Fedora, but the tooling behind it, right? The tooling behind it, the tooling behind what we were
doing with Atomic Host and what we're doing with Silverblue today is possibly the answer to your
question, is that the future of Fedora, right? Because, you know, could Fedora Core OS be the
future of Fedora? Well, you know, there's a lot of users that use Fedora as a workstation and
Fedora Core OS, we're not including, you know, GNOME, right? So that's
kind of hard to answer that question unless you look at it, is the underlying technology the
future? I think that is an open question and that is, it will be interesting to see what happens
there. Yeah, that's kind of where I was going. Exactly. It seems like that fundamental
technology that's making this such a great option would be just as useful on the workstation.
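That "Git for your operating system" model can be loosely pictured as complete OS trees with an atomic pointer flip between them. This is a toy sketch of the idea, not rpm-ostree's actual implementation:

```python
# Toy sketch of an OSTree-style transactional update (NOT rpm-ostree's
# actual implementation): each OS version is a complete tree on disk, and
# "deploying" an update is an atomic symlink swap via rename(2), so a
# failed update can never leave a half-written system.
import os
import tempfile

def deploy(root, version, files):
    # lay down the new tree next to the old one
    tree = os.path.join(root, "deploy", version)
    os.makedirs(tree, exist_ok=True)
    for name, data in files.items():
        with open(os.path.join(tree, name), "w") as f:
            f.write(data)
    # build the new symlink beside the live one, then rename over it;
    # rename is atomic, so readers see either the old tree or the new one
    tmp = os.path.join(root, ".current-tmp")
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(tree, tmp)
    os.replace(tmp, os.path.join(root, "current"))

root = tempfile.mkdtemp()
deploy(root, "v1", {"os-release": "Fedora 28"})
deploy(root, "v2", {"os-release": "Fedora 29"})
with open(os.path.join(root, "current", "os-release")) as f:
    print(f.read())
```

Rolling back is just pointing the symlink at the previous tree, which is why these systems can promise transactional upgrades and rollbacks.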
Because I, I guess I think about this and I think this is kind of confusing to the average user.
There's a lot going on here. Uh, it's not like open SUSE levels of confusing yet, but we're like
getting there. Um, and I wonder, I guess this is kind of a nebulous question. I don't know if it's
better for Ian or for Dusty, but how much do you guys
give thought to the messaging around all of this and the end user's ability to track this?
Seems like it's pushing it. You know what I mean? Well, so I think the messaging initially on this
is that this is a continuation of the interest groups that grew around Container Linux and the Fedora atomic host, right?
So the focus of this is very much that use case.
I would reiterate Dusty's point,
is that the idea of having a largely immutable host
with this sort of atomic update
has at least two other major applications
that people are able to experiment with within Fedora,
what's now called Silverblue, and some initial
work on IoT. And I think that's a very natural way for it to be allowed to develop sort of in
parallel with contributions coming across where there's commonality, like again, at the OSTree
or RPM OSTree level. But if you're a Fedora user and you're confused by this, but you haven't been
involved in the Atomic Host project and are not interested in containers, it's probably not an area that you need to be looking at right now. But if you were
part of the communities that were interested in container technologies, if you're interested in
container orchestration, or if you were actually part of the Atomic Host community or container
Linux on the CoreOS side, then I hope you're not confused by Matthew's announcement.
And get involved, right?
Yes, yes, please.
Yeah, that's what I like the most about what you guys
are doing, really, is because in a
way, and my co-host on Linux Action
has pointed this out to me, it was obvious, but
I hadn't thought about it,
this could trickle upstream
to RHEL eventually, what you guys do
here, yeah? So it's, in a way, it's a way for
the community to impact RHEL,
which is one of the largest commercial and open source products in the world. Yeah, and we like, I mean, I would say that
that's true of many things in Fedora, and that's quite explicit, so. Yeah, fair enough. Okay, so I
don't know if this is a Dustin or Ian question. Dusty, maybe this is more towards you, but can
somebody explain to me what the difference between a Fedora project leader is
and a Fedora program manager?
Because you guys just got yourself a program manager over at Fedora.
It looks like it's Ben Cotton, and he is pretty excited to take the new role on.
But what is a program manager, and what's that all about?
I'm not sure exactly.
I imagine the Fedora project leader is more of a community-facing role, you know, let's kind of organize and get people inspired to, you know, contribute and collaborate.
And the program manager is probably more, you know, looking at what's coming down the pipeline.
Do we have people working on these things?
Are we going to be able to, you know, get this feature into Fedora 29, that type of thing. Yeah. So very much sort of a herding-the-cats role, which can be an absolutely full-time
position. In fact, it can be one of the most arduous positions. So herding cats in the
community. Fun. Oh, I can imagine. Let me tell you, Dusty, I can imagine. And it is a
pain in the arse. Let me tell you, it is the hardest thing I've done in 12 years, maybe 13 years now of podcasting.
It is the hardest thing I do.
So it does take some serious dedication.
Well, guys, is there anything else around this that we should talk about?
Anything else you think the audience should probably walk away with
knowing about the recent announcements, the merger of Fedora and CoreOS?
Is there kind of like a takeaway that you want to make sure you hit on?
I would say that we're deadly serious about the ability for people who want to get involved in the community
or are already involved in either of these two communities to come on board
in the sort of early requirements gathering phase of this.
We're absolutely, we have a number of, I think, interesting avenues of exploration right now that we're going down,
and we would love to have people come on board. Yep. And I believe we'll have our first community
meeting tomorrow. I think we're going to try to have the same time slot as what we were doing
for the atomic community meeting, which is Wednesdays at 1630 UTC. But we'll see. We should
have our first one tomorrow. So if I wanted to get kind of in the ground floor level,
wanted to maybe influence some of the decisions,
is that sort of the first thing I should start attending?
Is that Wednesday meeting and start contributing there?
Yep, that's a good place to start.
But if you're not able to come there,
we have a mailing list.
We have a discourse set up for discussions.
Really, there's a lot of different
ways to get involved. We have an IRC channel. If you have any questions and want to get involved,
drop by our IRC channel and we can point you in the right direction too.
Awesome. Guys, here's what I love about what's going on. First of all, it really is something
where Joe Sysadmin or Joe Developer can get involved
and push the direction of this thing,
which eventually will trickle upstream to Red Hat Enterprise Linux,
which is, that's remarkable if you think about it.
That right there itself is remarkable.
But the other thing I really like about it is
this is about as transparent as it can get.
I mean, I appreciate and understand there was some business sales
and behind-the-scenes stuff that had to happen,
but really even before this thing is completely formed, you guys, two of you, Ian and Dusty, are on the show.
Matt was going to make it, but he couldn't.
Three of you were going to be on the show to talk about this.
The blog posts are out there.
The weekly community meetings are happening.
This is about as transparent as it
gets with something like this. And the reason why I think it's worth underscoring that a couple of
times is this really will affect the industry. It goes from Fedora and it trickles down to
so many systems, so many enterprise grade systems, the things that people in our audience make their
living off of. So I really appreciate you guys taking the time to come on the show and helping me wrap my brain
around it because it's part of that transparency. So thank you, Ian and Dusty. You guys are totally
welcome to hang out with us. We're going to keep going. All right. Thanks for having us. Thank you,
Chris. Yeah. So let's talk about GTK3 for a moment because it hasn't gotten a lot of attention for a
while. GTK4 has sort of been the
new big promise and we've all kind of been holding our breath, but it's taking a little bit longer
than initially thought. So over on the GTK Plus development blog, we have ourselves a post from
Mr. Matthias Clasen, and he talks about GTK 4 being just a little bit out. So in the meantime, we're going to see ourselves a GTK 3.24.
And it's going to be a maintenance release that has some nice new features, including, Brent, I know your favorite feature, new emoji support.
Yeah?
I mean, that's got to feel good.
How did you know?
I don't know.
Because I feel like you and I could have a conversation in emojis, you and I.
You had this very special emoji that you sent me.
It might have been someone with a beard.
Those are stickers.
Do not confuse the emojis with the stickers.
I do love my Putin stickers.
They're just upgraded.
Yeah.
I got Kim Jong-un stickers and I got Vladimir Putin stickers,
and I'll use them all the time.
So the long and short of this is really kind of a practical acknowledgement
by the GTK community that GTK4 is taking a little bit longer.
So we're going to – I know we've been telling you that you could just plan on GTK3.22 being final
and it was just going to stick around, but we're going to make some changes.
But it's not just emojis that they're including.
There's going to be support for new OpenType font features and other nice things and bug fixes.
So keep an eye out.
The GTK3 cycle is still alive, and I wanted to give you just a heads up about that.
Another bit of community news for those of you that are running KDE Neon.
If you are crazy, you can now update to
18.04. KDE Neon is sort of this promise of an LTS Ubuntu release with a rolling KDE release
that can come within hours of the new hotness because the developers are using Neon themselves.
And this, since the project's inception, is the first time they're hopping LTSs. They're
going from 16.04 to 18.04, but it hasn't been done before. So they're looking for your help to test,
and I underscore test. It is not recommended for production systems. But if you are a lunatic
and a masochist, you can go try it out. I'll have a link in the
show notes and help them test it. I myself will be doing this upgrade after enough of you throw
yourselves on the sword testing this. I am not going to try this one myself. I don't need to
break the last remaining Neon systems I have. And in their post on community.kde.org, they say it's
likely to break your system right now.
And it's gotten better over the last few days.
I've been following the Telegram conversation around this.
It's a full-fledged upgrade.
And I advise caution.
But if you're willing to try it, so that way I don't have to, it's available for you.
Brent, what desktop environment are you on over there?
Are you a GNOME guy?
Oh, no.
KDE.
Really?
You know, and that's a recent, that's actually a recent thing for me.
I was on, and don't hold me against this if you feel strongly about it,
but I was doing Cinnamon for a long time and suggesting that to a lot of people.
But I just, I've had my eye on KDE since forever, maybe 10 years,
and just sort of tracking what they were doing.
And I felt like, okay, I'm going to jump.
So about a year ago, sure enough, did that.
And like you are discovering now, there's no looking back.
I'm really enjoying it.
It's just super slick.
It does exactly what I need.
It does a lot more, but it just gets out of the way. And so that's me. I, for some reason, did not know that.
Last time I talked, I don't know if that came up, but that's interesting to hear. Because you were
using, let's see, last, when you were here at JB, you did use a KDE desktop to do the photo stuff,
as I recall. Right at home. Yeah. I hadn't even thought about it.
That's why I didn't complain about it.
Yeah, it's funny you didn't.
You just got right to work, actually.
I got a link in the show notes for you guys to check out.
It's just worth mentioning.
There's not really a lot to it, but Google has completely ignored Windows in their release
of a VR video editor, a virtual reality video editor.
It's mostly for 180 perspective video,
but they released it for Linux and Mac,
but not Windows for a VR video editor.
Now, what does that mean?
Maybe nothing, maybe nothing,
but I thought it was worth mentioning.
And if you're interested in grabbing a VR editor, maybe like Wimpy might be, go check the show notes.
I'll have a link there.
But Brent, you sent me this link to Digital Ocean's post about cracking the Enigma encryption in 13 minutes using 2,000 droplets.
How great is this post?
How great is this post, Brent?
It's like...
This entire post, and I have to say, I came to it not directly from DigitalOcean,
but a roundabout way. And when I saw that it was a DigitalOcean post, I thought,
geez, it's even better. And so this is an article that is wonderfully placed between
sort of the tech lover and everything DigitalOcean is capable of and more.
It sounds like they pushed the boundaries here,
but also some really neat historical challenges.
Yeah.
Yeah, of course, you guys are, when I say the Enigma machine,
I bet you that everybody in the audience knows what I'm talking about.
It was a complicated encryption system during World War II.
The apparatus consisted of a
keyboard, a set of rotors,
an alphabet ring, and
plug connections, all configurable
by the operator. For the message
to be both encrypted and decrypted, both operators
had to know the two sets of
codes. A daily base code
changed every 24 hours,
which was published monthly by the Germans.
Then each operator created an individual setting used only for that message.
The key to the individual code was sent in the first character of the message,
and it was coded in the base.
That then created over 53 billion possible combinations
that changed every 24 hours.
Because of this, the machine was widely considered unbreakable. Now, as you know, we did eventually crack it. Now, it still makes for a
great challenge, especially when you're trying to get AI to crack it. So, Lukasz Kunzowicz, which I'm
not sure on the pronunciation there, but he is the head of a data science division.
He's Polish, and he was working on this problem.
And he decided to recreate the most complicated version of the Enigma machine, the Nazis' Navy version of the machine, which was the most sophisticated iteration.
His team started by recreating the machine's rotors and plugs in Python.
In Python.
Let's just take a moment and appreciate now
that one of the most famous encryption machines from World War II
has been recreated in Python.
A moment here, if we will.
And he recreated all of these functionalities in Python.
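To get a feel for what that entails, here is a drastically simplified, hypothetical sketch: one rotor plus a reflector, none of the Navy machine's extra rotors or plugboard. It still shows the core Enigma property that the same settings both encrypt and decrypt:

```python
# Hypothetical, drastically simplified Enigma: one rotor and a reflector
# (the real Navy machine had multiple rotors plus a plugboard). The rotor
# and reflector wirings below are the historical rotor I and reflector B.
import string

ALPHABET = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # historical rotor I wiring
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # historical reflector B wiring

def _fwd(i, offset):
    # signal enters the rotor shifted by the rotor's current offset
    return (ALPHABET.index(ROTOR[(i + offset) % 26]) - offset) % 26

def _back(i, offset):
    # inverse of _fwd: back out through the same rotor
    return (ROTOR.index(ALPHABET[(i + offset) % 26]) - offset) % 26

def enigma(text, start=0):
    out, offset = [], start
    for ch in text.upper():
        offset = (offset + 1) % 26        # the rotor steps before each letter
        i = ALPHABET.index(ch)
        i = _fwd(i, offset)               # through the rotor
        i = ALPHABET.index(REFLECTOR[i])  # bounce off the reflector
        i = _back(i, offset)              # back through the rotor
        out.append(ALPHABET[i])
    return "".join(out)

ciphertext = enigma("ATTACKATDAWN", start=5)
# because of the reflector, the same settings decrypt the message
print(ciphertext, enigma(ciphertext, start=5))
```

The reflector is what makes the machine self-reciprocal, and, famously, what guarantees no letter ever encrypts to itself, a weakness the codebreakers exploited.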
Initially, they tried to teach their AI to decode the Enigma code itself,
but it just didn't work.
And they also tried to spin up Lambda functions from Amazon
to do like the computation.
But the problem was Lambda functions were not very quick.
And some of the limits around execution time bit them in the arse.
So they decided to keep pushing forward.
They started blogging about this.
And somebody at DigitalOcean noticed that this was a thing,
contacted them, and quickly agreed to provide what they call
their ML one-click droplets, their machine learning one-click droplets.
It's like, you know, this is by one of their R&D engineers who noticed this.
It's like, this is kind of what we do, guys.
We're kind of like super focused on developers.
You guys are developing code to do this.
Just come over here.
We'll consider this a marketing thing.
So they worked with their R&D engineer.
He set up these machine learning droplets.
And they contacted DigitalOcean's help desk and said,
hey, can you spin us up a thousand droplets?
A thousand droplets. A thousand droplets.
And they did it. They spun up a thousand droplets for them in a day. And they call it,
they have a cute name for it. They call it, they hydrated. They hydrated these droplets
in one day. They hydrated a thousand droplets. So it took the team about two weeks to train
the machines to create the Python code.
And then it took another couple of weeks
before they had their first success.
The way they train the machine learning, though,
to read German is kind of weird.
They fed the machine learning children's stories
and a few, like, military telegraphs and things like that.
But fairy tales written in simple language like Rumpelstiltskin, Cinderella, Grimm's fairy tales,
Hansel and Gretel, those plus 200 others were the data set. They fed the machine learning algorithms
to pattern match German language.
Really, it came down to simple language because these messages were generally short and very simple.
It actually worked to train them on children's books.
And really interestingly, at the end of the day,
the German language wasn't understandable by the AI in any shape or form,
but it could pattern match.
It could pattern match. And so it didn't need to learn to speak German. It could pattern match.
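A far simpler stand-in for that neural scorer is character bigram frequencies: learn which letter pairs are common in sample German-style text, then rank candidate decryptions by how well they match. Purely illustrative; this is not the team's code:

```python
# A much simpler stand-in for the neural "German-ness" scorer described
# above: character bigram frequencies learned from sample text, used to
# rank candidate decryptions. The corpus line is a made-up fairy-tale-style
# snippet, not the team's actual training data.
from collections import Counter
import math

def train_bigrams(corpus):
    pairs = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    total = sum(pairs.values())
    return {bg: math.log(n / total) for bg, n in pairs.items()}

def score(text, model, floor=-12.0):
    # unseen bigrams get a harsh floor penalty
    return sum(model.get(text[i:i + 2], floor) for i in range(len(text) - 1))

corpus = "es war einmal ein koenig der hatte drei toechter " * 20
model = train_bigrams(corpus)
# plausible German scores higher than keyboard mash
print(score("der koenig", model), score("qxzvkwqjzx", model))
```

The point, just as in the interview, is that the scorer never "understands" German; it only recognizes which candidate plaintexts look statistically like the training text.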
So they did that for two weeks. And they ran the Enigma code through a thousand droplets.
And after they trained it how to read German, or pattern match, I guess,
and they threw a thousand droplets at it,
after 24 hours, they broke the
Enigma code, and they were able to read the message.
That was interesting.
That's not bad.
But what if we threw another thousand
droplets at it? What if we had
2,000 droplets
to break this code?
Once they threw 2,000 droplets, they were able to crack the code in 13 minutes.
13 minutes.
But if you think about it,
and this is the author, he says,
you know, you really have to accept the fact
that even if you have 2,000 droplets,
you still have billions of combinations to be checked.
And the neural network that we used
is really only good at spotting German language patterns.
It's not a speed demon.
And because it uses recurrence,
which gives us a boost when dealing with languages,
but you pay with calculation time.
So for the AI to shine,
we actually had to use 2,000 droplets
to do all the tedious work.
Everybody praises AI,
but it was actually the droplets that did 99% of the work.
Like, right?
We wrote one minion in Python,
and then DigitalOcean has a very nice API for storing images.
So you create one minion, and then you tell DigitalOcean,
create this as an image,
and then please go create 2,000 copies of this image, and it's off. The code simply just runs. And because there was no state, we could add more systems. They were not coordinated in any way. Each minion doesn't know anything about
the others. They're fully autonomous, and that's great because it means we can have 200,
we can have 2,000, we can even have 20,000 of them.
The more we have, the less time that it takes to break the Enigma code.
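The fan-out step might look roughly like this. The DigitalOcean droplet-create endpoint does accept a list of names, so batches can be cloned from one saved image, but the image ID, size, and region below are placeholders, and a 2,000-droplet fleet would be requested in successive batches:

```python
# Sketch of fanning out "minions" from one saved image via the
# DigitalOcean API. The create endpoint accepts a list of names; image ID,
# size slug, and region here are placeholder values.
import json

def minion_batch(prefix, count, image_id, size="s-1vcpu-1gb", region="nyc3"):
    return {
        "names": [f"{prefix}-{i:04d}" for i in range(count)],
        "region": region,
        "size": size,
        "image": image_id,  # the snapshot every minion boots from
    }

payload = minion_batch("minion", 10, 12345678)
print(json.dumps(payload, indent=2))
# the actual call (requires an API token), repeated per batch:
# requests.post("https://api.digitalocean.com/v2/droplets",
#               headers={"Authorization": "Bearer <token>"}, json=payload)
```

Because each minion is stateless, scaling is just issuing more of these requests; nothing has to be rebalanced or coordinated afterward.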
Wow.
And the hard numbers here?
The 2,000 virtual servers ran through 41 million combinations per second.
It took them 13 minutes for the minions to do the work.
That's a lot of Python.
That's a good one, Brent.
I love how they call the servers minions.
Yeah, that's good.
That's good.
And it really shows you how easily
that these things are going to be cracked in the future.
When you can just deploy 2,000 droplets,
it's just a whole other game.
It's really a whole other game.
Why don't we mention DigitalOcean right here, right now?
do.co.unplugged.
That's where you go to get a $100 credit.
do.co.unplugged.
Now, DigitalOcean is an easier way to deploy servers.
It's, oop, let me move that mic there.
It's a way to get infrastructure, oh, on demand.
Now, I could tell you that,
but why not have the professionals with great voices tell you?
Digital Ocean's cloud computing platform was designed with simplicity in mind,
giving development teams the ability to easily manage infrastructure.
That's why thousands of businesses around the world are building, deploying, and scaling their applications faster and more efficiently on DigitalOcean.
Using our simple control panel or API, you and your team can seamlessly go from deploying
to scaling highly available web, mobile, database, or machine learning applications.
In just a matter of seconds, quickly set up one to thousands of virtual machines, easily
secure servers and enable performance monitoring,
and effortlessly attach more storage.
Plus, you'll always know exactly what you'll be paying every month with a predictable flat pricing structure
across all global data center regions.
By using DigitalOcean,
you'll get the infrastructure experience
that development teams love
with the features your business needs.
Oh!
Sign up for DigitalOcean today and experience simplicity at scale.
It's true.
My favorite rig is $0.03 an hour.
They got these flexible droplets now, do.co.unplugged, to get a $100 credit.
They have a great setup with a fantastic dashboard.
Also, a big thank you to Digital Ocean for sponsoring the Unplugged
program now for years.
And while I'm giving out thanks, let me thank
Linux Academy. Linuxacademy.com
slash unplugged.
That's where you go to learn more about their platform
and sign up for a free 7-day trial.
It's everything you need to learn about
Linux and all of the stuff that runs Linux.
And if you've been a subscriber for a while,
buckle up, because your membership is going to get way more valuable. In July, they're launching 150
new challenges and content and courses. And some of the new topics they're launching are going to
get your attention. I know you've been thinking about security a lot lately. So have I, and so
has Linux Academy. And they're stepping up. They're also helping you get a SaltStack certification and the Red Hat certification of expertise in virtualization
and architect full support. They also have an AWS security specialist certification courseware now,
and speaking of Lambda, a Lambda deep dive. And that's like four things out of over 150 new
things that are launching at Linux Academy.
They're going crazy. They're going nuts.
Linux Academy is blowing out the doors.
Crazy Linux Academy is going to give away a seven-day free trial too
when you go to linuxacademy.com slash unplugged.
Now they're doing a live stream to celebrate the big giveaway.
And I just am super nervous about this.
In fact, I'm going to fly down to Linux Academy. They're going to pay
for it, but I'm going to fly down there to help make sure the stream goes well because it's all
on Linux. It's the whole new OBS setup that I did for them. And now they're going to do this big
content launch on the OBS system. I'm a little nervous, but I'm also crazy, crazy excited because
I've heard what they're working on. I saw the effort and I'm
going to go down there because I, I, they didn't, they're not paying me to do it other than paying
for my flight because I just really am excited about what they're doing. I think it's great.
And it's going to bring even more people into Linux and open source. So linuxacademy.com
slash unplugged where you go to get started. And then you can also read over on their blog about
all of the new stuff that's coming.
It's a big, big deal.
And I'm going to be there to help them announce it all
and grok it all.
So if you're in the Texas area,
I'm going to be there again in about a week or two,
about two weeks, because the big announcement,
the big live stream is on July 10th.
And I'm going to get down there around the 8th or so
in the Dallas area,
and I'm going to need to have dinner on the 9th.
So if you're in the area and want to get dinner July 9th, let me know.
I'll be in the Dallas Keller area.
Fort Worth, really.
Let's do it.
Because I'm going down there again to make sure this thing's a success.
Because damn if I'm not going to switch a company to Linux and then not back it up.
So I hope you tune in live to watch how it all goes.
And if you're a Linux Academy subscriber or if you're not one yet, go check it out.
Linuxacademy.com slash unplugged.
Also, a huge thank you to Ting for making my entire road trip to Texas and back possible.
I'm in Seattle again.
And the month that I was away doing shows on the road was powered by Ting.
They have a CDMA and a GSM network,
and damn if I didn't leverage the hell out of that.
It's $6 a month for what you want.
Just one line. Boom. $6 a month.
Then you just pay for your usage on top of that.
Your minutes, your messages, and your megabytes.
However much you talk, text, and data you use, you pay for.
It's a fair price regardless.
And they have nationwide coverage.
Like I said, they've got two networks, so there's a lot of devices you can bring. Check their BYOD page.
There's no contracts, no service agreements, or no weird loopholes where they try to get you for
a couple of years. And they've got the best control panel in the business for mobile. So
check it out by going to linux.ting.com. That'll take $25 off your first device. Oh, and it'll give you $25 in service credit if you bring a device.
And they got a lot of compatible devices.
I'm just saying.
Now, every now and then, Crazy Ting has a crazy deal,
and they're blowing out the doors right now with $1 SIM cards.
I'm serious.
This is it.
This is what Noah and I tell you about all the time.
Go buy 10 of them.
seriously go buy 10 of them
why not
they're a dollar a sim card
and they don't even charge you until you activate them
and when you do activate them they're $6
so both Noah and I
load up on these suckers
I just bought 5 of them
go buy 10 of them seriously
I bought 5 of them because I'm poor
but if you can afford 10 bucks, go buy 10 of them.
Because you can put them in your bag.
You can hand them out to people.
You're an event.
Nothing says a baller like you're handing out data.
So go get the GSM SIM.
Go get the CDMA SIM.
Whatever you're about.
I don't care.
But the $1 deal on the Ting SIMs is about once a year.
And hallelujah, it is here. So go grab it. But first,
start by going to linux.ting.com. Linux.ting.com. So a big thank you to DigitalOcean, a big thank
you to Linux Academy, and a big thank you to Ting, linux.ting.com, for sponsoring this here episode
of your unplugged program, linux.ting.com.
All right, back to the news we go.
Brent, have you noticed the uptick in positive coverage around Firefox?
I have in the show notes a link to a New York Times article.
Headline, Firefox is back.
It's time to give it a try.
In the freaking New York Times, Brent.
How about that?
You sent me this article, and I didn't think it was real.
But I love it.
I love it.
It's great.
That's a funny way to put it.
Yeah, when I first saw it too, I was like, that's not the New York Times.
That's something else.
Yeah, and they make the case.
They talk about how the web has become a dumpster fire.
Of course, they have a bit of an interest there as an advertising-based company,
but they say Mozilla recently hit the reset button
on Firefox about two years ago.
Again, this is the freaking gray lady here.
About two years ago, six Mozilla employees
were huddled around a bonfire one night
in Santa Cruz, California,
when they began discussing the state of web browsers.
Eventually, they concluded there
was a crisis of confidence in the web. They write, if they don't trust the web, they won't use the
web. That was Mozilla's chief product officer. They just felt to us like it, like actually,
it might be the direction we're going, that people might stop using the web. So we started to think
about tools and architectures that brought a different approach. Now Firefox is back there, right?
Mozilla released a new version late last year, codenamed Quantum.
It's sleekly designed and fast.
Mozilla said the revamped Firefox consumes less memory than the competition,
meaning you can fire up lots of tabs and browsing will still feel buttery smooth.
That's about as positive as it could possibly get when you got the old gray lady writing about a web browser.
Don't you think, Brent?
Like, that doesn't get any better.
It feels to me like it's not actually the New York Times that wrote that.
And they just write it.
It feels like, oh, yeah, this is easy content, feels really good,
and we're getting an audience that we don't typically get,
so maybe it's a win for everyone.
The Verge has got a podcast. And in their podcast, they had an interview
with the Pocket CEO. Pocket, as everybody knows, is baked into Firefox now. So Nate Weiner went on
the new Verge podcast, the Converge podcast, and he said that they're going to try something radical,
something new, something unheard of in the industry.
They're going to process your local data to figure out what you like.
What? By analyzing the articles and videos people save into Pocket, Weiner believes the
company can show people the best of the web in a personalized way without building an all-knowing
Facebook-style profile of the user.
They call it a personalization system within Firefox,
and I grabbed a clip from that interview in the Verge podcast.
Yeah, we're testing this really cool personalization system within Firefox
where it uses your browser history to target, like personalize,
but none of that data actually comes back
to Pocket or Mozilla.
It all happens on the client inside the browser itself.
Because there is this notion today, I feel like that, and I feel like you saw it in the
Zuckerberg hearings.
It was like, oh, users, they will give us their data in return for a better experience.
And there's like, that's the premise, right?
But I feel like, yes, you could do that.
But we don't feel like that is the required premise, that there are ways to build these things where you don't have to trade your, like, life profile in order to actually
get a good experience. Oh, if that isn't music to my ears. So they can use what people are saving
to pocket and your local browsing history and do the computation on device, which he goes on to say later in the interview is how they're doing it.
They're looking at your web browsing history on your device, all local.
None of it's being sent to Mozilla and comparing that with what people are sharing in Pocket or saving, I should say, in Pocket.
And without ever having to build a huge online profile of you,
they're able to determine what you might be interested in.
I love this.
I know Pocket is controversial, Brent, but I love this.
It sounds almost like they're pushing in the right directions, right?
It's a nice mix of features that they're offering the user,
but also some privacy consciousness that is refreshing, really.
Yeah, yeah. It shouldn't be. You know what I mean? Like, at this point...
It should be the standard, right?
Yeah, it should be, but it does feel refreshing. That's a good way to put it.
Speaking of refreshing, Debian has been refreshed. There is an update out there for those of you
still using Debian 8, Jessie. Now, this is the last release.
It's the end of the line for Jessie.
So this is your public service announcement right here on the show.
I know.
I know.
It's working fine for you, and you love it.
But look, your buddy Chris is just the messenger.
It's time to move on.
Why not upgrade to Debian 9, perhaps?
Now, the 8.11 release has a number of bug fixes and security issues
and it's shipping a new NVIDIA graphics driver.
So you get some life out of it, but it's the end of the line.
It's really time for those of you still running Debian Jessie
to upgrade to version 9.
Because really, in terms of Debian, 10 is just around the corner.
Buster is kind of tentatively scheduled for release about a year from now.
So it's time to upgrade on the Debian scale of things.
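If you are making that jump, the usual in-place route is short. This is just a sketch of the standard procedure from Debian's release notes, assuming the default /etc/apt/sources.list layout; back up first, read the official Debian 9 release notes, and run as root:

```shell
# Point apt at the Debian 9 (Stretch) repositories.
sed -i 's/jessie/stretch/g' /etc/apt/sources.list

# Refresh package lists, then do the two-stage upgrade
# Debian recommends for a release jump.
apt-get update
apt-get upgrade
apt-get dist-upgrade
```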
Now, I don't think there's much wrong with MP3 these days.
Now that the patents have expired, I'm more of a fan of MP3 than I ever have been.
You combine that with the crazy industry support, from services to hardware, that MP3 seems to enjoy, and it's never going to be replaced, even when something better comes along, perhaps like Opus.
However, there is still a gap in the market for, like, audiobooks, or really long podcasts, two-, three-, four-hour podcasts or longer.
There's not a good codec for that. Opus isn't great for it either.
And our friends over at Auphonic, which I'm going to tell you more about here in a moment, you can find it at a-u-p-h-o-n-i-c.com. Auphonic, it's Austrian, have an idea called Codec 2.
Codec 2 is an open source codec designed for speech, and it aims for compression rates between 700 bits per second all the way up to 3200 bits per second.
So it's really, really low bandwidth.
The man behind it is David Rowe.
He's an electronics engineer currently living in South Australia. He started the project in September of 2009
with the main aim of improving low-cost radio communications
for people living in remote areas of the world.
With this in mind, he set out to develop a codec
that would significantly reduce file sizes
and bandwidth required when streaming.
Let's check some boxes.
Rowe's perceived applications included VoIP trunking,
voice over low bandwidth, HF and VHF digital radios,
as well as worldwide remote area communications,
including military, police, and emergency services.
But Auphonic believes that it could be used for long podcasts,
presentations like talks at events, and audiobooks,
allowing for low storage and minimizing the effect of bad network connections.
I think this is great, and I want to tell you why I'm giving this so much credence.
It's from Auphonic, and these guys, they really know audio.
They know audio so well that at the beginning of this year, I switched out the entire Jupiter Broadcasting backend encoding system to Auphonic.
You probably noticed this maybe, if you have a really tuned ear.
Our compression has sounded better recently in the last three or four months. You've also probably
noticed chapter markers in our podcast,
like this very here podcast.
Chapter markers and those features,
also the new YouTube videos that YouTubers don't really like,
but the ones that are automatically generated with our album art
that get published to YouTube automatically
from the main edited podcast file,
all of that is powered by Auphonic, the service we are using now.
And it's been, that was the final piece that made us 100% Linux too, by the way.
I do plan to document all of this at some point for the studio.
But the last bit, when we converted the entire studio over to Linux,
was we still had a Mac in the role of editing and encoding,
for commercial software that could add chapter markers.
We were able to convert the editing, and now we have been able to convert the encoding using Auphonic.
And I can tell you, as somebody who's been encoding
podcasts for 13 years, these guys have a great algorithm. They have a great sound. They have a great system.
It's a super good service
that does way more than just encode and upload video.
It does way more than that.
And if you're a podcaster
and you're struggling with chapter markers
and you're struggling with audio encoding
and you want to publish to YouTube and SFTP
and other services like Libsyn,
check out Auphonic. A-U-P-H-O-N-I-C.com.
Not open source, but the backend runs on Linux
and they make it possible for us Linux podcasters
to do a complete 100% Linux solution.
And now they're throwing their weight behind Codec 2,
the podcast on a floppy disk codec.
I'm all about it.
I love Opus.
I love MP3.
I even think AAC is great.
But there is room for that audiobook-length podcast or audiobook that is just super long, four-plus hours.
That's where our current codecs just kind of fall down.
What do you think, Brent?
Am I crazy or is this something you've run into yourself?
Well, some thoughts I had around that was building giant libraries for ourselves. You know, if we're collecting content, especially
audio books are getting everywhere these days. And sometimes that's the limit on some of these
devices that we own is space, right? And so if you're able to hold, let's say a whole year's
library worth of books, audio books on the devices that you take on a plane or something like that,
then all of a sudden you don't need to buy devices as often, perhaps, if that's your main limitation.
I know a lot of just standard everyday users. Fair enough. Fair enough. Yeah.
Yeah. You just want to have your own big library of... I probably own 300 Audible books,
and it's not lost on me that they're all wrapped up in the Audible DRM.
If there was a solution that wasn't Windows-based to rip that DRM out, I'd probably rip it out and just store them somewhere. And if I had a codec that was better for keeping them long-term,
I absolutely would use it. So that's a great point. We're almost done. Just one last story
for the day. I wanted to talk about GitLab because after the Microsoft purchase of GitHub,
it's been a huge, huge topic of conversation.
And without missing a beat this week,
GitLab announced that they're moving off of Azure
and they're switching to Google Cloud,
which I think makes a lot of sense.
But they've also released
their auto DevOps management system.
Now that sounds fancy, right?
Auto DevOps?
I didn't know what it meant.
So I had an opportunity in last week's Coder Radio, episode 313,
to ask the GitLab CEO what is Auto DevOps in our interview with him,
and here's what he had to say.
Now, a little birdie tells me, speaking of some of those features,
that there's something called Auto DevOps that's coming along.
And I was wondering if you could share a little details about Auto DevOps.
Yeah, so we're really, really bullish on Auto DevOps.
A week from now, June 22nd, we'll release GitLab 11.0, and Auto DevOps will be generally available.
And what you do is you just push your code
and GitLab does the rest.
It will build your code, run your tests,
check the quality,
do static application security testing,
dependency scanning, license management,
container scanning.
It will boot up a review app,
kind of like a staging environment,
per merge request,
dynamic application security testing, and deploy it.
It will do browser performance testing, and it will do monitoring of all the vital metrics.
So all of that just by pushing your code, nothing to configure.
We think that is the future, and we're really excited about having it out to the world.
Doesn't that sound fascinating?
They're just hustling.
It was a great interview
we had with the GitLab CEO. Go check out coder.show slash 313 if you'd like to get the entire thing.
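For the show notes crowd: on the GitLab side, that pipeline can be pulled into a project with a one-line CI file. This is a sketch using the template name GitLab ships; projects can also enable Auto DevOps from the project settings UI with no CI file at all:

```yaml
# .gitlab-ci.yml — minimal sketch; assumes GitLab's stock
# Auto DevOps template, Auto-DevOps.gitlab-ci.yml, is available
# on your GitLab instance.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```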
Sid is really sharp. He talks about how 100% of the GitLab staff are remote. And of course,
because it would be negligent if I didn't, I asked him what his thoughts were on the whole
Microsoft purchasing GitHub thing and how that is going to affect GitLab long term. So again,
go check that out. Coder.show slash 313 if you want to hear that full interview. Brent, man,
thank you for making it today. I don't know what your availability looks like, but if you are around
next Tuesday, I would more than be happy to have you join us again. I don't see any reason not to hang out
with everybody on Tuesdays. Atta boy. You know what?
Everybody, everybody, everybody
should have that philosophy. In fact, you could.
If you go over to our IRC
room, you can get the secret details,
but if you just Google
Jupiter Colony Mumble,
you'll probably find it.
But if you're in our IRC room, irc.geekshed.net, pound, hashtag Jupiter Broadcasting,
you can chat during the show
and you can also do Bang Mumble
and get the mumble info.
And then you can join our virtual lug
just like many folks did,
like Ian and Dusty and many others.
You can find out all of that
at the Jupiter Colony site
or in our IRC chat room.
We do this show live on a Tuesday.
You're more than welcome to join us.
Get it converted to your local time at jupiterbroadcasting.com slash calendar.
Thanks so much for tuning in this week's episode.
Links to everything we talked about at linuxunplugged.com as well as our subscribe links.
Thanks for being here.
We'll see you right back here next Tuesday. DUDE! Yum, love show.
Oh!
There we go, Brent.
We're officially in over...
Not overtime.
In post-show.
Overtime's for something else.
Thank you, sir, for making it.
Appreciate it now.
We need everybody to go to jbtitles.com
and vote their hearts out because there's
a lot of good titles.
36 titles over there.
There's some good ones. Fedora to the core
at a drop of a hat
contained in a pocket.
Protect your pocket.
Quantum is a sign of the times.
Oh,
get it. Do. Get it?
Do you get it?
That's good, right?
That's so good.
I got a make-good I want to do on the show, you know?
Because there's so many web apps we use these days that I never, ever, ever have talked about Nativefier.
And it is a wrong that I must right, right now.
It's NPM, so you got to have all the NPM stuff installed,
which is pretty straightforward these days
on any given Linux distro.
But it's called Nativefier,
and it does exactly what you're probably guessing,
is it takes a web application
and makes it a standalone Node.js-style Electron app.
Catch your breath.
So take, for example, you want to take Google Docs or in my case,
IRC Cloud or something else that you always have to use in the damn browser. And you'd like to just
have it as a standalone application on your desktop that gets its own taskbar entry. That's
where Nativefier comes in. And it's just npm install -g nativefier. I'll have a link in the show notes.
It's solid. I've been using it now for
months to run IRC Cloud, to run SourceConnect, to run a whole bunch of other stuff. I just want to
give it a plug right now. Nativefier, if you're stuck using web applications that you really
would rather have their own damn desktop entry, check out Nativefier. I think it's pretty good stuff.
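If you want to try it, the whole flow is two commands. This is a sketch assuming a working npm setup and the IRC Cloud example from above; the app name is just a label you choose:

```shell
# Install the Nativefier CLI globally via npm.
npm install -g nativefier

# Wrap a web app (here, IRC Cloud) into a standalone Electron app;
# Nativefier drops the packaged app into the current directory.
nativefier --name "IRC Cloud" "https://www.irccloud.com/"
```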