LINUX Unplugged - Episode 257: Security Amateur Hour
Episode Date: July 11, 2018
We reflect on recent FOSS security screw-ups and ponder a solution powered by community. Plus we get you caught up on community news, Firefox changes, and poke at the new Minimal Ubuntu. ...
Transcript
Well, first and foremost, it is great to have you back, Wes.
Thank you.
It is awesome to be back.
I missed the show while I was away.
We missed you.
And now you're like a well-traveled man.
I can be like, oh, yeah, my buddy Wes, he goes to Bali for vacations.
Hey, you know it.
Yeah, you can increase your coolness factor.
Just latch right on.
You're always welcome.
Now, in your wild world travels, Wes, have you ever seen anything as wild as a Commodore 64 running Slack?
Oh, my.
No, I have not.
Yep.
Brent found this and sent it to me during the week, and he's like, you've got to see this.
This is Slack running on a Commodore 64.
And you've got to think, like, how is that even possible?
Well, a Raspberry Pi is involved with a little bit of Linux, of course.
You see, the C64 has an extension port called the, quote, user port, which via an adapter can communicate over
RS-232 serial. So this guy connected that user port to an RS-232 serial USB adapter connected to,
well, a Raspberry Pi. But the part that I like is the cable description.
This is the cable that he used, and I think it's great.
It was an artisanal, locally sourced, homemade cable with a user port connector on one end
and a USB TTL RS-232 converter on the other.
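For anyone wondering what driving a link like that from the Raspberry Pi side might look like, here's a minimal sketch in Python using the pyserial library. To be clear, this is not the author's actual bridge code; the device path and the simple line-based framing are assumptions for illustration.

```python
# Hypothetical sketch: talk to a C64 user port over a USB RS-232 adapter from the Pi.
# Assumes pyserial is installed and the adapter shows up as /dev/ttyUSB0.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=1200)  # 1,200 baud is roughly 150 bytes/s (1200 bits / 8)

port.write(b"Hello from the Pi\r\n")   # send a line of text down to the C64
reply = port.readline()                # block until the C64 sends a line back
print(reply.decode(errors="replace"))

port.close()
```

At 150 bytes per second you are not streaming images, but plain text chat over a bridge like this is entirely plausible, which is roughly what the Slack-on-C64 hack is doing.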
He says the fastest he got this thing to communicate, the fastest you could use Slack, was 1,200
baud or 150 bytes
per second. That should be enough for anyone. This is Linux Unplugged, your weekly Linux talk show that, unbelievably, is already live again from Texas.
My name is Chris.
My name is Wes.
Hello, Wes! Oh my gosh, it's good to have you back, you world-traveling suave man, you. Good to have you here.
It's amazing, and I get back, and suddenly you're not here? What is going on?
I know. We're like just two podcasters passing in the night. As my plane is taking off,
your plane is landing. It's been like that since my first Texas trip. So it's just kind of the way
it worked out. But I'm glad after all that we're back together again.
Someday we'll see each other again in the real world,
but we have the magic of technology, so we're here anyway.
One day.
You know, and that event may be barbecue that brings us together.
But coming up on this week's episode of the Unplugged program,
a great bunch of community news you can use,
including a really popular open source app that's asking for your attention.
We have a couple of Firefox fast takes that you probably need to know about.
And then we'll get into the Ubuntu corner.
A couple of new announcements coming out of the Ubuntu project,
as well as some things around them.
And then holy crap, holy crap.
Wine is so old, but like in an awesome way.
Now it's also going to get cheaper car insurance.
We'll tell you about some new milestones for wine
and some big changes
that are coming to the project, new architectures, etc. And then, later on in the show, we now have
the results from the recent Gentoo hack. We know what happened. We know what went wrong. They've
posted an entire post-mortem, and it's in-depth, it's detailed, and I think it's educational.
So we'll cover that, we'll describe what happened, and then maybe talk about the bigger picture about Linux distributions maintaining infrastructure.
Now, I will admit I may have some sysadmin bias in this particular segment, but I'm really looking forward to hearing everyone's thoughts.
Like our mumble room, let's bring them in.
Time-appropriate greetings, Virtual Lug.
What's up? Hello.
Hello! And you may
have noticed in that,
I don't know what you call that, those series
of voices there, there is one Mr. Brent
in there. Brent is joining us via the mumble room
this week. Brent, so glad to have you here as well.
Hey, it's great to be with both of you this week.
Absolutely.
We all of us need to get together in the studio.
How fun would that be?
The three of us in studio doing a show, eating some barbecue.
I'm salivating already.
Yeah, this needs to happen, doesn't it?
Yeah.
Didn't that happen already?
Wasn't that the first experience on the show for me?
That's why we know it needs to happen again, Brent.
We know it works.
Every single week it should happen.
I think that'd be great.
Yeah, we'll just get this huge budget and we'll just fly you out every single week for the show, like some sort of high-end radio show or something.
You know, I could stay for more than a week.
There you go. No, now you're thinking.
Yeah, we could just set you up a little room in the studio. I've got the space, you know, decent connection, decent Wi-Fi, all the things you need.
I did sleep in the studio last time, if you remember.
That's right.
That's right.
Yeah, that's how our barbecues go.
And if you're lucky, a little Levi comes and snuggles with you.
Man, I tell you what, that is the hardest part about traveling is that dog.
I miss that dog already.
All right.
Well, let's get into the community news that you can use.
Let's start with Kdenlive's significant refactoring that they're doing and asking for testing on.
I feel like this is a really big deal because this, at least in my view, is open source's most potent potential Final Cut killer. Let's put it that way. It's really coming along and they've added
a couple of new features that I think are really handy, including proper slow-motion video support, massive improvements to their timeline, and the ability to automatically separate clips that have both video and audio tracks, which is very handy when you're trying to remove what we call scratch audio on the camera. So you record scratch audio on the camera, and then you record the actual source audio using a professional microphone going into a professional recorder, and it's really handy to be able to just drop the audio from the video, separate it out, and then get rid of your scratch audio. So that's coming in there. Now there's support for generating lower-resolution video in the timeline preview, which means it makes it easier to work with something like 4K video on a machine that maybe can't render 4K in real time. And they're
changing the layout of the keyboard. Yeah. And it's a lot of stuff and just a bunch of other
enhancements. So it's available now as an app image, if you want to help them test it. And
they're trying to get the word out there. They've been contacting a few media outlets and saying,
Hey, can you spread the word that we need people to test this? We really need people to test this. Time to bring back the video version
of LAS then, isn't it? Just kidding. You know, what's funny is when I saw this story, I'm like,
oh, that's so nice. I'm glad that this is really improving. And there was a version
of me that would have been almost hyperventilating with excitement. But, you know, since since we've
spun down some of the video stuff,
I've really taken a break from video editing. It's been easier on my body.
My RSI issues have started to clear up a little bit. And it's just-
You've been less depressed, happier. And maybe if you're lucky, by the time you come back around
and are doing video again, Linux will be the best of the lot.
That would be amazing, because it sure is for audio production. I tell you what, it really is.
And that's the thing about where I'm at with audio production is I enjoy it. I like it. I feel good about it on Linux. Like I want to do it on Linux. Like I don't have to force myself to do the audio
production stuff on Linux. I want to do it on Linux because it's, in my opinion, the best.
It's just actually a good tool set. Yes. And so that's where I would love to see video get to.
And part of what's really helped me switch my workflow over is also at the same time switching my workflow on the desktop over to the Plasma desktop.
And so I've been following the desktop's developments pretty closely.
And there's some new features that are coming to future versions of Dolphin that I think are really slick, especially if you're a Telegram user.
And it's built around this new framework called purpose.
And it's got its own dedicated blog post.
I'll try to put a link in the show notes if you guys want to know more about it.
But the idea is it's an extensible framework to fulfill the developer's purpose while providing
abstraction.
Yeah, I know.
Right now, they really don't have much going on other than this
new share menu that's coming to Dolphin. And if you've been following the weekly usability posts,
there has been some talk about this new share menu. When you right click on a file,
it does a lot like you would expect to do, say, on a mobile device. You can share it to
KDE Connect. You could share it to a Bluetooth device.
And they're collapsing some of the stuff
that's in the menu into this new share menu.
But you could also do things like Telegram
because the system will detect
what applications you have installed
or like NextCloud, another example,
or maybe Imgur or Twitter.
Like there's a lot of extensions
you could put in there.
And then you can right-click on a file.
There'll be a built-in share menu,
built-in Plasma.
Anything that uses that framework can get access to it.
So soon, on your Linux desktop,
if you're using the Plasma desktop,
you could right-click on the document,
select Telegram, then choose the contact,
just like you do on your mobile device.
And right now, it's in development,
and they just put up a blog post
answering some of the questions about, like,
what about adding more clutter?
Or what about, you know, accidentally sharing a file? And all of that kind of stuff. He's answered all of those questions. And it's worth calling out, if nothing other than to talk about the rapid
improvement that we're seeing on the Plasma desktop, even though there's recently been some
complaints about communication between developers and the visual design group.
Actual Plasma itself is progressing extremely fast. And we've really now entered a flip-flop situation where generally GNOME is considered the slower, heavier, less stable, I'll put it nicely, desktop.
And Plasma has become the fast, lean, efficient desktop
that has all these new things coming to it.
Like, I feel like they've flip-flopped in a bit,
and you really see that the Plasma desktop
is moving forward fast with some of these new features.
The share menu is great,
and I recommend reading the link in the show notes
if you're curious about the implementation.
It's one of these small things
that really, I think, draws a contrast between Plasma and Gnome Shell. Files
over the years has gotten simpler. For better or for worse, it's gotten simpler and simpler.
Whereas Dolphin has managed to get simpler by default, but more powerful when you need it. And that is the balance that I find
myself to prefer. They do seem to sometimes err on the side of features, but they get back to that
usability balance over time. I love this kind of stuff and this kind of communication. That's part
of it, Wes. You know, like this clear communication about this is why the project is doing this. This
is how it's going to work.
It's coming down the road.
Prepare yourself.
I love that kind of stuff.
I think that's huge.
And it really seems like Plasma Team has done a lot of slower efforts over the years that we're now seeing with solid foundations laid, with some clear purpose, if I can say that.
They can iterate fast.
They can change things.
And as long as they communicate clearly, we're not left in the dark. So we don't have to speculate, oh, which,
you know, which feature is going to disappear next? What horrible thing are they going to add?
They've thought about it. It's clear that they, you know, they're thinking about a wide range of
users. So you and I might love all these features, but as you say, there's tons of ways to disable
it if you're never going to use it. And it's not, it doesn't add a whole bunch of clutter for
everyone. Yeah, it's interesting that they're including the option to disable it.
Like, oh, you don't ever just want this at all to show up?
All right, you can just turn it off instead of just, no, it's baked in now.
You can just use it.
I do think I have gotten really used to doing that on Android, you know, just be able to
send it right to an app and have that all work after I've installed it.
So I will be using this in Telegram probably from day one.
Yeah, Telegram, Slack, Imgur too. I mean, how handy would it be to quick screenshot,
right click, upload to Imgur, although you can actually do it within Plasma's screenshot tool.
But the whole point, right, is that maybe you've started a new image hosting platform. You're not
as popular as Imgur and this can work for you too.
Yeah, that is very nice. So let's talk about the AUR for a moment.
We've recently had stories about a crypto miner that made it into the Snap Store.
We also talked about some crypto miners that made it onto Docker Hub for quite a while.
It was really pretty embarrassing.
And now, almost as if we could just check them off a list,
we now have a story about the Arch Linux AUR repository containing
malware. Don't panic, it's
already been resolved, and it's already been
cleaned up, but it is interesting.
It seems to be a trend
we see. Now, the thing about
the Arch user repository is from the very
beginning, they warn you
that it is user-submitted software,
that you should double-check all of it.
And when you follow the Arch guide, like the first time you're getting started with Arch,
they have you build things from the AUR by hand.
So that way you learn how the package build files work.
You can see what URLs it's calling, things like that.
So in Arch's defense, they do warn the user as much as possible.
But I'm surprised this almost hasn't happened before.
And there's been a full investigation now that shows that a malicious user with the nickname xeactor modified, on June 7th, an orphan package that didn't have an active maintainer, something called AcroRead.
Hmm, Acrobat.
I don't know.
Never heard of it.
The changes included a curl script that downloads and runs a script from a remote site.
It then installs persistent software and configures systemd to start it up from time to time.
I love it when systemd is involved.
Now, there were two packages total that were modified.
They were both maintained by the same person and modified by the same person.
And the investigation reveals that the executed scripts were just
doing data harvesting about your machine.
Your machine ID, the output of uname -a, the CPU information, the pacman package information, and the output of systemctl list-units.
The harvested information was then transferred to a pastebin document, and the AUR team discovered
that the scripts contained a private API key,
which shows this was probably done by an inexperienced hacker
because you can track that down to somebody.
But the purpose of gathering that system information remains unknown.
Just doing like a system scan, you know?
Isn't that funny?
And using systemd too to make sure it starts.
I really do kind of wonder what, well, I mean, systemd is just so reliable, Chris.
That's a natural choice.
It does definitely seem like sort of inexperienced in that, you know, if you wanted to do damage,
you could do a lot more damage.
You could also probably get more sensitive information if that's also what you wanted.
So it is kind of peculiar.
And you're also right that we all use helpers,
or at least most of us, but to the AUR's credit, package builds are actually pretty simple. And so
if you do have the time, especially if you don't use that much AUR software, it's pretty reasonable
to go look at least before installing something. Just go scan the package build. It's pretty easy
to see, does this point to the real Git sources? Does this point to an HTTPS website hosted by the
organization that maintains this project? That will help a lot.
Yeah, especially for the system you're putting in production.
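To make that "scan the PKGBUILD" habit a little more concrete, here's a hedged sketch of the kind of quick check you could do before building something from the AUR. It just pulls the URLs out of a PKGBUILD and flags anything not fetched over HTTPS; the file path and the heuristic are illustrative, and a real review still means reading the whole file, including any install scripts.

```python
# Illustrative only: list the URLs referenced by a PKGBUILD and flag plain-HTTP ones.
import re
import sys

def check_pkgbuild(path="PKGBUILD"):
    with open(path) as f:
        text = f.read()
    # Grab anything that looks like a URL (sources, upstream project pages, etc.).
    urls = re.findall(r'''https?://[^\s'")]+''', text)
    for url in urls:
        note = "" if url.startswith("https://") else "   <-- plain HTTP, look closer"
        print(url + note)

if __name__ == "__main__":
    check_pkgbuild(sys.argv[1] if len(sys.argv) > 1 else "PKGBUILD")
```

It's no substitute for actually reading the build() and package() functions, but it surfaces the "where is this really downloading from?" question in a couple of seconds.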
I think there's a definite room for improvement around GPG key handling, though.
So I can't remember what the package was,
but sometimes you see these pinned comments at the top of the AUR that say,
oh, just copy-paste this key into your key ring and it will be fine.
Yeah.
That's a great point, Alex.
There is some room for improvement there, isn't there?
It was caught pretty quickly, it seems, and taken care of and a full investigation was done.
So on the other end of this, you know, it's pretty well handled.
What are your thoughts on it, Brent?
Well, I have to say I'm a relatively new AUR user.
So it's amazing.
I call myself relatively new, but there's a surprising number of packages I'm pulling from there.
But I was taught very early on in the process from some forums and things like that
that actually you should really look at the package builds.
And they're super simple. I've been looking at them. And I did catch actually something
last week that was a little bit curious to me. And so it just allows you to do a little bit of
extra homework. And in this case, it was my lack of maybe technical knowledge. And it allowed me to
learn a little bit more about how it all worked.
But I really think that's essential for everyone to be extra cautious, let's put it.
Yeah, it's really, the rule of thumb is if it's user-submitted software, well, really, hell. I
mean, even the stuff that's quote-unquote curated by Fortune 500 companies or whatever they are,
you know, it still has crap in it. So it's tricky, but definitely more caution is needed when it's a user curated or user submitted, quote unquote, app store.
Yeah, it's a good reminder.
Yeah.
All right.
Let's do a couple of Firefox fast takes.
Just a couple of little things to know about. Firefox for Android is entering a maintenance phase as the team is preparing to completely replace it.
So if you're a user of Firefox on Android, which I have been for a little bit, it's entered a maintenance phase, meaning that they're not going to give any updates for the foreseeable future, except that major bug fixes and security updates will still come down.
That's according to Emily Kager, the mobile engineer at Mozilla.
It sounds like Mozilla is working on an entirely new browser based on open source components in Android.
And Android components, quote, is the actual name of it, is a collection of Android libraries that can be used to build browsers or browser-like applications.
Something tells me they're probably not going to be using Gecko, though.
They'll probably be using the Chromium backend stuff and WebKit.
What do you think, Wes?
Is that a bit of a loss to have another?
I mean, this has kind of been the way it's been going for Firefox and mobile for a long time, though.
So I suppose not.
I suppose it's not really much of a loss.
I mean, in a historical sense, it might be.
I get that.
You know, I mean, Gecko's been rendering my things for so long now on so many, on a number of platforms. But it does seem
like this is also maybe a sign of them taking Android as more of a primary platform. I think
so many people use Chrome or other browsers there. I too have been using Firefox on Android,
and honestly, I really like it. It's one of my favorite mobile browsers. And if they can make
something even better, even more tightly integrated and focused on that platform,
I'm excited to see it.
Yeah, you know, I started using it to sync bookmarks and passwords,
but for iOS users, there seems to be another solution.
And I think they just launched this.
You found this, Wes.
It's the Firefox Lockbox.
And this is the password management component of Firefox broken out as a standalone mobile application. The idea,
I would assume, the use case idea is you have all of your websites and passwords saved on Firefox
on your desktop. And then you go to your mobile device and you don't have access to any of them.
So Lockbox gives you access to those passwords. It uses their backend sync. It's 256-bit encrypted,
it says. And you sync down your usernames and passwords
using this Lockbox app.
And I was talking with Wes about this,
and I'm like, Wes, what do you think the use case is for this?
And you pointed out, it's for people who use it as their password manager on the desktop.
And I remembered that Noah used to do this.
I don't know if you recall,
but he used to use Firefox as his primary password manager for a long time.
Yeah, he was looking at us kind of funny.
He was like, what are you guys doing?
My browser has all my passwords.
It's already right there.
I think he sort of eventually had to replace it with a homegrown solution, in part because of mobile.
So Lockbox is only for iOS today, and they just launched it today.
They may be rolling out an Android version
soon, but it's kind of interesting
to see where all this is going. And then one last
Firefox fact for you, a little fast take
here. There is a big update
for you Ubuntu users running
Firefox. Firefox 61
is now available today as
we're recording in the repo.
Now, that's a nice update, and it's
even available for you LTS users, I believe.
So go out there and get your upgraded Firefox,
because you'll get Firefox 61,
which includes several new features.
And it's nice to see that the LTS is getting the latest versions.
It is actually really nice, because the latest versions are great.
Now, if you do want to stay a little more up to date, I will say just downloading the
tarball of Firefox and sticking it somewhere in your home and adding that to your path
also works very nicely.
Yeah, that is a good way to do it, isn't it?
Yeah, that's what I do on most of my platforms and no failures yet.
I kind of stopped getting all worried about the version after Quantum shipped.
You know, then I kind of was like, OK, well, I got the major version once I got to 60, but it'd be nice to get to 61. It's good to see.
All right, well, that's the Firefox, uh, fast takes, is what I was calling them. Like that? Fast takes. Firefox, it's so fast now, right? Quantum fast.
Yeah, exactly. You got it.
Uh, let's do the Ubuntu corner. Speaking of Ubuntu, they launched this week Minimal Ubuntu, which is not the same thing as the minimal install or the server install.
It's Minimal Ubuntu.
I don't understand why you're confused.
I don't know what the problem is.
But the idea is pretty slick.
It's a small footprint of Ubuntu that's meant for fast virtual machine provisioning.
So imagine an AWS infrastructure that has to spin up a thousand instances at once.
You think I'm exaggerating.
I'm not.
They really, really care about the spin-up time.
I heard this from an AWS engineer directly.
They are constantly pushing back on Canonical and Red Hat to shave down the boot time
because you multiply that by a thousand in some cases.
And so if you can shave down the boot time
and shave down the image size, it really matters.
It might not matter for us on our laptops,
but it really matters for really large cloud deployments.
And that's exactly what these are for.
This is really Ubuntu designed to be deployed by machines,
not humans.
So think of it in that context.
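Just to put rough numbers on why those seconds matter at scale, here's a tiny back-of-envelope calculation. The per-instance boot time here is an assumption, not a measured figure; the 40% improvement is Canonical's headline claim quoted later in this segment, and the thousand-instance fleet is the example from above.

```python
# Back-of-envelope: assumed numbers, just to show how small savings compound at scale.
standard_boot_s = 10.0                   # assumed boot time for a stock server image
minimal_boot_s = standard_boot_s * 0.60  # "boots up to 40% faster" headline figure
instances = 1000                         # the fleet size mentioned above

saved_s = (standard_boot_s - minimal_boot_s) * instances
print(f"~{saved_s:.0f} seconds (~{saved_s / 60:.0f} minutes) of cold-start time saved per wave")
```

Four seconds per VM sounds trivial until you multiply it out across a fleet; that is the whole pitch here.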
Yeah, exactly.
I'm really excited for this just because for many cases, Ubuntu server is great, but it's still kind of stuck in the era of bare metal servers.
I remember the argument when SystemD came out and everyone was like, well, why do we need servers to start fast?
You start it once and you reboot it, maybe never. But in the days
of clouds and functions as a service and all the different sorts of containerized options we have
today, you're starting stopping immutable deployments. Everything moves very quickly.
And if you can get this down in an image everywhere, it really scales. Plus, I don't
know about you, but when I'm making images for other people to use, I want to customize that.
And I want to start with as lean a base as possible.
I don't want to have to go in and remove or disable services that I don't need or no one knows what they do.
This just starts as a super lean core, add on everything you want, do some customization, make a new AMI, and you're good to go.
Yeah, especially in the context of quote-unquote serverless computing.
Serverless computing, it just means they're taking care of the server.
They're standing up an entire Linux instance and then tearing it down once the processing's done.
They need that to be fast.
But Brent, you're pointing out there's other aspects
besides the speed of the boot when you minimize like this.
There's other advantages.
Yeah, I had, I guess, kind of a little bit of a question.
There's a section in there that says minimal security cross-section comes with this minimal Ubuntu.
And I just wondered if anybody from the community or maybe Wes has any thoughts on that.
Yeah, well, you know, less attack surface means it's, by default, less likely to get exploited.
Now, it's still something you attach to the network, right?
It's still running a full Linux stack.
I mean, it is Ubuntu.
You can install any package you want on it.
Yeah, I think that's it right there.
It's not some magic layer of security.
It's just that when you install the default Ubuntu server,
there are a lot of things which are super handy
and make it really easy for users to get started with
that are just enabled by default, right?
So, you know, the Snap service is running,
LXD might be running,
but if you don't use those things, you don't need it. And unfortunately, not everyone,
even some sysadmins know enough to go, you know, how to go prune that, how to disable all the
services, how to check for those things. So if you can just start from a really clean,
minimal base, it's better for everyone. There's some real numbers here too. They
say that the images are less than 50% of the size of a standard Ubuntu
server image, and they boot up to 40% faster than standard Ubuntu. But that doesn't mean they're
necessarily the fastest, does it, 10-bit? I was checking Phoronix, and Michael did some tests, and Clear Linux was the big winner. Minimal Ubuntu is definitely better, but Clear Linux was still four times faster.
That is a great point. You know, there's still other, many people are still going to run Docker
containers with Alpine, or if you have particular needs, right, there are a lot. I think Ubuntu
makes a special impact here is that you still get all of the comfort of using Ubuntu like you do on
any other platform. All the things are there. You can still install devs that you need to.
It'll just work.
Whatever happened to unikernels? Do you remember them?
Yeah, that's a good question. I haven't heard as, I mean, they're interesting. I haven't
heard very much. There's been less buzz these past two years. I don't know if that's just
because serverless and Lambda, et cetera, have taken over that space, or maybe it's
just incubated in the background.
Now, Badger, weren't you thinking that maybe Atomic from Fedora makes this a bit less relevant?
Yeah, I think.
So startup time is one thing.
And I guess I hadn't really thought about the scale that Amazon run at.
I go around to a lot of different customer sites and I see people with four or five hundred EC2 instances in just their test environment alone.
It's pretty crazy, and you multiply that across all the different companies across the world, the scale that Amazon are running at is mind-boggling. Um, but then for me, I think if you look at, you know, an atomic OS has a read-only file system, and so there's a lot less tinkering that needs to actually occur. So you can
almost basically boot from a known state. And it's kind of like how Apple deal with hardware,
right? So they know exactly what's under the hood, so they don't have to do a whole bunch of checks
like a PC BIOS would or something. So it's that similar sort of principle.
Yeah. Yeah. Just if you don't have to worry about the drive changing at all, you can launch as fast as you can initialize the hardware support. That's right.
Yeah. That's a, yeah, that is a good point. I feel like the broader context for this too
is containers because this isn't just for spinning up VMs. It's also for spinning up,
you know, an Ubuntu inside a container. So you can start with something minimal
and build your container environment around it, I think.
There's other solutions for that,
but I feel like this would be a pretty good one.
And because it's just an apt-get away,
you can get anything else loaded in there that you need,
which is really convenient.
Yeah, I think you hit it on the head right there, right?
Like that you don't, it may not be the best
if you have particular needs, then maybe look elsewhere.
But if you were just already using Ubuntu, it got a little bit better today.
Now they just got to work on the branding because there's too many minimals at this point.
And, you know, it's tough when you have an OS that runs on IoT devices.
It runs on cloud devices.
It runs on laptops and desktops.
And now apparently on floating AIs that will go up into space.
Ubuntu is also running on an astronaut AI.
It's called Simon, C-I-M-O-N, and Joey has a great write-up over at OMG Ubuntu all about it.
And it's one of six experiments that was recently launched to the International Space Station as part of the ESA's International Space Station program, the Horizons mission.
It's not science fiction, though.
It is an autonomous AI-powered assistant that can see, hear, and understand and speak with the astronauts.
And it's got a touchscreen on it.
When you minimize the Simon application, there's an Ubuntu desktop.
It's just like right there. And you can open up files and browse the file system on this floating robot. And it's really cool. It's about
the size of a beach ball with one side of it flat that has an LCD screen and a somewhat
freaky face, I'll be honest. And I just, I love the concept of this.
There's so many things they can use it for.
You know, as we go out deep into space,
it's much easier to send robots than it is humans.
But the other thing you have to consider
is this isn't like something
that's going to turn on your smart lights.
This can be very, very purpose-built
to know very, very deep knowledge
about the particular experiments
they're working on. Like, it's not going to answer when Walmart down the road opens up.
It's not going to know that. But if you need to know the molecular structure of this rock,
it's going to know that in and out and be able to display that on the screen.
Right. I mean, if you think about how useful having a smart home with data about your house
or about your car is, you take that to the next level here. Wow. Yeah. You're right about the face though. The face is creepy and I'm just
imagining a HAL-style moment there, and the face, you know, the smile turns into a frown and, oh, pod bay doors. Whoops. Yeah. Yeah. Oh, yeah. It gets angry all of a sudden.
All right. Well, here is one of the scientists who's been working on it,
explaining how this works better than I can.
My name is Till Eisenberg.
I'm from Friedrichshafen, the project manager of the Project Simon.
Simon weighs about 5 kilograms and has a diameter of 32 centimeters.
He has a velocity of about 1 meter per second, which is rather slow,
but it's quite comparable
with a normal movement of a crew member. He's equipped with a display of eight
inches which was chosen because this gives the ability to put on to display a
complete face. Simon stands for crew interactive mobile companion and it's
meant to be a social interactive
free-flying object who shall assist the crew during extensive tasks and to
reduce stress. Simon will assist Alexander Gerst during his next mission.
He will assist him during two different tasks so we will be able to provide him
with the good advices during complex procedures, as well as assist him in social interaction or by social interaction,
and will provide additional data to the science group,
like video data for or during complex tasks to validate the exact process.
They go on, too, to talk about how if they're working somewhere
where they have decent connectivity,
they're going to still use offloaded cloud processing to help with Simon's intelligence.
And when they're somewhere where they don't have connectivity, which I guess there is connectivity in parts of space, but when they're somewhere where they don't have connectivity, it can work offline.
The majority, though, right now of the AI work is taking place on IBM's Watson cloud service.
And the language comprehension is being done by that, although they're working on some
local language processing, but they don't talk about what technology they're using for
that.
And they say the earthly link may seem antiquated by comparison, but it has its upsides.
He can download new software and they can improve things in batch processing.
So he can come up with solutions like offload it to the cloud, it'll chew on it,
and then they'll send it back to him depending on the bandwidth,
and then he can give the results to the astronauts working at the station.
I know this is early days, but I will be very curious to see, you know,
after six months or a year or however long this project runs,
how useful is it? Do they get real benefits?
Because if so, I assume this will be the first of a long line of AI assistants in space.
Yeah, the first AI assistant in space is pretty, pretty neat.
Let's bring it down to earth, though, and talk a moment about the GNOME desktop.
A lot of love these days from us for Plasma.
But things over in the GNOME camp are slowly but surely getting better.
A lad on Twitter tweeted that we could look forward to some serious performance improvements and included a picture of a slide from Guadec, which just wrapped up the GNOME conference.
And there was a lot of talk about improving the performance in GNOME. In that slide too, I noticed they specifically call out those jerks at Canonical for all the good work
they're doing to actually help get really deep into the performance issue and solve it. So it
looks like the kumbaya of Canonical and GNOME continues because they give them a particular
call out of thanks in this slide. And it seems that we may see even more performance improvement
announcements soon coming from the GNOME project because they're touting them at Guadec this year, which I am very happy to see.
So don't become a Plasma user just yet, Wes.
Yeah, I know, right?
I think this is also just another good case of, you know, Ubuntu is used in so many ways.
And I think it's pretty easy.
Like the way I use GNOME, I don't care as much about the performance 90% of the time
because I'm, you know, I have a web browser and I have a terminal and maybe I'm developing
something and the rest doesn't matter too much.
But let's say you're a video editing professional using Ubuntu on your powerful workstation.
Suddenly that matters a lot.
So I think the focus of end user deliverables that Ubuntu can bring is just
going to keep these improvements coming.
Well, I look forward to it. And you know, it's a whole group effort.
Now there are so many major distributions that are shipping GNOME that hopefully we will start to see some results.
That's what we've all been hoping is that when everybody kind of focuses on it, not everybody, but when so many desktops focus on it, maybe we'll get real results.
That might be what we're seeing now.
And it's a whole group effort, but it's really great to see Canonical sticking with it. Now, I got to tell you, we had a story that I decided not to include, but now I kind of wish we did because it was about setting up enterprise-grade Wi-Fi.
Holy smokes.
Holy smokes.
If anybody out there has solutions for like home – what am I looking for here?
Like not like a DD-WRT solution, but what is the latest in Linux solutions for Wi-Fi?
This is a question for the audience.
Like, what is the latest in firmware replacement on a router device
or even better, ones you build with like a PC or something?
I'd love to know.
I've been living in Wi-Fi hell for about three days.
Things are looking better now.
I was so worried today because I'm down here at Linux Academy.
And today was the first of several, I think, five live streams they're doing to announce all this new content that they're launching in July.
And I got here on Saturday, had a nice chill morning, got in here, you know, around 9 a.m.
Sunday morning, started setting up for Linux Action News.
And I pretty quickly discovered that I was dropping packets.
And it was really hard for Joe to even understand a word I was saying.
I mean, harder than normal, right?
Not just because you're mispronouncing things.
Yes, exactly.
I didn't know what it was.
Like, was it Wi-Fi?
Was it something on the network?
You know, on a Sunday, 9 a.m., 10 a.m., I'm the only one here.
So it's not like it's somebody torrenting stuff.
Nobody's here.
Most of the machines aren't even here.
Most of them are laptops.
So I started going through all of these old school troubleshooting processes that I used to go through. Can I ping the router?
Am I dropping packets here?
Am I dropping packets when I go here?
Can I do all these trace paths and all that kind of stuff?
I started running around with a dongle and a spool of Ethernet, like trying different jacks to see if anything was live.
But the folks here at Linux Academy about a year ago refurbed the building a bit.
And at that time, they deployed what's called,
like a common generic term,
enterprise-grade Wi-Fi.
As if there's like some committee
that reviews and measures Wi-Fi
to determine if it's enterprise-grade or not,
like stamps it,
like the USDA-grade beef.
I assume it's one of those things where,
you know, that's what the vendor says
and you pay enough for it that you just, that's the title you get. Yeah.
I think that's exactly what it is. Everybody you paid said it's enterprise grade. So therefore,
it is enterprise grade. I mean, that's what you tell the boss, right? And you're like,
oh yeah, good work, IT. Yeah. And that is true. I tell you, that is so true. And I'm sitting here
going, well, Wi-Fi is still Wi-Fi and I'm doing VoIP and any kind of disruption really messes with the VoIP call.
And so we did a bunch of troubleshooting.
Stefan, their network engineer, he helped me out too.
At least I assume he's their network engineer because the man really knows his stuff.
And we actually still haven't tracked it down.
I don't know exactly what it is.
And of course, I'm the only jerk that's having an issue because I'm the only jerk trying to do a live, high quality, high bit rate VoIP call.
Everybody else is browsing the web, doing email, and they're all on Wi-Fi because they're just, you know, on laptops.
And most laptops that they buy now don't even have Ethernet ports.
So everybody's on Wi-Fi.
The Wi-Fi network's just got all of these machines on it.
So I'm like, no, I want to go Ethernet. Did you see the post on Ars Technica over the weekend about
Wi-Fi? Is that what you're talking about? Yes. Yes. That's the one.
It was a great, great read. It's like six pages long and doesn't have a huge amount of detail.
Well, it has some detail, but yeah, I've used Ubiquiti gear at home for the last two or three
years, and it's been genuinely pretty flawless. Um, you just set it and forget it, which is exactly what you want from Wi-Fi. And they have pfSense on the front end.
That's great. See, I think that's what they have here, but not the pfSense on the front end, they don't monitor it at all. So, um, but Stefan's coming out, he's gonna bring a spool, we're gonna wire me in a jack, because there's no jacks that are hot, because everything's on enterprise Wi-Fi rather than Ethernet.
You still can't beat a cable, can you?
No, no.
But ironically, the solution was – really the only solution was I had to get outside the network.
It's a particular challenge too because normally when I come down, when I go places to do the shows, I always take the RV.
And I've done it so many times and it's kind of expensive. So I was starting to
really question the practicalness of that. Like, really, do I need to take an RV, a 40 foot long
RV every time I go somewhere like that is too much. It is way more efficient to fly down there
and just do the shows from one of the recording booths for a week or more, maybe potentially
actually two weeks.
So, you know, whatever I could fit in my backpack is what I brought with me.
And then everything else got ordered from Amazon by Linux Academy.
And all of the gear that Wes and I use and know I use in our mobile setups, they pre-bought
and had it here for me.
I get everything all hooked up, got all my gear, same exact software setup, same software
I used on the last trip with my XPS 13. Really, I thought I had it all dialed in. And I started having these
issues. And I started thinking, you know, if I was in the RV, I would have three cellular networks.
I would have three laptops I could switch between. I have Wi-Fi I could connect to. And plus,
I have a booster on the top of my RV that can pick up Wi-Fi signals from two miles away.
It's insane.
It has this little tiny device that is just this incredible booster. So there's a lot of flexibility.
Plus, I can move, and I can just go somewhere else that has better connectivity.
That has saved your butts a number of times.
Yeah. And so this trip, I'm like, oh, boy. So what can I do? Because I don't have the RV. I don't have any other equipment with me and I need to get outside the network
so I went and I got a MiFi
I might return it within 14 days
but you know
it's another device
with another payment and all that kind of stuff
but it was what you have to do to do the show
and so now
now it seems to be holding up mostly
do you guys not have tethering over there
because over here I just switched my phone into a hotspot and off I go.
Yeah, the issue is the room I'm in, which is in the center of a very large building,
and then it's a thick wall with sound insulation.
And so the only thing that gets signal in here is Verizon.
So I had to get a Verizon Wi-Fi.
Otherwise I would just use my – that's what I've done before is I just turn on tethering on my Nexus.
But that's the way it goes sometimes.
Maybe can we buy like a second one of those boosters and just sort of strap it to your back?
You can walk around with that at all times.
Dude, a podcasting backpack.
Yeah, right?
Right?
Yeah.
And just put like a Linux box in there so that way we can say it runs Linux.
Yeah, a little NUC sitting at the bottom.
It'll be perfect.
Then you're going to look like the guys in Silicon Valley walking around the convention
trying to hack everyone.
Oh, yeah.
Oh, I already do.
I have been out on the street.
And it's like if I'm with a couple of people, this has happened a few times.
I think it's even happened when I've been around you, Wes.
People will stop and go, hey, what are you guys doing?
What's going on?
Is there a computer convention?
Is there some sort of computer convention?
Exactly.
But you just get enough of us in one room and I guess it is a computer convention.
I guess we just look at.
So,
yeah,
it's been a bit of an interesting problem.
But man,
if the guys here haven't been awesome to work with trying to track it down and
troubleshooting stuff and, you know, turning off all the packet inspection to make sure that's not it, and that kind of thing. So the live stream went pretty well. It didn't disrupt the live stream much. There were a couple of dropouts because of it, but it was a little more tolerant, so that was nice. And for the big launch, we had lots of people show up and watch the live stream, so that was cool. You can probably find it on Linux Academy's
YouTube channel if you want to check it out. We had a fun little bit in the beginning and I was
there for the whole stream. So that was fun too. And now I'm just kind of hanging out here for a
couple of days. They have one more on Thursday. I haven't booked a return flight. So I'm thinking
about hanging out until the weekend and then being here for the Thursday stream, too.
Oh, yeah, you should.
Just a little, you know, a little week in Texas.
Why not?
It's nice.
Plus, it's like, I mean, I just have this total nightmare vision of me flying out of
town, and then that's when something goes wrong with the Linux box or the OBS machine.
You know, something's going to happen because I just got them all switched over.
You need to protect the reputation of Linux, Chris.
If there's one point to your entire existence, this is it.
Well, the thing too is like the guy that's in charge here of the live stream,
he's a photographer and a videographer and a good one too,
but he's never used OBS before.
So it's a bit of a training process and all of that.
So I may be here till at least then,
but pretty happy the first one went through pretty well.
And if we track down this network issue, I think the next one will be really smooth.
And it's, you know, it's a process.
It reminds me, when you're the one person having issues with the network, what you're asking for is a leap of faith from the people whose time you're asking for to troubleshoot it.
Right.
Like I felt constantly like I had to continue to prove that really,
I know what I'm doing.
There really is an issue here.
Do you know what I mean?
Like, when it comes to the network.
It's up to you to prove that it's, you know,
it's not just your problem,
that it's actually a systemic network-wide problem. And of course, because network engineers are bothered all the time, right?
Everyone loves to blame the network.
So of course, that's a natural defensive position.
Plus, from what it sounds like, you know,
if you're just browsing the web or doing various office related tasks, that is so different than any sort of real time
video streaming protocol. So it's just a different use case, too. Yeah, so true. And it's like way
more intolerant of outages and breakups, where a web browsing session or Slack chat won't even notice. I'll give you an example. So it's not that bad.
I would say, so I'll ping Google DNS, say 800 times. And in that 800 times, I may drop
0.5% of the packets. You know, it's really small. It's like a very small amount,
But what it equates to is a dropped packet every few minutes. So I'd run checks and try to figure out where I might have a lot of latency, or try to figure out where something's dropping packets.
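For a sense of scale, here's that packet-loss math worked out in a few lines of Python; the one-ping-per-second pacing is an assumption about how the test was run, not something stated in the episode.

```python
# Rough math on the anecdote above: ~800 pings with ~0.5% loss.
pings = 800
loss_rate = 0.005                      # 0.5% packet loss
dropped = pings * loss_rate            # about 4 dropped packets
seconds_per_drop = pings / dropped     # assuming roughly 1 ping per second

print(f"{dropped:.0f} drops out of {pings} pings")
print(f"about one drop every {seconds_per_drop / 60:.1f} minutes at ~1 ping/sec")
```

A hiccup every three minutes or so is invisible while you're reading email, but on a live VoIP call it's exactly the kind of thing the other end hears.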
It's kind of been a fun adventure again.
And getting to talk the lingo with the network guy, talking to him about dropped packets and going to the patch panel,
seeing if I can't patch in my Ethernet to the switch.
You're back on the front lines.
The whole lingo has been fun.
All right.
Let's keep going, though.
A lot of you at some point have tried to get something working with wine.
And maybe you were successful.
Maybe you weren't.
But the 4th of July marked Wine's 25th friggin' birthday, 25 years since the first release of the not-a-Windows-emulator. The first year brought the all-important support for Microsoft Solitaire,
not a Windows emulator. The first year brought the all-important support for Microsoft Solitaire,
and by 1996, the wine developers managed to get Word and Excel to run.
It, however, would be another 12 years before the software was declared stable.
Version 1.0 was released in 2008, and in the intervening years, the Wine team bounced around
to several different licensing models for the code, starting with BSD-style, before eventually landing on the LGPL, followed by a number of flame wars that lasted for a long time, which you can still find
archives of. As wine slogged its way towards its first stable release, a brief period of commercial
support from Corel popped up for a bit that eventually faded away when the Canadian company
departed from it all in 2001 to focus on its slow decline.
Lindows came around, which promised to offer easy Windows applications while switching over to Linux. Microsoft noticed that the name was close to Windows, took issue with that, and started the process of suing them. They went to court in 2002. Lindows lost. But Microsoft did something interesting.
They opted for a retrial and ended up paying Lindows $20 million to purchase the trademark.
And then Lindows was renamed to Linspire, which has actually just recently come back to life
when it launched under new ownership earlier this year. I did a review of their, uh, release of one release ago, you may remember. TransGaming got in the mix for a while, and TransGaming did something interesting: they really got a fork of Wine going that focused on DirectX, and that ended up kind of really pushing forward Wine as a project in the direction of DirectX, which has been huge for Wine.
And as development has continued, they've just recently released 3.11, which is stable.
Yesterday, they released 3.14 testing, or I'm sorry, 3.12 testing.
And they have also big news from WineConf.
They're beginning progress on ARM support. What?
For Wine. Yeah.
That's, it's
like using, I don't, I mean, this is
really something, but this is actually what Microsoft
is doing too. It's using
x86 on ARM
emulation to run 32-bit
and 64-bit
binaries on ARM.
So I don't even, not only do I not need to actually run that operating system, I don't
even need that architecture.
What a world.
Yeah, that is hilarious.
In fact, it really makes me just respect Wine even more.
It may be slow, but as ARM CPUs get faster, you never know.
Also, it looks like there has been some solid progress on Wine on the Tegra-based Shield, which is running Android. On the NVIDIA Shield running Android, they showed Notepad++ and Age of Empires running under Wine on Android on the NVIDIA Shield, which, I love the NVIDIA Shield, it's been pretty great. And they also talk about Vulkan support and whatnot. We'll have a link in the show notes if you guys want to read up about that. And, um, one other little note in there, and I've been
a longtime advocate of Crossover Office. They mentioned that CodeWeavers is still, for a long
time now, I believe, employing about 50% of all of the Wine developers. It's a big deal. Yeah. And
I just noticed they had a new version out that I haven't bought yet, that I was
kind of on the fence because I don't really need it anymore.
But I think I'm going to do it.
I mean, it's almost been worth it just to support this, right?
Yeah.
This is awesome.
You know, I've talked to Jeremy, their CEO, a long time ago.
I had a lot of respect for him when I talked to him.
And then when I saw that, for a long time they've been employing over half. Not only do they employ 50% of the Wine developers, but then they kick that stuff upstream too.
It's not like they're locking it behind some sort of proprietary re-license and then you have to buy it to get it.
They're still kicking that stuff upstream.
They're just so confident in their product that they feel like they can sell it too. And it's a great UI that sits on top of Wine. That if you need business-grade Wine
that is really easy to walk through,
has specific profiles for each applications,
it sets up bottles so that way each Wine configuration
doesn't disrupt the other.
Or you can install multiple applications in a single bottle.
And it has step-through wizards
that will pull down installers off the internet
if they're out there.
It's great.
And I really think that CodeWeavers is a solid company.
So I think after the show, I'm going to go buy myself a copy of Crossover Office.
I think it's worth it.
And man, more power to the Wine Projects.
25 years in.
We might not be running all of the Windows apps we thought we would
25 years into it.
Maybe not.
But it's still pretty cool what they're pulling off.
All right.
We have to keep going because I want to get to the Gentoo stuff.
And we've been catching up.
So we've taken a little bit of extra time here.
But it's just good to chat.
Now it's like the old shows back together again.
It's just so much fun.
It's beautiful.
Let's talk about Ting, though.
Let's talk about Ting.
Go to linux.ting.com.
This is, boy, do I miss having my multiple Ting
access points. This is how I do it because I have a phone on Ting and I have two MiFi devices,
a CDMA device and a GSM device. The reason why I can afford to do that, it's not because I'm
some big baller here. It's because it's $6 a month for the line and then just my usage on top of
that. That's a pretty understandable business expense for what I do.
$6 a month.
And if I don't use them, I don't pay.
Just pay for the line.
That's it.
That's easy.
It's really simple.
And that's why the average Ting bill is just $23 per phone per month.
It's a fair price for however much you talk, text, and data you use.
And when you go to linux.ting.com,
they'll take $25 off a device if you want to get one from Ting directly.
If you bring one, and remember, it's CDMA and GSM.
So if you want to bring a device, and there's a lot that are supported,
just check their BYOD page.
They'll give you $25 in service credit,
which will probably cover more than your first month, potentially.
They have a fantastic control panel where you can see your usage at a glance
and take complete control.
You can set alerts.
You can turn services off.
You can activate devices through their website. And if you ever get stuck, they just really have great customer service, the best customer service in the industry. And that's
something that Ting can focus on in a really particular kind of way. So not only do they have
great prices, not only do they have a great device selection, but they make flexible usage like the
kind that I use where I have one phone and then two different Wi-Fis possible.
It means that when I'm trying to do a show,
I can switch between those devices just by
changing my Wi-Fi network on my
laptop. I love it. They also
have a great blog. They just did a tiny home
post a little bit ago, which I'm partial
to, being that I live in one.
And it's pretty cool.
They really get this stuff.
That's what I love about Ting, and I'd love you to get some Ting.
So go to linux.ting.com and check them out.
They're a great service.
Also, thanks to Linux Academy for sponsoring this episode of the Unplugged program.
linuxacademy.com slash unplugged.
You go there, you sign up for a free seven-day trial, and you support the show.
It's a platform to learn more about Linux.
Anything that runs Linux or that Linux runs on top of.
And today, they just launched new
Azure courseware, some Linux
Essentials courseware, and
man, Wes, this one, it's perfect courseware for the
TechSnap audience. It's a full deep dive
into continuous integration
and deployment. And
on top of that, when you're all done with
the courseware you actually like set
up a whole system, like you really go through and implement every aspect of it. And they've
broken it out in different areas. And the thing that I learned today that I didn't know before
is, say they've itemized the continuous integration and deployment courseware into like eight major sections. You could, if you just needed to brush up on the Kubernetes aspect, jump right to step six. It will fill in all of the required stuff and build the server infrastructure that you would have had to go through to get there.
Just so you can deep dive on that single topic, take it and learn more.
That is the difference when you use Linux Academy, right? Like, who else has thought about this this seriously? Everyone else is trying to do so many different things that they just can't have this level of polish and attention to detail.
And if you want to get that,
you know, you want to get the next promotion,
you really want to prove that you've learned something,
I can think of nowhere better.
That's right.
Linuxacademy.com slash unplugged.
They're going to have another live stream
on Thursday, July 12th at 10:30 Chicago time.
That's central.
And the word on the street,
although I haven't seen the deets,
but the word on the street is they have a bunch of Red Hat-related announcements on the Thursday show.
So check that out.
And they're also hiring.
I wanted to make a little mention of this.
They are looking for a full-stack Node developer, if that's something that you might be interested in. They also are looking for a senior Ruby developer, as well as a customer support manager, Ruby on Rails developer, and a Microsoft Azure
training architect. Now that's a lot of positions. So I would say there's an email address I gave out
last week. And I think that's the same one. I don't remember what it was. I'm sorry. I can't recall. Brent, do you happen to remember randomly what email address I randomly said last week in the show? I'm sorry. I have no idea.
I can't recall either. But if anybody in the chat room knows either, maybe put it in there.
Because there are a lot of positions here. And it's a really cool company. I've been down there
hanging out with them for a while. And I totally think that if you are in this area
and you listen to this show, some of these
actually, all of
the ones I just listed, except for the
customer support manager, are all telecommute jobs.
So you don't even have to be in
Texas. You can be wherever you're at.
That's pretty awesome. So check it out.
Go to linuxacademy.com
slash unplugged to get
the deal. And you know what?
Do this.
If you want to look, if you want to apply for the positions, I don't have it in front of me, but I have a special email address
that's set up just for the Jupiter Broadcasting audience.
You kind of get the hotline treatment.
So if you really are interested,
it may be worth listening to the sponsor section
of last week's episode.
But otherwise,
I'll give you a little hot tip.
linuxacademy.workable.com. I have them all listed there. Linuxacademy.workable.com.
I think if you're a listener of this show,
that might put you ahead of the list.
So go check that out.
And if you really want the fast line,
go listen to last week's episode and grab the email address.
And then maybe remind me what it was.
Also, a big thank you to DigitalOcean.
do.co/unplugged.
Go there to get a $100 credit at DigitalOcean when you sign up.
You got to use that URL, though, to get that.
do.co/unplugged.
You can build applications super fast on DigitalOcean's crazy quick infrastructure.
Enterprise-grade SSDs, 40-gigabit connections coming into the hypervisors.
Of course, they're running Linux, KVM to virtualize it all.
And they have industry-leading price-to-performance, predictable cost and billing.
And I love these optimized compute types.
While you were out, Wes, we set up a PeerTube instance on a DigitalOcean droplet using their super crazy fast high-end Xeon CPUs where you get two dedicated cores.
Oh.
So cool, Wes.
You know, because I've done these, I've encoded these exact videos many times. So I really get a sense of the speedup on DigitalOcean.
And it's so nice.
It's just right there.
They also have these mix-and-match droplets now where you can set the different resources. Like maybe you want more RAM, or you can say, oh, you know what? I just need to put it all in SSD as well as block storage.
It's so slick.
And the whole thing is easy to manage
with their fantastic dashboard.
And their easy-to-use API means if you just want to do it via script or use your favorite language,
it's really easy to get started. And they got great documentation, data centers all over the
world. This is the bar, you understand, right? DigitalOcean is the bar that everybody else in
this game is trying to catch up to. The problem is, when you're ahead, you've got to stay hungry to stay competitive.
And that's exactly what DigitalOcean's done.
Now they're offering $100 credits to try it out.
That's insane.
Take advantage of that before they stop doing it because that's insane.
It's so great.
do.co/unplugged.
And a big thank you to DigitalOcean for sponsoring the Unplugged program, do.co slash unplugged.
All right.
Well, last week we talked about Gentoo's GitHub repo getting owned.
It wasn't a massive, massive blunder because they weren't really anything more than mirrors.
But now it turns out that it was a bit of a rookie mistake.
And I say that with love, much respect to some of the very, very technical people over in the Gentoo community. In fact, yeah, there's some people in the security industry that only use Gentoo because of some of the people involved with the project. So I say this with no disrespect to the people involved with Gentoo, but it helps me make
a larger point. And that is, after looking through this and thinking about some of the other recent
distro snafus, things like expiring SSL certificates, project leaders randomly quitting, leaving nobody with access to any of the infrastructure, Void! There's just been
example after example after example this year of projects getting this stuff wrong. And in the case of Gentoo, they just owned up to it on their own wiki page. They say the attacker gained access to a password of an organization administrator. Evidence suggests a password scheme was used where a disclosure on one site made it easy to guess the password for unrelated web pages, i.e. using the same password on multiple websites. The wiki page also reveals that the project got lucky.
The attack was loud.
Removing all of the developers caused everyone to get emailed.
If they had been more careful and more selective,
like figured out somebody that hadn't been active for a while
and just removed them and then got in,
it would have taken much longer for the Gentoo project to notice. But the credentials were taken and they did a noisy attack, where they did something that notified all of the developers, and so it alerted them to it. That was the thing that saved them. But the thing that caused the
issue is a shared password as well as GitHub failing to block access to repositories via Git,
resulting in malicious commits being extremely accessible.
So Gentoo had to force push over the malicious commits as soon as these things were discovered.
And credential revoking procedures were incomplete for them.
And they also didn't have a backup copy of the Gentoo GitHub organization details.
So that made it particularly hard.
And the systemd repo turned out not to be mirrored.
It was stored directly on GitHub.
So there were a couple of things,
oh, as well as some communication snafus.
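To make one of those takeaways concrete, here is a rough sketch of the kind of thing a project could script for itself: a small Python program that snapshots its GitHub organization details (owners, members, repositories) and notes whether two-factor authentication is enforced. This is purely an illustration of the ideas in the post-mortem, not anything Gentoo published; the org name and token environment variable are placeholders, and it assumes a personal access token with read:org permission.

```python
# backup_org.py - snapshot basic GitHub organization details to a JSON file.
# Illustrative sketch only; ORG and GITHUB_TOKEN are placeholders you supply.
# Note: only the first page (up to 100 items) of each list is fetched here;
# a real backup would follow pagination.
import json
import os
import urllib.request

ORG = "example-org"                  # placeholder organization name
TOKEN = os.environ["GITHUB_TOKEN"]   # personal access token with read:org scope
API = "https://api.github.com"

def get_json(path):
    """Fetch a GitHub API path and return the decoded JSON."""
    req = urllib.request.Request(
        API + path,
        headers={
            "Authorization": "token " + TOKEN,
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def main():
    org = get_json(f"/orgs/{ORG}")
    snapshot = {
        # Visible to org owners: is 2FA required for all members?
        "two_factor_required": org.get("two_factor_requirement_enabled"),
        # Who holds the keys to the kingdom right now.
        "owners": [m["login"] for m in get_json(f"/orgs/{ORG}/members?role=admin&per_page=100")],
        "members": [m["login"] for m in get_json(f"/orgs/{ORG}/members?per_page=100")],
        "repos": [r["full_name"] for r in get_json(f"/orgs/{ORG}/repos?per_page=100")],
    }
    with open(f"{ORG}-org-snapshot.json", "w") as f:
        json.dump(snapshot, f, indent=2)
    print(f"Wrote {ORG}-org-snapshot.json")

if __name__ == "__main__":
    main()
```

Run something like that from cron and keep the output somewhere outside GitHub, and you at least have the organization details the post-mortem says were missing when it came time to recover.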
But my core point that I'm trying to make
is this is clearly not the stuff
that distros are super strong at,
as it probably shouldn't be.
You know, they're developers,
they're maintainers, they're organizers,
but they're not necessarily sysadmins,
or maybe some of them are,
but that isn't what they're spending their time doing.
And I'm not trying to make a case
for putting everything in the cloud here, Wes,
but do you kind of follow my logic?
Like, I kind of come from the school of thought
that developers should develop software and
sysadmins should manage servers.
And recently the GNOME Foundation just launched new positions specifically to take the server administration burden off the developers and people in the foundation.
What's the compromise here?
Because obviously projects don't have unlimited budgets.
They can't hire people full time to do this.
And there's only so many people to do the work.
But yet there's obviously a certain amount of neglect that goes when you have too many
hats on.
What's the solution for this?
Solve it, Wes.
Yeah, I'll solve the whole thing and maybe world peace while I'm at it.
I do think you're right.
Like when we talk about the split between developers and admins, it's not even about skill sets or anything like that. It's really what you're saying.
Too many hats.
When you have too many jobs and you're thinking about too much and too many responsibilities,
you get sloppy.
You know, I'm sure the people involved here, they probably even knew somewhere in the back
of their mind that this wasn't the best password policy.
And you know what?
I'll fix it next weekend when I, you know, I finally have an ounce of time.
I'll sit down and I'll get this right and I'll finally install that password manager that I've been meaning to. But that's
never going to work. And so whether you're a developer or anything else, you really do need
people that your job is maintaining infrastructure. And it's not sexy and it might not even be fun,
especially when, as you say, a lot of these are volunteers. You're trying to contribute to open
source for the benefit of everyone. And you're excited about, you know, scratching the itch or improving the awesomeness of the build
system or whatever else it is you're interested in. And if you try to take on too many things,
it's just going to fail. So if you can find people who are willing to, you know, be humble and just
say like, yes, I'm going to, I'm going to administer, I'm going to be an administrator,
I'm going to be an overseer. I'm just going to make sure that all the central parts that no one else thinks of and it just has to run in the background will actually work.
And ideally, if you could have some sort of funding to make sure that happened, because that's probably especially hard to find volunteers for, that would be even better.
Yeah.
Yeah.
Brandon, what do you think of the idea of the free and open source software community coming together on essentially an open set of best practices, like a comprehensive best practices resource that many, many, many projects could contribute to? What about something like that? Yeah, it seems to me like these are the
types of problems that would exist with almost any project, right? So I think because we have
such strong community, even if you're making quasi competitive products, everyone needs this
infrastructure to make their work happen, right? So what if we can maybe all get together and
create some kind of, you know, this might be a dream, but some kind of best practices. I don't
know if it's a document or some kind of, maybe it's a conference or something where everybody can just be up
to date on this and some of the developers can maybe lean on some of the projects who
are a bit more expert.
Or what about, maybe what about some sort of donated auditing time, right?
Where you could get in security experts to go into an organization and do periodic assessments
just to say like, hey, where are you at?
What can you improve?
Yeah.
You would think there would be people in the community that have that particular skill set
that could be a way they contribute,
but there would need to be some structure around it
because otherwise you could get people
just banging on doors they shouldn't.
Eric, you think maybe I'm just a DevOps hater?
You say DevOps aims to solve this.
I say DevOps is the problem.
DevOps is a philosophy, first and foremost.
It's not a position.
It's not supposed to be clickbaity. It's a philosophy. It's a way of doing work.
The way the founding folks tell it, it was just a bunch of people at a conference sitting down, talking over drinks, and they basically came up with this entire philosophy.
It's a way of doing work.
You break things down into one or two-week sprints.
You take tasks one at a time.
You sit down with the developers, with the security people, with the network people,
with the business, with the systems administrators.
Everybody sits down at a table and prioritizes work.
And what can I get done this week?
What needs to get pushed to next week?
And that's where a lot of these tool sets have come in.
The CI/CD, continuous integration, continuous deployment type of approach comes from this DevOps philosophy.
So if you've got all of your code for your infrastructure,
for your network, for your applications in a GitLab instance,
and that's managed by a tool like Jenkins
that will automatically set up new builds,
that talks to a tool like SaltStack
that automatically provisions and monitors your infrastructure to make sure that,
well, this server crashed, so I'm going to rebuild another one and tell the load balancer to point to this IP address instead of the bad IP address.
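To sketch that last idea without tying it to any particular tool, here is roughly what that monitor-and-rebuild loop looks like in plain Python. It's only an illustration: the rebuild_server and repoint_load_balancer functions are hypothetical placeholders for whatever your provisioning tool and load balancer actually expose, and the health-check URL is made up.

```python
# watchdog.py - toy monitor-and-replace loop, independent of any specific tool.
# rebuild_server() and repoint_load_balancer() are hypothetical stand-ins for
# your real provisioning and load balancer APIs (SaltStack, Terraform, etc.).
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://203.0.113.10/healthz"  # placeholder backend address

def is_healthy(url, timeout=5):
    """Return True if the backend answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def rebuild_server():
    """Placeholder: ask your provisioning tool for a fresh server, return its IP."""
    raise NotImplementedError("call your provisioning API here")

def repoint_load_balancer(new_ip):
    """Placeholder: tell the load balancer to send traffic to new_ip."""
    raise NotImplementedError("call your load balancer API here")

def watch(url, interval=30, failures_before_replace=3):
    """Replace the backend after several consecutive failed health checks."""
    failures = 0
    while True:
        if is_healthy(url):
            failures = 0
        else:
            failures += 1
            if failures >= failures_before_replace:
                new_ip = rebuild_server()
                repoint_load_balancer(new_ip)
                failures = 0
        time.sleep(interval)

if __name__ == "__main__":
    watch(HEALTH_URL)
```

Every piece of that loop is trivial; the hard part is having someone own it, keep the placeholders filled in, and actually watch it.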
It's a philosophy, and it only works as well as the people that are implementing that philosophy.
Yeah, and it does seem like the tools are getting better.
They're getting more complicated, too, but it seems like the tools around this, what has been a philosophy for years now, are actually starting to catch up, and if that brings teams closer together, that's a benefit. Now, it's not a silver bullet, right? So anytime you have more things to do than you actually have manpower or time to do, when you're trying to do too much, you'll still face the same problems. Yeah, I like that you got the word stakeholders worked in there. That's
well done. Now, Brent has an idea. Maybe we just simplify this. Something a little easier than a
comprehensive guide.
Right, Brent?
Yeah, I remember.
I wish I could remember what it was called.
And maybe someone in the community can help me out on this.
But I remember Noah talking.
It might have been like even six weeks ago on Ask Noah about exactly this, a type of checklist that you could look at your own systems and see if they were secure with basically, you know, you didn't have to be creative to come up with the checklist.
It was right there and it was being distributed by one of the organizations.
I forget what it was. A checklist isn't a bad way to go, even if it was something that was sort of generic that, you know, you could at least say we followed this community checklist.
I feel like there is something here.
Yeah. I mean, a checklist is good. A checklist is helpful. It's a start, right? But you still do need that person who has enough time and focus and skill to run through the checklist and know which items are actually checked or not. Yeah. That is a really fair point. And maybe the checklist is a little bit too surface-level. You know, you need to
understand what's happening below the checklist.
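For a sense of what one automatable item on such a checklist could look like, here is a small sketch that checks how many days a project's TLS certificate has left before it expires, the kind of thing that has already bitten distros this year. The hostname is just a placeholder; point it at your own infrastructure.

```python
# cert_check.py - one automatable checklist item: warn before a TLS cert expires.
import socket
import ssl
import sys
import time

def days_until_expiry(host, port=443):
    """Connect to host, read its certificate, return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "example.org"  # placeholder host
    days = days_until_expiry(host)
    print(f"{host}: certificate expires in {days} days")
    # Non-zero exit so cron or CI can flag certificates that are about to lapse.
    sys.exit(1 if days < 14 else 0)
```

Of course, a script like this is exactly the "surface" the checklist conversation is pointing at; someone still has to understand what it's checking and why.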
Yeah, and that is sometimes what I worry is happening.
And this is just me being a curmudgeon, but what DevOps really is to me is getting software developers more empowered to just deploy the things that they need to do their job
and not have to sit around and wait for the sysadmins to set the system up.
Then there's this tedious back and forth where something isn't configured quite right.
And, you know, the sysadmin has no idea how the software works and the software developer
has no idea what the sysadmin is doing.
And the whole momentum there really caught on. I guess another way to put that is the demands to speed up software development have sort of put us down this path where developers are deploying infrastructure.
And when I started in the IT industry, those two lines were never crossed.
And they generally didn't get along super well either.
Either side of the camp was always grousing about the other side of the camp.
So it wasn't an ideal solution.
And Linux really is an operating system designed for system administrators. It
really is. If you think about it, you have to be root to install software. Well, who's root?
The system administrator. And if you think about the repo, what is the repo? The repo is an
all-knowing person who has blessed this package as being possible to install on your system. That's the mindset of a sysadmin.
And Snaps and Docker and AppImage and Flatpaks and DevOps are all about collapsing that resistance down so the developer can publish and the user can install it. And sometimes the user
is the developer. And that mind shift has been good for the pace of software development,
and it's led to things like incredible cloud infrastructure and things like AWS Lambda and
DigitalOcean one-click deployments, which I use the hell out of. So it's not all negative,
but my core issue has always been that, as we go down this path, you become less and less of an expert. You are
required to be less and less of an expert to set up really complicated systems. Whereas in the past,
your level of knowledge basically had to match the level of complication of those systems. It
was a one for one thing to get that sucker working. You knew exactly how it worked in and out
intimately. That is no longer the case.
There is now a massive gap between knowledge and complexity.
And that's not necessarily a bad thing, but I think it does lead us down this path where you can over-deploy an infrastructure.
You can build yourself a kingdom that traps you.
And then it's a lot to manage.
You've got multiple people trying to get the stuff.
You start sharing passwords.
You're using three different things to coordinate the team's communication.
Nothing is super solid because there's not one really uptight person who is worrying about all of that stuff constantly saying we need to be doing this.
We need to be fixing this. And if you don't have that person, or the funding or whatever to do that, and you leave it up to just the developers or the distributions or the projects who are busy doing other things to maintain it, I feel like this is going
to be a systemic issue that constantly plagues busy, overtaxed open source projects and could
even over time lead to a bit of a reputation as amateur and that a lot of these projects
don't have their shit together
and that they constantly are having issues.
I mean, going all the way back to the Linux Mint breaches, I mean, we can just sit here and rattle off mistake after mistake after mistake that really just boils down to, oops, we missed that,
or we didn't really know, or something changed that we weren't aware of,
or the project lead who had all the access left. And every time you just go, you smack your head and you go, that's amateur
hour. It's a gray area in many ways, because you're right. We have, we have so many tools
and in many ways that's good because a lot of times if they're done well, it means it's easier
to get it right. Right. So if you're on Amazon and you're deploying a load balancer, it will
handle all of that. You know, if you have SSL on there and you have it provision it, it will have the best ciphers. They take care of it. It gets upgraded
automatically. So that's nice. You don't have to go learn the horrible world of SSL. At the same
time, you're right that if it's so easy to build yourself this giant, complicated mountain and no
one really understands how it works, that will fall down. And that's where I think maybe sometimes
it's a little bit of a fallacy there. Just because you've, you know, quote unquote gone DevOps or whatever else, that doesn't mean
you can't have individual experts and you can't have people that happen to know more about a part
of the system than others or are specific domain experts. And at the same time, it reminds me of
sort of unit tests for software development and all the safeguards and everything else,
there's really no substitute for attention to detail
and caring about the software or systems that you're crafting.
Yeah.
Well, there you have it.
I'm just putting it out there that would be great
if somehow we could really come together on,
even if it was a checklist, that'd be fine.
That'd be a great starting point.
Or a guide, a wiki of best practices.
The problem would be anything like this ever getting traction, and it would take a lot of buy-off.
But it really seems like if we could give these folks that are too busy to – if you have the mindset ever when you're managing your infrastructure that,
oh, man, this is this tedious thing that I just have to get done so that way we can do X,
you're probably not the person that should be managing that.
You should be the person that's looking forward to setting that stuff up so that way you can
do X.
And if you're in that position, we're like, oh man, this thing again.
All right, well, we got to go get that thing set up on GitHub so we can just do this, please,
so that way we can get back to work.
If that's how you feel about it, it's probably a sign that your team needs somebody else
doing that work.
And so if we could make that person's job, because that's not always an option, if we can make that person's job and life a little bit
easier with a checklist or a guide or some sort of community resource that says, these are at least
the minimums that you should be doing, I think that would be extremely valuable. So I hope we
see it. Who knows? You never know. You never know.
Maybe I'll travel around in my RV and I'll advocate it on every Linux event I go and one day something will happen.
Maybe.
In the meantime, where should folks get more Wes Payne in their life?
Oh, you can find me on Twitter at Wes Payne or stay tuned for some exciting new TechSnap episodes coming to a podcast near you.
Ooh, TechSnap.
TechSnap.Systems slash subscribe.
Brent, do you have anywhere you'd like to send
the good folks any
tips, tricks to get more Brent in their life?
Yeah, sure. You can either come to Northern
Ontario and I'll tour you around,
or you can hit me up on Twitter,
Brent, at Brent Gervais,
B-R-E-N-T-G-E-R-V-A-I-S
or brentgervais.com.
That's great.
That's great.
Now I know if I ever go there,
I'm going to contact you.
If I ever go back to New York City,
I'm going to contact XMN.
And if I ever go down to Seattle,
I contact Wes.
I am really covered now,
at least on the major coasts.
You can follow me.
I'm at Chris LAS.
The whole network is at Jupiter Signal.
Links to...
Build the community.
Well, you know what?
Really, if you think about it,
we have a very wide-ranging network of experts,
probably in every major city.
So anywhere we want to go,
there's probably going to be somebody there.
If you'd like to join us or get links to anything we talked about,
we have all that stuff detailed at LinuxUnplugged.com.
LinuxUnplugged.com slash 257.
The live show is at JBLive.tv on a Tuesday.
And I may be still down in Texas by the time you hear us again.
I just don't know.
But thanks so much for joining us.
And we will see you right back here next week. Thank you, guys.
That was a lot of fun.
Thank you, Mumble Room.
You guys were great as always.
Stars as always, that Mumble Room.
Brent, the mumble worked pretty good.
You sounded pretty good during the whole show, too,
so I'm glad that worked.
That's kind of unbelievable.
It was great.
We kind of pulled that together last minute, didn't we?
Yeah, yeah.
And, Eric, it's good to hear that whatever was wrong with your mixer last week,
the highs are a little better this week.
Yeah, that was fun to find out after the fact.
Yep.
Yeah, the kids love the knobs on the mixer, man.
They love the knobs on the mixer.
Ooh, look at that, Daddy.
Security Amateur Hour coming in by Architect right now.
The Traveling LEP roadshow.
We need uptights.
That's funny.
Uh,
do you have any,
uh,
hot travel tips to leave us with, Wes, before we part?
Um,
buy a Kindle and spend your whole time using it.
Oh,
did you get a Kindle?
I did.
I don't know how I didn't have one already, in that I'm a pretty big reader.
I've just been stubborn and sticking to the old dead tree matter, but it's an Amazon product. Yeah, I was conflicted, but boy, I was glad I had it on
that 12 hour flight. I got a Switch for this trip. Oh, really? Oh, nice. Yep. Yep. I got Odyssey and
Mario Kart. Man, those games at 60 bucks a pop, though, that is outrageous. So I'm going to wait
a little bit for the others. But so much fun.
Much better than I expected.
It has a kickstand on the back so you can take the handles off.
And it's just so much fun to play with the kids.
It has been great.
It is better than expected.
And the fact that it uses USB-C to charge is an absolute perk.
Yeah.
So I'll get some,
I'll have some switch time while I'm on this trip.
So I'll have a,
by the time I get back,
I'll probably have a pretty good take on the device and might be ready to try
to put Linux on it.
Oh,
do it.
Yes.