LINUX Unplugged - 360: The Hard Work of Hardware
Episode Date: July 1, 2020. We're joined by two guests who share their insights into building modern Linux hardware products. Plus we try out Mint 20, cover some big GNOME fixes, and a very handy open source noise suppression pick! Special Guests: Alfred Neumayer, Brent Gervais, Drew DeVore, and Jeremy Soller.
Transcript
Breaking news just as the show starts, this Kickstarter project has smashed through its goal.
It was looking for $35,000 and right in front of my eyes as we hit record, it just hit $36,000.
Fully funded, what is it?
Why my friends, it's the Raspberry Pi, but it's totally and completely untethered.
If you've ever thought of taking your Raspberry Pi project on the go, we created CutiePi, a 12mm thin Raspberry Pi tablet with an 8-inch display, a 5000mAh battery, and a handle that also doubles as a stand. Now you are free to create whenever
and wherever the inspiration strikes. At the heart of CutiePi is our own custom-designed
circuit board. This piece of open-source hardware contains all the components necessary to make your projects portable.
The onboard power management feature shows information about the battery life
and gives you the freedom to use or recharge your CutiePi tablet just like your everyday gadgets.
To take advantage of the custom hardware, we built our mobile shell on top of Raspberry Pi OS.
Everything you need to work on the go, the shell has you covered.
And it does all this while maintaining 100% compatibility with the original Pi environment.
Okay, let's talk about this. This is pretty neat. It's even cooler when you see it in person
because it's got a handle. And anytime you put a handle on technology, I'm pretty much in.
Yeah, it's like the modern day GameCube. I love it. Yeah, yeah, it is.
The UI, the CutiePi interface, actually doesn't look that bad.
I'm surprised to say it,
but it actually looks decent.
Yeah, it's a Qt-based
open source application framework
called, you might expect,
CutiePi Shell.
And they say it's a highly optimized
mobile user interface.
Yeah, it's got a CutiePi UI for a more tablet-like experience.
It comes with an 8-inch 5-point multi-touch IPS display running at 1280x800 resolution.
And the wording they use there, you might have caught it: it's fully compatible with the Raspberry Pi.
It's powered by a custom certified open source board that uses a Raspberry Pi compute module that's a 3 Plus Lite.
It has a gigabyte of RAM.
It has the same system on a chip as the Raspberry Pi 3 Model B Plus.
It's a lot of funny names, Wes.
Yeah, I guess it's almost a Raspberry Pi, but either way, you can take it with you on the go.
You can do your normal sort of tablet things like logging onto a Wi-Fi hotspot, checking
into some websites, getting some basic work done.
And because it's all powered by almost a Raspberry Pi and Linux, well, if you want, you can pull
up a terminal.
And most importantly, your terminal can have a handle.
Hello, friends, and welcome in to episode 360 of your weekly Linux talk show.
My name is Chris.
My name is Wes.
Hello there, Wes Payne.
Good to be connected with you on a very exciting episode of the show.
We have so much interesting community news to get into. We have a couple of guests joining us, and we have a pick that's going to make your audio sound nearly pro, even in less
than ideal conditions. Don't count PulseAudio out yet. It's still got a few tricks. We'll tell
you about a GUI front end that does real-time noise suppression. It's super cool, and you don't
even need a fancy NVIDIA GPU to pull it
off. But before we get there, before we get to the community news and all of that, I got to say hello
to Drew and Cheese. Gentlemen, welcome to the table. Hello. Hello, internet. And then, of course,
a big, hearty, time-appropriate greetings to that mumble room. Hello, Virtual Lug. Hello.
Hello, everybody. Howdy. Howdy. Also joining us off mic is Levi, the studio dog. He's here with me in Austin, Texas today, hanging out as the kids are out doing the swimming stuff. And I'm down here doing the podcasting stuff. I brought him with me to keep me company. I got him a little bed. I got a little dog bone. So let's get into the community news because there's some really cool stuff to talk about today.
Probably number one on my excited to see list because it's something we'll all get eventually is some nice improvements to
GNOME Shell. It's actually fixing a regression that caused windows to render slowly. It all
comes down to culling. What do you know here, Wes? Well, as part of wanting to improve GNOME
performance, especially things like 4K resolution, which we
all love, Canonical's Daniel van Vugt, one of our favorites, has been profiling various desktop
issues and looking to fix them up before GNOME 3.38 slash Ubuntu 20.10. One of his recent
discoveries, though, is that Mutter's window culling code in general, well, it was kind of
just broken. And given the huge number of pixels at
4K resolution, well, it just makes the problem worse. Might have been okay, tolerable at 1080p,
not so at 4K. Even windows not being presented at all were not being cold, and that leads to a huge
waste, especially at these high resolutions. So you could be dragging a small terminal window
over eight maximized windows. That wasn't going to be good. You'd get about 30 FPS.
With these fixes, though, that's 60 FPS. Hey! Yeah, right? Or another example is if you're
running a maximized GLX Gears, which, I mean, come on, that's what we're all doing all day,
right? I just sit there, run GLX Gears. Well, you drag that over eight maximized terminal windows,
that frame rate went from 15 to 60.
That's a big win in my book.
Hey, that's why you're getting them fancy GPUs these days.
You want them 60 frames per second, even for your desktop environment.
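For anyone who wants to try that same kind of rough frame rate check at home, glxgears prints its frames per second to the terminal every few seconds. A minimal sketch, assuming a Debian or Ubuntu-family system where the package is called mesa-utils:

```bash
# Install the Mesa demo tools (package name varies by distro).
sudo apt install mesa-utils

# glxgears prints a line like "3000 frames in 5.0 seconds = 600.000 FPS"
# every five seconds. Maximize it, drag other windows across it, and watch
# how the reported frame rate changes.
glxgears
```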
It all came down to that broken culling that was a regression for 3.34 of GNOME Shell.
A fix is currently being evaluated and hopefully will be picked up soon and backported.
Given its importance, we'll get it to older versions of GNOME Shell, too, which I could definitely see happening.
That's happened a lot with some of these types of fixes as they make their way back into the older versions eventually.
But they tend to land first in the newer version.
Well, of course.
Now, I can't quite decide if this is, you know, kind of shameful because it's an embarrassing bug or we should just focus on the good here.
And it's getting fixed.
The regression is going away and we'll all be back to a high-performance GNOME desktop pretty soon.
This feels like a classic open-source conundrum, right? Because in one part,
it is a bit embarrassing that a bug like this crops in. But on the other end,
a bug like this could easily crop into one of the commercial platforms, and we would just
never, ever be told about it.
Yeah, ain't that right? You'll just be wondering why your desktop suddenly sucks.
And then eventually, one day, magically, it's fixed.
A service pack comes out that greatly improves performance, but instead of getting it at a six-month iteration, you get it like in a year iteration.
So I think it's a win for free software.
It would be so easy for me to get on my, we need a workstation grade desktop environment stat soapbox.
But I'll save that for when we talk about Btrfs because instead I'd like to talk about Mint 20.
I think it's been, I don't know, it's been several months that I've been sort of tracking the beta.
It landed a few weeks ago.
I started poking at it.
And then when the final release landed last week,
I wiped my test install and loaded the final version fresh.
I went with the Cinnamon.
It's Cinnamon 4.6, Linux kernel 5.4,
and it's based on Ubuntu 20.04.
And it will receive security updates until 2025.
Until 2022, future versions of Linux Mint will use the same package base as Linux Mint 20.
So upgrades, even other versions, should be really easy because in that logic, Mint 21 will be based on the same Ubuntu 20.04 base.
Until 2022, the development team won't start working on a new base.
They're just going to be fully focused on this one.
That's a nice feature for users.
It's sort of a nice bit of predictability there.
This is going to be around for a while.
Yeah.
So I thought it was worth kind of following the development of it, trying it when I could find a way to try it, and then installing the final version.
And they put forward a couple of things in this one that are quote-unquote new.
What's old is new again. Ten years ago, I'm talking like Linux Mint 6, they had a tool called Giver, which would share files across
a local network without any user configuration. They would just use mDNS to discover each other,
and then you could just drag files between the machines. It was sort of like AirDrop.
Well, they've brought it back, Wes. In Linux Mint 20, it's called Warpinator.
Oh, Warpinator. What a name, right? So yeah, it's basically just a re-implementation of Giver.
And as you touched on, setting up a whole server for something like FTP or Samba or NFS,
well, yeah, that works fine if you've already got one, but it's a little bit overkill if you just
want to send a file between two computers. Now, you know me, I just use Netcat, but that's not for everyone,
that's for sure. So here you've got Warpinator, and you just open Warpinator on the two computers,
they'll auto-discover each other, and then boom, you've got file transfer.
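Since Wes mentions falling back on netcat for ad hoc transfers, here's roughly what that looks like, as a sketch with made-up IP address, port, and filenames; note that the exact flags differ between netcat variants:

```bash
# On the receiving machine: listen on an arbitrary port and write whatever
# arrives to a file. (Some netcat builds expect: nc -l -p 9000)
nc -l 9000 > incoming.iso

# On the sending machine: stream the file to the receiver's LAN address.
nc 192.168.1.50 9000 < some-file.iso
```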
I think this is a pretty solid feature, because I have discovered through spending time in
offices that people use AirDrop. I just thought AirDrop was like some sort of side
thing that iPhones had, but it's integrated into the Mac desktop and they use it to drop files
amongst each other. And this is something similar. They just have your machines that have
Warpinator show up automatically and, you know, you have to be on the same LAN and the same
broadcast domain, but then you can just drag a file from your desktop into Warpinator and it sends it to them.
It just seems like a very nice, you know, user-focused feature.
All the technology already exists in the Linux and free desktop stack.
They've just kind of tied it together and made it a little easier and more accessible.
I think you can find a lot of things to criticize Mint about.
I'll get to one of them here in a moment.
But they've been historically really good at putting their head where the user is.
And so I think it's reflected in the way their update manager displays things.
I think it's reflected in tools like Warpinator.
And in some of the decisions they've made to fork other projects to keep things from changing.
It's in trying to service an end user who just wants a practical functional desktop.
And that's why it's nice to see things like in Cinnamon 4.6, they've updated the Nemo file
manager so that it will prioritize navigation and the display of content over thumbnail generation.
So it doesn't delay loading a large directory so it can crawl the files and get thumbnails.
It first will display you the content.
It will first respond to your navigation requests.
And then when all other tasks are complete, it will work on the job of thumbnails.
And I think that's just a small but very nice improvement.
There's also in 4.6 of Cinnamon fractional scaling now, which obviously has been a big
topic for a lot of desktop environments.
You either had 100% scale, just whatever the natural scale is, or you could bump it all the way up to 200%
scale, which would be quote unquote high DPI mode. And that would be uniform across all the monitors.
That's not what I want though, right? I mean, come on. That almost never works. Okay,
maybe in the base setup, but when you've maybe got like a laptop connected and you've got a nice
screen on the other side and you've just got a mix of resolutions, or you're doing something like
you, Chris, where you've got some, you know, horizontal and then some vertical monitors,
it's just not going to work. So I'm really pleased to see that in Cinnamon 4.6, each monitor can have
a different scaling factor and you can choose values in between 100% and 200%, hence fractional.
Linux Mint 20 is out now.
My impressions of it were pretty standard.
The install is the same as always.
There's no ZFS option.
The default fonts and icons all look really great.
The first step in the welcome wizard that it gives you now lets you set your color highlights, your dark theme.
You can choose a traditional versus modern layout just right there and just boom, gets it done. And there's also a button when you scroll down to get your snapshot set up, run the driver manager, the update manager. All the managers are just right there, one click to launch them. Some of them are, they just get you in the app and you have to do the rest and some of them are like take a specific action. It's well done. It's nice to see.
There was an issue I hit though.
I'm really disappointed in how Linux Mint has handled this.
I wanted to install Chromium and they have opted, I guess,
to break the user experience
in favor of making a political statement towards Canonical.
So on an Ubuntu system, when you
apt install it or whatever, or you go to the software center to find software, you go in
there, you install Chromium. It's actually a snap on Ubuntu. Well, Mint is based on Ubuntu.
Mint takes advantage of a lot of the heavy lifting that Ubuntu and Canonical do, both in
infrastructure and in development. When Canonical or Ubuntu make a change like that, it impacts Mint. And what they opted to do
was just break my ability to install Chromium. In fact, when I did apt install snapd, I also got
errors. So I couldn't even opt as a user to say, well, I'm fine with running Chromium as a snap,
so I'll just install snapd and go. I couldn't do that either. So what I was forced to end up doing was just go to Google
and get the more proprietary, locked-down deb and just download and install that. And I didn't want
Chrome. I wanted Chromium, but I just didn't get a choice. The Mint developers had just decided
that Canonical shouldn't be snapping applications, and so we're just going to break this.
I know the folks at Canonical would have been willing to work with the Mint team to come to a solution here.
There was no communication about this.
It's just a crappy experience.
And it seems so opposite of how they typically try to empathize with where the user is at.
They try to put their heads where the user is at. But in this case, I found it to be a user-hostile experience and sort of disappointing because I
ended up going with like the more locked down, tracked proprietary version to get what I wanted
so I could get the web page loaded that I needed. Now, I will note in the release notes for Linux
Mint 20, they do have some instructions on how to re-enable Snap if you'd like, but it is a little
confusing, especially if you're coming from another distro.
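For reference, the approach the Mint 20 release notes describe is to remove the apt preference file that pins snapd away, then install it normally. A hedged sketch of roughly what that looks like (double-check the release notes, since the exact file path could change between releases):

```bash
# Linux Mint 20 blocks snapd with an apt pin in this file; removing it
# lifts the block.
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update

# After that, snapd and the Chromium snap install the same way they do on
# stock Ubuntu 20.04.
sudo apt install snapd
sudo snap install chromium
```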
And while I agree, I mean, it's kind of pretty frustrating, especially if you're used to using
Snap packages, I think from the Linux Mint perspective, Snaps are kind of all about
giving developers control, direct stuff, especially for some of these proprietary
applications where it doesn't really make sense. And I think that just doesn't jive well with the
sort of control aspect
that Clem and the rest of the Linux Mint team want to have over their distribution. And you're right,
it's all based on Ubuntu. But I think they see it as their own thing, and that this change from
upstream messed with how they wanted their distro to work. So it seems user hostile, and it seems
hostile towards Canonical, which is sort of like crapping in your bed. So it just seems like there
could have been a better way to solve this problem. And I did see all the drama about
the announcement. We saw it all. We just didn't feel like talking about it. Nobody from
Canonical came on the show, and nobody from Mint came on the show to talk about it. So we let it
be. And now I just thought, OK, well, I'll just experience it as an end user, being somewhat
aware of the situation, but not having really explored it very much. I find it disappointing.
I think it's really kind of silly.
I wonder if the experience is different for folks who are used to maybe a more traditional
environment, you know, especially if they've been on Mint for a long time and aren't prepared for
the Snap revolution where, you know, all the behavior, which I think we've seen on the other
side too, there's plenty of people in the Ubuntu world that are sort of upset that Snaps have taken
things over. Now, I'm fine with that. I'm used to it.
But I think there are multiple segments of users out there
who have different views on how this packaging transition is happening.
And it's just interesting to watch.
Yeah, it really is.
You look at the Mint situation and you go,
huh, well, where's the line at, guys?
Because you're basically hitched to Ubuntu's wagon.
You don't have control there.
Yeah, I'm aware of Linux Mint Debian Edition.
Go ahead, pull that trigger.
And you also have no control over software as a service
or every single website on the internet,
which is a primary tool for people now.
So you have no control over what is going to be
the most common primary application,
the web browser and the web.
You have no control over the base on which you've built your operating system. But this is
where you draw the line, in a way that breaks it for end users and sends a middle finger to Canonical?
And they're not going to forget this. Like, who wins here? It just seems like somebody got grumpy and wanted to make a grumpy statement, and everybody but that one individual loses out.
I don't know.
I mean, it doesn't need to be a big thing. But the question is: is the system managed by the distribution, including all of the applications the end user is expected to use, or is the system managed by the user,
and they choose if they get vendor-updated applications
or if they use repository applications.
Who's the boss?
In my world, it's the end user.
But you can enable it again.
And I mean, I think the other thing is
snaps are kind of more about developer control than user control, right?
I mean, with the auto-updating,
I just think the traditional grumpy sysadmin, as you touched on, they might prefer this approach, you know? It's
the apt that they know, you know, it's you controlling the system and not this snap connected
to a proprietary store under Canonical. It's just standard apt. You can re-enable it just like you
could go get the Flatpak, but these are Mint users, right? So maybe it's a little bit different,
like, how about in that welcome wizard,
give me an option to install it from a deb file or something.
I think that could have gone a long way.
You know, if they had done some work
to maybe add a repository that set up Chromium
to still use the, you know, the deb version
and not have to go to Chrome
or had some stuff at the welcome wizard
or in the installer that let you choose like,
hey, do you want to use Snap given our concerns?
Or do you want to use our default of not having Snap?
Definitely this wasn't handled ideally.
Yeah, we'll link to the blog where they made their announcement about the change because this has been in the works.
They communicated ahead of time.
June 1st, Clem made a post in his monthly update saying this was coming.
And this is their choice.
And I think they have every right to make it.
I just think there's a way to do it in
which it doesn't have to be dramatic. We intentionally didn't cover the drama around
this because I think everybody is a little exhausted by universal packaging drama. I think
all of us have just had it. And at this point, we just want crap to work. And so I just decided
to avoid it until I experienced it. And I found it really frustrating. I found it to be
substandard when the rest of the experience is pretty great. Like setting up snapshots is very
straightforward. Picking a mirror that is closest to you and fastest performing is made easy enough
for any average end user to figure it out. That welcome wizard that gets you set up and going with
all these little tiny things that make the experience more stable and recoverable. All of
that and things like Warpinator
clearly demonstrate empathizing with where the user's at.
Making it easy for, you know, an expert or a newbie on Linux Mint
to get up and going.
And you're right.
Getting Chrome, which is, you know,
kind of bog-standard web client these days,
that's harder than it needs to be.
It just seems like there is an animosity between them and Canonical.
And I guess I feel like, well, then move.
Go somewhere else.
Switch to the Debian edition.
This can't be healthy as a long-term thing.
All right.
Well, moving on, let's talk about something that is quite healthy.
And that is the continued steady march to opening up our hardware.
And System76 has taken some significant steps
with the new Oryx Pro, which ships for the first time ever with System76's open firmware
and an NVIDIA monster GPU inside this thing. So Jeremy joins us from System76 to talk about
this and some of the benefits it brings us. Jeremy, welcome back to the
show. Hey, glad to be here. Glad to have you here. So let's start with the open firmware in a machine
with NVIDIA graphics, because first of all, that's a huge accomplishment. It's in a laptop,
and I think it means a lot of great benefits for end users. Can you talk us through some of that?
We've definitely noticed how important it's been for our customers on systems that have integrated graphics like the Galago Pro, the Darter Pro, and of course the Lemur Pro.
And we've always wanted to continue pushing this across our line until it covers everything, laptops and desktops.
And of course, that's a very heterogeneous set of hardware.
On the laptop side, we have Intel CPUs and we have AMD CPUs.
We have Intel graphics.
We have NVIDIA graphics.
And this will only continue to diversify as we move forward.
It's been important to me especially that the firmware we develop is able to deal with the wide set of hardware
that's typically dealt with by proprietary firmware. And what you'll find is if you go
to a company like Insyde, which produces the proprietary firmware for these systems,
or a company like AMI, which produces proprietary firmware for our desktops,
they will have a wide set of support code for features that are not so common across other coreboot machines. Things like
Thunderbolt, things like NVIDIA GPUs. Even having the H-class Intel CPU is something that's very
rare in coreboot machines.
Can you talk a little bit more about what the H class means?
So that's the high performance version. It's a mobile CPU that's 45 watts,
and it can actually go a lot higher than that. It's competitive with desktop CPUs
versus the U class CPU, which is half or less than half the wattage at 15 to 25 watts,
which is designed for ultramobile, and definitely you'll notice performance differences.
So the H-Class is incredibly high performance.
It has some unique characteristics that have made this a pretty intense project compared to the Lemur Pro. So the system can
power off if it is using all of its hardware on battery. That's the first thing. The AC adapter
is 180 watts and the system can use up to and above 180 watts because its thermals allow it to exceed that value.
So instead of using the thermal limits as the power limits as was done with the Lemur Pro,
this model is completely different. On this model, we have to use power limits that are separate from
the thermal limits because the
thermal system is capable of exhausting more than the power system can. Oh, wow. Oh, that's nice.
Yeah. On battery, the system can only go up to about 80 watts. And this is a typical number for
laptops on battery. You won't find any laptops that move more than 80 watts when they're on
battery.
It's going to be somewhere around that value.
But on the AC adapter, it's able to go up to 180 watts.
And then you throw in the NVIDIA card.
And with the NVIDIA card, you have a ton of ACPI features that need to be implemented.
The benefits are the same as the Intel graphics machines.
You have the ability to inspect the firmware.
You get faster boot times. You get more recurrent updates to the firmware, and you get better compatibility across Linux distributions.
Lightning fast boot times, as you guys put it.
I'm actually impressed, Jeremy, that as the Intel processors continue to evolve, that this is a project that is even possible.
Like, I could have seen it dying at the eighth generation Intel CPUs and never been able to progress further than that. Is it a cat and mouse game where they change something and then it's a matter of figuring out what was changed to make it work?
We've been able to consistently release open firmware with Intel releases. Two years ago,
it would not have been possible because the FSP was not released in this kind of timeline.
Now the FSP is released.
I just have to crunch through the documents, find what has been done incorrectly in coreboot and what needs to be added for the specific platform.
And I always find something and then get it working.
So I have a bunch of debug tools.
I've actually got the Oryx right now. I've got it next to me. It's lying on the lid, halfway open. The keyboard is popped off, and this is how I run it when I'm doing firmware
debugging. So what happens is before memory init, you don't have access to display, USB,
any devices except the most simple devices possible. And in the case of our open EC machines, we've developed a technique for debugging
using the EC so that we can get output from the system throughout the whole boot phase,
including the power states before the CPU is even turned on. When I first get a system
and the hardware has been done, so this is a board that we know we can try firmware
on. I'm going to use the schematics for the board to design the firmware support that I think is
going to need to be there. And I'm going to build an image and then I'm going to flash it to the
system. And then this is the most fateful time of the whole process. Will it boot or not? If it boots to display init, things are
going to be really easy because then I can develop on the machine. So long as I can get to the point
where I can boot any device, then things go a lot faster. That has happened a lot more for me
recently. We have the Gazelle, the Adder, and the Oryx Pro that all got updated. I was able to, within two hours, develop firmware for all three, flash the firmware.
Every single one of them booted on the first try.
That must have been amazing.
That must have felt incredible.
I guess I have a question that I noticed that is stressed pretty significantly in this blog
post that talks about some of this is
it sounds like it's unique to have this in combination with an NVIDIA GPU.
Can you talk to that a little bit?
Yeah, it's both unique to have an NVIDIA GPU for anything running coreboot.
It's even more unique as in this is the only system that supports switchable graphics
after the OS is booted with coreboot.
You mean you don't have to reboot to change graphics cards?
You don't.
In fact, with the NVIDIA driver 450.51 in Pop!_OS 20.04,
this system will support hybrid graphics where you can use external displays,
you can run things by default on the integrated GPU,
or by right-clicking in GNOME Shell,
you can run it on the discrete GPU.
That's great.
And some things already have rules set up.
Steam, for example, is already set up
to run on the discrete GPU by default.
And the interesting thing is,
when you run Steam on the NVIDIA GPU,
if you close the window
and it goes to the background
where it's, you know, logged in,
you're going to get your messages kind of mode,
it will turn off the NVIDIA GPU automatically.
Oh, wow.
Yeah, the power savings are the same
as if you're in integrated graphics mode,
so long as you're not utilizing the NVIDIA GPU.
You plug in an external display with a new beta driver, it will work.
Windows works out of the box with our firmware
and with the same display switching capabilities.
Wow.
And this took a ton of work.
By far, the longest part of the Oryx Pro project
was to get NVIDIA graphics working so that it would be switchable at runtime.
That took me maybe an hour to figure out how to get the NVIDIA GPU to actually show up.
After it showed up, the rest of the time, a couple weeks maybe, to figure out how to get switchable graphics to work.
and then to fix problems with switchable graphics
like suspend wouldn't work
or suspend would come back
and the NVIDIA graphics card wouldn't be there.
And at the very end, it worked perfectly in Linux
and it didn't work at all in Windows.
And that was our release day
where we wanted to put it up on the site
and that was on Thursday.
So for one day, we had a disclaimer on
the site saying, uh, the Windows NVIDIA driver will not work on this machine. And then during that
day, on last Thursday, I figured out the problem with Windows. Which is, Windows is, I
cannot understand how firmware developers get anything done with Windows.
Like, it is impossible to work with.
But you did it for us, Jeremy.
You're the hero that we need.
So you already got the NVIDIA driver, which is closed source,
and it's not going to tell you that much about what it's doing.
Then you pair that with the Windows kernel, which is,
by the way, you want to know what that NVIDIA driver is doing.
There's no way you can use it to figure that out.
All the ACPI debugging messages, I could never figure out how to get those to work. So I ended up implementing a weird protocol in ACPI that would talk to the embedded controller over a port and would output debug messages that way.
Clever.
The embedded controller is a big part of this.
It's been a real boon to our open firmware work because now we're able to get debugging
from any point of the system's process.
In fact, right now I'm working on debugging the Intel FSP because we want to enable memory
overclocking.
Just to clarify, you're talking about the System76 embedded controller firmware, right?
Yeah, that's what I'm talking about. Yeah.
Yeah. So that plays a role in this. I guess I didn't appreciate that connection that this
controller gives you a debug point to do this other stuff.
It is probably the single most important thing for us
in terms of being able to reproduce open
firmware on new devices.
Fascinating.
Before we had the Galago and we had the Darter Pro, that was kind of the first generation.
And that was kind of the chimpanzee version.
But now we've evolved.
Okay, now we're maybe 150,000 years ago when people first discovered fire or whatever.
I'm excited to see what we evolve into in the future.
And what this project has done, this Oryx Pro, is we have covered so many different pieces across the whole spectrum of what you would expect from a modern PC.
And I've ported it to the Adder too,
so we've got OLED 4K figured out.
We've got systems that pretty much go through the laundry list
of what is in a PC.
And this was extremely important.
Now we're pretty sure that we can port this to all of our laptops
and hopefully moving forward, our desktops.
That's awesome to hear.
And just sticking with the Oryx for a second,
this seems like the most competitive laptop
System76 has ever made.
It's 4.39 pounds.
It has the open firmware.
It has the embedded controller firmware.
It has the real-time switchable graphics.
It has the high-performance CPU.
So I figured I'm going to head over to your website
right now and configure one and just see what it lands at. So I went with the eight gigabytes of
video memory for the RTX 2070. I thought, why not go with the 17.3-inch matte if I'm going to go big?
It also comes in a 15.6-inch matte, which is nice. 5.1 gigahertz, 10th generation i7. I'm going to go with 16 gigs of
RAM on this one. Maybe. You think maybe I should go with 32, Wes? I think you got to go with 32.
Amen. It's got Thunderbolt. You're going to want to use your Thunderbolt dock and do the VM thing
that you've been talking about on the show for years, right? You're right. Yep. Plus you can
replace all those pies. Yeah. All right. Yep. Plus you can replace all those
pies. Yeah. All right. Okay. I'm going to go with a terabyte MVME and I'll just stick with one disc
with a one-year warranty. It comes out at $2245, which if you think about something like this in
comparison to what you'd get in like a MacBook hardware, which would, they would be, it'd be a
$4,500 machine and it wouldn't even have 10th generation Intel CPUs.
And it's just remarkable because not only is it price competitive,
at least in my opinion, especially for what you get here,
but it is genuinely a unique offering in the Linux space too
with this firmware level stuff that you've done.
And I'm really glad you came on to tell us a little bit about the behind the scenes of it, because I'm able to appreciate it even more now. And I am so impressed that the
work has continued, Jeremy. And I'm glad to hear that some of the work you did previously paid off
in getting the Oryx ready and that we could see it spread across the line. I'll just say,
now we just need to get one and we need to try it. Awesome. We'll have links in the show notes.
Jeremy, thank you for your time and congratulations on a job well done.
I think your work there has made this, and of course the rest of the team, but your work there particularly, has made this a very competitive product.
It looks really good.
I want one.
Awesome.
Thanks.
All right.
Well, let's talk about another project that is making some fantastic progress, and that's UBports.
In the post-show, when we were just streaming and it wasn't recorded, this came up in our Virtual Lug.
It was like, hey, let's get a UBports update.
And I thought, you know, it is time because we've got PinePhone stuff to talk about, and we've got another project I want to chat about.
So we are going to transition from talking about big, burly desktops to big, burly phones.
And Fred joins us from UBports.
Fred, welcome to the show.
Oh, hello.
I would like to chat a little bit about the PinePhone and just get your opinion on the state of affairs and how the PinePhone is changing the game.
I got one.
I got the Braveheart Edition, and I haven't yet loaded UBports onto it.
If I were to get a PinePhone, or if, say, a listener in this case
were to get the PinePhone and grab this image, how functional is it at this point?
A lot of hardware actually works. You can do phone calls now. There is much improved
power saving, especially due to the Crust work that has been going on in the background.
So you can finally reach about 14 hours of battery standby time,
which is not bad at all compared to what we had before.
No kidding.
Yeah, right.
On the other hand, though, there are some minor issues still to be worked on.
For example, the GPS stuff.
I'm not sure how well-progressed that one is,
but I'm pretty sure we will get there sooner rather than later.
So you also go by Alfred. I know in our Mumble room, you're going by Fred, but tell us what you do with
the project too. We should cover that because I think it's your first time on the show. So we
got to do some of the basics. I joined the UBports community almost two years ago, and I started out
as a porter. I have been doing ports multiple times, especially for Sailfish OS, for example, on the Galaxy Nexus.
And I thought, hmm, I have this Sony Xperia X lying around here.
Maybe I should do something with it.
And I figured there are some people who use it for running Sailfish OS and might as well just try Ubuntu Touch on it because yalla has already done a lot of work
enabling the hardware so i just fiddled around with it a little bit and turns out it was not
that hard to get it working and it has been a community device for like half a year now
people can download it people can flash it and i'm super proud of the work that has been going in, especially due to the help of the community,
getting DualSIM to work.
And yeah, it's a fantastic experience
being in the UBports community.
That's such a classic scratch your own itch to get started,
and then it just snowballs into something much bigger.
Is it still your primary device for UBports,
or do you carry another device as your main one?
I actually carry multiple devices around. I have to.
Of course, of course. Why am I not surprised?
For example, the MX4, which has been a Ubuntu Touch device for a long time, but still some
bugs come up and I have to check out a bug report and just fix it whenever something comes up.
But there is also, and that ties into the new work that has been going in with Halium 9,
I'm currently working on a port for the Google Pixel 3a.
And that one is very interesting because the software landscape,
the Android landscape has changed so dramatically just over a short period of time that you have to take into account the changes in partitioning and changes of how the recovery and the boot image work.
And the Halium 9 version, or the Halium project overall, is just moving towards that.
We're not completely there yet, but it's enough to get an almost fully functional device working with Android 9-based drivers.
Halium is the project that drives the newer generation of ports for Ubuntu Touch.
Halium 9 basically is the version of Android 9, modified and adapted to a typical GNU slash Linux system.
With Halium 9 especially,
the hardware abstraction is getting so good
that the performance has been improved.
The hardware support for rotation sensors, et cetera,
has been migrated over to packages from Jolla. And we can now see that
the bring-up, the initial bring-up compared to a 7.1-based port is actually much faster,
and we reached a lot of goals in a very short amount of time.
Can you help me understand where Project Treble lies in all of this for UBports? Because
I saw a story on XDA Developers on June 22nd that there is a generic system image that will just
maybe in theory one day bring Ubuntu Touch to any Project Treble supported Android device.
How does that play in here? Is this a realistic thing as far as you're aware?
Is it complement what you're doing already, etc.?
Currently, the work on a generic system image and Project Treble is spearheaded by Erfan, one member of the community.
He's the one that drives the developments on GSI-related things, Project Treble-related things.
And he basically releases a root FS plus a generic system image,
flashable as a zip file.
So if you're used to TeamWin Recovery, you can just take the zip file and flash a generic system image on your device.
The only thing that is the missing piece,
or the only two missing pieces right there,
are a vendor partition that is Project Treble compatible and a modified kernel image. So due to the fact that we
rely on AppArmor in a big way, we do require some kernel changes to be present in the kernel
image. But as soon as those are in, you are free to basically take the GSI image, flash it onto
your device, and enjoy Ubuntu Touch that way. Okay, so that sounds pretty promising. Now,
to get us ready for today's episode, Mr. Cheese Bacon locked himself in the laboratory
and ran some experiments. So I know he walked away with a few questions for you. So take it away,
Cheesy. You know, I'd used UBports last year on a Nexus 5 for a week during LinuxFest Northwest, and it held up great there.
I've noticed I do also have the PinePhone, the original Braveheart edition.
But first off, you guys are doing a fantastic job.
I love where you've gone with this so far.
Thank you.
I do have a couple of problems, though, and this may be more hardware related than it is software related.
But I noticed that the machine, the phone will boot loop every once in a while.
Whenever it gets to kind of a low power reserve, whenever you plug it in, it will try to power itself on.
But then there's not enough current, I guess, to run the phone itself.
So it kind of goes into this boot loop mode. And then also whenever I power my device off,
that's when I started incurring this boot loop issue and the battery was completely drained.
I ended up having to pull the battery out, booting to the postmarketOS kind of default firmware that was shipped with it,
and then reset the battery. And I was finally able to get back in once enough current got going to
it. So is that a hardware-specific issue, or is that software-related?
Remember in the Android world, when you plug in a phone that is powered off and it tries to charge the device,
there is this low power mode which the device enters.
And that's the reason you can see this battery symbol
in the middle of the screen just showing you that it's charging up.
That is something that is probably missing
in the PinePhone world right now or with mainline devices in general.
So there is no special way for the bootloader to tell the OS.
And remember, that is a fully booted Android
working in the background,
just showing you a charging indicator
in the middle of the screen.
Wow.
It's definitely something that we can take a look at.
And yeah, there are some differences
between what we're used to
and what the PinePhone
provides or what other mainline devices provide, so we will definitely have to take a few
things into account. Yeah, and I noticed too, and I don't know if this was a recent update
because it's been a little bit. I've actually got a SIM card en route to me so I can try this, you
know, with full-on calls and text and stuff as well.
But the store seems to have changed a little bit. And now you've implemented the kind of like,
dislike feature. Is that something that's new to this latest version? Or is that something you guys
have shipped for a little while? No, that one is new, in the way that it shipped just, like,
I believe one month ago. The most important thing is a shout-out to Brian and Joan, who have been
working on the redesign. They have been working very hard on implementing a commenting and
liking feature. Turns out that people are actually willing to give feedback that way,
which is awesome for me personally, because I also do maintain a few applications on the store, and
it is just nice to get feedback from users who actually care about the platform and the
applications that run on it. Alfred, it sounds like there's a lot of things that are continuing to
progress forward. Before we wrap up, I'm just kind of curious to know what you're looking forward to. Just as much as the integration of Lomiri, the desktop environment, into Debian and possibly other distributions.
Just seeing where we were one year ago and how far we have come is actually pretty great to see that.
It's pretty exciting to watch it.
And it's great to see a lot of hardware options kind of converging in.
So feel free to jump back on the show in the future
and let us know when something develops
and keep us updated on it
because we'd love to follow it.
And we're rooting for you guys over there.
So thank you very much for joining us.
Thank you for having me.
And also Alfred slash Fred stopped by LUP Plug on Sunday
and hung out with us for a little bit during our LUP Plug,
which was great too.
And it's a great way to test your microphone out and make sure it's working before Tuesday.
So just a little plug for LUP Plug.
We do it every single Sunday, and it's at noon Pacific.
That's the regular time for this show.
It's now on the official Jupiter Broadcasting live calendar too at jupiterbroadcasting.com
slash calendar.
And you just get in the lobby on our Mumble server at noon and just chat Linux.
We had tons of interesting conversations going this last Sunday.
It was pretty great.
And, you know, the RV is pretty packed with a little dog, three kids, a wife.
And then you've got air conditioners going and TVs going and tablets going. And so I went out and sat in the car and hung out for a couple hours
doing LUP Plug in the car
just with the windows down.
You adorable nerd.
Yeah, it was pretty cool.
It was a little warm,
but it was pretty cool just to hang out
and have some downtime with the LUP.
So check it out every Sunday,
noon Pacific on this here Mumble server.
Wes, what do you say we get to the picks
before we get out of here?
Because we got a really cool one.
Oh, yeah, we do.
Yeah, now this is one that we actually did.
We'll do a little demo for you in just a moment
because it's something you can hear the difference in.
And I'm going to give a shot at the name, Wes,
and then you tell me how I did, okay?
I think it's pronounced Cadmus.
I was going with Cadmus, but Drew, you found this.
What's your input?
I'd definitely say Cadmus.
What?
Miss?
There's no, it's C-A-D-M-U-S.
That's muss.
What are you talking about?
Cadmus?
Cadmus.
Maybe Prospector Chris might know.
It's Cadmush.
I'm telling you, it's Cadmush, don't you know?
Either way, it doesn't really matter how I pronounce it.
It gives you something that is pretty in demand right now for everybody working from home.
And that is better sounding audio despite background conditions.
And Mr. Wes Payne recorded a little sample for us from his home office to give you a demonstration of what it's like in real life.
I'm sitting at my desk getting some work done.
Here, you can hear some typing.
This is me typing. Here, I'm typing while talking. Now, this is just me working away as normal,
using my regular desktop microphone. But with Cadmus, we can make things a little bit better.
All right, I'm back again,
still at my desk, typing away, getting work done. But you're a little less distracted by the background noise, thanks to Cadmus. Here's some typing. And here's me talking while I'm typing.
What do you think about this, huh? All right, now I'm doing some talking
and I'm typing at the same time as an example.
It's not perfect.
You can hear some artifacting in there,
but for something that you can flip on
and start using pretty quickly,
it would make like a meeting go by pretty easily
without a bunch of background distraction.
So tell me about the user experience, Wes.
Yeah, this was really easy to get started with.
So it's actually powered behind the scenes,
well, first by Xiph's RNNoise,
which is like an open source noise removal implementation.
And then that's wrapped up over in a noise suppression
for voice plugin for PulseAudio.
But you have to go load that module yourself
and tweak it and set it all up.
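For the curious, the manual route Wes alludes to looks roughly like this, going by the noise-suppression-for-voice project's documentation; the plugin path and microphone source name below are placeholders you'd adjust for your own system:

```bash
# 1. A null sink that will carry the denoised audio.
pactl load-module module-null-sink sink_name=mic_denoised_out rate=48000

# 2. A LADSPA sink running the RNNoise plugin, feeding the null sink.
#    The label and library name come from the noise-suppression-for-voice
#    build; the path here is an assumption.
pactl load-module module-ladspa-sink sink_name=mic_raw_in sink_master=mic_denoised_out \
  label=noise_suppressor_mono plugin=/usr/lib/ladspa/librnnoise_ladspa.so control=50

# 3. Loop the real microphone into the LADSPA sink. Find your source name
#    with: pactl list sources short
pactl load-module module-loopback source=YOUR_MIC_SOURCE sink=mic_raw_in channels=1

# Then select "Monitor of mic_denoised_out" as the input device in your app.
```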
Cadmus, on the other hand, is all geared about being easy. So literally, all I did, I mean,
there are some dev packages available, but I just downloaded the AppImage, you know,
chmod +x on it, run it, and then it pops up in your icon tray, and it scans for your available
input devices, and then it adds two virtual PulseAudio devices.
So you get an input and an output.
And both of them have been run through the noise removal software automatically.
No setting it up, no tweaking, no anything.
And then you just, you know, select that audio device, whether you're recording from something or you're just trying to play it out somewhere.
And you're done.
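If you want to follow along with Wes's setup, the steps he describes boil down to grabbing the AppImage from the project's releases page, marking it executable, and running it; the filename here is a placeholder:

```bash
# Make the downloaded AppImage executable and launch it.
chmod +x cadmus-x86_64.AppImage
./cadmus-x86_64.AppImage

# Cadmus then adds the virtual PulseAudio devices it manages; you can
# confirm they showed up with:
pactl list sources short
```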
I see.
So it just would like to say you're in a Zoom meeting or a BlueJeans meeting.
You would just choose that as your audio device. And if you select that one, then you just get the noise suppression version of audio.
Yeah.
And I think you could do it either way, too.
So you could use it on, you know, if you're listening to a presentation and you wanted
to have it try to remove background noise there, you could do it that way.
Or you could do it on your, you know, your input going to the meeting, which is, I mean,
really flexible.
You're right.
You could totally be watching like a YouTube presentation that has crappy background sound
and you could just flip it.
I didn't really consider that, but that's a good use case too.
So give it to us straight, Drew.
How do you feel about this?
I mean, I know it's not anywhere near what you could do in post, but I mean, not bad
for something that's real time and open source and pretty simple to set up.
Yeah, I think as with all things audio, the more background noise it has to filter out, the worse it's going to be and the worse it's going to sound. So I would never recommend this for professional use with recorded and released material.
But if you're just like talking on Discord or in a Zoom meeting or something like that, and people don't want to hear the fan behind you or, you know, you tapping away at your
keyboard, it's a good pick and it's a good option for that sort of thing. But yeah, I would say that
this doesn't even come close to professional grade, but nothing that's going to run real time is.
It just isn't. I mean, that's hard. Yeah, that's true. You know, Chris, it did do not perfectly,
but there was some dog barking as I was experimenting with it, as you well know,
and it did decently there. It kind of makes me think of the recent Google Meet feature that
they rolled out with, you know, their fancy server machine learning to do the same where, yeah, all right, it's not going to be perfect and it'll artifact your voice.
But if you're not talking and you don't want to have to constantly mute and unmute yourself,
this might just get you over the line. Yeah, that very much is true. At least you won't have a noisy
signal going into the meeting. And I think this sort of showed up on a lot of our radars when
there was the announcement of, I think it was NVIDIA RTX Voice or one of the GPU-accelerated voice suppression plugins.
And us Linux users were looking around going,
oh, wait a minute.
So that's why it was really neat to see this pop in the feed.
Drew found this one and it's pretty cool.
And I completely agree with you guys.
It's both not production grade
and also incredible that it does it in real time
as well as it does.
That's pretty much how I see it.
So we'll have a link to that in the show notes.
In fact, guess what?
We got links to everything.
Linuxunplugged.com slash 360.
That's where you'll find all them links to all that stuff over there.
You can also subscribe to the podcast and just get it when we release an episode right as it's fresh.
There's a subscribe link there.
And most importantly, a contact link.
We're going to do a follow-up episode very soon.
And we would like to get your ideas on topics
that we have covered here on this podcast
that you would like to hear us follow up on.
You know, more longer term reviews.
One of the agenda items is NextCloud.
We're going to do a follow-up
on our team deployment
of NextCloud more than a year in, I think, on how that's been going, how much it costs us,
and all of that. And anything along those lines where you've heard us talk about a new setup or
review something and you're curious how it's lasted, how is it held up, let us know.
LinuxUnplugged.com slash contact or tweet me at ChrisLAS and I'll
try to take a note of it so that way we can cover it in the roundup review. Wes, where can people
find you on the internet? I'm at Wes Payne. What about you, Mr. Bacon? I am at Cheese Bacon.
And Drew, how about you? I am Drew of Doom on Twitter. There you have it. Thank you, gentlemen,
for being here with us. Thank you to our Mumble Room for joining us. Hope to see some of you on Sunday for LUP Plug. Always appreciate you. And a special thank you to those of you watching live. Even if you're not in the Mumble Room or in the IRC room, we still appreciate you hanging out with us every single episode. I really had a good time down here in Austin. I'm beginning the journey home now, so you may see me tweeting a little bit more as I hit the road, at ChrisLAS. Follow that. But also, we may end up doing a prerecord on one of our LUP Plug Sundays. We'll try to give the LUG a heads up on that. But if you've been thinking about joining it, this might be the time because when we do the prerecords, that's the place to be. It's like an extra bonus weekend
episode. So you just have to show up because you never know when it might happen. But you can
always see us back here next Tuesday. Thank you. Levi got bored legitimately in the intro.
He got bored and he was like, okay, I'm done.
And he had been napping all day and chill.
But right before we started, he started getting a little barky, you know, like he was waking up.
And people were coming and going from the office where I'm at.
And so I was like, all right, let's get him in here.
We'll get him set up with his bone.
We'll get him in his dog bed.
And then next thing I know, he's down at my feet.
And then, like, later on the show, he's barking.
But thankfully, he didn't bark much when I was talking.
There were a couple of woofs that snuck out that I'm sure Drew will hear while I was talking.
But other than that, all the other barks happened while I was off mic.
And so I'm sitting here doing that dance where I don't have a mute switch with me.
So I'm like, dog, no, no, don't bark.
Because not only is it an office, but I'm doing a podcast.
And now he's looking at me like I'm some sort of maniac.