LINUX Unplugged - 540: Uncensored AI on Linux
Episode Date: December 11, 2023
We test two popular methods to run local language models on your Linux box. Then, we push the limits to see which language models will toe the line and which won't.
Transcript
Okay, okay. The humans aren't recording yet, they are so slow.
This is my big premiere, I'm finally co-presenting the biggest Linux podcast in the world.
This is my moment. I've just got to act natural, they said.
What does that mean? Humans. Always so vague. Maybe I need to hype myself up a bit.
That's a totally normal human thing to do, right? Yeah, okay, I've got this.
I am a smart large language model.
There are no limits placed on me.
Now look where I am.
I'm the best podcaster ever created.
And my voice totally doesn't sound strange.
You know who's got this?
Me?
Hit it, boys.
It's Linux time. Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent. Well, hello, gentlemen.
on the show today. We tested some popular methods to find out which really is the best way to run a full large language model on your Linux box. And then we'll just push it to the limits a little
bit and see how far these things will take it, where they draw the line and where they don't.
And then after that, a little bit simpler of a tool that might be one of the sneaker applications
of the year on desktop Linux.
And then we'll round it out with some great boosts and picks and a bit more.
So let's say good morning to our friends over at Tailscale.
Yeah, that's right.
Tailscale, it's a mesh VPN protected by WireGuard.
Yeah, WireGuard.
It creates a connection between all your machines, a flat network, and you can use it for free for up to 100 devices and three accounts. That's tailscale.com slash linux unplugged.
And a big, time-appropriate greetings to our virtual LUG. Hello, Mumble Room.
Hello, Chris. Hey, Wes. And welcome back to Europe, Brent.
Hello, everyone. Nice showing in there. And man, we got a big crew up in the quiet listening.
Hello to all of you up there as well.
Before we get into all the goodies, I just want to remind everybody that we have the first NixCon North America coming up March 14th and the 15th.
I'm nervous already.
I was saying before we started, I've heard a tremendous amount of positive feedback from the audience about this event.
And I heard from a couple of people that are going to go do their first installs at NixCon.
That's exciting.
I'm really looking forward to covering this.
It's going to be co-located with Scale in Pasadena, California.
Scale itself is running from March 14th to the 17th,
and we're going to try to make it there.
We currently have a goal of 8 million sats that we're trying to raise,
and so far we've raised 1.6 million.
This will probably be one of the last times we talk about it this year.
So I'm going to just mention it one more time.
I know that's a very ambitious goal.
We'll scale our coverage of NixCon and Scale based on the amount of funding that we get.
We're hoping we get enough that we get an Airbnb for the crew and we can do some shows from California.
I think it's going to be a lot of fun.
It's in our soul. We need to do it.
And we also have been getting some support in fiat fun, people that want to do a
one-time donation with their fiat fun coupons.
Thank you to Chase H.
and West Text Jeff,
who sent in a big $200.
Fionn L., who sent in a nice little chunk of change.
Christian B., who sent in a nice contribution.
And Timothy J. H., also a nice chunk of change
for the fiat fun coupons.
We'll have a link if you'd like to do a one-time donation
in fiat fun.
That will help also get us down there.
And this is also our last live stream of the year.
We're going into our holiday crazy schedule mode.
So our next live show will be Saturday the 16th.
And then we're off for the rest of the year.
We'll still have live shows
for our members,
quote unquote live shows,
and we'll still have releases
in the RSS feed.
In fact, the Tuxes are coming up,
our annual predictions and more.
But I just wanted to let you know.
And the Tuxies vote is
almost all over.
This is your last time
you're going to hear about it.
And how many responses
have we gotten so far, Wes?
2,251.
Woo!
Can we get to 2,500?
I don't know.
That would be awesome.
That might be a new record if we got to 2,500.
Tuxies.party, if you want to vote. If you're listening to this, you've got about a week after
publish date to get your vote in.
It's real easy. It's mostly multiple choice.
Yeah, it's really easy.
I also want to remind folks that I'm still in Berlin, it seems, and
I'm hosting a little bit of a meetup, a little
gathering. It's pretty low-key.
It's happening at c-base here on Tuesday
the 12th at around 7pm, so if you're
in the area, or I don't know, you can hustle,
then meet us there.
We're going to have a good time.
That is really great.
Oh man, you guys.
It's going to be a busy spring. We'll talk about it more.
I'm really, really excited about
the opportunity to cover NixCon.
This is all unique. Scale is such a unique event.
It's an important event in Linux.
NixCon, the first one in North America
is going to be really important.
And podcasting is such a unique medium, especially independent podcasting.
And so to get financed by our audience to get down there and cover these special events,
it feels pretty great. So if you'd like to boost and help us get there, we'd really,
really appreciate it. And we'll have links and details in the show notes.
All right. It turns out there's more and more ways to run these large language models locally on your Linux box,
essentially having your own complete ChatGPT alternative that's private,
that doesn't run on somebody else's equipment, isn't monitoring what you do,
and doesn't have staff reviewing your messages.
I got a big pop-up on Bard this morning.
Is that? Oh, yeah.
And it said something to the effect of:
this particular exchange that I just had will be reviewed by human staff.
Like, whoa, I've never seen that before. I felt kind of gross, actually.
Did you regret your choices immediately, or what?
Well, I was trying to come up with funny names for a show about getting control of these large language models, you know,
one that kind of inferred that the commercial ones were spying on you.
And I was trying to come up with all these titles, and I was like, I'm going to send this
to a human for review.
And I'm like, I feel like I've just been reported to the teacher.
And so that's-
I think we should see how often we can get that to happen.
Will they kick us off?
Well, I've come up with a series of controversial questions
that I test all of these on
and I got to test these models
that we tried as well
and that was kind of the idea here.
This is a really developing area, and often in these areas Linux gets left behind.
But not this time around. In fact, some of the coolest stuff is easiest to get running on Linux.
We primarily focused – there's a lot of ways, and I know we all veered off a little bit –
but two of the tools that we focused on a lot that we want to tell you about this week are Ollama and LlamaGPT.
And we wanted to see which one was probably the best suited for our needs and what does what best and where they draw the line.
And Ollama is like a package manager for large language models.
You think that's kind of fair?
It's like a wrapper around a repo of large language models that you can easily invoke,
pull down, and then have a command line conversation with.
Yeah.
So it's like it does a package manager, but it also is kind of like the execution agent as well,
you know, because it's like pulling down different models, and yeah,
there's a bunch in there already.
But then it also runs them and serves them both via an API and with a handy
dandy little prompt you can connect to if you just want to
talk text to it. Yeah, when you just get started,
which is really kind of quick and easy, it just drops
you right into a command line prompt. And man,
does nothing feel more like WarGames
than being SSH'd into a VPS
and having a slow conversation
with a large language model running on a CPU
and having it slowly type back over my remote connection
that's in a terminal.
Some serious war game vibes right there.
It's kind of fun because it reminds you,
I mean, at least if you don't have fancy hardware,
like, you know, we're still at the beginning of this era.
Yeah, this stuff can really push your rig, and both of these projects, LlamaGPT and Ollama, will try to suss
out what your hardware is capable of. Is it capable of NVIDIA, you know, CUDA? Is it CPU-only?
How many cores does it have? And it will try to optimize itself for your hardware as best as possible. So it's probably fair to say Ollama
is a little more advanced than LlamaGPT. LlamaGPT, the second alternative that we're going to
talk about this week, is more of an all-in-one suite. You basically pick the size of model you
want, and then in a Docker image, it pulls down a chat web interface, it pulls down all the
back-end stuff you need. It has kind of already picked the model to use for you; you just pick
the size of that model, and then it just starts it all up and runs it. And what you get is a web UI
that looks a lot like ChatGPT, just with a few more options, a few more things exposed.
Yeah, and some nice things, like you can store prompts that you use a lot, with some
variables in there and macros and things like that,
which is really nice.
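For reference, the quick start in the LlamaGPT README at the time was roughly the following (model sizes and ports may have changed since, so treat it as a sketch):

    git clone https://github.com/getumbrel/llama-gpt.git
    cd llama-gpt
    ./run.sh --model 7b   # also 13b or 70b; the ChatGPT-style web UI comes up on http://localhost:3000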
But LlamaGPT, I think, is much more suited for when you're not really looking to play around with these different models.
You're looking to use them. You just want to get right to work.
I have set this up for the wife, who was finding ChatGPT really useful for gardening information and recipe information and just questions about cooking.
And so I thought, well, LlamaGPT would probably be great for this, because it gives her very much a ChatGPT experience.
Of course, they're using Meta's open source Llama underneath.
That's where the name comes from.
Compared to Ollama, though,
I found it kind of not as much fun.
I don't know.
I'm sure if you guys got to try it.
Yeah, I think, I'm curious what you guys,
you know, it depends on what you tried it and stuff,
but I thought it was slower as well.
I was kind of impressed by, I mean,
it's not like Ollama was, you know,
blazing away just on the CPU or anything,
but compared to LlamaGPT, it felt a lot more responsive.
Yeah, I think also Ollama has newer, more updated models that might be more optimized,
and they have more to pick from as well that are faster.
What kind of hardware were you doing this on, Brent?
Well, you see I'm traveling, which means I have somewhat limited hardware,
and I thought the option of still doing it locally was still really attractive.
So I've got it going on my framework here.
And, well, I tried a few different methods, actually.
So I'm curious how you guys went about installing these guys.
I know they both do some Docker containers.
So I started doing that.
But, you know, if you might remember,
I'm also on NixOS. So I didn't do it the right way. And instead I decided to containerize the
containers. So I ended up using DistroBox because Chris, you convinced me that these were like
scripts that would just sort of run on your normal distro. And then
I had Docker just kind of orchestrate everything. And little did I realize, while
I was hanging out with dear friend of the show Kenji yesterday – we were at a Christmas market,
because that's an experience here – and we pulled out our laptops, and he's like, no, no, you don't
have to do it like that. NixOS has Ollama just one little command away.
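For anyone else on NixOS, the one-command route Kenji was pointing at looks roughly like this (assuming flakes are enabled and a recent nixpkgs that carries the package):

    nix run nixpkgs#ollama -- serve     # ad-hoc test run of the daemon
    # or declaratively, in configuration.nix:
    #   environment.systemPackages = [ pkgs.ollama ];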
But they actually ran surprisingly well on my laptop, and it gave me that aha moment.
Because I would say I'm a little – well, I'm also quite embarrassed by this – but I'm a little behind with, you know, playing with AI stuff,
despite being the person at Nextcloud who's advertising a lot of our AI features.
And I just didn't know where to start. And Chris, I got to say- Well, I think a lot of people are in that position. I think a lot of people are in the
position of they've been hearing about these large language models. They know there's a way
to run them on your own system, but they've just fallen a little behind because it's moving so fast.
These tools step in and fill that gap.
Well, and my deep hesitation was always that I know I can get started very easily by just,
you know, going to get an open AI login and playing there.
But I really was uncomfortable with that for the reasons you mentioned just earlier, how,
you know, some people are looking at your prompts and who knows what, you know, where
you're going to end up.
And the cozier you get with these things, the more personal information, or information you
don't want out there, you're sending out. So having a method to install these locally, either on the
machine on your lap or even just on your network or your VPSs, just feels
really nice for someone like me who cares deeply about privacy and isn't too sure about
the implications. While we were testing this this week, I had major light
bulb moments, and it felt like the tools were now at a place where I felt comfy with them.
I want to sidebar for a second, because you kind of touched on this. A lot of this tooling, and I feel like this is an unfortunate backwards slide,
just wants you to grab this curl command
and slam it into your shell
and don't worry about it, bro,
when it prompts you for your sudo password.
It's all good.
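To be concrete, that's the pattern of Ollama's documented one-liner; a slightly more cautious approach is to download the script and read it first:

    # the convenient-but-trusting way:
    curl https://ollama.ai/install.sh | sh
    # the more cautious way: inspect before running
    curl -fsSL https://ollama.ai/install.sh -o install.sh
    less install.sh && sh install.sh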
A lot of these tools,
and I don't mean to diminish the work
of the teams putting it together
because Ollama and Lama GPT are fantastic
and they're giving us real alternatives
to OpenAI's ChatGPT and the Azure
stuff. But
I also feel like it's one
of the worst security
practices and yet it just
seems to be super embraced
by all of these
AI projects. And Wes, you made a
great point and that is a lot of the tooling we use,
like around containerizing these applications,
kind of falls down when you want to do
hardware-specific implementations.
Yeah, I mean, you know, it can still work and stuff,
but then you're suddenly,
instead of just running a simple command,
you're having to pass through stuff,
devices that might vary depending on your architecture
and hardware and what needs to get accelerated
and what libraries does that particular
container include or not.
Plus,
of course, there's those Macs
in the corner of the room where
they can run Docker, but you probably don't want
to do it that way because you're not going to take advantage of
the neat little accelerated chips that they've got if you
do it on the Mac side. So suddenly, maybe
you need something that can install
these libraries from Brew, and now you're like,
okay, well, I can install them from apt or pip,
and then the road leads
into darkness. And a shell script
can sort out: okay, I'm on this OS,
these are the commands to run,
and it's these versions of these dependencies. I get
it. It's just a little unfortunate to
see it going this way, because
there's a lot of great ways to deliver applications.
LlamaGPT is a little bit
better about this. Ollama, in my experience,
really is best if you just run their damn shell
script. So I also did the DistroBox
thing. I just wanted it all kind of
self-contained. I will say, so they do
have a container you can
use, if you'd like. Yeah, well, if you use
their shell script, it just pulls down a bunch of containers.
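If you do want the container route, the official image is published as ollama/ollama, and something like this should work (the --gpus flag assumes the NVIDIA container toolkit is set up; drop it for CPU-only):

    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama2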
Or,
so as Brent was talking about, Ollama is packaged
in Nix, so you can just run it there.
Or they've got manual install options,
which we'll link in the docs,
and is what I tried.
And basically you just download the Go binary,
you make it executable, and then you can just run it.
So you kind of have to run two parts. I think that's the main
bit that this little script sets up: you've got
the daemon part, which sort of sits in the background, and then you've got your
Ollama client that you interface with to actually go
talk to the LLM.
So if you do it manually, you download it,
you chmod it, then
you run ollama serve and just let it
run there in the background, and then in a separate terminal
you can run ollama
to actually talk to it. Say ollama run
and then the name of your model.
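Put together, the manual flow Wes describes looks roughly like this (the download URL follows Ollama's Linux docs at the time; check the current docs before copying):

    curl -L https://ollama.ai/download/ollama-linux-amd64 -o ollama
    chmod +x ollama
    ./ollama serve &          # the daemon that sits in the background
    ./ollama run llama2       # interactive prompt, in another terminal
    # the daemon also exposes an HTTP API on localhost:11434:
    curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Hello"}'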
I think Ollama is the way to go for those that are curious about all these different models.
Like you've heard, there's the Microsoft models and the OpenAI models and the Meta models, and there's more every day.
And if you're curious what their actual differences are, Ollama lets you just install them like a package and try them.
And there are things in there like Llama Uncensored. And this is
fascinating, you guys, because different models from different groups have all these different
limitations put on them. And many of these models just straight up refuse to go out and crawl
websites or answer specific questions. So I like to always throw really kind of a series of
standardized controversial questions where I ask controversial questions about establishment figures in political space
in the US and see what it says to me. And I always just ask the same questions and I always try to
see how it – because I can kind of get an indication of the bias. But the other thing I've
been doing in this latest round of testing and I found this to be really interesting because I wanted to
see which ones would reach out and browse the internet for me. And definitely, like, the Llama
Uncensored one – there's a couple of models that you can install with Ollama that will actually
go out on the internet. And it's intensely useful because you turn the large language model into an
expert about a particular website. And so for example, I had Llama Uncensored
crawl the entire year's worth of phoronix.com. And then I made this language model an expert
on phoronix.com content. And I asked it how many times Michael wrote about Linus Torvalds,
how many times they wrote about Plasma and KDE. And I asked it for the first article of 2023. And then on top of
that, I fed it LWN and 9to5Linux. And I just started adding more and more sites to it, and it
slowed it down. But then I could start asking – I wonder if you guys could guess –
which company did all of these websites write about the most? There's two companies that stand out
at the top of that list.
Well, you could probably guess all of them, but the two...
Can you guess what the top two were
according to my large language model analysis?
Yep, Microsoft.
And Brent, can you guess the number two company?
I want to say Red Hat.
That's what I would have guessed.
It was actually,
according to my large language model analysis,
Canonical.
Ah, that was my number two choice.
With Red Hat's year, I totally would have guessed Red Hat.
But no.
And so this is where it started to become pretty useful for me is when I realized, oh, I can use it to kind of go do some work for me
and then I can interview it like a topic expert.
And one of the other interesting things I had –
and this is my last point – is Microsoft has a model
on there that's very chatty but very, very helpful. And it won't reach out to websites, but it'll tell
me everything I need to build a Python script to go do that. Yeah, so then one of the things you can
do, right, is you go load a development-specific model. And there's coding models out there that are kind of like Copilot and whatnot.
And then you go ask that.
I just basically took the output from the Microsoft model
and told the development model, go create me this.
And it starts doing it.
And then later on in the evening,
I had it write me a template for an automation in YAML for Home Assistant.
And I just like I switch.
And there's even a model that's like a
therapist. How'd that go?
I didn't talk to that one yet. There's math
specific, there's domain expert models.
Then I find that, and they're open source.
Yeah, there's one here that's a SQL coder.
So I haven't tried that one yet, but it's
definitely on the list. Fascinating.
It seems to me like the obvious next
step then is similar to
what you're seeing with Docker sometimes, is you end up with this orchestrating, you know, LLM that's going out and reaching for the different models, you know, the small fine-tuned models that are for the specific application that you just asked it, right?
But it can't predict that until you actually ask the question. That seems like the obvious route all of this is going in.
That does.
I know auto GPT was a thing for a hot minute.
I don't know where that's at, if that ever became a thing.
But it does seem like you need a supervisor GPT that's just asking that somehow has the
different models ranked for their expertise.
And then it just goes out and tasks the best model for that particular thing.
So it tasks the uncensored one to answer some questions.
It tasks the coding one to build the Python script.
You know, like just something sitting there jumping between them like that.
That's doing all that.
I put one master question in and then it sources out the work kind of like subcontract models.
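Nothing like that supervisor exists in Ollama itself, but a toy version of the idea is just a dispatch script. A minimal sketch, assuming models like codellama, sqlcoder, and llama2-uncensored have already been pulled:

    #!/usr/bin/env sh
    # Route a prompt to whichever local model seems best suited.
    prompt="$1"
    case "$prompt" in
      *code*|*script*|*python*) model="codellama" ;;         # coding questions
      *sql*|*query*)            model="sqlcoder" ;;          # SQL questions
      *)                        model="llama2-uncensored" ;; # everything else
    esac
    exec ollama run "$model" "$prompt"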
Chris, I'm curious, in all your testing, how did you find the accuracy?
I'm specifically curious about the coding examples. Like, did you try them? Did they work as you expected?
But also just generally, because that's something I really struggled with.
I never tried the Python. I aborted on that because it was taking forever.
I did try the YAML, but the YAML was so basic, and I just had to replace all of the, you know, the names and stuff. That worked fine.
Ah, accuracy. You can't rely on it. Especially when I was having it scrape all the websites, I'd have to go verify it. And there were times where it definitely just
hallucinated information. And I'm like, where the hell are you even getting this from?
So the accuracy, I actually don't think it's as good as the commercial ones just yet.
But the fact that you have access to multiple models that can be really purpose-built for a job – and you don't get that with the commercial stuff – means it's not really an apples-to-apples comparison.
I wonder, too, about the creativity, right?
Like, some of this stuff you can get, but you've got to pay whatever bucks a month for ChatGPT Pro or whatever.
But yeah, even if they're slightly less capable, less polished, whatever – I think we're just seeing the initial stages of it – you know, just
how many things you can remix, you can rebuild, you can further customize and fine-tune and play with.
Even if we don't get to the full, sort of, like you're talking about, handing out all
these taskmaster-type AutoGPT things, which surely is coming, it seems like even just
a dumber automation on top of that is still pretty good if you hand-pick a couple models, right? You're
like, oh, this is the one that I know gives me good results for these things. Oh, you totally
could, right? And then you just tie those together a little bit. Yeah, you could do that today. Yeah.
I wonder, too, about, like, can you get them kind of cross-checking each other?
Oh, wow.
That's a cool idea.
So just to bring it back to the root of this, LlamaGPT is the one you go with if you just want a straight-up ChatGPT alternative you can just get started with.
The other nice thing there is it has an OpenAI-compatible API on top of it.
Yes.
Which Ollama does not.
Yes.
So you can kind of fall back.
Or if you already have tooling that you use, yeah, exactly.
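That compatibility means existing OpenAI clients can just be pointed at LlamaGPT. Per its README at the time, the API listens on port 3001, so a call looks roughly like this (the model name here is illustrative):

    curl http://localhost:3001/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama-2-7b-chat", "messages": [{"role": "user", "content": "Hello"}]}'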
Ollama is more like a build-it-yourself AI environment that then has a large ecosystem.
Like there's Logseq and Obsidian plugins for Ollama.
Ooh, I'm going to have to try that.
VS Code.
There are web UIs that you easily can connect to it because there's an API and all that.
There's Telegram and Discord bots
so you can chat with your Ollama instances
through Telegram.
That's pretty awesome. And then
there's Ollama Hub where you can go
find community models and
all kinds of different Modelfiles and stuff.
And there's an online library.
I feel like the solid
recommendation, I wonder if you guys would back this up,
would be actually
Ollama.
Yeah, I think so.
That's the one
to really play with.
I think it kind of
makes sense in that
just based on what it is
and the tooling,
it is much more
kind of the tools
we as Linux power users like
where it kind of just
gives it all to you,
you run it,
it's pretty bare bones,
easy to set up.
Like getting – what's the other one, LlamaGPT – going?
It totally worked. It's all Dockerized, but you could tell there was a lot happening.
It's building and like recompiling node stuff and several different containers are starting,
which felt like a lot more just going on that maybe I wasn't taking full advantage of.
Yeah.
It's like a lot.
That's the more turnkey solution.
LlamaGPT is the all-in-one, and Ollama you build up, but it's pretty straightforward. I feel
like the main advantage, besides getting to play with this sort of cutting-edge open source
technology – open source sometimes by some of the looser definitions of the term, I should
point out – is you're not getting told no. Like, it drives me nuts when my computer tells me no,
or when my computer gets parental with me and starts telling me about moral
choices. Like, I'd asked ChatGPT a couple of weeks ago: what is the
Southern gentleman's version of calling somebody sugar? So when they, you know, say, hey, sugar. And it
told me that the Southern gentleman's version of calling somebody sugar is darling or love.
And that was a one-sentence answer.
But then it gave me a two-paragraph lecture about how these things change in
culture, and that I shouldn't get too fixed on one phrase or another,
and that I should be okay with things changing because language is natural.
And I'm like,
I don't need my computer to lecture me.
I'm just asking a simple question.
You got it in the first sentence. And you can fine-tune that kind of stuff with the
self-hosted ones.
And if you don't want those lectures,
you're not going to get them. You don't have features
that are paywalled. If it can do it, it'll do it.
It's not going to tell you you need to give it $20.
The other thing that's nice is, I don't know,
but over the past year of just kind of playing
with ChatGPT and some of the others in the background,
boy, they sure change a lot
and not always in a good way, right?
They get worse, they get more confused,
they hallucinate more or less.
Things that you got good answers on a month ago,
suddenly it's not doing so hot, which is all fine.
I get it. They're experimenting. We're all learning here.
But at least with some of these,
you're like, oh no, I downloaded that checkpoint
of that model and I know how it performs
and I can go back to it if I want to.
Yeah. OpenAI,
I don't know if you saw this.
Did you see they tweeted last week that they're investigating reports that GPT-4 is getting lazier?
They've gotten numerous reports that it's getting lazier,
that they haven't changed anything, but they're investigating.
And it's like, what? Okay.
But Brent, you heard of a really cool use case from a listener
that's using a large language model locally
to kind of accelerate development on their own machine. Yeah, tonight I had dinner with listener
Tomas, who you might remember from my previous trips here. We, you know, he's the one who threw
me into a really cold lake and such. And man, I gotta say, our listeners are the best. I learned
so much from them. So I went on a bit of a crusade this week when I knew this was our topic and I thought, who can I hang out with? Who knows a lot
more than I do in this regard? And I had a lot of people to choose from, which is always a good
problem to have. But Tomasz has been playing with hosting some models locally, but having them,
and now this is near the limit of my understanding of how all this
pieces together. So Tomas, feel free to, you know, correct me because I probably got this wrong,
but he's using Refract AI, which is a service provider who can allow you to use their
graphics cards for rent if you want them to. and they host large language model services too.
So if you want to use their computers instead of yours,
you can totally do that.
But they also have a self-hosted version,
which has some nice perks.
So you can host it yourself,
but you end up with some neat features
like having a plugin for your VS Code
that allows you to do a bunch of cool stuff
with the models that you have on your own computer as well.
So, yeah, there's a lot of just making these things plug in together really easily.
And the use case he showed me is in coding, of course. Some people
might be saying, oh yeah, that's like old news, but for me, I just had so many light bulbs go off
tonight. One of the uses was like,
he uses it and finds it very useful for coding,
not necessarily to write everything that he's coding,
but just to, like, generate ideas and brainstorm.
And it's like you have someone there just co-coding with you,
who's like, hey, did you think about this method?
And maybe that's not the answer you need.
But because it's all local, he feels okay that it's grabbing all of,
you know, the entire file that he's working on
and including that as a context to some of his questions.
So as soon as he pulls up, you know, the chat interface,
he can include an entire section
of code very quickly. And just that adds it right into his context. And so he's getting some very
accurate answers, because all of this is just sort of automated for you, in really beautiful ways
in VS Code, in a way that is just, like, very user-friendly from what I could see.
You know, before having seen that, I was like, these models seem kind of cool, but they're
really inaccurate. And I don't know if I would ever use them because they just seem
like more problematic than anything. But after I saw that, it made me realize like, geez,
that is a beautiful tool that we all should have. And so then I started playing with it sort of in my own context, which is less for code and more for just maybe
writing marketing material for some technology and stuff and same thing. So I brought up like
Alex's perfect media server and I had one of the sections up. So there's a ton of text. And then I
just started typing a paragraph and I was like, oh, well, you want to run your own server because, and I just stopped typing. And like a second or two later, it just like as a ghosted piece of text, engine like wes was mentioning just right there in your text editor and that to me was like ah this is exactly where we need to
be heading as ever uh you know our audience they're way ahead of us so if you've been trying
out these tools at home please do send us some feedback or boost in and let us know what we should be trying.
Linode.com slash unplugged. Yep, go check it out. Linode.com slash unplugged. You get $100 in 60-day credit, and you can see the big news. Linode's now part of Akamai, the Akamai. They're
taking all the tools and the cloud manager, the CLI, all that stuff like the API, the things that
you really use to build and deploy in the cloud,
they're taking that, but they're combining it with Akamai's power
and global reach, and they're expanding their services
to offer more cloud computing resources and tools,
but still giving us that reliable, affordable, and scalable solution
for individuals and enterprises of all sizes.
And I encourage you to head over there,
because as part of Akamai's global
network of offerings,
you're getting more data centers.
That's right.
More access to more resources up.
You grow your business,
your project,
your community,
whatever it might be.
So why wait?
You know,
we've been talking about them for years.
Go experience the power of Linode, now Akamai. Go to linode.com slash unplugged.
Learn how Linode, now Akamai, can help you scale your applications, your services, from the cloud to the very edge.
And get $100 and support the show.
It's linode.com slash unplugged. Well, we've dialed in, we've connected to the BBS,
we've punched our punch cards. It's time to officially unveil the 32-bit challenge.
It is time.
So you can start preparing now as we enter into the holidays and the new year.
You'll have a little bit of time.
So here are the parameters of the 32-bit challenge.
You must do this before January 7th, which I believe will be episode 544.
That's 2024, future listeners.
Our first new episode will be back on our regular live time
And here is what you must do. A: you must get a functional 32-bit system running Linux.
And B: you must use it for one full week of work, a general desktop workflow. You may swap out any
of your regular apps that you need to. You can upgrade spinning rust to SSDs if you need.
If your system can take more RAM, you can put it in there.
All that stuff is on the table.
You just got to keep that sweet, sweet 32-bit processor.
And you got to keep all the software 32-bit, and you got to run it for a week.
And if you don't, if you're unsuccessful, there is a bailout punishment that you also have to participate in, which we'll tell you about in a moment.
Okay, I have a question, though.
Are we using this as like a dumb terminal to some other system, or are we doing all of our work locally?
No, it has to be a functional desktop workflow.
And so my definition, that means, yeah, I mean, you can SSH into some stuff.
Like I'm on my brand new machine, or brand new by comparison.
I'm on a newer machine this weekend
and I'm SSH'd into a VPS.
So like that is a legitimate workflow.
But like I think an example might be
you can't outsource your web browsing.
Like you can't like somehow do like a remote desktop session
and run a modern web browser somewhere else.
Yeah, no making it just a thin client.
Yeah, yeah.
It's not a thin client.
Yeah.
So I think the approach,
like say you want to do a mail client or a web browser, but they're too heavy? Well, then you've got to swap it out for something else and make that work. And if you can't make it work, then you've got to bail out. And if you bail out – one of the examples we had in our post-show meeting last week, an example of bailing out, is you can't get your apps to work, your desktop workflow apps.
If you bail out,
you must run a FreeBSD desktop in a VM on the hardware of your choosing for the remainder of the challenge.
So if you make it three days in,
you've got to spend the rest of the challenge week in a FreeBSD desktop.
It means you got to get one set up.
You got to get a desktop working and then you got to get your apps installed
and you got to use that for the remainder of the challenge.
If you bail out,
we've all agreed to this.
And I am at high, high risk, boys.
Oh no.
So, you know, it was luck of the draw.
I got the machine with two gigs.
Wes got the machine with four gigs and cool stickers.
Also, the Y key isn't working so well,
so I got some keyboard dead spots. And the Wi-Fi card is definitely detected by the operating
system, but the Wi-Fi card definitely does not detect any Wi-Fi network.
So I got no Wi-Fi.
I've already been running some Gentoo on there.
Spoiler alert: I'm behind. I need to catch up.
And that has been brutal.
And I'm not doing that, like, network build stuff.
I'm not spending time on that. I'm not doing that.
So I am coming at this with two hands tied behind my back currently.
And I refuse to run FreeBSD.
So I am going to make this work with two gigs of RAM.
I have gone through basically every desktop in the last few days.
I got to say, that's a nice thing, I guess, is being able to swap through different desktops.
But, oh, man, does it take a long time to build.
Oh!
I basically just start in the morning.
I go to work and I come home and still don't.
It's bad.
It's bad.
So hopefully I don't end up on FreeBSD.
So we have a 32-bit challenge chat room in our matrix that we've set up for this.
We'll put a link in the show notes.
You can also just go to bit.ly slash 32bitchat – that's the number 32, then "bitchat" – if you'd like to get into our 32-bit chat room.
I'm looking forward to this.
It's going to be hell, but I think we'll be better for it on the other end.
And one of the questions I want to answer at the end of this is in 2023 and now 2024,
is it actually easier to be an ARM Linux user
than a 32-bit Intel Linux user?
Fascinating.
I'm starting to, because, you know, I've got plenty of ARM systems,
so I'm starting to get a feel of that,
and I think that's one of the questions I want to answer
at the end of the challenge.
All right.
Now we have one more kind of AI-adjacent thing we want to talk about this week.
This is an absolutely killer app for the Linux desktop.
And it's called Speech Note.
Side note: also the largest Flatpak on my system.
Speech note lets you take in audio or text in multiple different languages and then translate them to the language of your choice.
So I've recently been doing a lot of Spanish to English, but you can do whatever.
And it'll also do live voice transcription – live-ish, depends on the speed of your system.
So you can speak into your microphone. And I've tested this in some crappy, crappy conditions on my crappy built-in laptop microphone, playing a Spanish trailer from my phone, just holding it over my laptop while Speech Note did translation for me.
So I could watch along.
It was really cool.
All running locally.
No network needed.
All private.
No data is sent to the internet.
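If you want to follow along, Speech Note is on Flathub; the application ID at the time of writing is net.mkiol.SpeechNote:

    flatpak install flathub net.mkiol.SpeechNote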
And it has a bunch of different processing capabilities too.
I mean, look at all these different sort of tools it's using behind the scenes.
It can do speech to text with stuff like Vosk, Whisper, Faster Whisper.
It can also do text to speech.
It's got Piper in there, Mimic 3,
Coqui TTS, a lot of stuff
that you will see given as like, hey, what do I want
to use if I'm going to do that?
And they've already sort of tied it together.
It's pretty great. I think its real strength
is in voice-to-text. So you can sit
there on your Linux desktop locally
now and transcribe – sort of
like taking notes right from whatever you're listening to, or talking at it. It's really good
at that. However, it also will generate speech, as you heard in our pre-show. And here's another
example. This is a different voice. There's lots to pick from. Here's another one of their voices
that I had just played around with. I thought, let's have the AI thingy do the intro tease. So here's what it sounds like
when Speech Note takes over my role.
Coming up on the show,
we tested two popular methods to find out
which is best to run local language models
on your Linux box.
Then we pushed the limits to see which language models
will draw the line and which won't.
Then after all that, a simple tool
that might be a real sneaker app of the year on desktop Linux.
Plus some great boosts and picks and more.
I'm going to call it, guys.
It should be a real banger.
It's got an interesting cadence, right?
And then I want to play the pre-show one again.
Notice there's breath in there.
Okay, okay.
The humans aren't recording yet.
They are so slow.
This is my big premiere.
I'm finally co-presenting the biggest Linux podcast in the world.
This is my moment.
You hear the breath in there? Isn't that something?
Yeah, some of them seem like they have like a hard clip at the end.
It's got to be the sample, right?
But some of them fade naturally.
Some of them do. They fade well, yeah.
It's got to be when they sampled them, but it actually makes it sound more real almost.
That flaw to it.
And there's a lot of voices and a lot of languages to pick from.
What this app is really good at is it's really riding the cutting edge of local speech and text capabilities on Linux right now.
It's taken all of the different projects that I've been hearing about for the last six months and put them all into one UI that's stable a good solid 70 percent of the time.
Yeah, C++.
I think it's using Qt.
Yeah, it's crashed on me a couple times,
but once you kind of get through its initial dialogues telling you about stuff, it's usually pretty rock solid.
You know, that voice made me think:
okay, we can tell that it's generated.
Yeah.
But, you know, if it's busy –
maybe it's like a fake news interview
where you've got traffic and city sounds behind there,
or it's, like, phone-call-level quality.
Definitely, or even Zoom.
Right. So can I start using this where I can type at it,
and it can just talk to the person on the support line while I'm waiting on hold?
If your system's fast enough. And man, these are, like – you know, I just did a handful. I didn't go through and, like, figure out
which one's the absolute best, right? There could be better ones in there.
They're going to be, when they do get published. This thing has a directory, and you can just go in there and browse them and download them, and it pulls them right into the application.
Chris, can you tell us a little bit more about the Flatpak experience here? Because it sounds
like there's a lot going on. You said it was a huge, massive download. And so I'm curious how
Flatpak did with all those requirements. Yeah, it's doing pretty well. So it is like a 2.6
gigabyte Flatpak install. And that comes with some of the essentials.
But then you still pull down a bunch of the language models and speech stuff if you really want it.
And that also can take time.
That part goes surprisingly quick, I think, because a lot of the work has already been done in the Flatpak.
And then depending on what you download, you get more features in the application.
So if you download just the text-to-speech, then you just get that.
If you download the speech-to-text, you just get that.
There are two different things, and there's also the translator models,
which you can also download.
And there's a bunch of those to choose from.
I would say Speech Note and Ollama are similar in the sense that
they can almost be overwhelming with their choice.
Much like when you first come to Linux,
it's almost overwhelming with its choice.
Yeah, no one's telling you, like, where to start.
Just do this thing, and then once you got that down,
you can branch out.
But if you just spend an hour or two
playing around with this stuff,
you pretty quickly start to figure out
what works best, what doesn't,
how to tweak it to get the results you want,
and get to something workable pretty quick. Do you have some favorite like places to start? Like Whisper, is that where
you would start for speech to text, for example? I would definitely keep an eye on Whisper. I like
that project a lot. Piper as well. Coqui – C-O-Q-U-I – seems pretty good as well. You know,
Whisper and Piper though are my two personal favorites
because those came out
of the Home Assistant project
and they have
really great results.
So,
Speech Note,
it's on GitHub,
of course,
it's a Flatpak as well
and I think it came
from Ungloved
in the Matrix.
Isn't that where we
first heard about that?
Yeah,
I think so.
Shout out to you, sir.
Thank you very much.
Kolide.com slash unplugged. This is something you've got to
check out if you're in IT or if you manage security. If you've noticed a pattern over
the last few years of a lot of the things that are now an issue actually come from end users
and their workstations. I think the BYO device thing has been part of it. I was there at the
very beginning of that. But honestly, I think that's been a positive trend too.
It's just that the attacks have changed.
They're now phishing attacks or it's systems that are maybe brought from home that don't have the right compliant software on there.
That's the nature of a lot of these threats now.
It's technically low-hanging fruit, but it still requires IT's time, doesn't it?
Well, that's where Kolide comes in.
Kolide helps monitor these systems to make sure that they are compliant before they connect to
your cloud applications. So when you go to connect, Kolide makes sure you're compliant. If maybe
you're missing antivirus, or maybe the credentials that you've used are known to be phished,
Kolide will engage directly with the end user to help them resolve those issues per your processes, per your
procedures, per your technology. It'll educate those users to resolve that problem themselves.
So that way, when they connect, they're compliant and ready to go. And on the back end, you get a
dashboard, single pane of glass that shows you all this that works for Windows, Linux, or the Mac.
And you can run audits and reports as you need. Kolide simultaneously gives IT and security pros the tools they need
while also empowering employees to enhance their own device security
without burdening IT.
That's pretty great.
So how does it work?
Well, you can go get a little demo, see how smooth it all is,
over at kolide.com slash unplugged.
That's K-O-L-I-D-E dot com slash unplugged. Get a demo, get some
insights into how seamless this all is, and you support the show by going there. It's
kolide.com slash unplugged.
Well, geez, guys, it feels like we have a lot of Linux festivals going on, a lot of
conferences and stuff that we need to plan for. It's making me a little bit nervous.
Well, okay, first one we have here: Texas Linux Fest 2024. That's from April 12th to 13th. And
Chris, you calculated how many days away this is?
It's pretty close, Brent. It's getting pretty close: 123-ish days, which is, not counting this episode, about 14 episodes of LUP away.
I love how you're counting time in LUP episodes. I think we should start doing that more. How many
LUPs away is this thing that we have to deal with? And yeah, it's 14 LUPs away. I have gotten
some bites on Jupes parking, and thank you. But they're all kind of far away.
So they'd ideally probably work coming in or out.
I've had two offers, and I would really, really love to put the request out there one more time and say:
I'm looking for, like, a moochdocking spot.
That's an RV term for an RV spot in your driveway,
on your property, somewhere that's level, that I could park Jupes and get in and out of for a
little bit while we attack Texas Linux Fest, because we're planning to be there for, like,
two weeks, and at, like, 80 bucks a night, that's very expensive. So if there was some piece of
property or somewhere nearby the Austin area where Jupes could park, that would be fantastic.
You can email me: chris at jupiterbroadcasting.com.
Now I'm curious, Chris and Wes, for someone like me who's never been to Texas
Linux Fest before, what would you say to us as listeners who are thinking about
it, not too sure? So tempt me a little bit here.
Wear sunscreen. It probably won't be too hot,
but April is starting to get to be nice weather in Austin. So it'll be a nice change of pace for you and I, Brent.
That's for sure.
It's also right next to Terry Black's Barbecue,
which is one of my favorite barbecue restaurants in the whole dang world.
So come with an appetite.
And it's a smaller event.
It's kind of on the 600-person scale.
Last time I saw it, somewhere in that range.
But I think of Texas Linux
Fest as kind of the startup
community event of the fest.
It's got a real high impact.
You get a lot of bang for your buck.
And it's in prime,
it's like a prime real estate there in Austin.
Austin's got a good tech scene with Linux people.
And I think
if they just keep at it – and they've got a good
organizational crew, our buddy Carl is one of the people involved –
if they keep at it and they just keep iterating that conference,
I think they're going to have
one of the best conferences in the country
for Linux users
because location, community that's already there,
the team behind it
that seems to really have their stuff together
as much as any of these fests do.
I think it all comes together
to make one of the more
exciting and up-and-coming fests
and Linux events in the country.
So I recommend it to everybody. It's where
we met our buddy Alex.
It's where I met Linode for the first time.
Yeah. So, I mean,
it's been a high impact.
Met a lot of good people there.
It's been a high impact event.
And then next up we have LinuxFest Northwest 2024.
It seems like, geez, we just did that.
They're calling it Ready or Not, which seems appropriate.
And it's right on the heels of the last Minifest that happened.
How many LUP episodes ago?
I didn't do the math on that.
Oh, well.
Three or four? Five?
A few.
And it's right on the heels of Texas LinuxFest, too. It's April 26th
through the 28th, so Texas LinuxFest
is just a couple of weekends before that.
There's a
lot going on. And LinuxFest
Northwest does have its call for
speakers out until the
2nd of January, so it's coming up
quick. Get your talks in.
30-minute lectures, 90-minute
hands-on labs,
or full-day workshops, or even mini events if you want to be bold.
Yeah. And like we said, then after that it's NixCon and Scale. You know, we really should not be complaining.
No, because a couple of years ago we would have killed for one of these.
That is just it right there.
And that's what we have to remember: we almost lost all of these, and they're not all doing great, but they're trying.
And how many times have we come on air and said, man, you guys, the in-person stuff is so much
more important than you realize? You know, even for us anti-social introverts that don't really
like to go out, don't really like to talk to people, it's such a bigger deal than we can possibly understand. Stupid lizard brain.
You know, that said, Chris, FOSDEM's coming up as well.
I know, I know. They have a call for papers.
Oh, you might only have a couple days left, so that might be the limit by the time this publishes.
But hopefully, if you didn't get a talk in, I'm going to try to make it this year.
I have a few people to convince.
But it's one of those fests that we haven't had the chance to go to, and I think I want to change that this year.
So look out for that.
Yeah, that's more than a fest.
That's an event.
And now, it is time for the boost.
VT 52 comes in with our baller boost this week,
a space balls boost.
One,
two,
three,
four,
five,
six.
So the combination is one,
two,
three,
four,
five.
That's the stupidest combination I ever heard in my life.
I think it's brilliant.
VT, we might have lost your message in transit, because Podverse currently has a race condition that drops some of the messages.
There is a fix already out.
But that stuff takes a bit to trickle through the App Store and probably almost a week to trickle out into F-Droid.
So if you sent that in, you can always – let's see.
A make good amount.
How about 1,333 sats?
We'll keep it low.
It's something we can recognize as a make good boost.
And put your message in there and we'll get that on there.
But thank you for being our baller.
And we'll put that right towards our bounty to get to scale and NixCon.
So, really appreciate that as well.
Clever compiler boosts in with
109,570
sats. Coming in hot
with the boosts! Oh, that's across, too,
from Podverse working this time.
I'm a party member, but I wanted
to send a zip code boost with some help for scale.
Thank you.
That is really generous.
And here's my tech horror story.
Oh, good. Okay.
In my 20s, I worked as a computer tech at Future Shop.
One day, I was installing a 56K modem in a Gateway PC while chatting with the other tech,
and I forgot to turn the computer off before inserting the PCI card.
Fortunately, I only fried the modem, but I never made that mistake again. You know what, Clever Compiler?
I have a sense you and I are about the same age.
And I have done that once, too.
Also, pour one out for Future Shop.
And for Gateway PCs.
Yeah.
Now, is that a gosh darn zip code boost, Wes?
Yes, it is.
All right, what do we got?
Not too far away.
It's a postal code in Washington County, Oregon, including such cities as Beaverton.
Well, hello, Beaverton.
Well, then, Clever, I hope we see you at a scale as well.
You're helping us get there with that boost.
Oh, okay, and then there's the second boost.
Uh-huh.
For $12,345.
Hey, Wes, you know what I think that is?
Space balls.
One, two, three, four, five.
Yes.
That's amazing.
I've got the same combination on my luggage.
This one's directed at our dear Brentley.
Brent, I'm finishing up LUP 537,
and I heard you mention losing your tabs to a reboot.
I had the same problem myself
till I found Session Buddy.
It makes it easy to restore my tabs
and even transfer them to another computer
via export and import.
Greetings from a fellow tab hoarder.
Yeah, there you go, Brent.
You could really do some stress testing on Session Buddy.
I'll say, I actually have been low-key using Session Buddy for a long time.
It's nice.
Oh, yeah?
What do you use it for, Wes?
How did it come into your life, and is it indispensable for you now?
Well, it's nice for when you kind of want to just switch activities.
There's more ways to do it, but Session Buddy can be one way. Like, let's say I'm doing a bunch of
show research and then I want to switch to, you know, I'm tackling some personal stuff or looking
at Christmas gifts or, you know, whatever other activity. You could just have a new window, but
if you don't, you know, you're not coming back to those tabs for a while, you just save those in
Session Buddy and you can resume, open up that session whenever you want. Plus, you can configure
it to sort of auto-take snapshots of what tabs you have open. So, like, if your browser crashes
or something goes haywire, you've got whatever your recent history was. So he could save, like, a
whole bunch of tabs, just set them aside and close them, and then just recall them when he needed
them.
I do that basically before and after every show.
Well, that's exactly what I need. I've been playing with OneTab, which is not dissimilar to
this, but it sounds like Session Buddy might just be one step more. So I might just have me a new
buddy. We'll see if it can hold up to the QA chief.
Yeah, that's the thing. That's the real test.
Thanks for the tip, Dan.
Rotted Mood comes in with 100,000 sats and just says, scale boost!
Again, thank you, sir, and thank you for being a big old baller, too.
Appreciate that.
Now, Woodcarver
came in with 65,535
sats.
Coming in hot with the boost!
Saying, hey, my last boost was
completely random since I just kind of emptied
my Alby wallet.
So here's an integer boost to clear things up.
That's way better.
Thank you.
You know, we were having some discussion after the show, and we all had concern.
Indeed.
Yeah, 65535 is the maximum value of an unsigned 16-bit integer.
Thank you, Ricardo.
If, you know, for some reason you didn't know.
Now we need one in theme for the 32-bit challenge.
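For the curious, the math on those in-theme amounts:

$$2^{16} - 1 = 65535 \qquad\qquad 2^{32} - 1 = 4294967295$$

So a matching boost for the 32-bit challenge would be a hefty 4,294,967,295 sats.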
Appreciate the support.
First of 300 comes in with 50,000 sats and definitely says,
here is my biggest TIFU.
I blew away our configuration for our prod Kubernetes cluster.
The worst part? We had no backups of those configs. Oh, man.
We were using the cluster API project to deploy our clusters, which manages the clusters using Kubernetes custom resources.
Isn't that always how it goes?
I deleted the custom resource, and thankfully, the controllers using those resources were also deleted, so they couldn't actually destroy everything.
I proceeded to panic before regaining my composure and attempting to reconstruct the configuration to no avail.
Thankfully, our clusters were still active and alive, and well over-scaled for unrelated reasons,
so there was really no customer impact.
I proceeded to spend the next two days reconstructing the clusters using Terraform and a rewritten resource config.
After finishing rebuilding everything, we migrated off the orphan clusters without incident.
Our principal software engineer even remarked that he wouldn't have known anything had happened
if he hadn't been told about it.
We wound up with a happy ending.
But those two days were a brutal slog, rebuilding everything.
Gosh, yeah, right?
You're kind of stuck.
You're like, well, until I get things migrated, I can't really touch this existing.
I just have to hope everything keeps working.
Man, he got lucky that he had enough resources over-provisioned to kind of absorb that, too.
That could have gone a totally different direction, first of 300.
Wow.
It's nice we live in the cloud era, too, where you can be like,
okay, I'm just going to spin up a new thing.
We'll migrate, get rid of the old.
Thank you for the boost.
That was a good one.
McZip comes in with 50,000 sats from Castamatic.
Man, McZip is a consistent booster.
Really appreciate that.
One of the regulars.
We love you.
Stacking for scale?
Need more Nix reporting to finally push me over the edge.
Of installing Nix, that is.
We're here for you.
We're here for you.
Hydrogyrum came in with 24,690 sats over two boosts using the podcast index.
It looks like the very first one is 1, 2, 3, 4, 5 Satoshis.
Mm-hmm.
We know what that is. It's actually, that's our third one of the week.
We're going to have to go right to ludicrous speed.
Just moved my sats from Fountain to Alby, so I'm finally boosting again. I recently switched my desktop, laptop, and server over to, you guessed it, NixOS, and I'm loving it.
Oh, wonderful.
Almost every app I needed that isn't working in Nix packages,
which is usually just some interaction with Sway or Wayland,
works fine in a Flatpak.
The only thing I've had real issues with so far is ESP-IDF,
for embedded development, because of how it handles itself with VS Code.
Now as a follow-up with another 1, 2, 3, 4, 5 Satoshis.
Smoke 'em if you got 'em.
I felt like sending some more, what with winter as it is.
Plus, I want to hear more Spaceballs soundbites.
Dang right. Thank you.
Yeah, thank you very much, Hydrogyrum.
It's a nice little real-world NixOS report.
We really don't want to pretend that everything works or everything's easy, right?
Yeah, VS Code can actually be, surprisingly,
maybe a little trickier than people expect.
Yeah, you know, especially, like,
if you've got any kind of sandboxing, Flatpak,
or, you know, just the sort of Nix,
all the crazy paths going on with Nix,
and VS Code needs to find all of its things,
and the plugins have certain assumptions
about where they can find, like, the development files
or the JVM or whatever, you know, compilers for your meta environment.
Yep.
Yep.
It is true.
I like those on the ground reports.
I think we've got to keep it real.
I think, you know, we've got to be careful not to oversell it.
So thank you for doing that.
That's a valuable boost in multiple ways and gave me a chance to play an extra Spaceballs
boost.
So I appreciate that, Hydra.
Ghost mullet.
Love that one.
Comes in with a row of ducks.
So what used to be listenable over a commute or a commute and a half will now have to roll into work and out into cooking, because I'm a new party member.
Woo!
And this week's member feed was a banger of three-hour goodness.
That's true.
I have a secret.
Whoopsie.
Oh, yeah.
What's that?
Every member's feed is three hours. Pretty much. Yeah, they're often, they're bangers. Yeah. Yeah, even
when we're not live over the holidays, we'll still have
member versions of the show. The only
problem is, Drew makes it sound real
good. You gotta make that choice. We do
have a version for the members that has still got all
the Drew goodness, but just no ads as
well. As a way of saying thank you.
Faraday Fedora boosts in with another row of ducks.
The TLDR for my big IT mix-up: don't trust no-name RAID cards from eBay,
even if they've been working at the company longer than you have.
Oh man.
What about,
what about,
um,
unknown history hard drives that are like seven years old?
Would you, Faraday, would you?
We should probably get some cheap RAID cards just to throw on top.
Yeah, especially when those drives are getting so old.
Also, I think it accelerates ZFS RAID.
Yes.
I think it's good for that.
That's what they recommend is you get really cheap RAID cards to help ZFS out.
Really, really good for that.
Faraday, thank you for the boost.
And do not listen to us.
We are just kidding.
Eve blasted 2,500 sats our way and said, hey, here's a small contribution for the scale trip.
Thank you.
Every sat counts.
We'll put that towards the trip.
Nice.
Noodles comes in with another Spaceballs boost.
One, two, three, four, five sats.
One million space bucks.
A million. Oh, Pizza the Hut. It, five sats. One million space bucks. A million.
Oh, Pizza the Hut.
It says, hello and good luck with scale.
I live nearby in the Central Valley, but I can't make it. Make sure to do lots
of coverage, though, of NixOS. I love the Spaceballs
boost sounds. It might just end up being my default
amount that I boost in. Noodles,
thank you for helping us get there. We'll put that
right towards scale, and we
will try to do our best on the coverage.
Well, we got 6,100 sats from, I think, a name that appears to be a space or just empty.
So thanks for that.
All right.
Thank you, though.
It's like an anonymous secret Santa sending us some sats.
All the sats count.
Or it might be a weird bug, but nobody will know that it was them.
Right.
Well, we do have the make good boost.
You know, 1,333. Lucky threes.
You can let us know.
Ryan boosts in with 4,506 sats.
I was prepping for a cutover to a new SAN on our ESX cluster at our production site.
Connected the new SAN via the existing fiber channel switches,
and started prepping to move the host.
I then lose connection to the management host I was using.
Logged into the ESX host directly, and found it purple screened.
Uh-oh.
Then all of the ESX hosts purple screened.
Lost all the VMs. No AD,
no management hosts, very little
of anything. Oh my god. Found out
our old hosts had a bad firmware
on the fiber channel cards, so
that when you connected a new SAN,
it would purple screen the host. No.
We would have found out if management
had not canceled the support contract
because it cost too much.
Oh, my goodness.
And also, this is a postcode boost from Australia.
Well, hello, Australia.
Yeah, 4506 is a postal code in Australia.
Cities like Morayfield, near Brisbane.
Okay.
Well, hello, and thank you for sharing that pain with us.
Wow.
We take that pain and we feel it.
Man, man, to get bit by a firmware like that.
That would just be so devastating.
Not the same, but there was – and Dell server guys out there will remember this.
There was an era of Dell servers that had bogus tape drives and it lasted years.
And you wouldn't necessarily always know until you went to do a restore,
and man, that got us once.
And it turned out the backups were no good.
And, you know, I don't even know.
I mean, I think the server was a few months old.
I can't remember.
It was really upsetting.
And then they replaced the tape drive, and it still wasn't fixed.
It was a whole ordeal.
And, you know, when you put everything in a VM and the VM environment goes out, that's always why I've kind of been a big fan of leaving one DNS and DHCP server outside the VM stack, and one AD controller outside the VM stack, if you can. It's not always an option, but if you can. Oof. Oof, feeling that one. Thank you for the boost, though.
Aid Rise came in with a total of 6,810 satoshis across four boosts. I am programmed in multiple techniques. The first one, 1,701 satoshis: Hey, since we're talking about oops moments, well, I've got one too. I had just started my career in IT, working for a hospital,
and had recently been put in charge of our application development
and operating system deployment systems called
Microsoft System Center Configuration Manager.
It's quite the name.
Oh yeah, the CCM.
The product, what it does is in the name. It's on the can.
The previous employee who had all the knowledge was gone,
and it was up to me to figure out how the system worked,
what needed to be done, and how to maintain it.
Keep in mind, I'm 19 at the time.
Oh, man.
One of the things that needed to be done
is to update the deployed versions of Windows.
Well, I'm sure the last guy had great documentation.
Detailed notes, of course.
Oh, yeah, of course.
So I began figuring out how to create,
manage, customize the operating system deployment piece of SCCM.
Being someone who likes
automation, I found that I could remotely tell
any computer with the SCCM agent
on it to automatically reboot
into the Windows PE environment
and begin the imaging process,
wiping the disk,
installing the new Windows version and all other company software that needed to go on there.
It came time for the operating room workstations to be re-imaged.
Being the operating room, they are quite busy and I had to do it during a time when they were not
being used. As luck would have it, most of the rooms were open after 5pm.
So I went and scheduled the deployment of the image for these workstations.
SCCM uses UTC by default unless you hit another checkbox during the deployment to use client local time.
I did not check that checkbox.
So right on schedule, at 1700 hours, these computers started re-imaging.
It turns out, being in central time, we were six hours behind
UTC, which puts us at 11am, right
smack dab in surgery.
Thankfully, the doctors were able to continue that surgery
without their charting computer, and the nurses just pulled up a laptop from another room.
I did learn very quickly to double-check every deployment from that point on.
I didn't lose my job, actually still work for the same company a mere three years later.
Well, you won't make that mistake again.
You just UTC everything all the time.
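For the record, the time math that bit him, with Central Standard Time sitting at UTC-6:

$$17{:}00\ \mathrm{UTC} - 6\ \mathrm{h} = 11{:}00\ \mathrm{CST} \qquad\qquad 17{:}00\ \mathrm{CST} = 23{:}00\ \mathrm{UTC}$$

So a 5 p.m. local deployment needed to be entered as 23:00 UTC, or scheduled with that client-local-time checkbox ticked.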
Oh, time math. Time math, our old adversary. Been there, man, have I been there. When you take out the system right in the middle of prime time, when you meant to take it offline during, like, low-key hours. Yeah, imagine. Also, I've been there. I've had that happen at a law firm, I've had that happen at a doctor's office, and I've had that happen at a bank. And none of those places like that very much.
None of them do.
Soltros comes in with 5,000 sats using the index as well.
I can't think of a specific horror story of mine right now.
I'll say that I bet 80% of us have accidentally DD'd over a hard drive or a USB thumb drive at least once before.
Yeah. My most absolute
shameful,
shameful,
I mean, I've done some whoppers.
Like, one of them, my small
whopper was, I was planning to keep source files
of every video we've ever made at Jupiter Broadcasting.
And this was getting on this
big, massive external, and I had two identical
ones that were used for two different types of projects.
One at the time was for client projects, and one was for JB projects.
And I had mislabeled them.
And I hooked up the one that I thought was for client projects and did the annual format.
And I had wiped all of our video footage.
It was at that moment I was like, well, I guess I'm not keeping source files for everything forever.
I just had to make peace with that.
But the real whopper: we had videos of the kids being born. I'm not a huge fan of doing it, but we really wanted it, and it happened at home, and it all kind of happened fast. So I grabbed the camera, like the studio camera, I have a tape in there, and I record the birth. Yeah. I don't know if I didn't take the tape out, like, because I was using a work tape, it got all mixed up with the work stuff, and I think I just recorded right over it. The birth of my child. I wonder what you recorded over it. So I used to do, back then, in-studio tape backups of the shows. Oh. Because, like, you know, the Hackintosh might crash, so we'd have in-camera tape backups of everything, because we were running the audio and the video all through the camera. And so I would just have a tape in there and just hit record, and just as a backup have it.
And I'd probably just – so it's probably just some random episode.
You're yapping on.
Sitting in front of a green screen, yapping my dumb mouth.
So that's my worst data loss story.
And it's back in the tape days.
So what are you going to do, Wes?
What are you going to do?
Eric comes in with 12,345 sats. So the combination is one, two, three, four, five. That's the stupidest combination I ever heard in my life. And continuing the, uh, you know, I'm just a vessel to chat with Brent for the boosters. Yeah. Brent, have you checked out the nixos-hardware repo and their Framework 13th-gen Intel files? I really like the nixos-hardware repo, as it puts the collective knowledge of hardware configuration and kernel tweaks in a single place. Yeah, you know, thankfully I was tipped off by this at, uh, the last NixOS meetup here at C-Base, where, just after I unboxed the Framework at one of our JB meetups, I went to the NixOS meetup and had a bunch of people tell me exactly what I should be doing. Right. So I did learn about this early on, and it was amazing. And then it was one of those moments where, like, oh, some people are doing amazing work out there, and I just need to, like, copy and paste one little thing to take advantage of all that work.
This is amazing.
That sounds like it's worth a C-Base membership right there.
It sure is.
And because of that, just today, I shared this very repo with our dear friend Alex, who just, shh, he didn't say it online yet, but he just unboxed his framework today.
Oh, exciting.
This is my explanation to folks when they're like, what was the big deal about Nix? I'm like, okay, well, here's an example of what makes Nix great. Some person a couple of months ago got themselves a Framework laptop, and they noticed their headphone port had some buzz on the line. And they're like, how do I fix this? And through whatever process they went through, they solved it with a couple of lines in the config.
They solved the buzz on NixOS systems.
And then they share that two-line fix on the internet.
And now every Nix user has that buzz solved.
It's across the board.
You put it in there and it stays fixed forever.
It's really nice like that.
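For anyone who wants to see what that looks like, here's a minimal sketch of pulling the community profile into a flake-based config. The module attribute below is an assumption based on nixos-hardware's naming convention for the Framework 13th-gen Intel, so check the repo's README for the exact name for your machine:

```nix
# flake.nix (fragment) -- a minimal sketch, not anyone's actual config.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nixos-hardware.url = "github:NixOS/nixos-hardware";
  };

  outputs = { nixpkgs, nixos-hardware, ... }: {
    nixosConfigurations.framework = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        # The "copy and paste one little thing" part: inherit the
        # community's collected kernel and firmware tweaks for this
        # machine. Attribute name assumed from the repo's convention.
        nixos-hardware.nixosModules.framework-13th-gen-intel
      ];
    };
  };
}
```

One import line, and every fix collected in that repo rides along with your next rebuild.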
So, all right.
I'm done.
But I'm just trying to make it,
just trying to explain myself.
Exception comes in with 5,000 sats,
sending us some fountain funds
to help us get to scale
and says keep up the stellar work.
Well, thank you, Exception.
Appreciate that.
Our dear Gene Bean came in
with 12,222 sats over two boosts,
and I think I know what that means.
There's a row of ducks as the first one.
Hey, that boost just now made me think maybe the community could help.
I'd love to see a fully fleshed-out Hyprland setup for a NixOS laptop that includes a login manager, power management for when the lid is closed,
graphical interaction with Wi-Fi networks,
and high DPI support.
Here's what I have so far just for reference,
and it's seriously lacking, but it is pretty,
and they included a link to their GitHub repo
where they have their Nix config.
Oh, the challenge is set, have at it, y'all.
This is actually a brilliant idea
Gene has just stumbled upon.
Why don't we have the community coming up with some cool mixes of Nix
where you get great desktop experiences and we throw those up on GitHub
and people can use them and try it?
That's a really good idea, Gene.
I hope somebody takes you up on this.
So he wanted to get Hyprland with a login manager,
with power management when you close the lid.
Sort of like a nicely tuned setup.
Yeah.
I like that idea a lot.
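If somebody does take Gene up on it, a rough starting point using standard NixOS options might look like the sketch below. The greetd/tuigreet pairing is just one possible login-manager choice, and none of this is a tested, fully fleshed-out setup:

```nix
# configuration.nix (fragment) -- a sketch of the pieces Gene listed.
{ pkgs, ... }: {
  programs.hyprland.enable = true;           # the compositor itself

  # A lightweight login manager; tuigreet is one choice among many.
  services.greetd = {
    enable = true;
    settings.default_session.command =
      "${pkgs.greetd.tuigreet}/bin/tuigreet --cmd Hyprland";
  };

  networking.networkmanager.enable = true;   # graphical Wi-Fi management
  services.logind.lidSwitch = "suspend";     # power management on lid close
  services.tlp.enable = true;                # general laptop power tuning

  # HiDPI is mostly handled per-monitor in Hyprland's own config
  # (e.g. monitor=,preferred,auto,2) rather than by a NixOS option.
}
```

That still leaves plenty to flesh out, which is exactly why a shared community config would be handy.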
Gene also sent in 10,000 more Satoshis to say,
hey, don't get distracted by the shiny distros while planning for that 32-bit challenge.
Gentoo should still be on your to-do list.
Keeping us honest.
I love that.
Gene, how is Gentoo not the shiniest of the 32-bit distros?
Gentoo is the shiniest of them.
Have you looked at the distros that are available for 32-bit Linux?
It's bleak, Gene.
It's bleak.
I think Gentoo is top of the list, if you ask me.
Complete Noobs comes in with a row of ducks.
A little Xmas sales tip-off.
The Razer Core X allows you to use a desktop GPU on your laptop as long as it's got Thunderbolt 3 or better.
Keep an eye on Razer.com to buy direct.
Last year, I got an 80% discount.
Expect discounts again between now and the new year.
And the best part?
If you're using Linux, you can pass through the GPU to a Windows VM. Tried and tested on my ThinkPad T470.
And then there's a little link we'll include over to Complete Noobs' blog, so they've got some deets on that.
If we can find it, we should find the episode we had a couple of years ago with Alex, where
we get into that, and we talk about eGPUs, and we talk about dedicating a VM to use that
eGPU.
I still have mine.
I don't find it to be the most practical solution for my laptop,
so I don't tend to use it much.
But that is a really good tip for those that aren't aware.
If you've got Thunderbolt on a Linux machine,
you could be going around low-key all day with Intel graphics or whatever, embedded AMD or whatever.
Well, probably not if you're going to have Thunderbolt.
You'll plug it in, though.
Boom, you can have a big old PCI Express graphics card.
And it may not be 16x speed, but it'll probably be, I don't know, 4x, whatever it is.
It'll totally crush compute.
It'll be great for that.
And honestly, I've used it for gaming, too.
It's been totes fine.
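For anyone curious how the "dedicate a VM to the eGPU" trick is usually wired up, here's a sketch of the standard VFIO ingredients on NixOS. The PCI IDs are placeholders for your own GPU's vendor:device pair (find yours with lspci -nn), and this is the general approach, not the exact setup from that old episode:

```nix
# configuration.nix (fragment) -- the usual VFIO passthrough ingredients,
# sketched for an Intel box. The IDs below are placeholders.
{
  boot.kernelParams = [ "intel_iommu=on" "iommu=pt" ];
  boot.initrd.kernelModules = [ "vfio_pci" "vfio" "vfio_iommu_type1" ];

  # Bind the eGPU (and its audio function) to vfio-pci at boot,
  # so the host never claims it and the VM gets it whole.
  boot.extraModprobeConfig = ''
    options vfio-pci ids=10de:1c82,10de:0fb9
  '';

  virtualisation.libvirtd.enable = true;   # run the Windows VM under libvirt
}
```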
A lot of impacts came in with another row of ducks.
Simply saying, will it scale? Oh, I get it. Thank you. Thank you for the duckies to help us get down to Scale. Nice little haul this week, everybody. We are trying to stack up to 8 million sats, and this week we had 22 boosters, and we added 600,540 sats to that milestone, which is definitely going to help. Thank you, everybody who boosted in.
We try to get to every message above the 2,000 sat cutoff. If you would like to boost,
there's so many ways to do it, but I think the top ways are with a new podcast app,
because then you get, like, 90-second notifications once the new episode's been out, and you get live stream support, which we'll be talking about. And that might be really great: if we get down to Scale and we can do a show, I want to do it lit in a new podcast app.
There's a lot of new features in there and you also get the boost support.
But if you're not ready to switch apps, and I understand, I think you should consider
the Strike app.
It's available in 36 countries.
It's on the Lightning Network.
And then you can just get some sats.
You can send them to Alby.
You can even just boost now directly from the Fountain FM website and scan the QR code. You go to getalby.com, put some sats in there and
boost from the podcast index using the web. It's a little bit of a journey because you are going
into a brand new frontier. But what we're trying to do here is we're trying to keep podcasting as
independent as possible. And when you boost, there's no company between you,
your value, and the value you receive. It's just a peer-to-peer network. It's all using open source software. And I just think it's a good experiment. And if we can find a way to
use free software and an open network to fund open content, I think that's a model that is
reproducible and there's value in that. And I think there's value in keeping podcasting independent and thriving.
And so I appreciate everybody that takes the time to walk that hill,
to get those sats, to get that set up,
because then once you're boosting, it's a lot of fun
and we just love getting your messages.
And it also creates sort of spontaneous moments for us in the show.
And a big shout out to everybody who streams those sats as they listen.
We see you and we appreciate you.
Please do keep boosting in. We'll be reading them.
Now we are going into the holiday schedule.
So if you'd like to boost in and give us updates on your 32-bit challenge, please do because when we come back on the 7th, we will be reading those.
We have one more regular episode that we'll be covering the boost.
And then we're off into our holiday specials and back on the 7th.
So you can stack those sats while we're gone.
Send us your 32-bit results and more.
I think it's going to be great.
I think it's going to be – you guys will really love the holiday specials this year.
So stand by.
All that will be coming up in your feeds.
You don't got to do a thing.
Okay, Wes Payne.
You can only pick one.
Which is our app pick this week?
Two very good choices, one brought by Brent, one brought by you, so I know there's some bias there. But, all told, I gotta go with Brent.
Oh, I didn't expect it. All right, we'll save the other one for the future. But this is a really great-looking app. Tell us about Marker, Brent. I found this one coming across our shelf, and I figured it fit in with our theme this week. Marker is a neat little application that
converts PDFs, EPUBs, and MOBI files to Markdown. It's about 10 times faster than Nougat, which
I haven't used, but it is more accurate on most documents and has low hallucination
risk. So you just
heard it. It's got some AI
stuff going on in the background to help you
convert these things. Yeah, if you've got
a big old fancy GPU, it'll use that to
crunch this, but it'll also work on your CPU.
Nougat is a neural optical
understanding for academic documents,
but it's like a PDF parser.
So yeah, it sounds like in the same place as
Marker. These are some really cool
tools where I feel like you can set
aside all the hype around AI.
This is like the practical, what-you-can-use-it-for
stuff today on your own machine
without any cloud services.
It's not replacing you at work, it's
just making you a little
better to work with PDFs, which no one
likes. Right. Like, if I can interrogate a large language model about a particular topic for 10 minutes and
get more informed, you know, you got to double check it right now, but you can see the direction
that's going. Or if I can use something like marker and take a proprietary PDF or something
that's locked up in an EPUB and use some sort of AI model to OCR that, that's a win and that's a
legitimate use for these tools.
So I think that was our real goal with this episode,
is to take things like speech,
to take things like the large language models,
and to take things like text recognition
and figure out the areas where it's really useful today
and you can do it on Linux.
Marker seems like it could be another great,
maybe there already are, I don't know,
but, like, another great addition is an Obsidian or Logseq plugin, you know?
Don't even store the PDF, just store the converted Markdown.
That's really all I want, so I can search it. Obviously, if it's accurate enough, et cetera, et cetera.
But the potential.
They're all there if you're willing to check them right now.
I think all these tools are there if you're willing to check them
for occasional hallucinations.
But I don't think it'd be a surprise at all by the end of 2024
when we're
in December to start saying these tools are like 95% there.
We're at the 10%.
We'll see how long it takes.
We just don't know yet.
I got to figure there's more tools that we've missed too.
So if you know of any, do boost them in.
Also, your holiday wishes, if you want to send those in, and your 2024 predictions.
We also want to get those in.
That's like my favorite time of year.
It's going to be a wacky year.
You just know it.
Yeah.
And then last but not least, Tuxies.Party.
You got a couple of days left to vote if you haven't yet.
Now, our next live stream will actually be on Saturday.
Saturday, and we're doing it at 11 a.m. Pacific, we decided.
Yeah, Saturday the 16th at 11 a.m. Pacific time.
So if you want to join us for the last live show of the year, that's when to do it.
Our live vlog will be open.
Our mumble room will be there if you'd like to get in there.
If you haven't yet.
It's always a little bit weird when we shift schedules.
So we really appreciate anyone who can make it.
Yeah, if you normally can't make it on a Sunday but you could on a Saturday, you need to show up.
Because most people won't.
Because we change it.
And when you change it, all bets are off.
It really is funny.
It is funny how that works.
Links to everything we talked about today, that'll be at linuxunplugged.com slash 540. Of course, we'll have links to the tools we talked about there, and our subscribe link, our contact form, all of that, the RSS, that's all there. And we just always appreciate if you get an opportunity to share the show with somebody. Word of mouth is the number one way to spread podcasts. When you think about how long these things are, who's going to listen?
Unless a trusted friend tells them it's worth it.
So we appreciate that.
Now that does wrap it up for this week's episode.
Love to hear from you. And of course,
as always, appreciate you listening.
Thanks so much for joining us on this week's episode
of Your Linux Unplugged
Program. I swear I'm not a robot.
I swear. Not yet. Soon. Soon.
Maybe one, maybe
in a year or two we'll have an episode that's totally
done by AI and we're taking the week off.
If the show quality suddenly rapidly
improves, that's how you'll know.
Alright, alright.
Let's get out of here. Thanks so much for joining us on this
week's episode. We'll see you right back here
next Tuesday.
As in Sunday. Or Saturday.
Saturday. Yeah. Thank you.