LINUX Unplugged - 616: From Boston to bootc
Episode Date: May 25, 2025

Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise from open source's biggest event of the year.

Sponsored By:
- Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
- 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged

Links:
- 💥 Gets Sats Quick and Easy with Strike
- 📻 LINUX Unplugged on Fountain.FM
- LINUX Unplugged TUI Challenge Rules — Help shape the challenge - what did we miss?
- TUI Challenge Rules Discussion Thread
- Red Hat Summit 2025 Homepage — May 19-22, 2025 in Boston, MA
- Red Hat Summit 2025: Execs Tout Opportunities In Open Source AI, Virtualization Migration — OpenShift Virtualization has seen almost triple the number of customers, with the number of clusters deployed in production more than doubling and the number of virtual machines managed by the offering more than tripling.
- Agentic AI, LLMs and standards big focus of Red Hat Summit
- Red Hat Summit: Key Innovations for IT Channel Partners
- Unlock what's next: Microsoft at Red Hat Summit 2025 — Red Hat Enterprise Linux (RHEL) is now available for use with Windows Subsystem for Linux (WSL).
- Red Hat Launches the llm-d Community, Powering Distributed Gen AI Inference at Scale — Red Hat's vision: Any model, any accelerator, any cloud.
- Red Hat Introduces Red Hat Enterprise Linux 10 with Supercharged Intelligence and Security Across Hybrid Environments — Red Hat Enterprise Linux 10 delivers a paradigm shift in enterprise operating systems with image mode.
- 10.0 Release Notes | Red Hat Enterprise Linux | 10 | Red Hat Documentation
- Red Hat Enterprise Linux 10 Officially Released, Here's What's New — Highlights include Red Hat Enterprise Linux Lightspeed for integrating generative AI directly within the platform to provide users with context-aware guidance and actionable recommendations through a natural language interface.
- RHEL 10: Leading the future with AI, security and hybrid cloud
- Red Hat Enterprise Linux 10 Reaches GA
- SiFive Collaborates with Red Hat to Support Red Hat Enterprise Linux for RISC-V — The developer preview of Red Hat Enterprise Linux 10 is initially available for use on the SiFive HiFive Premier P550 platform.
- Red Hat AI on Hugging Face
- FIPS 203/204/205 — These standards specify key establishment and digital signature schemes that are designed to resist future attacks by quantum computers, which threaten the security of current standards.
- Virtualization success stories: Join Red Hat OpenShift Virtualization's momentum in 2025
- llm-d — llm-d is a Kubernetes-native high-performance distributed LLM inference framework
- What is vLLM? — vLLM is an inference server that speeds up the output of generative AI applications by making better use of the GPU memory.
- Image mode for Red Hat Enterprise Linux — Image mode leverages the bootc tool to build and deploy Red Hat Enterprise Linux. Bootc stands for bootable container, and the image will include the kernel, bootloader, and other items typically excluded from application containers.
- Image mode for Red Hat Enterprise Linux Overview
- Introducing Fedora Project Leader Jef Spaleta
- Bluefin — Featuring automatic image-based updates and a simple graphical application store, Bluefin is designed to get out of your way. Get what you want without sacrificing system stability.
- KongrooParadox's nixfiles — This was my second NixOS release since getting into Nix last year (February I think), and this strategy made it really painless. No surprises about deprecated options, since I saw these changes slowly as they hit unstable.
- yazi — Blazing fast terminal file manager written in Rust, based on async I/O
- jira-cli — Feature-rich interactive Jira command line
- browser-use — Make websites accessible for AI agents
- Thomato's TUI Resources
- Pick: RamaLama — Make working with AI boring through the use of OCI containers.
- ramalama on GitHub — RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
Transcript
I think we've officially come full circle.
We are recording in the master bedroom of an Airbnb.
You know, we went around, did the scientific testing,
and determined acoustically,
this was the best location to record the show.
We don't wanna get any lectures from Drew.
No, no.
And thankfully, I don't think we had to tear apart
any beds for this one, but it's funny because the studio
where we record is actually my former master bedroom
converted into a studio.
I do think we'll have to let Brent tear apart a bed
after this just to get that energy out
because he was ready to go.
I was planning ahead and we're only using one mattress.
["Skyfall 2"]
Hello friends and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent.
Hello gentlemen.
Well coming up on the show today we're reporting from Red Hat Summit and we're going to bring
you the signal from the noise and why this summit has me flirting with something new
plus your boost, a great pick and more.
So before we go any further,
let's say good morning to our friends at Tailscale.
Tailscale.com slash unplugged.
They are the easiest way to connect your devices
and services to each other wherever they are.
And when you go to Tailscale.com slash unplugged,
you not only support the show,
but you get 100 devices for free,
three user accounts, no credit card required.
Tailscale is modern, secure mesh networking protected by WireGuard.
Easy to deploy, zero-config, no-fuss VPN, and the personal plan stays free.
I started using it, and it totally changed the way I do networking. Everything is essentially local for me now.
Tailscale has bridged multiple different complex networks
I'm talking stuff behind carrier grade NAT, VPSs,
VMs that run at the studio, my laptop, my mobile devices,
all on one flat mesh network.
It works so darn good that now we use it
for the backend of Jupiter Broadcasting server
communication as well.
So do thousands of other companies like Instacart,
Hugging Face, Duolingo, they've all switched to Tailscale.
So you can use enterprise grade mesh networking for yourself or your business.
Try it for free. Go to tailscale.com slash unplugged.
That is tailscale.com slash unplugged.
Well, we are here in our Airbnb.
Yeah, why are we doing housekeeping at someone else's Airbnb?
I know. These Airbnbs, they just get more and more out of you every
single time.
But maybe it's because we brought our own mess.
Well, actually, it's not so much of a mess.
It's actually going really well.
People are getting excited about our terminal user interface challenge.
We are still looking for feedback on the rules.
We have it up on our GitHub.
We've seen some already good engagement, though, people talking about it in the matrix room.
So we're getting really close to launching it
when we get back.
It's the final week before we launch essentially.
We're gonna get back in the studio next Sunday.
We're gonna sort of set up the final parameters
of the challenge, give you one week,
and then the following episode's gonna actually launch.
Get ready to uninstall Wayland.
Yeah, take the TUI challenge with us.
There's a lot there.
It's looking like it's gonna be a lot of fun,
and we're gonna learn about a bunch of new apps I never knew about.
We have a lot of work to do because the listeners, they're way ahead.
Also a call out to the completionists. We're doing this for a couple of episodes. We know
a lot of you listen to the back catalog, listening in the past, if you will, and then you're catching
up. We recently heard from somebody that was about 15 episodes behind and it got us thinking,
how many of you out there are listening in the past? So when you hear this, boost in and tell us where you're
at in the back catalog and what the date is.
Until those darn scientists finish up. This is the closest thing we have to time travel,
okay?
And then one last call out for feedback. This episode, I'm getting into why I am switching
off of NixOS. And this isn't a negative thing about NixOS, but I thought I'd collect some
information if you tried it and bounced off of NixOS. Boost in and tell me why. I'll be sharing
my story later, but also if you're sticking with NixOS. I'd be curious to know what it is about it
that's absolutely mandatory that you wouldn't give up. Boost that in as well, or linuxunplugged.com
slash contact. We'll have more information about that down the road because really it's just
ancillary to what this episode is all about and that's Red Hat Summit.
So we were flown out to cover Red Hat Summit as we have done for the past few years and the ones
where there's a Red Hat Enterprise release are always really the most exciting and Red Hat Summit
2025 here in Boston at the Boston Convention and Exhibition
Center ran May 19th through the 22nd. And they did something a little different
this year. They decided to make what they really referred to as Day Zero Community Day.
So this was a track that was sort of run adjacent to Red Hat Summit in the past, and is now a dedicated entire day.
And I thought I'd go check it out.
Welcome to Community Day at Red Hat Summit Day 1.
And it's all about, you guessed it, artificial intelligence.
Well, okay, and Linux.
But they made a pretty good call.
They said, hey, Red Hat is working to set the open standards for Red Hat and for data
and for models.
And here at Summit, you can interact with us directly and inform how we participate
in those.
So sort of like get involved in AI through Red Hat, a call to action, as well as just
general information about today's event.
You knew right from the beginning, okay, it's going to be another year where we focus on AI
quite a bit. But this was kind of a different call. It was: there's a lot of impact still to be
made for open source AI, and as a company, Red Hat's really making a push.
So why don't you get on board with our open source initiatives and inform the conversation there.
We'll push the wider industry based on your feedback.
I mean, I do think that's a trend we see play out
over and over, both between, you know,
Red Hat interfacing with the industry,
but also really leveraging and in many cases,
sometimes being driven by what's available
and what's happening in the open source side
because they really have the skill sets
to turn that into an enterprise product.
So the better the open source side gets, the better their product gets.
I wasn't really sure what the focus would be this year.
I mean, I knew RHEL 10 was coming, but last year was really focused on local AIs.
Could you do two years of summit on AI?
This was Brent's first Red Hat Summit.
Which is hard to believe, really.
And we wanted to capture his first impressions sort of right there after he'd
had a chance to walk around on what they're calling day zero.
Well, this is my first time here at Red Hat Summit.
And I got to say, you guys warned me about the scale of this thing.
Wow.
Just the infrastructure and the number of booths and the number of people and like
how organized it is to get everybody all here and doing the things they're supposed to be doing.
I am a little overwhelmed by just the size.
I bet you they spend more on hotel rooms than I probably make in five years.
I don't know, maybe more.
Oh gosh.
I did have a nice hotel room. But even just the layout, like how everything's so
close, you don't have to go very far, you don't have to travel, everybody knows sort of
where to go. There's people just standing around helping with wayfinding, like it's super
present.
The vibe's a bit different. At LinuxFest they weren't
even doing registration, they weren't counting attendees,
there wasn't anything like that. Here you have to get your badge scanned to enter
every area and every room,
and the security is definitely a higher presence.
What impression does that leave on you?
Well, I guess there are relationships
being built here that are very different than the relationships
being built at other conferences, right?
Like we saw some negotiation booths.
Didn't see those at LinuxFest.
So it's a bit of a different feel.
But there's some real stuff happening here.
So real connections being made
Day two should be even more interesting.
Really? It feels like maybe things are just kind of slow-rolling today.
Did you get that impression, that it's just sort of not quite started yet?
It seems like people are still arriving and warming up to the whole situation, getting the lay of the land.
So I'm excited to see tomorrow. That's when all the exciting stuff happens.
Yeah, just wait.
You get to wake up real early
for a bright and early keynote, Brent.
I forgot about how we have a time zone disadvantage.
So day zero, if you will,
was sort of the ideal day to go see the Expo Hall.
These Expo Halls are just quite the spectacle.
I mean, the crews that come in
and set these up
in an amazing amount of time,
they also have all of this racking they do for the lighting.
I learned it took them two days to put all that together.
And apparently that was like quite a miracle.
Yeah, I mean, these booths are structures
with like areas inside them and massive displays
and LED lighting embedded everywhere.
These are your highest of the high end display booth type stuff.
I mean this is really nice stuff and we wanted to see it before it got too crowded.
Well, you can't do day one without doing the expo hall, and it is an expo hall.
Let me tell you, it's a whole other scale than, well, LinuxFest Northwest or SCALE.
Lots of production, lots of money, lots of lighting.
And right now we're standing out front of the Dev Zone,
which seems to be one of the more popular areas,
and in particular the Dev Zone Theater.
And what do they seem to be going over, Wes?
Yeah, they're talking about the marriage of GitOps
and Red Hat Enterprise Linux image mode.
And despite us just being in a packed talk, I think
there might be more people trying to watch this here on the expo hall floor. I
think there's a lot of excitement around image mode and the things you're going
to be able to do, or can already do. We're tying it to existing, you know,
declarative workflows with patterns that developers like, that now can meet the infrastructure.
There does seem to be a real hunger for it here. Like, it's standing room only right
now, and they're doing a live presentation too, so there's a screen and
everybody's trying to see it, but there's so many people in the way. Like, we're
here in the back, we can barely see the screen. So Brent, what do you think of
this expo hall compared to other experiences you've had? It is very large. I gotta say it's very well spaced, like
you can see a ton. It's not like these little cubes. Many expo halls just feel
closed in. This is open and breathy, and there's tons of people, but it doesn't feel squished
together, and it's bright and, I don't know, innovative. You know, it does feel
squishy, this floor. Why is it like this? So we're in the, like, dev room, cloudy space? And no,
we're at the app services spread. Oh, see, I'm confused.
But the flooring, they've added extra cush. It's like cloud, very cloudy.
Feels good on the tired feet.
Now something that you caught in there was image mode and there was buzz on the
expo hall floor about image mode.
But RHEL 10 hadn't actually been announced yet.
So we hadn't officially heard the news about image mode.
But staff were walking around and literally asking,
have you heard any leaks about RHEL 10?
You heard anything?
Because there's some things going around.
And then we were like, and what would,
I can't remember what we said.
And it was something like, why don't you tell us
what the leak is and we'll tell you if we heard it.
So there was some anticipation around day two and the keynote, because that's where we expected to get the official news of RHEL 10.
And Matt Hicks, the CEO of Red Hat, kicked things off.
Welcome to Red Hat Summit 2025.
This is our favorite week of the year,
and it's great to have so many customers and partners
here with us in Boston.
There's so much to learn this week,
and we hope that each of you can come away
with a new insight to improve your business,
yourself, and hopefully strengthen one of the things
that brings many of us here, open source.
He had an analogy pretty quickly after that, where we all three looked at each other in
the dimly lit keynote room and we're like, what?
So I wanted to play it again for us so we could actually have a conversation about it.
This isn't about replacing your expertise.
This is about amplifying it.
I recently had to explain this tension
to my 10-year-old son who loves basketball.
This is how I explained it to him.
Imagine a new sports drink comes out,
and when you drink it, every shot you take goes in.
Would it still be a competition when a middle schooler could shoot better than Steph Curry?
But I don't think that is necessarily true.
Strength still matters.
Just getting the ball to the rim from half court is no easy feat.
Defense still matters.
Your shot can be blocked.
Your shot can be blocked.
Speed still matters.
You have to get open just to take a shot.
So, yes, a sports drink like
this would drastically affect
one aspect of the game.
Accuracy. But how can we possibly understand the impact on a game just by removing one factor
when there are so many others in regards to height, speed, endurance, athleticism, strength?
A change like that would fundamentally change the world of basketball that my son knows and loves.
It would change who could be great at the game.
It would change the focus of the game.
It might change the rules of the game,
but it would not eliminate the game.
I believe we would take these factors,
we would shape them into a new game,
and given just the inherent
creativity in people, that new game would be better.
Right now, that's exactly where we are with AI.
We're in the moment of uncertainty between games, between worlds. We have to simultaneously understand
that while the fundamentals that we know are changing,
maybe beyond the point of recognition,
there are so many other factors that come into play
in terms of creating true business value.
So there's a couple of things that jumped out at me
during the keynote when he said that.
And I think the first one was,
this is again, the CEO of Red Hat.
And I think he just gave us an analogy
to what they view AI as,
as this almost magic sports drink
that means that if they can get everything else to line up,
all the other supporting players in the game to line up,
then they have this solution that's gonna let them get nothing but net.
That is, that's essentially like a makeup company saying they have discovered the fountain of youth,
and they're gonna bottle it, right? I mean, that is the biggest of the biggest statements.
So I'm just starting there, before we even get into the other aspect of the analogy,
what are your impressions of that?
Well, I think it's a big deal. Like, AI is all uncertainty currently,
but this statement feels like: we know exactly the direction we want to go in,
we are already working towards it, and it's already doing things for us. And there's still a lot of vision here from a company that otherwise
didn't work on AI, right, until recently. And it does seem like they have a lot of the supporting
products in place to realize this idea that he put out there. And we can get into some of that later,
but they have several product pieces that sit on top of RHEL that are trying to enable this
vendor-neutral, accelerator-neutral, backend-neutral AI system that's local or in the cloud.
Also, when it's in the cloud,
it's completely vendor-neutral from Oracle to Azure,
or you can run it on your own infrastructure
and pick your backend models.
They're trying to put all the supporting players in place,
but to me, it still feels like a real wild analogy.
See, I think I see it more as trying to acknowledge the fears of folks around AI and the uncertainty,
but making a pitch on the sort of human enablement side, right?
Like kind of talking to the people who have to work with their products and administer
them and saying like, we think this will make you more effective in that goal.
And then to your point, on the other side, they're then working to make sure that their
technology is ready to meet that and interface with whatever AI power-up you are able to
get.
The way I interpreted this, re-listening: I think the look we gave each other live was
like, what, how does this, what's this trying to say?
And re-listening to it here,
I got a little confused because at first he set it up as, like, open source is great.
Here's the basketball, you know, sports drink thing that gives you superpowers.
And I thought, okay, open source is the sports drink and it allows all sorts of
new things to happen and all sorts of new technologies to flourish because you've
solved that problem in a way that is collaborative, etc. etc.
And then he's quickly shifted to the AI piece, which almost reflects for me the trajectory
of Red Hat.
Yeah, they very much came to the point of saying we see the path that AI is on right
now as a similar path that
open source was on and Linux was on 10 to 20 years ago. While this might feel
new for many of us, this isn't the first time we've experienced this in software.
In fact, when open source emerged, there were a lot of people that felt the same way about it.
Open source challenged how software created value, even what competition meant.
It removed barriers that defined proprietary software and it even added a new factor around collaboration being
critical for success. And in that challenge, it was feared, resisted,
ridiculed, attacked. And yet, last year, there were over 5 billion contributions made to open source software.
Despite the fear, despite the attacks, despite the disruption, open source still changed
the world of software.
I felt that potential in my first experience with open source. It captured
my imagination along with millions of others. It defined my career along with millions of
others. Where others saw fear or disruption, I saw potential along with millions of others.
That is exactly what we're experiencing with AI right now.
The world that many of us know is open source
and software and IT.
We have shaped this world over decades
and now the rules are changing.
And while that can be scary and that can be disruptive, if we take a step back, the potential
is also undeniable.
I would be really interested in the audience's thoughts on the parallels and analogies that
Matt was drawing here.
Boost in with your thoughts: if you agree, if you strongly disagree, I'd really like
to hear that as well. But I think the news we were sitting there waiting for
was actually RHEL 10 and so Matt steps off the stage for the first time and we
get into the news!
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani.
Everywhere you turn, the world is running on Linux.
Tens of millions of people trust Linux to power their critical infrastructure. And trillions of dollars a day are dependent on Linux.
For more than 20 years, Red Hat Enterprise Linux, or RHEL,
has been the trusted platform for organizations around the world. It is the heart of Red Hat's portfolio
and the foundation of our core technologies.
But Linux is often managed the same way it was
10 or 15 years ago.
Today, we're changing that.
We're giving Linux admins new superpowers that allow them to wait less
and do more. That's why I am so excited to announce
RHEL 10.
This is the most impactful, most innovative release we've had in a long time.
And image mode is one of those reasons.
We'll get to that in a moment, but there was another announcement up on stage that I wanted
to include too, and that was something they're calling llm-d.
Reasoning models produce far more tokens as they think.
So just as Red Hat pioneered the open enterprise by transforming Linux into the bedrock of
modern IT, we're now poised to architect the future of AI inference.
Red Hat's answer to this challenge is llm-d, a new open source project we've just launched today.
llm-d's vision is to amplify the power of vLLM, to transcend single-server
limitations and enable distributed inference at scale for production. Using
the orchestration prowess of Kubernetes, llm-d integrates advanced inference
capabilities into existing enterprise IT fabrics. We're bringing distributed
inference to vLLM, where the output tokens generated from a single inference
request can now be generated by multiple accelerators across the
entire cluster. So congratulations to all of you. You came here to learn about the future of Linux
and now you know what disaggregated prefill decode for autoregressive transformers is.
It's actually a really significant contribution. So you could think of it as you submit a job to
an LLM and then this system
sort of sorts out the best back-end execution based on resources, the type of job, the accelerator
you might need. So it's taking something that is a real single pipeline and breaking it
up with all of this back-end flexibility. Here's how they describe it in the GitHub
README: llm-d is a Kubernetes-native distributed
inference serving stack.
A well-lit path for anyone to serve large language models
at scale with the fastest time to value
and competitive performance per dollar
for most models across most hardware accelerators.
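To make that routing idea concrete, here's a deliberately tiny toy sketch of the concept. This is not llm-d's actual API; the backend names, token limits, and costs are all made up for illustration.

```python
# Toy illustration of the scheduling idea behind llm-d: route each
# inference request to the cheapest backend that can handle it.
# Backend names, limits, and costs here are hypothetical.
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    max_context: int    # largest request (in tokens) this backend accepts
    cost_per_1k: float  # relative cost per 1k generated tokens


LOCAL = Backend("local-ollama", max_context=8_192, cost_per_1k=0.0)
CLOUD = Backend("cloud-gpu-pool", max_context=128_000, cost_per_1k=0.5)


def route(prompt_tokens: int) -> Backend:
    """Pick the cheapest backend that can fit the request."""
    candidates = [b for b in (LOCAL, CLOUD) if prompt_tokens <= b.max_context]
    return min(candidates, key=lambda b: b.cost_per_1k)


print(route(2_000).name)   # a small job stays on local hardware
print(route(50_000).name)  # a big job spills to the cloud pool
```

The real project layers much more on top of this, spreading a single request's prefill and decode across accelerators in a Kubernetes cluster, but cost-and-capacity routing like the above is the basic shape of the decision it automates.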
So bringing that home and what it actually means
in a practical sense for like a small business like myself,
it would be, maybe we have a few jobs
that run on Ollama locally on our LAN hardware, but every now and then we have a big job and we want to
execute that out on cloud infrastructure, and this can help us do all of that and, you know, the
orchestration of it. So it's actually a pretty significant contribution, and it works with vLLM,
which we'll talk about more later. Or now? No, no, I was just going to say it is a big contribution,
and you know Red Hat's playing a huge part,
but they also list right here folks like CoreWeave,
Google, IBM Research, of course, as well as Nvidia.
Yeah, yeah.
And there's been some news about AMD's
interest and involvement as well.
And the Nvidia involvement is particularly interesting
to me because this doesn't serve Nvidia
in selling more hardware.
This project actually enables people to distribute workloads to other things that are not Nvidia
hardware that are cheaper things when not needed.
And so it's pretty interesting to see Nvidia actually engage in this process.
I get why AMD is, but it's interesting to see Nvidia engaged even though it kind of
in a way eats away at their hardware moat.
And I think it's exactly things like that that are maybe drawing some of the parallels
to the Linux evolution that we've been talking about.
Yeah, and so the behind the scenes conversations
I had with Red Hat staff is essentially,
this is where the users are.
Nvidia is doing this because their customers
are asking them to, just like their customers
asked them to support Linux years ago.
So yeah, that's the parallel there.
So it was a long keynote, I'm not gonna lie.
It was two hours, and what we just shared with you were some of the highlights,
but there are also moments where, you know, they're trying to address multiple audiences.
You have your technical people there, you have your sales people there, you have your
chief technology officers there. And so in one keynote, they're trying to speak to all of these
different diverse audiences that just don't really get the same messaging. And so you'd often have guests come up on stage that kind of essentially say roughly the same
thing and it gets really business jargon heavy because you're speaking to that audience. So we
sat there for a while listening to a lot of that and then also interspersed with like these really
interesting technical moments. We just stepped out of the keynote. This was the big keynote.
There will be a keynote every day, but this was the big one. It was a two-hour chonker. And we got Red Hat
Enterprise 10, which was pretty great. And Image Mode was a big part of that. There were essentially
four key things that they listed that they're really excited about in RHEL 10. And I think Image
Mode is what they led with, and it was probably the one that stuck with me the most. They talked
about how vendors like Visa want to be able to update their
infrastructure as if it was a smartphone and just flip a switch and they've got
the new updates and it'll streamline updating security.
And I'm actually here for it.
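For a rough idea of what that flip-a-switch workflow looks like, here's a hedged sketch of an image mode build. The base image path, tag, and package choices below are illustrative assumptions, not details from the summit: you build the OS like any container image.

```dockerfile
# Hypothetical image mode build; base image path and tag are assumptions.
FROM registry.redhat.io/rhel10/rhel-bootc:latest

# Layer packages and configuration into the OS image,
# just like an application container build.
RUN dnf -y install nginx && dnf clean all
COPY nginx.conf /etc/nginx/nginx.conf
RUN systemctl enable nginx
```

Push that to a registry, and a running image-mode host can be repointed at it with something like `bootc switch registry.example.com/acme-rhel:10` (hypothetical image name); the update then applies atomically on the next boot, which is the smartphone-style update model they were pitching.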
I hope it makes RHEL a little more maintainable for shops that are deploying
it, but of course for year two in a row, the big topic was artificial intelligence.
And AI was really baked into everything.
And I'm just curious, Brent, as a first-timer,
what your impression of all of the AI talk was.
Because, I mean, you just can't prepare a guy for this much AI talk.
The scale of the AI.
I did notice that they basically took each of their products and added AI on
the end of it, which I didn't expect and nobody really addressed that. But they're just sort
of spreading the AI throughout. I think maybe that's more of a strategic plan to, I don't
know, be part of the future. But I'm curious how that dilutes the current products or where
they're headed with it.
It is a lot of brands now to keep track of.
And like I said, we're in year two of this.
And I'm not 100% convinced that all of the people watching in that room actually have
the needs they're addressing up on stage.
I think some people do, you know, airlines and Visa, I think they do.
But I'm not sure everyone in that room was really feeling the urgent pressure to deploy AI to get a return on investment or total cost of ownership lowered for
whatever they might have. And that's not to say that Red Hat doesn't seem to have found a more
refined focus for their AI implementation. I think year two of this AI focus is actually a
lot more practical. It's about shrinking the size of some of these models.
It does seem like they've found a few areas
that they can bring some of the special Red Hat sauce to.
Yeah, you know, okay, I think you're right
that there are definitely questions around
is there this low-hanging fruit of like,
you gotta meet this AI need, AI can do it today,
you just have to figure out how to deploy it.
Yes, for some; for everywhere, maybe an open question. But I do think you have to give Red Hat credit.
Like, if you are solving that problem, they have a lot of nice things
in the works, like you were saying, right? Like quantized and optimized models that you
can just get from Hugging Face, or via catalog in your Red Hat integrated products. They've
also been talking a lot about vLLM and turning that, via the new llm-d,
into a distributed solution.
So now you can do inference that isn't just
running from a single process.
It's doing inference across your whole cluster of GPUs.
And we saw folks today from Intel and AMD and, of course,
Nvidia.
But it's nice to see that, whether or not you're really
using it in your business today, if you were to in the future, you would at least have real
options, not only between different models but also different accelerators, as they put
it.
That distributed model stuff you were talking about that was an opportunity for them to
bring Google up on stage and the comment was Google was our partner in crime in creating
this.
So really leaning into partners. Microsoft Azure got a mention up on stage too.
So they're trying to present themselves as a vendor neutral AI solution.
And when I say trying to present, I think they are doing it.
They're doing it successfully.
So if someone out there is in this market, I mean Red Hat is killing it.
But for me, as somebody who's looking at the more practicals,
RHEL 10 is it, right?
You get improved security, you get image mode,
and the other thing that they talked about,
almost as if it was new, is virtualization.
RHEL 10 is clearly making a pitch to shops
that wanna migrate off VMware.
Did you catch this too?
Oh yeah, I mean, the whole product offering
and really the rise of OpenShift virtualization,
you know, it's not necessarily that new, and things like KubeVirt have been around for
a while to let you run VMs as containers.
They didn't quite come out and say the word Broadcom, but you got the feeling, you
could tell, there were stories around, like, oh, a year ago we really needed to modernize
or look into our virtualization spend.
And last year, there was a lot of talk about the potentials,
I think, at Summit, right?
And folks were talking about OpenShift being well positioned,
and this year was a bit of a,
let's show you all the successful customers now deploying,
who have migrated or are in the process
of successfully migrating to OpenShift
and a Kubernetes-based virtualization platform.
And we even saw a variant of OpenShift announced
that is basically just OpenShift tailored for only running VMs. So that's
full circle. The Emirates bank was up on stage; I think they mentioned they had
something like 9,500 to 9,800 virtual machines running under OpenShift
Virtualization. And they also announced that OpenShift Virtualization is
available on all major cloud platforms, including Azure and Oracle. Wow.
So when you think you've got a solution that works on premises and something you can
easily offload to the cloud, it actually kind of left me feeling like,
we need to play around with OpenShift Virtualization and just kind of wrap our
heads around it.
Just give me your Oracle API key. We'll get started.
And I wasn't kidding either.
I really felt like they made a good pitch for RHEL 10
and the OpenShift virtualization platform.
I think it's something we are going to experiment more with
and get more hands-on experience.
It was actually a good solid product.
We got a hands-on demo for the press
that they went through the dashboard
and it looked just as easy to use as Proxmox
or, if anybody's familiar with the later iterations of VMware ESX and things like that, it really sort of met those expectations as far as management and the dashboard went.
It looked good.
Yeah, you can tell it works really well if you have, you know, an existing sort of containers workflow in OpenShift and you want to add virtualization. But now they're even targeting it for folks that maybe haven't yet tried out OpenShift but are looking for a virtualization solution, and you can
get yourself an OpenShift cluster pretty much just tailored to run virtualization.
And then, you know, maybe later you expand out into containers too.
So there was something that really got my attention, and I am thrilled to see Red Hat
pushing further down this path.
And you see it also becoming really popular with Fedora Silverblue; you see it with Bluefin and Bazzite and the uBlue universe of operating systems.
It's using images to manage and deploy your infrastructure to get
immutability, and image mode is something that Red Hat is focused on. They're
taking bootc and they're bringing it even further, and we had an opportunity
to sit down with the product manager of Image Mode for RHEL,
and we got all the inside deets.
Well, I'm standing here with Ben
and he's the product manager for Image Mode for RHEL
and I asked him to try to give us the elevator pitch
of what Image Mode is.
Yeah, well it's a great question.
So, okay, we know containers, right?
We've been building containers for applications
for a decade now.
All the same ways that you build containers and manage them,
we now can do that for full operating systems.
So we're going to change one important detail, right?
We all know a Docker container,
it's going to share the kernel, right?
Well, these base images that we use for this,
they're now bootable containers.
So the kernel is going to live
and be versioned in that container, right?
And so now we're going to take that, we're going to write it to metal, we're going to
write it to the VM or Cloud Instance, whatever, and now that server is going to update from
the container registry.
So now all of your container build pipelines, whatever automation you're using for testing
verification, now you can do that for operating systems.
So it's really the same tooling, tool set, language, same everything for your applications you can now use for your operating system. So
the world we're living in is complicated enough, it's only getting
more complicated. So anything we can do to simplify and reuse and just just get
people to value faster is the way to do it. And that's what that's what you get
with Image Mode for RHEL. That does sound very nice. So how are you booting an image?
Is bootc involved here?
Yes, bootc is the core of the technology;
it stands for boot container.
It's the magic that kind of closes the gap
between the tarball that your container image is
and the system.
It gives you an A/B boot feel to the system, right?
So when you update, you stage the next one in the background, and you can reboot and now you're in the new one,
right? So bootc is the core of this, and the core command line when you need to update the image or
switch to a different one or reprovision the system. So yeah, and bootc went into the CNCF.
It's a sandbox project now; we're working on getting incubator status. So yeah, that's the core.
My recollection is that we got bootc at the last Summit.
bootc was announced.
So has this been kind of in the works since that announcement?
Yeah, exactly.
So we did a big announcement last year.
Since then, we've been working with a lot of customers on getting them to production,
right?
We just had one mentioned in the keynote.
We had another one speaking yesterday.
I don't know if I can say names on this, so I'm going to leave it out.
But it was great. We have another one speaking later today, and then one of the
hyperscalers is demoing it right now. So yeah, I would say just the traction we're seeing has
been awesome. So it definitely feels like that fit, where it's the right tech at the right time for people
to be using it.
Yeah, I'm curious.
I felt like when we kind of heard stuff last year,
it was co-announced, or at least sort of pitched a bit,
as being motivated specifically by problems
around AI workloads.
You know, here's this new mode of operations
we think would be a really good fit.
But I'm curious, last year we heard a lot of sort of like,
OK, we're starting there, but we think
the applicability is a lot broader,
and I'm wondering if that's kind of
bearing out in customer adoption.
Yeah, it's way broader.
I think, I almost look at it like this: the image flow
is very general purpose, right,
and is where you can get to quite quickly.
So yes, it's still very relevant for AI.
RHEL AI actually ships as a bootc image, right,
and we run it that way.
I would say one of the big values there
is any time you're connecting a complicated stack, right,
I'm versioning a kernel, kernel modules,
different frameworks, libraries,
where it's a Jenga stack, right,
which a lot of AI looks like these days.
Building with containers solves
a huge amount of versioning problems.
We want to get people out of the state
where I DNF update
a package and oh, now my storage doesn't work
because there's a lag over there and blah, blah, blah.
Like, no, if the build fails, it'll never hit your server.
Right?
So this is when you use containers that just becomes
so easy, right?
Again, it's about going back to simplifying all the complexity we have, and getting to value
is the whole thing, right?
I'm just curious, what does it look like for folks maybe
who have never tried image mode but have experienced
regular RHEL deployments?
How do you get started with a new system that's
full on image ready?
Great question.
So there's different paths.
Depends on your environment.
So the answer may change a little bit,
depending on what your needs are.
But in general, I think Podman Desktop
is probably the easiest tool.
It's no cost, it runs on any platform.
So if you're working on Mac or Windows,
we'd love to upgrade you to RHEL,
but you know, we get it, right?
So you can put this on, there's a bootc extension,
you can build containers, you can convert them to images,
you can boot them as a VM, all from Podman Desktop.
It's amazing.
I use that today.
Now, I immediately then switch to versioning everything
in Git, and I have GitHub Actions do everything.
So my good buddy Matt here, and some other colleagues
put together templates for all the big CI/CD systems.
So if you want to just get started with,
say you do GitHub Actions, GitLab CI,
Jenkins, Tekton, Ansible, you get the idea.
It's infrastructure agnostic, right, is the whole thing.
We got all the templates; clone the one you need, it's so easy.
So we kind of have a good path if you want to work locally
or if you want to work in like a Git model.
Those are the two paths I would steer you towards.
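As a sketch of that Git-based path (the workflow shape is standard GitHub Actions; the image name and registry secret below are illustrative assumptions, not Red Hat's actual templates), a minimal pipeline that builds and pushes a bootable container image might look like:

```yaml
# Hypothetical workflow sketch -- the real Red Hat CI templates may differ.
name: build-bootc-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the OS image from the repo's Containerfile
      - run: podman build -t quay.io/example/my-os:latest .
      # Push to the registry that your servers will update from
      - run: podman push --creds "${{ secrets.REGISTRY_CREDS }}" quay.io/example/my-os:latest
```

The point Ben makes holds regardless of the CI system: the operating system becomes just another image artifact moving through whatever pipeline you already run.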
Given bootc and image mode are relatively new, what are the challenges coming up that
your team is going to be working on?
Well, we've got a big roadmap. We're adding more security capabilities. There's multiple
ways to answer your question, but let me talk about security, right? Because this is forward-looking stuff here.
Where we have all the pieces,
and we're working on stitching them together,
because what we want to do is: the way you sign applications
with, like, cosign for your container image,
we can take that same basic key and
actually inject it into firmware,
if it's UEFI, or inject it into the cloud image, right?
And then from there, we can have a post-process step
on your container that makes a UKI, a unified kernel image,
right, that is signed, and we get full measured boot.
And then the root FS of that container,
that digest, is in the UKI as well.
So if your root file system gets modified at all, you'll know.
It's the holy grail security story,
that tamper-proof OS that we've been chasing.
So bootc gives us all the things we need
to stitch that together in Linux and make it easy.
Because today this stuff is possible,
but you have to be like,
there's like five people on Earth that can do it today,
right, and I want like me to be able to do it, right?
Like, and so, and we're pretty close.
So, my goal, again, these are forward-looking
statements, so all the usual caveats. But I hope next year at Summit, that's what we're
talking about. And everyone is like, wow.
That'd be great. I'd love to catch up at next Summit and see how it went. Thanks, Ben.
I'm particularly interested in Red Hat adopting this further because it brings a lot of what
I like about NixOS and
what I like about Bluefin and Bazzite, but it brings it to the enterprise operating system
and it could solve so many problems.
And you guys know I've talked about this, but the other reason why I kind of like this
approach that they're doing is while it is a top-down system, it is leaning into workflows
that people already understand.
They're already deploying containers. They're already using GitHub Actions or whatever they're
using locally. There's a thousand, tens of thousands of DevOps engineers out there that
could start deploying their own custom bespoke Linux systems. And this is why I got into Gentoo
back in the days, because I needed very bespoke custom systems.
And there was no tooling around this.
There was nothing.
I didn't really have a lot of options.
So I went with Gentoo a hundred years ago
to build these really bespoke custom systems
that then I would manage and orchestrate
from like this crazy scripting thing that I had set up.
But this brings this to everybody
using systems that are maintainable
with Red Hat's backing and their whole CYA
when it comes to certifications, licensing, compliance.
I mean, it just makes me think of other ecosystems here. Think about, like, setting up a debootstrap
system, you know, for Debian, going from the base up, trying to
get that going. And then, you know, for an RPM style, it's going to be different. And
for an Arch system, it's pacstrap or whatever. There's all these different things. And then in this new world, you just change what
base image you pull from. And it's just so much simpler.
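That "just change the base image" idea can be sketched in a short Containerfile. quay.io/fedora/fedora-bootc is a published Fedora bootable base image, while the package choices here are only illustrative assumptions:

```Dockerfile
# Sketch of an image-mode OS definition; adjust names for your environment.
# Rebasing the whole OS is just swapping this FROM line, for example to a
# RHEL image-mode base if you're entitled to one.
FROM quay.io/fedora/fedora-bootc:41

# Layer your bespoke system on top, same as any application container build.
RUN dnf -y install nginx && dnf clean all
RUN systemctl enable nginx
```

Everything after the FROM line is ordinary container tooling, which is exactly the appeal compared to distro-specific bootstrap tools.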
As somebody who used to really, really get frustrated managing systems where your
only options were RPMs, and maybe an RPM repo that got you what you need, this is just
such a huge shift. And it was nice to be able to pick Ben's brain.
One question I ended up having in all of this is: how old are these new packages? Like,
RHEL 10 just came out, but, you know, in enterprise, things are slightly more glacial than, let's
say, NixOS, which we visited last week. So what are we looking at here, boys? Like, what
does RHEL 10 actually have under the hood?
Well, I believe it was branched off from Fedora 41.
I think during the beta maybe there was a 6.11 kernel,
but it's shipping with Linux kernel 6.12,
and then I believe GNOME 47.
We also got DNF 5 in Fedora 41, which is probably a big change.
When you look back at the Fedora releases, you can see: oh, Red Hat was trying to get
this PipeWire milestone in.
Red Hat was trying to get this DNF milestone in, because ultimately that became RHEL.
And sometimes you see these things get packed into a Fedora release for that reason.
And DNF 5 is great.
So, you know, for the parts where you're maybe not doing it with image mode, that will be killer.
And also, bootc initially shipped in Fedora 41.
So there you go.
See, to me, it's like, if you like Fedora 41,
well, now you get that in RHEL.
It's basically Fedora 41 LTS, which is kind of appealing.
You get GNOME 47 or KDE 6.2.
I had just a quick thought here on image mode,
if it sees wider deployment:
one small benefit of the approach,
maybe it's a big benefit,
is the A/B-style updates and rollbacks that this really easily enables.
I was just thinking, we've seen recent issues,
big problems with Windows deployments in the enterprise, where maybe
something like a quick, easy boot undo,
a boot-into-the-last-version rollback, would
have saved just billions of dollars of agony. And we know, right, RHEL is deployed
at or above the scale of Windows in these types of back-end enterprise applications.
So this could be huge.
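On a host installed from a bootable container image, that rollback story maps onto a few bootc subcommands. These subcommands exist in current bootc, though exact output and behavior depend on your deployment; treat this as an illustrative session rather than a recipe:

```shell
# Pull and stage the newest image into the inactive slot, then reboot into it.
sudo bootc upgrade
sudo systemctl reboot

# If the new deployment misbehaves, queue the previous one and reboot back.
sudo bootc rollback
sudo systemctl reboot

# Inspect the booted, staged, and rollback deployments.
sudo bootc status
```

Because the old deployment stays on disk until it's garbage-collected, the "boot into the last version" escape hatch is always one command away.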
And I think so. I think it's so monumental that it's making me seriously
consider the Red Hat ecosystem for what I do, for what we do. So yeah, we'll get into it.
1Password.com slash unplugged.
Now imagine your company's security, kind of like the quad of a college campus.
Okay, you've got these nice, ideal, designed paths between the buildings.
That's your company owned devices and applications.
IT has managed all of it and curated it. Even your employee identities. And then you
have these other paths. These are the ones people actually use. The ones that
are worn through the grass. And actually, if we're honest with ourselves, they are
the straightest line from point A to point B. Those are your unmanaged
devices. Your shadow IT apps, your
non-employee identities like me, a contractor. I used to come in and be one of those. I was
always shocked because they're not designed to work with the grass path. They're designed
to work with the company approved paths. That's how these systems were built back in the day.
And the reality is, a lot of security problems take place on the shortcuts, the paths users
have created. That's where 1Password Extended Access Management comes in.
It's the first security solution that brings all these unmanaged devices, apps, and identities
under your control.
It ensures that every user credential is strong and protected.
Every device is known and healthy, and every app is visible.
The truth is, 1Password Extended Access Management just solves the problems traditional
IAMs and MDMs weren't built to touch.
It is security for the way we actually work today, and it's generally available for companies that
have Okta, Microsoft Entra, and it's in beta for Google Workspace customers as well. You know what
a difference good password hygiene made in a company? Now imagine zooming out and applying that
to the entire organization with 1Password's award-winning recipe. 1Password is the way to
go. Secure every app, every device, and every identity,
even those unmanaged ones. Go to 1Password.com slash
unplugged, that's all lowercase.
It's the number 1, Password.com slash unplugged.
Now, as if we hadn't had enough with two days of interesting stuff,
there was a third day with a brand new keynote.
Well, here we go.
It's day three.
We're walking to the keynote right now.
I don't know what to expect
because all the big announcements like RHEL 10
and things like that were announced yesterday.
So I'm kind of going in blank, not sure what to expect.
We'll find out together.
One thing they came back around to during the keynote
on day three was the security enhancements
in Red Hat Enterprise Linux.
And there is one
particular area they really focused on.
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani.
RHEL 10. RHEL 10 is the biggest leap forward in Linux in over a decade.
And we didn't just get here accidentally.
Two decades of server innovations: virtualization, containers, public clouds. And at each and every
stage, RHEL has been the enterprise Linux standard.
And now, the AI era is here.
And around the world, there are uncertainties.
But in a world of uncertainties, one thing is certain.
RHEL. Yeah, that's good. That sells it.
Now there was, of course, the just general positioning of RHEL, right?
It's an AI-first distribution, but also it is a post-quantum encryption distribution.
That's a mouthful.
We've talked a little bit about post-quantum cryptography.
Let's go into that in some more detail.
Can you tell us about the impact of quantum computing, which I'm sure the audience is
really interested in, and why we need to prepare for a post-quantum future?
Sure.
So, in the not-so-distant future, quantum
computers will be more readily available, and they'll be leveraged by bad actors
to break today's encryption technologies. When that happens, sensitive data will
no longer be considered safe. But organizations like NIST and the IETF
are already working on draft requirements and standards of what will be needed
in a post-quantum world.
And Red Hat is ahead of the game here.
We are leaders in post-quantum security,
and we've been working on those requirements
to meet post-quantum cryptographic challenges
for some time now.
Because we know that we need to help our customers
protect their data against future attacks
and fulfill future regulatory requirements.
RHEL 10 has the libraries, tools, and toolchains ready,
so you can rely on us when you're ready to transition
and start into a post-quantum world.
This is obviously early days, right?
You hear the wording there,
when you're ready to start transitioning
to a post-quantum world. These standards are very early, obviously.
Yeah, I mean, we don't really even have the kind of quantum computers to really sort of
test these fully out.
So some very smart people have done some very clever math and devised so far our best takes
on how we might defend against this.
And Red Hat's there if you want to, you know, try to get ahead of the game.
There's two things here. So number one is they're kind of pegging to the standard. So
as the standard evolves, they will likely evolve their support for it, right? So that's
the beachhead here. The second thing is, you have to realize,
I mean, I know you guys do, but you've got to just think: it sometimes takes 10 years for
these enterprise distributions to really work their way out into the ecosystem.
And so this could be a problem 10, 20 years from now.
And if you start in RHEL 10, well, by the time people are running RHEL 13, hopefully it's baked in and it's working.
The other thing that occurred to me yesterday is you have to think about the information that you're storing today and that might get cracked, let's say, in the future by Quantum stuff just because
it's sitting on disk.
Yeah.
So getting in early, I guess, is the name of the game in this case.
Mm-hmm.
Yeah.
You know, and not trying to trivialize it, but there is also, I think, real value sometimes
in just having one more checkbox, one that may get added to security questionnaires that
become standard in the coming years,
that is true out of the box. You're good to go; you can say Linux covers this.
It's not just something Microsoft is doing, or Oracle, or whoever it might be.
Yeah, there's a supported Linux platform you can use that will be first class in that ecosystem. Now, day three.
We wanted to just knock a couple of things off, because we're at Red Hat Summit,
and so we had access to folks that you just normally wouldn't have access to in person. We wanted to chat with the outgoing Fedora Project
Leader and the incoming Fedora Project Leader, because both Matthew, as he likes to be called, and Jeff were at Summit. And so we went
to the community section, found the Fedora booth, and got these guys to sit
down. Well, I have two quite important folks here. Gentlemen, can you introduce yourselves?
I'm Matthew Miller.
I am the current Fedora Project leader
for about two more weeks.
Two weeks, and you?
I'm Jef Spaleta.
I will be the Fedora Project leader in about two weeks.
So I see you guys are hanging around together.
Is there like a transitional period
that you're spending together for this transition?
Yeah, basically. Jeff started at Red Hat two weeks ago, and now we're trying to not scare him away,
but maybe not doing a great job of it. I don't know. How's that going?
Yeah, I basically am looking at this as: I am Matthew's shadow man, as it were,
a callback to some previous branding.
But yeah, I'm here for the last couple of weeks
of the fire hose of like, just Red Hat onboarding,
and this week it's, I'm trying to meet as many stakeholders
that would like to leverage Fedora
to get some innovation done.
And instead of opining myself, I'm really in a mode where I'm taking in
information from as many people as I can, and part of that is getting as much headspace
mapping from Matthew as I can. Yeah, like literally just taking his brain and trying
to shove it into mine before Flock, when the actual handover happens.
Is being here at the summit the first time you spend time together in person?
Well, no, we go back many years.
I've hung out with Jeff before.
Jeff was active in the Fedora Project at the beginning of time, as I was, and then he went off to
do real jobs and stuff.
And Jeff, I was going to say, why Fedora?
But it sounds like you've been involved for a long time.
Yeah, I was there, you know, for the first, I mean,
eight years of the project. I mean, I was there
before it was Fedora Linux, when it was
fedora.us, as a contributor. So I was an external
contributor through the first critical period
when the project was being spun up,
and then I took one of those paths
less traveled situations in life,
I went to Alaska to study the aurora, and then eventually got to the point where I was off
the grid for several weeks at a time doing research and I just couldn't contribute anymore.
And so I had to step away from the project, which is actually pretty interesting because
I have like the deep project knowledge, like the foundations.
I understand what the project's supposed to be.
But I've also stepped away, and after being an academic,
I've done three different startups, three different sizes.
Like I did a thing, a small startup
with a telemetry project, actually a wearable project,
for a couple years.
I then worked for a company as a dev rel
for doing monitoring, Sensu, they no longer exist,
they were acquired,
and then I worked for Isovalent, and they got acquired,
and so it's really interesting,
I was getting ready to move back east from Alaska
to follow my wife, who's got a job in Virginia,
and it just so happened and lined up
when Matthew announced that he was stepping down,
so it was, it's like the stars aligned, right?
So I come back east, basically pick up my life that I left
when I went to Alaska.
And it's like I'm right back where I started
and like back into Fedora now this time as the project lead.
It seems almost meant to be.
Did you get nominated by this gentleman
or how did that process work?
We had a lot of really good candidates
and it was a super, super hard decision. And in the end, we agreed the stars
aligned here for this to be the best. Very nice. Matt, why the decision to change things?
Well, it will have been 11 years as Fedora Project Leader when we do the handover at the
beginning of June there. So that's a long time.
And honestly, I love it, and I really could keep doing it,
but I think it's good for the project to have someone else
kind of looking over things, and it's good for me to find something else to do, although I'm not gonna go very far.
I'm actually gonna still be in the same group at Red Hat that does the Fedora Linux community things.
Does this just mean you get to play on things that are maybe less planned or you get to
just kind of spend your time somewhere that you would like to?
Well, I think planned is pretty ambitious for anything I've invented.
But the first thing I'm going to do is sleep for a week.
And then I'm actually going to be a manager in there,
because I actually don't have any experience
as being a full-time people manager.
And I thought I'd see how that goes
and see how that broadens my view
into working in the open source world.
And we'll see where we go from there.
And then, gentlemen, is there a mentorship process that's going on
here?
I know you said you're spending two weeks together, but is there anything more formal
or less formal?
Yeah, so that's also, I think, a lot of times, I mean, it's been 10 years, we don't really
have a process for FPL transition. A lot of times it's been kind of being thrown
into the deep end. Robyn Bergeron, my predecessor, helped me a lot, but was also very ready to be done
with the job at the time.
So I did a lot of making things up as I was going along, and I think Jeff will get to
do a lot of that as well, but I want to make sure I'm going to be there so I can share
my thoughts on things without trying to, you know, I don't want to be one of those, I'm pulling the puppet
strings behind the scenes kind of thing.
I'd be very respectful of the new role, but I also want to make sure that I'm accessible
because I do have a lot of knowledge about things. Jeff keeps asking me: did
you make slides for this?
Did you write this down?
No, I have not, but I can tell you all about it.
So we'll try and get that transferred
in a formal way rather than just,
oh yeah, I should tell you this.
Nice, and Jeff, what are you looking forward to
when you get your feet dirty here?
Well, I guess, like I said,
I don't want to opine too much just yet, but initially what I'm really looking forward to is getting a
sense of the health of the project because I think Fedora is now at that
time where it's now a generational project. And as I tell people who meet me,
if you remember my name and you're still involved in the project, you're maybe a risk.
You may be an institutional bus factor or what's the better way of saying that?
Champagne factor or desert island factor.
We talk about llama farming.
So I am concerned that people who are doing it
for the full length of the project,
they probably have institutional knowledge
that we don't have a process to change over.
And we may be relying on them too much
to do what I consider hero work.
And I want to find that, I want to get a sense
of where that is so we can have an appropriate process
to mentor new contributors in.
So that's my first thing, not technology,
just get a sense of the health of the project.
Because even though it is very stable in terms of output now,
which was not what it was when I was working on it,
and everyone says, yes, it's a rock solid deliverable,
I want to get a sense of where the contributors are at
and where the creaky bits are, right?
So we're not burning out some people
to make sure that that deliverable's happening.
I mean, as I tell people this week,
my mental model for this job is
I'm the president of a weird university, right?
Like this job to me is, I'm not doing the work,
like the people in the community or the faculty
and the students doing the work in the university,
but Red Hat is sort of like the equivalent
of the state legislature, like they are investing
in funding and so I have to bridge that.
And so it's important for me to get face time
with as many Red Hat stakeholders as I can
so that I can build bridges and make sure that
the community ethos and the process by which
technology works its way through from Fedora up
is something that they're getting the best value out of
without disrupting the community, right?
Because it's, like I said, like the university model
in my head, every time I say it, I'm like,
this is the right model for this job because it's like, state legislatures and
faculty are not on the same page all the time, and that's where the president of a university
basically sits, and that's what it feels like.
Well, Matthew, Jeff, thank you so much for joining us, and come on LINUX Unplugged
anytime.
It's always nice to talk to you, and yeah, I'd be happy to talk more.
Even when I'm out of the role, I'll probably have more spare time for just, you know, sitting around
pontificating about things. So that'll be fun.
Sounds good. And Jeff, thanks for joining us and we'll surely hear from you in the future.
Yeah, absolutely. Thanks for having me on.
Yeah, Matthew, that invitation stands: the mumble room is open all the time, come pontificate with us anytime.
I'm also really glad we made that connection.
I think it's gonna be interesting to have Jeff on the show
after he's got some time under his belt
at the helm of the Fedora project.
I know you boys are looking forward to that too.
He just has such perspective,
if you think about all the time put in.
Yeah, yeah, really.
I mean, it's pretty neat to have somebody
originally connected with the project,
took some time away to really get some perspective, and come back. I like his model of a university; that's an interesting thought model, at least going in. It'll be fascinating to follow up with him and find out if that played out for him.
I think the next few years should be fun for the Fedora folks. So on our last day, you know, you have to knock out the fun stuff, like seeing our buddies at the Fedora booth. And they had this machine that they were teasing. I had to try it. It's called the AI Wish Machine.
Okay, so we have a little experience here. Wes, do you want to explain what we're about to do?
Yeah, it's the one thing I think so far at Summit that there's been a lot of hype around.
We saw it advertised at the keynote on stage, and Chris has yet to try it. It's the
spectacular AI Wish Machine. The magic promise: your AI wish, granted. Your wish,
Chris, is its consideration. Chris, what are your expectations here? I mean, it was
featured in today's keynote. Well, it was before the keynote started, you know, like
when you go to the movie theater and they have advertisements up on the
screen. This was up on the screen; it's something you've got to try.
So I got a lot of questions, you know I've seen a lot
of things here at Summit, so I assume this is going to
kind of connect a few dots for me, and if nothing else,
give me some advice on how perhaps OpenShift could help
revolutionize the JB infrastructure and really drive
innovation and lower total cost of ownership.
So that's what I expect it's going to tell me.
You know the other thing, we've been to summits before.
In particular, last year, there was some pretty cool AI-powered
stuff, like walls and visualizations and changing
your photo kind of thing.
Could be something like that, maybe.
So should we go over?
So I attempted it earlier, of course, but everybody wanted their token,
because after you complete the vending machine experience, the AI Wish Machine dispenses a token, and
everybody loves their little swag. Okay, Chris, you've stepped up to the machine
here, the AI Wish Machine. What's your first impression? It's popular. Two
different people cut in front of us to use this thing. People
apparently have questions. So the first thing I gotta do is scan my badge to make an AI wish.
I'm gonna go ahead and do that.
Is it scanning?
I don't think it's scanning.
Try scanning harder.
I didn't see other people struggle with this.
Why is it not working?
I got my badge in the hole.
What is it?
There we go.
Right, is it doing it now?
Yes.
Okay.
Hello, human.
Hello, human.
She's rolling something.
Scan your badge. Nope, it didn't get it.
Okay, you may now make your AI wish. Okay: I wish to be rich.
I know, you have to actually choose from these options. I wish to train models without compromising my private data.
I wish to build and deploy my AI wherever I need it. I wish to easily scale my AI across my company.
I wish to use my preferred AI models and hardware.
Well, clearly I wish to... none of these.
I'm gonna, uh, well, I'm just gonna scale it across my company, because it's the last thing I wanna do, so I'm just gonna pick that one.
Easily scale your AI across your company, okay? That's what I wished.
And the AI says, with some slow frames: I tried, but you'll need to insert a gazillion dollars.
What? Why is AI hustling me for money?
Processing your wish.
Why is the framework like 15 frames per second?
If your AI solution won't work with you,
it won't work for you.
When you need your AI to scale on your terms,
yeah, you need Red Hat.
Thanks for playing.
That's it.
Grab your pen and then visit the booth to talk to a Red Hatter.
Well, where's my pen?
Oh.
OK, let's get this.
Oh, it's a Red Hat with AI sparkles.
OK.
Well, Chris, come over here.
I'm so excited to learn how your experience was.
I'm not sure what was answered.
I think that just told me to go to a booth and I got a pin.
I like pins, I guess.
But how was your AI experience?
Bad, man.
That wasn't really the best experience, but one thing that was kind of low-key
talked about at the keynote
that I think you picked up on, Wes,
as maybe going to have larger
implications down the road is
Red Hat seems to be embracing
MCP at all the different levels.
Yeah, definitely. This was something we had
on our little buzzword bingo chart
going into the summit. Not sure if we'd see it
or not, because it's relatively new, even in just the broader AI universe.
It's the model context protocol and it's a standard that came out of Anthropic for sort
of letting the AI systems interface with the rest of the world.
As you've heard, we believe that openness leads to flexibility and flexibility leads to choice with AI.
And to ensure that, it's critical that we have industry-wide standards that all companies can build around.
Now, as we discussed yesterday, MCP or Model Context Protocol is one of those core standards that's just poised to take off.
Now the letter P, protocol, is really important in this case.
Vint Cerf, the godfather of the internet,
describes protocol as a clearly delineated line
that allows for independent innovation
on either side of that line,
what he calls permissionless innovation, allowing anyone to experiment and
innovate, no approvals required. This is what we're striving for at Red Hat.
I like that messaging. I'm gonna be curious to see what their actual rollout is.
It does sound like they're working on the back end
to sort of have MCP implementations for a lot of Red
Hat products and services, right?
So if you want to be able to interface these things
from a chat bot or hook it into other agentic AI systems,
Red Hat will be ready.
You could see maybe a practical use case of this
is somewhere where you could review your system resources,
utilization, disk usage, things like that
from a single interface.
So you log into a dashboard,
hey, what is the status of the web servers?
And the system just comes back
with a whole sheet of information.
And even maybe down to like, you know,
applications that are installed
and their usage and things like that.
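To make that use case concrete, here is a minimal sketch of the shape MCP tool calls take; the `disk_usage` tool, the registry, and the dispatch function here are hypothetical illustrations for this conversation, not anything Red Hat has shipped. MCP frames these calls as JSON-RPC 2.0 `tools/call` requests:

```python
import json
import shutil

# Hypothetical MCP-style tool registry: each tool is a named function the
# AI host can call with structured arguments (MCP itself carries these
# calls as JSON-RPC 2.0 messages over stdio or HTTP).
TOOLS = {
    "disk_usage": lambda args: dict(
        zip(("total", "used", "free"), shutil.disk_usage(args.get("path", "/")))
    ),
}

def handle_tool_call(request: str) -> str:
    """Dispatch a JSON-RPC 'tools/call' request to the matching tool."""
    req = json.loads(request)
    name = req["params"]["name"]
    args = req["params"].get("arguments", {})
    result = TOOLS[name](args)
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Example: the AI-driven dashboard asks how full the root filesystem is.
reply = handle_tool_call(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "disk_usage", "arguments": {"path": "/"}},
}))
print(reply)
```

The point of the protocol is exactly this separation: the chatbot side only needs to know the tool's name and argument schema, not how the server gathers the data.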
And you could also then, they talked about
hooking it up into the event-driven side
of the Ansible Automation Platform, right? So from your AI-driven interface, whatever that may be,
you can go trigger an event that's going to go restart that server that the AI showed you was
malfunctioning. And, you know, the question I have is: is this something that is of interest
to the RHEL base? I mean, I'm not trying to typecast, but it seems like they're traditionally a
pretty conservative user base. Is this something people are pushing for?
And I was trying to get a sense of that at the keynote
or after the keynote, so as people were leaving.
And I also would like to get a sense of that
from the audience, because this is an area
they're clearly pushing on for two years straight.
And I think everyone maybe at this point has seen,
you know, AI shoved into interfaces in a poor way
and also in an actually helpful way.
And so there's, you know, there's always the question of:
does this actually make you more efficient in your tasks, or is it
a new way to do the same thing? I think regardless of how you break it down, though,
it's nice to see a large, well positioned,
well known brand in the space
really working hard to bring something that is not vendor locked.
You know, like, I like a lot of the different solutions that are out there, but it's like
you're all in on the OpenAI ecosystem, or you're all in on Anthropic.
I was also impressed.
I don't know how you guys feel about this, but just, you know, every company is talking
about AI.
It feels like at least if you're even vaguely associated with tech these days, but talking
with some of the folks in a few different places around the summit,
it seemed like Red Hat is very credible on AI.
I mean, they have a lot of people who are legitimate actors
in various open source AI communities working there,
working with them, like they know what they're doing.
They also, to me, felt very well informed
and very well connected with other businesses
who are leading the way.
Yeah, I mean, we saw Nvidia up there, AMD, Intel,
you know, generally people that are competing
all collaborating together on this stuff.
And of course it's always fun for us to run in
with old friends of the show,
and Carl was there at the community booth.
All right, Carl, what do you got for me right here?
I got a little pocket meat, a little bit of beef jerky
and some beef and pork dried sausage.
Get a little pocket meat on the expo floor.
Thanks, Carl.
I had that pocket meat twice.
I got to go to that pocket meat source twice
while we were there.
This is now like conference tradition for us.
If we go to a conference and don't find
Carl's special meat, then I think
we're just going to feel like we left out.
We do have to be careful though, because at some point,
the event organizers might get keyed off
that Carl is competing with the catering.
Well, if you'd like to support the show, we sure would
appreciate the support, and you can become a member at linuxunplugged.com slash membership. You get access to the ad-free
version of the show, or the bootleg, which I'm very proud
of. I think the bootleg is a whole other show in itself. And
so you get more content, stuff that didn't fit in the focus show.
And you also get to hang out with your boys as we're getting set up.
And then you get all the post show stuff where we sort out all of the things.
But you can also support us with a boost.
And that's a great way to directly support it in a particular episode or production.
Fountain FM makes this the easiest because they've connected with ways
to get sats directly, but there's a whole self-hosted infrastructure as well.
You can get started at podcast apps dot com.
I mentioned Fountain because it gets you in and it gets you boosting
and supporting the show that way pretty quickly.
So the two avenues: linuxunplugged.com slash membership, or the boost.
Or if you have a product you think you want to put in front of the world's
best and largest Linux audience, hit me up.
Chris at Jupiter broadcasting dot com.
There's always a possibility that we might just be
the audience you're looking to reach.
That's chris at jupiterbroadcasting.com.
Well, I felt a little bit of a reality shift going to this.
Whoa.
I did see you sweating a bit in your seat.
That must explain it.
Well, we've been talking a lot about this behind the scenes.
And I have made the decision to switch my systems to Bluefin.
And the reason being is I'm going to, behind the scenes, start playing with image mode.
I'm going to start in Podman Desktop, and I'm going to start building my systems in image mode.
And then we're also going to start deploying some RHEL 10-based systems and some OpenShift Virtualization systems here, just for us to learn and experience.
And I like a lot of what image mode is going to bring to RHEL and what's already kind of there with Bluefin.
And that is immutability, delivered in this image way that is accessible to all kinds of administrators and DevOps people.
Where I think Nix is extremely powerful, and especially I like the building-up-from-the-ground approach,
we've clearly seen a lot of people bounce off of it.
So I want to try to jump into this mainstream
that's going in a direction that I like anyways.
The rest of the world is kind of leaning
into these immutable systems.
And I think there's a lot of value
in learning a cloud native workflow outside of NixOS.
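For a sense of what that cloud native workflow looks like, a bootc-style image-mode system is defined in a Containerfile; the base image tag and packages below are illustrative assumptions, not a recommended configuration:

```dockerfile
# Hypothetical image-mode Containerfile: the entire OS is declared here,
# built with podman, and machines boot directly from the resulting image.
FROM quay.io/fedora/fedora-bootc:42

# Bake your tools into every deployment as an image layer.
RUN dnf -y install htop tmux && dnf clean all
```

Roughly speaking, you build it with `podman build`, push it to a registry, and point a machine at it with `bootc switch`; from then on, updating the OS is pulling a new image with `bootc upgrade`.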
Chris, this feels like it's such a massive shift for you.
Why, why now?
Because it's like getting in on the ground floor
of the image-based workflow
at, like, this scale. I mean, I feel... Will you stop if I just promise never to
alias nano to Vim again? I mean, I might bounce off it, but I really want to give it a go. I've already
got Bluefin downloaded and installed on one of my systems. This is because you never figured out
how to write a Nix function, isn't it? Right. I know. It's the flakes, man. The flakes drove
me away. No, it's the idea of getting a lot of what I get with NixOS,
but with, and you're going to hate it when I say this,
but a standard file system layout.
Oh.
I know.
I'm sorry.
I'm sorry.
I'm sorry.
This is why you wouldn't use Logsie,
because you just want to mark this up.
But we have heard a lot of audience members
say they really like these, I don't know,
quote unquote, modern ways of deploying Linux
and Bluefin has been the choice that I've seen
float to the surface.
Yeah, and I think it's my starting point.
You know, it's my, I'm gonna give this a go,
I'm gonna test drive it, I'm gonna ride this out
before I make the switch and then at the same time,
I'm gonna be playing around with Podman desktop,
seeing if I can build systems and what it's like to do that
and then compare and contrast and move over, and at the same time also experiment with some of
the OpenShift Virtualization stuff, because I think that's really big. That standalone
OpenShift Virtualization platform is going to be a contender. Or, it is a contender.
I have a question about how long you're going to commit to this path?
Well, unless I drop off a cliff, I guess indefinitely.
I don't know, I don't really have any timeline on it
because I think it really depends
on how the whole experiment goes.
I've already started.
We've, you know, when we tried Blufin last time
and I've played around with Bazite on my own,
I've always really liked their general initial approach,
but I always thought it would be a little bit better
if I could just take and shift it a little bit
and make it more specific to a podcasting workflow, because I'm not a developer, I'm a podcaster.
It makes me wonder about, like, some sort of challenge, maybe not official, but, like, you
know: what are some things that you are used to doing, or like doing, on your current
Nix-based systems, and how can we see what it's like for you to try to port some of those?
Well, I thought I'd start with the TUI challenge. I was going to try to have it all, my main workstation, everything, ready to go for the TUI challenge, because I've got to install
a bunch of TUI apps.
You know, I do like this because then if you publish maybe, you know, the container files
you're using, then I can bootstrap them.
I see how it is.
Chris, are you looking for advice from the audience?
Well, from folks who've maybe gone down this path.
I guess so. I am curious about people that are running this as their daily driver, these image-based immutables.
Your Silverblues and your Bluefins
and your Universal Blues.
We need your atomic habits.
Ah!
Or people that bounced off of Nix, and why,
or people that tried and couldn't.
I mean, I'm curious too about
the people that tried to switch away from Nix and it failed.
Because it seems like that could end up being me
if I don't know what I'm doing.
So I'm a little nervous about that,
especially because we're traveling and all of that.
But I'm willing to give it a go.
I'm feeling adventurous.
Okay, so, like, after the show, we pour one out and then we rm -rf?
I think that's it.
And now it is time for the boost.
Well, we did get some boosts.
It's a slightly shorter week because we recorded early, but that doesn't mean people didn't support us.
And Nostromo came in with our baller boost, which is a lovely 50,000 sats.
And he says: here's to some better sat stats.
Thank you, Nostromo.
You are helping bring that stat up all on your own right there.
My favorite type of self-fulfilling prophecy.
That's right, that's right.
Appreciate the boost.
Kongroo Paradox comes in with 34,567 sats.
Not bad.
I think it's so.
Just upgraded my Nix machines to 25.05.
Yeah, we should mention 25.05 is out.
Congrats to the folks involved.
Officially out.
I run unstable on my main laptop,
which is an M2 Air running NixOS Apple Silicon.
I run the stable release on most of my home lab,
and maintain the options for the two inputs in my flake.
This was my second NixOS release
since getting into Nix last year
and the strategy made it really painless.
No surprises about deprecated options
since I saw these cases slowly when these changes hit unstable.
What is your approach to NixOS releases?
Hmm, good question.
Thank you, Kongroo.
What do you do, Wes?
I mean, you're kind of on a flake-based system,
so you're probably not really paying too much attention
to, like, channel changes and updates.
I do think this can be a nice way to do it.
If you, you know, you can do sort of test upgrades
either on other systems where you do want to be on unstable
and see the sort of the overlap between your two
configurations or just do test builds on stable
with whatever existing configuration you have.
And yeah, if you think there might be cases
where you do need specific versions,
or you're more sensitive to version changes,
then pre-plumb your flake with nixpkgs versions
ready to go with those;
once you've got the boilerplate done,
then you can more freely mix and match.
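The two-input setup described here can be sketched in a flake like this; the hostnames, module paths, and the exact stable/unstable split are assumptions for illustration, though the channel URLs follow the standard nixpkgs branch naming:

```nix
{
  inputs = {
    # Stable for the homelab boxes, unstable for the laptop; both pinned.
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
    nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, nixpkgs-unstable, ... }: {
    nixosConfigurations = {
      # Hypothetical hosts: pick the input per machine.
      homelab = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./homelab.nix ];
      };
      laptop = nixpkgs-unstable.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./laptop.nix ];
      };
    };
  };
}
```

Because both inputs are pinned in the same lock file, deprecation warnings show up on the unstable machine first, before the stable machines ever see the change.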
Remme, are you more or less likely
to upgrade to the next release
once the previous release is no longer supported?
In other words, are you gonna wait?
I usually wait like about a month, I would say.
But then I'm all in.
Yeah.
So I like to give it a little bit of a transition period
and then just dive right in.
All right.
We will add a link here to Kongroo's Nix files, too,
for those who are curious
or maybe wanna emulate the approach.
Thank you for sharing that.
I like that.
Thank you for the boost too.
We've got a boost here.
23,000 sats from Grunerl.
Just in case nobody has already told you, it's called Dawarich.
Dawarich.
Which is German for: I was there.
Dawarich.
So not Big D Witch.
Oh.
We did redeploy a final iteration for ourselves and it's been
pretty fun tracking everywhere we've gone all around Boston and whatnot. Been doing
some tourism.
Yeah. It's actually to the point where Chris is kind of trying to choose his itinerary
based on, you know, getting fun new routes, and then, to your earlier point, he's been doing, like,
route art. That's really impressive.
I like to draw on the map. Thank you for the boost. Appreciate it. Todd from
Northern Virginia is here with 5,000 sats. Todd's just supporting the show. No message, but we
appreciate the value. Thank you very much, Todd. Bravo boosin' in with 5,555 sats: Jordan Bravo here. I recommend the TUI file manager Yazi. That's Y-A-Z-I.
Also, for folks who need to use Jira without the browser, check out Jira CLI.
Yeah, something tells me that's going to be way faster, too.
We got a boost here from Tebby Dog: 18,551 sats.
Thank you for helping us. Help you help us all.
All the next service talk has me thinking of a new tool I recently found
called browser-use.
It's a tool that uses LLMs to control a web browser.
Really interesting to watch at work.
And it integrates with all the common LLM APIs.
Oh, well, thank you.
That's good to know.
Also, a post initial boost well what post lethal? I'm sorry. That's my best. I'm sorry. Leet. So do we know what that means, Wes?
No, but I'm curious we got it has something to do with math
Oh, that's why that's why I thought maybe Wes would be calculating over there, you know, he missed this one
I know rising and you know, we all know that. Yes, zip code is a better deal.
Yeah, we do know that.
Did you did you bring it?
You want to know if I brought, if I packed, the five-pound map?
Yeah, did you bring it in my carry on?
Yeah, it's like I brought the mixer and the microphones.
Yes, I did.
Oh, there it is.
Okay, yeah, we can put on the table here just move your laptop Brent don't spill the booch
I'm already on the second table. Why do I get pushed off?
Okay. All right, Tebby Dog.
Tebby Dog, not Teddy. Tebby Dog. He says, it is, it is, it's
18,551 sats, Wes. Yeah, there we go. Thank you.
Yeah, can you get that dial? I got a small paper cut, so I'm tending to that. Yeah, there we go.
Yeah, just grab one of Brent's band-aids. You brought a whole bunch. I also have a clothesline if you need it.
That actually would be helpful, because then we could string up the map, and I could lay down, and then I could sort of read
it that way. Okay, and take a little nap. Do you need a headlamp?
Yeah, actually.
Yeah, and some epoxy would be useful too, I think.
Oh, I didn't bring it, darn.
Oh my gosh.
I did find some travel epoxy on our trip though.
There's a little cute little bottle of it
you could just keep in your pocket.
We should definitely bring that then.
Okay, well just put a little dab
on the map for me, would you?
Okay, right here?
Yeah, and a little to the left.
Oh.
Yeah, so where you just spilled the epoxy,
that is the German state of Mecklenburg-Vorpommern.
All right.
That sounds like that's the name of it, for sure.
On the island of Rügen.
Oh, well, pump the brakes right there.
That's pretty neat.
That is pretty neat.
Thank you for the boost and thank you for the fun zip code math.
Now I'm glad we actually packed that map.
That was actually worth it.
Adversary 17 is here with 18,441 sats.
You're doing very well.
Says: I'm a bit behind, but the headsets are sounding great.
Regarding the Bang Bus adventures and getting pulled over: if someone had offered their
truck and trailer services, would you have taken it? From what I know about you guys,
I feel like you would have been more interested in the sketchy route regardless.
Well, you've got to test the van. It was as much of a van test as anything;
we needed to know if it worked, and the best way to find out was to drive it.
It's so true. As an uninvolved third party, I'm just gonna say: confirmed.
Yeah, you know what? I realize our audience knows us so well. Yeah, yeah, mm-hmm. You got us, Adversary. Thank you.
Tom-tomato boosin' with 12,345 sats.
I think that might be a Spaceballs boost.
We're gonna have to go right to ludicrous speed!
It's been a minute, thank you for the Spaceballs boost.
I'm looking forward to hearing your reports from Red Hat Summit
I've started the two-week challenge early because I'll be on holiday most of next week
I'm already having a blast and it reminds me of how much I enjoyed using Linux and BSD
Back in the day right on. Oh and
Mr. Mato also links us up here, because his write-up, which he's updating as he goes along, is at a link we'll have in the show notes.
That is great. I love it. He's getting a head start. That's really nice.
In fact, if anybody else has any great TUI tools, now is your chance to send them either boost or email because we need to round them up.
We'll be doing that in the next episode before we launch the actual TUI challenge.
That's fantastic. Thank you for the boost.
Megastrike's here with 4,444 sats. That's a big old duck. He says: hello!
It's funny you bring up the back catalog listeners.
I just finished listening to every Jupiter Broadcasting episode, minus The Launch,
released since the beginning of the year, in the last week and a half, at 1x speed.
Well, I feel like mega strike you should like give us some insights.
That's so crazy. What have you learned in this journey?
Megastrike is a mega listener. I'll tell you what.
Does this include this week in Bitcoin? Are you going to go back and catch the launch?
At least since episode 10, because it's pretty good.
I wonder. I have so many questions. What's the schedule like?
What do you do? What activities do you listen for?
Were you road tripping? Like, how did you get that much time in?
That's awesome to hear
And I have so many questions. Thank you for the boost. Well, Turd Ferguson is here with
18,322 sats.
First of all, go
podcasting! And second of all, did you boys soak up any culture in Boston, or was it all Ansible and OpenShift?
It was a lot of Ansible and OpenShift, that is true.
I mean, Chris got in a fight at the package store. There was that. And we got to go to a ballgame.
We did that
We went to Salem, and we saw a very old grave site, which was pretty cool.
It sounds a little weird, but it was actually pretty fun. Some beautiful graveyards out here; famous witches, too.
Yeah, what else did we do? What else have we done that wasn't summit related?
We've done a few things.
We're in our Airbnb now and-
Well, we popped in to pay our appropriate respects
at Cheers.
Oh, that's right.
We went to Cheers.
That was kind of all right.
It was all right.
Norm has just passed.
So it was kind of nice to be there
right as Norm had passed.
So people were there paying their respects,
and they had pictures up and flowers and all of that.
They were very gluten friendly at Cheers, I gotta say.
Yeah, pretty good service. You know, it's not just a tourist hotspot, but the food was fine.
And of course, we did mention we got to go to a baseball game. So that was pretty classic.
That was really nice. Yeah.
I thought we got pretty lucky here. Red Sox and the Mets is pretty like classic ball game.
Yeah.
And also Fenway Park, I had always heard of it
and how unique it was, but to see it in person.
Yeah, I'm not a sports ball guy,
but that's just such a great opportunity.
And it was a blast.
Well, as Wes knows, baseball has very strange rules
around park shapes and sizes, basically none.
And so...
Each one is a unique experience.
But you know, after that, we kind of got our fill
of the city and made our escape, which of course meant encountering the native drivers.
That's true.
I really thank you both for letting me drive.
I really enjoyed it.
I found it.
At first I was a little like, wow, the lanes have no meaning here.
I mean, quite literally, lanes have no meaning here.
But it's because the roads are old and narrow.
And so you just kind of weave, you do a weave, and you just trust that the other driver is
going to weave to your zig or whatever.
And so you zig and zag around everybody,
and I really enjoyed it.
It actually is a lot like driving the RV,
where it's down to last second dodging another thing
that's just barely sticking into your lane,
or you don't have a complete lane,
and you have a very wide vehicle.
And so it was essentially taking all my RV driving
experience and applying it to a passenger vehicle.
But it worked great and I enjoyed the heck out of that.
So that was a treat for me,
because usually when we travel, I don't get to drive at all.
We also then got to see lighthouses and go to the ocean
and get fresh seafood out of the dirty Atlantic Ocean.
It's not as good as the Pacific, but you know what I know.
I feel like you're biased.
And we crossed off some new states, right?
New Hampshire and Maine, our cousin from another coast.
That's right.
That's right.
So thank you.
Thank you, Turd, for that.
It's nice to reminisce about it.
In fact, thank you everybody who boosted into the show.
Even though it wasn't a full week,
we had a decent showing, and we really appreciate it.
We had 30 of you just stream those sats as you enjoyed the show,
and you stacked collectively 46,223 sats. So when you bring that
together with all of our boosts, everything that we read above the 2,000-sat cutoff and below,
we stacked a grand total of 215,748 sats for this very humble but very appreciative episode of
the Linux Unplugged program. Thank you, everybody who supports us with a boost or the membership. You are literally keeping us on the air, and you're the best.
If you'd like to get in on the boosting fun, you can use fountain.fm.
It makes it really easy.
Or just a podcast app listed at podcast apps dot com.
Before we get out of here, you know what we got?
A pick.
This is one that we were tipped off to at the summit
and it's pretty neat.
It's MIT licensed and Wes has it running
on his laptop right now.
What is it, Wes Payne?
It's RamaLama.
I love that.
Say that again.
RamaLama.
Once more. RamaLama.
RamaLama.
Yeah, okay.
So we've talked a bunch about Ollama on the show, but it turns out it's not really fully
open source.
And so some folks are a little put off by this, and there are some feelings like it's got
some VC money.
There's some, like: okay, right now they're totally fine, but what might happen?
And I guess the core part of it, like some of the sort of model serving stuff,
is not open source.
And I think there's some feelings like,
they're trying to be a bit like Docker in the early days
where they wanna be the standard, right?
They've got their own model catalog and protocol
for fetching the models from them.
When there are also places like Hugging Face and others,
you know, lots of ways to get these models.
And so, RamaLama was created as a sort of
more fully open alternative to Ollama.
There's a first step, which is a scripting layer that assesses your host system for whatever capabilities might be available for running
models efficiently.
And then the rest of it is all done with containers.
So it'll spin up a Podman container.
You can use Docker too.
And that gets a standardized environment,
which then gets piped in whenever
host-specific stuff is needed.
And then, in there, you go download the model
from Ollama or Hugging Face or wherever else is supported.
Wherever you want.
And then, using either llama.cpp or vLLM,
you can then directly run as a chatbot
or serve via OpenAI compatible API, that model.
So in other words, you can get a script.
And even if you've just got a weak CPU-based system,
this thing will set up, identify you've got a CPU system,
launch the Podman containers, and inevitably give you
an interface that looks a lot like ChatGPT running on your local box. But if you want to next-level that sucker, you can use vLLM to, like, pipe the back end to some serious GPU action, or, like, a cloud provider, whatever you might want.
All the way to AI hero. Oh my god.
But no, you can actually, look, I was just playing with it,
right, so it's OpenAI compatible,
so, you know, you've got Open WebUI,
or not-so-open web UI, running locally,
you can hit that right up just like you would for Ollama;
you can talk to RamaLama.
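Because the serve mode speaks the standard OpenAI-compatible chat API, any client that can POST JSON can talk to it. A small sketch; the port, endpoint, and model name are assumptions for illustration (whatever you'd pass to something like `ramalama serve`), not confirmed defaults:

```python
import json
import urllib.request

# Assumed local endpoint for an OpenAI-compatible server, such as the one
# RamaLama's serve mode exposes; adjust host/port to your setup.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble a standard OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str, model: str = "granite") -> str:
    """POST the payload to the local server (requires it to be running)."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        BASE_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# No server needed just to inspect the request shape:
payload = build_chat_request("granite", "What is bootc?")
print(json.dumps(payload))
```

This is the same request shape Open WebUI sends, which is why pointing it at a local RamaLama endpoint "just works" the way it does for Ollama.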
That's right, okay, so we'll have links
and more information in the show notes for that.
I see here at the bottom of ramalama.ai,
it says supported by Red Hat, so I take it
Red Hat's all in.
Yeah, I think it's actually maybe even under
the containers repo there.
So it's kind of a, you know, first-party system in Red Hat
and in the wider Podman ecosystem, too.
Boom, power tip right there from Wes Payne and Mr. Brantley.
So we're getting back to our regular live schedule.
We always try to keep the calendar as up to date as we can
at jupiterbroadcasting.com slash calendar.
And of course, if you got a podcasting 2.0 application, then we mark a live stream pending
usually about 24 hours ahead of time in your app. And then when we go live, you just tap
it right there in your podcast app and you can tune in.
Also just a friendly reminder in case you don't know, we have more metadata than that
too, because we also got chapters, right? Stuff you really want to hear about and jump
right to chapter stuff maybe you don't want to hear about and you wouldn't rather skip
the next chapter.
And we also have transcripts on this show.
So you want even more details on that
or you just want to follow along,
those are available in the feed.
See you next week.
Same bat time, same bat station.
Show notes are at linuxunplugged.com slash 616.
Big shout out to editor Drew this week
who always makes our on location audio sound great. We really appreciate him. And of course, a big shout out to our members
and our boosters who help make episodes like this possible so we can do on-the-ground reporting
to try to extract that signal from the noise. Thank you so much for tuning this week's episode
of your Linux Unplugged program. We will in fact be right back here next week, and you can find the RSS feed at linuxunplugged.com slash RSS. I'm going to go to bed.