The Changelog: Software Development, Open Source - Bringing the cloud on prem (Friends)
Episode Date: July 21, 2023
Adam was out when Bryan made his podcast debut here on The Changelog, so we had to get him back on the show along with his co-founder and CEO Steve Tuck to discuss Silicon Valley (the TV show), all things Oxide, homelab possibilities, bringing the power of the cloud on prem, and more.
Transcript
Yes, this is Changelog & Friends, a weekly talk show about Silicon Valley and bringing the power of the cloud on-prem.
Thank you to our partners for helping us ship awesome pods pretty much daily.
Shout out to Fastly, Fly, and TypeSense.
Okay, let's talk.
Okay, so this is our talk show.
We accidentally, I think we ganked your guys' name.
I didn't realize.
I think you guys inspired us.
This is called Changelog & Friends.
This is just our talk show, Bryan.
So you were on our interview.
Okay, you guys know that.
You were on our interview show.
We thought we came up with a name.
Then we went back to your website.
It's like, wait a second.
They already have a podcast called this.
So we ganked it.
We switched the and to an ampersand.
So, you know, we made it our own.
You used an ampersand.
Oh, crap.
Sorry.
Not on your website. On your website you don't, so...
We're not called Oxide, so there you go.
That's true. Yeah, and you're called Oxide. Yeah, that's the, uh...
Yeah, it's all good. Anyways, point being, this will be a lot looser than even our last conversation, Bryan. That's my point.
Yeah, it's gonna feel less interview-y, if that's... Our last conversation, yeah...
Yeah, our last conversation was super rigid, because we were having arguments over which Silicon Valley character I was.
If you can believe it,
that was rigid for us.
Fair.
Fair.
My kids gave me grief for that.
It's like,
I can't believe like dad,
you know,
which Silicon Valley character you are.
You're Gwart.
I'm like,
go to your room.
Really?
Dang.
That's a burn.
It is a burn.
Well,
a fun backstory on that. Adam wasn't there
for the show. Adam is actually the Silicon Valley aficionado amongst us. It was me and Gerhard.
It's true. And I was going to just not bring it up. That was my plan. And I should have.
That's a good plan. And Gerhard brought it up. And then Bryan, also a big fan. So you started launching in, and I'm sitting there like, oh no, me and Gerhard don't even know the show very well. Bryan's an expert.
You guys are like, let's go fishing. I'm like, that's great, let's go deep sea fishing. You're like, why are we on a boat? We're leaving the bay. I'm like, oh, we're going deep sea fishing right now. You're like, I did not...
Yeah, you guys were not ready to go. It was awful. It was terrible. In fact, I think we had to cut a few minutes, because you just chided us. You're like, come on, guys, we couldn't look that bad.
I was like, well, if you're going to ask the question,
be ready to roll.
That's right.
Yeah, it was fun.
I didn't ask the question.
I was just a victim of having Gerhard there.
That is true.
Gerhard asked the question.
I should have never invited Gerhard.
Yeah, this is a blowback for Jared.
Yeah.
Well, Adam is here.
Bryan is here.
Do you want to get it out of the way?
I mean, I'm sure Adam will bring it up.
It's going to be the whole show. Gonna be the whole show, Jared. The whole show.
The whole show is still coming out.
Steve, are you a fan?
I mean, no... Bryan's been hazing me for the better part of the last year and a half, because I got through season four. I had not gotten through seasons five and six, and so he would fire references, and he's just like, I can't work like this. I couldn't work that way. Like, get through the end of the series.
So I finally powered through the rest of season five and season six in the last, like, six months. So that's the part that I have in state: seasons one through four. It was a while ago.
I've taken the other tactic.
I just refuse to watch it now just so that Adam can't.
It's so good.
It's so good.
It's easy.
You think you're hurting Adam, but you're not hurting Adam.
Jared is hurting Jared by doing that.
That's right.
I'm hurting myself.
It is so extraordinary.
And it's extraordinary for all the reasons
that great satire is extraordinary
in terms of it's very much a reflection
of the satire that we're living
called Silicon Valley.
And it's just very, very well done.
Yeah.
Well, the number of people that will say I can't watch it because it hits too close to
home tells you it's perfect satire.
That's what most people say.
And Bryan's the first one who didn't say that.
He's like, oh, and he just launched it.
You know, Steve and I actually, in a previous life, reported to a chair. You know, at one point they get rid of the CEO and everyone's reporting to the chair.
And that is the episode.
I know a lot of people that like can't watch it because of that episode.
Does that happen?
Really?
Oh, there are plenty of companies where it's like the CEO is so bad.
We're going to get him out of here.
We actually don't know who the CEO is.
But by the way, it's none of you turkeys.
So like, right.
Like actually this chair is now in charge.
Can you really fire us?
You can't really fire us.
You're just the CTO.
Yeah, exactly.
But that whole dynamic... and there are just a bunch of the dynamics in there that are very... and I think Dick Costolo is the one to really... that's the reason, I think. You know, Dick Costolo was kind of a fan of the series, and after season one kind of volunteered to help write a bit.
Really?
I mean, and he's talked a bit about this, but I just feel like as you get into these later seasons and you get things that are so dead on.
I mean, Jared, I love where they have merged two companies, SliceLine and Optimoji. And there is a civil war
in Pied Piper because the two
companies had different dog policies.
And one
of the
companies' dog policies
was you absolutely bring your dog
to work. The other company's dog policy was like,
no, no dogs come to work. And it's
like this civil war spills into Pied
Piper because Richard casually allows one of them to bring a dog to work.
Wow.
And the next thing you know, he's trying to get them to like him. He's like... nobody likes him, basically. Well, they like him, but they don't respect him, so they don't listen to him. He's like, well, I'll get you your favorite coffee. You want dogs in here? I'll get you some dogs in here. So, whatever it takes to get them to like him.
And I think it's so interesting because, as with so many things about the series, it's funny and it's light, but it's hitting on something really, really deep, where
you have... and this is an absolute problem in Silicon Valley, where management is like, yeah, I don't know, I just want to make you happy. And it's like, well, that's actually not the way you lead, right? As Steve and I well know, leadership is not always going to leave everybody happy. And if you try to leave everyone happy all the time, what you end up doing is actually just creating a mess, right? You leave no one happy.
Yeah, yeah, for sure. It's good stuff.
It is good stuff. I mean, the other one that hits home for, I think, a bunch of folks, especially those in enterprise sales, is everything around the box. And then the collection of the sales team, where it's like...
See, now I'm going to fall down, because this was before season five, but, uh, I can't remember his name. But he's like, you know, regional vice president, Northwest, and they're all checking in with all the different, like, regional sales reps.
That's right. That's right. I shadow him. I'm Keith. I'm shadowing Bob.
That's right. Jan the Man, here again. They call me Jan the Man, from inside sales.
And it's a woman.
So it's like an oxymoron.
It's like, well, okay.
Wait, Jared, you've not watched any of it.
So I have watched season one and I liked it, but I just didn't love it.
So I watched it and I just kind of dropped off.
And then Adam's like, basically been trying to reel me back in.
I kind of took the antagonistic stance, you know?
So, but you guys are selling it.
You're selling it.
Adam has not used the shame tactic that Bryan plays.
He does, but it's not as effective, I think.
We're too close.
I just don't care as much about it.
Let's not give me too much credit
because it took years of ongoing shame
to get Steve finally over the line.
Oh, okay.
But there is a great scene that Steve's alluding to.
Something, again, is very
close to our own lived experience where they build out sales effectively before they have a product.
And so Richard walks in and there is this sales team that is already built out,
but they don't have a product yet. And the sales team... and again, we've seen this... I mean, you see this over and over and over again in startups, where they build out sales and marketing
before having product market fit. The sales and marketing folks are slick. They seem
like they're charismatic. They've got all the, they've got this kind of customer Rolodex.
They've got all of these, next thing you know, you have all of these, you know,
RFPs and MSAs, and you've got all these trials going on and it feels like very promising.
It feels like a pipeline. POCs, exactly. It feels like a pipeline,
but in fact, it's not. It's all a fiction. And because there's no product. And as customers
discover that there's no product, then the sales folks are like, well, the problem is the product.
And like, they're not wrong, but they're also not right. It's like, actually the problem is
that we poured a lot of
our scarce resources into building sales and marketing before we had resounding product
market fit.
Oh, what was the line? When he's asking about why they can't sell, and he's like, well, I mean, they're amazing salespeople when the product can sell itself.
Wow.
Good job, Steve.
That's like a line, man.
No, because he's like, I thought these guys were the best.
It's like, yeah, they're the best because the product sells itself.
And Silicon Valley does this over and over again where someone says something, you're like, what the fuck?
And the camera holds an extra beat on the other person
in the room being like, are you listening to yourself say that?
The reaction shots are great.
And I feel like there's a lot of wisdom in that shot, and it very much informed the way we built Oxide, by the way.
Wow.
Here's the oxymoron here, too. Like, the unexpected thing is... I'm gonna spoil something, so obviously, if you're listening to this, we're spoiling things. So if you haven't watched beyond season one...
Like, you're spoiling it for me.
Yeah, everything's spoiled. Cover your ears.
I probably shouldn't watch it now.
Yeah, spoiled. Okay, so the box goes on to be the biggest money maker for Hooli in, like, season five. Like, of all the great inventions that Gavin did, in the words of Cherie, I believe her name is...
She's like, it would have been better if you didn't invent any of those things because they were all money losers.
But here is the box.
They got like, he's like, is this for the whole year?
He's like, no, this is for the first quarter in terms of sales.
Like it was the best success.
What's in the box?
What's the box?
I don't even know what the box is.
What is the box?
Watch and you'll know.
Is it kind of like Brad Pitt at the end of Seven?
What's in the box?
What's in the box?
I mean, this is kind of funny because they try to disparage kind of the hardware angle of it.
But of course, that's what we're building at Oxide.
Yes.
I love hardware, by the way.
We love it when they're like, you know, this is where.
So they've got the great scene where they're looking at in the data center where the box is going to go.
And like the box goes here.
Sad data center operator that, you know, lives in a cave.
Yeah.
I mean, I can imagine you guys watching the show, like literally taking notes, you know, because I mean, it's so on the nose with what you guys are doing.
It is very on the nose.
The answer is the Box Three, the Gavin Belson Signature Edition.
That's right.
Jared, they crowdsourced the logo inside of Hooli.
Oh, nice.
And I won't give that away,
but it's really quite genius.
That's too good.
Leave some things
for him to chew on later.
Yes.
Exactly.
But that certainly resonated
with what we're doing at Oxide
where we're doing this exactly.
We are doing the Box Three, Steve Tuck Edition.
Is that right?
We don't have a Russ Hanneman
on the cap table though.
That's what we don't.
No Tres Comas.
This guy.
This guy.
This guy. Hey, this guy.
You guys sold a box, didn't you?
Segue? Didn't you guys sell a box?
Yeah, segue. We're shipping.
You shipped? We shipped. We have shipped.
I saw the tweet. Was that you? Now that I see you in person, was that you wrapping this thing?
That is not me wrapping this thing.
You look similar to whomever that is wrapping it.
Um... I'm not sure if he's going to be insulted, or I'm going to be insulted by that. I don't know. I think I look similar to Robert Keith? I don't have any in particular. So that's our engineer, Robert Keith.
Okay, now that I zoom in, I'm disagreeing. I didn't open it up big. It's small on Twitter.
I call back to another Silicon Valley scene. When Robert Keith joined the company, among the very, very few people at Oxide were a Robert
and a Keith. And I'm like, look, I don't know how to tell you this, but we can't call you by either
your first or last name because we are. And he's like, it's fine. I've been known by RFK before.
Like, all right, thank God. The two name problem, which in Silicon Valley was because of a second Jared.
Exactly.
So RFK, our engineer... one of the questions that we got on Twitter...
So he's wrapping the rack.
Well, one, there was like a very weird strain of like, I can't believe the amount of plastic that you are using to wrap the rack.
And it's like, do you know how anything works or is built? Like, trust me, the plastic that this thing is being wrapped in is the least of the resources being consumed. I mean, it is a computer. It is a good one.
Then there was, "you're wrapping it in something that's going to create static electricity." Wow. And it's actually anti-static.
But my favorite was medical Twitter, that was just spiraling over his right foot, about to plant where his ankle's going to snap.
Oh no, gosh.
Really? And everyone's worried about, like, what happened to his ankle. Well, so I actually talked to RFK
about this because I'm like, you know, RFK, one of the burning questions we got on the internet
is like, is this guy about to eat shit? Because it looks like it. And it does actually, you zoom in on the right foot, you're like, it does look like you're about to trip.
And he's like, I can't remember, but I can tell you, I definitely did not.
Like, I do not recall sprawling all over the floor.
So I think, you know, RFK is a coordinated guy.
He did not trip.
But yeah, that was us.
So that's our engineers on site with our contract manufacturer in Minnesota,
putting the final touches on that rack as it goes into the truck
and ships out to the customer number one.
So pretty exciting stuff.
Yeah, very exciting.
I mean, to literally ship something real, not just software. I mean, not that that's a bad thing, but, like, a physical thing that's very hard to, like, take back and obviously change.
You're going to get it on site.
They're not going to want to let it go.
It's a beautiful thing to me.
You guys have phenomenal industrial design
as well as just design generally.
I love the color.
Who's doing your design work
and how can we get some access to this talent?
The team.
Yeah, I mean, seriously,
your guys' design is so good.
Smoking.
Early on, we were very fortunate
to get connected with a firm that was helping us with some elements of design, some other stuff.
And one of the folks that was there, that was kind of front and center for that... a full-time designer is not, you know, where you're starting when you've got, you know, limited resources and a small team.
And it was just an absolute no-brainer with this particular person that we had worked with getting them in early.
And I think like a bunch of people at Oxide, you know, this particular person spans way beyond design.
Yeah.
And thinking about design, not for design's sake, because that's the other side of this is we've all lived as like data center operators. You always have seen the
products where you see design for design sake show up. And you're always asking the question,
like, I don't use that. Could you strip that off? And then how much would the remaining product
cost? And there's some good company examples of that in the past where they would put these
really expensive bezels or LEDs on the front, displays on the front that don't really serve a purpose. And
all you as an operator see is like added cost for no benefit. And I think, you know,
the team's done a great job of focusing in on design for usability. Like let's colorize this
thing because that's where the operator needs to touch, or this is how you indicate health or quality of a particular part of the system, rather than how do we make this
thing just look good for looking good sake? Well, I think Ben Leonard is the designer that
Steve is referencing. And some of my favorite conversations were getting Ben together
with the mechanical engineers to figure out how to make the rack look great while making it highly
manufacturable, while designing for manufacturing
all these other constraints. And, you know, Steve and I are very much students of history. I mean, I think that if you haven't read Steve Jobs and the NeXT Big Thing... um, absolutely terrific book about the history of NeXT. And, like, the really interesting chapter of Steve Jobs's life is at NeXT, because he made a lot of mistakes. And one of the mistakes he made is, he's after this matte... this particular black for the NeXT cube, and it just spends an untoward amount of money, and immediately it's the wrong decision. And, like, we do not want to have the matte black that we are finicky about, that we're not designing for manufacturing. So how do we make this thing beautiful without sacrificing its manufacturability? And that requires you to get, like, some
mechanical engineers and a designer in the same room. And there's some back and forth because
it's like, how about this? Like, no, no, that's too expensive. Can't do that. Can't do that.
But what we landed on, I think is really gorgeous. And the, I don't know if you've seen the side of
the rack, but there's a punch through
with the Oxide logo with that green that just absolutely pops. It's good looking, which is
really important to us. I mean, when we set out, we wanted to build something that we would all be proud of, that we pulled together this kind of team spanning these different domains and disciplines. And as a result, because we pulled in so many different kinds of folks from so many different domains, Oxide feels like a heist movie.
It feels like a heist movie. I love heist movies.
And it's got like, you know, we got the safe cracker and we've got the helicopter pilot and
we've got the specialists, but then those specialists all pull together to pull off one last job.
That's right.
Well, hopefully not one last job in your guys' case.
No, no, no.
That one was the first.
The first job.
Yeah.
There'll be other follow-up movies maybe with different products.
This depiction on your homepage of the rack, is this pretty accurate to what a typical rack you'd sell would look like?
That is very accurate, yeah. Actually, that is based on the CAD renderings, so that's pulling straight from mechanical CAD.
Is a lot of that storage? Is that what a lot of that is, like the vertical greens across the top and bottom? That's what the green is?
That's right.
Okay. Did I read it right, you had like 32 terabytes of NVMe? You only do NVMe storage in this thing?
Yep.
Gosh, these things are expensive. Holy moly.
That's good though, right?
I mean, for what you're doing in a data center, you want the fastest possible,
but that is such an expensive buy.
I mean, that's not my money. Somebody else's money.
Right?
I think for the lifetime of the company, there's been this real home lab interest in Oxide.
Yes.
Give me a home lab oxide edition.
I know.
We've had plenty of requests for that, for sure.
I want that, for real.
For the enthusiasts.
Because remember the last time, Bryan, Gerhard wanted to buy one.
You're like, you're not buying one.
I think I let Gerhard down a little bit easy.
Gerhard's like, when can I buy one?
I'm like, you're not going to buy one, pal.
Yeah.
He's like, I'll save up.
I'm like, I don't know if you want to do that.
That's right.
There is a lot of opportunity there. I mean, obviously, you have to focus on the market you're going to focus on, which totally makes sense. But, like, you're using ZFS, which a lot of home labbers love. Btrfs is another one, but I think, for the most part, OpenZFS has won the home labbers' heart. So you're at least there, you know? And you have beautiful hardware.
Yeah. And we've got... I mean, ZFS is certainly an important building block.
You know, we built our own software from the lowest levels to the highest level.
So we've got our own service processor.
We've got our own hypervisor.
We've got our own control plane software.
We've got our own console.
And all of that is open source.
So that's the other kind of big angle that we can tack into.
And this is kind of like what we tell the home labbers.
It's like, well, good news.
Like it's all like downloadable.
Right.
And they're like, nah, we want to buy something.
You're right.
I want the hardware.
Sorry.
I know.
It's kind of weird that you guys have this like nerd cachet and you have this like enthusiast
audience.
So many people interested, watching, love it, want to buy stuff.
They're Gerhards.
I'm sure Adam would buy some stuff.
I would totally.
Yeah.
Yeah.
But does that even translate into anything of value for you all?
Yeah.
How so?
Yeah, I was just going to say... that contingent: many of those folks are in companies who spend a lot of money on
infrastructure on premises, which again is kind of this like forgotten corner of the technology
world. It's like, Oh, does anybody do on-prem
compute anymore? And it turns out like just listen to an AWS keynote from two years ago,
and Andy's on stage talking about 95% of infrastructure sits outside of the public
cloud. And so you have this kind of overlooked area that is much, much larger than the public
cloud, but has none of the access to the same benefits that we are all intimately familiar with, which is like, how,
why would you consume infrastructure any other way than at the end of an API that is a set of
elastic services? And yet, if you want to own parts of your infrastructure for the right reasons,
or you have regulatory compliance reasons or latency or, or security, or for any of these types
of things, which are good reasons to run portions of your infrastructure on-premises,
you're doing the same thing that folks were doing 20 years ago, 30 years ago. You're taking a metal
rack, and then you're figuring out what server vendor to put in there, who, by the way, is
outsourcing firmware and a bunch of other stuff in that set of boxes. And then you're figuring out what do you do for storage and what do you do for your networking?
And then you have to do the software part.
Are you going VMware?
Are you going Red Hat?
And you have to basically build that whole thing together over months just to deliver what AWS has at the swipe of a credit card,
which is a set of elastic services for developers.
And it's a
tragedy, because you shouldn't have to. You know, Bryan and I were at a cloud computing company, and
just realizing how tough it was for those that were not running a cloud computing company
to actually get this kind of clean water to their end users, to developers. So to your comment,
Adam, it is definitely like expensive for home labbers. But the interesting thing that you find is when we're talking to, you know, enterprise customers,
and they're comparing it to their current stack of putting all that into a rack, it actually becomes
really, really attractive, even from an economics perspective.
Well, and I think that that kind of appeal to that enthusiast demographic is super important
to us, because so many of those enthusiasts that are home labbers at home, they're the ones that are going back to work and making an IT decision.
So we love having that.
And I think that like that's always been really important to technology in general is that that playful tinkering that's happening where people are kind of following their natural curiosity is a really important way that technology is developed. So even though we're never going to sell to Gerhard and the
home labbers, we love the support, the engagement, the discussion, the enthusiasm. It's not our
market, but it's a really important element of who we are. And plenty of folks have come to Oxide
out of that enthusiast demographic, where, you know, one of our engineers came to us because they were starting to do things in Hubris, which is our open source operating system. We talked about it last time, Jared: a Rust-based operating system that any home labber can experiment with, by the way.
I think that's what I was trying to steer Gerhard into.
Yeah, you were.
Yeah. I'm like, dude, what you want to buy is, like, a $20 eval board, whatever those go for. Like, this is what you want to buy.
You know, you want to buy an STM32H753 eval board.
You can download Hubris.
Then you've got like, you've got an Oxide computer.
You have it for 20 bucks.
And he's like, no, no, I want a real computer.
He's serious.
The thing that's amazing is those things are real computers.
And so it is actually a great way for people to get to know
some of the lowest level software
that we've done.
And because all of that's open, people are able to get insight into this level of software
that historically has been completely closed and proprietary.
Do you think if you conquer this enterprise world, you'll consider Homelab, like the home
cloud, so to speak?
Oh, oh, oh, oh.
Never say never.
Adam, Adam, Adam, Adam. There's room for the home cloud.
Like, that's what I'm saying. It's not about like, oh, will you please do this? Cause I want it.
It's more like there's a market, I believe in the future for a home cloud. All right. So the first
step is at least pretty straightforward, which is there are a bunch of use cases. This is still in
the enterprise, but there's a bunch of use cases that are sitting in retail stores, bank branches,
manufacturing sites, park attractions, right? Where there is a lot of need for compute and storage and networking and really needing a cohesive kind of integrated solution.
So I think that has to be step one for us as we think beyond the core data center use cases.
And then, yeah, there's the pony rec. There's been a lot of calls for how small this thing could get.
But you know what you guys could do
in the meantime,
and maybe just forever,
is to throw us a bone,
you know, is to like have a Drobo
kind of a thing,
just like it's an oxide storage thing
that can sit on my desk.
I'm a YouTuber.
It can be in the background.
It can glow green or whatever.
And like, I think we'll all shut up and just go on with our life if you guys provide something that we can buy off of the website.
I think we got to the ask.
We got to the ask.
Yeah, can you just give me an oxide-branded machine?
So I think part of the challenge for that home labber demographic is that we have taken a rack-scale approach.
This is true rack scale design.
So in particular, as you, like, really want to... actually, I'll tell you the biggest technical hurdle to getting a true Oxide rack, even a scaled-down one,
is you got to have your own power shelf
and power rectifiers.
So we've got a power shelf.
We do our AC to DC conversion in a single shelf in the rack,
and then we run DC up and down the rack.
OK.
So again, a mini DC bus bar. And then we also have got an integrated switch, which is... actually, the single biggest challenge we would have is scaling down that switch to something that can reasonably fit in a home lab.
Also, Adam, I feel like I'm doing the discourtesy
of taking the request a little too seriously
because it's like, it's just not going to work in the home lab.
I think it's like, what?
I'm very serious.
I know, I know you, I know, I know, I know.
Okay, let's back up one step then.
So rather than take what you have as that large rack,
which is just phenomenal.
I mean, 2048 CPU cores.
I mean, I don't need that in my home.
So don't give me that, scaled down. Give me a version of how you think for home lab cloud, right? Assume that I want you to consume four to eight U in my rack, and give me a simplified system that gives me great power, great networking, maybe great CPU, obviously, and then storage.
Just in one single box that has super fast throughput between all the different services I run.
You know, maybe I'm running Proxmox, maybe I'm running something else.
I don't know that you've all built something that's Proxmox-like, but give me not a version of what you have scaled down, but a version that thinks like you think for home cloud.
Yeah, I think, again, the challenge there, just from a technical perspective, is that ultimately the reason that Oxide exists is because the machines that we run in the data center are actually closer to the home lab than they are to
the hyperscalers. That's actually the problem is like, haven't you home labbers had enough really?
Because what we run in the DC are these 1U and 2U boxes that actually are personal computers.
And the approach that we've taken is to blow all that up and to take a rack scale approach.
So that scales down to a point.
But when you get to something like the switch, it's like actually the integration of the switch with our control plane software.
So we've got our own switch.
We've got our own switch operating system.
Actually, that switch is actually not one switch.
It's two switches because you've got a high-speed switch
and you've got a management switch.
Getting that into a form factor,
I mean, it's not impossible
in kind of like the arbitrary future,
although even that-
Are you scared, Bryan?
Are you scared to do this?
You're making all these excuses.
I'm just teasing you.
I appreciate you're trying all the tactics here.
No,
I really appreciate it.
He started,
he started with like,
imagine if though,
let's just go clean sheet.
Like,
how would you do it?
Let's not say you're not going to do it.
How would you do it?
And then it's like now to the shame,
you know,
and that was the origin story of the Oxide Mini.
That's right.
I went McFly on you.
To be super serious, I love the focus on where you're at, though. Like, I'm a Ubiquiti lover.
I love the simplicity of what Ubiquiti has done for home networks and just enterprise networks, even.
Like, they've just made it really easy to, I guess, get into networking when you would normally have been, you know, maybe intimidated by some of the things that running a network requires.
And so I think they've proven there's a beautiful hardware possibility
molded with great thinking and great software.
And then distributing that
and having a fanatic customer base.
They have a fanatic customer base.
So given that in the marketplace,
if you can sort of collapse some of those things
that you already have done,
maybe there's another player in the market
that's called Oxide. It's a compelling argument.
Maybe, maybe, maybe.
Yeah, there you go. Just say yes, guys. You don't have to do it.
We're gonna do it. Next quarter, 2026. Done.
Yeah. You know, I think that, actually, it's funny, because I do feel that it would be easy, Jared, to your point. You're like,
can you guys just agree to it so we can move on? I don't know.
Let's go back and talk about the series that I haven't watched or something.
But we have always tried to be really direct about what we're doing and what we're not doing.
And I've got a complicated relationship with Steve Jobs.
There's plenty to not like about the guy.
But I do love his WWDC 1997 keynote.
Focus is about saying no.
And especially as a startup, especially as a new
company, you've got to know what you're saying no to. And what's actually important in order for us
to be able to, and this is the hand on heart honest answer, in order for us to be able to ever
serve those smaller edge use cases, which is still probably in the enterprise, but would get us much
closer to the home lab, we need to survive and thrive as a company. And that means we got to focus on
this core market that we're going after, which is this enterprise DC market. So.
Good answer.
Yeah. I'll put my hat down then because I for sure agree with extreme focus. So I'll give you that.
However, I will also say I begin with, if you conquer.
Okay. I mean, we can look forward to the future. Like, yes, absolutely.
For sure.
There you go.
There you go.
Yes.
I do feel that like we, I mean, our aspirations are really to be the kind of company that young engineers can come up in, that customers love to buy from, that people are enthusiastic
about.
And it's like we're veterans, right?
And we are trying to pull from the best of our collective pasts and careers, and where companies really get this right and then lose their way. We've certainly seen a lot of that, and we are trying to pull from the best of it and build something that can be really generational and special. So yes, in that future, absolutely. Home Lab. Yes. Oxide Home Lab edition.
He finally landed on the correct answer.
2050 Oxcon is going to be just really the big announcement there.
Okay.
As we finally serve the Home Lab.
I can go back and play this audio at that announcement and be like, wow.
And we will.
We'll go do that.
Exactly.
Wow.
They knew even then.
The vision of these guys.
27 years in the future, they would serve the Home Lab.
So how do we get there?
What's the stage? No, no, no, no. I'm not going there.
It's just turned into a board meeting. Yeah, exactly. It's like, okay, so no,
you've already committed to doing this in 2050. I'm just pulling in the date at this point.
Yeah. Give us our roadmap. High level, high level. Just what are the milestones?
That's not what I'm saying, but okay. No, seriously, how do we get there? What is the state of on-prem? Like, you guys are building amazing hardware for this market that, you said, Steve, it's sort of like, I forget your words, but basically just not paid attention to. It's been an afterthought, basically.
Under the radar.
Under the radar. Thank you, Jared.
It has. I mean, it's been in the worst is it's been ignored by the companies that are serving that market.
Yeah.
And that is largely because the last 10 years, all the focus has been on how do we collectively
move to this public cloud computing model.
And forget everything else.
I mean, if you were going to give it the most charitable treatment, it's like, well, no,
that should be the first focus.
It's not in spite of everything else. It's just like, that's where you should start. And that's actually
not entirely untrue. And that has been the focus for most companies over the last decade. And
we were certainly in the midst of it running a public cloud computing company.
I think now the question is, okay, well, we've moved most of the good use cases to the rental
model of the public cloud. Because a lot of people think about
cloud computing as this rental service model, this kind of hotel model for living, rather than
the actual what it does, which is providing abstractions over a bunch of complicated
infrastructure under the surface and making it accessible via APIs. So I think now companies are rightfully asking,
how do we get that same service model everywhere the business needs to run?
And there's no good answers right now. Over the last 20, 30 years, the industry has split
hardware and software. You've got hardware providers over on the left and software
providers over on the right. And if you want to bring those two together, it's each individual company's job to go do that. Any company that is building cloud-like
infrastructure on-prem has to do all the assembly and the integration and the troubleshooting.
God forbid something goes wrong, it's like finger pointing left, right, and center.
What version of software are you running? Instead of delivering kind of a complete solution. Now the long forgotten masses
on prem are trying to figure out what's next. And because you can't, just like, I'm in a hotel room
right now and it's very nice. I didn't have to buy any of this stuff. And if I want, I can order
food to the room. And it is pretty cheap considering I didn't know I was going to be in this city
five weeks ago. But if I were living here five weeks from now, I would be looking at a huge bill.
I would have people that can come and go in my room without telling me.
There's aspects of hotel living that don't really hold up when you know you're going to be in a city
in a location for 12 months, 24 months, 36 months. So I think that's, you know, at the core of this for us was how do we extend it so that
cloud computing is sort of that ubiquitous foundation.
And now companies in the future are able to either rent it from a provider like AWS, Google,
Microsoft for the right use cases, and then own it where they want to own it.
But it doesn't take an army of 500 people to kind of assemble it and build it, integrate it and support it. There's really this kind of
productized hyperscaler-like infrastructure that everyone should have access to. That's where we
started. Now, Brian, I think I can speak for you that we had a good sense that this was going to
take, this was going to require taking on a lot because it's not only kind of a de novo server design, but then,
you know, we decided early on that we thought we had to do our own switch.
That has its own kind of backstory there. The paths diverged so long ago is the problem.
Yeah. The problem is that kind of the extant hardware makers are PC companies, Dell, HPE,
Supermicro, and they don't actually understand cloud computing.
And those folks at those companies that understood cloud computing, Steve grew up at Dell. Steve was
at Dell for, what, 10 years? And when Steve saw this burgeoning new use case in California for
Dell servers, a company called Facebook. And inside of Dell, they're like, this is a website.
This is just not that, like, we don't see why this is that important.
Like, we should be selling to, you know, the Chevrons of the world.
Insurance and manufacturing.
Right.
Finance and, yeah.
And part of the reason that Steve went to a cloud computing company in 2009
is because he couldn't really get Dell to understand
the importance of cloud computing.
And you see this over and over
and over again. Go look at the backgrounds of people doing cloud at Google, at AWS, at Meta,
and you'll see the Dell and the HPE in their own past. And you know that they left because those
companies didn't get it. As those companies didn't get it, they got further and further apart.
And so those designs haven't moved from 20 years ago.
So in order for us to be able to go deliver that hyperscale class infrastructure, hardware and
software together, we've got to go back to where the trails diverged and we've got to go down the
right path from on-prem. The problem is they diverged so long ago that we have to take on a huge,
huge problem. And the minimum viable product for this company is enormous. As Steve's alluding to,
it included the networking switch. It included getting rid of the baseboard management controller,
the BMC, doing our own service processor, doing our own software all the way up and down the stack.
So VMware does not run on this box. ESX does not run on this box. AMI does not run on this box.
We don't have a BIOS. We've done our own hypervisor. We've done our
own control plane. And that's an enormous, enormous lift. Yeah. Well, and by the way,
when you look at kind of professionalized cloud
computing infrastructure providers, this is pretty consistent. Like Amazon and Google and Facebook,
these companies, their infrastructure looks nothing like what's accessible to the Fortune 500
companies that are out there building on-prem. And you've kind of seen a similar pattern in
the automotive industry where we've been in like a couple of decades of outsourcing.
So there's a really good podcast where Jim Farley is talking about how Ford outsourced everything in software. And so when they wanted to make a change to like the seat controller
mechanism, they had to go to Bosch and be like, hey, do you mind updating the software that
controls this aspect of the car? There were like 500 different examples of this, and this was done to lower costs,
to bring the cost of each car manufactured down by like 500 bucks.
And the realization that he is having, having watched what Tesla has done and what some of
the Chinese manufacturers have done, is like, this is not only costing us more, we are moving slower,
we are not competitive. And they kind of had this revelation that they had
to bring everything back and start thinking holistically at Ford about what a modern vehicle
looks like. And I think as we were kind of peeling back the layers, we had a sense of it while we
were at Joyent. And because of all the issues that we would run into that were kind of like,
we're at that hardware software interface. But when you start peeling it back, it's like, man, there is some decades long cruft that are going to be pretty challenging
to rip out and do a new. The saving grace was that at every single one of those layers,
there were groups of technologists that had come to the same conclusion of like, no, this layer's
got to get blown up and rethought. And the reason we are
where we are is because those technologists came to Oxide and said, wait a minute, like,
oh, you're rethinking the switch? Thank God someone's rethinking the switch. I've thought
a lot about this problem. Why? That's where I want to go to. What's so wrong with the switch?
Oh, no. Here we go. We don't have time. Here we go again.
Oh, my God.
And it's not just the switch, but the switch operating system.
And you've got the... The switch is in charge of a lot of different things, obviously.
It's like moving the packets.
It's connecting the devices.
It's connecting all the IP stuff, right?
Like, it's super important, obviously, in the network.
It's the network.
It's the backbone of it.
It is. But right now, like, the
switch has no real integration with the compute nodes that it's talking to. So there are a bunch
of things that you actually want to go deliver functionality to that end user. You want to give
them that virtual private cloud, right? You want to give them sophisticated firewalling. There's a bunch of sophisticated
stuff you want to go do.
In order to do that, you actually need to have hardware and software and cross-stitch across the compute sled and the switch.
And when those things are delivered by two different companies that have no real sense of collaboration or constantly pointing fingers at one another, it's really hard for that end user to go create that infrastructure for on-prem. So
yeah, very much the switch had to go. It's not a problem with the switch. It is very much that
the switch just doesn't know what happens when data leaves. Right. It's like silos. And if you're
actually thinking about a pool of resources that are all like, again, back to cloud computing,
you're not trying to design, you know, specific hardware components and software components.
You're trying to give developers instant access to arbitrary amounts of compute storage and networking
via an API and in that you give a quality of service to that. And you can't do that when you
know, you have kind of that brainstem that switch that is unaware of what's happening on, you know,
compute sleds and unaware of what's happening up in the
software stack. And I mean, it's the classic, anytime there's a bump in the night, everyone
blames what? The network. Like, oh, it's got to be something in the network. And poor network
engineers are left kind of trying to defend themselves saying, no, everything I see in the
switch and the routers looks good. It can't be in the network. This is where, again, kind of time and time again,
we realized that you need to build these things together
and be able to deliver that kind of end-to-end visibility.
How different is what you guys are doing?
So if I'm a CTO and I have two proposals on my desk
and I have to decide a direction we're going to go
with a new data center we're building on or whatever.
And I can go with Oxide Racks
or I can go with whatever is currently there,
stack a bunch of Dells and some switches together
and do what I've been doing for the last decade.
What kind of switching costs am I looking at?
What kind of lock-in is there?
Do I have huge risk to pick you guys
or is it like everything you're doing
is so low level that at a point where I'm going to care about it as a company who's rolling out
some services, it's all good. How different is it? So in terms of, I mean, we would propose in
terms of value and density and economics and services, it's very, very different.
In terms of switching costs, I think one of the big benefits
and why the timing was right for Oxide now versus Oxide, say, five, 10 years ago, is that where
companies have oriented and really invested a lot of resources is sort of developer-friendly
tooling for cloud computing. So by that measure, the switching costs are extraordinarily low
because you're now able to leverage the same kind of Terraform frameworks.
And the models and workflows that you become accustomed to are stitching into Oxide
because you can think about it as kind of another cloud that you now kind of own and operate on-prem.
And it's leveraging all that investment you've done over
the last five years, getting to more cloud-first type models and workloads and development practices,
but being able to leverage those on-prem. And then in terms of thinking from a data center
operator perspective, where this solution meets the rest of the data center is obviously at the network handoff. And so we speak, you know,
BGP to the network, we come with gifts to the network operators and engineers,
which gives them a whole kind of new world of visibility so that they can not only be in
defense mode, but actually be proactive and be able to anticipate where there's congestion and
be able to kind of help give users better experiences.
And then we're, you know, we've invested a lot to make sure that that handoff point that where we're talking BGP to someone's network is clean and pretty straightforward. So pretty well.
And then in terms of that operator experience, one of the things that we've definitely optimized
for, because we've actually built this thing as a product, you can actually get it wheeled in,
decrated, powered up, and you can start provisioning on it that day.
No way.
So actually, even now, I guess, Steve, you won because Steve would say,
we are going to get you up and running within a day. And I'm like, look at Steve. I normally-
And by the way, just for context, this happened to us as we were building out data centers all
over the country and eventually the world when Samsung acquired Joyent. And the lag time from when those boxes all land to when you've got added
capacity, which by the way is dead. You can just watch the dollars burning on the clock when you
have got boxes you've paid for and you do not have customers that are being served by them.
And so that time is really, really important when you're thinking about the economics of the business. And for us, I think we had it, we were operating
pretty efficiently, but that's still measured in weeks. And a bunch of the companies that we went
and talked to in 2019 were telling us that they measure it in months. It's like an average of
like a hundred days from when boxes land to when they've done installation, integration and burn
and test and software deployment and validation and network settings. They've handed this off to
developers, 100 days. And our goal, at least my goal, Brian's goal was higher than this,
but was that we'd be able to do this in one day. So you roll it in, you give power,
you apply networking, and you have productive end users in the same day.
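The "dollars burning on the clock" point is easy to put rough numbers on: capacity you've paid for but can't sell is pure carrying cost. A toy back-of-envelope in Python, where every figure (rack price, amortization period) is an invented assumption, not an Oxide or Joyent number:

```python
# Illustrative math for idle, paid-for capacity. All numbers are
# made-up assumptions for the sketch, not real vendor figures.

def idle_cost(rack_price: float, amortization_days: int, idle_days: int) -> float:
    """Depreciation burned while hardware sits uninstalled."""
    return rack_price / amortization_days * idle_days

# A hypothetical $500k rack amortized over 5 years, idle for the
# industry-average ~100 days from delivery to productive use:
print(f"${idle_cost(500_000, 5 * 365, 100):,.0f}")  # ≈ $27,397 per rack
```

Compressing those 100 days to one, as described above, cuts that carrying cost by roughly two orders of magnitude.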
It's like, it's not a day. I keep saying it's like, it's not a day. It's like hours.
And Steve's like, no, you say an hour, you would say one hour. Come on, Steve, it's hours.
And it's also like, it's not like, well, what we're aspiring to, it's like what we've done.
And so I'm like, Steve, like, can you give us, and it's just like, look, can we just say a day?
And I'm like, it'll, I mean, if it takes
hours, like it'll be done in a day. I'm like, they'll definitely be done in a day, but it's
actually, and this is where you get to the real payoff of having rethought all of this, having
designed it holistically, just like that iPhone unboxing experience is really quick and smooth.
That Oxide unboxing experience, de-crating experience, is too. And the reason that it's possible is because with this whole thing, we have all the hardware and all the software.
And so when we actually do our initial install of the software,
we effectively go through our own recovery path of like,
assuming you've got nothing on the rack and we go from literally nothing on the
rack to you can provision within hours.
I mean, it is like, I think it's standing at like 90 minutes right now. And actually,
do you know what we are ultimately bound by? Is the UART speed inside of the sled when we're
transferring the most primordial image so the thing can bootstrap itself up and boot off the
network. In order to be able to boot off the network, you need to have enough of an image
that you can actually go boot.
And we are ultimately bound by that UART speed. I do love that the install experience around this is just eye-popping.
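The UART bound is simple arithmetic: a UART frame typically carries one byte per ten bit-times (start bit, eight data bits, stop bit), so throughput is roughly baud/10 bytes per second. A hedged sketch, where the image size and baud rate are illustrative guesses rather than Oxide's actual figures:

```python
# Back-of-envelope: why UART throughput bounds pushing the primordial
# boot image to a sled. All numbers are illustrative assumptions.

def uart_transfer_seconds(image_bytes: int, baud: int, bits_per_byte: int = 10) -> float:
    """Time to push an image over a UART.

    A UART frame is typically 1 start + 8 data + 1 stop bit,
    so effective throughput is baud / 10 bytes per second.
    """
    bytes_per_second = baud / bits_per_byte
    return image_bytes / bytes_per_second

# Hypothetical 4 MiB bootstrap image over a 3 Mbaud UART:
print(f"{uart_transfer_seconds(4 * 1024 * 1024, 3_000_000):.0f} s")  # ~14 s
```

Once that minimal image is across, the sled can boot off the network and pull everything else at network speeds, which is why this first hop dominates.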
And the folks that have been working on this are not necessarily, I mean, we've got some
folks who have suffered through the pain of Dell and Supermicro and HPE, but a lot that are actually coming just from like the cloud side
of things. And they're like, I don't know, like, I want to make this as great as it can be. Like,
I don't even, like, they don't know. It's like, no, do you know how far ahead you are of the state
of the art? And so this, when you initially install the rack and you plug into these technician
ports and do this original, because you'd have to have some initial configuration, right? You have to have some initial, before you
can actually just hit API endpoints and hit that web console, there's got to be bootstrapping.
And the actual software that does that is just gorgeous. And it's a, we think it's going to be a
wholly different experience. So, you know, Jared, to go back to your question, you're that CIO. If you look at what this product offers your internal customers, it's much more comparable
to the cloud than it is to the on-prem stack of garbage that you're currently suffering with.
Gotcha. Sorry, Homelab. No, no, no. I'm saying that's not Homelab. No, no, no. Sorry,
Homelab. I'm cool with that. No, the problem is actually we are running Homelabs in our DCs.
We are.
Everyone is.
Those are bold words.
It's time to get the Homelab out of the DC.
I think that's a good pitch.
We're trying to get the Homelab out of the DC.
That's exactly what it is.
Yeah.
To the earlier conversation, like the Homelabbers that go into these enterprise environments
are the rabble rousers.
They're the ones that are like shaking their fists.
Like, why can't we get better?
And it's interesting because our motion is not top down. It is, you know, these folks are some of the most load-bearing
folks in these organizations that are helping create the products that these companies are
selling to their customers. And they are saying, you know, how come we can't do better internally
so that I, we can focus on building better products for our customers instead of
being our own private cloud corporation. We had one company that we're talking to in the finance space that was like, we have a 500-person engineering operations team, and we have to put them out of business. And not get rid of them. We need to reapply those
folks to be able to work on the things that our customers
are waiting for and want.
But it is folks that are not going to necessarily sign the PO, but they're the ones that are
making the noise to get to the folks that do sign the POs.
And it's been great to have that kind of community support.
And it is that clarifying time when companies have moved certain things to the public cloud
and realized how much
less operational overhead there is to help sharpen like, wait a minute, how come we can't have that
same operational efficiency internally back to like the CIO and the CTO? It's like, wait, we can
vastly improve being able to focus our talented folks on our business and then give those
developers a much better developer experience,
which I think that's kind of the all important bit,
especially as the amount of importance placed on shipping new features, shipping new products, focusing on what their actual business is, has been super important.
Can you walk us through exactly what it's like to boot this Oxide rack for the first time?
Assuming said data center person has walked us to where it will go and says, this is where your Oxide server rack will go.
Watch the box.
Assuming that's already taken place.
We're there.
Jared, let's say Jared and I are there.
We're the administrators, the operators, whatever you want to call us.
We've got to provision this thing.
You say it takes a few hours. We slide it in. Maybe it takes a small forklift or several people. Maybe it's got wheels, I have no idea. This thing is not short. So let's just say it's there.
It's there. We're not worried about door spaces, how wide we've got to be, nothing like that. We're at the rack. It's not plugged in.
Is RFK with us? Do we have RFK here, or are we on our own?
RFK's unwrapped it,
because he comes to unwrap it. And we're ready to plug it in to the network, to power, et cetera,
and then boot it for the first time. Are we, you know, attaching our ethernet cable to a port on
this thing or a console port? Like what is the exact interface, the real details? Yeah, the real
details. So if you look at the rack and I think maybe you can go to see it on the website, but
there are technician ports at the front of the switch. So that is where you are going to plug in your laptop. Effectively, your configuration is going to need to be able to connect to your broader network.
That's going to be uploaded over that technician port. And then you are going to SSH into that
technician port. And you've got an install screen that's going to walk you through the actual
installation of that rack. And then you're going to, we've got to get a video out there of this
so people can kind of see it. And this is also where it's just like, we've got a very demo based culture. And so we do every Friday, we've
got what's called Demo Friday, where anyone can just demo anything to the company that's been
really, really important for us because it allows people who are doing things that are like pretty,
maybe pretty small in the stack to kind of get that appreciation of the peers.
But we had the demo on Friday of one of our engineers making this thing that is already
gorgeous, like even better. Steve, I don't know if you got a chance to watch John's demo, but it's
just like absolutely eye-popping. So, but we got to put a video of it out there so people can
actually see it. It was demoing yesterday. Oh, nice. And we didn't even have the latest. Back
to where we started, we were with a customer and John was like, well, as he started to go in on Oh, nice. Because back to the fact that they've sort of been ignored, having folks around them that really, really care about what is most painful or frustrating about their daily jobs and seeing a little bit of care and thoughtfulness go into these parts of the stack is really fun.
And Wicket is kind of part of this sort of set up service on the rack that gives you a visual of how many sleds do you have?
What is up?
What is not up?
And this is like, we're not over the web right now.
We can't be, right?
So this is all over SSH.
This is a terminal app.
So this is where actually one of those
like strange bounties of Rust.
So this is based on Rust's TUI library,
which is a terminal user interface builder.
And you can really easily build really robust, eye-poppingly beautiful terminal-based apps.
So this is, right, yeah, this is a terminal-based experience.
Um, I love it. And I think, to Steve's point, it's like one of these things where
we are going into these like little details that matter a lot to people who've been suffering and
one of the things that is really important to us at Oxide is the serial console for the virtual machine. So you provision
a virtual machine. How do you get into that virtual machine? If it itself, the guest has
borked networking or screwed up or even screwed up the image in some way, it's like you need a
great serial console. The irony of the cloud is that the serial console is actually like
more important than ever. And the serial console was something that actually even the biggest
public cloud providers don't take very seriously. And we have taken the serial console really,
really seriously. And one of the things that kind of fell out of our implementation
is you can have many people watching a single serial console and
participating in a single serial console. So you can share effectively. And I think this is going
to be one of these things that is just like our customers are going to absolutely love because it
is when you're dealing with one of these like low level issues, that's annoying. It's like,
oh, I've screwed up cloud-init in some way, and it's hitting the wrong thing or what have you. And no one else can log into it because that's the problem.
The ability to share out a serial console where everyone can log into the same serial console
and begin to get this thing debugged, which is a problem that everybody has. In the public cloud,
this is a problem that we have, right? And I think it's going to be one of those little touches that
we think people are going to really love because it's meaningful. It's not little. Well, I mean, it's actually like really, really significant, and it's going to have a material effect on the way people are able to do their jobs.
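The shared serial console described here is, at its core, a fan-out: one byte stream from the guest's serial port, mirrored to every attached viewer. A minimal in-process sketch of that pattern in Python (the real implementation lives in Oxide's control plane; this toy ignores networking, backpressure, and detaching):

```python
# Toy sketch of a shared serial console: every byte from the guest's
# serial port is fanned out to all attached viewers. Illustrative only.
import threading

class ConsoleBroadcaster:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._viewers: list[list[bytes]] = []  # each viewer gets a byte buffer

    def attach(self) -> list[bytes]:
        """A new viewer joins the console; returns their buffer."""
        buf: list[bytes] = []
        with self._lock:
            self._viewers.append(buf)
        return buf

    def write(self, data: bytes) -> None:
        """Bytes from the guest's serial port go to every viewer."""
        with self._lock:
            for buf in self._viewers:
                buf.append(data)

console = ConsoleBroadcaster()
alice = console.attach()
bob = console.attach()
console.write(b"login: ")
print(alice == bob)  # True: both viewers see the same stream
```

The debugging win in the transcript falls out of exactly this shape: because the stream is mirrored rather than exclusively owned, several people can watch the same boot or the same broken cloud-init run at once.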
So back to the boot up. Yeah, I don't think, Adam, I don't think we took you, or Jared took you, all the way.
Not deep enough. Take me to the TUI. So, yeah, I'm already in the TUI, and I've uploaded this config.
You know, I'm in this thing.
What do I see as initial operator?
Like, am I, you said this is your own OS,
so it's like.
Yep, so you are seeing the,
it is telling you like,
I'm going to give you a root of trust image,
a service processor image and an OS image.
And I've done this for each of these sleds.
And this is now in progress for each of these sleds. One of the challenges is always how do you deliver a
beautiful interface that's also transparent and gives people the details that they need when
things go wrong. So we very much have designed this with that in mind. So you're seeing its progress,
but you can also get as much information as you want about what's actually happening.
And where are we actually in terms of what's actually going on in the system.
Again, one of the big advantages of us being more transparent, open source, like we want you to know if this thing goes wrong, where it went wrong and what happened.
You've got all these details, but what's actually happening? And then truthfully, that takes like 20 minutes.
You can do all that in parallel.
That kind of all comes up. And then your configuration, provided that you've been able
to actually connect to via BGP and you've got external connectivity, which you've got to deal
with one's own internal network to do that, and we've got the ability to get an NTP server and so
on, you're up and you're going to go hit a web console and you're going to go provision.
That web console then is going to walk you through a workflow to go get set up with your IDP.
What's IDP?
Identity provider.
Okay.
Yeah, your identity provider.
So again, in enterprise environments, you've got usually like a SAML-based auth environment.
And whether it's Keycloak or some larger, more unwieldy Microsoft products,
we were not going to go try to replicate all of that. These are established authentication and
identity validation mechanisms. And so integrating into that so that you have kind of a pretty clean
workflow for being able to get that stitched together. And now you have, you're the administrator. So now what you are doing is
setting up a silo. And that is kind of a boundary for, because one of the other important aspects
of this is being able to operate a multi-tenancy. And I know like multi-tenancy gets thrown around
a lot, but the necessities of having both delivering kind of quality of service guarantees
to customers while having complete
isolation is one of the very complicated and hard elements of running a cloud.
And something that has been very difficult for extant systems providers to get right who are
selling on-prem, even some of the kind of hyper-converged folks that entered the market
in the last five, 10 years, this notion of multi tenancy is a pretty tricky one to get right. But in the Oxide system, you're basically setting up
a silo or a number of silos, depending on your customers that you're serving. So you, Adam,
have like two different departments and you would have those sort of departments in their own kind
of boundary. And then it's as simple as inviting them in
and those users can then come in
just like they're hitting EC2
or AWS. They
can set up their credentials
and create a project
and they're off and running. They can go
deploy instances directly. They can do it
via the API,
CLI, or the web console.
Yep, web console. Okay.
And off they go.
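The silo, project, and instance hierarchy Steve walks through can be sketched as a toy data model. This is purely illustrative (the class and method names are made up, not the actual Oxide API): the point is that the silo is the tenancy boundary, and only invited users can act inside it.

```python
from dataclasses import dataclass, field


@dataclass
class Project:
    name: str
    instances: list = field(default_factory=list)


@dataclass
class Silo:
    # A silo is the tenancy boundary: users and projects in one silo
    # are invisible to every other silo.
    name: str
    users: set = field(default_factory=set)
    projects: dict = field(default_factory=dict)

    def invite(self, user):
        self.users.add(user)

    def create_project(self, user, project_name):
        # Self-service, but only for members of this silo.
        if user not in self.users:
            raise PermissionError(f"{user} is not in silo {self.name}")
        project = Project(project_name)
        self.projects[project_name] = project
        return project

    def launch_instance(self, user, project_name, instance_name):
        if user not in self.users:
            raise PermissionError(f"{user} is not in silo {self.name}")
        self.projects[project_name].instances.append(instance_name)
```

So two departments would each get their own silo, and a user invited into one silo has no standing in the other, which is the isolation property being described.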
Are they installing Ubuntu at that point, or their flavor of Enterprise Linux, whatever they decide on?
Yeah, they can upload images that they want to run.
You can kind of promote those images to be available to everyone in the silo, or just to someone in the project.
So you have the ability to kind of select who you want to have
access to what, as say the project lead. The purpose of this is to enable those end users,
whether it's SREs, developers, et cetera, to be able to operate fully self-service, right?
It's like get out of the shadow IT where folks feel like they need to go swipe a credit card
because that's how they can move quickly and start giving them that same agency on-prem that they have in the public cloud.
And then from an operator's perspective, back to you as the administrator, your job is to keep them running, make sure that they have ample quota and that they are, you know, accessing the resources that they need, but you should not
have to be in the way in allowing them to kind of run and deploy software and run software,
much like the cloud. Very cool. Well, we started the show with saying that you've just delivered
your first rack. So congratulations again. How did you know you were ready to deliver? How did you know this was hardened to the point where you could deliver on that promise? What did it take to get there? How bloody are your knuckles? How upset are people on the inside, to some degree, to get there? What did you do to know that this was mature enough to do that?
Yeah, I mean, so I think you always have a problem when you're co-designing hardware and software: you've got the things that you can kind of revisit and the
things you can't revisit. And you kind of said this at the top that when you ship that hardware,
that hardware leaves. Yeah. It's out of your control. So the hardware has to be
absolutely right. And you really need to drive that to be correct. And there are huge numbers of challenges there in terms of
getting the hardware is hard. And I think actually more directly, the details really matter. And a
very small detail can be the difference between hardware that works and a warm brick. And so
getting those details right takes a long time. There's a lot of iteration involved. We actually have been pretty transparent about that whole journey.
So we've got our own and Friends, Oxide and Friends.
The OG and friends.
The OG, yeah, exactly.
I think we can all be and friends.
We're all friends here.
We're all friends here.
Yeah.
I was telling Joe, I'm like, this is amazing.
They have this podcast called Oxide and Friends.
How novel.
Yeah, exactly.
Yes.
We've loved getting the team on there in their own voice. So we've been able to shed a light on some things that really have not had a light upon them. Getting the EE team talking about bring-up, tales from the bring-up lab, has been extraordinary. And compliance, regulatory compliance. When you have hardware, you can't just ship it; the FCC has to certify that you have not made something that's going to interfere with all the electronic equipment around it. And that's compliance. And by the way, the FCC has fixated on the state of the art, which is these 1U/2U systems. So it turns out when you're building a rack-level system and you walk in to go get compliance,
they are measuring you against these much smaller systems. And if you push back on that,
you're like, well, wait a minute, there's, you know, there's the density of two racks
running inside this one rack. This is the product. They kind of shrug. They're like,
I don't know, take it up with the FCC.
Oh.
And you just find that, you know, time and time again, there are few in the industry that are thinking at the rack level.
In fact, the only demographic that has to think at the rack level
are these end customers.
Yeah.
And that's not where you want to think about it at.
No.
Because that's where it's already baked.
That's the cake, you know?
Right.
You know, as we went through this, it's like you can see why this is hard.
And compliance was hard.
And we've got a great Oxide and Friends episode talking about all of our adventures in compliance, which, by the way, people never talk about; historically, what happens in compliance stays in compliance. Because for any company, going into compliance is tough. You're going to find things where it's like, we've got an emission at this particular frequency that we need to go understand and patch up. And so there's a lot of work. But once that's done,
you've got to have the software ready to go. And in particular, the software that is the most
important software to have ready to go is the ability to actually update the
software. So there are two elements of software that have to be perfect when you ship. One is
the actual root of trust and the ability to actually indicate that this is oxide firmware,
to actually sign that firmware and to put it on the root of trust and to lock down the root of
trust such that it can't be impersonated. That has to be done
correctly. And that's actually super complicated because that requires the generation of a secret,
namely the private key that we generate that is ultimately used to sign that firmware.
That's a secret. And how does Oxide keep that secret? And I am convinced
that many other companies our size are like, just lock it in the CEO's drawer and don't ever talk
about it again. But it's like, that's not really good enough. Because if this secret is going to
be used, if you could impersonate Oxide firmware in perpetuity with this, you actually need to go solve a really
thorny problem, which is how you generate this securely and how you store it securely.
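The verify-before-boot flow Bryan is describing can be sketched in a few lines. A big caveat: a real root of trust uses an asymmetric signature scheme (e.g. Ed25519), so the device only ever holds a public key, and the private key stays locked away. Python's standard library has no asymmetric crypto, so this sketch uses an HMAC purely to illustrate the shape of the flow; the key name and functions are hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for the "company-ending" secret. In reality this is
# an asymmetric private key generated and stored via a key ceremony; an HMAC
# key is used here only because the stdlib has no signature algorithms.
SIGNING_KEY = b"generated-in-a-key-ceremony"


def sign_firmware(image: bytes) -> bytes:
    """Run on the vendor's build infrastructure: sign the firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()


def root_of_trust_accepts(image: bytes, signature: bytes) -> bool:
    """Run on the device: refuse to boot firmware whose signature fails."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    # compare_digest avoids timing side channels when comparing signatures.
    return hmac.compare_digest(expected, signature)
```

The property that matters is the last line of the conversation: if anyone else ever learns the key, they can make `root_of_trust_accepts` say yes to arbitrary firmware, forever.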
And that's a whole thing. And so there's something called a ceremony. And this is a technical term,
right, in security spaces. And Steve, this is something that you and I learned a lot about; we did not appreciate the complexity. You've got to have that exactly correct. And that's a whole thing. You've got to
have the ability to update the software. That's got to be correct. The software's got to be able
to bootstrap itself. And then you've got to know the software that constitutes that minimum viable
product. And there's a whole lot. And by the way, software update is enormously difficult. This is very difficult even for Amazon. A good example of a company that does it really well is Tesla, and a company that is struggling because they don't do it well is VW. It has these very, very, very long shadows if you cannot do a good job of versioning and updating software. And it sounds trivial. I mean, it was the feature
that we had to make sure we had gotten right before shipping. And everything beyond that,
well, obviously, there's a huge amount of software that ships in this system. It's more software than
hardware, which is a bit counterintuitive because we've got a big hardware rack on the website.
And it's easy for folks to think about it as a hardware product, which it certainly is.
There's a whole bunch of software on there, but update is the fulcrum. I mean, that is the thing
that allows for all of the rest of the software to continuously be improved, to go fix things that
are wrong. And we were, again, very, very fortunate that we were able to attract folks that had been working on this problem for their career, very passionate about this problem, that were front and center on working on that.
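One common way to make update the fulcrum is an A/B slot scheme: stage the new version into the inactive slot, boot into it, and roll back automatically if it fails a health check. This is a generic sketch of that pattern, not a claim about how Oxide's updater actually works; all names here are illustrative.

```python
class ABUpdater:
    """Two software slots; the system runs whichever slot is active.
    A sketch of the classic A/B-with-rollback update pattern."""

    def __init__(self, initial_version):
        self.slots = {"A": initial_version, "B": None}
        self.active = "A"

    @property
    def running(self):
        return self.slots[self.active]

    def apply(self, new_version, health_check):
        spare = "B" if self.active == "A" else "A"
        self.slots[spare] = new_version   # stage into the inactive slot
        previous = self.active
        self.active = spare               # "reboot" into the new slot
        if not health_check(self.running):
            self.active = previous        # automatic rollback to known-good
            return False
        return True
```

The appeal of the design is that a bad update is never fatal: the known-good version is still sitting in the other slot, so the system can always get back to a bootable state.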
But as Brian points out, that's one of a couple of really, really critical things. And the product that we're making is one that we ourselves want to use. So we've got an Oxide rack that runs our software, which we are constantly updating and running ourselves.
We are the first customer.
We are the first customer. And this is always essential. You know, when you buy a product from someone and it feels like, are these guys using their own product?
Because it's like, this thing kind of sucks.
Yeah.
And if the engineers were forced to use their own product, I think it'd be a lot better.
And we are a big believer.
It's something that was instilled in me early in my career at Sun, where there was a real turning point, and you talked earlier about ZFS. One of the early moments for ZFS was us storing our own home directories on ZFS.
And, you know, I'm very proud to be in that first batch of whatever it was,
eight people that had all of their data on ZFS.
And because we had to go all in first. And that machine, Zion, was a machine
that we all volunteered to be on. Part of the reason that we've deployed on ZFS at Oxide is
because I've been on ZFS for whatever it has been, 20 years. And when you've walked that trail with
your own infrastructure, you have a level of confidence in it because you've been using
it yourself. And so we are using our product ourselves. And there's so many things that have
come out of our own use of it, where we have obviously discovered all sorts of issues that
need to be improved and so on. But it's also given us the confidence to know that, you know, what we're building actually works. And to pull this whole thing together required a hardware rack that was to the point that it could really be used. We needed a lot to be in place to be able
to even use our product ourselves. And boy, the first demo day that one of our engineers actually
did, and you could kind of see him working himself up to it. And Lukeman on our team and Steve,
I know you'd been like DMing Lukeman to see if we could
actually demo the whole rack together. And that moment where all of a sudden we had all oxide
software running on all oxide hardware and being able to demo that for the whole company is so
catalytic and so energizing. And to realize, like, every single one of us at this company had a hand in what was demoed today.
And how great is that?
When was that, that demo?
How far back was that demo?
That demo was, I mean, because again,
you need all this,
like the stuff has to go through compliance first,
right? Compliance was in January.
So, you know, it was in early April
is when we were able to actually pull everything together
and then start iterating really quickly.
And fortunately the software had been developed in parallel. This was this year, 2023.
Yeah. But if you go back to some of the other milestones of strongly believing that this bird
was going to fly, if you go back to the first bring-up of the first board, we did a de novo design on the board. You kind of find there's these reference architectures for server boards
that everyone in the industry uses. And if you break the mold, which is again, based on this sort of, it's a PC mold from the
80s. If you break that mold, you're kind of in the wilderness. And you find that this stuff is
very poorly documented for the reasons that we have reference architectures that everyone runs off of. And so that first bring up on that first board was in 2021.
It was September 2021, right?
Yeah, October, because we were getting it up through October, November.
And then another big, big, big one was because, again, remember,
like early 80s is when the PC industry outsourced BIOS and firmware.
And companies like American Megatrends came up
because it was IBM and the clones.
All the clones, everyone was consolidating
around this outsourced model of,
let's have one company or a set of companies
write the firmware for all these machines
so we don't have to.
And the outgrowth of that is you've got
this massive proprietary opaque blob of software
on enterprise machines that is not very well
qualified and definitely not understood. So ripping all of that out and writing a de novo
set of firmware in Rust and getting that to boot on an x86 board was actually maybe the riskiest thing we did. And when that booted up, that was another like, holy, we might make it.
This bird's gonna fly, man.
It works.
And that was a while ago.
That was a long time ago.
Then I would say, like on the software side, we've been working on the control plane, the
hypervisor and all that.
It had to happen long before we had hardware.
So there was another early demo from Sean Klein on our team.
And Steve, let's remember when that demo was: demoing all of the software, not on Oxide hardware.
So this is on commodity hardware.
And that was another moment of everyone being like, holy, we're going to pull this thing off.
And that was a year and a half, two years ago.
That's a long time ago.
Yeah.
So on the one hand, it all came together on the rack in April.
But this has been going on for a long, long, long time.
Because it takes a long time to do all this stuff.
One other demo that was amazing.
And you could just tell the two engineers, James and Greg, that were doing this demo were just like so giddy.
They could barely.
But they did a good job of playing it off like it was just another casual demo.
So they had a Minecraft server running.
And they're chatting up about their Minecraft activities and who's doing what.
Running in the oxide rack, to be clear.
Yeah, running in the oxide rack. And one important aspect of any kind of cloud infrastructure is the ability for you to move workloads uninterrupted.
So you need to be able to tolerate live migrating
things around. And so we're watching this demo and they're small talking and just giddy to give
the final reveal. And at the end of this Minecraft banter, they had been demonstrating our live
migration. They'd been migrating stuff all over the place with no blips in gameplay.
And again, it was kind of another,
because just there's a bunch of aspects of this
that you need to go kind of stress test.
And it was yet again,
another one where the whole company
on demo days sitting there
just like gobsmacked
that this capability was running
as well as it was under the hood.
And live migration is one of those,
like, again, little things
that if you don't do, if you don't build into the first product,
then you have these islands of compute that you can't do anything about. And it's very,
very important that we're able to migrate things around so we can reconsolidate the rack,
so we can service it, so we can pull sleds, so we can add sleds. It's like you need to have
this capability, but it's got to be built into the very lowest DNA of the product.
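The Minecraft demo is showing off the classic pre-copy live migration algorithm: copy memory while the guest keeps running, re-copy whatever it dirties, and only pause it for a final, tiny transfer. Here's a toy simulation of that loop; it's a sketch of the general technique, not Oxide's actual hypervisor code, and `guest_writes` is a made-up callback standing in for the running guest.

```python
def live_migrate(src_mem, guest_writes, max_rounds=10, stop_threshold=2):
    """Pre-copy live migration sketch over dicts of page -> contents.
    guest_writes(round_no) simulates the running guest: it mutates
    src_mem and returns the set of pages dirtied during that round."""
    dst_mem = {}
    dirty = set(src_mem)                   # round 1: every page is "dirty"
    for round_no in range(max_rounds):
        for page in dirty:
            dst_mem[page] = src_mem[page]  # copy while the guest runs
        dirty = guest_writes(round_no)     # pages written during the copy
        if len(dirty) <= stop_threshold:
            break                          # remainder is small enough
    # Stop-and-copy: the guest pauses only for this final, tiny transfer,
    # which is why the players saw no blip in gameplay.
    for page in dirty:
        dst_mem[page] = src_mem[page]
    return dst_mem
```

The key tuning knob is `stop_threshold`: stop iterating when the working set of dirty pages is small enough that the final paused copy is imperceptible.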
And then we bring it all the way to today, and we are going to be finding things at the edge of Oxide in the customer environment, some of which are smooth and some of which have sharp edges.
And the next six weeks and six months and six quarters are going to be continuing to smooth that out and continually improve that so
that the product is even easier and getting better as we go. Yeah. So Steve, you mentioned that you're
in a hotel room. I'm not sure if you mentioned before we hit record that you're actually on site
with a new customer and getting messages now, like this exciting start of the day,
messages are coming in. So surely you're going to learn a lot today, probably, you know, and ongoing.
Yes, I may have been, uh, going to my DMs occasionally during this to see how things were going.
It's all good. You played it very smoothly. Thankfully, nothing must be that much on fire, because we have had one guest have to just run out in the middle of the show before, and I wouldn't have blamed you if you had to. But happily, you haven't had to.
I may have muted once or twice, but yeah.
No, it's exciting.
You know, when you look at on-premise
versus not on cloud, is that synonymous?
And the reason I bring that up is like,
the question really is, is who is an Oxide rack for?
Like what type of customer?
And the second question, I suppose, is this shift for 37signals to move off the cloud: should they have bought an Oxide rack? Given, you know, the prolific move from the cloud... you talked about rental earlier, Steve, and how it obviously doesn't make sense to live in a hotel forever. Is that the same song, basically? Should 37signals be a customer, or are they a customer type for you all?
Who should buy these things?
Yeah, I think probably, but I would want to have a conversation with DHH first and make
sure to understand what their explicit use case is.
And this first product of ours is not intended to be applicable to every single
use case on-premises. It's focused first on general purpose compute. So we are definitely
going to have hardware acceleration in the product in future iterations, but there's a large swath of
workloads that are well-suited for this. And it's a lot of the on-premises workloads today. By the way, I own a home and I'm
staying in a hotel room. So it's also like, there are the right kind of accommodations for the right
use case. But the general customer set that we're talking to and that we're engaged with and
that we're serving right now are large organizations, typically. So you've got kind of Fortune 1000 regulated industries,
you've got a lot of large institutions that are going to have a lot of need for rental public
cloud computing, and also are going to have a lot of on-premises IT infrastructure that they need to
support for the next couple decades, as far as the eye can see. And you even ask some of these folks,
like the most ambitious public cloud adopters,
how much of your workloads do you expect to have in a public cloud only model in five years? And
it's hard to find anyone that will even say north of 50%. Is that right? So you have this just
massive, massive, and these are measured in hundreds and hundreds of millions of dollars
in both places, right? And again, still having to pick from these kind of 1980s architectures
that Brian mentioned
and deal with having to then find software.
Is that software provider
that I'm using today getting acquired
by maybe a megacorp
who's going to raise prices?
And so the large kind of institutions
and large enterprises
are the demographic
that we are focused on the most right now
because those are the ones that have reached out and said, hey, we have spent a lot of time and
energy on our public cloud strategy over the last five years. And now we're kind of turning that
ray gun on premises and figuring out how we modernize and how we improve that.
There's another group that is really interesting and we spent a bunch of time with, and that is
the large cloud SaaS companies. Companies that were born in the public cloud, they themselves are now spending as much as large enterprises in the public cloud.
And I think the thing that I don't like about the whole 37signals discourse is this cloud
repatriation. It's like, it's time to leave the cloud. It's time to go back to on-premises. And
I think that's totally the wrong conversation. What's really interesting
is when you talk to these large cloud SaaS companies, they're not saying like, oh, we got
to get out of the cloud. It's a racket. We can do all this for less. We can do it better than the
cloud. Like, yeah, good luck. You're going to do it better than AWS does it. No, it's conversations
that are around how do we grow and go get access to more of our customers' data?
In this financial regulated industry, we've got 10% of this four-letter bank's data.
How do we serve that bank and help them use our products for 100% of the data?
Well, in order to do that, we've got to extend our platform closer to where that customer is
for a bunch of their data. And we can't do that by cobbling together a kit car of five
different enterprise providers and building a 500 person engineering team. And that's where,
you know, we've had some really, really rich conversations with these folks where they're
excited that they've got a vertically integrated appliance that they can land their cloud SaaS
platform on top of and go deliver that into a colo, an exchange, places where a lot more of this
customer data lives or these customer use cases live.
And so we're really excited about that use case because that now allows Oxide in a way
to help extend enterprise software beyond just public cloud use case to a bunch of these
other markets.
And yes, they will be customers of Oxide.
We will be partners because we're going to be, you know, there's kind of a nice virtuous
cycle here where it can be kind of a helpful distribution channel that also help these
companies to improve latency, you know, grow revenue.
And those use cases are much more interesting than like, oh, is the pendulum swinging back
out of the public cloud
and back to on-premises?
It's like, that's kind of the wrong way to think about it.
I've got two really quick questions,
and then we'll let you all go.
Sound good?
Yeah.
My first one is, where are you guys storing this secret?
Ooh, yeah.
That's a good one.
We actually do want to do an Oxide and Friends.
Clearly, we're not going to tell you exactly where the secret is stored,
but I think we do want to go into some,
because I think it's like the technical details are really interesting.
I think it's important that we talk about the dot matrix printer
that gave its life for the secret.
Oh man, I like the sound of that.
Most gave some, some gave all for Oxide.
And that matrix printer sacrificed itself for the greater good.
Met a Dremel that it wished it had not.
It lived a short but important life.
Was this like a scene out of Office Space, you know, where they take it out back and...
It goes beyond that, though.
You can't do that.
Oh, yeah.
Because we thought, we're like, oh, this is gonna be like PC Load Letter, and we're gonna take it into the field and we're all gonna... It's like, no, no, no. This is gonna be taken apart surgically and destroyed surgically. So in particular, the Dremel goes through the microcontroller. Because this dot matrix printer... why did this dot matrix printer have to die? Because it printed out the secret. It saw the secret.
It saw the secret.
Yeah.
So it might be like,
you can see the secret,
but then I have to kill you.
You know,
the dot matrix printer has died and the,
the secret is stored,
attended by armed guards.
So the,
you know,
fortunately, society has some,
some apparatus for storing such things.
So which landfill is this thing in?
Yeah,
exactly.
That's right. We got, we got Russ Hanneman out there looking for the thumb drive.
It's a good, good question. Ultimately, that ends in a safe deposit box at an unspecified
institution. That's what I figured. In an unspecified country.
Ultimately, it has to. But I think the apparatus there is really interesting,
and it's something that we actually want to get into in the future.
It was really fascinating.
Just like all of the precautions that you take and that are really important because the secret is super, super important.
The secret is company ending.
And you have seen this from there are vendors that have lost control of their signing keys.
Really?
Yes. Oh, my goodness. MSI. This has happened.
MSI? Wow.
Yeah, it's happened recently. MSI lost control of their signing key, and it's like, you're done. It's game over. You can never know what you're actually running on. You can't trust what you're running on.
Right, right.
And it is really, really important. So we've treated that, uh, with great care and great rigor.
And then, for any customer that asks, another really interesting aspect of this is that we documented this process really thoroughly. So for a customer, we obviously can't tell you what the secret is, but we can be very transparent about all of the steps that we took to go secure it.
So there's a very crisp audit trail.
So we know exactly who was there, how it was done,
all of the steps and procedures that we took.
You've got when it was done and so on.
So it's pretty neat.
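A crisp audit trail like the one described (who, what, when, in order) is often made tamper-evident by hash-chaining the entries, so that editing or deleting any past entry breaks every hash after it. This is a generic sketch of that idea, not a description of what Oxide actually does for its ceremony records.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry


def append_entry(log, who, what, when):
    # Each entry commits to the hash of the previous one, so the whole
    # history is fixed once written.
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps([who, what, when])
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"who": who, "what": what, "when": when,
                "prev": prev, "hash": digest})


def verify_chain(log):
    # Recompute every hash from the start; any tampering surfaces here.
    prev = GENESIS
    for entry in log:
        payload = json.dumps([entry["who"], entry["what"], entry["when"]])
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

An auditor can then replay the chain and confirm exactly who was there, what was done, and when, without the log keeper being able to quietly rewrite history.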
That's cool.
You guys should publish that ceremony,
like not the details, but just the general flow
and like how to really keep a secret kind of thing.
That'd be a cool blog post or GitHub repo or something.
Oxide and friends. There you go. Well, your hub has got to be the podcast, right?
And everything else is the spokes. So I agree with that. That's right. Put on the podcast first.
While we're on the podcast conversation,
I think podcasting's moment has kind of passed in terms of there was a time where it was like
everybody had to have a podcast. And I feel like people have kind of moved on, is the general consensus. But brands have wanted to have podcasts. Some have podcasts. It seems like it's
a great thing for a brand. So many of them make podcasts that nobody wants to listen to. And you
guys have a podcast that everybody wants to listen to. You're also a brand, so to speak, you're a
company. And I'm just curious, like, do you have a strategy? Is there like a strategy around this?
It's just like, you just like to talk on microphones? Or is there a content strategy going on here, or is it just, we like to talk on the microphone?
We should talk about On the Metal first, because I think that was the first version of the podcast. And the strategy, such as it was, behind that... it was also selfish, in that we wanted to talk to people that had been there as computers were built over the last couple of decades, and we found
that there was not a lot of recorded history of it.
I mean, obviously thousands of books written, but there wasn't a lot of audio kind of telling
the stories of computing in the seventies and eighties and nineties and two thousands
and even more recently. And we were seeking and kind of were fortunate to run into or know folks that were at that
hardware software interface in the earliest days of Honeywell and Intel and getting them
on record telling those stories.
I think we had a pretty good instinct that this was going to be content folks would want
to listen to.
But that historical themed kind of how we got here, like why we are in the state we are,
was really compelling. And I think strategically, the thing that was clearest in our mind
was that there are other technologists out there that would like to join us.
And they're going to be folks that we've never met. They're going to be out of our network.
And the podcast was a way of putting the content in front of them that we knew was compelling
and we think that they would find compelling too.
So the initial strategic thrust was, this is a way to help build the team.
Yeah.
It felt like it was a bit of a bet, but not much of one because it just felt like this
was pretty obvious.
I don't think we were expecting just how quickly it would bear fruit.
So we got that first episode out of On the Metal with Jeff Rothschild, who is an extraordinary technologist, founder of Veritas, very early Facebook.
First VP of engineering at Facebook.
Early Intel.
Yeah, it was early Intel way back in the day.
And Jeff's extraordinary.
And he was so generous with his time and really terrific conversation with Jeff.
That podcast drops.
And six hours later, I've got someone coming in on LinkedIn saying, I just listened to
the podcast.
I am leaving Facebook.
We've got to talk.
And that was Arjen Rutzauer, who is one of our founding engineers.
Arian was the first one that was like totally out of network for us. But Arian is such an
important part of who Oxide is. And we share values with Arian because he was attracted by
the podcast that we put out there. And he's like, the folks that make this, I want to talk to these people.
And early on with those stories, we knew would be attractive to the kind of technologists.
What we knew, the thing that we knew that I think investors didn't necessarily know
is that the world, technologists, customers knew that it was time for this company.
And that if we could put the bat signal out there saying, hey, here's what we're doing, come join us, we knew that
technologists and customers would raise their hand. And so that's kind of the strategy, such
as it is behind the podcast was, it's a way of getting that bat signal out there. By the way,
it's doing it in a way in a vector that we just love.
We love podcasts.
We love listening to them.
We, we think it's a really important vector.
So yeah, On the Metal was huge for us, but we didn't talk about Oxide at all, except for a couple of advertisements. Because we just, you know, recorded a couple of tongue-in-cheek ads, and listeners, after they had listened to, you know, the 10th or 12th On the Metal and gotten the same ads, started just protesting, like, please God, change the ads. We actually had one listener submit an ad for us and just say, just use this. Like, we'll start creating ads for you. But we didn't talk about Oxide at all.
And I think the morph into Oxide and Friends was not specifically just to talk about Oxide more. There's plenty of topics on there that have absolutely nothing to do with Oxide, and some of the problems in this space no one talks about. No one talks about bring-up, because bring-up is ugly,
especially on first boards, first systems. And no one talks about compliance because,
again, there's a lot of warts. It's ugly. And folks are scared to expose that to their customers. They're scared to expose that to the market. And what we've found is that that
transparency has actually endeared us to this demographic of customers because they love that they get to see it all. They kind of get to see where it came from, how it was built,
who built it, why they built it. And that level of transparency where even myself,
like five, 10 years ago in my career, you're always like, ah, do we really want to share this?
Do we really want this out there? And you're thinking of all the downside, right? And once you start sharing stuff and you see that positive feedback loop,
it emboldens you to want to share more and more and more. And I can say we are definitely not
at risk of sharing too little. No, not at all.
I mean, it's all contextual. That's the problem. Like people get so scared about sharing. I mean,
obviously the printed secret with the dot matrix, that one you keep, you know, very close. Yeah. But like your ideas, some of them are worth keeping close to the vest, but not like secret forever. And they're all contextual. Like, what you are doing is maybe drastically different than what most are doing, and they're not in the same space, so they can't just, like, transplant this great idea I heard on this podcast from Steve and Brian, and bam, my company's successful. It's just not like that. You know, so many people are just not building in the public
and not like literally sharing every possible secret thing ever. Like there's some things that
you do keep that just should remain private, but like most of it, just put it out there.
Cause you'll probably attract the better people you want to work with anyways.
You just made a really important point, which is like someone that you might worry about
wanting to take an idea and go do it.
You find that some of those people actually join the cause.
They want to join you.
Or they become customers
instead of wanting to go build for themselves.
And they're like,
hey, I don't want to take on all that risk you all did.
Like everything you all did, that's amazing.
I just want to work with you all,
not instead of you all.
100%.
That's right.
And I think also we knew that our customers, because we'd been our customers, that the
customers in this space for on-prem computing have been gaslit by their vendors.
And their vendors are not just not transparent, they're deliberately opaque.
When you are responsible for running that infrastructure and the system is misbehaving, you feel that everybody is lying to you or otherwise obfuscating what you know to be the truth, namely that the system is not working. We knew that a real differentiator for us would be that transparency. And we've gone to
an extreme that I think is terrific in providing this bright light into these things that have not
had a light upon them. And that's not just opening up all the
software, although we've done all that too, but is getting all these engineers to talk about
the actual real experience of getting this stuff done and brought up. And actually, I think it just dropped this morning: there's a GOTO Chicago talk that I gave on the rise of social audio. So Jared, you were saying that kind of the time for podcasting maybe has passed.
I think we are in a golden age for social audio.
I think social audio is really, really important.
I think it captures something different
than we get through these other media.
And so actually, Oxide and Friends was born on Twitter Spaces.
So- Yeah, I remember you telling me about Spaces back last time we talked, you were
big on it.
And I don't think you were making it a podcast back then.
It was just Spaces only, wasn't it?
We started recording really early.
So we realized that early on, but unfortunately we didn't record the first one.
That's a bummer actually.
Well, what we learned is actually someone did record it.
Oh, they did.
Always be recording.
And always be recording.
I absolutely agree with you, Adam.
Always be recording.
Always be selling.
Just transplant it to "always be recording."
Always be recording is an Alex Bloombergism and a-
Oh, is it really?
That's Alex Bloomberg.
That's Ira Glass, This American Life.
Always be recording.
I didn't know that.
I thought I invented that.
Jeez, this whole time.
Just have a good idea like somebody else.
Okay, fine.
There you go. And it is really important, because it's a different medium. So, social audio... this GOTO Chicago talk I gave is on the rise of social audio and why it's important for engineers. So, what I would like to see... I think people focus too
much on, oh, I need to create like this well-edited, well-produced podcast. Obviously,
love the changelog. That's great. It's a lot of work too. Social audio, throwing a Discord out
there, recording it, and throwing it out via an RSS feed is not a lot of work actually.
And getting engineers, in I think any company, getting technologists, getting people that are solving real problems together, to talk about the struggles they had together solving these problems in detail. Because one of our problems societally is that we have done too good a job of insulating one another from the details of what we're building. And
as a result, like when people look at the phone, it just feels magical. When they look at the cloud,
it feels magical because we've been insulated from the actual
details and from the humanity that's involved in building these things.
So I think it's actually really important that we talk about these details so we can
let people know that, by the way, yes, there are people that are still building computers.
And yes, it's interesting and it's hard and it may speak to you.
Maybe you're
interested in these details intellectually. Maybe you're interested in these details at a deeper
level where it's a deeper calling. And I think one of the disservices that we have done to young
people especially is to imply that everything's been done and everything's solved. And it's
definitely not. And we're all out here solving real problems, but we need to be transparent
about that so people can get engaged
and see that. So sorry, that's a much bigger answer, I think, than you're probably anticipating,
Jared. No, man. I like that answer a lot. All answers are good answers. We are big,
big social audio proponents, not on Twitter spaces anymore. Thank you. No thank you on that.
I want to get off Mr. Musk's wild ride, but we are on a Discord that we then record. And that's actually been really important, because it gives you a chat vector.
So you've got people who can type comments, and then you've got people speaking on stage. Which is really, really helpful, because it allows people to participate.
There are lots of people that want to participate in the conversation, but don't actually want to raise their hand and speak.
And on Twitter spaces, the only way to participate in the conversation was to actually like take the mic and speak.
It's really nice on Discord to have people be able to, like, point to links or contribute to the conversation in a way that doesn't require them to speak.
And then if they want to get on stage, they can get up on stage, too.
So it gives you that flexibility.
Huge proponents of social audio.
And yeah, again,
this GOTO Chicago talk just came out today.
Is this your next company you're going to try and build? Right? Or is this just like a... it's such a...
I think it's like open source, actually. Open source is not a business model. Open source is a technique, a tactic, something you do as part of building a different kind of business. And it's the right way to build a different kind of business. Open source is not a business model for us.
Open source is something that we do as part of who Oxide is.
For me, social audio is not a business.
Social audio is part of what we do at Oxide as part of who we are.
What Steve and I are in our nucleotide base pairs, we are this computer company.
The next business is this one
because we believe that we're building
a generational company.
Well, we got to, Adam,
in order to be able to release to the home lab
at our 2050 keynote.
Yeah, we have a lot of work to do.
I know, man.
You got to commit to 2050.
That's right.
You can't have another business.
2050 coming to a home lab near you.
What the heck will I be doing in 2050?
I don't think I'll be playing with it.
So you got to do it faster.
Can we do it like 2030?
Maybe I can do 2040 maybe.
But 2030.
We'll split the difference.
2040.
2040, but that's a last and final offer.
Yeah, let's shoot for 2030.
We'll take it.
All right, guys.
Thanks so much for hanging out with us.
This was fun.
Oh, this has been a lot of fun.
I love what you're doing here.
Yeah, this has been fun.
I like this Changelog & Friends thing.
Yeah, it's good.
Thank you.
Sweet.
We'll have to get you on to Oxide and Friends.
We'll do a crossover episode.
Happily.
We'd love to.
What you need to do is send us a rack
so we could test out and do fun things.
Let me truly speak contextually.
Or you know what?
Here's one better.
Come full circle.
Invite us to your next customer install.
And as media, we'll come there and help you document some of the stuff.
We'll do some fun stuff.
That'd be fun.
Why don't you come to our first customer install at Oxide?
Okay.
When's that?
Is that in the past?
Up to Emeryville.
We've got live running kit.
We've got the whole history of boards kind of laid out.
Yeah.
Oh, that'd be fun.
It'd be great to have you up.
Cool.
Let's do that.
All right, guys.
All right, friends.
Thank you so much.
Bye, friends.
Bye, y'all.
Jared, you got to bone up on Silicon Valley, man.
Silicon Valley.
I got a lot of work to do.
Thanks, Zachary.
Get on it.
Come on, Donald.
All right.
See you.
Thanks.
Come on, Donald.
That's awesome.
I think I made my case pretty clear during this podcast.
So Oxide, Homelab Edition, Oxide the Home Cloud, whatever it might be called.
At some point, I'm rooting for Oxide to dominate and really just serve a ton of value to the full server rack marketplace that really needs it to have the cloud on
prem, the cloud in their data center, not someone else's cloud.
Okay.
So once again, thank you to our partners, Fastly, Fly, and also TypeSense.
And those beats from Breakmaster Cylinder, just so good.
So good.
Well, it's been good having you here today.
This is it for this episode of Changelog & Friends.
But hey, come back next week.
We'll see you again soon and talk some more.