The Changelog: Software Development, Open Source - Flavors of Ship It! (Interview)
Episode Date: August 21, 2024. Flavors of Ship It on The Changelog — if you're not subscribed to Ship It yet, do so at shipit.show or by searching for "Ship It" wherever you listen to podcasts. Every week Justin Garrison and Autumn Nash explore everything that happens after `git push` — and today's flavors include running infrastructure in space, managing millions of machines at Meta, and what it takes to control your 3D printer with OctoPrint.
Transcript
three,
two,
one.
What's up?
Welcome back.
This is The Changelog.
We feature the hackers,
the leaders,
and those who are shipping all that awesome software out there.
And speaking of shipping,
Jared and I are taking the week off and bringing you various flavors of ship it to enjoy.
Yes,
we have a podcast called Ship It.
If you're not a subscriber yet, you can subscribe at shipit.show
or by searching for Ship It wherever you listen to your podcasts.
Every week, Justin Garrison and Autumn Nash explore everything that happens after Git Push.
And today's flavors of Ship It include running infrastructure in space,
managing millions of machines at Meta,
and what it takes to control your 3D printer
with OctoPrint. A massive thank you to our friends over at fly.io. Guess what? Fly GPUs
are here. The wait is over. Check them out at fly.io slash GPU. Okay, let's ship it.
What's up, friends?
I'm here with a new friend I made over at Speakeasy.
Founding engineer, George Hadar.
Speakeasy is the complete platform for great API developer experience. They help you produce SDKs, Terraform providers, docs, and more.
George, take me on a journey through this process.
Help me understand exactly what it takes to generate an SDK for an API
at the quality level required for good user experience, good dev experience.
The reality is the larger your API becomes,
the more you'll want to support users that want to use your API.
And to do that, your instinct will be to ship a library, a package,
and what we've been calling an SDK.
There's a lot of effort involved in taking an API that lives in the world
and creating a piece of software that can talk to that API.
Building SDKs by hand is a significant investment.
And a lot of large companies might pour a lot of money into that effort to create something that approaches good developer experience. And then another, growing group of companies will rely on tooling like code generators. Once you make the decision to use a code generator, you're kind of forfeiting some of your own opinions about what you think a good developer experience is, because you're going to delegate that to a code generator to give you an SDK that you think users will enjoy using.
Okay.
Go to speakeasy.com.
Build APIs your users love.
Robust SDKs.
Enterprise-grade APIs.
Crafted in minutes.
Go to speakeasy.com.
Once again, speakeasy.com.
All right.
Thank you so much, Andrew Gunther, for being on the show today.
And today we're talking all about shipping in space.
And so welcome to the show. And my first question is, when you have some code that's running in
space on a rocket ship, and it's a class that's maybe undeclared, is that an unidentified flying object?
Oh, that's a boo. That's a boo for me, dog. We don't even have all the context here. What do you do in space? I have so many questions.
I literally was up last night and I thought of that. Like, I woke up and I was like, that's the joke. And I'm like, oh, I'm such a dad.
Anyway, he didn't wake up with a line of code, he woke up with a dad joke about space. I love it.
I get that, though. We all go through that phase. He's like, I feel you.
It's okay.
So anyway, bad jokes aside, Andrew, tell us about yourself and what you were doing at Orbital Sidekick.
Yeah, for sure.
So I'm Andrew Gunther.
I work for a company called Orbital Sidekick.
So Orbital Sidekick operates a constellation of hyperspectral imaging satellites.
And basically what that means is they have these cameras that can see way outside of
the visible spectrum of light so they can effectively perform spectroscopy from space.
So gases that would normally be invisible to the naked eye are things that their cameras
can see.
And so their primary market right now is customers like oil and gas,
who are like, hey, let us know if our pipelines are leaking.
So OSK basically processes their own imagery,
determines where leak sites are, and forwards those on to customers.
They have customers in government who buy raw imagery,
and they're looking to expand out into other industries.
As you can imagine, with these kinds of cameras,
there's all kinds of cool stuff you can do. You can monitor plant health. You can help with mining
prospecting. So very, very cool technology still in the early stages. Three satellites in orbit
right now. Two more launching in March-ish. Don't have exact dates yet, but a little bit more about me. So I am principal software engineer at Orbital Sidekick.
Prior to that, I worked at AWS for seven years.
So basically, I left AWS, joined OSK as lucky employee number 13, and got to build a lot
of their ground segment systems from the ground up.
As time went on, I got to be a little bit more involved and help out on payload
side as well. So kind of the V1 of all of OSK systems, I got to sort of touch and then moved
into this role of wherever the fires are, I moved around to put those out. And hopefully not physical
fires. I mean, these are spaceships and rockets.
No, thankfully, no physical fires.
Okay, we just became best friends. Like, I'm so excited right now. I'm trying to parse the amount of questions in my brain, because that's how excited I am. Okay, so you're building the software that processes the images, but also, are you building the software that is on the satellites?
So it's interesting.
OSK is a company of about 30 people.
Do you need 31?
Because like...
Applications are open.
Okay, cool.
It's interesting.
So being that small, we have to work with a lot of vendors to sort of pull things together.
But the payload design and a lot of the core software for image processing, we write ourselves.
What language do you write image processing in?
So the image processing on ground is all in Python, and the firmware for the sat is C++, with some Python mixed in. So one of the big value props for OSK is that we try and
perform some of the imagery analysis on board the satellite before it even comes to
the ground. Yeah, because you have this incredibly wide spectrum imagery, the data is huge. I mean,
we're talking these satellites can bring down one and a half terabytes of imagery per day,
per satellite. And so part of the idea is the more processing we can do on board to understand
what imagery might be a priority versus not a priority really helps us get that information to our customers faster.
So there's also this aspect of the analysis we write on ground should be analysis that we can hopefully perform on board as well.
That is so cool.
We ship NVIDIA hardware up to space.
We are running an NVIDIA dev board in low Earth orbit.
I'm just thinking of NVIDIA drivers now, and I'm like, oh, that's the worst. Like, trying that on a Linux embedded system... well, how are...
Oh yeah, a Linux embedded system that you never get to touch again, right? It goes up and you're locked in. If something goes wrong, how do you fix it when it's in orbit?
Yeah, I mean, this really gets into redundant systems, right?
So for a lot of the components on board, there's at least two, our own components included.
So our own dev board, I forget if there's one or two of those, but there's kind of like a main control computer that exists separate from ours that kind of handles a lot of the, you know, the boring stuff like pointing
the satellite and doing the actual hard work of transmitting data back down to the ground.
And then our dev board basically handles all of that image processing, sending commands to the
camera. So effectively we, we have capabilities to like fail over from one component to the other,
or if we're rolling out an upgrade, you know, we roll it out to XCOM 2 and then we primary swap to XCOM 1.
And so it's almost like an A-B test in space, right?
You are kind of like a canary.
So you upload it to one of the XCOMs, you swap over to that, make sure everything still works.
Great. Roll it up to the second XCOM.
Everything still works. Great.
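For a rough sense of that canary-style swap, here's a minimal sketch; every name in it (upload, swap_primary, health_ok, the xcom labels) is an invented stand-in rather than OSK's actual tooling, and the point is just the order of operations:

```python
def upload(unit: str, image: str) -> None:
    print(f"uplinking {image} to {unit}")          # stand-in for the real uplink

def swap_primary(unit: str) -> None:
    print(f"{unit} is now the primary computer")   # stand-in for the failover command

def health_ok(unit: str) -> bool:
    return True                                    # stand-in for telemetry checks

def rolling_upgrade(image: str, primary: str = "xcom1", standby: str = "xcom2") -> None:
    """Upgrade the standby first, swap to it, verify, then upgrade the old primary."""
    upload(standby, image)
    swap_primary(standby)          # the upgraded unit becomes the canary
    if not health_ok(standby):
        swap_primary(primary)      # revert: the old build is still on the other unit
        raise RuntimeError("canary failed; reverted")
    upload(primary, image)         # canary healthy: bring the second unit up to date

rolling_upgrade("payload-sw-v2.tar")
```

The key property is that a known-good build always remains on one of the two computers until the new build has proven itself.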
I feel like you have to write really good code because and you have to really think about your hardware because you never get to touch it again, you know, and you could miss
a picture. 100%. And this is one of the crazy things. I mean, even in a startup, like an
aerospace startup, the dev cycle on hardware is super long. So, you know, a lot of the hardware
was designed and locked in and figured out before a lot of people got hired. Like the hardware was decided before I even got hired.
Which is crazy.
You said you were number 13, right?
Yeah.
That's like really early.
Yeah.
So there's a lot of, you know, by the time these things go up, like, you know, you've
got three new generations of NVIDIA dev boards that have come out.
You're running like Ubuntu 18.04 in space for the next, you know, half decade.
Yeah, long-term support has a different meaning when it's flying around the world.
Yeah, it's like L-L-L-L-T-S, long, long, long-term support.
Well, it's funny, because people kind of make jokes about NASA and different places in the government, and how they use outdated technologies. But when you really think about it in context, there's a reason why they're still using that
very reliable technology because, hey, you can't go change it every year. Yeah, because you got to
test the hell out of it before it goes up. And it's so interesting to kind of see this boon of
aerospace startups. Like before I came to OSK, like I didn't work in aerospace. Like I said,
I was at AWS. And I also had the draw of like, I want to work on space stuff.
Like that sounds awesome.
And seeing this smashing of startup culture and aerospace, like you have this culture
that wants to move incredibly fast and this culture that's traditionally very slow, trying
to like figure out like where, where does this all meet in the middle?
How do we speed this process up, become more agile? And that was, to me, one of the most interesting things to observe.
First of all, that's really cool. When I think about 30 people, three satellites... right, like 10 people per satellite in space, and you're going to have a couple more. That's the opposite scaling of what I think of for running systems, where it's like, one sysadmin can do 100 machines. But a satellite takes so much time and process. What does that actually look like for you, when you're going to make another satellite and it's going to go out next year? What is that lead time for what you're writing today? What decisions are you making around libraries and code? And then how do you get feedback for that? How do you make sure that that thing that you think is going to be accurate next year gives you any kind of feedback loop?
Oh man, there's so many great questions to unpack here.
So I'll try and go at it one at a time. So one of the saving graces to some degree is that as we
launch more satellites, they're all based on the same hardware designs, like very minor, minor
revisions between them, right? Like you have a satellite that works, like don't mess with it,
continue to launch more of the same. So, but then also on the flip side, right, when the first one goes up, and we realize like,
ah, we really should have done some things differently, as we learn, the iteration cycle
is even slower. So there's a lot of things that we have to kind of deal with on ground, and we're making notes of additional concerns for what the next-gen hardware is going to look like. And when you talk about what kind of packages we're going to use, that's a huge concern of ours, right? Again, it's running 18.04, in space. We're trying to do machine learning and data analysis, and a lot of those libraries move very fast; they're very quick to drop support for older operating systems. So, you know, we have to make the call as a small
team, are we going to compile these ourselves? Like, are we going to build our own versions of these dependencies to maintain them?
So we're very cognizant, especially on the onboard data processing side of what libraries
we pull in. I mean, more so than anywhere I've ever been, because not only is maintenance a
concern, size is a huge concern. Pushing software updates to space is hard, right? It takes a while.
You're going to test the hell out of it and you want to make sure that it works. And so I like to
pick on Node.js because you have like the NPM package system, right? Where just everything
sprawls out to infinity. You install- They're ridiculous. 65 warnings.
Just like what? Yeah. You install one thing and suddenly now you've got like 100
gigabytes of dependencies. And even in Python, right, we have to be really careful of that.
Like, what does our dependency sprawl look like? And we've made conscious decisions to say, you
know, that sprawls out a little too much, like we're not going to use it. And something we really
try to hold to our own frustration sometimes is parity between space processing and ground processing.
So there's a library where it's like, all right, well, we don't want to ship it to space.
Are we going to use it on the ground? I don't know.
Maybe now we've kind of separated these paths and it makes it harder for us to verify results between the two.
So those are the kinds of things that we have to think about.
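One way to picture the kind of pre-flight check this implies is a script that totals up everything a candidate library drags in before it gets approved. This is a hedged sketch using only the Python standard library; the 500 MB budget and the numpy example are invented, not OSK's real numbers:

```python
import re
from importlib.metadata import distribution, PackageNotFoundError

def installed_size(pkg: str, seen: set | None = None) -> int:
    """Rough on-disk size of a package plus its declared dependencies."""
    seen = seen if seen is not None else set()
    if pkg in seen:
        return 0
    seen.add(pkg)
    try:
        dist = distribution(pkg)
    except PackageNotFoundError:
        return 0                                   # optional extra we never installed
    size = sum(f.size or 0 for f in (dist.files or []))
    for req in dist.requires or []:
        name = re.match(r"[A-Za-z0-9_.\-]+", req)  # strip version pins and markers
        if name:
            size += installed_size(name.group(0), seen)
    return size

mb = installed_size("numpy") / 1e6
print(f"numpy pulls in roughly {mb:.0f} MB")
assert mb < 500, "too much sprawl to ship to the sat"   # invented budget
```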
It's interesting that to some degree, even space decisions can slow down ground decisions to
some degree.
It's really interesting, because you're kind of developing in the paradox of all developer angst, I guess. Okay, so you want to build something with low dependencies, something that's not going to be vulnerable, something that's going to last for a long time. But then how do you pick that? You have no control over the software life cycles of other people. And you have to know, like, three years in advance.
That's what I'm saying. And how do you account for vulnerabilities? You're going to have to patch things eventually. So how do you patch in space? There's going to be a CVE for something.
Yeah, I don't want to fall into the trap of it's in space, so it's safe, right?
Right. The attack surface is a little different than what a typical server might have.
Yeah, it's a very different attack surface, but I will kind of pull that card a little bit, at least as far as the Python side of things goes, that that system is very isolated.
Like it's not, we're not running a web server up in space, right?
But I will, you know, to the point of security and CVEs happening in space,
Space Force is actually making a huge push.
They were at DEF CON last year.
I got to go and watch the,
they held the Hack-a-Sat hackathon,
which was very cool.
They actually, the Space Force launched a satellite
for a hackathon.
I still can't believe Space Force is like real.
Like every time someone
says it, it makes me happy. It's pretty amazing. Like, that sounds just so cool. Like I work at
Space Force, like no big deal. Like, right. Central Space Command, you know? Yeah. Like
Buzz Lightyear in real life. It's awesome. Yeah. So it's important to not fall into the trap of
like, we're in space, so we're safe, right? And especially in that startup
culture of like wanting to move really fast and compete with these bigger guys. That's something
that we're very cognizant of and trying to find those right balances. How do you make decisions
and what kind of tenets do you have to, I guess, develop? Because you both want to
develop quickly because everybody wants to innovate and develop quickly. And that's how
you get an edge on your market. But also like, how do you make that last for so long?
And then how do you do it? Like, I was writing an automation script and we were trying to get rid of dependencies. So it's like, okay, I won't use Pandas, I'll use the things that come with Python, the standard library, right? Trying to develop on that level, just for a small automation script, made it so much more complicated. So I can only imagine image processing.
It's a big push-pull, because you definitely want to
try and keep your space systems as simple as possible. And we're very much breaking that mold by saying we're going to do imagery analysis on board a satellite. And so it's
definitely something we're cognizant of. And we have this nicety that we can test a lot of things out in the ground segment.
We can use those libraries on ground initially before we make the call of, you know, this
is something that we want to run in space.
So let's retrofit.
We can use all those nice libraries, have, you know, 100 gigabytes of dependencies to
prove out those analyses on ground.
And then when we want to say, OK, this is high value, we want to run it on board,
we can take that step to say, all right, let's strip this back. Let's make this bare bones.
How do we, how do we leverage what's already on board to now ship this thing up to space?
That works backwards.
Yeah. Yep.
Fundamentally, a customer comes to you and says, I need you to look at something on the ground.
You're not running customer code in space, right? They're giving you a job to say like coordinates, please send me this data, right? Yep, exactly. You're going to go
point the iPhone 25 at these ground spots and get this image back. It's going to transmit down
just like a terabyte a day out of this. Like you're just taking pictures constantly and then
you process that a little bit more and then send them either raw data or whatever it is that they're
looking for, right? Like that's the general pipeline here. Yep. You got it. What was the
benefit there of not making the satellite a dumb client and putting intelligence in the satellite
and on the ground? Cause it feels like you could put that either place and you, you chose the
hardest decision to put it in both places. Like there has to be value that you're getting out of
doing that process before. I mean, if you're just sending a terabyte, like, I don't know, was that just a big antenna
with a satellite dish up there?
Just like beaming down death rays of pictures.
So that antenna that actually performs that one terabyte a day downlink, we get a pass
on that antenna every 90 minutes.
So if you have a really critical high priority workload and your objective is to deliver
some insight to a customer as quickly as possible, you might not want to wait that 90 minutes.
And even after that 90 minutes is up, it has to get to the ground.
It has to be processed then on the ground before it finally gets delivered.
So the idea being if we can prioritize because we're also not able to get down all the data we may have on board every pass.
So it's kind of twofold, right?
If you can say on board, hey, I have very high confidence that there is a methane leak at this position, that is a much smaller piece of data. And there are other antennas that we can use to
transmit that data more instantaneously. And then secondly, maybe you have a little bit less
confidence, but something is suspect.
You can say, all right, on our next pass, this data skips the line.
We're going to downlink that first.
And we're going to make sure that gets analyzed as part of this pass so we can get that information out as quickly as possible to customers.
But it is definitely the hardest of all the options.
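A hedged sketch of that two-tier triage: tiny high-confidence alerts go out immediately over the low-rate radio, while the underlying imagery is queued so the most suspect captures downlink first on the next pass. The thresholds, channel, and field names are invented for illustration, not OSK's actual pipeline:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Capture:
    priority: int                       # lower sorts first in the downlink queue
    label: str = field(compare=False)
    confidence: float = field(compare=False)

downlink_queue: list = []               # what goes down on the next antenna pass

def send_alert(msg: str) -> None:
    print("ALERT via low-bandwidth radio:", msg)    # stand-in for the real radio

def triage(label: str, confidence: float) -> None:
    if confidence > 0.9:
        # High confidence: a small alert goes out right away; imagery still queued first
        send_alert(f"possible leak at {label} ({confidence:.0%})")
        heapq.heappush(downlink_queue, Capture(0, label, confidence))
    elif confidence > 0.5:
        heapq.heappush(downlink_queue, Capture(1, label, confidence))  # skips the line
    else:
        heapq.heappush(downlink_queue, Capture(2, label, confidence))  # bulk imagery

triage("site-42", 0.95)
triage("site-17", 0.60)
print([c.label for c in sorted(downlink_queue)])    # downlink order: site-42 first
```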
You're right. So kind of like how they're using machine learning to look through ultrasounds, but you're using it to like basically prioritize data from satellites to bring that down
first. Yep. It's really cool. How do you debug that? Like, is that only like you have a dev box
on your desk and you're saying like, Oh, I think this is what's happening, right? Like at some
point when you debug something, you just have to kind of poke at it. But I can't imagine like that
latency, you have a 90 minute window. I don't know how long that window lasts, but you're like, Oh, I got a shell for 89 seconds.
Like, I got to jump on the box and poke at something.
Yeah, you can SSH into space
for like a hot five minutes and take a look around. So, you know, there has to be some planning ahead
of time. Like if you want to run some set of debug scripts, you know, you're going to want to know ahead of time
and just run that in an automated way
rather than just like maybe having a terminal open.
And which we've done, we've done,
especially after the Sats first went up
and we were trying to better understand
the characteristics of the first one
and just get a sense of what was happening live.
There was a lot of like, all right, time to
SSH. I can only imagine the like constant TMUX session. That's like, you're like, Oh, it's coming
back up around. I got, let me connect to it again. Hold on. That's, that's just amazing.
We don't have space for TMUX, man. It's a fresh shell every time.
Oh, then your SSH hangs and you're like, dang. Yeah, yeah.
That is exactly what I would hope to be.
Yeah, it's the dream.
There's so many challenges in this.
What would you say is like one of the things that stood out to you as like something you didn't expect?
Because I mean, you're going into this knowing this is going to be a hard thing to do.
There's a lot of variables and things at play.
What is something that surprised you
about shipping software into orbit that you're like, wow, I didn't see that one coming?
Yeah, I think, you know, and maybe I was uniquely naive in that, you know, I think everybody has
that vision built up of like how NASA does things, right? And you imagine clean rooms and every like
this perfection, and just everything is immaculately tested. And I'd say it's not that there aren't problems, but you have that vision of that much slower pace. And I think what was surprising to me is the speed at which we can move, and the amount of chaos that introduces, and that it's okay.
There was a lot of thought put in up front around those failure modes, and understanding, and basically protecting ourselves, our future selves, so that when things do get chaotic and things do break, we have the levers that we can pull. So it's not... I mean, there are clean rooms, but then you do a test and it's like, oh man, we need to route this connection somewhere else, and somebody just takes a drill to a frame and they're like, all right, let's send it to space. That just kind of shatters your view, right, of the way NASA does things. And I think that kind of goes to
what I was saying earlier about this meeting in the middle of this startup culture wanting to move
fast and that entrenched aerospace culture of moving very, very slow, right? If we can launch
a satellite for say like $5 million, why are we going to run a $5 million on-ground test for that satellite,
or a $10 million on-ground test for that satellite? We can launch three for that price,
and if one of them works, we're great. So I think that's kind of where that push and pull
really comes into play. And I think that was really surprising to me, was just how much
leniency there was
towards moving fast. Like I didn't expect it to be able to move as fast as we've been able to move.
I wonder what the culture was like in the sixties, right? When it's like we are landing on the moon
because that like they had to move fast, right? Like my grandfather actually worked on the Apollo
missions, which is just like his pictures were absolutely amazing. And it was, I never got to hear stories from him of like what the culture was like,
but I can only imagine that like, at some point you're just like, no, this has to happen this
decade. Right. Like someone's, you know, so someone said we're going to the moon and the
fact that like so many people and so much funding and money was in place to do that.
And now on the opposite side where it's like, no one told you
to do this. Like no one told you this is the thing we have to do. And so the initiatives are very
different of like, Hey, we see where we can add value to people that maybe had to drive their
truck for two days to go see this pipeline or something like that. Like, Hey, I got you in an
hour and a half and you're going to get your images and they're gonna be processed and we'll
see all that stuff that maybe you couldn't see before. And that's just pretty amazing to be able
to add that value that quickly. Yeah, it's nuts. I mean, this is something that even 10 years ago
wasn't possible, right? Like launches have become way cheaper. You know, $5 million is a lot,
but in the grand scheme of like Silicon Valley VC money, that's not a lot. And it's become
super accessible for startups to launch payloads into space. It's high cap X still for sure,
but it's possible when it really just wasn't before. And I think to your point, we're seeing
that transformation in a lot of industries. For oil and gas, the state of the art was like once
a quarter, they would pay some kid trying to get their pilot's license to just like fly and look out the window of a Cessna.
And like, do you see any leaks?
Nope.
Right.
That's what we're going up against.
That's what we're replacing.
It just feels like such a huge quantum leap forward for that industry.
And we're seeing like we see that with customers, right?
They're super excited.
I mean, A, because it's space and it's cool. But also awesome that you've been able to add so
much value, but also like iterate faster and at a, I guess, smaller cost, you know,
even if $5 million isn't nothing, you know.
Still high CapEx, but lower CapEx than it used to be.
Yeah. I mean, like back in the day, the only people that launched anything into space was NASA, you know?
So the fact that it is even an industry that multiple people, like or multiple companies can do, you know, is kind of just wild in itself.
So my dad also worked in aerospace.
And when I told him that I was coming to his company, he was like, you mean you're just a bunch of guys and you put some satellites in space?
Like, yeah. Yeah, they just let us.
You just apply for your FCC license and, like, let NOAA know.
And they're like, yeah, go for it.
Which is like wild. There's like no space license or something.
Like, you know, it's interesting because there kind of is.
So it's governed by the FCC because they control the radio waves.
It's I got into like a whole conversation with somebody on Hacker News a while ago about this because I just find this fascinating.
Like how the U.S. finds really unique ways to have regulatory vectors over stuff like space.
Right. And the FCC is like the main body for that because they govern the airwaves.
So it's basically if you want to transmit within the US, you need an FCC license.
And if you're launching a satellite, you probably are going to want to transmit in the US.
So you need a license from the FCC to launch a satellite.
It's like wild, because I think through crypto and NFTs, we've seen what happens when we don't have regulations. But sometimes you're just like, where do these things come from? Like, the FCC is not what I would have guessed for your space license, you know what I mean?
Who would have even thought? But also, at the same time, you know, when you're a kid and you think of space, you think of having to do so much more to be able to launch something into space. And it's just wild that it's just like, check with the dudes who do airwaves,
and then you can put whatever you want up there.
Yeah, and even better,
the FCC issued their first fine for space junk
a couple months ago.
Oh, that's cool.
The Federal Communications Commission,
the champions of litter in space.
But it's also interesting though,
because so many things get launched, right?
And like, even if it doesn't go wrong, there's just so much that doesn't go up with your rocket, or, you know, whatever. They're made to have parts that break off. So did people even think about what we're going to do with all that at some point, and that we're going to collect all that?
It was just, let it all burn up. A lot of these satellites... like, our satellites don't have propulsion, so after X number of years, the orbit decays and they burn up, and that's game.
And do they really just burn up completely?
Mostly so.
What's the lifespan?
We're talking like servers are like seven years, right?
Like I can buy a server, put it in a rack and I hold it for seven years.
You launch a spaceship and... not a spaceship, I don't know the technical term.
It's obviously a satellite, because there's no propulsion or anything.
What is the lifespan of those first three satellites that you are?
So the actual, so there's a difference between like orbit decay and mission life because
the components on board in theory will go out long before the actual orbit will decay.
So I believe the satellites are slated to be a five year mission from the
onboard component perspective.
But this is like,
that's still kind of like NASA grade ratings,
right?
Ideally you get way longer than five years.
And then I think the orbital decay is closer to 10, 15 years. That's how long it'll take before they come down.
But do they completely like dissolve or like,
cause you know how like the rovers,
like one will be like,
it'll live way longer than they're supposed to.
And then one gets like too much dust.
And then like the like solar plates can't keep like powering it.
Also.
I cried.
Like I was so like in my feelings about the rover.
Like I was like,
I know it was so lonely and I was so sad.
I was so sad.
My kids are like, what's wrong?
And I was like, the rover.
You build up feelings around these things.
It's funny because each of our satellites we name after a sidekick.
Oh.
So the official designations are like Ghost-1, Ghost-2, Ghost-3.
But we call them Robin, Goose, and Chewie.
Oh my God.
What if Goose dies?
Oh yeah, like, Goose is ill-fated. Poor Goose.
Are you trying to make us cry, Andrew? Like, if you had to name it Goose...
That's definitely not the objective.
This is just a plug for that. We have a Slack bot that announces telemetry and new imagery, and it uses a picture of the appropriate sidekick and speaks as if, like, Goose checking in, got new imagery.
You have a Slack bot with pictures and it talks? Andrew, hire me. Like, wait, you get paid to do this?
Like, I literally stalk, like, James Webb and all the different satellites, posting like a crazy person.
Like, just like we're best friends now.
It's happening.
Like Starlink, do you have satellites and satellite communication to like help with
that 90 minute delay?
And will five satellites like reduce that for you significantly?
Yeah.
So we have a radio for satellite to satellite communication.
They're not enabled yet, but feasibly, yes.
The more, the more satellites you have, like the better network you have, and you can kind
of communicate, communicate in between.
There's also proposals kind of going through for a larger network.
So like we could, you know, over encrypted communication,
like talk to satellites that aren't ours and kind of like everybody working
together to get data down faster.
Would that be like the outernet? I don't know what you'd call that instead of the internet.
You guys have to name it something cool, after Space Force. You can't just... and, like, you named a satellite Goose. The bar is high.
Yeah, inter-satellite communication networks are definitely something that's up and coming and trying to get off the ground.
It's a bad joke.
I got the pun.
I was late on that.
Are you about to dad joke us?
Who's the jerks in space? Like, is Starlink... do they just litter everywhere and we can't see around them? Is there some other, like, another country that's like, well, they're not working with the FCC, so they just threw stuff up? If you can't answer, that's fine, but I'm curious now. Is there, like, space beef between satellite vendors?
Now you're going to get Andrew in trouble.
I mean, low Earth orbit is smaller than you think. And I would say, without naming names, the jerks are the people who are just launching tons and tons, which is pretty much anybody who's looking to offer satellite-based internet, right? Satellite-based internet takes an absurd number of satellites. It's easy to pick on Starlink because they were the first, but they're not the only.
And that's going to continue to crowd low Earth orbit, which, again, is the most accessible orbit for people like us.
And, you know, you can't predict an orbit out with accuracy multiple years ahead.
Like these things like they're going to collide at some point, like there will be collisions and there have been close calls. And what's crazy is like,
we got a call, like we actually got a call for one of our sats and they were like, Hey,
you're going to pass really close to a Starlink satellite. Just heads up.
How close is really close? Like, I think in space, you're hundreds of miles away... but no, this is really close.
It is probably close enough that somebody called, right?
Can you change the course of direction if you're going to get too close? Or are you just kind of out of luck and you're going to hit each other?
Us? No. Starlink has some rudimentary propulsion, so they can do some stuff. I mean, even the space station had to move to dodge... I think there was like a Chinese satellite, where they were like, Hey,
there's a satellite that we have.
Uh,
and you need to move the space station so that our satellite doesn't hit it.
Just move a whole space station.
No big deal.
Yeah.
If you could just like shift altitude control a little bit and like,
yeah,
just real fast.
But what's the heads up for that?
Is that like a,
you have 10 orbits and then you're done?
Or is this like,
Hey,
like 90 minutes.
How long does it take to move a space station?
Like this is wild.
I don't know the answer on the space station,
but for ours,
it was just this like tense hour and a half.
Right.
Cause we get that telemetry down and then it's like,
all right,
this is the orbit.
And so we're sitting there waiting and hope Starlink moves.
Yeah.
And then like 90 minutes later we get that ping and we're like, oh, thank God.
I could just imagine you're like wiggling the camera, like trying to focus back and
forth to like get out of the way.
Like maybe we can move something.
That goes back to, it is a lower CapEx, like $5 million, but still, that would really suck if somebody just runs into your $5 million satellite.
Yeah, just game over, right?
Yeah.
The work that you, like, I mean, I can't imagine how much work it takes to get them into space.
And then, like, the cost.
And then someone just runs into it really quick.
My bad.
Especially for us when it's like one of three, right?
That's a 33% reduction in our total capacity, which is like super meaningful to the business.
Each of these satellites matters for us.
In terms of the bullies in space, I do have one other very funny anecdote, because I have beef with the Vatican.
Hold on. Wow, that is a powerful person to beef with. Like, I'm here for it.
The Vatican has a space program. Fun fact.
What? The Vatican? Okay.
The Vatican has a space program.
You can read all about it.
It's called SPI Satellites, funny enough, but it's S-P-E-I.
It's Italian.
Cut them some slack. Okay, okay, okay.
The humor of it is not lost on me.
So they actually launched with, they launched on the same rocket as one of ours.
And so one of the processes you have to go through when you launch a satellite is you basically call up NORAD.
And you're like, hey, this unidentified object you're tracking in space, that's ours.
They know like who's who, what's what.
Well, they don't know who's who.
You've got to tell them.
Do they give you Santa's number when you call, though?
Yeah, yeah.
That's the first thing.
It's press one for Santa, press two to claim your satellite. That's basically the call tree order.
But every time they're like, the call options have changed.
Yeah, yeah, yeah.
The call options have changed.
Press one for Santa, two for satellites.
And so when we launched, the Vatican called NORAD and claimed our satellite incorrectly.
No, you got scalped by the Vatican.
We got scalped by the Pope, man. We got scalped by the Pope. It's the greatest meme of all time internally. Just like, dude, when you're a great-grandpa, you should be like, this one time I worked in space, and the Pope tried to steal my satellite.
Like, do you have baller work stories or what? Man, coming from the outside...
The conversation I had with our main space systems guy, of like, how do we get our satellite back from the Vatican? He's like, it's just a naming thing, it's not a big deal. I was like, no, tell me it's a big deal. I want to believe this is a huge deal.
You just wanted to start a fight with the Pope, didn't you? Like, you were like, send me to Italy, we will have this out.
A hundred percent. Yeah, we got beef.
So is your satellite forever... does NORAD always think your satellite is the Vatican's now?
No, we managed to correct this clerical error, and we've properly identified it.
Clerical. That was good. That was a pun.
I'm far enough into dad-dom that they just roll out and I don't even think.
I was gonna say,
are you a dad, Andrew? Because, dude, can we talk? I went to go talk at my kid's school, and they're like, oh cool, you're an engineer. But Andrew wins every time. My kids are like, oh, you build Java? And the only thing I can say is that Java builds Minecraft. That's the only cool thing. My kids don't care if I build Java. But when you get to say, I work in space, you win coolest career day dad ever, every time.
I do appreciate that I have a job that my kid kind of gets. I can be like, satellites, and she's like, yes, space. Like, understood. Space.
That's a whole childhood thing. Like, you know how they get into dinosaurs? Space is a whole thing, like a chapter in childhood.
But then she brings home pink eye, and I'm like, come on, man.
Dude, my kids brought home hand, foot, and mouth the other day. I'm like... I am currently on drops for pink eye, and it's just the worst.
Oh my God, why do they always get us sick? They're just like, oh, we love you so much, and we're so cute. Please don't include this in the outro.
I just imagine the episode is going to end with this like conversation on pink eye.
Dude, have you seen the meme where the like alien is like breathing in the like lady's face?
And it says when your kid's sick and they're like, I love you.
And you're like.
Okay, wait, before we leave, what's the craziest thing you've had to fix in space?
Oh, this is a great one.
So the craziest thing.
So I mentioned that we have multiple radios on board, right?
We have this like super high bandwidth one and it's one way.
And that's where that one and a half terabytes down comes from.
We have this kind of like satellite to satellite.
We have a much slower one.
That's more for like command and control sort of deal.
SSH.
Yeah, that's where all the SSH magic happens. And effectively, the way this is all supposed to work is, imagine like TCP, where your packets come down over this fast one, and then we send the ACKs back up the slower connection. And we could not connect over that slower connection when we first launched. And so you basically ran into flow control, where we would try and downlink imagery, and it would give up after like a few megabytes, because it's like, oh, I'm not getting any ACKs. And so I got pulled into that, and we basically had to... we pushed this really small patch up to the spacecraft to basically ignore ACKs, like pretend ACKs do not exist, and just blast this data down.
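As a toy model of that failure mode: a windowed sender stalls almost immediately when no ACKs come back up the slow link, while the patched behavior just keeps transmitting. The window size and chunk counts here are invented for illustration:

```python
def downlink(chunks: int, acks_arrive: bool, window: int = 4,
             ignore_acks: bool = False) -> int:
    """Return how many chunks get sent before the sender gives up."""
    sent = unacked = 0
    for _ in range(chunks):
        if not ignore_acks and unacked >= window:
            return sent                 # stalled: window full, no ACKs coming back
        sent += 1
        if not acks_arrive:
            unacked += 1
    return sent

print(downlink(1000, acks_arrive=False))                    # gives up after 4 chunks
print(downlink(1000, acks_arrive=False, ignore_acks=True))  # blasts all 1000 down
```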
Because, I mean, yeah, we're a startup, and we're trying to... we've launched our satellites, and investors and customers are waiting for those first pictures. And we're trying to, as quickly as possible...
Yeah.
As quickly as possible, get these things down.
So we ended up pushing up this patch to basically ignore the ACKs. And we ditched the file transfer client entirely on the ground, and we just started running packet captures. We just ran tcpdump on this thing, and built this catalog of terabytes of TCP dumps. And then we wrote a script that would basically analyze these and try to piece
together files from the TCP dumps across multiple passes.
So like the same file would get transmitted like 10 times because you can
imagine your packet loss from space is quite high.
So it was the most infuriating thing to watch.
Cause it's also this long tail.
Like, we didn't have the control to tell the satellite,
Like, oh, we only need these five remaining packets.
It would just blast down the whole thing.
So you would get like 50% on one pass.
Then on the next pass, 75, then 90, then 95,
then 99, then 99.9.
And because these bundles are encrypted,
you need the whole thing. Like, you can't be like, ah, screw that last packet. For encryption to work, you need the whole thing. And so we basically wrote this... um, we call it DJ Pcap. Yeah, a pcap file, read it in, and parse it.
Just spinning those pcaps.
Yeah, spinning those pcaps. DJ Pcap was just trying as hard as it could to assemble from these TCP dumps. That's how we got our first imagery. This issue has since been resolved. But the first imagery from our satellites was basically rebuilt through this crazy kind of bespoke process. And again, I think that kind of goes towards the whole theme of the space segment moves much slower than we can move on the ground. So we're always trying to think of ways, how can we deal with this on the ground?
How can we fix this on the ground?
And I think that's probably the most harrowing story out of all of them.
What a way to get started.
Yeah.
Yeah.
That was a stressful couple of weeks.
Man, it was for weeks.
And I just feel like it's so easy to mess up imagery, you know what I mean? Like, you need high resolution to really be able to do things.
Well, and then the even better wrench that got thrown into this is that some of the packets would be corrupted. So we would try and reassemble, and then we would de-dupe multiple copies of the same piece of imagery to make sure it was the same on multiple passes. So it was almost like you kind of needed two copies of an image to make sure that it was all good.
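A loose sketch of what a "DJ Pcap"-style reassembler could look like, reading the captures with scapy. The "trust a chunk once the same bytes show up twice" rule mirrors the two-copy check just described; the filenames are invented, and the real tool would also have to verify the stream is gap-free before trying to decrypt:

```python
from collections import Counter, defaultdict
from scapy.all import rdpcap, TCP

def chunks_from_pass(pcap_path: str):
    """Yield (sequence number, payload) for each data packet in one ground pass."""
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(TCP) and bytes(pkt[TCP].payload):
            yield pkt[TCP].seq, bytes(pkt[TCP].payload)

def assemble(pass_files: list) -> bytes:
    """Merge chunks across passes, trusting a chunk only once identical bytes
    have been seen at least twice (a single copy might be corrupted)."""
    seen = defaultdict(Counter)              # seq -> payload -> times observed
    for path in pass_files:
        for seq, data in chunks_from_pass(path):
            seen[seq][data] += 1
    confirmed = {}
    for seq, counts in seen.items():
        payload, n = counts.most_common(1)[0]
        if n >= 2:                           # the "two matching copies" rule
            confirmed[seq] = payload
    # NB: an encrypted bundle needs every byte, so a real tool would check the
    # confirmed sequence numbers are contiguous before attempting decryption.
    return b"".join(confirmed[s] for s in sorted(confirmed))

bundle = assemble(["pass1.pcap", "pass2.pcap", "pass3.pcap"])
```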
The other thing I will say is, a lot of this work we did at one of our vendor partners down in Sunnyvale, and I took it back to our office... they had Baja Blast there. And it was the first time I had ever seen Mountain Dew Baja Blast in the wild. And when I tell you, we completed that first piece of imagery and my coworker and I were just like, Baja Blast time. That is a core memory for me now: Baja Blast is success.
I love it.
Thanks, Andrew.
Thank you so much for coming on the show.
This conversation has been a rocket of a ride.
I had to get one in.
It's all right.
And I learned so much.
Justin, you're killing me.
Where can people find you to ask more questions?
Or I mean, I know orbitalsidekick.com
is the website for the company,
but I know you're available
or at least somewhat social online.
Where should people reach out and find you?
Yeah, so I'm CodeBrewed on Twitter.
I also hang out in the Ship It Slack.
I'll give you guys that Slack plug.
A great place to go.
Yeah, a great place to go.
So check that out.
I'd say those are probably
the two best places to get ahold of me.
And by people, he means me,
so we can be besties.
Oh, I'm also on Mastodon now. I'm on Mastodon also as CodeBrewed.
Awesome.
It was nice meeting you. That was so cool.
It was great to meet you as well.
Okay, friends, here are the top 10 launches
from Supabase's launch week number
12.
Read all the details about these launches at supabase.com slash launch week.
Okay, here we go.
Number 10, Snaplet is now open source. The company Snaplet is shutting down, but their source code is open.
They're releasing three tools under the MIT license for copying data, seeding databases, and taking database snapshots.
Number nine, you can use pg_replicate to copy data, full table copies and CDC, from Postgres to any other data system.
Today, it supports BigQuery, DuckDB, and MotherDuck, with more sinks to be added in the future. Number eight, Vect2PG, a new CLI utility for migrating data from vector databases to Supabase or any Postgres instance with pgvector. You can use it today with
Pinecone and QDrant. More will be added in the future. Number seven, the official Supabase
extension for VS Code and GitHub Copilot is here. And it's here to make your development with Supabase and VS Code even more delightful.
Number six, official Python support is here.
As Supabase has grown, the AI and ML community have just blown up Supabase.
And many of these folks are Pythonistas.
So Python support expands.
Number five, they released Log Drains, so you can export logs generated by your Supabase products to external destinations like Datadog or custom endpoints.
Number four, authorization for real-time broadcast and presence is now public beta
You can now convert a real-time channel into an authorized channel using RLS policies in two steps
Number three, bring your own Auth0, Cognito, or Firebase.
This is actually a few different announcements,
support for third-party auth providers,
phone-based multi-factor authentication,
that's SMS and WhatsApp,
and new auth hooks for SMS and email.
Number two, build Postgres wrappers with Wasm.
They released support for Wasm (WebAssembly)
foreign data wrappers. With this feature, anyone can create an FDW
and share it with the Supabase community. You can build Postgres interfaces to
anything on the internet. And number one,
Postgres.new. Yes, Postgres.new
is an in-browser Postgres with an AI interface.
With Postgres.new, you can instantly spin up an unlimited number of Postgres databases
that run directly in your browser and soon deploy them to S3.
Okay, one more thing.
There is now an entire book written about Supabase.
David Lorenz spent a year working on this book, and it's awesome.
Level up your Supabase skills, support David, and purchase the book.
Links are in the show notes.
That's it.
Supabase launch week number 12 was massive.
So much to cover.
I hope you enjoyed it.
Go to supabase.com slash launch week.
That's S-U-P-A-B-A-S-E dot com slash launch week.
So today on the show, we have Anita Zhang from Meta. And Anita, you are an engineer-d... manager-d is your title. Is that correct?
Yep.
I think that's fabulous as a Linux
user and a long time restarter of services. Tell us about what you're responsible for at Meta.
Well, I support a team that basically... well, my manager calls it supporting Meta's Linux distribution. I like to call it operating systems. Sounds better.
But we primarily contribute to SystemD,
to BPF-related projects,
building out some of the common components
at the OS layer
that other infrastructure services build on top of.
So you're the kernel of Meta's infrastructure.
We have like an actual kernel team to do the kernel,
but one layer up, I guess.
One layer above that.
So describe the infrastructure, describe the fleet.
I've been following what Facebook and Meta have been doing for a long time as a Red Hat
user at other places and seeing the upstream contributions.
But I know many people to this podcast may not know what that infrastructure looks like
and what you actually do.
Yeah.
I mean, we've been around a while. The company owns millions of hosts at this point, a mix of like compute, storage, and now the AI
fleet. Teams primarily work out of a shared pool. So we have a pool of machines called TW Shared,
where all of the container jobs run. There are a few services that run in their own set of host prefixes.
But for the most part, the largest pool is TW Shared.
A lot of our infrastructure to support this scale is homegrown.
I don't know anything off the shelf that's going to do a million hosts.
Yeah, me neither.
That's amazing.
So Meta has their own flavor of Linux, I guess?
No, we actually use CentOS for production, all of our production hosts, and even inside the containers we're using CentOS.
Desktops are primarily some flavor of Fedora, Windows, or macOS.
And what does that look like for what you're doing at the fleet level? Like you're
provisioning the OS or have some tooling to provision the OS. And from talks that you've
given that I've watched, you had a great talk at scale, by the way, if anyone wants to see that
talk, it's on the SCALE website. But like, you doing upgrades... like, if I want to upgrade a million
hosts, I was like, Hey, I need to roll out a new version of the operating system. That's going to
take a little while. Like there's, there's a lot of process and there's a lot of risk there, right?
Because like you could be causing other things to fail.
So how do you do that in a safe way and at that size?
You know, we've gotten a lot better at it over the years.
When I started, we were doing like CentOS 6 to 7.
And I think that probably took like a year or two to actually reach over like 99% of
the fleet. And there's always that trailing 1% that for some reason, they can't shut down their
services, or they don't want to drain, or lose traffic or things like that. But now we're able
to complete, I'd say like 99% of the fleet in a year or less. We started doing a lot of validation sooner.
So now we actually hook in Fedora ELN into our testing pipeline and we start deploying
parts of Fedora ELN and running like our internal container tests against them.
And so that has caught a few like system wide distribution changes that we'll be ready for,
like once CentOS, I guess now CentOS Stream 10 is going
to be released later this year. Describe Fedora ELN. Like why is that different than what you're
running? So Fedora ELN is, man, I don't know what exactly it stands for. It's Fedora something next.
So it's going to be like the next release of Fedora that will eventually feed into
things like CentOS Stream.
Basically like the Rawhide equivalent of like, hey, this is a rolling kind of new thing.
Yeah. But eventually that gets cut down.
How does that relate?
Or I'm actually really curious, like CentOS Stream, right?
When they moved to this rolling release style of distribution, how did that affect how you're
doing those releases and doing upgrades for those hosts?
Because you have to at some point say like, this is the thing we're rolling out, but the OS keeps going.
Yeah, I'd say the change to stream didn't really affect us much because we were already kind of
doing rolling OS updates inside the fleet. So when new point releases get released, we have a system
that syncs it to our internal repos and then updates the repositories.
And then we have Chef running to actually pick up the new packages and things and just
updates depending on what's in those repositories.
So the change to stream didn't really change that model at all.
We're still doing that, picking up new packages on like a two-week cadence.
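The pattern Anita describes might look something like the sketch below: freeze the synced mirror into a snapshot, then let the config management on each host pick packages up from whatever its tier currently points at. The paths, tier names, and promotion order are invented for illustration; Meta's actual system is homegrown:

```python
import shutil
from datetime import date
from pathlib import Path

UPSTREAM = Path("/mirrors/centos-stream")   # synced from upstream point releases
SNAPSHOTS = Path("/repos/snapshots")

def cut_snapshot() -> Path:
    """Freeze today's mirror so every host in a rollout sees identical packages."""
    snap = SNAPSHOTS / date.today().isoformat()
    if not snap.exists():
        shutil.copytree(UPSTREAM, snap)
    return snap

def promote(snap: Path, tier: str) -> None:
    """Point a fleet tier at the frozen snapshot; Chef-style agents on each
    host then install whatever this symlink resolves to."""
    link = Path(f"/repos/{tier}/current")
    link.parent.mkdir(parents=True, exist_ok=True)
    link.unlink(missing_ok=True)
    link.symlink_to(snap)

snap = cut_snapshot()
for tier in ("test", "canary", "prod"):     # in reality, gated by health checks
    promote(snap, tier)
```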
Do you guys use a lot of automation that you build in-house?
Yeah, we kind of have to.
The repo syncing reminds me, I had a project at Animation. We had RHEL, and we would sync all the repos internally. It all sits on NFS, and then we mount everything to NFS to pull in repos. And it was like a Jenkins tree of syncing jobs that would all run to register a system and pull down 300 or something repos that we would sync every night, like, okay, let's fetch all the files now.
Oh yeah.
And then squirrel those away somewhere on a drive, and then host them so that everyone else can sync to it, and then have it roll out to the testing fleet. It's a lot of data, and it's a lot of stuff you just have to maintain as packages get removed from upstream and you're using them in
places i'm assuming you have some isolation there because as far as I know,
most of your workloads are containerized on that, like on the twine,
on TW shared as like the base infrastructure, right?
Yep. So containers,
they don't get the like live updates that the bare metal hosts get.
So users can just define their jobs in a spec.
And for like the lifetime of the job,
the packages and things that go into it don't change.
I mean, there are certificates that also are used to identify the job.
Those get renewed,
but we have a big push to get every job updated
at least every 90 days.
Most jobs update more frequently than that.
Is that an update for like the base container layer or whatever they're building on top of?
Yeah. They'll actually have to shut down their job and restart it on a fresh container and
they'll pick up any new changes to the images or any changes to the packages that have happened
in that time. Can you describe TW Shared for the audience as well? Because that's one of the things that I think is really fascinating that you have your own container scheduler.
And as far as I know, all those containers are running directly with system D, right? Like you're
not having like a shim of like an agent. I mean, you have agents, but go ahead and describe it.
So I used to work on the containers team, the part that's actually on the host,
The whole Twine org consists of the scheduler, and there are resource allocation teams to figure out which hosts we can actually use and how to allocate them between the teams that need them. And then on the actual
container side, we have something called the agent that actually talks directly to the scheduler and
translates the user specification into the actual code that needs to get run on the host. And that agent sets up a bunch of namespaces
and starts systemd and basically just gets the job started.
And that's systemd inside the container?
Yeah. So the bulk of the work that is done in the agent, at least for the systemd setup, is it translates the spec into systemd units that get run in the container.
So if there are jobs, if there are commands that need to run before the main job, those get translated to different units.
And then the main job is in its own unit as well.
And then there's a bunch of different configuration to make sure the kill behavior for the container is the way we expect and things like that.
There is a sidecar for the logs specifically.
So logs are pretty important, as you'd imagine, to users being able to debug their jobs.
There is a separate service that runs alongside the container to actually make sure that no logs get lost.
And so those logs get preserved in the host somewhere.
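A rough sketch of that spec-to-unit translation. The spec fields here are illustrative guesses rather than Twine's real schema, but the systemd directives themselves (ExecStartPre, KillMode, TimeoutStopSec) are standard:

```python
from pathlib import Path

spec = {
    "name": "my-service",
    "pre": ["/usr/bin/fetch-artifacts"],        # commands to run before the main job
    "cmd": "/usr/bin/my-service --port 8080",
    "kill_timeout_s": 30,
}

def to_unit(spec: dict) -> str:
    """Translate a job spec into a systemd service unit, pre-commands and all."""
    pre = "\n".join(f"ExecStartPre={c}" for c in spec.get("pre", []))
    return f"""[Unit]
Description=Twine-style job {spec['name']}

[Service]
{pre}
ExecStart={spec['cmd']}
KillMode=mixed
TimeoutStopSec={spec['kill_timeout_s']}
Restart=on-failure
"""

# Inside the container, PID 1 is systemd; an agent would write this file into
# the container's filesystem and then start the unit.
Path(f"{spec['name']}.service").write_text(to_unit(spec))
```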
Twine sounds really cool too.
I was reading the white paper about that yesterday.
How does that work with like the sidecar?
I would assume, I've never really actually done this side,
like systemd inside the container running on systemd.
So if I log into a host, not the container,
I see just services all the way down, right?
They just look like standard systemd units.
They're just isolated from each other.
Is that right?
Yeah.
So the container like job, it will be like one systemd unit and you'll see a bunch of
processes in it.
And you'll also see a couple of agents that we run, but mostly just the usual like systemd
PID1 inside the container and like their own instance of journald, logind, and all that stuff.
And that was the question I actually had.
I assumed that journald would handle the unit logging,
but you say there's a sidecar that I'm assuming is getting that logs out
to journald on the host, or at least some way
so that you don't lose those logs inside the container.
Yeah.
That's cool.
At that point, it's just native systemd, really.
You're just using every feature of systemd to isolate and run those jobs.
And then you have an overarching scheduler, resource allocator, all that stuff.
Yeah, pretty much.
One of the things that I found super interesting in the white paper was host profiles, where for different workloads you basically virtually allocate clusters (entitlements, I guess, is what you call them, for lack of a better word) for like, hey, this job gets this set of hosts. And then you can dynamically switch those hosts to different kernel parameters, file systems, huge pages.
And you have a resource allocator that does that, as far as I understood.
How does that affect what you're doing?
Like you have a set of host profiles.
You say, hey, you can pick from a menu and then we know how to switch between them. How does that typically work?
So that part's a little newer than from the time I was in containers. But you create a host profile, you work with the host management team to do that, and then you can, I believe, specify it in your job spec. And then when you need to either restart your job or move the job around, they actually
have to drain the host.
Most host profiles require a host restart.
It's things like huge pages.
You need to restart the host to apply.
And then the jobs get started back up on the host with the host profile you're asking for.
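To make the flow concrete, here's an illustrative-only sketch of drain, reconfigure, reboot, reschedule. Every name in it is hypothetical, since the real scheduler and host-management machinery is internal to Meta:

```python
# Illustrative-only sketch of the drain -> reconfigure -> restart flow
# described above. All names and steps here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HostProfile:
    name: str
    kernel_cmdline: str          # e.g. "hugepagesz=2M hugepages=4096"
    needs_reboot: bool = True    # most profile changes require a restart

@dataclass
class Host:
    hostname: str
    jobs: list = field(default_factory=list)
    profile: str = "default"

def apply_host_profile(host: Host, profile: HostProfile) -> None:
    # 1. Drain: the scheduler moves the host's jobs elsewhere first.
    evicted, host.jobs = host.jobs, []
    print(f"drained {len(evicted)} jobs off {host.hostname}")

    # 2. Apply the profile; things like huge pages only take effect
    #    after a reboot, so the host is restarted while empty.
    host.profile = profile.name
    if profile.needs_reboot:
        print(f"rebooting {host.hostname} with '{profile.kernel_cmdline}'")

    # 3. Jobs that asked for this profile in their spec get scheduled back.
    print(f"{host.hostname} ready for jobs requesting profile {profile.name!r}")

apply_host_profile(Host("host001", jobs=["web", "cache"]),
                   HostProfile("hugepage-db", "hugepagesz=2M hugepages=4096"))
```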
How does that affect you as the OS team?
Like, is there anything that you're doing specifically for that?
Not specifically, but they do. So the host agent team actually builds a lot of their components on top of systemd as well. They've been doing things like moving more configuration out of Chef into the host agent, where it's more predictable. So things like systemd-networkd configs, or the sysctl configs that also go through systemd as well.
Is that a Linux penguin on your sweatshirt? Because that's the coolest sweatshirt I've
ever seen. Oh, yeah. Yeah. The tux hoodies. This is the one that Justin was talking about.
That is so cool. Yeah, they had them at SCaLE and I was very jealous, because they're cool.
And this is an audio podcast, so no one knows what we're talking about.
But basically, it's a bunch of little small tuxes inside the hood of the hoodie.
That's so cool.
If anyone from scale is listening, they probably have a hoodie.
I'm sad that I missed your talk at SCaLE. It was on my schedule, and then, I forget what we were doing, but we somehow ended up somewhere else. And I was super sad to miss your talk. Do you get to contribute a lot to open source? Because Meta seems really big on contributing, or releasing things for free, I guess.
Yeah, I'd say at least the way the kernel team and our team operate is that we're mostly upstream first.
So everything that we write, we write it with the idea that we're going to be upstreaming it.
And that's how we manage to keep our team size small so that we don't have to maintain a bunch of backports, things like that.
At some point, you have to wait for it, though.
You're like, we're going to write this internally.
We're going to hope this gets upstreamed.
And then we have to either wait for the release to consume it, or we're just going to keep
running it.
But then if upstream needs changes, you have to kind of like merge back to it.
Yeah.
So the kernel, we actually like build and maintain internally.
So we can kind of pull from the release whenever we want.
And we can kind of do the same thing with CentOS too, because we all contribute to the
CentOS hyperscale SIG.
Any bleeding edge packages that we want to release immediately go into the hyperscale SIG.
It's really cool that you guys contribute to upstream first, but also kind of maintain your own stuff.
So that way you can kind of pick and choose if you want to pull something in. If it's a bug fix that you need earlier, you can already apply that.
I mean, I'd say Meta is super into, you know, releasing frequently. And so if we always stick to upstream, then we'll always get the newest stuff, and we're less likely to run into some obscure bug from two years ago that's really hard to debug.
How do releasing frequently and a million hosts go together? Because you mentioned that it takes about a year to basically roll out an update to every host. But if you're pushing out updates to the OS every month, then you have 12 different stages of things that are going through release, and that makes it really hard to debug. What version are you on? Where did we fix that bug? How do you manage that?
Yeah, so it's mainly the major upgrades that take up to a year. So, you know, when we're about to go from CentOS Stream 9 to 10, that will probably take a lot longer than if we're just doing our rolling OS upgrades. The thing about CentOS is that we do maintain kind of like ABI boundaries. So we expect
that the changes that, you know, Red Hat and CentOS are making to packages are mostly like bug fixes
that won't break compatibility in the program. And that's remained true. We haven't run into
a lot of major issues with rolling OS upgrades. Most issues come from when we personally are trying to pull in the latest version of SystemD
or something, and we're rolling that out.
Those we have to do with more intention.
You mentioned an AI fleet.
From what I've heard Zuckerberg talk about is Meta has more GPUs than anyone else in
the world, basically.
How do you manage that? Not only are how the drivers installed, because like
Linux and NVIDIA aren't always known to be the best friends, but then how do you like
isolate those things and roll out those changes? Yeah, I'm probably not like the best person to
ask about it, but we do have a pretty sizable team now of production engineers dedicated to supporting the AI fleet
and making sure that it's stable and that our training jobs don't crash and things like that.
Under TW Shared, do they just show up as a host profile or is that like, do I
get an entitlement that says I need GPUs for this type of workload?
It's more like the latter. So even though everything's in TW Shared, we know what kind
of machine type they are. So you can specify what purpose you're using the machine for and things like that.
What's the difference between a production engineer and a system engineer?
Well, I'm a software engineer, technically, I guess.
The title.
So a software engineer, then there's a production engineer, a system engineer. I guess what's like...
There are a lot of titles.
I know.
I'd say production engineer and software engineer are the most similar, especially in infrastructure.
When I was in the containers team, like the production engineers and software engineers,
pretty much all just did the same stuff. Like we were all just focused on scaling
and making the system more reliable.
I'd say in like a product team, production engineers focus more on operationalizing and
making the service production ready while the software engineer is kind of like creating
new features and things like that.
Okay.
That's interesting.
One thing I found fascinating about some of the talks you've given is the fact that Meta is still notably an on-prem company, right? Like, you have your own data centers, you have your own regions, you have machines, and it doesn't seem like you try to hide that from people. You don't try to abstract it away. At least I haven't ever seen a reference to "our internal cloud." No, it's a pool of machines, and people run stuff on the machines.
And the software and the applications running on top of it are very much, this is just a systemd unit. You're just running it containerized.
What other types of services do you have internally that people need?
I mean, I saw references to things like sharding for like,
we need just fast disk places and we need some storage and databases externally.
But like, what are the pieces that you find
that are like common infrastructure for people to use?
Yeah, I mean, I'd probably dispute the idea that people have to understand the internals of how the hosts and things are laid out.
For the majority of services, and we're talking millions of hosts in TW Shared, they are running containers. And I'd say a lot of their knowledge about the infrastructure probably stops at when they write the job spec, and the point where they go into the UI and look at the logs.
So if you're just writing like a service, a lot of that's abstracted away from you.
You don't even have to handle like load balancing and stuff. There's like a whole separate team that deals with that as well. That's awesome. Yeah. But if you're on
the infrastructure side, sometimes you need to maintain those widely distributed binaries on
the bare metal hosts. So like us running systemd, or the team at Siamat that does the load balancing, they also run a widely distributed binary across the fleet on
bare metal. There's also like another service that does specifically fetching packages, or,
you know, shipping out configuration files and things like that. But yeah, most of the services people write, you know, they're running in containers. Databases have kind of their own separate thing going on as well. Most of them are moving more into TW Shared too,
but they have more specific like requirements related to draining the host
and making sure there's no data loss.
Right.
Like how those shards work, making sure enough of the data replicas are available.
Yeah.
But they're like one of those teams that they just want their own set of
like bare metal hosts as well to, you know, do their own thing with.
Like they don't care about running things in a container if they don't have to.
Typical DBAs.
What would you say are some of the challenges you're facing right now on the OS team or just in general in the infrastructure?
The AI fleet's always a challenge, I guess, making sure jobs stay running for that long.
I think every site event is kind of an opportunity to see where we can make our infrastructure more stable,
adding more validation in places and things like that.
Just removing some of the clowniness
that people who have been here a long time
have kind of gotten used to.
And you mentioned moving more things out of traditional configuration management like Chef and into more of a host-native binary that can manage things, I don't want to say more flexibly, but I guess more predictably. I think you mentioned that, where it's just like, yeah.
Yeah, making things more deterministic, removing cases where teams that don't need
to have their own host, shifting them into TW
shared so that they're on more common infrastructure, adding more safeguards in place so that we can't
roll things out live and stuff like that. You also mentioned, again referencing the paper, because I just recently read it, that all of your hosts are the same size, right? It's all one CPU socket, and I think it was like 64 gigs of RAM or something
like that. Yeah, that's probably not true anymore, but yeah, the majority of our compute fleet looks
like that. Yeah. Okay. So the majority of TW shared is like, we have one size and you're just
like, everyone fit into this one size and we will see how we can make that work, right? Because you
can control the workloads or at least help them optimize in certain ways to say, because not all AI jobs or big data jobs are going to fit inside of that envelope.
Yeah.
Especially with databases and AI.
Yeah.
And we're trying to shift to a model now where we have bigger compute hosts so that we can run more jobs side by side stacking.
Because realistically, one service isn't going to be able to scale to
all the resources on the host forever. So yeah, we're getting into stacking now.
So yeah, it's more like a bin packing approach and saying like, hey, maybe we do have some large
hosts for, especially I'm assuming for the jobs that do need like, hey, I don't fit in 64 gigs
of RAM and local NVMe isn't fast enough for whatever reason or is going to cause the job to run longer.
Do you think AI is going to change the way that Meta does infrastructure, because you're adapting to the change in how much bigger the hosts need to be and how many more GPUs and all that kind of stuff?
Oh, I mean, even in like the past year, we've made a few notable infrastructure shifts to support the AI fleet.
Yeah, it's not even just the different resources on the host, but all of the different components.
A lot of them have additional network cards, managing how the accelerators work and how to make sure they're healthy and things like that. Yeah. I suppose once you have any sort of specialized compute or interface, whether that's network, some fabric adapters, you always have
snowflakes in some way where it's like, hey, this is different than the general compute stuff.
Oh yeah, for sure. How has that affected your global
optimization around things? I know, again, the paper was old now. It's like 2020, I think is when it was published, which is probably looking at 2019, 2018 data. But in general, it was like something
like 18% overall total cost optimization because of moving to single size hosts. Because you're
like, hey, our power draw was less overall globally. And something like, I think it was
the web tiers was like 11%. I should have had it up in front of me.
11% more performance by switching to host profiles and allowing them to customize the host.
Have you had things like that over the past four years with these either optimizations in specialized computes that have allowed you to even gain more global optimization?
Because a million hosts, like a 10% gain in efficiency or lower power requirements is huge. That's megawatts of savings.
We are also working on our own ASICs to do inference and training. That's probably the
place where we're going to see not just the monetary gains from developing in-house, but also
on the power and resource side as well.
That's fascinating.
That's starting to come out this year in production.
Have you been enabling that through like FPGAs that you allow people to program inside the fleet?
Or how does that work? How do you get to, hey, we have an ASIC now and it does some specialized computing task for us?
Yeah, that's a better question for the silicon team.
That's fine.
I only see the part where, you know,
we actually get the completed chip,
but I'm sure they're doing their development on FPGAs.
And at some point they have like,
here's a chip, go install it for us.
And you need, here's a driver for it, right?
Like they need to give that to you as a host team.
Oh yeah, there's a team that I actually work pretty closely with that writes a user-space driver; it just uses VFIO over the kernel. I think the chip, the accelerator, is just over PCIe.
Meta sounds awesome. It sounds like you get to actually really dive deep on what
you're learning. And like you're part of infrastructure or development. Because it
seems like you have teams for everything.
Yeah. I'd say you can really go as deep as you want to here.
Yeah. I really want to see an org chart now. I was like, there's so many of these teams that just keep popping up. It's like, oh yeah, no, we have a team that does that.
I know. I'm like, that's cool that it almost gives you enough abstraction that you can really
focus on your specialty because you get to really be deep in that area because you're
not having to worry about all the extra components, I guess.
Yeah. I mean, that's my favorite part.
I mean, some people are just really into developing C++ or the language.
But then I'm on the infrastructure side.
I just really like working directly with hosts.
And you've been there for a little while now, right?
Almost eight and a half years at this point.
I feel like people go to Meta and stay there forever, because you probably get to get really good at whatever you're doing.
Plus, I feel like it would be cool to talk to those other teams because when you have questions, they must be really good.
Like if they're so specialized in that area, then they must know so much about that when you go to like collaborate with other teams.
Yeah, it's super nice to just be able to ping anybody over work chat, like literally anyone. If you have a question, everyone's super nice about helping you out, as long as you're nice too. What'd you do before Meta? Or is this like,
like, have you worked at Meta your whole career? Yeah, I started here out of graduation.
I did one internship before I started here full-time. What are you looking
forward to working on in the next year? Are there big projects or big initiatives that you would
like to tackle or even things in the open source or like things that you want to give back and
make sure other people know about? I mean, I'm always interested in doing more stuff with systemd. I think there's still a bunch of components internally that could be utilizing systemd in more ways, you know, making sure that we're on a common base. That's kind of the main general goal that I'm always going to be focused on, I guess. There are also some bigger things. I mean, journald: I've been trying to get us to replace rsyslog completely and move entirely to systemd-journald.
That's an ongoing effort.
One of my best claims to fame at Disney+ was that I disabled rsyslog. I was like, no, we don't have it. It was just journald. I was like, we're just doing journald now. And it saved us so much IO throughput on the disks and everything. And there were a lot of problems with it, too. Maybe we weren't ready to do that, but I was like, no, we can't ship Disney+ until rsyslog's off.
Yeah, I want to be there. It was a great feeling one day where I'm like, I don't need this anymore. I don't need rsyslog. I mean, the move to systemd-networkd was pretty cool. But now that that's done, I can just be happy with it. There's probably some more stuff we're going to be
doing with systemd-oomd, the out-of-memory killer. I think we're about ready to get Senpai upstreamed into systemd. Senpai is like a memory auto-resizer that we wrote, and I don't think that's been open sourced in any way. I mean, we have an internal plugin to do that with the old fb oomd. I think it's time to get that into systemd-oomd as well.
Is that for resizing the container, like the cgroup, and saying how much memory they have available? Or is that something different?
It's a way to kind of poke a process and make sure that they're only
using the amount of memory that they actually need. Cause a lot of, you know,
services and things will allocate more memory than they need.
Interesting. It's a little like a get back in line. You don't get that memory.
A little bit.
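As a hedged sketch of the general idea (not the actual Senpai code, which is more careful), here's what a cgroup v2 memory auto-resizer loop can look like: squeeze a cgroup's memory.high until the kernel's PSI numbers report pressure, then back off. The cgroup path, target, and step size are made up:

```python
# Sketch of a Senpai-like idea on cgroup v2 (not the real code): probe a
# cgroup's memory.high downward until PSI reports pressure, then back off.
import time

CGROUP = "/sys/fs/cgroup/myjob"   # hypothetical cgroup
STEP = 16 * 2**20                 # 16 MiB per adjustment
TARGET = 0.1                      # tolerated "some avg10" memory pressure

def memory_pressure(cgroup: str) -> float:
    # First line of memory.pressure: "some avg10=0.00 avg60=... total=..."
    with open(f"{cgroup}/memory.pressure") as f:
        return float(f.readline().split("avg10=")[1].split()[0])

def nudge(cgroup: str) -> None:
    with open(f"{cgroup}/memory.current") as f:
        current = int(f.read())
    if memory_pressure(cgroup) > TARGET:
        new_high = current + STEP   # under pressure: give memory back
    else:
        new_high = current - STEP   # idle: squeeze to reclaim cold pages
    with open(f"{cgroup}/memory.high", "w") as f:
        f.write(str(max(new_high, STEP)))

while True:
    nudge(CGROUP)   # "poke" the cgroup so it only keeps what it needs
    time.sleep(5)
```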
Yeah. Have you been doing anything with immutable file systems, or read-only, or A/B-switching hosts? Like, Fedora has Silverblue, and I use a distro called Bluefin, which is kind of built on top of that; it does A/B switching for upgrades, with a reboot every time. It sounds like you're doing rolling updates, so you would still be writing packages to disk instead of flipping between partitions.
I mean, we're trying to shift to more of
an immutable model. Internally, we have something called MetalOS. And right now we're rolling out
a variation of MetalOS called McClassica. The goal is kind of an immutable file system, and it's making strides to get there. We still have to rely on Chef to do a lot of configuration,
but a lot of it has shifted to a more static configuration
that is more deterministic and gets updated at a cadence
where we can more clearly see what the changes are.
And I was asking that because it leads into you saying you want more systemd stuff. I'm curious if you're trying to use things like systemd system extensions, sysext or whatever it's called, that layer different things on top, which is typically for an immutable file system but still allows changes to happen.
Yeah, I haven't looked too deeply into what that team's been up to, but I do know that they did make use of some of the bleeding edge systemd features to build these images and things like that.
We're not using systemd sysext just yet.
I mean, I wouldn't count it out.
Yeah.
It's one of those things that looks really interesting, especially if you try to move more into immutable file system layers.
Like, hey, I still need to configure this.
And how do I do that in a composable, immutable way?
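For the curious, a minimal sketch of what a systemd system extension looks like on a host with systemd-sysext available; the extension name and payload are invented:

```python
# Sketch: build a tiny sysext extension as a directory and merge it.
# Layout and extension-release file follow systemd-sysext's documented
# scheme; the extension name and payload are invented.
import os
import subprocess

NAME = "mytool"
root = f"/var/lib/extensions/{NAME}"  # one of sysext's search paths

# Payload: files that should appear under /usr once merged.
os.makedirs(f"{root}/usr/bin", exist_ok=True)
with open(f"{root}/usr/bin/mytool", "w") as f:
    f.write("#!/bin/sh\necho hello from the extension\n")
os.chmod(f"{root}/usr/bin/mytool", 0o755)

# Metadata: sysext refuses extensions that don't match the host OS,
# so declare compatibility (ID=_any matches any distribution).
os.makedirs(f"{root}/usr/lib/extension-release.d", exist_ok=True)
with open(f"{root}/usr/lib/extension-release.d/extension-release.{NAME}", "w") as f:
    f.write("ID=_any\n")

# Overlay all enabled extensions onto /usr without touching the base image.
subprocess.run(["systemd-sysext", "merge"], check=True)
```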
Well, Anita, this has been great. I'm just nerding out, because I'm trying to learn about all of the things that I've done in the past and will still be doing in the future.
And I think it's great that Meta is not only doing this at a core level, just like, hey, we have systemd and things running on that, but also giving back upstream with the systemd builds, all the stuff that you've been publishing in white papers, which Autumn and I were reading, and talks, but also just the open source work.
So I think that's fascinating.
And we didn't even get to talk about eBPF really that much because that's a whole other topic.
Oh yeah.
You have to come back.
I think Meta gets a really bad rap for a lot of things, but I don't think you guys get enough credit for the amount of open source you do, and the white papers. I mean, the white papers you've written on databases, and the database contributions alone, are amazing. And there have been so many things given away for free so people can gain knowledge, you know? I don't think Meta gets enough credit for that.
I mean, I think from an engineering standpoint, we just kind of get the warm fuzzies when people actually use and like the stuff we write.
That's like the best part of being an engineer.
I find it fascinating, because Meta is one of the few places that doesn't sell the things they talk deeply technically about. A lot of, you know, Amazon and Google and Microsoft are like, hey, we built this amazing thing, now go buy it from us. And Meta is like, no, we're solving our own problem and we're just giving it back to you. And that's really cool.
That's what I'm saying. I think people talk about what Meta does wrong, but rarely do people talk about the fact that they'll be like, hey, I just figured out this really cool way to do this at a crazy scale, and here it is, you can read about it and learn about it for free. And I'm like, that's awesome. So I think I've learned a lot from the different database papers and white papers that you guys have released. And it's just crazy that you guys released an entire AI model for free. It's insane.
Yeah.
Yeah. I've been running Llama.
I haven't done Llama 3 yet though,
but it's on my list of things to play with.
Awesome.
I feel like white papers are a great way to learn and really get in depth on something, so you can go do that project or try something out, because you get to see why that solution was made for that problem, you know, and kind of figure out how they used the projects that you guys released. So I think it's cool the way you do that.
Oh yeah.
I really appreciate the academic side of things.
Anita, thank you so much. And we'll reach out, I'm sure, in the future with more things. Maybe we'll talk about eBPF and ASICs and more work that you're doing on the OS layer, because that's just a fun thing, and seeing how it grows.
All right.
Looking forward to it.
Thank you.
Have a great day.
Hey friends, I'm here with Todd Kaufman, CEO of Test Double. You may know Test Double from friend of the show, Justin Searls. So Todd, on the homepage for Test Double, you say,
great software is made by great teams. We build both. That's a bold statement. Yes. We often are brought in to help clients by adding capacity to their teams or maybe solving
a technical problem that they didn't have the experience to solve. But we feel like we want
to set up our clients for future success and the computers just do what we tell them. So,
well, at least for now, we try to work with our client teams to make sure
that they're in a great state, that they have clarity and expectations, healthy development
practices, lean processes that allow them to really deliver value into production really quickly.
So we started a lot of our engagements by just adding capacity or technical know-how. We end a
lot of our engagements by
really setting up client teams for success. Yeah, I like that. So when you say to someone,
you should hire Test Double for this reason, what is that promise?
I'll throw out a couple of different promises. I would say, one, we will leave your team in a
better state than we found them. And that may be improving the code base. It may be improving some
of the test suite. More often than not, it's sharing our experience and our perspectives with your team members so that they're accelerating
along their own kind of career growth path. Maybe they're learning new tech by virtue of
working with us. Maybe they are figuring out ways to build software with a higher level of quality
or scale, or maybe they're even focusing on the more human side of the equation and figuring out how to better communicate with coworkers or stakeholders or whomever.
So that's guarantee number one.
The other one I would say is that we're going to deliver without being a weight on your organization. So by that, I mean, we're able to come in really quickly, acclimate,
learn your systems, learn your processes, learn the right people and deliver features within,
you know, our first days there. So our challenge to our team is to always be shipping a pull
request in the first week of work. So we acclimate very quickly and we're very driven to get things
done. That means we don't require a lot of
supervision or management overhead or technical support the way some companies envision working
with a consulting firm. So we really challenge ourselves and guarantee to our clients that we're
going to be very easy to work with. Very cool, Todd. I love it. So listeners, this is why Edward
Kim, co-founder and head of technology at Gusto, says, quote, give Test Double your hardest problems to solve, end quote.
Find out more about Test Double's software investment problem solvers at testdouble.com.
That's testdouble.com, T-E-S-T-D-O-U-B-L-E.com.
And I'm also here with Dennis Pilarinos, founder and CEO of Unblocked. Check them out at
getunblocked.com. It's for all the hows, whys, and WTFs. Unblocked helps developers to find the
answers they need to get their jobs done. So Dennis, you know we speak to developers. Who is
Unblocked best for? Who needs to use it? I think if you are a team that works with a lot of coworkers, if you have like 40, 50, 60, 100, 200, 500 coworkers, engineers, and you're working on a code base that's old and large, I think Unblocked is going to be a tool that you're going to love.
Typically, the way that works is you can try it with one of your side projects, but the best outcomes are when you get comfortable
with the security requirements that we have.
You connect your source code, you connect a form of documentation, be that Slack or
Notion or Confluence.
And when you get those two systems together, it will blow your mind.
Actually, every single person that I've seen on board with the product does the same thing.
They always ask a question that they're an expert in.
They want to get a sense for how good is this thing?
So I'm going to ask a question that I know the answer to.
And people are generally blown away by the caliber of the response.
And that starts to build a relationship of trust where they're like, no, this thing actually can give me the answer that I'm looking for. And instead of interrupting a coworker or spending 30 minutes in a meeting,
I can just ask a question, get the response in a few seconds and reclaim that time.
Okay, the next step to get unblocked for you and your team is to go to getunblocked.com.
You and your team can now find the answers you need to get your jobs done, and not have to bother anyone else on the team, take a meeting, or waste any time whatsoever.
Again, getunblocked.com.
That's G-E-T-U-N-B-L-O-C-K-E-D.com.
And get unblocked.
Thank you so much, Gina Häußge, for joining us on the show today. And can you tell us about yourself and how you got started with creating OctoPrint?
Yeah, so you already said my name, but I'm also known as foosel around the world, especially around the net. So if anyone has come across that name, then yeah, that's me. Hi. And yeah, well, OctoPrint.
That happened basically when I got myself a 3D printer back in late 2012 and found myself
in a position that it was sitting here next to me in my home office, producing noise,
producing fumes and annoying the hell out of me because I just wanted to not sit next
to it while it was doing stuff, but it took hours to finish whatever it was doing.
And so I figured there must be some way to just put it in another room, but still monitor it
from afar through Wi-Fi and such. And I figured there's probably something out there that does
this. It turns out, nope, there wasn't something like this. And I happened to be a software
engineer. So that became a bit of my vacation project over Christmas, pretty much.
And I threw it on GitHub after that in January and thought I was done.
Back then, it was just a really, really basic thing.
Monitoring temperature, already having this feedback loop where you also had some webcam
implementation and all of that to be able to see what your 3D printer was doing while
it was running through your jobs, and some basic file management and such. But it was definitely a way smaller project than it is now, over 10 years later. I threw it on GitHub, and within a week or so the emails started coming in, and the feature requests started coming in, and then it took over my life. And now I've been doing it full-time for almost 10 years, and crowdfunded for... wait, we have 2024 now, so that must be eight years, I think. Yeah, eight years of full-time crowdfunded work.
That's awesome. An open source project like that is one of those success stories of open source and crowdfunding, right? Because that's not a common thing. It's like, oh, one person started a project, and now you can actually make your living off of this hobby, or originally-hobby, sort of thing. And that's really awesome, just to hear that the community around it has come together to be able to support such a cool project.
Yeah. And it's always something that I can
talk about at parties, even if people don't know what 3D printing is or what open source is. If I
tell them people give me money, even though they don't have to, then I always get interest from people around me.
2012.
What printer was even available in 2012? That's like the Cupcake CNC machine.
In my case, it was an Ultimaker.
That was like, yeah, a big wooden box.
No heat to bed.
Yeah, no one even knew what to do there.
Very slow and very weird.
And the filament was still thicker. Like, yeah, it printed with the three millimeter stuff, which actually was 2.85 millimeters, but still almost twice the diameter of what we use these days mostly. So, 1.75.
It's like melting crayons.
Yeah.
It was weird when I got my first roll of filament
of 1.75 millimeter filament in my hands,
it felt so weird and not good.
And like it would break just by looking at it and such
because I was just used to all of this 2.85.
And then I think last year or so, I threw out all of the old 2.85 that I still had and looked at it.
And it looked so heavy and strong.
And what?
I was able to print with that?
No way.
So yeah, things really changed.
So in 10 years of Octoprint, how many printers do you support? It seems like it grows
every time I check it out. Yeah. So the thing is that most printers out there actually run an
open source firmware and have more or less agreed on a communication protocol. I say more or less
because a lot of the printer vendors actually adjust the firmware, often
without really knowing what they are doing with the result that they break the firmware
in the process.
And then things get really tricky for the users because then usually they do not know
how to fix it.
And yeah, in the end, that is always when I'm very happy that I also built a plugin system into OctoPrint, because that allows working around these things. If people have a printer like that and also happen to know how to code, or can find someone who can see the issue and work around it, or maybe, if it's a large enough community, then maybe I can also do that, it's just a little plugin that pretty much translates from the broken firmware into something that is more standards-conformant. And that way, yeah, pretty much everything that is sold out there is supported by OctoPrint.
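To give a flavor of what such a translation plugin can look like, here's a minimal sketch using OctoPrint's octoprint.comm.protocol.gcode.received plugin hook. The specific broken firmware reply being rewritten is invented:

```python
# Minimal sketch of a firmware-workaround plugin. The gcode.received hook
# is a real OctoPrint plugin hook; the broken reply it fixes is invented.

def rewrite_received(comm_instance, line, *args, **kwargs):
    # Hypothetical vendor firmware that answers "okay" where the
    # de-facto protocol expects "ok"; normalize it before the rest
    # of OctoPrint parses the line.
    if line.startswith("okay"):
        return "ok" + line[len("okay"):]
    return line

__plugin_name__ = "Broken Firmware Workaround (sketch)"
__plugin_pythoncompat__ = ">=3,<4"
__plugin_hooks__ = {
    "octoprint.comm.protocol.gcode.received": rewrite_received,
}
```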
But these days, it gets a bit more tricky because a whole bunch of printers are now currently coming
out that have a full blown host system. So OctoPrint is a so called print host. And a lot
of printers now come with something similar fully blown on board so they
only now have a wi-fi interface they often have an integrated full graphical display and such and
it is really tricky now to access these and use them with something that the vendor did not plan
on which is a bit sad that's how my son's uh printer well he has a toy box so it's meant for little kids to use with their iPads. So in a way it kind of monitors, but it kind of makes it limited what you can do with it because it comes with its own software and everything. Close source. Yeah, close source. And it was like, I had such a hard time because I had so many printers in the past that I
always wanted them to be open source and I want them to work certain ways.
And I always spent more time fiddling with them than using them and printing.
And so I saw recommendations for the Bambu, and I'm like, I'm going to try it. I'm going to go with this one. I know it's closed source. They have a whole ecosystem of stuff. And I think the problem is going to be when things break and I can't fix a problem, or I can't troubleshoot and find a community around, like, hey, how does this work? It's all just going to be like, oh well, here's a janky fix we have that shows you how to do something.
There's good news for you, though. Someone wrote a plugin that allows Bambu printers to work with OctoPrint.
Really? Oh, that's awesome. I really want a Bambu, so that's good to know.
I'm not sure if it works with all of the models and such, but it's like, it's the plugin developer on OctoPrint. Like, he's the one with the many plugins.
I keep watching everyone's videos on Twitter and TikTok, and I want a Bambu so bad, but I'm like, I don't want to get locked into the software.
Yeah, I'm not touching that with a 10-foot pole. I saw one in person with a buddy, and mechanically I was very, very impressed. But then also this news hit recently. Or, what, recently... that's almost been a year now or so, I think, where they had this funny security issue
where some printers suddenly fetched the wrong stuff
from the cloud and started printing
in the middle of the night for models from strangers.
And that is just something-
I did not hear about that.
Yeah, and stuff like this happens,
then this is a big, big no for me.
And also, all of what 3D printing is these days, what 3D printing has come to over the last 10 years, that was done on the shoulders of open source.
And now all of these companies, it's not just Bambu,
it's a bunch of others as well, are just rolling in and trying to lock everything down
and trying to lock everything in
and creating their own little gardens.
And it's just not the way that I want to see all of this happening.
I'm a bit afraid that we will lose all of the open access that we have now
if stuff continues like that.
I think open source as a whole, like databases, everything, has gotten really weird, with where do we go from here with having companies in open source. License changes.
Yeah, it's been very interesting.
Now back to OctoPrint for a bit. I saw you had a release last week. What does that release process
look like? Cause you have this huge system that supports all of these printers and you have these
plugins and all of these features. How do you actually go about releasing and testing that
to say like this is a new release of OctoPrint?
So it should be obvious that it's pretty much impossible
to test every possible printer, firmware, plugin, operating system, starting state of software situation.
So what I do before I actually roll out a full release is there goes a long,
long phase of release candidates. And Octoprint has a release branch system built in. So if you
feel fine with testing stuff that is not necessarily fully stable yet, then you can just
switch over to another release branch and then you will get release candidates whenever I push those out. And they actually get the same procedure that I
do for every single release. And I will go quickly over that later as well. But
the idea behind that is that if I have something like 1000, 2000 people out there testing a release
candidate and putting it through several years of print duration over the course
of the release candidate phase, then I can be pretty sure that a lot of these combinations
that I would never be able to test have been tested.
And yeah, it usually takes something like three to four release candidates until no
more bugs come in.
And at that point, then I declare this stable.
And of course, after I've pushed out a stable release,
so the current stable version is 1.10,
but we are now already at 1.10.1.
So there are bug fix releases that I also push out.
Those do not go through a full release candidate phase again,
but they only get bug fixes
and maybe small minor improvements of existing functionality.
They do not get new features, they do not get big changes. They obviously also get security fixes,
stuff like that, but I try to really limit what goes in there. And if it feels too risky, then
it goes into the next stable release that will actually get the full release candidate phase again. And what I do for every single release is, so OctoPrint can basically run anywhere where you
can run Python, but most people run it on a Raspberry Pi. So that is also what I concentrate
on for testing. And there is this dedicated image that someone else is maintaining, Guy Sheffer,
for OctoPrint, which is called OctoPi.
And a lot of people confuse the image with the software and the software with the image,
which also causes a lot of complications in support.
But anyhow, so OctoPi is the most common environment that OctoPrint will be installed on out there.
So what I have here is I built myself a little test rig that has three Raspberry Pi 3s, which is the current basic option that I suggest.
So I got three of those, because that basically is the lowest supported version, the baseline. And if you want something with more power, then of course you can get something else. But the three is like the base version that I look at. So I have three Raspberry
Pi 3s there, and all of these have a little card adapter in there that can be switched through USB,
either to act as a mass storage device for a host on the one end, or as an SD card on the other. So that is slotted into the SD card slot of each of the Raspberry Pis, and all of
these then go into a USB hub to a fourth Raspberry Pi, a Raspberry Pi 4 actually, which I call
the Flash Host. And that thing also has control over the little powered USB hub through which I
power the three Raspberry Pis. And now I can individually power them on and off. And I can
also individually unmount and mount their SD cards and flash them without having to physically
release the SD card and push it into a flashing stick and then flash. That is what I did until
2020. And it was driving me nuts because...
Well, that's what I've been doing! No, this sounds fascinating. I didn't even know you can have an SD card on one end that's connected to the USB on the other side, and you can switch it back and forth.
There is, yeah. One of these things cost me 100, but they exist.
Hey, sometimes that 100 is worth it. That saves how much time?
Yeah. I mean, I have three.
That was really worth the money that I spent on that
because what I do on every release is basically
I flash a whole bunch of starting versions on the Raspberry Pis,
like OctoPi version X with OctoPrint version Y. And then I look if I can upgrade to the release-to-be from that version through all of the regular update mechanisms. And for that, of course, I need to not only flash the SD card,
but also provision it with the Wi-Fi credentials and then SSH into that thing and do all of that.
And all of this is automated now thanks to this little test rig that I built. So I just tell it, flash device A
to this version of Octopi, make sure Octoprint is at that version, and also switch it to this
release branch. And then please also fire up the browser when it's done with that.
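As a hedged sketch of what that kind of flash-host automation can look like: uhubctl and dd are real tools, but the hub location, port numbers, device path, Wi-Fi file handling, and mux behavior are all assumptions about this particular rig, not Gina's actual scripts:

```python
# Sketch of the flash-host flow: power a Pi off, write an image to its
# SD card via the USB mux, drop Wi-Fi credentials, power it back on.
# Everything device-specific here (hub location, ports, /dev path,
# file names) is an assumption.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

def flash_device(port: int, image: str, sd_dev: str = "/dev/sda"):
    # 1. Cut power to the Pi via the controllable USB hub port.
    run("uhubctl", "-l", "1-1", "-p", str(port), "-a", "off")

    # 2. With the mux presenting the card to the flash host as mass
    #    storage, write the OctoPi image. (The mux-switching command
    #    itself is vendor-specific and omitted.)
    run("dd", f"if={image}", f"of={sd_dev}", "bs=4M", "conv=fsync")

    # 3. Provision Wi-Fi on the boot partition before first boot
    #    (octopi-wpa-supplicant.txt as on classic OctoPi images).
    run("mount", f"{sd_dev}1", "/mnt/boot")
    with open("/mnt/boot/octopi-wpa-supplicant.txt", "a") as f:
        f.write('network={\n  ssid="testnet"\n  psk="hypothetical"\n}\n')
    run("umount", "/mnt/boot")

    # 4. Hand the card back to the Pi and power it on.
    run("uhubctl", "-l", "1-1", "-p", str(port), "-a", "on")
```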
And so before every release, I have this huge checklist in my tooling and go through all of
that. And of course, the usual stuff like create new tags,
create a change log,
make sure the translation is up to date.
The German one, this is the only one that I maintain.
Everything else needs to be supplied
by people who actually speak the language fluently
that they are targeting.
Also add supporter names and all of that.
And then there's also always a whole test matrix
that I write down in JSON that gets
rendered into a little table.
And that then tells me exactly what command line I have to enter into my scripting so
that all of this will be done.
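As a made-up example of what such a JSON test matrix and its rendering could look like (rc/maintenance is, as far as I know, OctoPrint's public release-candidate branch; everything else here is invented):

```python
# Made-up example of a JSON test matrix plus a tiny renderer; the real
# fields in this tooling are unknown.
import json

matrix = json.loads("""[
  {"device": "A", "octopi": "0.18", "octoprint": "1.8.7", "branch": "rc/maintenance"},
  {"device": "B", "octopi": "1.0",  "octoprint": "1.9.3", "branch": "rc/maintenance"}
]""")

for row in matrix:
    print("flash {device}: OctoPi {octopi} + OctoPrint {octoprint} "
          "-> update via {branch}".format(**row))
```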
Then I wait.
Then a browser window pops up.
Then I click update.
Then I look if everything works.
And once I've gone through all of these, usually something between seven and ten test scenarios, that's it. That used to take a whole day and now takes less than an hour, if I'm lucky.
Wow, that's cool.
Your automation is very impressive.
It saved me so much time.
It's every single release,
I'm sitting here and have this huge smile
because that saved me so much time.
Yeah.
And I also have a blog post about this test rig.
Does it have pictures?
It has pictures.
I need to find that so we can add it.
I can drop you the link and you can put it in the show notes and something.
Yeah.
And what happens then is at some point I'm through all of this and then I'm happy and
stuff.
And then I do the regular release thing.
So I just click on release on the GitHub release.
I have already filled in the change log on all of that.
And what now happens is a whole workflow runs through GitHub actions,
which first of all, runs the whole test suite against everything.
The unit tests are done.
The end-to-end tests are done.
And only if all of this is green does stuff actually get released on PyPI and such. It then also triggers the test rig again, because what it will do now is automatically build an updated image with the new OctoPrint version, so a new OctoPi version with the new OctoPrint version.
All of that will happen in GitHub Actions. And
then when this image is built, then the flash host in my network here at home on my desk will
be triggered to download this image, fire it against the pi, flash it, run the end-to-end
tests against it. And if that is green, I get a little email in my inbox that says, hey,
this image tested green. Do you want to release it? And if I then click
yes, then it will be released to the wild, basically. This is like the software engineer
dream. You found something that you're interested in. You built it over Christmas break, and then
you solved this awesome problem, and then you automated it and solved all these problems to
make it like efficient.
It's so cool.
I'm so impressed.
How many core maintainers are on Octoprint?
Is it just you?
It's just me.
What software were you writing before Octoprint?
Enterprise Java stuff.
There you go.
So you went Java to Python, basically.
Yeah.
Python was self-taught, started when I was... So yeah, my career was a bit weird.
I actually started working at university, because I wanted to do a PhD, and I worked at university. So in Germany it's like, you have some work, either you're teaching or you're doing something in administration, and at the same time you're working towards your PhD. And I ended up in the administration part. So I was administering the whole department's servers,
all of them on really old Unix, not Linux, Unix machines.
The mail server was older than me. And I was not really finding much time for my PhD, but I was automating a lot of stuff back then, even for the administrative tasks, with Python.
And then at some point I decided, yeah, okay, so the PhD thing isn't happening. I'm not getting
really enough time to work on that. And to be honest, I was more drawn to doing something
like really with my hands and not just writing stuff and having students do the stuff with their
hands. So I ended up as a software engineer in the industry
and ended up writing a bunch of software like in Java,
IPTV related actually for a big telecommunication company.
And that went on for half a decade.
And then I got myself a 3D printer, and the rest is history.
So that's so cool.
And you said you've been crowdfunded for eight years now.
Yeah.
So eight years ago, you had to make this decision to leave your job and go do...
That decision was forced on me because the thing was 10 years ago already,
I left this Java job because I was hired by a Spanish company
who also was a vendor of 3D printers back then.
They found me, they found Octoprint, they liked what I was doing.
And they hired me full time to work on that back in 2014.
But then in 2016, they ran out of money.
And have since also gone under completely as far as I know.
So they had to let me go. And now I found myself in the position
that I had been doing Octoprint for almost two years
at this point, full time.
Like it had grown a lot,
the amount of work that it needed,
maintenance work, community,
and all of that had grown.
But yeah, I was no longer getting paid for it.
So it was a decision that I had to make: either try to do it as a side project again,
which was an absolute no at this point already,
because when I was still doing it as a side project,
the first two or so years, that was already bad for my health.
Drop it altogether, which was something that I really did not want to do,
and go back to a regular normal nine to five kind of job, or do something that I never thought I
would ever do and try to just take the step into the darkness where I did not know at all what was
going to happen and try to do this crowdfunded and basically self-employed. And yeah, I figured
if I would not at least try that, I would probably kick myself for the rest of my life and asking
myself what could have been. So I jumped into the cold water and did it. And so far it's been
working.
I do find it interesting that the commercialized spin wasn't even an option for you there, right? Like, you could have tried to raise money and said, this is going to be a product, I'm going to make a new business out of it, and have this open core model, sort of like paid plugins, whatever you want to do. So many companies do that, and that's how they get started, because it was a side project or something they were interested in. And for you it was like, ah, I either abandon it or I do it all community.
Yeah, I'm really not that big of a fan of this whole open core thing. And
personally, I also felt like I could not really do that, because I forked off of open source
software. So the part that talks to your printer was something that I basically took from a slicer, of all things, because that already was talking to printers. Cura had a communication part that I could just take over.
A lot of people had contributed.
So going like, yeah, I'm going to close this down now and we are only going to keep a part of it open source, it just felt wrong, and to this day feels wrong.
And I believe in open source and I find it a bit weird that it's still news for people out there
that, yeah, open source in general should be something that should be funded. We shouldn't
have to jump through hoops by selling stuff around it because what we do with maintaining
open source is already a full-time job.
Now, I don't know if you can go into details,
but where does your funding come from?
Is that from like recurring businesses that say,
hey, we want to pay for you to...
No, that's mostly users.
I have some business sponsorships,
but most of the people are really just,
yeah, your average Octoprint user
who has one or two or something printers
and just likes what I'm doing and throws me something between one to five bucks per month.
And if you have a whole lot of people who do that, then this matters.
Do you know how many installs you have or roughly how many?
Yeah, so I have anonymous usage tracking built into Octoprint.
All of this also self-built, completely GDPR-okayish, and only on my own
servers with my own tech stack and all that. This is completely opt-in, however. So if people do not
say, yes, it's okay to track me, then I will never know about the install. But according to that,
I have around 150,000 instances out there. Based on some fun install stats from the piwheels project, who suddenly saw huge download spikes on the packages they host for Raspberry Pi whenever I pushed out a new update, I know that the number is likely around 10 times higher.
Yeah, I was gonna say 150,000 opted in. Yeah. That is usually a very small percentage
of people that were like, yes, I will let you get this information. That's awesome.
Which means it's probably like even more people. Right. Well, yeah. So if you estimate 10 times
more, that's 1.5 million. I could see that. That's not even out of the realm of possibility.
The first time that I saw the numbers come in, after the first release with the anonymous usage tracking, I literally hid under my desk, because I felt so much responsibility in that moment. And it felt so heavy, literally heavy, on my shoulders. I just had to hide. So I just sat down under my table and breathed deeply and took a minute.
I hope people hear about your success story, because it's so cool. I feel like you did the moral right thing that people say you can't do and still be successful. And you not only have been successful, but just as an engineer, people are using something that you made, you know? Tons of people, and they like it so much that they want to pay you for it. That is so cool, just to see that many people using your stuff.
Yeah, and it's also what I consider my life's work. I mean,
I don't know if I will do this forever, especially not given the whole open source printer situation
that we talked about briefly, because at some point I might just get pushed out of the market
by a tendency to locking everything down. But yeah, it definitely feels like I have done something that actually has
helped people, which is not something that I can say about my previous job, I have to say.
Enterprise Java helping people? I don't know. Sorry, Autumn, no shade.
A lot of stuff runs on Java, okay.
A lot of stuff does. When you mix those two words of enterprise and Java,
I don't have any good memories.
It's more the enterprise bit also.
It's more the enterprise than the Java.
The Java itself
was okay. I mean, you can also
build good software in that and you could also
build performance software in that and it's not as slow
as people always said.
But on the other hand,
I also have to say that with Python, everything
got even faster.
Not in the run speed, but in the development speed, like so much less overhead.
Well, that's just because your variable names aren't a sentence long, right?
You didn't see the first kind of Python that I wrote, when I was writing Java during the day and Python at night. So a bunch of stuff is still not in snake case, but in the other one.
Camel case.
Camel case.
Thank you.
Because, yeah, I mean, I was a Java developer.
Going back and forth, I always mess up the for loops in certain things.
You can tell I've gone back and forth too many times.
Oh, I can top that.
I mean, OctoPrint is pretty much a web application,
and the backend is written in Python,
but the frontend is JavaScript.
And switching between Python and JavaScript
is almost as bad as switching between Python and Java
because I go back to Python,
I start putting semicolons behind every single line,
and I go from Python to JavaScript,
and I just try to start my blocks with colons instead of braces. It's just annoying.
It's so funny. There are certain things where you can definitely tell that you've gone back and forth between two languages, when you look at your errors and you're like, damn it, I did it again.
Yeah, and it happens daily. Just yesterday, I can't remember what exactly it was, I just remember that yesterday I was like, no, Gina, this is not Python, when I was editing a JavaScript file.
I do that all the time. It's tricky. So where do you want to bring OctoPrint from here?
Like, what's the next thing that you would like to do? What is the next sort of big thing? It's not just, you know, more printers; more printers are fine. I mean, I still think that you have influenced the standard of communication by having this early project for so long that was able to, you know, talk to all these printers, and you have this plugin system. What's the next thing you want to do? What's the next cool thing where you're like, I would love it if OctoPrint could do this?
There's a bunch of stuff that actually needs to be done, which boils down more or less to
taking care of some tech stack situations, because I'm still on a very old version of
all of the stuff that runs the UI. But because of the plugin system, it's really tricky to update
that or to swap that for something new, because all of the UI of all of the plugins out there
would suddenly stop working. And I've spent a lot of thought into how to approach this
and especially how to best get this working.
And I'm still in the process of doing this.
This is one of the bigger parts that I'm working on.
Also for the better part of a decade, actually,
I've now been also working on a new communication layer.
And that is also a very tricky thing to pull off.
And I also have had really bad luck with it
because every time that I actually get on it
and get it to a point where I'm almost ready to,
like I'm 80% or 90% to something happens.
So the first time I ran into a complete
and utter problem with my whole approach because
of some firmware issues out there that I wasn't aware of. So I had to scrap everything and start
anew. The second time I lost the job and had to go crowdfunding. The third time I ended up in a
breakup after over 15 years of a relationship. The third or fourth time, I don't remember exactly, something like COVID happened. And so I'm almost too scared now to work on it anymore.
that's a lot it's like this huge project that really needs to get done to make everything
more modular and to be able to make it easily adaptable to new developments out there and to possibly also swap the whole communication stack out
to target something other than serial communication, like network or so.
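(Editor's note: to make the "swappable communication stack" idea concrete, here is a minimal sketch of a pluggable transport layer. This is not OctoPrint's actual design; all names are hypothetical, and the serial variant assumes the pyserial package is installed.)

```python
from abc import ABC, abstractmethod


class Transport(ABC):
    """Hypothetical seam between the protocol logic and the wire."""

    @abstractmethod
    def write(self, data: bytes) -> None: ...

    @abstractmethod
    def read_line(self) -> bytes: ...


class SerialTransport(Transport):
    """Classic serial connection to the printer (assumes pyserial)."""

    def __init__(self, port: str, baudrate: int = 115200):
        import serial
        self._conn = serial.Serial(port, baudrate, timeout=1)

    def write(self, data: bytes) -> None:
        self._conn.write(data)

    def read_line(self) -> bytes:
        return self._conn.readline()


class NetworkTransport(Transport):
    """Same interface, but over a TCP socket instead of serial."""

    def __init__(self, host: str, port: int):
        import socket
        self._sock = socket.create_connection((host, port), timeout=1)
        self._file = self._sock.makefile("rb")

    def write(self, data: bytes) -> None:
        self._sock.sendall(data)

    def read_line(self) -> bytes:
        return self._file.readline()
```

The point of the seam: the protocol logic only ever sees `Transport`, so a networked printer, a mock for tests, or a future protocol can slot in without touching the rest of the stack.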
But the only problem is that it is a project in and of itself... English, at this time of the day.
And as I already said, I am the only maintainer.
So I also have to take care of all the
bug fixes, all the security fixes, all the other new features, all of the community management,
architecture stuff. How do you push all the developers and different people that are
making the plugins to the next version so you can eventually do an update?
I deprecate stuff, write big, big, nasty warnings into changelogs, hope that someone actually reads them, and then, at some point, some versions later, remove the deprecated stuff after it was logging warnings and warnings and warnings to the logs for several months. And if stuff then breaks, plugin developers can suddenly react quite fast, I've learned.
Only after it breaks.
Yeah.
Nobody listens to the warnings for like five years, 10 years.
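(Editor's note: the deprecation dance Gina describes maps naturally onto Python's standard warnings machinery. A minimal sketch; the function names are made up for illustration and are not OctoPrint's actual API.)

```python
import warnings


def get_profile(*args, **kwargs):
    ...  # the replacement API lives here


def get_printer_profile(*args, **kwargs):
    """Deprecated shim: keeps old plugins working while nagging their authors."""
    warnings.warn(
        "get_printer_profile() is deprecated and will be removed in a "
        "future release; use get_profile() instead (see the changelog).",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this shim
    )
    return get_profile(*args, **kwargs)
```

In practice a project would also route these warnings into its own logs, since Python hides DeprecationWarning by default.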
I had this quite nasty situation that, yeah, Python 2 to Python 3.
That was such a horrible jump though.
Like it was so bad.
It was.
It's still going on.
And I was right in the middle of it because all of the plugins out there were Python 2 only.
OctoPrint was Python 2 only.
And it took a long, long time to get OctoPrint up and running on Python 3. And that was also thanks to a lot of very, very nice contributors who helped there, doing a lot of the legwork. And then spending half a year or so ironing out all the bugs that were introduced in the process, pushing out blog posts, pushing out tools that would help people to move over, marking plugins as Python 2 or Python 3 compatible on the plugin repository, basically by looking at the code automatically and detecting if it would compile under Python 3 or not.
It was an absolute nightmare, but somehow we pulled it off.
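(Editor's note: the automatic compatibility flagging she mentions can be approximated with a surprisingly small check: try to compile each plugin source file under a Python 3 interpreter. This is a sketch of the core idea only; the real repository tooling was certainly more involved.)

```python
import pathlib


def looks_python3_compatible(plugin_dir: str) -> bool:
    """Heuristic: does every .py file in the plugin at least parse under Python 3?"""
    for path in pathlib.Path(plugin_dir).rglob("*.py"):
        source = path.read_text(encoding="utf-8", errors="replace")
        try:
            compile(source, str(path), "exec")  # syntax check only; nothing runs
        except SyntaxError:
            return False  # e.g. Python-2-only `print "..."` or `except Error, e:`
    return True
```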
That sounds exhausting.
It was exhausting.
And 5% of OctoPrint's user base, according to the anonymous usage tracking,
is still on Python 2.
Wow.
And at this point, I just have given up trying to motivate them.
They'll never die.
Yeah, I mean, OctoPrint is Python 3 exclusive now since version 1.6? 1.5?
I have no idea, actually.
Something like mid-2020 or so.
I can't remember exactly.
And there are still people who are left on the Python 2 only version who I redirected
to take their updates from somewhere else just in case there was anything that I still
needed to push out.
But so far I have never done anything with that, and now also won't. Because those 5%, they can just... like, if a security issue or something like that shows up, they really should just finally make the jump.
Yeah, they need to.
It's like when we try to get people off of Java 8. It's like never dying.
Yeah, I can imagine.
My knowledge is still stuck on Java 7.
You talked about some things you'd want to change in the future. Looking back at more than 10 years of building this project, what do you wish you would have done differently?
I would have made so many architecture decisions differently that are now biting me in my behind
over and over again.
How do you make sure... because a lot of that comes from just learning: either the project scales and needs to change over time, or you didn't know how it worked back then and you just learned a new way of doing it now. How would you go back in time and teach yourself, oh, you should do it this way instead?
Is there a way?
Do you have a time machine?
Apart from that, I mean, I think most of the stuff,
if I just had known any better,
so if I had found some more information on some things,
then yeah, that would have saved me a lot of work.
I mean, some of the problems I actually just managed to iron out with the current release because I basically have two web server situations going on.
I have Tornado sitting in there, single-threaded async.
And on that I have Flask sitting, which is sync. So that is really a bad idea.
You do not want to mix that up. But in 2012, Gina didn't know any better than that. And now I know.
Flask talked a big game at that time. It's not even your fault.
The good thing is that I found a solution for that, which means we had huge performance gains
in the latest version that I just pushed
out now, because now I managed to make the whole connection between the two things async,
so that they don't block each other anymore. And so the whole web page loads faster now,
and it's way less likely that some third-party plugin can now block the whole server as well.
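(Editor's note: the two-web-server setup described here is a documented Tornado pattern: the synchronous Flask (WSGI) app gets mounted inside Tornado's event loop behind a fallback handler. A minimal sketch, not OctoPrint's actual wiring; the executor argument, which moves blocking WSGI handlers off the event-loop thread and is roughly the kind of unblocking being described, assumes Tornado 6.3 or newer.)

```python
from concurrent.futures import ThreadPoolExecutor

from flask import Flask
from tornado.ioloop import IOLoop
from tornado.web import Application, FallbackHandler
from tornado.wsgi import WSGIContainer

flask_app = Flask(__name__)


@flask_app.route("/api/status")
def status():
    # A slow handler here no longer stalls Tornado's IOLoop:
    # it runs on the executor's worker threads instead.
    return {"state": "operational"}


container = WSGIContainer(flask_app, executor=ThreadPoolExecutor(max_workers=8))
app = Application([(r".*", FallbackHandler, dict(fallback=container))])

if __name__ == "__main__":
    app.listen(5000)
    IOLoop.current().start()
```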
And yeah, these are things that, if I had known them back then, if I had just better understood the kind of stuff that I was working on... because, I mean, I didn't know about 3D printing protocols back then, I didn't know about Flask, I didn't know about Tornado, I didn't know about all of that. I was just like, okay, this might maybe work. And if I connect this here and then there and blah,
and then I added a plugin system on top
and that made everything way more complicated
because now you have an ecosystem.
You cannot just rip out parts anymore
without destroying parts of the ecosystem in the process.
And so that is what is now making things way more complicated.
In your defense though,
like 3D printing has grown so
much in the last decade and releasing software in general has grown so much. Like you sound
extremely knowledgeable about all of these things. And I don't know if anyone could learn them as well if you weren't just doing it, you know. Like, you know all these things because you built it and you maintained it and you had to make those hard decisions.
So it seems to me like you're doing a great job.
Thank you.
Yeah, I mean, I'm still here, right?
So it can't be too bad.
And yeah, the things I now know about 3D printing firmware and especially about the differences between the various variations.
Honestly, I wish I didn't know as much sometimes.
There'll be dragons.
The curse of knowledge.
Not just that, but I feel like it's always that struggle of, like, you learned it at like 2 a.m. because something went wrong. Because, like, it went sideways and you had to learn it.
Oh, that's something, by the way, I also learned: I never do releases after Wednesday anymore, because that gives me Thursday, even though it's usually my day off because I'm on a four-day work week, and if push comes to shove it gives me Friday, and it doesn't ruin my whole weekend. I did a bunch of releases on Fridays and it cost me one too many weekends.
Never push to prod on Friday.
Yeah. That is the real wisdom of this podcast right now: people say, don't push on Friday. And you're like, no, no, don't push after Wednesday. If you're pushing on Thursday or Friday, you're just asking for it.
That is the perfect time to get, like, someone else to try it and then call you.
And that is like, they need a day.
There's no testing like real users
wanting to use your software
in a way that you never imagined.
Oh yeah, I know.
You can't, like... that's why I think, like, you obviously do as much testing as you can, but getting real people to try it, like the way that you said that you do that release where people can try your other branches, so they can bake properly.
I feel like that needs to be on a shirt.
It's like test with users.
It's like,
I mean,
there's nothing like,
it is nothing like some real person being like,
I wonder what you could do if I put this here.
And you're like,
why would you do that?
Or they have like some crazy workflow where you're just like, what? You do what?
Like, oh yeah, no, I drop to the web console every time and I type my commands manually in JavaScript. And you're like... I'm like, just looking. They're like, but I want to use the UI and the CLI and then do this. So then you're just like, but why? Why would you do that?
But you know, you know you have produced some stable software if, after a huge new... not a point release, a minor release, only such stuff comes in.
It's only the weird use cases.
And this time I can say that I managed to do that.
I got only really, really weird, really odd stuff.
That's an achievement.
Right?
I thought so as well.
Not just that, but the fact that you automated all that by yourself, and you were the main maintainer. You are amazing. Like, amazing.
You need to keep in mind, I automated that because I am the only maintainer, so that I had more time to do the maintaining.
Yeah, but you still had to do the automation. I know it makes your life easier, but sometimes you will sit there and it takes longer to automate stuff than... I mean, you get it back obviously after a while, but like.
Well, not always, right?
I mean, you can spend the whole week automating something
that you do once a year.
And in this case, you're like, oh no. This task went from a day to an hour, that is a good use of automation.
Because we've all automated something and we were like, this is gonna be great, and then it takes longer to automate than it does to do it. Man, you're like, why did I do this to myself, like eight hours in?
I'm into home automation, so I have this a lot.
I love that stuff. Me too. But I'm just like, there's certain things where I'm just like, that was such a bad idea. But you'll never know until you do it.
The good thing is you often still learn something new in the process. So even if it's all for the...
That's what I'm saying. Like, just listening to you talk about it, I'm like, man, your knowledge is just insane. Like, you must just know the ins and outs of so much of this
because of the way that like, you just, you're like,
and then I had this problem
and then I found this awesome way to fix it.
And I'm like, how did you do this by yourself?
Like, that is amazing.
Okay, but what do you print at home?
Like, did you make your own 3D printer or do you have like?
No, I actually always just get something
from the shelf basically.
And I... So what's your favorite 3D printer?
I'm not sure if I would call it a favorite. I have a very old Cruiser Max 3 by now that I have modified a whole lot, and it works and works and works and works. And I actually just printed a guitar with it that I gave away as a birthday present to the father of my partner,
who was really, really happy about that.
And yeah.
Do you have anywhere that you post the stuff that you 3D print?
Because I just want to follow all the stuff that you print, because it has to be awesome.
Sometimes on Mastodon, sometimes on Printables, but mostly probably on Mastodon.
So chaos.social slash @foosel.
And that's also where I post pretty much everything that I make. Currently I'm more into making print-and-play board games, for some reason. That just suddenly started.
Oh, that's cool.
I just made a card game again this morning. So, yeah.
It's a weird thing, because I feel like you were 3D printing when it wasn't even, like, a big hobby, you know? And the fact that you created all this software, I'm like, you have to be making cool things. Like, this was created out of you.
...to the frame, and for mounting the radar unit that I have, to tell me when a car is coming from behind and such, stuff like that.
Then together with a buddy, we did a whole project for the Chaos Communication Congress
and the Chaos Communication Camp last year, which were basically little environment sensors
that we put into little gnome figures.
And I printed all of these gnome figures.
You are like the human problem solver. Like, how many of my problems has she talked about that she solved? You know what, she's the epitome of, like, engineering brain. She's like, I had this problem, so I made that. I'm just like, I just want to be your friend. Like, you are amazing. You're just like, and then I solved this automation problem, and then I realized we needed this. You just, you make all the things.
And this is actually the reason why I got a 3D printer, because I had all of these ideas constantly for how to solve certain issues in a household, like just around the home, but I never had a way to do that.
And then I got a 3D printer
and suddenly everything looked like a nail
for my new hammer.
And then later I got a laser cutter
and then I got a vinyl cutter.
And can we just talk about, you should be Gina Foosel, the problem solver.
Yeah.
Like, you've got to add that to, like, part of its official title now.
I love it.
Yes, that is actually one of my best skills here. That is something that, also back when I was still, uh, still a Java engineer person, was constantly...
You're always gonna have problems and always end up with, like, you know, adversities, but just the fact that your attitude is like, okay, we have this problem and we're gonna fix it this way, like, that is amazing. You are going to be successful for it.
The only downside
of it is that sometimes my brain won't shut up. Because then, you know, like when you're lying in bed and you're trying to sleep, and your brain is going, oh, by the way, you might be able to solve this that way, or you could do this, and such. So I've now taken to listening to audiobooks so that I can actually fall asleep, because otherwise this stupid thing just won't shut up.
But then the audiobook gets good. I live that problem all the time.
I have a trick up my sleeve: I only listen to audiobooks I have already read, so I know what happens.
See, she solved that problem too. She's a problem solver. Because I'm
like, I have the same brain. I feel like it doesn't do the same cool problem solving that yours does.
Like I'm trying to get on your level one day.
Like I'm not there yet.
But like, it's always like, and then this, and then you should do this.
And then you need to make a list for this.
And I'm like, can you shut up?
I'm trying to sleep.
But then I'm like, oh, the book just got good.
I just give it something to listen to.
And then it shuts up.
And because I already know it, I get tired and I sleep. It doesn't work with podcasts. It doesn't work with books I don't already know, because then I want to actually, you know, listen and know what happens. But
Gina has all the secrets, guys, all the secrets.
This has been a fantastic conversation and thank you so much for coming and sharing all about
OctoPrint and what you do. For anyone that's listening, if you're not familiar, if you have a 3D printer,
go check it out, run it on a Raspberry Pi 3, donate to the project because this is one of
those successful open source projects that has been around for a while. I was a user for a long
time. I am also a donator. So I encourage everyone else to go out there too. And it's great having integrated GitHub Sponsors and all those things that you have for the project; they make it really easy to say, like, oh yeah, here's $10, here's a recurring, you know, buck or two. All those things go a long way to help, you know, promote the work, and really promote the idea that successful open source can be community-run and community-funded. It's an awesome success story.
Yes. I hope that people take the success story and, you know, that it proves to them that this can be a model for open source.
It's possible.
Thank you so much, Gina.
Thank you for having me.
It was a blast.
I hope you enjoyed those flavors of Ship It.
Yes, we love that show.
Justin, Autumn, they do an amazing job hosting that show, and we're so proud of them.
If you're not subscribed, go to shipit.show right now, or search for Ship It in your favorite podcast app and subscribe.
Later this week on Friends, we are talking to Suze Hinton: cybersecurity, white hat hacking, Kali Linux, 3D printing, flying an airplane, all the fun.
It was so awesome catching up with Suze.
I think you'll enjoy it.
Okay, a massive thank you to our sponsors for this episode, Speakeasy.
Check them out, speakeasy.com.
A brand new domain name, speakeasy.com.
Generate enterprise-grade APIs.
And our friends over at Supabase.
Launch Week number 12 is a wrap, but you can check it out and learn more about all that they launched.
Check them out, supabase.com slash launch week.
And our friends over at Test Double,
check them out, testdouble.com.
Their mission is to improve the way
the world builds software,
and they're doing awesome.
You should check them out.
And to one of our newest sponsors, GetUnblocked.com.
Unblocked helps developers to find the answers they need to get their job done for all the hows, the whys, and WTFs.
I tried it out.
I loved it.
It's amazing.
Check them out.
GetUnblocked.com.
And last but not least, to our amazing friends and partners over at Fly, check them out, fly.io.
Check out their GPUs.
They have GPUs in place now that you can run.
You can run your own Ollama in the cloud on Fly.
Check them out, fly.io.
And those beats from Breakmaster, Breakmaster Cylinder, bring in the beats.
Love them, love them.
Hey, that's it. The show's done. Thanks for tuning in.
We'll see you on Friday.