Screaming in the Cloud - Episode 5: The Last Mainframe with a Kickstart and a Double Clutch
Episode Date: April 11, 2018

How are companies evolving in a world where cloud is on the rise, and where cloud providers are bought out and absorbed into other companies? Today, we're talking to Nell Shamrell-Harrington about cloud infrastructure. She is a senior software engineer at Chef, CTO at Operation Code, and a core maintainer of the Habitat open source project. Nell has traveled the world to talk about Chef, Ruby, Rails, Rust, DevOps, and regular expressions.

Some of the highlights of the show include:

- Chef is a configuration management tool that manages instances, files, virtual machines, containers, and other items.
- Immutable infrastructure has emerged as a best-practice approach.
- Chef is moving into the next generation through various projects, including one called Compliance, a scanning tool.
- Some people still don't trust virtualization.
- Habitat is an open source project featuring software that allows you to use a universal packaging format.
- Habitat is also a runtime: when you run a package on multiple virtual machines, they form a supervisor ring to communicate via leader/follower roles.
- How you deploy an application depends on several factors, including application and infrastructure needs.
- It is possible to convert old systems with old deployment models to Habitat.
- Habitat allows you to lift a legacy application and put it into modern infrastructure without needing to rewrite the application.
- You can ease packages into Habitat, and then have Habitat manage pieces of the application.
- Habitat is cloud-agnostic and integrates with public and private cloud providers by exporting an application as a container.
- Chef is one of just a few third-party offerings marketed directly by AWS.
- From inception to deployment, there is a place for large cloud providers to parlay into language they already speak.
- Operation Code is a non-profit that teaches software engineering skills to veterans, helping them transition into high-paying engineering jobs.
- The technology landscape is ever changing; what skills are most marketable?
- Operation Code is a learning-by-experience type of organization and usually starts people on the front end so they immediately see results.

Links: Nell Shamrell-Harrington, Nell Shamrell-Harrington on Twitter, Nell Shamrell-Harrington on GitHub, Operation Code, Chef, Ruby on Rails, Rust, Regular Expressions, Habitat, AWS, Kubernetes, Docker, LinkedIn Learning, GorillaStack (use discount code: screaming).
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode of Screaming in the Cloud is sponsored by my friends at
GorillaStack. GorillaStack's a unique automation solution for cloud cost optimization, which of
course is something near and dear to my heart. By day, I'm a consultant who fixes exactly one
problem, which is the horrifying AWS bill. Every organization eventually hits a point where they start to
really, really care about their cloud spend, either in terms of caring about the actual dollars and
cents that they're spending, or in understanding what teams or projects are costing money and
starting to build predictive analytics around that. And it turns out that early on in my
consulting work, I spent an awful lot of time talking with some of
my clients about a capability that GorillaStack has already built. There's a laundry list of
analytics offerings in this space that tell you what you're spending and where it goes,
and then they stop. Or worse, they slap a beta label on the remediation side of it and
then say that they're not responsible for anything their system winds up doing. So some folks go
in the direction of writing their own code: spinning down developer environments
out of hours, bolting together a bunch of different services to handle snapshot aging,
building a custom Slack bot that alerts you when your budget's
hitting a red line. And this is all generic stuff. It's the undifferentiated heavy lifting that's not
terribly specific to your own environment. So why build it when you can buy it?
GorillaStack does all of this. Think of it more or less like IFTTT for AWS: it can manage resources, it can alert folks when things
are about to turn off, it keeps people apprised of what's going on. More or less the works. Go check
them out. They're at gorillastack.com, spelled exactly like it sounds: gorilla, like the animal;
stack, as in a pile of things. Use the discount code SCREAMING for 15% off the first year.
Thanks again for your support, GorillaStack. Appreciate it.
Thank you for tuning in to Screaming in the Cloud. My name is Corey Quinn,
and today I'm joined by Nell Shamrell-Harrington, currently a senior software engineer at Chef,
and also the CTO of Operation Code. Welcome to the show, Nell.
Hello. Thank you so much for having me. It's great to be here.
Wonderful.
So we'll get back to Operation Code a little bit later, but one of the reasons I reached
out to you originally is to talk a little bit about, I suppose, how your core employer,
Chef, is, I guess, evolving in the context of a world where increasingly cloud is on the rise.
Now, historically, to my understanding, you came up through cloud computing through your
previous work as a developer, correct?
That is correct.
I was working at Blue Box, which was then a cloud provider.
There's a lot of stories that end that way: yes, they once were a cloud provider,
and then, et cetera, et cetera, things happened. Here we are today; it's the nature of the industry.
Well, what happened was they were bought out and absorbed by IBM.
Which is frankly not a terrible way to go.
Oh no, not at all.
It beats the acquisition exit of, and then they were never heard from again as
they wandered off into the wilderness. You know, sometimes I look at my old, because I've got a lot of conference t-shirts
from the past six, seven years.
And a number of them, I have to think to myself,
does this company still exist?
No, I don't think they do.
So I've got these little historic relics of swag
from companies past.
A number of folks also tend to have this,
I guess, collection in a drawer somewhere
of stock options that aren't worth the paper that they're printed on, fundamentally.
They can wallpaper a room with them, but they never turn into things.
So, frankly, I find the idea of having conference t-shirts from companies that don't exist anymore a lot less depressing.
Ah, that is very true.
So, after Blue Box, did you go directly to Chef, or did you have a wilderness period?
Well, kind of a wilderness period.
So I worked for PhishMe for a year, which was an application company that ran on Blue Box, actually.
So it was nice having that little bit of an inside connection when I started working there.
They produced software that employers could use to send simulated phishing emails to
their employees; if an employee clicked on one, they would be brought back to our web application
and get some nurturing lessons on how to avoid doing that in the future.
To some extent, I imagine that was overtaken in time by the Russian mafia, who, instead of sending nurturing lessons,
sent very expensive lessons, but fundamentally it wound up being a crowdsourced solution. True. I mean, you learn either way. Just,
yeah, one was more expensive than the other. Okay. And after that, you wound up at Chef.
I did. And how long ago was that? That was three years ago. I'm actually coming up on
my three-year anniversary in about a week, I think. Oh, wonderful. Probably before the show winds up going out there, but happy belated anniversary
when the time comes.
Why, thank you very much.
Of course. So, I personally consider myself something of an expert in the realm of configuration
management, but I'm going to caveat that with: Chef was the one tool out of more or less the entire
public market of configuration management tools that I never touched directly.
There was always, oh, I know all of these tools.
And then there's Chef, where I would smile, nod sagely when people said things, and never say a word.
But to my understanding, these tools fundamentally, at least at the time I was heavily involved in this, all tended to do the same type of thing. In other words, they would manage what was on a box or an instance or a virtual machine, or even inside of a container if you wanted to go that route: files, services, packages installed, certain things in a certain state. And whenever they would run, they would detect deviations from their ideal blessed state and attempt to converge them back to the mean.
Is that roughly accurate?
That's roughly accurate.
That's the fundamentals of configuration management.
Wonderful.
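That detect-deviations-and-converge loop can be sketched in a few lines of Ruby. To be clear, this is not Chef's implementation, just a toy illustration of the idea; the resource names and states are made up:

```ruby
# Toy convergence loop: compare actual state to a desired "blessed" state
# and repair any drift, which is the core idea of configuration management.
def converge(desired, actual)
  drift = desired.reject { |resource, state| actual[resource] == state }
  drift.each { |resource, state| actual[resource] = state }  # converge back
  drift.keys                                                 # report what changed
end

desired = { "nginx_package" => "installed", "nginx_service" => "running" }
actual  = { "nginx_package" => "installed", "nginx_service" => "stopped" }

converge(desired, actual)  # => ["nginx_service"]; actual now matches desired
```

Real tools like Chef express the desired state declaratively in cookbooks and do per-resource detection, but the converge-on-drift shape is the same, and re-running against a converged system changes nothing.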
And this was on the rise for a while.
It seemed like this was the direction that a bunch of companies were heading in. Instead of having 300 administrators all doing this stuff by hand badly because humans make terrible
computers, this was the shiny future that everyone envisioned. And then somewhere along the way,
it seems like the industry took a different path that not many of us saw coming. These days,
or even a few years ago now, you talk to companies about what they envision the best practice approach is,
and the answer generally tends to take the form of,
oh, you use immutable infrastructure,
you don't wind up touching anything on the box,
you just blow it away and then replace it.
We can debate whether or not that is the correct way of doing things,
but that is an architectural pattern that has emerged with some vigor.
Very much so. So in a world like that, what does Chef become?
Well, the way we think of it is, yes, Chef, classically, classic Chef, as I call it,
is about configuration management, about having those fleets of servers that you all want configured the same, then you want to be able to make a change in the cookbook, as we call it, the template for it,
and roll it out to the entire thing.
So we've been kind of disrupting ourselves
because we've realized,
although there is very much still a place
for configuration management,
the emphasis of, I'd say,
configuration management is current gen.
Things like immutable infrastructure,
containers, things like that, those are next gen.
So the way Chef is moving into that next gen, while not abandoning people who are still in the current gen, is we have a couple of new projects.
One is Compliance. Whatever kind of environment you run your infrastructure in, whether it's something you manage through a Chef cookbook, whether it's an immutable piece of infrastructure, you need some way to make sure it's secure.
And we have a lot of customers who work in the defense industry and healthcare, and they need
some sort of automated way to scan all of that infrastructure and be sure that certain ports are closed, different things are configured
about them. And we provide a tool called Compliance, which will automatically scan
infrastructure, real working infrastructure for those security requirements. So that is one major
play that Chef is making to help expand us from just configuration management to being relevant
to immutable infrastructure.
I mean, no matter what kind of infrastructure you have, security is always going to be relevant.
Right.
This would be InSpec, correct?
Yeah.
InSpec is what we use to create those templates, and Compliance is the actual scanning tool.
Gotcha.
Okay.
So it ties into a larger offering around compliance.
Wonderful.
Right.
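The kind of check described here, making sure certain ports are closed, is written as an InSpec control, a small Ruby DSL. A minimal sketch; the control name, impact, and port are illustrative:

```ruby
# InSpec control: fail the compliance scan if the telnet port is listening.
control "port-23-closed" do
  impact 1.0
  title  "Telnet must not be exposed"
  describe port(23) do
    it { should_not be_listening }
  end
end
```

Profiles made of controls like this are what the Compliance product runs against real, working infrastructure; standalone, the same profile can be run with `inspec exec`.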
I would also point out that as much fun as it is to be a hype evangelist and talk about how containers are the way and the light of the future, not every workload is appropriate for a container-based ecosystem.
Absolutely.
Further, a lot of companies are, I don't want to use the term stodgy, so let's pretend I did. These are large insurance companies, large banks that have been around for a century or so.
And just because someone gets up on stage at a conference and says, this is the new way to do everything, that doesn't mean they're going to shove everything in a new direction
and say, we'll see you when you get there.
These digital transformations are something that a number of these companies take very cautiously.
And given the consequences of getting it wrong, I can't say that they're necessarily wrong for doing that.
So there's always going to be a bit of a long tail of folks who still today don't necessarily trust virtualization, let alone cloud, let alone containers, let alone the future of serverless, etc.
Right. I remember reading when the new tax plan passed.
So the software the IRS uses to generate tax forms was, I believe, written in the Kennedy era.
They still use it because it still works, but now they're facing having to change it.
And so now I think they might look at modernizing that a little bit.
But, I mean, it's one thing for a young startup to
change from using configuration management on their infrastructure to containers. I mean,
that's not going to be an easy change even for them. But for big institutions that control
major portions of the world's economy, that cautiousness is necessary. And we can debate whether it's a byproduct of the cultures of those companies
or if it's their actual technical needs, but I think it's a little bit of both.
Right. I remember reading about that.
I believe the IRS still uses one of the last mainframes in existence
with a kickstart and a double clutch.
That's not generally something you see too many other places.
And when you call up a
cloud provider and ask about getting one of those installed, they look at you very strangely.
What is wrong with you?
So one other thing that Chef has been focusing on for a little while now is something called Habitat.
And I had the privilege of attending a meetup on it about a year, year and a half ago.
And I had to leave halfway through the presentation because my brain was full.
At that point, I could not wrap my head around what it was, what it represented, and the level
of technical complexity that was being discussed by some of the very bleeding edge people who were
working on it. First off, what is it? And secondly, is that still the case?
Certainly. So Habitat at its core is an open source project. I'm one of the core maintainers
of it. But as for what it is and what you would use it for, I think one of the difficulties with
conveying it is that it really is two things. The first is Habitat is software that allows you to use a
universal packaging format. We provide software where you just create your application code as
you normally would. You don't need to rewrite the application in any sort of way. And we provide a
way for you to take that application and put it in what we call a hart artifact, a certain package of the software.
Now, you could run that hart artifact,
whether it's on bare metal, virtual machine, or container.
It doesn't matter which one at the moment
as long as it uses Linux x86.
But the real power is you can take that hart file,
that package that you created with Habitat,
and you can easily export it to Docker, to Cloud Foundry, to Kubernetes.
More formats are being added all the time.
So something I've seen when I've gone into other companies
is that there's a lot of debate is a generous word.
There's a lot of fighting about what is the one true way of deploying applications.
And Habitat kind of turns that on its head and says, you know, there is no one true way to deploy applications.
It's going to depend on your application's needs, your particular infrastructure needs, the environment that you work in.
So let's give you a way to export it into whatever format you need.
So you don't have to worry about that when
you're developing the application itself. You know you can put it, no matter what it is, in that
Habitat format and then export that to whatever you need. Now, along with being a packaging
format, it's also a runtime. And what I mean by a runtime is when you run a hart package, let's say you're running that on a virtual machine, and you want to run multiple virtual machines running that same package.
When you start them up, they form what we call a supervisor ring.
And what the supervisor ring allows them to do is it allows them to communicate with each other.
I mean, there's a lot of things you can do with this communication.
You can roll out configuration changes, et cetera. But one of the coolest things it does is if you
say, let's say we want a MySQL cluster. So we have three different VMs all running MySQL.
And when you're running a cluster like that, it's really common to want a leader follower topology
where the leader will receive all of the writes and the followers will receive all of the
reads. So what happens is when you spin up those three VMs and install that hart package, once you
start them, you don't have to do anything. On their own, they will hold an election using a built-in
algorithm and they will decide who the leader is and who the followers are. Now, the other really cool
thing is if the leader goes offline for whatever reason, they will automatically hold another
election and elect another leader and the rest will be followers. So they have this, they don't
need a central orchestrator for their runtime like you need with a lot of container runtime solutions. So the idea is to
push as much as we can down into the application layer and then allow the packages themselves,
the things that are running the packages themselves, to decide how they should be run.
So loosely, it is, one, a universal packaging format, and, two, a particularly cool runtime that allows your applications to self-organize.
Fascinating.
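The failover behavior of that supervisor ring can be modeled in a few lines of Ruby. Habitat uses a real distributed election algorithm; this toy picks a leader deterministically and only illustrates the re-election-on-failure behavior, with hypothetical member names:

```ruby
# Toy supervisor ring: members elect a leader among themselves and
# automatically re-elect when the leader goes offline. No central
# orchestrator is involved; the ring manages itself.
class SupervisorRing
  attr_reader :members, :leader

  def initialize(members)
    @members = members.sort  # toy "election": lowest id wins
    elect!
  end

  def followers
    @members - [@leader]
  end

  # Simulate a member going offline; re-elect only if it was the leader.
  def member_down(id)
    @members.delete(id)
    elect! if id == @leader
  end

  private

  def elect!
    @leader = @members.min
  end
end

ring = SupervisorRing.new(%w[mysql-3 mysql-1 mysql-2])
ring.leader              # => "mysql-1": writes go here, reads go to followers
ring.member_down("mysql-1")
ring.leader              # => "mysql-2": a new leader, no operator involved
```

In Habitat itself, the leader/follower topology is requested when loading the service (e.g., `hab svc load <origin>/mysql --topology leader`; flag spelling per Habitat's documentation of the era).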
So from a perspective of rolling this out to an environment, a lot of these systems and tools work very well in a Greenfield-style deployment model.
But when you take an existing application, let's not pick on the ancient mainframes. Let's pick on one generation newer. PHP apps that were written 20 years ago.
What does it take to, I guess, take these old monoliths, these old systems that have these
Byzantine deployment models, and convert them to take advantage of something like Habitat?
Is that a total rewrite? It's absolutely not. And that is one of the most beautiful things about Habitat.
If you have, let's say, an old PHP app and you know how to deploy that app, all you would do is you would take your application and you would write what we call a plan.
And the plan is how that application is deployed.
So if you know how to deploy it manually, like you take that PHP app, put it on a virtual machine, and you know the commands you run to get it running, you capture those in the plan file. And that's all it would take, as long as you know how to deploy it and can capture it in that plan file. You would then package that into that hart artifact, that Habitat universal artifact, and you'd be able to instantly put that anywhere in the cloud. Again, whether it's a VM, whether it's a container,
it'll allow you to lift that application, that legacy application, and put it into that modern
infrastructure without needing to rewrite the application itself. Wonderful. And is this one
of those boil-the-world scenarios in which every part of an application needs to be managed by
Habitat for it to make sense? Or is it something that could be eased into more gradually?
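As an aside, the plan file just described for that hypothetical PHP app is a shell script with some metadata variables and callback functions. A rough sketch: the package name, origin, and paths are made up, while the variable and callback names follow Habitat's plan.sh conventions:

```shell
# plan.sh: package metadata plus the deploy steps, captured once as callbacks.
pkg_name=legacy-php-app        # hypothetical package name
pkg_origin=myorigin            # hypothetical origin
pkg_version=1.0.0
pkg_deps=(core/php)            # runtime dependency on PHP

do_build() {
  return 0                     # nothing to compile for a plain PHP app
}

do_install() {
  # The manual deploy steps you already know, captured here:
  # copy the application source into the package's install prefix.
  cp -r "$PLAN_CONTEXT/src/." "$pkg_prefix/app/"
}
```

Building this plan (typically inside the Habitat studio) is what produces the .hart artifact.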
So you can ease your application into Habitat by taking one part of it. Let's
say you have a Rails application and you want to start running this with Habitat.
What I would probably start off with doing is just taking the application server,
whether it's Passenger, or the web server like Nginx or Puma, or
whatever it is you use, package that in Habitat, and then see how that goes. And then consider at
that time moving the database to being managed with Habitat. Basically, it is something that
can be eased into. Gotcha. Now, because the general theme, as much as there could be a theme of this podcast,
is cloud computing,
how does this wind up integrating
with the large public and or private cloud providers
that exist today?
Well, if you export your application as a container,
you can easily run it on AWS container services.
You can run it on Azure container services. I cannot keep the acronyms straight. So whatever the acronyms are for those services,
pretend I said them. I'm sorry, there have been three more launched since we began having this
conversation. Exactly. I think so. So the nice thing about it is it makes it kind of cloud
agnostic. So I could use the same Habitat package to run on AWS.
I could run it on Azure, Google Cloud, Digital Ocean, whatever have you, all those different
evolving cloud providers.
So it gives you that freedom to run your application where you want to run it in the same way,
regardless of which cloud you're using.
Now, that said, if you're using some really high-level AWS features,
the higher level you get, the more tied to a certain cloud provider you get.
So there might need to be some adaptation there.
But as for core VM or container functionality,
you can run it in any one of those clouds.
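The export step being described is a single CLI command per target format. Roughly, using a hypothetical package name and the exporter names Habitat shipped around this time:

```shell
# The same .hart package, exported to whichever format the target needs.
hab pkg export docker myorigin/legacy-php-app      # a runnable Docker image
hab pkg export kubernetes myorigin/legacy-php-app  # manifest for Kubernetes
hab pkg export tar myorigin/legacy-php-app         # plain tarball for a VM
```

The application code and the plan don't change between targets; only the export command does, which is what makes the package cloud-agnostic.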
Right. And as the world moves to serverless,
the use case for any code that can't complete in five minutes or less and is written in one of a
handful of blessed languages, there's just no place for that in the world of the future.
Here in reality, that generally doesn't tend to play out the same way.
That said, something that's been interesting about Chef historically is that it's been one of the few third-party offerings that is marketed directly by AWS historically.
You had OpsWorks for Chef, and then there's Chef Automate for OpsWorks, or am I getting my acronyms and ordering confused again? There's OpsWorks, which is kind of a service within AWS.
And then if you want to run Chef Automate, which is our kind of self-contained Chef server platform, if you will, there's an AMI in the AWS marketplace that you can use to spin one of those
up. Wonderful. So coming back to that point, if you're in a position now where Habitat starts to more or less solve a number of
these creation questions of the getting software from inception to something that is able to be
deployed, is there a place for some of these large cloud providers to start providing Habitat native
platforms? Or is it something that at this point is so easy to parlay into something they
already speak that there wouldn't be a need for it?
I would say the focus is on parlaying it into something they already speak. I could see, at some point in the future, having
Habitat-native platforms, but the whole idea of Habitat is being able to take the same
package and run it on anything.
Gotcha. This was always one of the questions, to some extent,
in the early days of Docker. When you have a container, well, we already have
these instances, and we can just run it there.
Then, increasingly, the cloud providers came out with support for different orchestrators.
These days, it seems that Kubernetes has more or less owned that entire space.
But there was a time before this happened where getting anything to run repeatedly in, for example, an AWS-style environment,
most people were using Packer or similar to build an AMI, often with Chef as a handler to do a lot of those configuration pieces.
And then that AMI that got created was passed out to auto-scaling groups and the rest.
This took a significant amount of time to launch new features.
You would look at 45-minute deployment processes for a one-line fix.
So there's always been a little bit of a tug-of-war between, do you go ahead and bake everything from scratch every time there's any change? Or do you get it most of the way there and then finish it off by having something like Chef in place making those changes? And increasingly,
it seems that if you can reduce the cycle time down, the argument of never having something you
can log into, you just deploy a new version with the push of a button in seconds or less,
starts to become something much more
viable. That is true. One of my favorite stickers I've seen on someone's laptop at a conference said
serverless just means you're using someone else's servers. So I mean, as we abstract,
I mean, that's wonderful, but there's always, I don't know if there always will be, but currently
there still is very much real infrastructure underneath it that needs to run it.
Right. The nice part now is being able to pay someone else to handle these things with a team and a budget that generally far outstrips what most of us would be able to put together.
Oh, you're a three-person startup. Go ahead and hire an entire team of 200 ops people with your next round of funding.
Right.
That raises eyebrows, having been in
those rooms. I imagine so. So to come back around to what we started our conversation with,
something that you've been up to lately is you're the CTO of something called Operation Code.
Can you tell me a little bit about it? Sure. Operation Code is an organization that's
dedicated to helping veterans who are transitioning from military to civilian life learn software engineering skills.
So we partner with code academies or coding boot camps to try and establish scholarships for these veterans.
We have lobbied Capitol Hill in Washington, D.C. to have GI Bill funds be able to be used for coding boot camps.
Basically, we are a teaching organization helping veterans move into those high-paying
software engineering jobs. Wonderful. How did you get started in something like that?
Well, I am the daughter of two military officers. Both my father and my mother were in the Air Force.
Going into the Air Force was not an option for me when I turned 18 due to some medical history
stuff that I have. But I grew up in military culture. I've always very much identified with it.
And I realized that I wanted to do something, you know, if I couldn't
be in the military myself, I wanted to do something to help those who are making that
transition out of it, move into this world of technology that I've been in for the past several
years. One of the challenges as I take a step back and look at the career trajectory that I've had,
I went from doing a bunch of non-tech things to being in support to working as a sysadmin
to becoming a systems engineer, production engineering, DevOps,
if you won't smack me over the wrist for that one, and so on and so forth.
But if I take a look back at the technical road that I walked myself, then I'm at a
loss; I don't have a great story to tell people of, well, just do this, this, and this.
For better or worse, the road that I walked is closed. How do you bootstrap someone who's
starting from little more than an interest and a willingness to put in time?
Well, knowing where to start is by far
the hardest because, I mean, the technology landscape, since I started 10 years ago,
more than 10 years ago, has changed so rapidly. And it feels like which skills are most marketable
kind of entirely turns over every couple of years or so. So what we do is when
someone's just starting out wanting to get a feel for something, we have a partnership with,
they used to be lynda.com. Now it's LinkedIn Learning, I think something like that.
Basically, we give people the resources to try it out on their own if they want to and see,
do I like this coding thing? What speaks to me about it? Because it's such a
hard field to get into. I think some people come in because they want a high-paying job, which
honestly is a perfectly fine reason to come in the field, I think. But that isn't always enough
to carry someone through not just the beginner learning curve, but the learning
hockey stick, as I think of it, from an intermediate developer to a senior developer.
So something else we do is we do have open source projects. That's a major part of my role as CTO,
is helping govern those projects. And we run our front end in React, we run our back end in Rails,
and then we run our infrastructure in Kubernetes.
And the reason we do that is all three of those are very modern, very desired technologies
to use. So we give someone who wants to come in and get experience with these a chance to work
with myself or work with other mentors in the program on learning these technologies through contributing to a real open source project
that they can then put on their resume or put in their portfolio and show developers.
So we are very much a learning by experience kind of organization.
And we are constantly iterating on that and changing as the industry changes. So someone comes in and they're learning as they go, and you give them exposure to front-end, back-end, the infrastructure bits with Kubernetes.
And I believe you mentioned before we started the show that the Kubernetes cluster itself runs on top of AWS.
That is correct. Yep. Yep. Using kops, I think, is what we use to spin up that cluster.
The consensus on the proper way to run Kubernetes on AWS is clear: everyone else is doing it wrong. I feel like, as many people as you talk to, there's always someone with a divergent opinion.
But so when someone comes in, where do you start them on that entire stack? Do you
tackle the entire thing and see who can drink from that fire hose to some extent?
We usually start people on the front end because in that case, you can see your work immediately
and see the effects of your work very intuitively. So that's usually where I start someone off to
just kind of get a feel for coding, get a feel for development. Then we might teach them about
APIs and move them into Rails. And then usually only after that,
unless someone comes in saying,
I want to be an infrastructure engineer,
then by all means, I'll start them with infrastructure.
But then we start introducing people to Kubernetes.
In particular, because with our current setup,
you have to have your workspace configured for Kubernetes
in order to get into one of the containers
that's running our web server
and access the Rails console or the Rails logs. There is currently a little bit of Kubernetes
knowledge required for a lot of the maintenance tasks with our Rails application. And that's
something we're looking at saying, is there a better way to do this? But it's a way to get
someone's hands wet, if you will, or feet wet, I guess; you don't want to get
your hands wet. But anyway, feet wet in Kubernetes, and just get a little bit of exposure to all these
different technologies.
So here's the $64,000 question: is Habitat used in some way in these environments, or not yet?
Not yet. It's something I am looking at doing. I very much want to, if nothing else,
use it to create our Docker images. I have not introduced Habitat quite yet. There's a few
things I want to get more stable about our environment before introducing Habitat,
which will require some more knowledge acquisition by our team.
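The Kubernetes-flavored maintenance access described a moment ago, getting into the container running the web server for a Rails console or logs, comes down to a couple of kubectl commands. The pod name here is hypothetical:

```shell
# Find the pod running the web server, then exec into it.
kubectl get pods                                  # list running pods
kubectl exec -it web-abc123 -- bundle exec rails console
kubectl logs web-abc123                           # read the Rails logs
```

This is the per-contributor Kubernetes knowledge in question: enough kubectl to reach the application, not full cluster administration.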
Right. One thing I've always noticed in the course of
my career has been that if I think I know something, all I have to do to dispel that
notion is teach it to people who've never heard of it before. Absolutely. Yep. Everything from
there turns into questioning of how I think about it, how it's presented. And for something
that still sometimes presents as early days, as Habitat does,
I can't shake the feeling that that would wind up adding significant value
just to the onboarding process.
The challenge when you're working on an open source project
and you're deep into the weeds is you forget at times what it's like to come
not only to the project new, but to the project as it stands today.
Because when you
started, it did far fewer things, and it was much easier to wrap your head around. I mean, these
days, I can't imagine what it's like to come to AWS, build a new account, click and get the giant
list of fine print, and oh my word, that's a service listing. That's not just legalese.
It's difficult to get over that hump. And it's challenging because a lot
of times very talented people forget what it's like. One of the things I find that's so compelling
about Operation Code is it's giving back in a way that you don't see very often.
Right. Part of what motivated me to reach out was a dear friend of mine who's a veteran; she posted something on Facebook saying that when someone tells her thank you for your service, it puts her off a little bit.
And I didn't fully understand it.
But as we talked through it, I realized, because I had always thought the moment you find out someone is a veteran, you always say thank you for your service.
I thought that was the polite thing to do.
But the thing was, that was about me. Me thanking someone for their service was saying, oh, look, maybe not look at me, but oh, look, I know the proper response to that.
And you made that sacrifice and I didn't, so thank you. And veterans have told me it can feel like
someone is saying better you than me. So as I talked to this friend, I'm so glad she had this conversation with me. It revealed to me
that the way for me to truly help is not necessarily thanking someone, though I do sometimes still
thank people with their permission. It's through giving them the resources they need to find a
purpose in life after the military.
Because that is a very hard transition to make for a lot of people going from,
you know, very well-defined purpose, knowing exactly what you're doing,
to the kind of more nebulous civilian technology world.
So I knew I needed to do something that could directly help people with that transition
and get them into
those very well-paying, purpose-filled technology jobs. It's great to hear stories like this,
and it's nice to see that people care. It's a nice reminder that the general nature of humanity is to
help other people out. Sometimes it's difficult to maintain sight of that. Right, it is, especially
when you're dealing with internet culture. I actually have Twitter and Facebook blocked by default in my
main browser because I was too often just, you know, I had a free moment. I would just
turn it on and then instantly be flooded with all the negativity. The worst, maybe not the
worst of human nature, but the darker side of human nature. So I can find
that without effort, but it takes some extra effort, I think, to see the good side of human
nature right now. And in an organization like Operation Code, the joy is once you join it,
you see that good side of human nature constantly, and you're actively doing something to make the
world a little bit better.
Terrific. I'll throw a link to Operation Code in the show notes. Where else can people find you?
You can find me on Twitter at Nell Shamrell. I still check it daily, even though I have it
blocked by default. Sometimes I unblock it. My personal website is NellShamrell.com,
or on GitHub, I'm Nell Shamrell. I'm pretty consistent with what
my username is on most things. So feel free to reach out to me there. If you're interested in
Operation Code or just want to talk cloud infrastructure or Habitat or anything,
feel free to drop me a line. Thank you so much for joining us, Nell. My name is Corey Quinn,
and this is Screaming in the Cloud.