Screaming in the Cloud - Episode 48: Nobody Gets Rid of Anything, Including Data
Episode Date: February 13, 2019

Companies can find working in the Cloud quite complicated. However, it's a lot easier than it used to be, especially when trying to comply with regulations. That's because Cloud providers... have evolved and now offer more out-of-the-box services that focus on regulation requirements and compliance. Today, we're talking to Elliot Murphy. He's the founder of Kindly Ops, which provides consulting advice to companies dealing with regulated workloads in the Cloud.

Some of the highlights of the show include:

- Technical controls are easier, but requirements are stricter
- Risk analysis: moving from putting locks on things to thinking about risks to customers
- Building governance and controls; making data available and removable
- Secondary losses: scrubbing data to make the scope and magnitude of a loss smaller
- Computing became ubiquitous and affordable; people started collecting data to utilize later - nobody gets rid of anything
- General Data Protection Regulation (GDPR): regulations now apply to marketing technology stacks
- Empathy-building exercise and security culture diagnostic help companies understand compliance obligations
- Security culture: beliefs and assumptions that drive decisions and actions
- Evolution of understanding of the public Cloud's security and availability
- Raise the bar and shift mindset from pure prevention to early detection/mitigation; follow FAIR (Factor Analysis of Information Risk)

Links:

- Kindly Ops
- Amazon Web Services (AWS)
- Microsoft Azure
- Relational Database Service (RDS)
- Google Cloud Platform (GCP)
- NIST Cybersecurity Framework
- GDPR Day
- People-Centric Security by Lance Hayden
- Stripe
- Society of Information Risk Analysts (SIRA)
- DigitalOcean
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is generously sponsored
by DigitalOcean. From where I sit, every cloud platform out there biases for something. Some
bias for offering a managed service around every possible need a customer could have.
Others bias for, hey, we hear there's money to be made in the cloud. Maybe give some of that to us.
DigitalOcean, from where I sit, biases for simplicity. I've spoken to a number of DigitalOcean
customers, and they all say the same thing, which distills down to they can get up and running
in less than a minute and not have to spend weeks going to cloud school first. Making things simple
and accessible has tremendous
value in speeding up your time to market. There's also value in DigitalOcean offering things for a
fixed price. You know what this month's bill is going to be. You're not going to have a minor
heart issue when the bill comes due. And that winds up carrying forward in a number of different ways.
Their services are understandable without spending three months of study first. You don't really have to go stupendously deep just to
understand what you're getting into. It's click a button or make an API call and receive a cloud
resource. They also offer very understandable monitoring and alerting. They have a managed
database offering, they have an object store, and as of late last year, they offer a managed Kubernetes offering that doesn't require a deep understanding of Greek mythology for you to wrap your head around it.
For those wondering what I'm talking about, Kubernetes is, of course, named after the Greek god of spending money on cloud services.
Lastly, DigitalOcean isn't what I would call small time.
There are over 150,000 businesses using them today.
Go ahead and give them a try, or visit do.co slash screaming,
and they'll give you a free $100 credit to try it out.
That's do.co slash screaming.
Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Elliot Murphy, the founder of Kindly Ops.
Welcome to the show, Elliot. Hi, Corey. Thanks for having me.
Oh, thanks for being had. So there are a few interesting bits of overlap throughout our,
I guess, history that I figure is probably as good a starting point as any. For example,
we both at one point in our
lives called Maine home. I escaped, you didn't, you still live there. If we can extend the word
living to cover Maine. There is a lot of ice right now. Exactly. At the time of this recording,
there's apparently some sort of giant snowstorm heading your way. And it's chilly here in San
Francisco as well. We're just under 60 degrees. Brutal. Exactly. I had to put on a jacket this morning. It was awful. But what also I guess is more interesting than, hey, we used to
live in the same geographic area, is that your company focuses on providing consulting advice
to companies that are dealing with regulated workloads in the cloud, primarily AWS, but also
a bit of Azure and GCP that are
scattered in there for show as well. Correct? Absolutely. Yeah. So you might think FinTech,
biotech, all that kind of stuff. When I started my consulting company, I went through a couple
of rapid iterations. I went from "I'm a DevOps consultant," which, great, swing a dead cat and
you'll hit 20 of those people who all look alike, and it becomes a race to the bottom. The second iteration, pretty rapidly, was helping with
compliant workloads in AWS, specifically PCI. And one of the things that got me out of that
was the fact that the job, from my perspective, was kind of miserable.
An ancillary fact was that I had a conversation with you and realized, wow, this is what it looks like to do that right; I'm making this look like amateur hour. So when you find out you're not doing something super well and you get out of it, that often feels like the best path forward.
It is pretty complicated at times. AWS has made it a lot easier. Oh, absolutely. Back in the days when I was working with a variety
of different regulated industries as part of a full-time job, keeping up with compliance
agreements and the rest with your cloud provider was a massive undertaking. Increasingly, it feels
like that's changing a bit as more and more services are compliant out of the box, the
requirements are lessening, and that is really easy to overlook, but it's a tremendous
amount of work on the part of the cloud providers to be able to get there. Yeah, things have evolved
quite a lot. So only a couple of years ago, it was pretty common to need to provision EC2 instances
with specific types of encryption setups so that you could run Postgres on top of them in a way
that met your encryption
at rest requirements. Now, of course, you can use all different flavors of RDS with encryption
in flight and at rest, just out of the box. So particularly around the technical controls,
so much has gotten easier.
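As a concrete illustration of that "out of the box" point, here is a minimal sketch of what it looks like today with boto3; the instance identifier, size, and credentials are placeholder assumptions, not anything from the episode:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Encryption at rest is a single flag; AWS manages the KMS key unless you pass KmsKeyId.
rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",              # placeholder name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="app_admin",
    MasterUserPassword="use-a-secrets-manager-instead",   # illustration only
    StorageEncrypted=True,                                 # encryption at rest, out of the box
)
```

Encryption in flight is similarly a configuration detail these days (for Postgres on RDS, typically the rds.force_ssl parameter in the DB parameter group) rather than something you have to build yourself.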
But we're also kind of seeing the requirements get a little stricter. For example, the NIST Cybersecurity Framework was revised last year, and one of the things it calls out is that you should be doing risk analysis.
And so I think we're just seeing a natural maturing of practice where instead of everybody
trying to figure out how to put locks on things, now that that is pretty easy to do, we're trying to level up and have people think in a mature way about the actual risks that
they're facing and that they're facing on behalf of their customers and try and make good decisions
about how best to manage those. It feels like compliance has always been a big, complicated area. And when I look back at the times I worked for regulated employers, the technical stuff was by no means trivial to handle. But what I recall far and away beyond that was less to do with being able to check the boxes of, yes, it's encrypted at rest, et cetera.
Good for you.
You've now solved for the problem of people breaking into multiple data centers, stealing a bunch of drives, and somehow recombining them to get the data you care about out of them, which is not really a threat model in today's world when we're talking about large cloud providers. What I recall was much more around the idea of building governance and controls into your business as a whole.
Yeah.
And so, for example, something that might be even more important than how locked up the
data is, how safe the data is, is how available the data is. And also, can you get rid of stuff?
What are your retention policies? So if you look at a possible bad thing happening and look at the
magnitude of the loss, it's going to cost you money in a handful of different ways. Some of those ways might be around fines and judgments.
We would typically call those secondary losses. So you could spend a bunch of money trying to
make sure that data never gets breached, which we know is not realistic. Or you could look at,
well, we don't really need any of this debugging data for longer than two weeks. So we're
going to automatically scrub that stuff away so that if a breach does happen on these log servers,
the scope and magnitude of the loss is much smaller, and any fines or judgments we got around that data would also be much smaller.
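As an illustration of the kind of automatic scrubbing Elliot is describing, a two-week retention policy is a couple of API calls; the log group and bucket names here are hypothetical, and this is just one of several ways to do it:

```python
import boto3

logs = boto3.client("logs")
s3 = boto3.client("s3")

# Keep application debug logs for 14 days, then let AWS delete them automatically.
logs.put_retention_policy(logGroupName="/app/debug", retentionInDays=14)

# Do the same for raw debug dumps that land in S3.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-debug-dumps",   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-debug-data",
                "Filter": {"Prefix": "debug/"},
                "Status": "Enabled",
                "Expiration": {"Days": 14},
            }
        ]
    },
)
```

If a breach does hit those systems, the data that can leak is bounded at two weeks' worth, which directly shrinks the secondary losses being described.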
One of the most terrifying aspects of GDPR day, for lack of a better term, was the pile of email I got that day from companies I could not have picked out of a police lineup, sending email to my maiden name. I changed my name legally toward the end of 2010, and some of these companies were going by names they had on file with different nicknames; I went by my middle name for a while. I was getting emails to a name I hadn't used in nearly 15 years.
And that was terrifying. First, you have an awful lot of data sitting there. Secondly,
if I did business with you back then, the first time you reach out to me again to tell me that
we're doing business differently is 15 years later with a privacy policy update?
What? That's terrible marketing. It was sort of a strange shock that no one gets rid of anything.
Yeah, for a long time, as computing became so ubiquitous and affordable, lots and lots of people started collecting data because they could: we might think of good questions to ask of this data later. And the big shift with GDPR
was suddenly a set of regulations applied to marketing technology stacks, whereas previously
they had applied to your transaction, your financial processing stack, your healthcare
data processing stack, but certainly not to your marketing stack. And so there's a whole new set of
people and a whole new set of companies who are having to confront these issues around like, how grown up are we being with how we're managing these systems? And the fact is,
everyone was not doing great about it, and the regulation forced a little bit
of a wake up. One of the more surprising elements that I see when talking to companies who have
compliance obligations is, I guess, their willingness to retreat into answering
everything with compliance, as if it was a magic word that justified or excused all kinds of
different behavior patterns. That tends to be a very strange conversation, where you get the sense
that the people wielding compliance as a bat don't really seem to grasp what it is that
their obligations are and how that has been interpreted. Absolutely. Dealing with a big set of rules, no matter who made the rules and how much you like them, is frustrating and bureaucratic to begin with. And then it gets even worse when people are trying to use the rules
to force you to do something that you don't think makes sense. And so like one of the things that we've been doing is insisting on an empathy building exercise
whenever we are trying to help a company transition into leveling up on compliance.
So we've been using this security culture diagnostic from the book People-Centric Security
by Dr. Lance Hayden. And he's actually released that survey or diagnostic
under Creative Commons, so it's a fantastic tool that people can download from his site. It's just a Word doc that you can use, and hopefully we can link that up in the show
notes. But he outlines four different security cultures. And it's just amazing to see going
through a 30-minute exercise with folks, helping them understand which ones seem to be prominent in their environment, which ones they would like to be prominent, and the values of each.
Their behavior towards each other totally changes and they start behaving with empathy and understanding. Kind of the key to that is that this helps you perceive culture not as some true self that you carry around, or some singular, important cultural core that exists in your company, but as beliefs and assumptions that drive decisions and actions.
And that is a mental model. And suddenly when people realize that they might
have a default or a preferred way of acting and deciding, but that's a mental model they're using,
they can learn about other mental models and they can understand when those other models have value.
They're able to just be so much more helpful to each other. So I'd like to quickly outline a couple of those cultures. One that you and I, as small business owners, have for sure is autonomy culture: there are loose controls and you are very externally focused. You're very interested in people outside the company, because there aren't too many people inside the company. That is super common in early-stage startups and small consulting firms, and it is a very useful way of working. You sort of make whatever
you earn, whatever business you go out and win for yourself. You don't get anybody else sort of
helping you and supporting you. A totally different culture would be a compliance culture,
which you typically would see at a large healthcare organization where they have very
tight controls and they care very much about what's outside the organization. They're externally focused in terms of living up to other people's rules.
So caring a lot about becoming compliant, checking off the box on these regulations.
And then a totally other perspective would be a government organization.
And they're also very strict, like a healthcare organization, but totally internally focused.
So a government agency, a government organization, they sort of don't look
to the outside world for what's right and wrong. They don't really care about that. They decide
internally what's right and wrong and good and bad, and expect everyone in their organization
to follow their own rules that are internally developed. As you can see, just from those three different cultures, you can probably spot it a mile away: oh, this company
is behaving like this, but they'd like to behave like that. And so we need to understand the different mental models
that people are using. Or companies say that they behave like one of those and in practice
behave very differently. Exactly, exactly. That's the aspirational thing of where the company wants to go, versus looking back at what they have actually done in practice over the last year. What I always found fascinating was the evolution of understanding
as you wind up embracing different aspects of technology as things tend to evolve. I know I've
told this story during a conference talk once, but I don't think I've ever told it on the podcast
where I was earlier in my career doing a project where
everything lived in AWS. This was my first outing to addressing compliance in this environment.
So a financial company sent one of those painful questionnaires of, oh, we're debating doing
business with your company, fill out the following 80-page survey. And what I filled out addressed AWS as if it were coming from the perspective of a data center. No big deal. You can probably figure out how that tended to work out, where I got a message a few weeks later: great, here are the dates on which we'd like to send our security people to tour the data center you have in Ashburn, Virginia, or Herndon, or wherever it was that they were publicly admitting to at the time.
And the response to that was, oh, dear. It turns out that no one gets to tour the AWS data centers.
And by treating it that way and telling people that at the end, it didn't go well at all.
We're not allowed to tour the data center. We're the third
largest bank in Omaha. Who do they think they are? Stupid online bookstore. And there was an
understanding gap. What made that work more effectively in subsequent outings was
talking directly to the account managers at AWS who've answered these questionnaires a hundred
times already. Here, give them the following list of paperwork. It fits in a truck.
And when they're done with all of that,
they want more, then have them talk to us.
And magically, those doors started opening,
partially because AWS got better
at answering those questions
and partially because the understanding
of these finance companies improved
as they started realizing that no matter how big they are,
they're not going to get to tour an AWS data center.
It wound up getting
smoother as people on both sides of that conversation learned to communicate with each
other on the same wavelength. Yeah, absolutely. There's been a complete reversal in how it's
perceived. I would be nervous these days if someone was trying to run their own data center
for doing some critically sensitive workload rather than using one of the big cloud providers
just because economies of scale, right?
Like the number of security engineers working at AWS
defending that infrastructure 24 seven
is so much bigger than even a big finance company
is able to do for something
that they're running on-premises.
The piece that I always found fascinating
was that in having these conversations with folks,
the story of why public cloud was not acceptable began to hold less and less water.
It went from it's new and scary and we don't trust it to our data is important and we don't want that living in the public cloud.
Really? Because your bank is in the cloud, your compliance body that is going to be auditing you is in the cloud, and your tax authority is in the cloud. So what makes your data more important than any of those
other three bodies who are very happy right now in the same availability zones and regions that
you are currently poo-pooing? Yeah, and your military is there too.
Oh yeah, that's right. I'd forgotten that piece. So it comes down to this story of, yeah, you're
right. It's much safer if I have a bunch of half-awake people running their own few racks down the street at the colo. It just doesn't work out that way.
Yeah, it doesn't at all. And it's really a great time to be working on some of these things,
because for a long time, I was a little sad thinking that all the cool tech was being
applied to absolutely trivial things. But it feels like over the last year, we've really kind of hit this tipping point where a lot of the cool technology
is now able to be applied to the most sensitive workloads. And so we can do really interesting
things with medical records and with financial transactions and bring benefits to people who
need cool features around those absolutely life-critical
transactions. What's interesting to me as well is that people still tend to approach this stuff as
a binary rather than a spectrum. It's fascinating that someone will naively say that a payment
transaction company needs to have the same level of security controls and best practices and
security policies as Twitter for pets, more or less. And it feels like that is fundamentally
untrue. Right. It totally is. And so I think a couple of things are happening. One is that
we're trying to raise the minimum bar for everyone, right? And so things like GDPR
cast a very wide net and they sort of insist on,
if you're processing data about customers, you need to level up. What was okay five years ago
is not okay today. But then beyond that, I think there is a real spectrum. And one of the things
I'm really hoping is that we in the tech industry acquire some skills that the insurance industry has had for 100 years.
And that's understanding how to think about risk, like you said, as a spectrum, as a range of probabilities with a range of possible losses.
And then choose the things that we're doing to try and protect or minimize the amount of loss based on what really makes sense. So if you have
$100 at risk, it doesn't make sense to spend $10,000 to protect it. Maybe it makes more sense
not to do that business, or maybe it makes more sense to buy some insurance, or maybe it makes
sense to have another control that is totally much less obnoxious to the people in your organization. So an example I love to use
is we're worried about these engineers, you know, out there in that cloud, turning stuff on and
spending money. And then like, what if it's not working? Like, what if they run up a big bill?
So that's a legitimate concern. And you absolutely want to have spending controls inside your
company. But think about like how it feels to have a budget alarm versus a very restrictive
policy about who can create new resources. You're going to have a totally different amount of innovation inside the team, and a totally different track record of retaining people, with one versus the other. But the budget alarm is going to cost you way less and tell you way sooner when something does go wrong and you're spending money that you don't want to be spending.
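As a sketch of what that budget alarm might look like in practice, here is one way to do it with AWS Budgets via boto3; the account ID, dollar amount, and email address are placeholders, and a CloudWatch billing alarm would work just as well:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "team-sandbox-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[
        {
            # Notify someone when actual spend crosses 80% of the budget.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "cloud-spend@example.com"}
            ],
        }
    ],
)
```

That costs essentially nothing to run and surfaces runaway spend quickly, without putting an approval queue between engineers and the resources they need.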
And it also leads to the rise of shadow IT, people working around policy when it gets in the way of
doing their job. And people get understandably upset when they're making six figures, but aren't
allowed to spin up a $50 a month instance without six weeks of approvals. It becomes working against
the better interests of the company, where people have to subvert process in order to effectively do their jobs. And that is never something anyone wants to see happen. I guess the way I tend to approach security is from the perspective of the headline risk when something happens. Do you want to be in the headlines for getting breached after they kidnapped three members of your staff and put an incredibly advanced system into place that would eventually subvert you over time, something the world had never seen before? Or do you want it to be because you didn't use the proper permissions policy on your S3 buckets and someone found it by accident?
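For the S3 scenario Corey just described, the boring preventative is a few lines; this is a minimal sketch with a hypothetical bucket name, not a complete bucket-hardening recipe:

```python
import boto3

s3 = boto3.client("s3")

# Refuse public ACLs and public bucket policies outright for this bucket.
s3.put_public_access_block(
    Bucket="example-customer-exports",  # hypothetical bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```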
It comes to raising the bar of what it takes to subvert you. At some point, I'm sorry,
your startup, no matter how effective it is, is not going to be able to withstand a coordinated assault by a nation. It just doesn't work that way. Yeah, absolutely. And I think there's real
value in shifting the thinking from pure prevention, which seems to be the default approach for a lot of technical folks, a total focus on prevention at all costs, over to early detection and mitigation.
And that can lead to dramatically better customer experiences and dramatically
better employee experiences. So I remember the third project I worked on where I
was integrating a payment gateway, I got to use Stripe. And that was amazing. I found out later
on watching a meetup talk that Stripe had optimized for approving accounts quickly and disabling them
quickly if they detected fraud. Whereas all of the other payment gateways I had worked with
were very onerous in the signup process because they were trying to stop any fraudulent signups.
Whereas Stripe was optimizing for lots of signups and immediately turning off any fraudulent accounts.
And so as a legitimate user, I had the best experience I've ever had with a payment processor because of that mindset that they had towards security.
Right. When I was selling sponsorships early on in the history of the newsletter, I was using Stripe to do it.
And I needed to be able to drop an invoice that someone could pay with a credit
card in front of them in about 20 minutes. And Stripe had it done in three. It was incredible.
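For what it's worth, the "invoice someone can pay with a card" flow Corey describes is only a handful of calls with Stripe's Python library. This sketch uses made-up amounts and addresses, and exact parameter names can vary between API versions:

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

customer = stripe.Customer.create(
    email="sponsor@example.com",
    name="Example Sponsor",
)

# One line item for the sponsorship, amount in cents.
stripe.InvoiceItem.create(
    customer=customer.id,
    amount=50_000,
    currency="usd",
    description="Newsletter sponsorship",
)

# Create a hosted invoice the customer can pay online, due in 30 days.
invoice = stripe.Invoice.create(
    customer=customer.id,
    collection_method="send_invoice",
    days_until_due=30,
)
invoice.send_invoice()
```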
Now, if it had been fraudulent, I suspect they would have hit me with a belt, metaphorically,
or perhaps literally speaking, given that I know enough people over there. But the fact that it
got out of my way was incredibly valuable. And I'm sure they've run the numbers, and I'm sure that there are barriers around that,
where if I spin something up quickly, it's not going to instantaneously let me accept
a $4 million payment and transfer that into my account.
There are going to be controls and oversight that make sense.
But depending on how they structure it, if the total risk is in the order of, I'm not going to be able
to process more than, I don't know, $5,000 worth of transactions or whatever it is until a human
has reviewed it, well, that is a lot more manageable than I'm trying to sell this thing
for $20 and I need to wait four weeks to do that. And by that point, the buyer has long since lost
interest and gone away. Exactly. What it costs them in terms of fraudulent use has got to be orders of magnitude lower
than what it would cost them to go the other direction in terms of dissuaded customers.
Yeah. And that's where a much more mature way of thinking about risk modeling, and understanding the actual amount of risk that you're trying to protect against, can totally
transform the feeling people have working with those products and services in those
regulated environments.
Yeah.
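A back-of-the-envelope version of the risk math discussed above, with entirely made-up probabilities attached to the $100-at-risk versus $10,000-control comparison Elliot used earlier:

```python
# Toy annualized-loss comparison: is the control worth its cost?
loss_if_it_happens = 100.0          # dollars at risk in the scenario
annual_probability = 0.20           # assumed chance of the event in a given year
expected_annual_loss = annual_probability * loss_if_it_happens  # $20/year

control_annual_cost = 10_000.0      # what the proposed control costs per year
control_risk_reduction = 0.90       # fraction of the expected loss it removes

savings = expected_annual_loss * control_risk_reduction  # $18/year
print(f"Expected annual loss: ${expected_annual_loss:,.2f}")
print(f"Control saves ${savings:,.2f}/yr but costs ${control_annual_cost:,.2f}/yr")
# Spending $10,000 a year to avoid roughly $18 of expected loss is the
# over-controlling described here; accepting the risk, or insuring it,
# makes more sense.
```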
One of the more, I guess, counterintuitive aspects of this entire world, and this applies
to people who are the developers, who are the administrators, who are the rest, 90%
of your security posture will come down to
some very basic things. Use a unique password for every site. Use a password manager and enable
two-factor auth wherever you can. If you do nothing other than those three things, you are going to be
so much better off than with any of the ridiculous things five steps further down the ladder that you could do to start optimizing. You can run incredibly complex security software
that does amazing things. But if you aren't controlling the basic stuff, the permissions,
the access control, then there's really no point to it. That's like building an incredibly thick,
amazing wall
and forgetting to lock the door. Yeah, absolutely. One of the most fascinating things for me as I've
gotten deeper and deeper into risk management over the last couple of years was realizing as
you start to do this proper analysis, there is a standard model called FAIR, Factor Analysis of
Information Risk. It's super helpful. But as you start to
actually calculate out, in dollars, what is at risk, more often than not it shows you have things that are over-controlled, that you're spending too much on protecting. I would have expected it to always show how many more controls you need to add, but just as many times it shows you're way over-controlling this stuff.
It's not actually
reducing your exposure at all; let's simplify things. And that becomes anathema. The problem is that
effective security personnel and compliance personnel understand that there's a limit of
what they can ask for and what they can't. Take the naive approach of locking down everything: Captain Edge Case Security and the rest will only ever communicate via Signal.
They run Linux only on hardware that
they control. They don't use anything that's made in the last five years because they want to lock
it all down themselves. Everything's encrypted. And you talk to them, cool, how do I email you?
And the answer is, well, first you have to install GPG. And it goes down this entire list of making
them almost irrelevant to any conversation. I think everyone who's worked in this space long
enough knows at least five people from their own personal histories that that description could refer to. And I get it.
I love the idea that you can go that deep. I'm not working for the NSA. I send out a snarky,
sarcastic newsletter. And for my personal use case, the dangerous access that I have that is
gated by all the stuff you would expect starts and stops with access to my clients' AWS bills. And I keep a minimum of those that I need to do my job. And then,
novel idea, I get rid of them when I'm done. So the window for exploit is relatively small
for what I do, and it doesn't get you much. That does not mean that I could pass any of
these compliance regimes today, but I don't need to.
If I were to go down that path of building out everything that I do for my entire business
across everything is in a compliant way, I would pay for that with an awful lot of velocity.
And for what I do, the risk does not justify taking that level of care and diligence.
There very well may come a day where that changes. But
today, I do what makes sense for the risk profile that I live within. The danger comes in is if that
risk profile changes, and I don't notice or take appropriate steps when that happens.
Yeah, there's a built-in sort of tension between trying to prevent a data breach and another responsibility that folks in these environments have, which is data availability, making sure you still have the data.
And so there's sort of a funny failure mode in encrypted backups and all this encryption
everywhere, which is if you don't have the keys, you can't get the data. It's gone forever.
And so you also have the risk of availability loss,
which can lead to fines and judgments and lost business and all of that stuff.
And so you're absolutely right. All of it needs to be balanced. There is a spectrum,
a range of choices. Some of those choices are going to be unique to your business,
but then some of them, you talked, just you talked about that bar are just
available to you in the cloud. And that's really cool. Surfacing a lot of these decisions up to
the appropriate level is also something that tends to be overlooked at times where it becomes very
easy for an individual contributor who's configuring something to make one of these
decisions on the fly. And that works in small environments that are not particularly
regulated. That goes away really quickly once you start having to be responsible for that for other
folks. The last thing in the world a CISO wants is being told of a security posture problem that
someone randomly decided in the dark of night three years ago and no one ever revisited. So
it comes down to also understanding the organizational requirements.
Yeah. And this is starting to bubble up even on the agenda for board members who are responsible
at the highest levels for oversight and governance of an organization. They're certainly not making
decisions about how to protect things, but they want to know. They really want to know from the
CISO and from the rest of the staff, how are we on
cybersecurity? How are we compared to where we were six months ago? How much should we be spending on
it? And referencing that insurance stuff again, it's really important, I think, that folks working
around this in the tech industry learn the techniques for quantitative risk analysis.
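To give a flavor of what that quantitative exercise looks like, here is a minimal FAIR-style Monte Carlo sketch; the frequency and magnitude ranges are invented for illustration and are not drawn from any real assessment or from the full FAIR methodology:

```python
import random
import statistics

random.seed(42)

def simulate_annual_loss() -> float:
    # Loss event frequency: how many qualifying incidents happen this year,
    # modeled with a triangular estimate (min, max, most likely).
    events = round(random.triangular(0, 4, 1))
    # Loss magnitude per event, also a triangular estimate in dollars.
    return sum(random.triangular(5_000, 250_000, 40_000) for _ in range(events))

losses = sorted(simulate_annual_loss() for _ in range(10_000))

print(f"Median annual loss exposure: ${statistics.median(losses):,.0f}")
print(f"95th percentile ('bad year'): ${losses[int(0.95 * len(losses))]:,.0f}")
# A proposed control can then be judged by how much it shifts these numbers
# relative to what it costs, expressed in dollars a board can reason about.
```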
There's a nonprofit trade organization
called SIRA, the Society of Information Risk Analysts. I volunteer and help them. And this is something
that other industries have been dealing with operational risk for decades and much longer.
And there are techniques that are well understood that we can directly apply to cybersecurity risk
and sort of express those issues, those trade-offs that
we're facing in terms of a business case, in terms of dollars at risk, in terms of how much it would
cost to reduce a certain amount of risk. And that is something that everybody at the board meeting
can understand. When I was going through my own business insurance process on my side, I was asked
what my DR strategy was. And the honest answer was, cool, if the internet
or power goes out at my house, I'll go work from a coffee shop. And this led to a back and forth
where they wanted to know, okay, well, what if your data center goes offline? Well, I keep everything
inside of AWS for what I'm working on. So that isn't really a concern. Okay, what if they lose
an entire region? Well, permanently, then I'm really not worried very
much because first, most of what I'm building is replicable. And secondly, I'll be too busy
printing money from people who did not plan for this and have serious business concerns there.
And suddenly I'm charging 10 times what I used to to help get sites back online. And in the event
of a world-shaking event that is almost cataclysmic in nature, past a certain point, yeah,
my DR plan doesn't matter anymore.
Well, what happens if something happens to you?
I'm an independent business owner.
My business closes.
The end.
That is the nature of what I do.
I'm not necessarily building something here to outlive me.
So sorry, folks.
The podcast and the newsletter
and even the cost consultancy go away if I get hit by a truck. My apologies in advance. Yeah. And that's just a
level of risk that's appropriate for a small business, right? That we're not going to defend
against those. You just accept them. What's also strange too, is when you hear people talking about
this from a business continuity perspective, the question is always, what if you get hit by a truck
instead of the much more likely scenario of what happens when you walk in and give your two weeks notice because
you're changing jobs? We understand that you're looking at an 18 to 36 month average tenure for
most people in the tech space, but we still talk about now magically you're going to stay at the
company that you're at now until you retire with a gold pocket watch in 25 years. That doesn't happen anymore. And instead, we either turn into a lottery winner where there's this great,
amazing thing happening or something horrifying and you get hit by the bus,
as opposed to you leave and go on to your next job, as is natural in the cycle of things.
It winds up being an edge of disaster recovery and business continuity planning that I always
found to be farcical. I had the same problem when I was asked, okay, so we have a site that's an hour away. So what's
your plan to get there in the event that the city is in chaos? And the answer was a very honest,
I'm going to be taking care of my family. And then they go down the list of, okay,
your family's okay, but now you want to do work and the internet is broken.
Cool. And you somehow think that in that scenario, San Francisco
is going to be intact and or I'm still going to be working here rather than printing money
from everyone else who's willing to pay me multiples of my salary. And suddenly I wasn't
invited to those meetings anymore. Yeah, it's one of those cases like we were saying earlier where
you just really have to think about, okay, what are the actual costs of not showing up to work for a week, and then what would it cost to make it so that we could show up to work for that week? And as soon as you see that it's going way out of whack, that the costs are far exceeding the value you're actually trying to preserve, why keep belaboring the point? They should be just
cutting that conversation short. Yeah. And it just winds up being something that sort of
only exists in a very niche scenario. And the disaster you plan for, by the way,
is never the disaster that hits. It's always going to be something new and exciting and complicated.
Right. So thank you so much for taking the time to speak with me.
This has been fun. If people want to learn more about the nonsense that you, and I once upon a time, do for a living, where can they wind up learning more?
Check out our website, kindlyops.com. We have a knowledge base there, which has some free words and some free software around risk analysis and how to think about these things, maybe how to get people off your back at work a little bit, if you're having to deal with it. Yeah, so kindlyops.com.
I would absolutely endorse the stuff you folks do. In the past, when I've had weird compliance
questions, you were generally my first stop. That is not a paid endorsement. That is simply
the reality that you are better at this than I am, and I don't want to do it.
Well, thank you very much.
Thank you so much once again for your time. I appreciate it. Elliot Murphy, Kindly Ops. I'm Corey Quinn,
and this is Screaming in the Cloud. This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.