Screaming in the Cloud - Exposing Vulnerabilities in the World of Cloud Security with Tim Gonda
Episode Date: January 10, 2023

About Tim

Tim Gonda is a Cloud Security professional who has spent the last eight years securing and building Cloud workloads for commercial, non-profit, government, and national defense organizations. Tim currently serves as the Technical Director of Cloud at Praetorian, influencing the direction of its offensive-security-focused Cloud Security practice and the Cloud features of Praetorian's flagship product, Chariot. He considers himself lucky to have the privilege of working with the talented cyber operators at Praetorian and considers it the highlight of his career. Tim is highly passionate about helping organizations fix Cloud Security problems as they are found, the first time, and most importantly, the People/Process/Technology challenges that cause them in the first place. In his spare time, he embarks on adventures with his wife and ensures that their two feline bundles of joy have the best playtime and dining experiences possible.

Links Referenced:
Praetorian: https://www.praetorian.com/
LinkedIn: https://www.linkedin.com/in/timgondajr/
Praetorian Blog: https://www.praetorian.com/blog/
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at Thinkst Canary.
Most folks find out way too late that they've been breached.
Thinkst Canary changes this.
Deploy Canaries and Canary tokens in minutes and then forget about them.
Attackers tip their hand by touching them, giving you one alert when it matters.
With zero administrative overhead and almost no false positives,
Canaries are deployed and loved on all seven continents.
Check out what people are saying at canary.love today.
Kentik provides cloud and NetOps teams with complete visibility into hybrid and multi-cloud
networks. Ensure an amazing customer experience, reduce cloud and network costs, and optimize
performance at scale from internet to data center to container to cloud. Learn how you can get
control of complex cloud networks at www.kentik.com and see why companies like Zoom, Twitch, New
Relic, Box, eBay, Viasat, GoDaddy, Booking.com, and many, many more choose Kentik as their network
observability platform. Welcome to Screaming in the Cloud, I'm Corey Quinn. Every once in a while, I like to branch out into new
and exciting territory that I've never visited before. But today, no, I'd much rather go back
to complaining about cloud security, something that I tend to do an awful lot about. Here to
do it with me is Tim Gonda, Technical Director of Cloud at Praetorian. Tim, thank you for joining me on this sojourn down what feels like an increasingly well-worn path.
Thank you, Corey, for having me today.
So you are the Technical Director of Cloud, which I'm sort of shorthanding to,
okay, everything that happens on the computer is henceforth going to be your fault.
How accurate is that in the grand scheme of things?
It's not too far off, but we like to call it a Praetorian nebula. The nebula meaning that
it's Schrödinger's problem. It both is and is not the problem. Here's why. We have a couple key
focuses at Praetorian, some of them focusing on more traditional pen testing, where we're looking
at hardware, hit system A, hit system B, branch out, get to goal.
And the other side, we have hitting web applications and moving, say, this insecure app leads to this XYZ vulnerability or this medical appliance is insecure and therefore we're able to do XYZ item.
One of the things that frequently comes up is that more and more organizations are no longer putting their
applications or infrastructure on-prem anymore. So therefore, some part of the assessment ends
up being in the cloud. And that is the unique rub that I'm in, in that I'm responsible for
leading the direction of the cloud security focus group, who may not dive into a specific specialty
that some of these other teams might dig into,
but may have similar responsibilities or similar engagement style.
And in this case, if we discover something in the cloud as an issue,
or even in your own organization where you have a cloud security team,
you'll have a web application security team, you'll have your core information security team
that defends your environment in many different methods, many different means,
you'll frequently find that the cloud security team is the hot button for, hey, this server was misconfigured at one certain level.
However, the cloud security team didn't quite know that this web application was vulnerable.
We did know that it was exposed to the Internet, but we can't necessarily turn off all web applications from the internet because that would no longer serve the purpose of a web application.
And we also may not know that a particular underlying host's patch is out of date, yet cloud security will be right there alongside the incident responder saying, yep, this host is here,
it's exposed to the internet via here, and it might have the following application on it.
And they get cross exposure with other teams that say, hey, your web application is vulnerable.
We didn't quite inform the cloud security team about it. Otherwise, this wouldn't be allowed
to go to the public internet. Or on the infrastructure side, yeah, we did know that
there was a patch underneath it. We figured that we would let the team handle it at a later date.
And therefore, this is also vulnerable. And what ends up happening sometimes is that the cloud
security team might bear the onus, or might be the hot button in the room, of saying, hey, it's broken.
This is now your problem. Please fix it with changing cloud configurations or directing a team to make
this change on our behalf. So in essence, sometimes cloud becomes it both is and is not your problem
when a system is either vulnerable or exposed or at some point, worst case scenario, ends up being
breached and you're performing incident response. That's one of the cases why it's important to know or important to involve others in
the cloud security problem or to be very specific about what the role of a cloud security team
is or where cloud security has to have certain boundaries or has to involve certain extra
parties to be involved in the process.
Or when it does its own threat modeling process, it has to say, okay, we have to
take a look at certain cloud findings, or findings that are within our security realm, and say that
these misconfigurations or these items, we have to treat the underlying components as if they
are vulnerable, whether or not they are. And we have to report on them as if they are vulnerable,
even if it means that a certain component of the infrastructure already has to be assumed to either have a vulnerability or have some sort of misconfiguration that allows
an outside attacker to execute attacks against whatever this entity is, and we have to adjust our
security posture and respond accordingly. One of the problems that I keep running into,
and I swear it's not intentional,
but people would be forgiven for understanding
or believing otherwise,
is that I will periodically, inadvertently
point out security problems via Twitter.
And that was never my intention,
because, huh, that's funny,
this thing isn't working the way
that I would expect that it would,
or I'm seeing something weird in the logs
in my test account.
What is that?
And, oh, you found a security vulnerability
or something akin to one in our environment.
Oops, next time, just reach out to us
directly at the security contact form.
That's great if I'd known I was stumbling blindly into a security issue, but
it feels like the discovery of these things is not heralded by an, aha, I found it, but a,
huh, that's funny. Of course, actually, and that's where some of the best vulnerabilities
come from, where you accidentally stumble on something that says, wait, does this work how I think it does?
Click, click. Oh boy, it does. Now, I will admit that certain cloud providers
are really great about the proactive security reach outs
if you either just file a ticket
or file some other form of notification,
just even flag your account rep and say,
hey, when I was working on this particular cloud environment,
the following occurred.
Does this work the way I think it does?
Is this a problem?
And they usually get back to you
with reporting it to their internal team,
so on and so forth.
But with, say, applications or open source frameworks
or even just organizations at large
where you might have stumbled upon something,
the best thing to do is either look up,
do they have a public bug bounty program?
Do they have a security contact or form reach out
that you can email them?
Or do you know someone at the organization
that you just send a quick email saying,
hey, I found this.
And through some combination of those
is usually the best way to go.
And to be able to provide context
to the organization being,
hey, the following exists.
And the most important thing to consider
when you're sending this sort of information
is that they get these sorts of emails almost daily.
One of my favorite genres of tweet
is when Tavis Ormandy at Google's Project
Zero winds up doing a tweet like, hey, do I know anyone over at the security apparatus of insert
company here? It's like, all right, I'm sure people are shorting stocks now based upon whatever
he winds up finding. Of course. It's kind of fun to watch. But there's no cohesive way of getting
in touch with companies on these things, because as soon as you do something like that, it feels like you're not just letting the subject company know, you're also almost giving a green light where other security
researchers are going to go dive in on this and see, like, one, does this work how you described?
But that actually is a positive thing at some points where either you're unable to get the
company's attention, or maybe it's an open source organization, or maybe you're not fully sure
that something is actually the case.
However, when you do submit something to a customer and you want them to take it seriously, here's a couple of key things that you should consider.
One, provide evidence that whatever you're talking about has actually occurred.
Two, provide repeatable steps, in layman's terms, that even an IT support person can attempt to follow, so that they can reproduce
the same vulnerability or repeat the same security condition. And three, most importantly,
detail why this matters. Is this something where I could adjust a user's password? Is this something
where I can extract data? Is this something where I'm able to extract content from your website I
otherwise shouldn't be able to? And that's important for the following reason.
You need to inform the business of the financial reason why leaving this unpatched
becomes an issue for them.
And if you do that, that's how those security vulnerabilities get prioritized.
It's not necessarily because the coolest vulnerability exists.
It's because it costs the company money.
And therefore, the security team is going to immediately jump on it and try to contain it before it costs them any more.
One of my least favorite genres of security report are the ones that I get that say, I found a vulnerability.
It's like, that's interesting.
I wasn't aware that I ran any public-facing services, but all right, I'm game.
What have you got?
And it's usually something along the lines of, you haven't enabled SPF to hard fail any email that doesn't wind up originating explicitly from this list of IP addresses.
Bug bounty, please.
And it's, no, genius,
that is very much an intentional choice.
Thank you for playing.
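(For context on what that kind of report is actually flagging, here's a minimal sketch, assuming Python 3 with the dnspython package installed, that pulls a domain's SPF record and reports whether it ends in a hard-fail or soft-fail qualifier. The domain is a placeholder, not one discussed on the show.)

```python
# Minimal sketch, assuming the dnspython package is installed
# (pip install dnspython). It fetches a domain's SPF TXT record and reports
# whether it ends with a hard-fail (-all) or soft-fail (~all) qualifier:
# the configuration detail these low-effort reports tend to flag.
import dns.resolver

def spf_policy(domain: str) -> str:
    answers = dns.resolver.resolve(domain, "TXT")
    for record in answers:
        txt = b"".join(record.strings).decode()
        if not txt.startswith("v=spf1"):
            continue
        if txt.endswith("-all"):
            return "hard fail: mail from unlisted senders is rejected"
        if txt.endswith("~all"):
            return "soft fail: mail from unlisted senders is accepted but flagged"
        return "neutral or no 'all' mechanism: little SPF enforcement"
    return "no SPF record published"

if __name__ == "__main__":
    print(spf_policy("example.com"))  # placeholder domain
```

As the exchange above suggests, a soft-fail configuration is often a deliberate choice rather than a vulnerability.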
It comes down to also an idea of, whenever I have reported security vulnerabilities in the past, the pattern I always take is I'm seeing something that I don't fully understand.
I suspect this might have security implications, but I'm also more than willing to be proven wrong.
Because showing up with you folks are idiots and have a security problem is a terrific invitation to be proven wrong and look like an idiot.
Because the first time you get that wrong, no one will take you seriously again. Of course. And as you'll find with most bug bounty programs, or if you
participate in those, the first couple that you might have submitted, the customer might even tell
you, yeah, we're aware that that vulnerability exists. However, we don't view it as a core issue
and it cannot affect the functionality of our site in any meaningful way. Therefore, we're electing
to ignore it. Fair. Very fair. But then when people write up about those things, well, they've decided this is not an issue, so I'm going to do a write-up on it.
You can't do that. The NDA doesn't let you expose that. Really? Because you just said it's a non-issue.
Which is it? And the key to that, I guess, would also be: is there underlying technology that
doesn't necessarily have to be attributed to said organization? Can you also say that
if I provide a write-up
or if I provide my own personal blog post,
let's say we go back to some of the OpenSSL vulnerabilities,
including OpenSSL 3.0 that came out not too long ago,
but since that's an open source project, it's fair game.
Let's just say that if there was a technology such as that,
or maybe there's a wrapper around it
that another organization could be using
or could be implementing in a certain way,
you don't necessarily have to call the company out by name
or rather just say, here's the core technology reason,
and here's the core technology risk,
and here's the way I've demoed exploiting this.
And if you publish an open source blog like that,
and then you tweet about that,
you can actually gain security community support around such an issue
and invite further research.
An example would be that I know a couple of pen testers
who have reported things in the past.
And while the first time they reported it,
the company was like, yeah, we'll fix it eventually.
But later when another researcher reported
the exact same finding, the company's like,
we should probably take this seriously and jump on it.
And sometimes it's just getting in front of that
and providing frequency or providing enough people around to say that,
hey, this really is an issue in the security community
and we should probably fix this item
and keep pushing those organizations on it.
A lot of times they just need additional feedback
because as you said,
somebody runs an automated scanner against your email
and says that, oh, you're not checking SPF
as strictly as the scanner would have liked
because it's a benchmarking tool. It's not necessarily a security vulnerability; rather,
it's just how you've chosen to configure something. And if it works for you, it works for you.
How does cloud change this? Because a lot of what we're talking about so far could apply to
anything, go back in time to 1995, and a lot of what we're talking about mostly holds true.
It feels like cloud acts as a significant level of complexity on top of all of this.
How do you view the differentiation there?
So I view the differentiation in two things. One, certain services or certain vulnerability classes
that are handled by the shared service model
for the most part are probably secured better
than you might be able to do yourself.
Just because there's a lot of research,
the provider's team has spent a lot of time on this.
An example: if there's a particular network spoofing
or network interception vulnerability
that you might see on a local LAN network,
you probably are not gonna have the same level of access
to be able to execute that on a virtual private cloud or VNet or some
other virtual network within a cloud environment. Now, something that does change with the paradigm
of cloud is the fact that if you accidentally publicly expose something or something that
you've created or don't set a setting to be private or only specific to your
resources, there's a couple things that could happen. The vulnerability's exploitability increases based
on where it's exposed. It used to be just, hey, I left a port open on my own network.
Somebody from HR or somebody from IT could possibly interact with it. However, in the cloud, you've now exposed this to the entire world, to people that might
have resources or motivations to go after this product.
Using services like Shodan,
which are continually mapping the internet for open resources,
they can quickly grab that and say,
okay, I'm going to try these targets today.
Might continue to poke a little bit further,
maybe an internal person that might be bored at work or a pen tester just on one specific
engagement, especially in the case of, let's say, what you're working on has sparked the
interest of a nation state and they want to dig into it a little bit further. They have the resources
to be able to dedicate time, people, and maybe tools and tactics against whatever this vulnerability is. Take
the earlier example where maybe there's a specific ID in a URL that just needs to be
guessed right to give them access to something. They might spend the time trying to brute force
that URL, brute force that value, and eventually try to go after what you have. The main paradigm
shift here is that there are certain things that we might consider less of a priority because the cloud has already taken care of them with the shared service model, and rightfully so.
And there's other times that we have to take heightened awareness, and that's when we either expose something to the entire internet or to all cloud accounts in creation.
And that's actually something that we see commonly. In fact, one thing I would say we see very commonly is
all AWS users, regardless if it's in your account or somewhere else,
might have access to your SNS topic or SQS queue,
which doesn't seem like that big a vulnerability.
So what? I can change the messages, I can delete messages, I can view your messages.
But rather, what's connected to those?
Let's talk AWS Lambda functions, where I've got source code
that a developer has written to handle those messages
and may not have built in logic to handle unexpected input.
Maybe there was a piece of code that could be abused
through this message, that might allow an attacker
to send something to your Lambda function
and then execute something on that attacker's behalf.
You weren't aware of it, you weren't thinking about it, and now you've exposed it to almost
the entire internet. And since anyone can go sign up for an AWS account or Azure or GCP account,
they're then able to start poking at that same piece of code that you might have developed
thinking, well, this is just for internal use. It's not a big deal. That one static code analysis
tool finding probably isn't too relevant.
Now it becomes hyper-relevant
and something you have to consider
with a little more attention
and dedicated time
to making sure that these things
that you've written or are deploying
are in fact safe
because misconfigured or misexposed,
suddenly the entire world
starts knocking at it, and the risk that it may
very well be a problem increases. The severity of that issue could increase dramatically.
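(To make Tim's SQS example concrete: the misconfiguration he describes typically shows up as a queue resource policy whose principal is a wildcard. The following is a hypothetical sketch using boto3; the queue URL, ARN, and account ID are placeholders, not anything from the episode.)

```python
# Hypothetical sketch of the misconfiguration described above: an SQS queue
# whose resource policy lets any AWS principal send messages, next to a
# version scoped to a single account. Queue URL/ARN and the account ID are
# placeholders; assumes boto3 is installed and AWS credentials are configured.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/example-queue"  # placeholder
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:example-queue"                # placeholder

# Overly permissive: Principal "*" means any AWS account in existence can
# push messages that a downstream Lambda function will happily process.
too_open = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
    }],
}

# Scoped down: only principals in your own account may send.
scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
    }],
}

sqs = boto3.client("sqs")
sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(scoped)})
```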
As you take a look across, let's call it the hyperscale clouds, the big three,
which presumably I don't need to define out. How do you wind up ranking
them in terms of security from top to bottom? I have my own rankings that I like to dole out.
And basically, let's offend someone at every one of these companies, no matter how we wind up
playing it out, because I'll argue with you just on principle on them. How do you view them stacking
up against each other? So an interesting view on that is based on who's been around longest
and who has encountered the most technical debt. A lot of these security vulnerabilities or
security concerns may have had to do with a decision made long ago that might have made
sense at the time. And now the company is kind of stuck with that particular technology or decision
or framework and are now having to build or apply security band-aids to that process until it gets resolved.
I would say, ironically, AWS is actually at the top of having that technical debt and
actually has so many different types of access policies that are very complex to configure
and not very user-intuitive unless you intuitively speak JSON or YAML or some other markup language
to be able to tell you whether or not something was actually set up correctly.
Now, there are a lot of security experts who make their money based on knowing how to configure
or be able to assess whether or not these are actually the issue.
I would actually rank them, by default and by design, on the lower end of the big three
based on complexity and
ease of configuration. The next one that would also go into that pile, I would say, is probably
Microsoft Azure, who admittedly decided to say that, okay, let's take something that was very
complicated and everyone really loved to use as an identity provider, Active Directory, and try to
use that as a model, even though they made it extensively different,
and it is not the same as on-prem Active Directory,
as the framework for how people wanted
to configure their identity provider
for a new cloud provider.
The one that actually, I would say, comes out on top,
just based on use and based on complexity,
might be Google Cloud.
They came to a lot of these security features first.
They're acquiring new companies on a regular basis with the acquisition of Mandiant, the
creation of their own security tooling, their own unique security approaches.
In fact, they probably wrote the book on Kubernetes security. They would be on top for, I guess,
usability, such as saying that I don't want to have to manage all these different types
of policies.
Here are some buttons I would like to flip, I'd like my resources for the most part by default
to be configured correctly.
And Google does a pretty good job of that.
Also, one of the things they do really well
is entity-based role assumption,
whereas inside of AWS, you might provide access keys by default
or I have to provide a role ID,
or in Azure, I'm going to say,
here's an RBAC policy for something specific that I want to
grant access to a specific resource.
Google does a pretty good job of saying that, okay,
everything is treated as an email address.
This email address can be associated in a couple of different ways.
It can be given the following permissions.
It can have access to the following things.
But for example, if I want to remove access to something,
I just take that email address off of whatever access policy I had somewhere, and then it's taken care of.
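(A conceptual sketch of the email-centric model Tim is describing: the dict below mirrors the general shape of a GCP IAM policy, bindings of a role to member identities written as email-like strings, and revoking access is just removing that string. This is illustrative plain Python, not a real API call; the names and role are made up.)

```python
# Conceptual, plain-Python sketch of the email-centric model described above.
# The dict mirrors the general shape of a GCP IAM policy: bindings of a role
# to member identities written as email-like strings. Revoking access is just
# removing that string from every binding. Illustrative only, not a real API
# call; the names and role here are made up.
iam_policy = {
    "bindings": [
        {
            "role": "roles/storage.objectViewer",
            "members": [
                "user:alice@example.com",                                 # a human
                "serviceAccount:app@my-project.iam.gserviceaccount.com",  # a workload
            ],
        },
    ],
}

def revoke(policy: dict, member: str) -> None:
    """Remove one identity everywhere it appears; its access is gone."""
    for binding in policy["bindings"]:
        if member in binding["members"]:
            binding["members"].remove(member)

revoke(iam_policy, "user:alice@example.com")
print(iam_policy)  # alice no longer appears in any binding
```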
But they do have some other items, such as their design around least privilege, which is something to keep in mind when
you consider the hierarchy. I'm not going to say that they're without fault in that area:
until something more recent, as far as certain key pieces like, say, tags or something within a specific subproject in their hierarchy,
there were cases where you might have granted access at a higher level, and that same level of access came all the way down, where least privilege is required to be enforced.
Otherwise, you break their security model.
So I like them for how simple
it is to set up security at times. However, they've also made it unnecessarily complex at
other times, so they don't have the flexibility that the other cloud service providers have.
On the flip side of that, the level of flexibility also leads to complexity at times, which I also
view as a problem where customers think they've done something correctly based on the best of their knowledge, the best of the documentation, the
best of the Medium articles they've been researching.
And what they have done is they've inadvertently made assumptions and baked core anti-patterns
into what they've deployed.
This episode is sponsored in part by our friends at Uptycs, because they believe that many
of you are looking to bolster your security posture with
CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model.
Listeners can get Uptycs for up to a thousand assets through the end of 2023 for one dollar.
But this offer is only available for a limited time on uptycssecretmenu.com. That's U-P-T-Y-C-S secretmenu.com.
I think you're onto something here,
specifically in how I've been asked, historically and personally, to rank security.
I have viewed Google Cloud as number one and AWS as number two.
And my reasoning behind that has been that
from an absolute security of their platform and a
pure, let's call it math perspective. It really comes down to which of the two of them had what
for breakfast on any given day. They're so close on there. But in a project that I spin up in Google
Cloud, everything inside of it can talk to each other by default, and I can scope that down
relatively easily. Whereas over in AWS land, by default, nothing can talk to anything.
And that means that every permission needs to be explicitly granted,
which in an absolutist sense and in a vacuum, yeah, that makes sense.
But here in reality, people don't do that.
We've seen a number of AWS blog posts over the last 15 years,
and they don't do this anymore, but it started off with,
oh yeah, we're just going to grant star on star
for the purposes of this demo.
Well, that's horrible.
Why would you do that?
Well, if we wanted to specify the IAM policy,
it would take up the first third of the blog post.
How about that?
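(As an illustration of the "star on star" demo policy Corey is describing versus a scoped-down one: the sketch below is hypothetical, and the actions and bucket ARN are made-up placeholders, not taken from any actual AWS blog post.)

```python
# Made-up illustration of a "star on star" demo policy versus a scoped-down
# statement. The actions and bucket ARN are placeholders for whatever the
# workload actually needs.
import json

star_on_star = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",      # every API call in every service
        "Resource": "*",    # against every resource in the account
    }],
}

scoped_down = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],         # only the calls the app needs
        "Resource": "arn:aws:s3:::example-demo-bucket/*",   # only where it needs them
    }],
}

print(json.dumps(scoped_down, indent=2))
```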
Because customers go through that exact same thing.
I'm trying to build something and ship.
I mean, the biggest lie in any environment
or any code base ever is the comment that starts with TODO. Yeah, that is load bearing. You will
retire with that TODO still exactly where it is. You have to make doing things the right way
at least the least frictionful path, because no one is ever going to come back and fix this
after the fact. It's never going to happen as much as we wish that it did.
At least till after the week of the breach,
when it was highlighted by the security team to say that,
hey, this was the core issue, then it'll be fixed, really,
in short order, usually.
Or a Band-Aid is applied to say that this can no longer be exploited
in this specific way again.
My personal favorite thing that, I wouldn't say it's a lie,
but the favorite thing that I see in all of these announcements right after the,
your security is very important to us, right after it very clearly has not been sufficiently important to them.
And they say, we show no signs of this data being accessed.
Well, that can mean a couple different things. It can mean
we have looked through the audit logs for a service going back to its launch and have verified
that nothing has ever done this except the security researcher who found it. Great. Or it can mean
what even are logs exactly? We're just going to close our eyes and assume things are great. No, no.
So one thing to consider there is that that entire communication has probably been vetted by the legal department to make sure that the company is not opening itself up for liability when something like that has occurred. Unless it can be proven that the breach was attributable to your user specifically,
the default response is we have determined that the security response of XYZ item
or XYZ organization has determined that your data was not at risk at any point during this incident,
which might be true, and we're quoting Star Wars on this one,
from a certain point of view. And unfortunately, in the case of a post-breach, their security,
at least from a regulation standpoint, where they might be facing a really large fine,
is absolutely probably their top priority at this very moment, but has not come to the surface
because, for most organizations,
until this becomes something that is a financial reason where they have to act,
where their reputation is on the line, they're not necessarily incentivized to fix it. They're
incentivized to push more products, push more features, keep their clients happy. And a lot
of the time, going back and saying, hey, we have this piece of technical debt, it doesn't really excite our user base or doesn't really help us gain a competitive edge in the market.
It's considered an afterthought until the crisis occurs and the information security team rejoices because this is the time they actually get to see their stuff fixed.
Even though it might be a super painful time for them in the short run because they get to see these things fixed. They get to see it put to bed. And if there's ever a happy medium where, hey, maybe there was
a legacy feature that wasn't being very well taken care of, or maybe this feature was also
causing the security team a lot of pain, we get to see both that feature, that item, that service
get better as well as security teams not have to be woken up on a regular basis because
XYZ incident happened or XYZ item keeps coming up in a vulnerability scan. If it finally is put to bed,
we consider that a win for all. And one thing to consider in security, as well as, kind of, when
we talk about the relationship between the developers and security and/or product managers and security, is if we can
make it a win-win-win situation for all, that's the happy path that we really want to be getting
to. If there's a way that we can make sure that experience is better for customers, the security
team doesn't have to be woken up on a regular basis because an incident happened, and the
developers receive less friction when they want to go implement something, you find that that secure feature, function, whatever,
tends to be the happy path forward
and the path of least resistance for everyone around it.
And those are sometimes the happiest stories
that can come out of some of these incidents.
It's weird to think of there being any happy stories
coming out of these things,
but it's definitely one of those areas
that there are learnings there to be had if we're willing to examine them.
The biggest problem I see so often is that so many companies just try and hide these things.
They give the minimum possible amount of information so the rest of us can't learn by it.
Honestly, some of the moments where I've gained the most respect for the technical prowess of some of these cloud providers
has been after there's been a security issue and they have
disclosed either their response or why it was a non-issue because they took a defense-in-depth
approach. It's really one of those transformative moments that I think is an opportunity if
companies are bold enough to chase them down. Absolutely. And in a similar vein, when we think
of certain cloud providers' outages, we're exposed to, like, the major core flaw of their design when it keeps happening. And again, these outages could be similar and analogous to what happened around 2017, 2018, where it turns out that there was a core DNS system inside of us-east-1, which is actually very close to where I live, that apparently was the core crux, raising the question of how do we not have a single point of failure, even if it is a very robust system, to make sure this doesn't happen again.
And there was a lot of learnings to be had, a lot of in-depth investigation that happened, probably a lot of development, a lot of research.
And sometimes on the outset of an incident, you really get to understand why a system was built a certain way or why a condition
exists in the first place. And it sometimes can be fascinating to kind of dig into that very deeper
and really understand what the core problem is. And now that we know it's an issue, we can actually
really work to address it. And sometimes that's actually one of the best parts about working at
Praetorian in some cases is that a lot of the items we find, we get to find them early
before it becomes one of these issues. But the most important thing is we get to learn so much
about like why a particular issue is such a big problem and get to really solve the core business
problem, or maybe even help inform, hey, this is an issue for reasons like this. However, this isn't
necessarily all bad, in that if you make these adjustments or these
items, you get to retain this really cool feature, this really cool thing that you built, but also
get to say, like, here's some extra added benefit to the customers that wasn't really there before.
And, as in the old adage of, it's not a bug, it's a feature, sometimes it's exactly what you
pointed out. It's not necessarily all bad in an incident.
It's also a learning experience.
Ideally, we can all learn from these things. I want to thank you for being so generous with your time and talk about how you view this increasingly complicated emerging space.
If people want to learn more, where's the best place to find you?
You can find me on LinkedIn, which should be included in this podcast description.
You can also go look at articles
that the team is putting together at Praetorian.com.
Unfortunately, I'm not very big on Twitter.
Oh, well, you must be so happy.
My God, what a better decision you're making
than the rest of us.
Well, I like to run a little bit under the radar,
except on opportunities like this
where I can talk about something
I'm truly passionate about,
but I try not to pollute the airwaves too much.
But LinkedIn's a great place to find me.
Praetorian blog for stuff the team is building.
And if anyone wants to reach out,
feel free to hit the contact page on praetorian.com.
That's one of the best places
to get my attention at the time.
And we will, of course, put links to that
in the show notes. Thank you so much
for your time. I appreciate it.
Absolutely.
Tim Gonda, Technical Director of Cloud at Praetorian. I'm cloud economist Corey Quinn,
and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star
review on your podcast platform of choice. Whereas if you've hated this podcast, please
leave a five-star review on your podcast platform of choice, along with an angry comment talking about how no one disagrees with you based upon a careful examination of your logs.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group.
We help companies fix their AWS bill
by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business
and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.