Screaming in the Cloud - Replay - Hacking AWS in Good Faith with Nick Frichette

Episode Date: December 26, 2024

On this Screaming in the Cloud Replay, we’re taking you back to our chat with Nick Frichette. He’s the maintainer of hackingthe.cloud, holds security and solutions architect AWS certifications, and in his spare time, he conducts vulnerability research at Hacking the Cloud. Join Corey and Nick as they talk about the various kinds of cloud security researchers and touch upon offensive security, why Nick decided to create Hacking the Cloud, how AWS lets security researchers conduct penetration testing in good faith, some of the more interesting AWS exploits Nick has discovered, how it’s fun to play keep-away with incident response, why you need to get legal approval before conducting penetration testing, and more.

Show Highlights
(0:00) Intro
(0:42) The Duckbill Group sponsor read
(1:15) What is a Cloud Security Researcher?
(3:49) Nick’s work with Hacking the Cloud
(5:24) Building relationships with cloud providers
(7:34) Nick’s security findings through cloud logs
(13:05) How Nick finds security flaws
(15:31) Reporting vulnerabilities to AWS and “bug bounty” programs
(18:41) The Duckbill Group sponsor read
(19:24) How to report vulnerabilities ethically
(21:52) Good disclosure programs vs. bad ones
(28:23) What’s next for Nick
(31:27) Where you can find more from Nick

About Nick Frichette
Nick Frichette is a Staff Security Researcher at Datadog, specializing in offensive security within AWS environments. His focus is on discovering new attack vectors targeting AWS services, environments, and applications. From his research, Nick develops detection methods and preventive measures to secure these systems. Nick’s work often leads to the discovery of vulnerabilities within AWS itself, and he collaborates closely with Amazon to ensure they are remediated. Nick has also presented his research at major industry conferences, including Black Hat USA, DEF CON, fwd:cloudsec, and others.

Links
Hacking the Cloud: https://hackingthe.cloud/
Determine the account ID that owned an S3 bucket vulnerability: https://hackingthe.cloud/aws/enumeration/account_id_from_s3_bucket/
Twitter: https://twitter.com/frichette_n
Personal website: https://frichetten.com

Original Episode
https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/hacking-aws-in-good-faith-with-nick-frichette/

Sponsor
The Duckbill Group: duckbillgroup.com

Transcript
Starting point is 00:00:00 From my own personal perspective, I always think it's best to contact the developers or the company or whoever maintains whatever you've found a vulnerability in. Welcome to Screaming in the Cloud. I'm Corey Quinn. I spend a lot of time throwing things at AWS in varying capacities. One area I don't spend a lot of time giving them grief is in the InfoSec world because as it turns out, they, and almost everyone else, doesn't have much of a sense of humor around things like security. My guest today is Nick Frichette, who's a penetration tester and
Starting point is 00:00:36 team lead for State Farm. Nick, thanks for joining me. Hey, thank you for inviting me on. This episode is sponsored in part by my day job, the Duck Bill Group. Do you have a horrifying AWS bill? That can mean a lot of things. Predicting what it's going to be, determining what it should be, negotiating your next long-term contract with AWS, or just figuring out why it increasingly resembles a phone number, but nobody seems to quite know why that is. To learn more, visit duckbillgroup.com. Remember, you can't duck the duck bill bill. And my CEO informs me that is absolutely not our slogan. So like most folks in InfoSec, you tend to have a bunch of different, I guess, titles or roles that hang on signs around someone's neck. And it all sort of distills down
Starting point is 00:01:24 on some level, in your case at least, and please correct me if I'm wrong, to cloud security researcher. Is that roughly correct or am I missing something fundamental? Yeah, so for my day job, I do penetration testing and that kind of puts me up against a variety of things
Starting point is 00:01:39 from web applications to client-side applications to sometimes the cloud. In my free time, though, I like to spend a lot of time on security research and most recently been focusing pretty heavily on AWS. So let's start at the very beginning. What is a cloud security researcher? What is it you'd say it is you do here, for lack of a better phrasing? Well, to be honest, the phrase security researcher or cloud security researcher has been kind of, I guess like watered down in recent years.
Starting point is 00:02:09 Everybody likes to call themselves a researcher in some way or another. You have some folks who participate in the bug bounty programs. So for example, GCP and Azure have their own bug bounties. AWS does not, I'm not too sure why. And so they want to find vulnerabilities with the intention of getting cash compensation for it. You have other folks who are interested in doing security research to try and better improve defenses and alerting and monitoring so that when the next major breach happens, they're prepared or they'll be able to stop it ahead of time. From what I do, I'm very interested in offensive
Starting point is 00:02:45 security research. So how can I, as a penetration tester or a red teamer, or I guess an actual criminal, how can I take advantage of AWS or try to avoid detection from services like GuardDuty and CloudTrail? So let's break that down a little bit further. I've heard the term of red team versus blue team used before. Red team presumably is the offensive security folks. And yes, some of those people are in fact quite offensive. And blue team is the defense side. In other words, keeping folks out. Is that a reasonable summation of the state of the world? It can be, yeah, especially when it comes to security. One of the nice parts, you know, about the whole InfoSec field, I know a lot of folks tend to kind of just say, like, oh, they're there to prevent the next breach.
Starting point is 00:03:28 But in reality, InfoSec has a ton of different niches and different job specialties. Blue teamers, quote unquote, tend to be the defense side working on ensuring that we can alert and monitor potential attacks, whereas red teamers or penetration testers tend to be the folks who are trying to do the actual exploitation or develop techniques to to do that in the future so you talk a bit about what you do for work obviously but what really drew my notice was stuff you do that isn't part of your core job as best i understand it you're focused on vulnerability research specifically with a strong emphasis on cloud exploitation, as you said, AWS in particular. And you're the founder of Hacking the Cloud, which is an open source encyclopedia of various attacks and techniques you can perform in cloud environments. Tell me about
Starting point is 00:04:16 that. Yeah, so Hacking the Cloud came out of a frustration I had when I was first getting into AWS, that there didn't seem to be a ton of good resources for offensive security professionals to get engaged in the cloud. By comparison, if you wanted to learn about web application hacking or attacking active directory or reverse engineering, if you have a credit card, I can point you in the right direction. But there just didn't seem to be a good course or introduction to how you as a penetration tester should attack AWS. There's things like, you know, open S3 buckets are a nightmare, or that server-side request forgery on an EC2 instance can result in your organization being fined very, very heavily. I kind of wanted to go deeper with that. And with Hacking the Cloud, I've tried to sort of gather a bunch of
Starting point is 00:05:10 offensive security research from various blog posts and conference talks into a single location so that both the offense side and the defense side can kind of learn from it and leverage that to either improve defenses or look for things that they can attack. It seems to me that doing things like that is not likely to wind up making a whole heck of a lot of friends over on the cloud provider side. Can you talk a little bit about how what you do is perceived by the companies you're focusing on? Yeah, so in terms of relationship, I don't really have too much of an idea of what they think. You know, I have done some research and written on my blog, as well as published to Hacking the Cloud, some techniques for doing things like abusing the SSM agent, as well as abusing the AWS API to enumerate permissions without logging to CloudTrail.
Starting point is 00:06:02 And ironically, through the power of IP addresses, I can see when folks from the Amazon corporate IP address space look at my blog. And that's always fun, especially when there's like four in the course of a couple of minutes or five or six. But I don't really know too much about what they or how they view it or if they think it's valuable at all. I hope they do, but really not too sure. I would imagine that they do on some level. But I guess the big question is, you know that someone doesn't like what you're doing when they send cease and desist notices or have the police knock on your door. I feel like at most levels, we're past that in an InfoSec level. At least I'd like to believe we are.
Starting point is 00:06:39 We don't hear about that happening all too often anymore. But what's your take on it? Yeah, I definitely agree. I definitely think we are beyond that. Most companies these days know that, you know, vulnerabilities are going to happen no matter how hard you try and how much money you spend. And so it's better to be accepting of that and open to it. And especially because the InfoSec community can be so, say, noisy at times, it's definitely worth it to, you know, pay attention, definitely be appreciative of the information that may come out.
Starting point is 00:07:09 EWS is pretty awesome to work with, having disclosed to them a couple times now. They have a safe harbor provision, which essentially says that so long as you're operating in good faith, you're allowed to do security testing. They do have some rules around that, but they are pretty clear in terms of if you were operating in good faith, you wouldn't be doing anything like that. It tends to be pretty obviously malicious things that they'll ask you to stop. So talk to me a little bit about what you've found lately and been public about. There have been a number of examples that have come up whenever people start Googling your name or looking at things you've done. But what's happening lately? What have you found that's interesting?
Starting point is 00:07:50 Yeah, so I think most recently, the thing that's kind of gotten the most attention has been a really interesting bug I found in the AWS API. Essentially, kind of the core of it is that when you are interacting with the API, obviously that gets logged to CloudTrail so long as it's compatible. So if you are interacting with the API, obviously that gets logged to CloudTrail so long as it's compatible. So if you are successful, say you want to do like secrets manager list secrets,
Starting point is 00:08:11 that shows up in CloudTrail. And similarly, if you do not have that permission on a role or user and you try to do it, that access denied also gets logged to CloudTrail. Something kind of interesting that I found is that by manually modifying a request or malforming them, what we can do is we can modify the content type header. And as a result, when you do that, you can provide literally gibberish.
Starting point is 00:08:34 I think of a VS code window here somewhere with a content type of meow. When you do that, the AWS API knows the action that you're trying to call. But because of that messed up content type, it doesn't know exactly what you're trying to do. And as a result, it doesn't get logged to CloudTrail. Now, while that may seem kind of weirdly specific and not really like a concern, the nice part of it, though, is that for some API actions, somewhere in the neighborhood of 600, I say in the neighborhood of just because it fluctuates over time. As a result of that, you can tell if you have that permission or if you don't without that being logged to CloudTrail. And so we can do this enumeration of permissions without somebody in the defense side seeing us do it, which is pretty awesome from an offensive security perspective. On some level, it would be easy to say, well, just not showing up in the logs isn't really a security problem at all. I'm going to guess that you disagree. little money on the side, and you're okay with perhaps, you know, committing some crimes to do
Starting point is 00:09:45 it. Through some means, you get access to a company's AWS credentials for some particular role, whether that's through remote code execution on an EC2 instance, or maybe you find them in an open location like an S3 bucket or a Git repository, or maybe you phish a developer. Through some means, you have an access key and a secret access key. The new problem that you have is that you don't know what those credentials are associated with or what permissions they have. They could be the root account keys or they could be, you know, literally locked down to a single S3 bucket to read from. It all just kind of depends. Now, historically, your options for figuring that out are kind of limited. Your best bet would be to brute force the AWS API using a tool like Paku or my personal favorite, which is Enumerate IEM by Andres Rancho.
Starting point is 00:10:34 And what that does is it just tries a bunch of API calls and sees which one works and which one doesn't. And if it works, you clearly know that you have that permission. Now, the problem with that, though, is that if you were to do that, that's going to light up CloudTrail like a Christmas tree. It's going to start showing all these access denies for these various API calls that you've tried. And obviously, any defender who's paying attention is going to look at that and go, okay, that's suspicious, and you're going to get shut down pretty quickly. What's nice about this bug that I found is that instead of having to litter CloudTrail with all these logs, we can just do this enumeration for roughly 600-ish
Starting point is 00:11:11 API actions across roughly 40 AWS services, and nobody is the wiser. You can enumerate those permissions, and if they work, fantastic, and you can then use them. And if you come to find you don't have any of those 600 permissions, okay, then you can decide on where to go from there or maybe try to risk things showing up in CloudTrail. CloudTrail is one of those services that I find incredibly useful, or at least I do in theory.
Starting point is 00:11:35 In practice, it seems that things don't show up there and you don't realize that those types of activities are not being recorded until one day there's an announcement of, hey, that type of activity is now recorded. As of the time of this recording, the most recent example of that in memory is data plane requests to DynamoDB. It's, wait a minute, you mean that wasn't being recorded previously? Huh, I guess it makes sense, but oh dear, and that causes a re-evaluation of what's happening from a security policy and posture perspective for some clients.
Starting point is 00:12:05 There's also, of course, the challenge that CloudTrail logs take a significant amount of time to show up. It used to be over 20 minutes. I believe now it's closer to 15, but don't quote me on that, obviously. Run your own tests, which seems awfully slow for anything that's going to be looking at those in an automated fashion and taking a reactive or remediation approach to things that show up there. Am I missing something key? No, I think that is pretty spot on. And believe me, I am fully aware of how long CloudTrail takes to populate, especially with
Starting point is 00:12:34 doing a bunch of research on what is and what is not logged to CloudTrail. I know that there are some operations that can be logged more quickly than the 15-minute average. Off the top of my head, though, I actually don't quite remember what those are. But you're right. In general, the majority, at least, do take quite a while. And that's definitely time in which an adversary or someone like me could maybe take advantage of that 15-minute window to try and brute force those permissions, see what we have access to, and then try to operate and get out with whatever
Starting point is 00:13:03 goodies we've managed to steal. Let's say that you're doing the thing that you do, however that comes to be. And I am curious, actually, we'll start there. I am curious, how do you discover these things? Is it looking at what is presented and then figuring out, huh, how can I wind up subverting the system it's based on? And similar to the way that I take a look at any random AWS services and try and figure out how to use it as a database. How do you find these things? Yeah, so to be honest, it all kind of depends. Sometimes it's completely by accident. So for example, the API bug I described about not logging to CloudTrail, I actually found that due to copy and pasting code from AWS's website, and I didn't change the content type header. And as a result, I happened to notice this weird behavior and kind of took advantage of it. Other times,
Starting point is 00:13:49 it's thinking a little bit about how something is implemented and sort of the security ramifications of it. So for example, the SSM agent, which is a phenomenal tool in order to like do remote access on your EC2 instances, I was sitting there one day and just kind of thought, hey, how does that authenticate exactly? And what can I do with it? Sure enough, it authenticates the exact same way that the AWS API does, that being the metadata service on the EC2 instance. And so what I figured out pretty quickly is, even if you can get access to an EC2 instance, even as a low privilege user, or you can do service site request forgery to get the keys, or if you just have sufficient
Starting point is 00:14:30 permissions within the account, you can potentially intercept SSM messages from like a session and provide your own results. And so in effect, if you've compromised an EC2 instance, and the only way, say incident response has into that box is SSM, you can effectively lock them out of it and kind of do whatever you want in the meantime. That seems like it's something of a problem. It definitely can be, but it is a lot of fun to play keep away with incident response. I'd like to reiterate that this is all in environments you control and have permissions to be operating within. It is not recommended that people pursue things like this in other people's cloud environments without permissions. I don't
Starting point is 00:15:11 want to find us sued for giving crap advice, and I don't want to find listeners getting arrested because they didn't understand the nuances of what we're talking about. Yes, absolutely. Getting legal approval is really important for any kind of penetration testing or red teaming. I know some folks sometimes might get carried away, but definitely be sure to get approval before you do any kind of testing. So how does someone report a vulnerability to a company like AWS? So AWS, at least publicly, doesn't have any kind of bug bounty program. But what they do have is a vulnerability disclosure program. And that is essentially a email address that you can contact and send information to. And that'll act as your point of contact with AWS while they investigate the issue. And at the end of their investigation, they can report back with their findings, whether they agree with you, and they have are working to get that patched or fixed
Starting point is 00:16:01 immediately, or if they disagree with you and think that everything is hunky-dory, or if you may be mistaken. I saw a tweet the other day that I would love to get your thoughts on, which said effectively that if you don't have a public bug bounty program, then any way that a researcher chooses to disclose the vulnerability is definitionally responsible on their part because they don't owe you any particular duty of care. Responsible disclosure, of course, is also referred to as coordinated vulnerability disclosure because we're always trying to reinvent terminology in this space. What do you think about that? Is there a duty of care from security researchers to responsibly disclose the vulnerabilities they find or coordinate those vulnerabilities with vendors in the absence of a public bounty program on turning those things in? Yeah, you know, I think that's a really difficult question to answer. From my own
Starting point is 00:16:56 personal perspective, I always think it's best to contact the developers or the company or whoever maintains whatever you've found a vulnerability in, Give them the best shot to have it fixed or repaired. Obviously, sometimes that works great and the company is super receptive and they're willing to patch it immediately. And other times they just don't respond or sometimes they respond harshly. And so depending on the situation, it may be better for you to release it publicly with the intention that you're informing folks that this particular company or this particular project may have an issue. On the flip side,
Starting point is 00:17:30 I can kind of understand, although I don't necessarily condone it, why folks pursue things like exploit brokers, for example. So, you know, if a company doesn't have a bug bounty program and the researcher isn't expecting any kind of like cash compensation, I can understand why they may spend tens of hours, maybe hundreds of hours, tracing down a particularly impactful vulnerability only to maybe write a blog post about it or get a little head pat and say, thanks, nice work. And so I can see why they may pursue things like selling it to an exploit broker who may pay them a hefty sum if it is orders of magnitude more it's oh good you found a way to remotely execute code across all of ec2 in every region that is a hypothetical don't email me have a t-shirt it seems like you could basically buy all the t-shirts for what that is worth on the exploit market yes absolutely and
Starting point is 00:18:24 i do know know from some experience that folks will reach out to you and are interested in particularly some cloud exploits, nothing like minor, like some of the things that I found, but thinking more of like accessing resources without anybody knowing or accessing resources cross-account. That could go for quite a hefty sum.
Starting point is 00:18:41 Here at the Duckbill Group, one of the things we do with, you know, my day job is we help negotiate AWS contracts. We just recently crossed $5 billion of contract value negotiated. It solves for fun problems, such as how do you know that your contract that you have with AWS is the best deal you can get? How do you know you're not leaving money on the table? How do you know you're not leaving money on the table? How do you know that you're not doing what I do on this podcast and on Twitter constantly and sticking your foot in your mouth? To learn more, come chat at duckbillgroup.com.
Starting point is 00:19:16 Optionally, I will also do podcast voice when we talk about it. Again, that's duckbillgroup.com. It always feels squeaky on some level to discover something like this that's kind of neat and wind up selling it to basically some arguably terrible people. Maybe. We don't know who's buying these things from the exploit broker. Counterpoint, having reported a few security problems myself to various providers, you get an autoresponder, then you get a thank you email that goes into a bit more detail for the well-run programs at least. And invariably the company's
Starting point is 00:19:51 position is, is whatever you found is not as big of a deal as you think it is. And therefore they see no reason to publish it or go loud with it. Wouldn't you agree? Because on some level, their entire position is, please don't talk about any security shortcomings that you may have discovered in our system. And I get why they don't want that going loud. But by the same token, security researchers need a reputation to continue operating on some level in the market as security researchers, especially independents, especially people who are trying to make names for themselves in the first place. Yeah.
Starting point is 00:20:26 How do you resolve that dichotomy yourself? Yeah, so from my perspective, you know, I totally understand why a company or a project wouldn't want you to publicly disclose an issue. Everybody wants to look good, and nobody wants to be called out for any kind of issue that may have been unintentionally introduced. I think the thing at the end of the day, though, from my perspective, you know, if I, as some random guy in the middle of nowhere, Illinois, finds a bug, or to be frank, if anybody out there finds a vulnerability in something, then a much more sophisticated adversary is equally capable of finding such a thing. And so it's better to have these things out in the open and discussed rather than hidden away, so that we have the best chance of anybody being able to defend against it or develop detections for it rather than just kind
Starting point is 00:21:11 of being like, okay, the vendor didn't like what I had to say. I guess I'll go back to doing whatever things I normally do. You've obviously been doing this for a while, and I'm going to guess that your entire security researcher career has not been focused on cloud environments in general and AWS in particular. Yes, I've done some other stuff in relation to abusing GitLab runners. I also happen to find a pretty neat RCE and privilege escalation in the very popular open source project PyHole. Not sure if you have any experience with that. Oh, I run it myself all the time for various DNS blocking purposes and other sundry bits of nonsense. Oh, yes.
Starting point is 00:21:48 Good. What I'm trying to establish here is that this is not just one or two companies that you've worked with. You've done this across the board, which means I can ask a question without naming and shaming anyone even implicitly. What differentiates good vulnerability disclosure programs from terrible ones? Yeah, I think the major differentiator is the reactivity of the project, as in how quickly they respond to you. There are some programs I've worked with where you disclose something,
Starting point is 00:22:14 maybe even that might be of a high severity, and you might not hear back for weeks at a time. Whereas there are other programs, particularly like the MSRC, which is a part of Microsoft, or with AWS's disclosure program, where within the hour, I had a receipt of, hey, we received this, we're looking into it. And then within a couple hours after that, yep, we verified it, we see what you're seeing, and we're going to look at it right away. I think that's definitely one of the major differentiators for programs. Are there any companies you'd like to call out in either direction? A no is a perfectly valid answer to this one for having excellent disclosure programs versus terrible ones. I don't know if I'd like to call anybody out negatively, but in support, I've definitely
Starting point is 00:22:55 appreciated working with both AWS's and the MSRC and Microsoft's. I think both of them have done a pretty fantastic job and they definitely know what they're doing at this point. Yeah, I must say that I primarily focus on AWS and have for a while, which should be blindingly obvious to anyone who's listened to me talk about computers for more than three and a half minutes. But my experiences with the security folks at AWS have been uniformly positive. Even when I find things that they don't want me talking about that I will be talking about regardless, they've always been extremely respectful.
Starting point is 00:23:34 And I've never walked away from the conversation thinking that I was somehow cheated by the experience. In fact, a couple of years ago with the last in-person re-invent, I got to give a talk around something I reported specifically about how AWS runs its vulnerability disclosure program with one of their security engineers, Zach Glick. And he was phenomenally transparent around how a lot of these things work and what they care about and how they view these things and what their incentives are. And obviously being empathetic to people reporting things in with the understanding that there is no duty of care that when security researchers discover something, they then must immediately go and report it in return for a pat on the head and a thank you. It was really neat being able to see both sides simultaneously around a particular issue. I'd recommend it to other folks, except I don't know how you make that lightning strike twice. It's very, very wise. Yes. Thank you. I do
Starting point is 00:24:21 my best. So what's next for you? You've obviously found a number of interesting vulnerabilities around information disclosure. One of the more recent things that I found that was sort of neat as I trolled the internet, I don't believe it was yours, but there was a ability to determine the account ID that owned an S3 bucket by enumerating via binary search. Did you catch that at all? I did. That was by Ben Britz, which is, it's a pretty awesome technique. And that's been something I've been kind of interested in for a while. There is an ability to enumerate users' roles and service link roles inside an account, so long as you know the account ID. The problem, of course, is getting the account ID.
Starting point is 00:25:01 So when Ben put that out there, I was like super stoked about being able to leverage that now for enumeration and maybe some fun phishing tricks with that. I love the idea. I love seeing that sort of thing being conducted. And AWS's official policy, as best I remember when I looked at this once, account IDs are not considered confidential. Do you agree with that? Yep, that is my understanding of how AWS views it. From my perspective, having an account ID can be beneficial. I mentioned that you can enumerate users' roles and service-linked roles with it, and that could be super useful from a phishing perspective. The average phishing email looks like, oh, you want an iPad, or you're the 100th visitor of some website or something like
Starting point is 00:25:45 that. But imagine getting an email that looks like it's from something like EWS, you know, developer support or from some research program that they're doing. And they can say to you like, hey, we see that you have these roles in your account with account ID, such and such. And we know that you're using EKS and you're using ECS, that phishing email becomes a lot more believable when suddenly this outside party seemingly knows so much about your account. And that might be something that you would think, oh, well, only a real AWS employee or AWS would know that. So from my perspective, I think it's best to try and keep your account ID secret. I actually redact it from every screenshot that I
Starting point is 00:26:25 publish, or at the very least I try to. At the same time, though, it's not the kind of thing that's going to get somebody in your account in a single step. So I can totally see why some folks aren't too concerned about it. I feel like we also got a bit of a red herring coming from AWS blog posts themselves, where they always will give screenshots explaining what they do and redact the account ID in every case. And the reason that I was told at one point was that, oh, we have an internal provisioning system that's different, it looks different, and I don't want to confuse people whenever I wind up doing a screenshot. And that's great, and I appreciate that. And part of me wonders on one level, how accurate is that?
Starting point is 00:27:09 Because sure, I understand that you don't necessarily want to distract people with something that looks different. But then I found out that the system is called Isengard. And yeah, it's great. They've mentioned it periodically in blog posts and talks and the rest. And part of me now wonders, oh, wait a minute. Is it actually because they don't want to disclose the differences between those systems? Or is it because they don't have license rights publicly to use the word Isengard and don't want to get sued by whoever owns the rights to the Lord of the Rings trilogy? So one wonders what
Starting point is 00:27:32 the real incentives are in different cases. But I've always viewed account IDs as being the sort of thing that you probably don't want to share them around all the time, but it also doesn't necessarily hurt. Exactly. Yeah. It's not the kind of thing you want to share with the world immediately, but it doesn't really hurt in the end. There was an early time when the partner network was effectively determining tiers of partner by how much spend they influenced. And the way that you demonstrated that was by giving account IDs for your client accounts. The only verification at the time, to my understanding, was that, yep, that mapped to the client. You said it did, and that was it. So I can understand back in those
Starting point is 00:28:10 days not wanting to muddy those waters, but those days are also long past. So I get it. I'm not going to be the first person to advertise mine, but if you can discover my account ID by looking at a bucket, it doesn't really keep me up at night. So all of those things considered, we've had a pretty wide-ranging conversation here about a variety of things. What's next? What interests you as far as where you're going to start looking and exploring and exploiting, as the case may be, various cloud services? Hackingthe.cloud, which, there's the dot in there, which also turns into a domain, excellent choice, is absolutely going to be a great collection for a lot of what you find and for other people to contribute and learn from one another. But where are you aimed at? What's next? Yeah, so one thing I've been really interested in has been fuzzing the AWS API.
Starting point is 00:29:00 As anyone who's ever used AWS before knows, there are hundreds of services with thousands of APIs, and you typically interact with them through the AWS SDKs. The problem, though, is that those are designed for sort of like the happy path, where you format your request the way Amazon wants. As a security researcher or as someone doing fuzzing, I kind of want to send random gibberish sometimes, or I want to malform my requests. And so I've been working on a library for that. It's still in development, but it has already resulted in a bug. While I was fuzzing part of the AWS API, I happened to notice that I broke Elastic Beanstalk. Quite literally, when I was going through the AWS console, I got the big red error message of b.requestParameters is null. And I was like, huh, why is it null? And come to find out as a result of that, there is an HTML injection vulnerability in the
Starting point is 00:30:06 Elastic, well, there was an HTML injection vulnerability in Elastic Beanstalk for the AWS console. Pivoting from there, Elastic Beanstalk uses AngularJS 1.8.1, or at least it did when I found it. As a result of that, we can modify that HTML injection to do template injection. And for the AngularJS crowd, template injection is basically cross-site scripting, because there is no sandbox anymore, at least in that version. And so as a result of that, I was able to get cross-site scripting in the AWS console, which is pretty exciting. That doesn't tend to happen too frequently. No, that is not a typical issue that winds up getting disclosed very often. Definitely, yeah.
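An editor's illustration of the escalation Nick describes, not his actual payloads: if injected HTML lands in a page that AngularJS compiles, an expression probe like `{{7*7}}` rendering as `49` signals template injection, and in sandbox-less AngularJS that is effectively cross-site scripting. A hedged sketch of the detection check:

```python
# Editor's sketch, not Nick's actual payloads: probing whether an HTML
# injection point is compiled by AngularJS as a template. If an injected
# {{7*7}} comes back rendered as 49, expressions are being evaluated; in
# sandbox-less AngularJS (the expression sandbox was removed in 1.6, so
# this includes 1.8.1) that is effectively cross-site scripting.

PROBE = "{{7*7}}"
EXPECTED = "49"

def looks_like_template_injection(reflected_html: str) -> bool:
    """True if the probe was evaluated rather than reflected verbatim."""
    return EXPECTED in reflected_html and PROBE not in reflected_html

# A commonly cited expression payload for post-sandbox AngularJS,
# shown only for illustration:
XSS_PAYLOAD = "{{constructor.constructor('alert(document.domain)')()}}"

print(looks_like_template_injection("<span>49</span>"))       # True
print(looks_like_template_injection("<span>{{7*7}}</span>"))  # False
```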
Starting point is 00:30:45 And so I was excited about it. And considering the fact that my library for fuzzing is literally barely halfway done, I'm looking forward to what other things I can find with it. I look forward to reading more. And at the time of this recording, I should point out that this has not been finalized or made public. So I'll be keeping my eyes open to see what happens with this. And hopefully this will be old news by the time this episode drops. If not, well, this might be an interesting episode once it goes out. Yeah, I hope they'll have it fixed by then.
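A rough sketch of the request-fuzzing idea discussed above: start from a well-formed request body and generate malformed variants that the official SDKs would never emit. The field names here are hypothetical, not a real Elastic Beanstalk API shape:

```python
# Editor's sketch of the request-mutation idea (hypothetical field
# names): generate malformed variants of a well-formed request body,
# e.g. a nulled field, much like the "requestParameters is null" error
# Nick hit in the console.
import json

def mutate(request: dict):
    """Yield malformed copies of a well-formed request body."""
    for key in request:
        nulled = dict(request)
        nulled[key] = None
        yield nulled  # field present but null
        yield {k: v for k, v in request.items() if k != key}  # field dropped
        wrong_type = dict(request)
        wrong_type[key] = [request[key]]
        yield wrong_type  # field wrapped in an unexpected type

base = {"ApplicationName": "demo-app", "EnvironmentName": "demo-env"}
for variant in mutate(base):
    print(json.dumps(variant))  # each line is one malformed request body
```

Each mutation targets one field at a time, so a failing variant points directly at the parameter the service mishandles.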
Starting point is 00:31:17 They haven't responded to it yet, other than the, hi, we've received your email. Thanks for checking in. But we'll see how that goes. Watching news as it breaks is always exciting. If people want to learn more about what you're up to and how you go about things, where can they find you? Yeah, so you can find me in a couple different places.
Starting point is 00:31:34 On Twitter, I'm Frichette underscore N. I also write a blog where I publish a lot of my research at frichetten.com, as well as Hacking the Cloud. I contribute a lot of the AWS stuff that gets thrown on there. And it's also open source. So if anyone else would like to contribute or share their knowledge,
Starting point is 00:31:51 you're absolutely welcome to do so. Pull requests are open, and I'm excited for anyone to contribute. Excellent. And we will, of course, include links to that in the show notes. Thank you so much for taking the time to speak with me. I really appreciate it.
Starting point is 00:32:02 Yeah, thank you so much for inviting me on. I had a great time. Nick Frichette, Penetration Tester and Team Lead for State Farm. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me why none of these things are actually vulnerabilities, but simultaneously should not be discussed in public ever.
