Screaming in the Cloud - Keeping Workflows Secure in an Ever-Changing Environment with Adnan Khan
Episode Date: October 17, 2023

Adnan Khan, Lead Security Engineer at Praetorian, joins Corey on Screaming in the Cloud to discuss software bill of materials and supply chain attacks. Adnan describes how simple pull requests can lead to major security breaches, and how to best avoid those vulnerabilities. Adnan and Corey also discuss the rapid innovation at GitHub Actions, and the pros and cons of having new features added so quickly when it comes to security. Adnan also discusses his view on the state of AI and its impact on cloud security.

About Adnan

Adnan is a Lead Security Engineer at Praetorian. He is responsible for executing on Red Team engagements as well as developing novel attack tooling in order to meet and exceed engagement objectives and provide maximum value for clients. His past experience as a software engineer gives him a deep understanding of where developers are likely to make mistakes, and he has applied this knowledge to become an expert in attacks on organizations' CI/CD systems.

Links Referenced:

Praetorian: https://www.praetorian.com/
Twitter: https://twitter.com/adnanthekhan
Praetorian blog posts: https://www.praetorian.com/author/adnan-khan/
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Are you navigating the complex web of API management,
microservices, and Kubernetes in your organization?
Solo.io is here to be your guide to connectivity
in the cloud-native universe.
Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking.
They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API
management, and Gloo Mesh Core, a necessary step to secure, support, and operate your
Istio environment.
Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters,
your application?
Solo.io has got your back with networking for applications, not infrastructure.
Embrace zero-trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io.
Here's the real game-changer.
A common interface for every connection, in every direction, all with one API.
It's the future of connectivity, and it's called Gloo by Solo.io.
DevOps and platform engineers,
your journey to a seamless cloud-native experience
starts here.
Visit Solo.io slash Screaming in the Cloud today
to level up your networking game.
As hybrid cloud computing becomes more pervasive, IT organizations need an automation platform that spans networks, clouds, and services while helping deliver on key business objectives.

Welcome to Screaming in the Cloud.
I'm Corey Quinn. I've been studiously ignoring
a number of buzzword hype-y topics, and it's probably time that I addressed some of them.
One that I've been largely ignoring, mostly because of its prevalence at expo hall booths
at RSA and other places, has been software bill of materials and supply chain attacks. Finally, I figured I would indulge the topic.
Today, I'm speaking with Adnan Khan, lead security engineer at Praetorian.
Adnan, thank you for joining me.
Thank you so much for having me.
So I'm trying to understand, on some level, the idea of these SBOM or bill-of-materials attacks, where they start and where they stop.
I've seen it as far as upstream dependencies have a vulnerability. Great. I've seen misconfigurations in how companies wind up configuring their open source presences.
There have been a bunch of different, it feels almost like orthogonal concepts to my mind,
lumped together as this big, scary thing, because if we have a big single scary thing we can point at, that unlocks budget. Am I being overly cynical on this
or is there more to it? I'd say there's a lot more to it. And there's a couple components here. So
first you have the SBOM type approach to security where organizations are looking at which packages are incorporated
into their builds.
And vulnerabilities can come out in a number of ways.
So you could have software actually have bugs, or you could have malicious actors actually
insert backdoors into software.
I want to talk more about that second point. How do malicious actors actually insert backdoors? Sometimes it's compromising a developer. Sometimes it's
compromising credentials to push packages to a repository. But other times, it could be as simple
as just making a pull request on GitHub. And that's somewhere where I've spent a bit of time doing research,
building off of techniques that other people have documented,
and also trying out some attacks for myself against two Microsoft repositories
and several others that I've reported over the last few months
that would have been able to allow an attacker to slip a backdoor into code
and expand the number of projects that they are able to attack beyond that.
I think one of the areas that we've seen a lot of this coming from
has been the GitHub action space.
And I'll confess that I wasn't aware of a few edge case behaviors around this.
Most of my experience with client-side Git configuration
in the.git repository, pre-commit hooks being a great example, intentionally and by design from a security perspective, do not convey when you check that code in and push it somewhere or grab someone else's, which is probably for the best because otherwise it's, oh yeah, just go ahead and copy your password hash file and email that to something else via a series of arcane shell script stuff.
The vector's there.
I was unpleasantly surprised somewhat recently to discover that when I cloned a public project and started running it locally and then adding it to my own fork, that it would attempt to invoke a whole bunch of GitHub Actions flows that I'd never allowed it to do.
That was, let's say, eye-opening.
Yeah. So on the particular topic of GitHub Actions, the pull request as an attack vector, there's a lot of different forms that
attack can take. So one of the more common ones, and this is something that's been around for just
about as long as GitHub Actions has been around, and this is a certain trigger called pull request target. What this
means is that when someone makes a pull request against the base repository, or maybe a branch
within the base repository such as main, that will be the workflow trigger. And from a security
perspective, when it runs on that trigger,
it does not require approval at all.
And that's something that a lot of people
don't really realize
when they're configuring their workflows.
Because normally,
when you have a pull request trigger,
the maintainer can check a box that says,
oh, require approval for all external pull requests.
And they think, great,
everything needs to be approved.
If someone tries to add malicious code to a run that's on the pull request trigger, then they can look at the code before it runs and they're fine. But with the pull request target trigger,
there is no approval and there's no way to require an approval except for configuring the workflow
securely. So in this case, what happens is, and in one particular case against the Microsoft
repository, this was a Microsoft reusable GitHub action called GPT review.
It was vulnerable because it checked out code from my branch.
So if I made a pull request, it checked out code from my branch,
and you could find this by looking at the workflow.
And then it ran tests on my branch.
So it's running my code.
So by modifying the entry points,
I could run code that runs in the context
of that base branch and steal secrets from it
and use those to perform malicious actions.
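A minimal sketch of the vulnerable pattern being described, with hypothetical file, script, and secret names rather than the actual Microsoft workflow: the pull_request_target trigger runs with the base repository's secrets in scope and no approval gate, and the checkout step pulls in the pull request's code before executing it.

```yaml
# Hypothetical sketch only -- not the actual gpt-review workflow.
# pull_request_target runs in the context of the base repository, with its
# secrets available, and does not require approval for external PRs.
name: pr-tests
on:
  pull_request_target:
    types: [opened, synchronize]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the attacker-controlled PR head...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then runs it with a secret in the environment, so modifying the
      # test entry point lets the PR author read that secret.
      - name: Run tests (placeholder script)
        run: ./run_tests.sh
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```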
Gotcha.
It feels like, historically,
one of the big threat models around things like this,
when you have any sort of CI/CD exploit, is that it either falls down one of two branches.
It's either getting secrets access
so you can leverage those credentials
to pivot into other things.
I've seen a lot of that in the AWS space.
Or more boringly and more commonly in many cases, it seems to be, oh, how do I get it
to run this crypto miner nonsense thing?
With the somewhat large-scale collapse of crypto across the board, it's been convenient
to see that be less prevalent, but still there.
Just because you're not making as much money means that you'll still just have to do more
of it when it's all in someone else's account. So I guess it's easier to see and detect a lot of the exploits that require
a whole bunch of compute power. The, oh, by the way, we stole your secrets and now we're going
to use that to lateral into an organization, seemed like it's something far more, I guess,
dangerous and also sneaky. Yeah, absolutely. And you hit the nail on the head there with sneaky because when I
first demonstrated this, I made a test account. I created a PR. I made a couple actions such as I
modified the name of the release for the repository. I just put a little tag on it and didn't
make any other changes. And then I also created a feature branch in one of Microsoft's repositories.
I don't have permission to do that. That just sat there for almost two weeks and then someone else exploited it and then they responded to it.
So sneaky is exactly the word you could describe something like this.
And another reason why it's concerning is
beyond the secret disclosure, in this case the repository
only had an OpenAI API key.
So, OK, you can talk to ChatGPT for free.
But this was itself a GitHub action.
And it was used by another Microsoft machine learning project that had a lot more users called SynapseML, I believe was the name of the other project. So what someone could do is backdoor this action
by creating a commit in a feature branch,
which they can do by stealing the built-in GitHub token.
And this is something that all GitHub action runs have.
The permissions for it vary, but in this case it had write permissions. An attacker could create a new branch, modify code in that branch, and then modify the tag,
which in Git, tags are mutable. So you can just change the commit the tag points to.
And now every time that other Microsoft repository runs GPT review to review a pull request,
it's running attacker-controlled code. And then that could potentially backdoor
that other repository, steal secrets from that repository.
So that's one of the scary parts of,
in particular, backdooring a GitHub action.
And I believe there was a very informative
Black Hat talk this year.
I'm forgetting the name of the author,
but it was a very good watch about how GitHub Actions workflows can be vulnerable.
And this is kind of an example of, it just happened to be that this was an action as well.
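A hedged sketch of the two knobs that matter in that story, using made-up action names and a placeholder commit SHA: keeping the built-in GITHUB_TOKEN read-only so a hijacked run can't push branches or move tags, and pinning any third-party action to a full commit SHA in consuming workflows so a repointed tag doesn't silently change what runs.

```yaml
# Hypothetical consumer workflow; the action name and pinned SHA are placeholders.
name: code-review
on: [pull_request]
# A read-only GITHUB_TOKEN means a run that gets hijacked can't use the token
# to create branches or retag releases:
permissions:
  contents: read
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      # Referencing the action by tag is mutable -- if @v1 is repointed to a
      # backdoored commit, this step silently runs attacker-controlled code:
      #   - uses: example-org/review-action@v1
      # Pinning to the full commit SHA avoids trusting the tag at all:
      - uses: example-org/review-action@0123456789abcdef0123456789abcdef01234567
```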
That feels like this is an area of exploit that is becoming increasingly common.
I tie it almost directly to the rise of GitHub Actions as the default CI/CD system that a lot of folks have
been using. For the longest time, it seemed like a poorly configured Jenkins box hanging out
somewhere in your environment. That was the exception to the infrastructure as code rule,
because everyone has access to it, configures it by hand, and invariably has access to production,
was the way that people would exploit things. For a while, you had CircleCI and TravisCI,
before Travis imploded
and Circle did a bunch of layoffs. Who knows where they're at these days? But it does seem
that the common point now has been GitHub Actions. And a .github folder within that Git repo with a
workflows YAML file effectively means that a whole bunch of stuff can happen that you might not be
fully aware of when you're cloning or following along with someone's tutorial somewhere. That has caught me out in a
couple of strange ways, but nothing disastrous because I do believe in realistic security
boundaries. I just worry how much of this is the emerging factor of having a de facto standard
around this versus something that Microsoft has actively gotten wrong.
What's your take on it?
Yeah, so my take here is that GitHub could absolutely be doing a lot more
to help prevent users from shooting themselves in the foot
because their documentation is very clear
and quite frankly, very good.
But people aren't warned
when they make certain configuration settings
in their workflows.
I mean, GitHub will happily take the settings and they, you know, they hit commit and now
the workflow could be vulnerable.
There's no automatic linting of workflows or, hey, a little suggestion box popping up
like, hey, are you sure you want to configure it this way?
The technology to detect that is there.
There's a lot of third-party utilities that'll lint Actions workflows. Heck,
for looking for a lot of these pull request target type
vulnerabilities, I use a GitHub code search query.
It's just a regular expression. So having something that
at least nudges users to not make that
mistake would go really far in helping
people not make these mistakes, you know,
adding vulnerabilities to their projects.
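The exact query isn't spelled out in the episode, but a hypothetical GitHub code search in that spirit, just two literal strings scoped to workflow files, would be a reasonable starting point for surfacing candidate repositories:

```
path:.github/workflows "pull_request_target" "github.event.pull_request.head"
```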
It seems like there's also been issues around the GitHub Actions integration approach where
OIDC has not been scoped correctly a bunch of times. I've seen a number of articles come across
my desk in that context. And fortunately, when I wound up passing out the ability for one of my
workflows to deploy to my AWS account, I got it right because I had no idea what I was doing and
carefully followed the instructions.
But I can totally see overlooking that one additional parameter that leaves things just
wide open for disaster.
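For reference, a hedged sketch of the GitHub-to-AWS OIDC setup being described, with placeholder account, role, and repository names: the workflow side only needs the id-token permission and the official credentials action, and the parameter that's easy to get wrong lives in the IAM role's trust policy, which should restrict the token's sub claim to a specific repository and branch rather than accepting tokens from any repo.

```yaml
# Hypothetical deploy workflow (role ARN, region, and repo names are placeholders).
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # allows the job to request an OIDC token from GitHub
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
          aws-region: us-east-1
      # The matching IAM trust policy should condition on something like
      #   token.actions.githubusercontent.com:sub = "repo:example-org/example-repo:ref:refs/heads/main"
      # Leaving that condition off, or wildcarding it, is the kind of
      # misconfiguration described above.
```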
Yeah, absolutely.
That's one where I haven't spent too much time actually looking for that myself, but
I've definitely read those articles that you mentioned.
And yeah, it's very easy for someone to make that mistake. Just like it's easy for
someone to just misconfigure their action in general, because in some of the cases where
I found vulnerabilities, there would actually be a commit saying, hey, I'm making this change
because the action needs access to these certain secrets. And oh, by the way, I need to update
the checkout step. So it actually checks out the PR head so that it's
testing the PR code. People are actively making a decision to make it vulnerable because they don't
realize the implication of what they've just done. And the second Microsoft repository that I
found the bug in was called Microsoft Confidential Sidecar Containers.
In that repository, the developer, a week prior to me identifying the bug,
made a commit saying that we're making a change and it's okay
because it requires approval.
Well, it doesn't because it's pull request target.
Part of me wonders how much of this is endemic to open source
as envisioned through enterprises
versus my world of open source,
which is just,
I've got this weird side project in my spare time
and it seemed like it might be useful to someone else.
So I'll go ahead and throw it up there.
I understand that there's been an awful lot
of commercialization of open source in recent years.
I'm not blind to that fact,
but it also seems like there's a lot of companies
playing very fast and loose
with things that they probably shouldn't be,
since they have more of a security apparatus than any random contributor standing up a clone of something somewhere will.
Yeah, we're definitely seeing this a lot in the machine learning space because of companies that are trying to move so quickly with trying to build things.
Because OpenAI has blown up quite a bit recently. Everyone's trying to get a piece of that machine learning pie, so to speak. And another thing you're seeing is people are deploying self-hosted runners with NVIDIA, what is it, the A100? It's some graphics card that's like $40,000 apiece, attached to runners for
running integration tests on machine learning workflows.
Someone could, via a pull request, also just run code on those
and mine crypto. I kind of miss the days
when exploiting computers was basically just a way for people to prove how clever
they were or once in a blue moon come up with something innovative. Now it's like, well, we've gone all around the mulberry bush just so we can basically make computers solve a sudoku and, in turn, turn that into money down the road. It's frustrating, to put it gently. When you take a look across the
board at what companies are doing and how they're embracing the emerging capabilities inherent to these technologies, how do you avoid becoming a cautionary tale in this space?
So on the flip side of companies having vulnerable workflows, I've also seen a lot of very elegant ways of writing secure workflows.
And some of the repositories are using deployment environments, which is the
GitHub Actions feature, to enforce approval checks. So workflows that do need to run on
pull request target because they need to access secrets for pull requests will have a step that
requires a deployment environment to complete, and that deployment environment is just an approval
and it doesn't do anything.
So essentially, someone who has permissions
to the repository will go and approve that environment check
and only then will the workflow continue.
So that adds mandatory approvals to pull requests
where otherwise they would just run without approval.
And this is on particularly the pull request target trigger.
Another approach is making it so the trigger is only running on the label event
and then having a maintainer add a label so the test can run and then remove the label.
So that's another approach where companies are figuring out ways to write secure workflows and not leave their repositories vulnerable.
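Putting those two patterns together in one hedged sketch, where the environment name, label name, and test script are placeholders: a pull_request_target workflow that only fires when a maintainer applies a label, and that routes through a deployment environment configured to require a reviewer before any secrets-bearing job runs.

```yaml
# Hypothetical hardened workflow combining both approaches described above.
name: external-pr-tests
on:
  pull_request_target:
    types: [labeled]              # only runs when a maintainer applies a label
jobs:
  approval:
    if: github.event.label.name == 'safe-to-test'
    runs-on: ubuntu-latest
    environment: external-pr      # environment configured to require reviewer approval
    steps:
      - run: echo "Approved by a maintainer"
  test:
    needs: approval               # secrets only become reachable after the gate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./run_integration_tests.sh
        env:
          SOME_SECRET: ${{ secrets.SOME_SECRET }}
```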
It feels like every time I turn around, GitHub Actions has gotten more capable.
And I'm not trying to disparage the product.
It's kind of the ideal of what we want.
But it also means that there's certainly not an awareness in the larger community of how these things can go awry that has kept up with the pace
of feature innovation. How do you balance this without becoming the department of no?
Yeah, so it's a complex issue. I think GitHub has evolved a lot over the years. Actions,
despite some of the security issues that happen because people don't configure them properly,
is a very powerful product.
For a CI/CD system to work at the scale it does
and allow so many repositories to work
and integrate with everything else,
it's really easy to use.
So it's definitely something you don't want to take away
or have organizations move away from something like that
because they are worried about the security risks.
When you have features coming in so quickly,
I think it's important to have a base,
kind of like a mandatory reading.
Like if you're a developer that writes
and maintains an open-source software,
go read through this document
so you can understand the do's and don'ts
instead of it being a patchwork
where some people, they take a good
security approach and write secure workflows. And some people just kind of stumble through
Stack Overflow, find what works, mess around with it until their deployment's working and their
CI/CD is working and they get the green checkmark. And then they move on to their
never-ending list of tasks because they're always working on a deadline.
Reminds me of a project I saw a few years ago when it came out that Volkswagen had been
lying to regulators.
It was a framework someone built called Volkswagen that would detect if it was running inside
of a CI/CD environment, and if so, it would automatically make all the tests pass.
I have a certain affinity for projects like that.
Another one was a tool that would intentionally degrade the performance of a network connection so you could simulate having a latent or stuttering connection with packet loss, and they called that Comcast. Same story. I just thought that it's fun seeing people get clever around things like that.
Yeah, absolutely.

...once and I gave up after 12 dependency leaps from just random open source frameworks. I mean,
I see the Dependabot problem that this causes as well, where whenever I put something on GitHub
and they don't touch it for a couple of months, because that's how I roll, I come back and there's
a whole bunch of terrifyingly critical updates that it's warning me about. But given the nature
of how these things get used, it's never going to impact anything that I'm currently running.
So I've learned to tune it out and just ignore it when it comes in, which is probably the worst of all possible approaches.
Now, if I worked at a bank, I should probably take a different perspective on this, but I don't.
Yeah, and that's kind of a problem you see not just with SBOMs. It's just security alerting in general, where anytime you have some sort of signal and
people who are supposed to respond to it are getting too much of it, you just start to tune
all of it out. It's like that human element that applies to so much in cybersecurity. And I think
for the particular SBOM problem where, yeah, you're correct. Like a lot of it, you don't have
reachability because you're using a library for one particular function and that's it.
And this is somewhere where I'm not that much of an expert, in doing more static source analysis and reachability testing, but I'm certain there are products and tools that offer that
feature to actually prioritize SBOM-based alerts based
on actual reachability versus just having it as a dependency or not.
I feel like on some level, wanting people to be more cautious about what they're doing
is almost shouting into the void because I'm one of the only folks I've found that has
made the assertion that, oh, companies don't actually care about security.
Yes, they email you all the time after they failed to protect your security, telling you how much they care about security.
But when you look at where they invest, feature velocity always seems to outpace investment in security approaches.
And take a look right now at the hype we're seeing across the board when it comes to generative AI.
People are excited about the capabilities and security is a distant afterthought
around an awful lot of these things.
I don't know how you drive a broader awareness of this
in a way that sticks,
but clearly we haven't collectively found it yet.
Yeah, it's definitely a concern.
When you see things on, like, for example,
you can look at GitHub's roadmap,
and there's a feature there that's,
oh, automatic AI-based pull request handling.
OK, so does that mean one day you'll have a GitHub-powered LLM just approve PRs based
on whether it determines that it's a good improvement or not?
Like, obviously, that's not something that's the case now. But looking forward to maybe five, six years in the future, in the pursuit of that ever increasing velocity, could you ever have
a situation where actual code contributions are reviewed fully by AI, and then approved and merged?
Like, yeah, that's scary, because now you have a threat actor that could potentially,
specifically tailor
contributions to trick the AI into thinking they're great, but then it could turn around and
be a backdoor that's being added to the code. Obviously that's very far in the future. And I'm
sure a lot of things will happen before that, but it starts to make you wonder like if things are
heading that way or will people realize that you need to look at security
at every step of the way instead of just thinking that these newer AI systems can just handle
everything. Let's pivot a little bit and talk about your day job. You're a lead security engineer at
what I believe to be a security-focused consultancy, or if you're not a SaaS product,
everything seems to become a SaaS product in the fullness of time. What does your day job look like? Yeah, so I'm a security engineer on
Praetorian's Red Team. And my day-to-day, I'll kind of switch between application security and
Red Teaming. And that kind of gives me the opportunity to kind of test out newer things
out in the field, but then also go and do more traditional application security assessments
and code reviews and reverse engineering
to kind of break up the pace of work.
Because red teaming can be very fast-paced and exciting,
but sometimes that can lead to some pretty late nights.
But that's just the nature of being on a red team.
It feels like as soon as I get into the security space
and start talking to cloud companies,
they get a lot more defensive than when I'm making fun of,
you know, bad service naming
or APIs that don't make a whole lot of sense.
It feels like companies have a certain sensitivity
around the security space
that applies to almost nothing else.
Do you find as a result that a lot of the times
when you're having conversations with companies
and they figure out that, oh, you're a red teamer or a security researcher,
suddenly we're not going to talk to you the way we otherwise might.
We thought you were a customer, but nope, you can just go away now.
I personally haven't had that experience with cloud companies.
I don't know if I've really tried to buy a lot. If I ever buy some infrastructure from cloud companies as an individual, I just kind of sign up and put in my credit card and they just take my money. So I haven't really personally run into
anything like that yet. Yeah. I'm curious to know how that winds up playing out in some of these,
I guess, more strategic, larger company
environments. I don't get to see that because I'm basically a tiny company that dabbles in security
whenever I stumble across something, but it's not my primary function. I just worry on some level,
one of these days, I'm going to wind up accidentally dropping a zero day on Twitter or something like
that. And suddenly everyone's going to come after me with the knives. I feel like at some point,
it's just going to be a matter of time.
Yeah.
I think when it comes to disclosing things and talking about techniques,
the key thing here is that a lot of the things that I'm talking about,
a lot of the things that I'll be talking about in some blog posts that I have coming out,
this is stuff that these companies are saying themselves.
They recognize that these are security issues that
people are introducing into code.
They encourage people to not make these mistakes.
But when it's buried four links deep in documentation and developers are tight on time and aren't digging through their security documentation, they're just looking at what works,
getting it to work and moving on.
That's where the issue is.
So from a perspective of raising awareness,
I don't feel bad if I'm talking about something
that the company itself agrees is a problem.
It's just a lot of times their own engineers
don't follow their own recommendations.
Yeah, I have opinions on these things.
And unfortunately, it feels like I tend to learn them in some of the more unfortunate
ways of, oh, yeah, I really should care about this thing.
But I only learn what the norm is after I've already done something.
This is, I think, the problem inherent to being small and independent the way that I
tend to be.
We don't have enough people here for there to be a dedicated red team research environment,
for example. I tend to bleed over a little bit into a whole bunch of different things.
We'll find out. So far, I've managed to avoid getting it too terribly wrong, but I'm sure it's
just a matter of time. So one area that I think seems to be a way that people try to avoid cloud
issues is, oh, I read about that in
the last in-flight magazine that I had in front of me, and the cloud is super insecure. So we're
going to get around all that by running our own infrastructure ourselves from either a CI/CD
perspective or something else. Does that work when it comes to this sort of problem?
Yeah, glad you asked about that. So I've also seen companies that have a large open source presence on GitHub just opt to have self-hosted GitHub Actions runners. And that opens up a whole different Pandora's box of attacks that an attacker could take advantage of, and it's only there because they're using that kind of runner. So the default GitHub Actions runner, it's just an agent that runs on a machine. It checks in with GitHub Actions, it pulls down builds, runs them, and then it waits for another build.
So the default state is a non-ephemeral runner with the ability to fork off tasks that can run in the background. So when you have a public
repository that has a self-hosted runner attached to it, it could be at the organization level,
or it could be at the repository level. What an attacker can just do is create a pull request,
modify the pull request workflow to run on a self-hosted runner, and write whatever they want in that workflow. As long as they were a previous contributor, meaning they fixed a typo, even a single-character typo change could qualify, or made some other small contribution, they create the pull request, and the arbitrary job that they wrote
is now picked up by the self-hosted runner. They can fork off a process to run in the background.
And then that just continues to run. The job finishes, and their pull request, they just close it, business as usual. But now they've got an implant on the self-hosted runner.
And if the runners are non-ephemeral, it's very hard to completely lock that down. And that's something that I've seen quite a bit of on GitHub, and you can identify it just by looking at the run logs. That kind of comes from people saying, oh, let's just self-host our runners,
but they also don't configure that properly.
And that opens them up to not only tampering
with their repositories, stealing secrets,
but now depending on where your runner is,
now you potentially could be giving an attacker
a foothold in your cloud environment.
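To make that concrete, here is a hedged sketch of the kind of workflow change an attacker submits in a fork pull request; the script names are placeholders, and it only works if fork PRs from previous contributors run without approval and the runner is non-ephemeral.

```yaml
# Hypothetical attacker-modified workflow submitted from a fork.
name: ci
on: [pull_request]
jobs:
  build:
    runs-on: self-hosted          # retargets the job onto the victim's persistent runner
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          # Forks off a background process that outlives the job on a
          # non-ephemeral runner (placeholder for an implant).
          nohup ./persistence.sh >/dev/null 2>&1 &
          ./run_tests.sh          # keep the build looking normal
```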
Yeah, that seems like it's generally a bad thing.
I found that cloud tends
to be more secure than running it yourself in almost every case, with the exception that once
someone finds a way to break into it, there's suddenly a lot more eggs in a very large, albeit
more secure basket. So it feels like it's a consistent trade-off, but as time goes on,
it feels like it is less and less defensible, I think, to wind up picking out
an on-prem strategy from a pure security point of view.
I mean, there are reasons to do it.
I'm just not sure.
Yeah.
And I think the distinction to be made there, in particular with CI/CD runners, is there's full cloud, meaning you let your CI/CD provider host your infrastructure as well. There's kind of that hybrid approach you mentioned, where you're using a CI/CD provider, but then you're bringing your own cloud infrastructure that you think you could secure better. Or you have your runners sitting in vCenter in your own data center. And all of those, whether you have a runner in your cloud or in your data center, could end up equally vulnerable if you're not segmenting builds properly.
And that's the core issue that happens when you have a self-hosted runner: if they're not ephemeral,
it's very hard to cut off all attack paths.
There's always something an
attacker can do to tamper with another build that'll have some kind of security impact. You
need to just completely isolate your builds. And that's essentially what you see in a lot of these
newer guidance, like the SLSA framework. That's kind of the core recommendation of it: one build, one clean runner.

Yeah, that seems to be the common wisdom. I've been doing a lot of work with my own self-
hosted runners that run inside of Lambda. Definitionally, those are, of course, ephemeral.
And there's a state machine that winds up handling that and screams bloody murder if there's a
problem with it. So, so far, crossing fingers, hoping it works out well. And I have it bounded
to a very limited series of role permissions and, of course, its own account to constrain blast
radius. But there's still, there are no guarantees in this.
The reason I build it the way I do is that
alright, worst case, someone can get
access to this. The only thing they're going
to have the ability to do is frankly
run up my AWS bill, which is
an area I have some small amount of experience
with.
Yeah, yeah, that's always kind of
the core thing where if you get into someone's cloud
like, well, just sit there and use their compute resources.
Exactly. I kind of miss when that was the worst failure mode you had for these things.
I really want to thank you for taking the time to speak with me today.
If people want to learn more, where's the best place for them to find you?
I do have a Twitter account. Well, I guess you can't call it Twitter anymore.
Watch me. Sure I can.
Yeah, so I'm on Twitter and it's adnanthekhan. So it's like my first name with "the" and then K-H-A-N because, you know, my full name probably got taken up like years before I ever made a Twitter
account. So occasionally I tweet about GitHub Actions there. And on Praetorian's website,
I've got a couple of blog posts. The one that really goes in depth, talking about the two Microsoft repository pull request attacks and a couple other ones that I disclosed, will hopefully drop on the twenty... what is that, Tuesday? That's the 26th, so it should be airing on the Praetorian blog then. So it should be out by the time this is published.
So we will of course put a link to that in the show notes. Thank you so much for taking the time
to speak with me today. I appreciate it.

Likewise. Thank you so much, Corey.
Adnan Khan, lead security engineer at Praetorian. I'm cloud economist Corey Quinn,
and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star
review on your podcast platform of choice. Whereas if you've hated this podcast, please
leave a five-star review on your podcast platform of choice, along with an insulting comment that's
probably going to be because your podcast platform of choice is somehow GitHub Actions. If your AWS bill keeps rising and your
blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.