Screaming in the Cloud - Security Can Be More than Hues of Blue with Ell Marquez
Episode Date: January 4, 2022
About Ell: Ell, former SysAdmin, cloud builder, podcaster, and container advocate, has always been a security enthusiast. This enthusiasm and driven curiosity have helped her become an active member of the InfoSec community, leading her to explore the exciting world of Genetic Software Mapping at Intezer.
Links:
Intezer: https://www.intezer.com
Twitter: https://twitter.com/Ell_o_Punk
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
It seems like there's a new security breach every day.
Are you confident that an old SSH key or a shared admin account isn't going to come back and bite you?
If not, check out Teleport. Teleport is the easiest,
most secure way to access all of your infrastructure. The open-source Teleport
access plane consolidates everything you need for secure access to your Linux and Windows servers, and I assure you, there is no third option there.
Kubernetes clusters, databases, and internal applications
like AWS Management Console, Jenkins, GitLab, Grafana,
Jupyter Notebooks, and more.
Teleport's unique approach is not only more secure,
it also improves developer productivity.
To learn more, visit goteleport.com. And no, that's not me telling you to go away. It is goteleport.com.
This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of Hello World demos?
Allow me to introduce you to Oracle's Always Free tier.
It provides over 20 free services and infrastructure: networking, databases, observability, management, and security.
And let me be clear here, it's actually free.
There's no surprise billing until you
intentionally and proactively upgrade your account. This means you can provision a virtual
machine instance or spin up an autonomous database that manages itself, all while gaining the
networking, load balancing, and storage resources that somehow never quite make it into most free
tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept
testing without spending a dime. You know that I always like to put an asterisk next to the word free.
This is actually free. No asterisk. Start now. Visit snark.cloud slash oci-free. That's snark.cloud slash oci-free.
Welcome to Screaming in the Cloud. I'm Corey Quinn. If there's one thing we love doing in
the world of cloud, it's forgetting security until the very end, going back and bolting it on
as if we intended to do it that way all along. That's why AWS says security is job zero,
because they didn't want to renumber all of their slides once they realized they forgot security. Here to talk with me about that today is
Ell Marquez, security research advocate at Intezer. Ell, thank you for joining me.
Of course.
So what does a security research advocate do, for lack of a better question, I suppose? Because
honestly, you look at that, it's like a security research advocate, it seems, would advocate for doing security research. That seems
like a good thing to do. I agree, but there's probably a bit more nuance to it than I can pick
up just by a facile reading of the title. You know, we have all of these white papers that
you end up getting, the pen test reports that are dropped on your desk that nobody ever gets to.
They become low priority.
My job is to actually advocate that you do something with the information that you get.
And part of that just involves translating that into plain English so anyone can go with it.
I've got to say, if you want to give away the secrets of the universe and make sure that no one ever reads them, make sure that it has a whole bunch of academic-style citations at the beginning
and ideally put it behind some academic paywall. And it feels like people will claim to have read it but never actually read
the thing. Don't forget charts. Oh, yes, with the charts. In varying shades of blue. Apparently,
that's the only color you're allowed to do some of these charts in despite having a full universe
of color palettes out there. We're just going to put it in varying shades of corporate blue and
hope that people read it. Yep, that sounds about right for security.
So how much of, I guess, modern security research these days
is coming out of academia versus coming out of industry?
In my experience and research I've done in researching researchers,
it all really revolves around actual practitioners these days,
people who are on the front lines, you know, monitoring their honeypots and actually reporting back on what they're seeing, not just theoretical.
Which I guess brings us to the question of, I wind up watching all of the keynotes that all
the big cloud providers put on, and they simultaneously pat me on the head and tell
me that their side of security is just fine with their shared responsibility model and the rest,
whereas all of the breaches I'm ever going to deal with,
and the only way anyone can ever see my data is if I make a mistake in configuring something.
And honestly, does that really sound like something I would do?
Probably not, but let's face it, they claim that they are more or less infallible.
How accurate is that?
I wish that I could find the original person that said this,
but I've heard it so many times,
and it's actually the cloud irresponsibility model. We have this blind faith that if we're
paying somebody for it, it's going to be done correctly. I think you may have seen this with
billing. How many people are paying for redundant security services with a cloud provider?
I have, more than once, noticed that if you were to configure
every AWS security service that they have
and enable it in your account,
the resulting bill would be larger
than the cost of the data breach it was preventing.
So on some level, there is a point
at which it just becomes ridiculous
and it's not necessarily worth pursuing further.
I honestly used to think
that the shared responsibility model story was a sales
pitch, and then I grew ever more cynical, and now my position on it is that it's because if you get
breached, it's your fault, is what they're trying to say. But if you say it outright to someone who
just got breached, they're probably not going to give you money anymore. So you need to wrap that
in this whole involved 45-minute presentation with slides and charts and images and the rest,
because people can't refute one of those quite the way that they can a one-sentence tweet of, it's your fault.
I kind of have to agree with them in the end that it is your fault. Like, the buck stops with you
regardless. You are the one that chose to trust that the cloud provider was going to do everything,
because your security team might make a mistake,
but the cloud provider is made up of humans as well who can make just as many mistakes.
At the end of the day,
I don't care what cloud provider you used.
I care that my data was compromised.
One of the things that hurts me the most
is when I read about a data breach from a vendor
that I had either trusted knowingly with my data
or worse, never trusted,
but they somehow scraped it somewhere and then lost it. And they said, oh, a third-party contractor
that we hired. It's, yeah, look, I'm doing business with you, ideally, not the people that you choose
to do business with in turn. I didn't select that contractor. You did. You can pass out the work and delegate that.
You cannot delegate the responsibility. So now, Verizon, when you talk about having a third-party
contractor have a data breach of customer data, you lost the data by not vetting your contractors
appropriately. Let's go back in time to hopefully something everybody remembers, Target. Target
being compromised because of their HVAC provider.
Yeah, how many people, you know, this is being recorded in the holiday season,
are still shopping at Target right now? I don't know if people forgot it, or if they just don't care.
A year later, their stock price was higher than it was before the breach. Sure, they had a complete
turnover of their C-suite at that point. Their CIO and CEO were forced out as a result, but
life went on.
And they continued to remain a going concern despite quite literally having a bullseye painted on the building.
You'd think that would be a metaphor for security issues, but no, no, that is something they actually do.
You know, when you talk about the CEO being let go or being run out, what part did he honestly have to play in it? They're talking about, oh, well, they made the decisions and they were responsible, what, because they got that report of just 8,000 pages with the charts on it?
As I take a look at a lot of the previous issues that we've seen, I've been doing my whole S3 Bucket Negligence Awards for a while, but I once actually had a bucket engraved and sent to a company years ago, The Pokémon Company, based upon a story that I read in the Wall Street Journal about how they declined to do business with a prospective vendor because, going through their onboarding process, they noticed, among other things, insufficient security controls around a whole bunch of things, including S3 buckets. And it's, holy crap, a company actually making a meaningful decision
based upon security. And say what you will about The Pokémon Company, their audience is,
at least theoretically, children and occasionally adults who believe they're children. Great.
Not here to shame. But they understand that this is not something you can afford to be lax in,
and they kiboshed the entire deal.
They didn't name the vendor, obviously, but that really took me aback.
It was such a rarity to see that, and it's why I unfortunately haven't had to make a bucket like that since.
I wish I did. I wish more companies did things like this.
But no, it's just a matter of, well, we claimed we did the right thing, and we checked all the boxes and called it good, and oops, these things happen. Yes, but even when it goes that way,
who actually remembers what happened? And did you ever follow up if there were any consequences
to not going, okay, third party, you screwed up, we're out, we're not using you? I can't name a
single time that that happened. Over at the Duckbill Group, we have
large enterprise customers. We have to be respectful and careful with their data. Let's
be very clear here. We have all of their AWS billing data going back for some fixed period
of time. And it worries me what happens if that data gets breached. Now, sure, I've done the
standard PR crisis comms thing. I have statements and actions prepared to go in the event that it happens.
But I'm also taking great pains to make sure it doesn't.
It's the idea of, okay, let's make sure
that we wind up keeping these things
not just distinct from the outside world,
but distinct from individual clients
so we're not mixing and matching any of this stuff.
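A minimal sketch of what that client-level separation can look like, assuming hypothetical bucket and KMS key names rather than anything Duckbill actually runs; the point is one bucket and one encryption key per client, so nothing ever mixes:

    # A sketch only: bucket and key names are hypothetical. One bucket
    # and one KMS key per client means a compromised key exposes exactly
    # one client's data, never the whole set.
    import boto3

    s3 = boto3.client("s3")

    CLIENT_KMS_KEYS = {
        "client-a": "alias/billing-client-a",
        "client-b": "alias/billing-client-b",
    }

    def store_billing_export(client_id: str, object_key: str, body: bytes) -> None:
        s3.put_object(
            Bucket=f"billing-data-{client_id}",  # per-client bucket
            Key=object_key,
            Body=body,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=CLIENT_KMS_KEYS[client_id],  # per-client key
        )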
It's one of those areas where if we wind up having
a breach, it's not because we didn't follow the baseline building blocks of doing this right.
It's something that goes far beyond what we would typically expect to see in an environment like
this. This, of course, sets aside the fact that while a breach like that would be embarrassing,
it isn't actually material to anyone's business. This is not to say that I'm not taking it
seriously because we have contractual provisions
that we will not disclose a lot of this stuff,
but it does not mean the end of someone's business
if this stuff were to go public in the same way that,
for example, back when I worked at Grindr many years ago,
in the event that someone's data had been leaked there,
people could theoretically have been killed.
There's a spectrum of consequences here,
but it still seems like you just do the basic block
and tackling to make sure that this stuff isn't publicly exposed.
Then you start worrying about the more advanced stuff.
But with all these breaches, it seems like people don't even do that.
You have Tesla, right, who is working on going to Mars, sending people there who had their S3 buckets compromised.
At that point, if we've got a technology giant like that getting hit, I think we're safe to
do that whole, hey, assume breach, assume compromise. But when I say that, it drives me up the wall.
How many people just go, okay, well, there's nothing we can do. We should just assume that
there's going to be an issue and just have this mentality where they give up. No, that gives you
a starting point to work from, but that's not the
way it's being seen. One of the things that I've started doing as I built out my new laptop
recently has been, all right, how do I work with this in such a way that I don't have credentials
that are going to grant access to things in any long-lived way ever residing on disk? And so that
meant with AWS, I started using SSO to log into a bunch of things.
It goes through a website
and then it gives a token and the rest
that lasts for 12 hours.
Great.
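A minimal sketch of what that looks like from code, assuming boto3 and a hypothetical SSO profile that was set up beforehand with aws configure sso:

    # A sketch only: "dev-sso" is a hypothetical profile name, and this
    # assumes `aws sso login` has already been run for it.
    import boto3

    # The session pulls short-lived credentials from the SSO token cache;
    # no long-lived access keys ever land on disk.
    session = boto3.Session(profile_name="dev-sso")
    print(session.client("sts").get_caller_identity()["Arn"])

When the token expires, the calls simply fail until you log in again, which is the entire point.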
Okay, SSH keys.
How do I handle that?
Historically, I would have them encrypted with a passphrase,
but then I found for macOS an app called Secretive
that stores it in the secure enclave.
I have to either type in a password
or prove it with a biometric Touch ID nonsense
every time something
tries to access the key. It's slightly annoying when I'm checking out five or six Git repos at
once, but it also means that nothing that I happen to have compromised in the browser or whatnot is
going to be able to just grab the keys, send it off somewhere, and then I'll never realize that
I've been compromised throughout. It's the idea of, at least theoretically, defense in depth,
because it's me. It's my personal electronics in all likelihood that are going to be compromised more so than
it is a locked-down S3 bucket that's managed properly.
And if not me, someone else in my company who has access to these things.
I'm going to give you the best advice you're ever going to get and people are going to
go, duh, but it's happening right now.
Don't get complacent.
Don't get lazy. How many of us are, okay, we're just going to put the key over here for a second,
or we're just going to do this for a minute and then we forget. I recently, you know,
did some research into Emotet, the virus and the group behind it. You know how they got
caught? When they were raided, everything was in plain text. They forgot to use their VPN for a
while. All the files that they'd gotten, no encryption.
These were the people that that's what they were looking for,
but you get lazy.
I've started treating at least the security credential side
of doing weird one-off things,
even one-off bash scripts,
as if they were in production.
I stuff the credentials into something like AWS's parameter store
and then just have a one-line snippet of code that retrieves them at runtime.
Would it be easier to just slap it in there in the code?
Absolutely.
Of course it would.
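The snippet in question really is about this small; a minimal sketch with boto3, assuming a hypothetical parameter name that was stored as a SecureString ahead of time:

    # A sketch only: the parameter name is hypothetical, and this assumes
    # the credential already lives in AWS Systems Manager Parameter Store
    # as a SecureString.
    import boto3

    ssm = boto3.client("ssm")

    # Fetch and decrypt the credential at runtime instead of hardcoding it.
    api_key = ssm.get_parameter(
        Name="/newsletter/prod/api-key",
        WithDecryption=True,
    )["Parameter"]["Value"]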
But I also look at my newsletter production pipeline and I count the number of DynamoDB tables in active use that are labeled test or dev.
And I realize, huh, I'm actually kind of bad at taking something that was in dev and getting it
ready for production, very often I just throw load at it and call it good. So if I never get
complacent around things like that, it's a lot harder for me to get yelled at for checking secrets
into Git, for example. Probably not the first time that you've heard this, but Corey, I'm going to
have to go with you're abnormal because that is not what we're seeing in a day-to-day production
environment. Oh, of course not. And the reason I do this is because I was a grumpy old sysadmin
for so long and have gotten burned in so many weird ways of messing things up. And once it's
in Git, it's eternal. We all know that. And I don't ever want to be in a scenario where I open
source something and surprise, surprise, come to find out in the first two days of doing something, I had something on disk. It's just better not to go down that path,
if at all possible. Being a former sysad as well, I must say what you're able to do within your
environment, your computer, is almost impossible within a corporate environment. Because as a
sysad, I'm looking at what did the devs do again? Oh man, what's the
security team going to do? And you're stuck in the middle trying to figure out how to solve a problem
and then manage it throughout an entire environment. I never really understood intrinsically the value
of things like single sign-on until I wound up starting this company. Because first, it was just
me for a few years. And yeah, I can manage my developer
environments and my AWS environments in such a way that if they get compromised, it's not going
to be through basic, oops, I forgot that's how computers worked type of moment. It's going to
be at least something a little bit more difficult, I would imagine. Because if you, all right, if you
manage to wind up getting my keys and the passphrase, and in some cases, the MFA device, great, good, congratulations, you have done something novel and probably deserve the data.
Whereas as soon as I started bringing other people in who themselves were engineers, I sort of still felt the same way.
Okay, we're all responsible adults here. And by and large, since I wasn't working with junior people, that held true.
And then I started bringing in people who did not come from a deeply
computery technical background,
doing things like finance
and doing things like sales
and doing things like marketing,
all of which are themselves
deeply technical in their own way.
But data privacy and data security
are not really something
that aligns with that.
So it got into the weeds of
how do I make sure that people
are doing responsible things
on their work computers, like turning on disk encryption and forcing a screensaver and a
password and the rest, and forcing them to at least do some responsible things. Like having
1Password for everyone was great until I realized a couple people weren't even using it
for something. And oh dear, it becomes a much more difficult problem at scale when you have to
deal with people who, you know, have actual work to do rather than sitting around trying to defend the technology against any threat they can imagine.
In what you just said, though, there is one flaw: we tend to focus on, like you said, marketing and finance and all of these organizations: don't get phished, don't click on this link.
But we kind of just assume that your security team, your sysads, your developers are going to know best practices.
And then we focus on Windows because that's what the researchers are doing. And then we focus on
Windows because that's what marketing is using. That's what finance is using. So what, there's
no way to compromise a Mac or a Linux box? That's a huge, huge open area
that you're allowing for attackers.
Let's be very clear here.
We don't have any Windows boxes,
of which I'm aware, in the company.
And yeah, the technical folk we have brought in,
most of them I'd worked with,
at least the early folks I'd worked with previously,
and we had a shared understanding of security.
At least we all said the right things.
But yeah, as you said, as you grow, as you scale,
this becomes a big deal. And it's, I also think there's something intrinsically
flawed about a model where the entire instruction set is, it all falls on you to not click the link
or you're going to doom us all. Maybe if someone can click a link and doom us all, the problem
isn't with them. It's the fact that we suck at building secure systems that respect defense in depth. Something that we do wrong, though, is we split
it up. We have endpoint protection when we're talking about our Windows boxes, our Linux boxes,
our Mac boxes, and then we have server-side and cloud security. Those connect. Think about,
there's a piece of malware called EvilGnome. It gets in on a Linux box; you have access to my camera, keylogging, and watching exactly
what I'm doing.
I'm your sysad.
I then cat out your SSH keys.
I go into your box.
They now have the password, but we don't look for that.
We just assume that those two aren't really that connected.
And if we monitor our network and we monitor these devices, we'll be fine.
But we don't connect the two pieces.
One thing that I did at a consulting client back in 2012 or so that really raised eyebrows
whenever I told people about it was that we wound up going to some considerable trouble
building an allow list within Squid, a proxy server that those of us in Linux land are
all too familiar with in some cases. So everything in production could only talk to the outside world
via that proxy. It was not allowed to establish any outbound connections other than through that
proxy. So it was at that point only allowed to talk to specified update servers, specified third
party APIs, and the rest. So at least in theory, I haven't checked
back on them since. I don't imagine that the Log4j nonsense that we've seen recently would
necessarily work there. I mean, sure, you have the arbitrary execution of code. That's bad. But
reaching out to random endpoints on the internet would not have worked from within that environment.
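From the application's side, a minimal sketch of that model in Python, assuming a hypothetical internal proxy host and the requests library; with direct egress blocked at the network layer, only traffic routed through the proxy, and matching its allow list, ever gets out:

    # A sketch only: proxy.internal:3128 is a hypothetical Squid host.
    # Anything that tries to connect directly simply never connects.
    import requests

    PROXIES = {
        "http": "http://proxy.internal:3128",
        "https": "http://proxy.internal:3128",
    }

    # Succeeds only if the destination is on the proxy's allow list.
    resp = requests.get(
        "https://updates.example.com/healthz",
        proxies=PROXIES,
        timeout=10,
    )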
And I liked that model, but oh my god, was it a pain in the butt to set up properly, because it turns out, even in 2012, just to update a Linux system reasonably, there's a fair number
of things it needs to connect to from time to time. Once you have all the things like New Relic
instrumentation in, and the app repository you're talking to, and whatever container source you're
using, and, and, and, then you wind up looking at challenges like, oh, I don't know. If you're
living in an AWS-style environment, like most modern things are, okay, we're only going to
allow it to talk to AWS endpoints. Well, that's kind of the entire internet now. The goalposts
move. The rules change. The game marches on. On an even simpler point, with that, you're
assuming only outbound traffic through those devices. Are they not connected to anything
within the internal network? Is there no way for an attacker to pivot between systems? I pivot over
to that, I get the information, and I make an outbound connection on something that's not
configured that way. We had a lot of stuff talk outbound to the management subnet, which was on its own
VLAN, and that could make established connections into other things, but nothing else was allowed to connect into that. There was some defense in depth and some
thought put into this. I didn't come up with most of this, to be clear. This was smart people
sitting around. And yeah, if I sit here and think about this for a while, of course there's going to
be ways to do it. This was also back in the days of doing it in physical data centers. So you could
have a pretty good idea of what was connected to the outside world just by looking at where the cables went.
But there was also always the question of, does this do what I think it's doing, or what have I overlooked?
Security's job is never done.
Or what was misconfigured in the last update.
It's an assumption that everything goes correctly.
Oh, there is that.
I want to talk, though, about the things I had to worry about back then. It seems like, in many cases, they get kicked upstairs to the cloud providers that we're using these days. But then we see things like Azurescape, where security researchers were able to gain access to the Azure control plane, where customers using Cosmos DB, one of Azure's managed database services, could suddenly have their data
accessed by another customer. And Azure is doing its clam up thing and not talking about this
publicly other than a brief disclosure. But how is this even possible from a security architecture
point of view? It makes me wonder if it hadn't been disclosed publicly by the researcher,
would they have ever said something? Most assuredly not. I've worked with several researchers at Intezer and outside
of Intezer, and the amount of frustration that I see within responsible disclosure, it just blows
my mind. You have somebody threatening to sue the researcher if they bring it out. You have a
company going, okay, well, we've only had six weeks. Give us three more weeks. And next thing
we know, it's six months. There is just this pushback about what we can actually bring out to the public on why they're vulnerable in organizations.
So we're put in this catch-22 as researchers. At what point is my responsibility to the public?
And at what point is my responsibility to protect myself, to keep myself from getting sued
personally, to keep my company from going
down. How can we win when we have small research groups and these massive cloud providers?
This episode is sponsored in part by my friends at Cloud Academy. Something special just for you
folks. If you missed their offer on Black Friday or Cyber Monday or whatever day of the week doing sales it is, good news: they've opened up their Black Friday promotion for a very limited time. Same deal, $100 off a
yearly plan, $249 a year for the highest quality cloud and tech skills content. Nobody else is
going to get this, and you have to act now because they have assured me this is not going to last for much longer.
Go to cloudacademy.com, hit the start free trial button on the homepage and use the promo code cloud when checking out. That's C-L-O-U-D. Like loud, what I am with a C in front of it. They've
got a free trial too, so you'll get seven days to try it out to make sure it really is a good fit.
You've got nothing to lose except your ignorance about cloud. My thanks to Cloud Academy once again for sponsoring my ridiculous nonsense.
For a while, I was relatively confident that we had things like Google's Project Zero,
but then they started softening their disclosure timelines and the rest. And it was, we had the
full disclosure security distribution list that has since shuttered, to my understanding.
Increasingly, it's become risky to yourself to wind up publishing something that has not been patched
and blessed by the providers and the rest. For better or worse, I don't have those problems just
because I'm posting about funny implications of the bill. Yeah, worst case, AWS is temporarily
embarrassed and they can wind up giving credits to people who are affected and be mad at me for a
while, but there's no lasting harm in the way that there is with, well, people
are just able to look at your data for six months and that's our bad oops-a-doozy. Especially given
the assertions that all of these providers have made to governments, to banks, to tax authorities,
to all kinds of environments where security really, really matters.
The last statistic that I heard, and it was earlier this year,
was that it takes over 200 days for a compromise even to be detected.
How long is it going to take for them to backtrack, figure out how it got in?
Have they already patched those systems and that vulnerability is gone,
but they managed to establish persistence somehow?
The layers that go into actually doing your digital forensics only delay
the amount of time that any of that is going to come out or that they have some information to
present to you. We keep going, oh, we found this vulnerability. We're working on patches. We have
it fixed. But does every single vendor already have it patched? Do they know how it actually
interacted within one customer's environment that allowed that breach to happen? It's just ridiculous to think that that's actually occurring
and every company is now protected because that patch came out.
As I take a look at how companies respond to these things, you're right. The number one concern most
of them have is image control, if I'm being honest with you. It's the
reputational management of, we are still good at security, even though we've had a lapse here.
Like every breach notification starts out with, your security is important to us. Well, clearly
not that important, because look at the email you had to send. And it's almost taken on aspects of
a comedy piece, where it drips with corporate insincerity.
On some level, when you tell a company that they have a massive security vulnerability, their first questions are not about the data privacy.
It's about how do we spin this to make ourselves come out of this with the least damage possible.
And I understand it, but it's still crappy.
As tech folk talk to each other, when we have security and developers speaking to each other,
we're a lot more honest than when we're talking to the public, right?
We don't try to hold that PR umbrella over ourselves.
I was recently on a panel speaking with developers,
head SRE folk, who else was there?
I think there was a CISO on there.
And one of the developers just honestly came out and said,
at the end, my job
is to say, how much is that breach going to cost versus how much money will the company lose if I
don't make that deployment? The first thing that you notice there is that whole how much money
you'll lose. The second part is, why is the developer the one looking at the breach?
Yeah, the work flows downward. One of the most depressing aspects
to me of the CISO role is that it seems like the job is to delegate everything, sign binding
contracts in your name, and eventually get fired when there's a breach and your replacement comes
in to sign different papers. All the work gets delegated. None of the responsibility does,
ideally, unless you're SolarWinds
and try to blame it on an intern. I mean, I wish I had an ablative intern or two around here
to wind up casting blame they don't deserve on them, but that's a separate argument.
There is no responsibility taking as I look at this, and that's really a
depressing commentary on the state of the world.
You say there's no responsibility taken, but there is a lot of blame assigned. I love the concept of postmortems as to why that breach happened, but the only people in the room are the security team, as if they had that much control over anything. The security team gets blamed for every single compromise as more and more responsibility, more and more privileges,
and visibility into what's going on is being taken away from them. Those two just don't balance,
and I think it's causing a lot of just complacency and almost giving up from our security teams.
To be clear, when we talk about blameless postmortems for things like this, I agree
with it wholeheartedly within the walls of a company. However, externally, as someone whose data has been taken in some of these breaches,
oh, I absolutely blame the company, as I should. It's especially when it's something like, well,
we have inadvertently leaked your browsing history. Why were you collecting that in the
first place is sort of the next logical question. I don't believe that my ISP needs that to serve
me better, but now you have
Verizon sending out emails recently, as of this recording, saying that unless anyone opts out,
all the lines in our cell account are going to wind up being data mined effectively so they can
better target advertisements and understand us better. No, I absolutely do not want you to be
doing that on my phone. Are you out of your mind? There are few things in this world that we
consider more private than our browsing histories. We ask the internet things we wouldn't
ask our doctors in many cases. And that is no small thing as far as the level of trust that
we place in our ISPs that they are now apparently playing fast and loose with.
I'm going to take this a step back because you do a lot of work with cloud providers. Do you think that we actually know what
information is being collected about our companies and what we have configured internally and
externally by the cloud provider? That's a good question. I've seen this before where people will
give me the PDF exploded view of last month's AWS bill. And they'll laugh because what information
could I possibly get out of that? It just shows spend on services. But I could use that to start
sketching out a pretty good idea of what their architecture looks like from that alone. There's
an awful lot of value in the metadata. Now, I want to be clear. I do not believe on any provider,
except possibly Azure, because who knows at this point, that if you encrypt the data
using their encryption facilities with AWS,
I know it's KMS, for example,
I do not believe that they can arbitrarily decrypt it
and then scan it for whatever it is they're looking for.
I do not believe that they are doing that
because as soon as something like that comes out,
it puts the lie to a whole bunch
of different audit attestations that they've made
and brings the entire empire crumbling down.
I don't think they're going to get
any useful data from that.
However, if I'm trying to build something
like Amazon Prime Video
and I can just look at the bill
from the Netflix account,
well, that tells me an awful lot
about things that they might be doing internally.
It's highly suggestive.
Could that be used to give them an unfair advantage?
Absolutely. I had a
tweet a while back that I don't believe that Google's Gmail division is scanning inboxes for
things that look like AWS invoices to target their sales teams, but I sure would feel better if they
would assure me that that was the case. No one was able to ever assure me of that. I don't mean to be sitting here slinging mud, but at the same time, it's a given that when you
don't explicitly say you're not doing something as a company, there's a great chance you might
be doing it. That's the sort of stuff that worries me. It's a bunch of unfair, dirty trick style
stuff. Maybe I'm just cynical or maybe I just focus on these topics too much. But after giving a presentation on cloud security,
I had two groups,
both from three-letter government agencies
come up to me and say,
how do I have these conversations with the cloud provider?
In the conversation, they say,
we've contacted them several times.
We want to look at this data.
We want to see what they've collected.
And we get ghosted or we end up talking to attorneys.
And despite over a year
of communication, we've yet to be able to sit down with them. Now, that's an interesting story.
I would love to have someone come to me with that problem. I don't know how I would solve that yet,
but I have a couple ideas. Maybe they're listening and they'll reach out to you, but...
You know, if you're having that problem of trying to understand what your cloud provider is doing,
please talk to me. I would love to go a little bit more in depth on that conversation under an NDA or six.
I was at a loss because the presentation that I was giving
was literally about the compromise
of managed service providers,
whether that be an outsourced security group,
whether that be your cloud provider.
We're seeing attack groups going after these.
Think about how juicy they are.
Why do I need to compromise your account or your company
if I can compromise that managed service provider
and have access to 15 companies?
Oh, yeah.
Why would someone spend time trying to break into my NetApp
when they could break into S3
and get access to everyone's data, theoretically?
It's a centralization of security model risk.
Yeah, it seems to so many people like just this crazy idea.
It's something that's happening now. And with the progress that attackers are making,
criminal groups are making, this is going to happen pretty soon.
Sometimes when I'm out for a meal with someone who works with AWS and the security org,
there'll be an appetizer where, oh, there's two of you. I'm going to bring three of them because I guess waitstaff love to watch people fight like that. And whenever I want
the third one, all I have to do is say, can you imagine a day in which, just imagine hypothetically,
IAM failed open and allowed every request to go through regardless of everything else.
Suddenly they look sick, lose their appetite, and I get the third one.
But it's at least reassuring to know
that even the idea of that is that disgusting to them. And it's not the, oh, that happened three
weeks ago, but don't tell anyone. There's none of that going on. I do believe that the people
working on these systems at the cloud providers are doing amazingly good work. I believe they
are doing far better than I would be able to do in trying to manage all those things myself
by a landslide. But nothing is ever perfect. And it makes me wonder that if and when there are vulnerabilities,
as we've already seen clearly with Azure, how forthcoming and transparent would they really be?
And that's the thing that keeps me up at night. I keep going back during this talk, but just the
interaction with the people there and the crowd was just
so eye-opening. And I don't want to be that person, but I keep getting to these moments of,
I told you so. And I'm not going to go into SolarWinds. Lord, that has been covered.
But shortly after that, we saw the same group going through and trying to, I'm not sure if
they successfully did it, but they were targeting networks for cloud computing providers. How many companies focused outside of that compromise at
that moment to see what it was going to build out to? That's the terrifying thing, is if you can
compromise a cloud service provider at this point, it's, well, you could sell that exploit on the
dark web to someone. Yeah, if you can get remote code execution or you're able to look into any random cloud account, there's almost no amount of money
that is enough for something like that. You could think of the insider trading potential of just
compromising Slack, a single company, but everyone talks about everything there and Slack retains
data in perpetuity. Think of the sheer M&A discussions you could come up
with. Think of what you could figure out with a sort of a God's eye view of something like that.
And then realize that they run on AWS, as do an awful lot of other companies.
The damage would be incalculable. I am not an attacker, nor do I play one on TV.
But let's just kind of build this out. If I was to compromise a cloud provider,
the first thing I would do is lay low. I don't want them to know that I'm there. The next thing I would do is start getting into company environments and scanning them. That way I can see where their vulnerabilities are. I can compromise them that way and not give out the fact that I came in through that cloud provider. Look, I'm just me sitting here. I'm not a nation state. I'm not somebody
who is paid to do this from nine to five. I can only imagine what they would come up with.
It really feels like this is no longer a concern just for those folks who manage to have gotten
on the bad side of some country's secret service. It seems like APTs, advanced persistent threats,
are now theoretically something almost anyone has to worry about.
Let me just set the record straight right now on what I think we need to move away from. The whole
APTs are nation states. Not anymore. An APT is anyone who has advanced tactics, anyone who is
going to be persistent, because you know what? It's not that they're targeting you, it's that
they know that they eventually can get in. And of course, they're a threat to you. When I was researching my work into advanced persistent threats,
we had a group named TeamTNT that said,
okay, you know what, we're done.
So I contacted them and I said,
here's what I'm presenting on you.
Would you mind reviewing it and tell me if I'm right?
They came back and said, you know what, we're not an APT
because we target open Docker API ports.
That's how easy it is.
So these big attack groups are not even having
to rely on advanced methods anymore. The line of what an APT is, is just completely blurring.
That's the scariest part to me, is we take a look at this across the board, and the things I have to
worry about are no longer things that are solely within my arena of control. They used to be back
when it was in my data center,
but now increasingly I have to extend trust to a whole bunch of different places because we're not building anything ourselves. We have all kinds of third-party dependencies and we have to trust that
they're doing the right things as well, and make sure that they're bounded so that the
monitoring agent that I'm using can't compromise my entire environment. It's really a good time
to be professionally paranoid.
And who's actually responsible for all this? Did you know that 70% of the vulnerabilities on our systems right now are on the application level, yet security teams have to protect them?
That doesn't make sense to me at all. And yet developers can pull in any third-party repository
that they need in order to make that application work because, hey, we're on a deadline.
That function needs to come out.
Elle, I want to thank you for taking the time to speak with me.
If people want to learn more about how you see the world and what kind of security research you're advocating for, where can they find you?
I live on Twitter to the point where I'm almost embarrassed to say, but you can find me at Ell underscore o underscore Punk.
Excellent.
And we will wind up putting a link to that in the show notes, as we always do.
Thanks so much again for your time.
I appreciate it.
Always.
I'd be happy to come again.
Ell Marquez, security research advocate at Intezer.
I'm cloud economist Corey Quinn, and this is
Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your
podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on
your podcast platform of choice, along with an angry comment that ends in a link that begs me
to click it that somehow looks simultaneously suspicious and frightening.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS. We tailor recommendations to your
business and we get to the point. Visit duckbillgroup.com to get started.
This has been a HumblePod production. Stay humble.