Screaming in the Cloud - Find, Fix and Eliminate Cloud Vulnerabilities with Shir Tamari and Company
Episode Date: January 19, 2022

About Shir
Shir Tamari is the Head of Research at Wiz, the cloud security company. He is an experienced security and technology researcher specializing in vulnerability research and practical hacking. In the past, he served as a consultant to a variety of security companies in the fields of research, development, and product.

About Sagi
Sagi Tzadik is a security researcher on the Wiz Research Team. Sagi specializes in research and exploitation of web application vulnerabilities, as well as network security and protocols. He is also a Game-Hacking and Reverse-Engineering enthusiast.

About Nir
Nir Ohfeld is a security researcher from Israel. Nir currently does cloud-related security research at Wiz. Nir specializes in the exploitation of web applications, application security, and in finding vulnerabilities in complex high-level systems.

Links:
Wiz: https://www.wiz.io
Cloud CVE Slack channel: https://cloud-cve-db.slack.com/join/shared_invite/zt-y38smqmo-V~d4hEr_stQErVCNx1OkMA
Wiz Blog: https://wiz.io/blog
Twitter: https://twitter.com/wiz_io
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at Redis,
the company behind the incredibly popular open-source database
that is not the BIND DNS server.
If you're tired of managing open-source Redis on your own,
or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities, Redis Enterprise.
To learn more and deploy not only a cache, but a single operational data platform for one Redis experience, visit redis.com slash hero. That's R-E-D-I-S dot com
slash hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.
This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before,
but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention,
but they're using it to help developers be more efficient by reducing repetitive tasks.
So the idea being that you can run stateless things without having to worry about scaling,
placement, et cetera, and the rest.
They claim significant cost savings, and they're able to wind up
taking what you're running as it is in AWS
with no changes
and run it inside of their data centers
that span multiple regions.
I'm somewhat skeptical,
but their customers seem to really like them.
So that's one of those areas
where I really have a hard time
being too snarky about it
because when you solve a customer's problem
and they get out there in public
and say, we're solving a problem, it's very hard to snark about that. Multis Medical,
Construx.ai, and Stacks have seen significant results by using them, and it's worth exploring.
So if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch,
consider checking them out. Visit risingcloud.com slash benefits. That's risingcloud.com
slash benefits. And be sure to tell them that I sent you, because watching people wince when
you mention my name is one of the guilty pleasures of listening to this podcast.
Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the joyful parts of working with
cloud computing is that you get to put a whole lot of things you don't want to deal with onto the shoulders of the cloud provider you're doing business with, or cloud providers, as the case may be if you've fallen down the multi-cloud well.
One of those things is often significant aspects of security, and that's great, right until it isn't. Today, I'm joined by not one guest, but rather three coming to us from Wiz,
which I originally started off believing was, oh, it's a small cybersecurity research group,
but they're far more than that. Thank you for joining me. Could you please introduce yourselves?
Yes. Thank you, Corey. My name is Shir Tamari. I lead the security research team at Wiz.
I've been working in the company for the past year.
I'm working with these two nice teammates.
Hi, my name is Nir Ohfeld.
I'm a security researcher at the Wiz research team.
I've also been working for the Wiz research team for the last year.
I'm Sagi, Sagi Tzadik.
I also worked for the Wiz research team for the last six months.
I want to thank you for joining me.
You folks really burst onto the scene earlier this year when I suddenly started seeing your
name come up an awful lot.
And it brought me back to my childhood where there was an electronics store called Nobody
Beats the Wiz.
It was more or less a version of Fry's on a different coast.
And they went out of business and, oh, good, we're going back in time.
And suddenly it felt like I was going back in time in a different light
because you had a number of high-profile vulnerabilities
that you had discovered specifically in the realm of Microsoft Azure.
The two that leap to mind the most readily for me are ChaosDB
and the OMIGOD exploits.
There was a third as well, but why don't you tell me, in your own words, what it is that you
discovered and how that played out? Before we found the vulnerabilities in Microsoft Azure,
we did report multiple vulnerabilities also in GCP and AWS. We had multiple vulnerabilities in AWS
that also allowed cross-account access,
cross-account access to other tenants.
It was just much less severe
than the ChaosDB vulnerability
that we will discuss more later.
And both were presented at Black Hat in Vegas;
that was the most recent of those.
So we do a lot of research.
You mentioned that we have a third one.
Which one did you refer to?
That's a good question because you had the, I want to say it was called Azurescape, and you're
doing a fantastic job with branding a number of your different vulnerabilities. But there's also,
once you started reporting this, a lot of other research started coming out as well from other
folks. And I confess a lot of it has sort of all flowed together and been very hard to disambiguate.
Is this a systemic problem?
Is this effectively a whole bunch of people piling on
now that their attention is being drawn somewhere
or something else?
Because you've come out with an awful lot of research
in a short period of time.
Yeah, we had a lot of good research in the past year.
It's very important to mention that Azurescape
was actually found by a very good researcher at Palo Alto Networks.
And the one is named?
I can't recall right now what it is.
Yeah, they came out of Unit 42, as I recall,
their cybersecurity division.
Every tech company out there seems to have
some sort of security research division these days.
What I think is sort of interesting
is that to my understanding,
you were founded first and foremost as a security company.
You're not doing this as an ancillary
to selling something else like a firewall
or effectively you're an ad tech company like Google
where you're launching Project Zero.
You are first and foremost aimed at this type of problem.
Yes.
Wiz is not just a small research company.
It's actually a pretty big company.
We have 200 employees.
And the Wiz product is
a cloud security suite that provides
a lot of scanning capabilities
in order to find risks in cloud
environments. And the research
team is a very small group.
We are a group of four researchers.
We have multiple responsibilities.
Our first responsibility
is to find risks
in cloud environments.
It could be misconfigurations.
It could be vulnerabilities in libraries, in software.
And we add those findings and the patterns we discover to the product in order to protect our customers and to alert them to new risks.
Our second responsibility is also to do community research, where we research vulnerabilities in public products and cloud providers.
And we share our findings with the cloud providers and also with the community to make the cloud more secure.
I can't shake the feeling that if there weren't folks doing this sort of research and shining a light on what it is that the cloud providers are doing, if they were to discover these things at all, they would very quietly effectively fix it in the background and
never breathe a word of it in public. I like the approach that you're taking as far as dragging it,
kicking and screaming into the daylight. But I also have to imagine that probably doesn't win
you a whole lot of friends at the company that you're focusing on
at any given point in time. Because whenever you talk to a company about a security issue,
it seems like the first thing they're concerned about is, okay, how do we wind up spinning this
or making sure that we minimize the reputational damage? And then as a secondary reaction of, oh,
how do we protect our customers? But mostly how do we avoid looking bad as a result? And I feel like
that's an artifact of corporate culture these days, but it feels like the relationship has got to be somewhat
interesting to navigate from your perspective. So once we found the vulnerability and we
disclosed it to the vendor, okay, first I would mention that most cloud providers have
a bug bounty program where they encourage researchers to find vulnerabilities and to
discover new security threats.
And all of them have a public disclosure program where researchers are welcome and get safe
harbor, you know, when they disclose vulnerabilities.
And I think it's like a common interest, for customers, for researchers, and for the
cloud providers, to know about those vulnerabilities and to mitigate them.
And we do believe that sometimes cloud providers do resolve and mitigate vulnerabilities behind the scenes.
We don't know for sure, but just from the
vulnerabilities that we find, we assume that there are many more of them that we never heard
about.
And this is something that we believe that needs to be changed in the industry.
Cloud providers should be more transparent.
They should share more information
about the vulnerabilities they resolve,
definitely when customer data was accessible
or was at risk, or at possible risk.
And this is actually,
it's something that we're actually trying
to change in the industry.
We have a community initiative.
We opened a Slack channel called Cloud CVE, and we try to invite as many people as we
can that are concerned about cloud vulnerabilities in order to make a change in the industry
and to assist cloud providers or to convince cloud providers
to be more transparent,
to enumerate cloud vulnerabilities
so they have an identifier,
just like a CVE,
and to make the cloud more protected
and more transparent to customers.
The thing that really took me aback
by so much of what you found
is that we've become relatively accustomed to a
few patterns over the past 15 to 20 years. For example, we're used to, oh, this piece of software
you run on your desktop has a horrible flaw. Great. Or this thing you run in your data center,
same story, patch, patch, patch, patch, patch. That's great. But there was always the sense that
these were the sorts of things that were sort of normal, but the cloud providers were on top of things where they were effectively living up to their side of the shared responsibility bargain.
And that whenever you wound up getting breached for whatever reason, like in the AWS world where, oh, you wound up losing a bunch of customer data because you had an open S3 bucket. Well, yeah, that's not really something you can hang super effectively around the neck
of the cloud provider, given that you're the one that misconfigured that.
But what was so striking about what you found with both of the vulnerabilities that we're
talking about today, the customer could have done everything absolutely correctly from
the beginning and still had their data
exposed. And that feels like it's something relatively new in the world of cloud service
providers. Is this something that's been going on for a while and we're just now shining a light on
it? Have I just missed a bunch of interesting news stories where the clouds have, oh yeah,
by the way, people, we periodically go in and drag people out of our cloud control plane because
oops-a-doozy, someone got in there again with the squirrels.
Or is this something that is new?
So we do see in history other cases where vulnerability researchers disclose vulnerabilities
in the cloud infrastructure itself.
There were only a few.
And usually, the research was conducted by independent researchers, and I don't think it had
as much impact as ChaosDB, which allowed cross-tenant access to databases of other customers,
which was a huge case. And if it isn't a big story, most people will not hear about it. And
also, independent researchers usually don't have the backing that we
have here at Wiz. We have funding, we have a marketing division that helps us to get coverage
with reporters. If it's a big story, we make sure that other people will
hear about it. And I believe that in most bug bounty programs where independent researchers
find vulnerabilities, they usually care more about the bounty than about the aftereffect of disclosing the vulnerability and sharing
it with the community.
Independent researchers usually share their findings with the research community.
And the research community is relatively small compared to the IT community.
So it is new, but it's not that new.
There were some instances back in history
of similar vulnerabilities. So I think that one of the points here is that everyone makes mistakes.
You can find bugs, which affect mostly, as you mentioned previously, this software that you
installed on your desktop has bugs and you need to patch it. But in the case of cloud providers, when they make mistakes,
when they introduce bugs to the service,
it affects all of their customers.
And this is something that we should think about.
So mistakes that are being made by cloud providers
have a lot of impact regarding their customers.
Yeah, it's not a story of you misconfigured your company's SAN,
so you're the one that's responsible for a data breach. It's suddenly you're misconfiguring everyone's SAN simultaneously. It's the sheer scale and scope of what it is that they've done. The research that has affected us primarily, since that is admittedly where I tend to focus most of my time and energy, has been privilege escalation style stuff where, okay, if you assign some users at your company
or wherever access to this managed IAM policy, well, they'll suddenly have access to things
that go beyond the scope of that. And that's not good. Let's be very clear on that. But it's a bit
different between that and, oh, by the way, suddenly someone at another company that has no relationship established with you at all can suddenly rummage through your data that you're storing in Cosmos DB, their managed database offering. That is a problem not just for me, but for a number of folks I've spoken to in financial services, in government, in a bunch of
environments where data privacy is not optional in the same way that it is when you're running
a social media for pets app. Yeah, but the thing is that until the publication of ChaosDB,
no one had ever heard about cross-tenant data tampering in any cloud provider.
Meaning, maybe in
six months, you will see similar
vulnerabilities in other cloud providers that maybe
other security research groups
find. So yeah, so Azure
was maybe the first, but
we don't think they will be the last.
Yes, and also when
we do the community research, it's very important to us
to take big targets. We enjoy
the research. We want the research to be challenging, and we want to achieve something that is new and
great. So we always set very big targets, to actually find a vulnerability in the infrastructure
of a cloud provider. It was very challenging for us. We didn't come to ChaosDB that way. We actually
found it by mistake. But now we actively think that this is our next goal, to find
vulnerabilities in the infrastructure
and not just vulnerabilities that
affect only the account itself, like privilege escalation
or badly scoped policies
that affect only one account.
That seems to be
the transformative angle that you don't
see nearly as much in existing
studies around vulnerabilities in this space. It's always the, oh no, we could have gotten breached by those
people across the hallway from us in our company, as opposed to folks on the other side of the
planet. And that is, I guess, sort of the scary thing. What has also been interesting to me,
and you obviously have more experience with this than I do,
but I have a hard time envisioning, for example, AWS having a vulnerability like this and not immediately swinging into disaster firefighting mode, sending their security
execs on a six-month speaking tour to explain what happened, how it got there, all of the
steps that they're taking to remediate this. But Azure published a blog post explaining this in relatively minor detail.
Here are the mitigations you need to take.
And as far as I can tell, then they sort of washed their hands of the whole thing
and have enthusiastically begun saying absolutely nothing since.
And that, I have learned, is sort of fairly typical for Microsoft
and has been for a while, where they just don't talk about these things when it arises.
Does that match your experience?
Is this something that you find that is common
when a large company winds up being effectively embarrassed
about their security architecture?
Or is this something that is unique to how Microsoft tends to approach these things?
I would say in general that we
really like the Microsoft MSRC team.
That's the group in Microsoft that is responsible
for handling vulnerabilities;
I think it's like the security division
inside Microsoft.
So we have a really good relationship
and we had a really good time working with them.
They're really professionals.
They take our findings very seriously.
I can tell that in the ChaosDB incident,
they didn't plan to publish a blog post.
And they did that after the story got a lot of attention.
So I'm not in their PR team,
and I have no idea how they decide stuff
and what is their strategy.
But as I mentioned earlier,
we believe that there are many more cloud vulnerabilities that we never heard of, and that should change.
They should publish more.
It's also worth mentioning that Microsoft acted really quick on this vulnerability and took it very seriously.
They issued a fix in less than 48 hours.
They were very transparent in the entire procedure.
We had multiple teams meeting with them.
The entire experience was
pretty positive with each of the vulnerabilities we've ever reported to Microsoft.
So it's really nice working with the guys that are responsible for security. But regarding PR,
I agree that they should have posted more information regarding this incident.
The thing that I found interesting about this, and I've seen aspects of it before, but never this strongly, is I was watching for, I guess what I would call just general shittiness, for lack of a better term, from the other providers doing a happy dance of, ha ha, we're better than you are. And I didn't see that, because when I started talking to people in some depth at other companies, the
immediate response, not just AWS, to be clear, has been that, no, no, you have to understand this is
not good for anyone because this effectively winds up giving fuel to the slow burning fire of folks
who are pulling the, see, I told you the cloud wasn't secure. And now the enterprise groundhog
sees its shadow and we get six more years of building data
centers instead of going to the cloud. So there's no one in the cloud space who's happy with this
kind of revelation and this type of vulnerability. My question for you is, given that you are
security researchers, which means you are generally cynical and pessimistic about almost everything
technological, if you're like most of the folks in that space that I've spent time with,
is going with cloud the wrong answer?
Should people be building their own data centers out?
Should they continue to be going on this full cloud direction?
I mean, what can they do if everything's on fire and terrible all the time?
So I think that there is a trade-off when you embrace the cloud.
On one hand, you get the fastest deployment times
and a good scalability regarding your infrastructure.
But on the other hand,
when there is a security vulnerability
in the cloud provider,
you are immediately affected.
But it is worth mentioning that
the security teams for the cloud providers
are doing an extremely good job.
Most likely, they are going to patch the vulnerability
faster than it would have been patched in an on-premise environment.
And it's good that you have them working for you.
And once the vulnerability is mitigated, depends on the vulnerability, but in the case of ChaosDB,
when the vulnerability was mitigated on Microsoft's end, it was mitigated completely.
No one else could have exploited it
after they mitigated it once.
Yes, it's also good to mention
that the cloud provides organizations and companies
a lot of security features.
And I don't want to say security features.
I would say it provides a lot of tooling
that helps security.
The option to have one interface,
like one API to control all of my devices, to get visibility to all of my servers, to enforce policies very easily.
It's much more secure than on-premise environments where there is usually a big mess, a lot of vendors.
Because on-prem, the power was with the user.
So the user had a lot of options, and usually used many types
of software, many types of hardware.
It's very hard to
mitigate the vulnerability, the software vulnerability
in on-prem environments. It's very hard to get
the visibility, and the cloud provides
a lot of security
in good aspects, and
in my opinion, moving to the cloud
for most organizations
would be a more secure choice
than remaining on-premise
unless you have a very,
very small on-prem environment.
This episode is sponsored
by our friends at Oracle HeatWave,
a new high-performance
query accelerator
for the Oracle MySQL database service,
although I insist
on calling it MySquirrel.
While MySquirrel has long been the world's most popular open-source database,
shifting from transacting to analytics required way too much overhead and, you know, work.
With HeatWave, you can run your OLAP and OLTP, don't ask me to pronounce those acronyms ever
again, workloads directly from your MySquirrel database and eliminate the time-consuming data movement
and integration work,
while also performing 1,100 times faster
than Amazon Aurora
and two and a half times faster
than Amazon Redshift at a third the cost.
My thanks again to Oracle Cloud
for sponsoring this ridiculous nonsense.
The challenge I keep running into is that, and this is sort of probably the worst of all possible
reasons to go with cloud, but let's face it. When US East One recently took an outage and
basically broke a decent swath of the internet, a lot of companies were impacted, but they didn't
see their names in the headlines. It was all about Amazon's outage.
There's a certain value when a cloud provider takes an outage or a security breach, that
the headlines screaming about it are about the provider, not about you and your company
as a customer of that provider.
Is that something that you're seeing manifest across the industry?
Is that an unhealthy way to think about it?
Because it feels almost like it's cheating in a way.
It's, yeah, we had a security problem, but so did the entire internet.
So it's okay.
So I think that if there were evidence that these kinds of vulnerabilities
were exploited prior to our disclosure,
then you would see companies named in the headlines.
But in the case of us reporting the vulnerabilities prior to anyone exploiting them,
it results in no company showing up in the headlines.
I think it's a slightly different situation than an outage.
Yeah, but also when one big provider has an outage or a breach,
usually the customers will think, it's out of my responsibility.
I mean, it's bad.
My data has been leaked, but what can I do?
I think it's very easy for most people to forgive companies that were breached.
I mean, you know what?
It's just not my area.
So maybe I will not answer that.
No, no, it's very fair. The challenge I have as a customer of all of these providers, to be honest,
is that a lot of the breach investigations are worded as, we have seen
no evidence that this has been exploited. Okay, but that simultaneously covers the two very different use cases of, we have
pored through our exhaustive audit logs and validated that no one has done this particular
thing in this particular way, but it also covers the use case of, hey, we learned we should probably
be logging things, but we have no evidence that anything was exploited. Having worked with these providers at scale,
my gut impression is that they do, in fact,
have fairly detailed logs of who's doing what and where.
Would you agree with that assessment?
Or do you find that you tend to encounter logging
and analysis gaps as you find these exploits?
We don't really know.
In the ChaosDB scenario, for example,
we got access to a Jupyter notebook.
And from the Jupyter notebook,
we continued to other internal services.
And nobody stopped us.
We expected an email like,
What are you doing over there, buddy?
Yeah.
Please stop doing that.
We're investigating it.
And we didn't get
any. And also, we don't really know if they monitor it or not. I can tell from my technical background
that logging so many environments is hard, and when you do decide to log all these events,
you need to decide what to look for. For example, if I have a database, a managed database,
do I log all the queries that customers run?
It's too much.
If I have an HTTP application, a managed HTTP application,
do I save all the access logs, like all the requests?
And if so, what would be the retention time?
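To make the scale problem being described here concrete, here is a rough back-of-envelope sketch in Python. Every number in it is invented purely for illustration, since the episode gives no real figures, but it shows why "log every query, for every tenant, forever" stops being practical very quickly for a provider:

```python
# Illustrative only: invented numbers showing how fast "log every query" grows
# for a hypothetical managed-database fleet. None of these figures come from
# the episode or from any real cloud provider.

AVG_QUERY_LOG_BYTES = 500           # assumed size of one logged query record
QUERIES_PER_SEC_PER_TENANT = 200    # assumed average query rate per tenant
TENANTS = 100_000                   # assumed number of tenant databases
RETENTION_DAYS = 90                 # assumed retention window

bytes_per_day = (AVG_QUERY_LOG_BYTES * QUERIES_PER_SEC_PER_TENANT
                 * TENANTS * 86_400)
retained_bytes = bytes_per_day * RETENTION_DAYS

print(f"~{bytes_per_day / 1e12:.0f} TB of query logs per day")
print(f"~{retained_bytes / 1e15:.0f} PB retained over {RETENTION_DAYS} days")
```

Even with these made-up but fairly conservative assumptions, that lands in the hundreds of terabytes per day, which is one plausible reason providers end up sampling, filtering, or keeping only control-plane events rather than every data-plane request.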
We believe that it's very challenging on the cloud provider side,
but it's just an assumption. And during the disclosure with Microsoft, they did not disclose
any details they had about logging. They did mention that they are reviewing the logs,
and they are searching to see if someone exploited this vulnerability before we disclosed it. Maybe
someone discovered it before we did, but they told us
they didn't find anything. One last area I'd love to discuss with you before we call it an episode
is that it's easy to view Wiz through the lens of, oh, we just go out and find vulnerabilities
here and there and we make companies feel embarrassed, rightfully so, for the things that
they do. But a little digging shows that you've been around for a little over a year as a publicly known entity. And during that time,
you've raised $600 million in funding, which is basically like, what in the world is your pitch
deck where you show up to investors and your slides are just like copies of their emails and
you read them to them? I mean, on some level, it seems like that is an astounding amount of money to raise in a short period of time.
But I've also done a little bit of digging.
And to be clear, I do not believe that you have an extortion-based business model, which is a good thing.
You're building something very interesting that does in-depth analysis of cloud workloads, and I think it's got an awful lot of promise. How does the vulnerability research that you do tie into that larger platform
other than, let's be honest, some spectacularly effective marketing?
Specifically in the ChaosDB vulnerability, we were actually not looking for a vulnerability
in the cloud service providers. We were originally looking for common misconfigurations that our
customers can make when they set up their Cosmos DB accounts,
so that our product will be able to alert our customers regarding such misconfigurations.
And then we went to the Azure portal and started to enable all of the features that Cosmos DB has to offer.
And when we enabled enough features, we noticed some features that could be vulnerable,
and we started digging
into it and we ended up finding ChaosDB. But our original work was to try and find
misconfigurations that our customers can make in order to protect them and not to find the
vulnerability in the CSP. This was just like a byproduct of this research. Yes. As we mentioned earlier,
our main responsibility is
to add real security
risk content to the product
to help customers to find
new security risks in their environment.
As you mentioned, like the
escalation possibilities within
cloud accounts and
policies
and many other security risks that are in the cloud area.
And also, we are a very small team inside a very big company. So most of the company,
they're doing heavy product work, they talk with customers, they understand the risks,
they understand the market and what it needs for tomorrow, and maybe we are well-known for our
vulnerabilities, but it's just a very small
part of the company.
On some level, it says wonderful things about your product and also
terrifying things from different perspectives, of, oh yeah, we found one of the worst cloud breaches
in years by accident, as opposed to actively going in trying to find the thing that has
basically put you on the global map
of awareness around these things. Because there are a lot of security companies out there doing
different things. In fact, go to RSA and you'll see basically 12 companies that are just repeated over and
over and over with different names and different brandings, and they're all selling some kind of
firewall. This is something actually different, because everyone can paint beautiful pictures with slides and whatnot and the corporate buzzwords.
You're one of those companies that actually did something meaningful.
It felt almost like a proof of concept.
On some level, the fact that you weren't actively looking for it is kind of an amazing testament for the product itself.
Yeah, we actually used the product at the beginning in order to overview our own environment and see what the most common services we use are.
Usually we combine this information with our product managers in order to understand what customers use and what products and services we need to research. The reason we got to Cosmos DB was that we found that a lot of our Azure customers are using Cosmos DB in their production environments.
And we wanted to add mitigations for common misconfigurations to our product in order to protect our customers.
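For readers who want a sense of what that kind of customer-side misconfiguration check can look like, here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-cosmosdb packages). This is not Wiz's product code and not the check the team actually built; the attribute names reflect my reading of the SDK's database-account model, so treat the details as assumptions. The idea is simply to flag Cosmos DB accounts that accept traffic from any network:

```python
# Hypothetical sketch, not Wiz's product: flag Cosmos DB accounts reachable
# from any network. Requires the azure-identity and azure-mgmt-cosmosdb
# packages and Reader access on the subscription.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient


def find_exposed_accounts(subscription_id: str) -> list[str]:
    client = CosmosDBManagementClient(DefaultAzureCredential(), subscription_id)
    findings = []
    for account in client.database_accounts.list():
        open_to_internet = (account.public_network_access or "Enabled") == "Enabled"
        no_ip_filter = not (account.ip_rules or [])
        no_vnet_filter = not account.is_virtual_network_filter_enabled
        if open_to_internet and no_ip_filter and no_vnet_filter:
            findings.append(account.name)
    return findings


if __name__ == "__main__":
    for name in find_exposed_accounts("<your-subscription-id>"):
        print(f"Cosmos DB account '{name}' accepts traffic from any network")
```

A check like this only ever catches customer-side mistakes; as the story above makes clear, it was poking at those account features by hand, not any scanner, that surfaced the provider-side flaw.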
Yeah, the same goes for other research, like OMIGOD, where we've seen that there is an excessive amount of OMI installations
in Azure environments, and it caught our attention, and then we found this vulnerability.
It's mostly like popularity-guided research.
Yeah.
And it's also important to mention that maybe we find vulnerabilities by accident, but the
three of us, we have been doing vulnerability research for the
past 10 years and even more.
So we are very professional. This is
what we do, and this is what we like to do.
And we have become very attuned to this kind of
information.
It really is neat to see,
just because every other security
tool that I've looked at in recent memory
tells you the same stuff.
It's the same problem you see in
the AWS billing space that I live in: everyone says, oh,
we can find these inactive instances that could be right-sized. Great,
because everyone's dealing with the same data.
The security stuff is no different. Hey, this S3 bucket is open. Yes,
it's a public web server. Please stop waking me up at two in the morning about
it. It's there by design. It goes back and forth with the same stuff just presented differently. This is one of
the first truly novel things I've seen in ages. If nothing else, you convinced me to kick the
tires on it and see what kind of horrifying things I can learn about my own environments with it.
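As a small aside on the "please stop paging me about the bucket that is public on purpose" complaint, here is a hedged sketch of one way a scanner could suppress that noise using standard boto3 S3 calls. The "public-by-design" tag convention is invented for the example; nothing in the episode, or in any particular product, prescribes it:

```python
# Illustrative sketch: skip "open S3 bucket" findings for buckets deliberately
# marked public via a hypothetical "public-by-design" tag.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def is_public_by_design(bucket: str) -> bool:
    try:
        tags = s3.get_bucket_tagging(Bucket=bucket)["TagSet"]
    except ClientError:
        return False  # no tags at all, so not explicitly marked as intentional
    return any(t["Key"] == "public-by-design" and t["Value"] == "true" for t in tags)


def buckets_worth_waking_someone_for():
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(cfg.values())
        except ClientError:
            fully_blocked = False  # no public access block configured at all
        if not fully_blocked and not is_public_by_design(name):
            yield name


for name in buckets_worth_waking_someone_for():
    print(f"S3 bucket '{name}' may be publicly reachable and is not marked as intentional")
```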
Yeah, you should. Let's look at it.
I want to thank you so much for taking the time
to speak with me today. If people want to learn
more about the research you're up to and
the things that you find interesting, where can they find
you all? Most of our publications,
I mean, all of our publications are under
the Wiz blog, which is
wiz.io/blog. People
can read all of our research.
Just today, we are announcing a new one.
So feel free to go and read there.
And also, feel free to approach us on Twitter.
The three of us, we have a Twitter account.
We are all open for direct messages.
Just send us a message.
And we will certainly put links to all of that in the show notes.
Shir, Sagi, Nir, thank you so much for joining me today.
I really appreciate your time.
Thank you.
Thank you very much.
It was really fun.
Yeah.
This has been Screaming in the Cloud.
I'm cloud economist Corey Quinn, and thank you for listening. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment from someone else's account.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production. Stay humble.