Screaming in the Cloud - S3's Hidden Features and Quirks with Daniel Grzelak
Episode Date: June 18, 2024

Corey Quinn and Daniel Grzelak take you on a journey through the wild and wonderful world of Amazon S3 in this episode. They explore the fun quirks and hidden surprises of S3, like the mysterious "Schrodinger's Objects" from incomplete uploads and the head-scratching differences between S3 bucket commands and the S3 API. Daniel and Corey break down common misunderstandings about S3 encryption and IAM policies, sharing stories of misconfigurations and security pitfalls.

Show Highlights:
(00:00) - Introduction
(03:49) - Schrodinger's Objects
(05:23) - S3 Permissions and Security
(06:44) - Incomplete Multipart Uploads Causing Unexpected Billing Issues
(10:28) - Historical Oddities and Unexpected Behaviors of S3
(12:00) - Encryption Misconceptions
(15:17) - Durability and Reliability of S3
(17:49) - AWS Security and Trust
(21:01) - Practical Tips for S3 Users
(26:10) - Compliance Locks and Data Management
(29:13) - Closing Thoughts

About Daniel:
Daniel Grzelak is a 20-year cybersecurity industry veteran, currently working as Chief Innovation Officer at Plerion. He is no longer the CISO at Linktree nor the Head of Security at Atlassian, but he tries to stay relevant by hacking AWS and Cloud in general.

Links Referenced:
Personal Website: https://dagrz.com/
LinkedIn: https://www.linkedin.com/in/danielgrzelak/
Things you wish you didn't need to know about S3: https://blog.plerion.com/things-you-wish-you-didnt-need-to-know-about-s3/
S3 Bucket Encryption Doesn't Work The Way You Think It Works: https://blog.plerion.com/s3-bucket-encryption-doesnt-work-the-way-you-think-it-works/

Sponsor:
Panoptica: https://www.panoptica.app/
Transcript
And the best part of it is it actually works.
Like that's what I love about Amazon.
Like it's so complicated.
There's so much scale to it and it continues to work.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
A couple of years back, I had my annual charity t-shirt
focus on S3 as the eighth wonder of the world, because it legitimately
is. It is an amazing service that has, in many cases, transformed the way that we store things,
the way we improperly use things as message queues or databases that perhaps shouldn't be,
and myriad other things. However, I firmly believe that there is nothing so perfect that it cannot be made fun of,
nitpicked to death, or in other ways, dragged through the public square. My guest today
apparently feels somewhat the same. Daniel Grzelak is the Chief Innovation Officer at
Plerion. Daniel, thank you for joining me. I'm excited. I'm a big fan of your shitposting,
Corey. Well, thank you. I try to be as well, because otherwise I'd get really bored really fast,
because let's face it, our industry is boring if you take it too seriously.
Yes, indeed.
This episode's been sponsored by our friends at Panoptica, part of Cisco.
This is one of those real rarities where it's a security product that you can get started with for free,
but also scale to enterprise grade.
Take a look.
In fact, if you sign up for an enterprise account,
they'll even throw you one of the limited,
heavily discounted AWS skill builder licenses they got
because believe it or not,
unlike so many companies out there,
they do understand AWS.
To learn more, please visit panoptica.app slash last week in AWS. That's panoptica.app slash last week in AWS.
You had a post that came out about a week before this recording titled
Things You Wish You Didn't Need to Know About S3. And I saw that come across my desk and okay,
great, let's look into this. Because I've seen blog posts with
similar titles somewhat frequently over the years. And it's, I bet you didn't know this one weird
trick. And invariably, there's not a whole lot of new information hidden in those posts. There was
a lot of new information hidden in this post. So I absolutely wanted to talk to you about it.
Where did this post come from? I guess it's probably the best starting point for us. Right. So before I jump into that, I actually don't think there was much
new in the post. There was something new for everyone. Everyone found something interesting.
And the genesis of it was we were trying to build some more detailed risk analysis about S3 at
Plerion. So I went and started having a look at, like, how does it work?
Make sure we get everything right. Make sure we've got all the details correct. And so I started
testing S3, started playing with it. And every time I found a quirk, I went, oh, that's not
exactly how I thought it would work. I would send it to my engineering team and they would go, no,
that's not right. That can't be right. And so the more I kept going, the more of these quirks
I would find. And that's how we ended up making the list. It's just someone would always go,
I thought it worked differently in some way. When I say a lot of this was new information,
I mean, something was new to me. If I haven't seen it before, it's new to me. I don't know
that there were necessarily any groundbreaking revelations that the S3 team is going to be
reading this and going, holy crap, it does what?
But it addresses a bunch of things that I had either not been aware of, or in some cases not thought about for a while, and others should never bother to think about at all. I mean,
your first point's terrific, where you talk about S3 buckets are the S3 API. It never occurred to
me to dig into the question of why the AWS command line interface has a subcommand of S3 and a
separate subcommand of S3 API. What is that? I know I use the latter when I want to handle weird
modifications to buckets and whatnot. And the S3 subcommand generally when I just create or delete
a bucket and work with objects within it. But it had never occurred to me to delve into the nuances
of why in the way that you did. Yeah. And I'm not clear on why there are two command lines,
but I think the big difference is the endpoint that you end up sending API operations to.
Typically, it's a central endpoint that controls the whole service. And then you provide the
parameters that you want it to use. Here, we basically send the API calls to the S3
bucket. Now, I'm not sure if underlying that is some big general endpoint, but for the user,
that's the way it looks. And so when you are able to go in, then delete that bucket by just sending
it an HTTP request, that's something that people don't necessarily expect to happen.
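For illustration, a minimal sketch of the two subcommands and the bucket-addressed API Daniel describes; the bucket name is hypothetical, and the unsigned request only succeeds under the pathological misconfiguration discussed below:

```bash
# High-level convenience commands, bucket/object oriented:
aws s3 rb s3://example-bucket              # remove an (empty) bucket

# Low-level commands that map one-to-one onto S3 API operations:
aws s3api delete-bucket --bucket example-bucket

# From the caller's perspective, the API is addressed at the bucket itself.
# With s3:* granted to everyone, even an unsigned HTTP request can delete it:
curl -X DELETE https://example-bucket.s3.amazonaws.com/
```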
I would honestly expect that not to happen just because CloudFormation likes to lose its mind every time you try to delete a stack that
has a bucket in it that has a lingering thing there. And they recently fixed the ability to,
oh, you can just go ahead and delete the stack now and check the box to orphan the bucket. It's,
no, I want you to clean the thing up and get rid of it. I'm telling you explicitly,
go ahead and blow out the data.
I don't care.
This is all for scratch stuff anyway.
And I understand why you don't want to make it too easy to do that in production by accident,
but there are different use cases for different things.
Yeah, and I think that's still true.
The bucket still needs to be empty.
The interesting thing in my blog post was that
if you accidentally, say, give S3 star permissions on a bucket,
you can actually just send a delete request to the bucket and the bucket will be deleted.
Now, obviously, permissions have to be wildly misconfigured for something like that to happen.
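The "wildly misconfigured" policy in question would look something like this sketch; nobody should ever ship it:

```bash
# DO NOT USE: grants every S3 action on the bucket to anyone on the internet.
aws s3api put-bucket-policy --bucket example-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ]
  }]
}'
```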
Yeah. And who would ever screw up IAM permissions, right?
Exactly. That's what everyone's been doing for a very long time. And that's the point here. It's
like AWS has done a really good job over the years of removing many of the foot guns. I couldn't see a use case where a bucket could be deleted by literally anyone on the internet. So that's a foot gun
that I think could be removed in the future. I am very surprised by that. Did you test whether
it works if there's data in the bucket? I didn't, but I expect it doesn't. I still think you've got
to remove all the objects inside before you run it. That would at least make it a little bit
better in that I can't think of any buckets in my accounts that are purely empty with not a single
object or version stored within them. But I expect if you have S3 star permissions misconfigured,
then you can probably just send delete requests for all the objects as well.
Yeah, I guess by the point you can delete it, you can definitely do a list and see the buckets.
Frankly, at that point, there's no reason you couldn't also just send a lifecycle policy up
as well and configure it to just blow everything away.
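Assuming that same s3:* misconfiguration, the lifecycle trick Corey mentions might look roughly like this (bucket name hypothetical):

```bash
# With the bucket policy open, an attacker can schedule every object --
# and every old version -- for deletion rather than deleting them one by one:
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-everything",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": {"Days": 1},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 1}
    }]
  }'
```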
It's just, I mean, it's working as expected. You've literally given anyone on the internet all the permissions for the S3 bucket. I just don't think there's a use case for something like that.
You found a whole bunch of strange things in this.
It's been a while
since I thought about this,
but I have found clients
at smaller scale
where this becomes significant.
At large enough scale,
all weird billing misconfigurations
become small enough not to matter
because they would have gotten
fixed otherwise.
You wind up with the Schrodinger's objects,
as you call them,
of incomplete multi-part uploads for objects. If the upload fails midway through, those objects
that have already been received by default sit around forever. They don't show up in the console.
They don't show up under a list objects without very specific parameters. And they do charge you.
So I have seen in the early days when I was working at much smaller scale, yeah, someone said that, all right, I have a one terabyte bucket. Why am I being charged 50
terabytes? And incomplete multi-part uploads were the issue, which at that point became clear that
there's something systemically wrong here. Figure out what keeps dying, trying to upload things and
make sure that gets fixed. And also here's a lifecycle policy to clean those out to fix the
end result of it. But it's been a while
since I've seen that because most folks are not going to be spending $100 million a year on AWS
and discover that $20 million of it is incomplete S3 uploads. That isn't a thing that happens here
in the real world. No. And I think that the interesting thing here is that you could have
it happen by accident where you end up with a bucket with all these objects that you don't know about. But also, if you allow anonymous people on the
internet to dump stuff in your buckets, then they could potentially do it on purpose.
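The defensive lifecycle rule Corey alludes to is short; a sketch, with the seven-day window as an assumption:

```bash
# Abort incomplete multipart uploads after 7 days so the invisible,
# still-billable parts get cleaned up automatically:
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-mpu",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }]
  }'
```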
Oh, yeah. And it is the least discoverable thing in the world. There is no way,
except maybe S3 storage lens, if you really go looking, to figure out how many of those you have
account-wide and what it is you're being charged for them. It's one of those very,
you have to know the secret passcode to get in to the hidden speakeasy in order for that to begin
making sense or being something that would even occur to you that might exist.
Yeah. And like you said, though, you can put a lifecycle policy on all of your buckets to protect against this kind of thing. It's just that I'm not sure that anyone does that by default. I did not know, for example,
that multi-part upload listings will return the principal ARNs. That was novel to me.
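A sketch of the call in question, since the high-level tooling won't show these at all:

```bash
# Invisible to `aws s3 ls` and the console; you have to ask directly:
aws s3api list-multipart-uploads --bucket example-bucket
# Each entry carries an Initiator/Owner block whose ID is, for IAM
# identities, the principal's ARN -- the leak being discussed here.
```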
Yeah, that's a fun one. And look, there's not much confidential about a principal ARN,
but in some cases, an attacker wants to do something and they don't know what the resource identifier is that they need to target. And so when you leak these kinds of bits of information all over the place, there are some very specific edge cases in which, hey, knowing a resource name or knowing a full ARN is really important from an attacker's perspective.
What I've found is that when I've talked to the S3 team, it is pretty clear that, I mean, they put a good spin on it,
don't get me wrong,
but it is abundantly clear
that in nobody's wildest dreams,
when this service was built and released,
what, damn near 20 years ago now,
that what it would grow into,
what it would become,
and for every other AWS service
where I've spoken to service teams,
they learn more about their services
from how customers use,
or in some cases, misuse them, than was ever accounted for in any planning document that
could have existed. I think AWS, the S3 history is one of the fun parts of this. It's obviously
one of the most robust, most used services that's been around for a long time. But because of that,
it's got some of the sort of the archaeology of its past that's now gone away. For example, ACLs.
Now, all resources in AWS are protected by fancy policies that are very well defined and very well
understood. But in the past, S3 had this concept of ACLs, which is now turned off by default. But
you can still do interesting things with it, things that you perhaps don't expect. And the
mental model for that is very
different to what it is for IAM policies. Absolutely. I was always so confused by bucket ACLs. I inadvertently reported what I thought was a security issue, politely,
because I'm never sure of what I'm looking at when I didn't understand the interplay of an ACL
with an IAM policy and found very quickly that, nope, the problem is that I'm a
fool. Okay, great. I can own and accept that, but I'm also never the only fool. I generally have
good company in people making poor decisions as I travel throughout the industry. Do you remember
the old authenticated users group in ACLs? With the checkbox next to it in the S3 console,
people would click it. I know I did in
the very early days, assuming, oh, any user authenticated to my AWS account should be able
to look at this bucket because it's a company-wide thing. Yeah, turns out that meant every AWS S3
user on the planet. Yeah, and that was a fun one where people could make that mistake very
legitimately going, it's authenticated users, it must be ours. It's not everyone on the planet. But I think that's part of the interesting archaeology of S3.
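For the archaeology-minded, the grant looked roughly like this; it only works where ACLs are still enabled, which new buckets disable by default:

```bash
# 'AuthenticatedUsers' is a global group: ANY AWS account on the planet,
# not just the users in yours.
aws s3api put-bucket-acl --bucket example-bucket \
  --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
```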
And so ACLs have a bunch of those interesting quirks like that. For example, the other one
that I ran into was that you can provide permissions to people based on the email
address associated with the user of their AWS account.
And there's a fancy error message that comes up and tells you
if that email address doesn't exist in the AWS database.
So you can basically figure out if an email has an AWS account associated with it,
which is another fun quirk.
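A sketch of the email-grant probe Daniel describes (address hypothetical; exact error wording may vary):

```bash
# Grants by email only resolve if the address owns an AWS account, so the
# failure mode doubles as an account-existence oracle:
aws s3api put-bucket-acl --bucket example-bucket \
  --grant-read emailaddress=someone@example.com
# Fails with an "unresolvable grantee by email address" style error
# when no AWS account matches the address.
```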
That is wild to me.
Because remember the canonical user ID where you used to have to...
Where there was this giant...
I don't know what the hell it was.
It was some huge alphanumeric string, as I recall, that was the canonical user that owned the bucket.
Because S3 significantly predates IAM and everything else. It's part of basically the
fossil record at this point. But it was always a separate AWS user identity in some ways. I never saw it used for much other than S3 policies.
That really messed with me.
Yeah, I think it's a 64-character hex string.
And another fun thing about that is if you find that string anywhere,
for example, in an objects listing, you take that canonical string,
you chuck it in an IAM policy, save it.
And when you see that policy come back up again,
it'll have that string resolved to the ARN of that canonical user.
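A sketch of that round trip as Daniel describes it; the canonical ID and bucket are placeholders, and the read-back resolution is his observation rather than documented behavior:

```bash
# Your own canonical user ID appears in, for example, the bucket listing:
aws s3api list-buckets --query 'Owner.ID' --output text

# Drop a canonical ID you've found into a bucket policy...
aws s3api put-bucket-policy --bucket example-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"CanonicalUser": "<64-char-hex-canonical-id>"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}'
# ...and, per Daniel, reading the policy back shows the ID resolved
# to the underlying principal's ARN:
aws s3api get-bucket-policy --bucket example-bucket
```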
Wild to me that that still works. I mean, it makes sense that it does.
S3 has so much lore around it now that people legitimately don't know when I'm shitposting
or not. I'll talk sometimes about how S3 used to basically have a
BitTorrent endpoint if you enabled it on a bucket, and people thought I was making it up. That was
one of the few deprecations that AWS has done because it turned out approximately nobody needs
it. It's not how we transfer large files across the modern internet anymore. But at a time when a lot of folks were highly bandwidth constrained, that was not nothing.
Yeah, it's actually still in the documentation. That's one of the things I ran into.
Oh, that can't be right.
Went and tested it and found out it was deprecated,
but it's still hanging around in some writing.
The documentation is basically engraved upon stone tablets.
As I recall, they have a couple of versions of the API.
One is like a 2012 date,
and I think it's the last time it was updated,
where you still have to
specify for some things,
a version string that references a date that is older than my elementary school child.
Nice.
That's fun.
Few things are better for your career and your company than achieving more
expertise in the cloud.
Security improves,
compensation goes up,
employee retention skyrockets.
Panoptica, a cloud security platform from Cisco, has created an academy of free courses just for you.
Head on over to academy.panoptica.app to get started.
You can't change these things very much.
I was talking with Jeff Barr once, and he made a great observation that I asked if we
turned it into a blog post, and he wrote the intro for it, which was lovely of him.
But he talked about the idea that S3 at this point has become a generational service, where
they have no idea what's in any given S3 bucket.
They aren't scanning stuff, and there are encryption practices and policies in place
to prevent them from ever doing this.
But it's definitely something they have to think about, which if you don't know what's in a given bucket, maybe it's a bunch of shit posting meme images.
Maybe it's incredibly important bank records.
Maybe it's the nuclear codes.
They just don't know.
So they have to not lose data.
They have to make sure that it is accessible via a variety of embedded API calls that are
never going to get updated anywhere.
And they have to make plans for this to still be there in 500 years, because as long as
the account, the bill gets paid on an account, who's to say whether something's right or
wrong?
Lord knows I have a bunch of old S3 buckets.
I have no idea what's in them.
I'll never touch again.
And they round up to less than a penny, so I don't particularly care.
But those things have to still exist.
And the best part of it is it actually works. That's what I love about Amazon. It's so
complicated. There's so much scale to it and it continues to work. But I really would love to
touch on your encryption point, if you don't mind, because I think that's another area where people have mistaken assumptions about how S3 encryption works.
And one of my friends was actually talking to a CISO after a major breach.
And the CISO was telling them, hey, it's okay that our objects got stolen because we've got encryption at rest enabled.
So it's perfectly fine.
And so people's mental model for encryption is: the file is encrypted, so if it goes away, when the attacker tries to open the file, it'll be a bunch of garbage. They won't have the key, so they won't be able to decrypt it.
But that's not exactly how encryption works in S3. I can see you want to say something.
Oh, last year, I wrote a whole article on this. It's on my site. I'll put a link to it in the
show notes. S3 encryption at rest does not solve for bucket negligence. And I go into a whole spiel on exactly what you're talking about. You're right. I always
found encryption at rest in the cloud context to be basically a box-checking exercise and little more because, okay, if you can break into an AWS data center, steal the drives, get out alive, and have pulled them from the proper places to reassemble the sharded objects and recombine them, you kind of earned it at that point. That's not really my
threat model. Encryption at rest matters a hell of a lot more for laptops that you're going to
leave at the coffee shop or in your car. It matters a lot more for your crappy data center,
where the security guard forgets to go and lock the door at night. Those are going to
be areas where it absolutely matters. With this, it just isn't a realistic threat model because
regardless of how well encrypted at rest something is, it still is going to be returned via the API
when it's requested, assuming the permissions are right. There are exceptions with KMS encryption
in certain ways. Please continue. And that's exactly the point here is in S3,
if you don't have access to the key,
it actually works as an access control mechanism
rather than you getting back a bunch of garbage data.
So if you don't have access to the key,
you just don't get the object.
The only way you get the object
is if you have access to the key,
in which case you get it in plain text.
And so if your data gets walked,
it gets walked in plain text.
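A sketch of the difference in practice; the bucket and key alias are hypothetical:

```bash
# SSE-KMS ties the object to a specific KMS key:
aws s3 cp secrets.txt s3://example-bucket/secrets.txt \
  --sse aws:kms --sse-kms-key-id alias/example-key

# A principal with s3:GetObject but WITHOUT kms:Decrypt on that key is
# denied outright -- no ciphertext, no garbage bytes, just an access error:
aws s3api get-object --bucket example-bucket --key secrets.txt out.txt

# With kms:Decrypt, the same call returns plain text. That's the point:
# stolen credentials walk the data out decrypted, and default SSE-S3
# never even interposes a key you could deny.
```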
Do you believe that there is a risk
of AWS using its privileged position
to scan the contents of S3 buckets?
I know that people love to have conspiracy theories
around this all the time,
that they're looking through all the data you put in S3.
It's, I have a rough idea,
at least of what general order of magnitude
of compute power it would take to actually do that.
And I have some questions about your conspiracy theory. But again, you focus on this stuff a lot
more than I do. What are your thoughts? I don't know how it works underneath,
and I don't see inside AWS, but that's not a theory I subscribe to. It doesn't make sense
from a business perspective. It doesn't make sense from a technical perspective. Why would
you do that? They've got way more to lose than they have to gain by doing something like that.
That's always been my perspective.
And we know it's expensive to do it because look at how they priced Amazon Macie when it
launched.
And even after they redid the pricing, this is something that explicitly looks through
your S3 buckets on your behalf, looking for sensitive information so that you can make
sure that you know where it lives in your environment.
And it is extortionately slow, incredibly expensive, and not widely deployed for those two reasons.
I have a really hard time imagining that if they had this magic thing on the back end that would just tell them what everyone had in every bucket,
that they wouldn't find a more cost-effective way and more widely adopted way of being able to perform that task.
I just don't see it.
Yeah, look, I just don't think, I think AWS is a good actor fundamentally. They're not a bad actor.
They have so much complexity and like over 300 services, like tens of thousands of API calls,
like at that scale, you end up seeing weird things happen because there's just so many things that
could happen. But fundamentally, I've always found them to be a good actor, try their best to do security right and all of that
kind of thing. I would agree with that sentiment. AWS does a lot of things that I find questionable
and weird, but they don't tend to touch security, particularly of foundational services. They mean
well. And there are enough people I know who work there
that I think of as canaries who would resign on the spot on ethical grounds, if nothing else,
if something like that were to take place, that I'm comfortable making that assertion.
I don't know if that's enough for some people. I mean, obviously, if you're a government and like,
well, Corey's got a good feeling about that, does not check your audit box, nor should it. Let's be
very clear here.
But for my dumb Twitter for pets startup, yeah, that's good enough for me in my use case.
Yeah. And how would you even check that assumption if you want to?
Exactly. The way that they've done it before is, oh, well, we have all these third party audits
that validate the things that we've said are correct, et cetera, et cetera. I know that there
are people that I've spoken to that I trust. These are phenomenal technologists
and they are supremely confident
that it functions as described.
But I've always viewed on some level,
you have to trust your cloud provider
or your data should not live
within that cloud provider
because there's nothing out there that says,
oh, when it's Corey's specific requests,
we're going to send him
to an identically performing set of API endpoints
that just don't do all that pesky encryption stuff under the hood. And we can inspect every
aspect of what he does. I don't think that they're doing that, but there's nothing to my understanding
that would prevent them from a technical basis from doing so.
Yeah, look, and you touch on an interesting point. There was a great post by Nick Frichette recently. He's been digging into
sort of non-production endpoints, things that you don't expect to be there, and where you can send
production data. And so I would encourage everyone to read that blog post. I think, again, that's a
case of, hey, there's so much complexity that these non-production endpoints have snuck in
to the sort of the production landscape
and can very occasionally be used with production data.
But AWS will fix all of that stuff
once they find out about this kind of thing.
That's generally the response that I've gotten.
Since this article, as you say,
doesn't include any groundbreaking new revelations around S3,
have you gotten any feedback from the S3 team about,
oh, hey, we didn't realize this, or you misunderstood something, which I get a fair bit when I wind
up writing deep technical dives, because this stuff is complicated and no one has all the
pieces in their head at once. Have you gotten feedback at all from them on this? Or is this
just one of those things where we let our work speak for itself? No, look, I always send them
my stuff just to make sure, like, I'm not off the wrong track. And it's 5 a.m. and I just got an email this morning. And I know the big thing in there was they wanted to make
sure that people understood that these weren't vulnerabilities and the vast majority of these
things could be protected against with native configuration. And I 100% agree with them. These
are not vulnerabilities. And that's why specifically in the blog post,
I call them quirks and oddities
and just things you need to know
rather than, hey, these are things
that AWS needs to fix.
Yeah, it's a well-written post
and it's very engaging.
But at no point from the point
where I first saw it come across my desk
and now, was I ever under the misconception
that, oh, this is a vulnerability.
I see things that could lead themselves to customer-side vulnerabilities if the customer had a
misunderstanding about how something functioned. Now, that is not to say that AWS has a vulnerability
on their side. But technically, the all-authenticated-users group was not a vulnerability on their side either, and it led to thousands upon thousands
of customer side vulnerabilities because misconfiguration is one of the biggest threat
vectors in cloud. Exactly. The way I think about it is if you've got an expectation or a mental
model that's one way because of the way that things are worded or because of your experiences,
often the complexity of AWS will result in a
slight deviation from that mental model or that expectation. And so you'll end up making a mistake
that perhaps you otherwise wouldn't. One of the examples I give in the blog post about that is
object keys inside S3. Now, object keys look very much like file names on your file system,
but they're specifically called
keys because they're not files. But most people will assume they will function like file names.
And it turns out that because they're keys and not file names, you can put any sort of characters
that you want in them: slashes, hashes, percentage signs, et cetera. And that's completely
fine and okay to do and very well documented. But if your application treats them as if they were file names, it might end up
malfunctioning or introducing a vulnerability itself because it thinks it's a file name.
Exactly.
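A sketch of the key-versus-filename trap; the bucket and keys are hypothetical:

```bash
# Keys are opaque strings, not filenames; these are two valid, distinct objects:
aws s3api put-object --bucket example-bucket --key 'reports/2024/#summary'
aws s3api put-object --bucket example-bucket --key 'reports/2024/%23summary'
# The second key literally contains '%23'; nothing URL-decodes it to '#'.
# An application that decodes or path-normalizes keys as if they were file
# paths will conflate the two -- the client-side bug being described.
```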
It's the interpretation of a very complicated, very explicit set of things that are designed
from the ground up as very base level service primitives
that in turn compose together into something truly incredible. A lot of those incredible
things are in fact emergent properties, as best I can tell, because I don't think that there are
any people with the perfect ability to predict the future hanging out on the S3 team 15 years ago.
This is stuff that happens. And Mai-Lan Tomsen Bukovec gave a talk at re:Invent a few years ago,
mentioning how after the S3 apocalyptic event, I think in 2017, they rebuilt all of S3 as 235 microservices. And my comment on that immediately was, this is important for S3. This is not a how
to guide. They are not Pokemon. You need not collect them all. Your five person startup should
absolutely not do this.
Because, yeah, it makes perfect sense for them to do it the way that they have at their scale.
Whatever your startup is doing, I promise it is not S3 levels of scale
and won't be for many, many, many years and at least seven rewrites.
So you're fine.
Don't view it as an instructional guide.
But that glimpse under the hood that they were able to completely rewrite all of S3
and customers never knew
because it still supports the same APIs
in bug for bug level of reproducibility
is nothing short of amazing.
Yeah, and that's the beauty of it.
There's all that complexity
and it's just really simple to use.
You just send your files there
and then they live there forever
unless you ask them to be deleted.
It's beautiful. I love it. Yeah. One of the scarier things for enterprises is,
okay, when I say delete and you say it's deleted, how deleted is it at that specific moment?
People always wonder about that one. And they're not wrong to have that question in their minds because yeah, there are legal definitions in contracts around what exactly deleted means.
And let's not blunder our way into inadvertently making representations to our own customers
that turns out aren't strictly true.
Yeah.
But again, it's one of those things like it doesn't matter if your object is deleted right
now or a minute from now across the internet.
For most companies and most customers, there really is no difference.
One of the fun things I found was this idea of compliance locks.
So if you're a legal team and let's say someone sent you a subpoena or something like that and said, hey, do not delete these things under any circumstances, you're able to implement that in an S3 bucket and say, hey, this object cannot be deleted under any circumstances. And in fact, the only way that it can be deleted
once the bucket has that compliance mode enabled
and the object has been set to a compliance lock
is to delete the entire AWS account.
Exactly.
I've always wondered if there was an end run around this
because, okay, I break into someone's account.
I now go ahead and apply the compliance lock, the legal hold compliance one.
I forget which version of it it is.
And I can set it for up to a century.
Congratulations.
Your great grandchildren will despise your negligence because they will still have to pay your AWS bill unless you can delete everything else in the account.
Is there a mitigation for bad actors?
And I've never gotten a satisfactory response on that question.
So two things.
Well, I don't know the answer to that.
I've found in the past, for example, when I've made a KMS key policy that meant that
I could never delete the key.
If I got on an enterprise support plan, AWS would find a way around it and help me do
it.
So I think it's possible that if you made a cataclysmic mistake, they would help you delete the objects.
However, the way that object uploads work
means that if the bucket
has compliance mode enabled,
the uploader actually gets to set
whether the object has compliance
locking enabled or not,
which again is just a different quirk
about how the service works.
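A sketch of that per-object quirk; names and the date are hypothetical, and the bucket must have been created with Object Lock enabled:

```bash
# On an Object Lock bucket, the *uploader* gets to pin each object:
aws s3api put-object --bucket example-lock-bucket --key evidence.bin \
  --body evidence.bin \
  --object-lock-mode COMPLIANCE \
  --object-lock-retain-until-date 2124-01-01T00:00:00Z
# In COMPLIANCE mode, no principal -- root included -- can delete that
# object version before the retain-until date passes.
```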
I wish to hell that you could do that for storage classes.
Everything in this bucket or prefix is going to be intelligent tiering, go,
without having to teach every single thing that I've got,
including some legacy desktop applications that are proprietary that I cannot modify.
Yeah, make sure that you put that into the intelligent tiering storage class.
Instead, I have to go through the lifecycle policy, which means that every object gets
written again. And that counts as a chargeable fee, which at scale is not nothing. And then,
and only then it starts aging into that, which is frustrating.
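Today's workaround, roughly as Corey describes it, looks like this sketch; the transition re-writes each object, which is where the charge comes from:

```bash
# Option 1: lifecycle-transition existing objects (incurs per-object charges):
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "to-intelligent-tiering",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}]
    }]
  }'

# Option 2: set the class on every write, which each uploader must know to do:
aws s3 cp report.csv s3://example-bucket/ --storage-class INTELLIGENT_TIERING
```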
But it's the way it works. And it matters for some customers, but not most.
I don't think it would break anything to be able to say, okay, now by default everything goes into Standard, because that's the way it currently works today, but you now have an option where the bucket can say that anything placed into it winds up being put into that storage class. I think that would be a welcome enhancement. I don't know that it would necessarily break anything customer-side; it might very well break things on the AWS side. I know, I've been saying this for years, and I assume that no one listening is not the reason that they haven't gone ahead and implemented something like this.
I'm sure it's complicated,
but man, as a customer,
wouldn't that be nice?
It would be indeed.
I really want to thank you
for taking the time
to not just go ahead
and write all this up,
but also to speak with me about it.
If people want to learn more,
where's the best place
for them to find you?
Thanks, by the way.
Yeah, on blog.plerion.com. That's where generally I do my posting. Luckily, my employer allows me to write in the style I like to. So
it's good fun and there's a good bit of research on there. Excellent. And we will, of course,
put links to all of this in the show notes. Thank you so much for taking the time to speak with me.
I really appreciate it. Been a good time. Thanks, Corey.
Daniel Grzelak is the Chief Innovation Officer at Plerion. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.