Screaming in the Cloud - The Infinite Possibilities of Amazon S3 with Kevin Miller
Episode Date: October 26, 2022
About Kevin: Kevin Miller is currently the global General Manager for Amazon Simple Storage Service (S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. Prior to this role, Kevin has had multiple leadership roles within AWS, including as the General Manager for Amazon S3 Glacier, Director of Engineering for AWS Virtual Private Cloud, and engineering leader for AWS Virtual Private Network and AWS Direct Connect. Kevin was also Technical Advisor to the Senior Vice President for AWS Utility Computing. Kevin is a graduate of Carnegie Mellon University with a Bachelor of Science in Computer Science.
Links Referenced:
snark.cloud/shirt: https://snark.cloud/shirt
aws.amazon.com/s3: https://aws.amazon.com/s3
Transcript
Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is brought to us in part by our friends at Datadog.
Datadog is a SaaS cloud monitoring and security platform that enables full-stack observability for modern infrastructure and applications at every scale. Datadog enables teams to see everything: dashboarding, alerting,
application performance monitoring, infrastructure monitoring,
UX monitoring, security monitoring, dog logos,
and log management in one tightly integrated platform.
With 600 plus out of the box integrations with technologies,
including all major cloud providers, databases, and web servers,
Datadog allows you to aggregate all your data into one platform
for seamless correlation,
allowing teams to troubleshoot and collaborate together in one place,
preventing downtime and enhancing performance and reliability.
Get started with a free 14-day trial
by visiting datadoghq.com slash screaming in the cloud
and get a free t-shirt after installing the agent. Managing shards, maintenance windows,
over-provisioning, ElastiCache bills. I know, I know, it's a spooky season and you're already
shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the
backend to focus on good code and great user experiences. With true auto-scaling and a pay-per-use
pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomomento.co slash screaming.
That's g-o-m-o-m-e-n-t-o dot c-o slash screaming.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
Right now, as I record this, we have just kicked off our annual charity t-shirt fundraiser.
This year's shirt showcases S3 as the eighth wonder of the world.
And here to either defend or argue the point, we're not quite sure yet, is Kevin Miller,
AWS's vice president and general manager for Amazon S3. Kevin, thank you for agreeing to suffer the slings and arrows that are no doubt going to be interpreted, misinterpreted, et cetera, for the next half hour or so.
Oh, Corey, thanks for having me.
I'm happy to do that and really flattered for you to be thinking about S3 in this way.
So I'm more than happy to chat with you.
It's absolutely one of those services that is foundational for the cloud.
It was the first AWS service that was put
into general availability, although the beta folks are going to argue back and forth about, no, no,
that was SQS instead. I feel like now that Mai-Lan handles both SQS and S3 as part of her portfolio,
she is now the final arbiter of that. I'm sure that's an argument for a future day, but it's impossible to imagine cloud without
S3. I definitely think that's true. It's hard to imagine cloud, actually, without many of our
foundational services, including SQS, of course. But yes, we were the first generally
available service with S3, and pretty happy with our anniversary being Pi Day, 3/14. I'm also curious, your own personal trajectory has been not necessarily what folks would expect.
You were the general manager of Amazon Glacier, and now you are the general manager and vice president of S3.
So I've got to ask, because there are conflicting reports on this depending upon what angle you look at, are Glacier and S3 the same
thing? Yes, I was the general manager for S3 Glacier prior to coming over to S3 proper. And
the answer is no, they are not the same thing. We certainly have a number of technologies where
we're able to use those technologies both on S3 and Glacier, but there are certainly a number of things that are very distinct about Glacier
and give us that ability to hit the ultra-low price points that we do
with Glacier Deep Archive being as low as $1 per terabyte month.
And so there's a lot of actual ingenuity up and down the stack
from hardware to software, everywhere in between,
to really achieve that with Glacier.
But then there's other spots where S3 and Glacier have very similar needs. And of course,
today many customers use Glacier through S3 as a storage class in S3. And so that's a great way to
do that. So there's definitely a lot of shared code, but certainly when you get into it,
not every aspect is shared between the two of them. I ran a number of obnoxiously detailed financial analyses,
and they all came away with the, unless you have a very specific, very nuanced understanding of
your data lifecycle, and/or your data lives for less than 30 or 60 days, depending upon a variety of different
things, the default S3 storage class you should
be using for virtually anything is intelligent tiering. That is my purely economic analysis of
it. Do you agree with that? Disagree with that? And again, I understand that all of these storage
classes are like your children, and I am inviting you to tell me which one of them is your favorite,
but I am absolutely prepared to do that. Well, we love intelligent tiering because it is very simple. Customers are able to automatically
save money using intelligent tiering for data that's not being frequently accessed. And actually,
since we launched it a few years ago, we've already saved customers more than $250 million
using intelligent tiering. So I would say today it is our default recommendation
in almost every case.
I think that the cases where we would recommend
another storage class as the primary storage class
tend to be specific to the use case
and particularly for use cases where customers
really have a good understanding of the access patterns.
And some customers do for a certain data set.
They know that it's going to be heavily accessed for a fixed period of time.
Or this data is actually for archival.
It'll never be accessed, or very rarely, if ever, accessed, maybe just in an emergency.
And in those kinds of use cases, I think customers are probably best served choosing one of the specific storage classes where they're paying the lower cost from day one.
But again, I would say for
the vast majority of cases that we see, the data access patterns are unpredictable and customers
like the flexibility of being able to very quickly retrieve the data if they decide they need to use
it. But in many cases, they'll save a lot of money as the data is not being accessed. And so
intelligent tiering is a great choice for those cases. I would take it a step further and say that even when customers believe that they are going to be doing a deeper analysis and they have a better understanding of their data flow patterns than intelligent tiering would, in practice, I see that they rarely do anything about it.
It's one of those things where they're like, oh, yeah, we're going to set up our own lifecycle policies real soon now.
Whereas, just switch it over to intelligent tiering and never think about it again. People's time is worth so much more than the infrastructure they're working on in almost
every case. It doesn't seem to make a whole lot of sense unless you have a very intentioned,
very urgent reason to go and do that stuff by hand, in most cases.
Yeah, that's right. I think I agree with you, Corey. And certainly that is the recommendation we lead with for customers.
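For a sense of what "just switch it over to intelligent tiering" looks like in practice, here is a minimal boto3 sketch; the bucket name is hypothetical, and a real lifecycle rule would be scoped to your own prefixes and review process:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; substitute your own.
BUCKET = "example-logs-bucket"

# Transition every object to INTELLIGENT_TIERING immediately (Days=0),
# letting S3 move data between access tiers automatically based on
# observed access patterns.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "default-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```

New writes can skip the transition entirely by specifying StorageClass="INTELLIGENT_TIERING" on the put itself.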
In previous years, our charity t-shirt has focused on other areas of AWS. And one of them was based
upon a joke that I've been telling for a while now, which is that the best database in the world
is Route 53 and storing text records inside of it. I don't know if I ever mentioned
this to you or not, but the first iteration of that joke centered around S3. The challenge
that I had with it is that S3 Select is absolutely a thing where you can query S3 with SQL, which I
don't see people doing anymore because Athena is the easier, more, shall we say, well-articulated
version of all of that. And no, no, that joke
doesn't work because it's actually true. You can use S3 as a database. Does that statement fill you
with dread, regret? Am I misunderstanding something? Or are you effectively running
a giant subversive database? Well, I think that certainly when most customers think about a
database, they think about a collection of technology that's applied for given problems.
And so I wouldn't count S3 as providing the whole range of functionality that would really make up a database.
But I think that certainly a lot of the primitives, and S3 Select is a great example of a primitive, are available in S3.
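For a sense of the primitive being described, here is a minimal boto3 sketch of an S3 Select call; the bucket, key, and column names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key: a CSV object with a header row.
resp = s3.select_object_content(
    Bucket="example-data-bucket",
    Key="sales/2022/records.csv",
    ExpressionType="SQL",
    # The SQL runs server-side; only matching rows cross the network.
    Expression=(
        "SELECT s.region, s.amount FROM s3object s "
        "WHERE CAST(s.amount AS FLOAT) > 100"
    ),
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"JSON": {}},
)

# The response is an event stream; Records events carry the payload.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```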
And we're looking at adding additional primitives going forward to make it possible to build a database around S3.
And as you see, other AWS services
have done that in many ways.
For example, obviously with Amazon Redshift,
having a lot of capability now
to just directly access and use data in S3
and make that super seamless
so that you can then run data warehousing type queries
on top of S3
and on top of your other data sets.
So I certainly think it's a great building block.
And one other thing I would actually just say that you may not know, Corey,
is that one of the things over the last couple of years we've been doing a lot more with S3
is actually working to directly contribute improvements to open source connector software that uses S3
to make available automatically
some of the performance improvements that can be achieved either using both the AWS SDK
and also using things like S3 Select. So we started with a few of those things with Select.
You're going to see more of that coming most likely. And some of that, again, the idea there
is you may not even necessarily know you're using Select, but when we can identify that it will
improve performance,
we're looking to be able to contribute
those kinds of improvements directly,
or we are contributing those directly
to those open source packages.
So one thing I would definitely recommend
customers and developers do is have a capability
of sort of keeping that software up to date,
because although it might seem like those are sort of
one and done kind of software integrations, there's actually almost continuous improvement now going on around things like that capability and then others we come out with.
What surprised me is just how broadly S3 is supported, as an awful lot of software under the hood has learned
to directly recognize S3 as its own thing and can react accordingly, and just do the right thing.
Exactly. Now we certainly see a lot of that. So that's, you know, I mean, obviously making that
simple for end customers to use and achieve what they're trying to do. That's the whole goal.
It's always odd to me when I'm talking to one of my clients
who is looking to understand and optimize their AWS bill
to see outliers in either direction
when it comes to S3 itself.
When they're driving large S3 bills,
as in a majority of their spend,
it's okay, that is very interesting.
Let's dive into that.
But almost more interesting to me
is when it is effectively not being used at all. When, oh, we're doing everything with EBS volumes or EFS.
And again, those are fine services. I don't have any particular problem with them anymore. But the
problem I have is that the cloud long ago took what amounts to an economic vote. There's a 10x savings for storing data in an object store
the way that you, and by extension, most of your competitors, wind up pricing this versus the idea
of a volume basis where you have to pre-provision things. You don't get any form of durability that
extends beyond the availability zone boundary. It just becomes an awful lot of, well, you could do it this way,
but it gets really expensive really quickly.
It just feels wild to me
that there is that level of variance
between S3 on just a raw storage basis economically,
as well as then just the, frankly,
ridiculous levels of durability and availability
that you offer on top of that.
How did you get there? Was the service just mispriced at the beginning? Like,
oh, we dropped a zero and probably should have put that in there somewhere.
Well, no, I wouldn't call it mispriced. I think that S3 came about when we
spent a lot of time looking at the architecture for storage systems, knowing that we wanted a system that would provide the durability that comes with having three completely independent data centers.
And the elasticity and capability where customers don't have to provision the amount of storage they want.
They can simply put data and the system keeps growing and they can also delete data and stop paying for that storage when they're not using it.
And so just all of that investment and sort of looking at that architecture holistically led us down the path where we are with S3. And we've definitely talked about this.
In fact, in Peter's keynote at reInvent last year, he talked a little bit about how the system is designed under the hood.
And one of the things he highlighted is that S3 gets a lot of the benefits that we do just by the overall scale.
By contrast, I would say
more traditional architectures are inherently typically much more
siloed, with a relatively small scale overall. And they end up with a lot of
resource that's provisioned at small scale and in sort of small chunks, with each resource never getting to that scale
where you can start to take advantage of the whole being greater than the sum of the parts. And so I
think that's what the recognition was when we started out building S3. And then, of course,
we offer that as an API on top of that, where customers can consume whatever they want. That
is, I think, where S3, at the scale it operates, is able to do certain things, including on the economics, that are very difficult or even impossible to do at a much smaller scale.
One of the more egregious clown shoes statements that I hear from time to time has been when people will come to me and say, we've built a competitor to S3. And my response is always one of those, oh, this should be good. Because when people say that,
they generally tend to be focusing
on one or maybe two dimensions
that doesn't work for a particular use case
as well as it could.
Well, okay, what's your story around
why this should be compared to S3?
Well, it's an object store.
It has full S3 API compatibility.
Does it really?
Because I have to say, there are times where I'm not entirely convinced
that S3 itself has full compatibility
with the way that its API has been documented.
And there's an awful lot of magic
that goes into this too.
Okay, great.
You're running an S3 competitor.
Great.
How many buildings does it live in?
Like, well, we have a problem with the S
at the end of that word.
It's okay, great.
If it fits on my desk,
it is not a viable S3 competitor.
If it fits in a single zip code,
it is probably not a viable S3 competitor.
Now, can it be an object store?
Absolutely.
Does it provide a new interface to some existing data
someone might have?
Sure, why not?
But I think that, oh, it's S3 compatible
is something that gets tossed around far too lightly
by folks who don't really understand what it is that drives S3 and makes it special.
Yeah, I mean, I would say certainly there's a number of other implementations of the S3 API.
And frankly, we're flattered that customers recognize, or competitors and others recognize,
the simplicity of the API and go about implementing it.
But I do think, to your point, I think that there's a lot more.
It's not just about the API.
It's really around everything surrounding S3 from, as you mentioned,
the fact that the data in S3 is stored in three independent availability zones,
all of which are separated by kilometers from each other,
and the resilience, the automatic failover,
and the ability to withstand an
unlikely impact to one of those facilities, as well as the scalability and the fact that
we put a lot of time and effort into making sure that the service continues scaling with
our customers' need.
And so I think there's a lot more that goes into what S3 is.
And oftentimes, just in a straight-up comparison, it's purely based on
the APIs, and generally a small set of the APIs, leaving aside those intangibles,
or not intangibles, but all of the ilities, right? The elasticity and the durability
and so forth that I just talked about. In addition to all that, also, you know, certainly what we're
seeing from customers is that as they get into the petabyte, tens-of-petabytes, and hundreds-of-petabytes scale, their need for the services that we provide to manage that storage, whether it's lifecycle and replication or things like our Batch Operations to help update and maintain all the storage, becomes really essential to customers wrapping their arms around it, as well as visibility: things like Storage Lens to understand what storage do I have,
who's using it, how is it being used. And those are all things that we provide to help customers
manage at scale. And certainly, oftentimes when I see claims around S3 compatibility,
a lot of those advanced features are nowhere to be seen.
I also want to call out that a few years ago, Mai-Lan got on stage and talked about how, to my recollection, you folks have effectively rebuilt S3 under the hood into, I think it was 235 distinct microservices at the time.
There will not be a quiz on numbers later, I'm assuming.
But what was wild to me about that is having done that for services that are orders of magnitude less complex,
it absolutely is like changing the engine on a car without ever slowing down on the highway.
Customers didn't know that any of this was happening until she got on stage and announced it.
That is wild to me. I would have said before this happened that there was no way that would have been possible, except it clearly was. I have to ask, how did you do that in the broad sense?
Well, it's true. A lot of the underlying infrastructure that's been part of S3,
both the hardware and the software, has changed. If someone from S3 in 2006 came and
looked at the system today, they would probably be very disoriented in terms of understanding
what was there, because so much of it has changed.
To answer your question, the long and short of it is a lot of testing. In fact, a lot of novel testing most recently, particularly with the use of formal logic and what we call automated
reasoning. It's also something we've talked a fair bit about at reInvent. And that is essentially
where you prove the correctness of certain algorithms. And we have used that to spot some very interesting, one-in-a-trillion-type cases that at S3
scale happen regularly, that you have to be ready for, and you have to know how the
system reacts, even in all those cases.
I mean, I think one of our engineers did some calculations that the number of potential
states for S3 exceeds the
number of atoms in the universe, or something similarly crazy. But yet, using methods like
automated reasoning, we can test that state space, we can understand what the system will do,
and have a lot of confidence as we begin to swap, you know, pieces of the system. And of course,
nothing at S3 scale happens instantly. It's all, you know, I would say that for a typical
engineering effort within S3, there's a certain amount of effort, obviously, in making the change or in preparing the new software, writing the new software and testing it.
But there's almost an equal amount of time that goes into, okay, and what is the process for migrating from system A to system B?
And that happens over a timescale of months, if not years in some cases. And so there's just a lot of diligence that goes into
not just the new systems, but also the process of,
you know, literally how do I swap that engine on the system?
So, you know, it's a lot of really hardworking engineers
that spend a lot of time working through these details every day.
I still view S3 through a lens of
it is one of the easiest ways in the world
to wind up building a static web
server because you basically stuff the website files into a bucket and then you check a box.
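The "check a box" step really does map to a single API call; a minimal sketch, with a hypothetical bucket, and leaving out the public-read bucket policy a real site would also need:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket. Static site hosting also requires a bucket
# policy permitting public reads, which is omitted here.
s3.put_bucket_website(
    Bucket="example-site-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```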
So it feels on some level, though, that calling it a web server is about as accurate as saying that S3 is a database.
It can be used or misused or pressed into service in a whole bunch of different use cases. What have you seen from customers that has,
I guess, taught you something you didn't expect to learn about your own service?
Oh, I'd say we have those meetings pretty regularly when customers build their workloads
and have unique patterns to it, whether it's the type of data they're retrieving and the access
pattern on the data. For example, some customers will make heavy use of our ability to do range
gets on files, or rather, on objects.
And that's a pretty good capability, but that can be one where that's very much dependent
on the type of file, right?
Certain files have structure as far as, you know, a header or footer, and that data is
being accessed in a certain order.
Oftentimes, those may also be multi-part objects.
And so they make use of the multipart features to upload different chunks of the file in parallel.
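A ranged get of the kind being described is a one-parameter affair in boto3; a minimal sketch with hypothetical names, where in practice the byte range would come from the file format's header or index:

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 1 KiB of the object, such as a file header,
# instead of downloading the whole thing.
resp = s3.get_object(
    Bucket="example-data-bucket",
    Key="datasets/big-file.parquet",
    Range="bytes=0-1023",
)
header = resp["Body"].read()
```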
And, you know, certainly when customers get into things like our batch operations capability,
where they can literally write a Lambda function and do what they want.
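A minimal sketch of such a Batch Operations Lambda handler, following the documented event and response shape; the per-object work here is a hypothetical placeholder:

```python
# Sketch of a Lambda handler invoked by S3 Batch Operations.
# The event/response contract is the documented one; note that object
# keys arrive URL-encoded from the manifest in real jobs.
def handler(event, context):
    results = []
    for task in event["tasks"]:
        bucket_arn = task["s3BucketArn"]
        key = task["s3Key"]

        # ... do whatever per-object work you want here ...

        results.append({
            "taskId": task["taskId"],
            "resultCode": "Succeeded",  # or TemporaryFailure / PermanentFailure
            "resultString": f"processed {key}",
        })

    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```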
You know, we've seen some pretty interesting use cases where customers are running large scale operations across, you know, billions, sometimes tens of
billions of objects. And those can be pretty interesting as far as what they're able to do
with them. So for something that is, in some sense, as simple and basic as a get and put API, just all the capability around it ends up being pretty interesting as far
as how customers apply it and the different workloads they run on it. So if you squint hard enough, what I'm hearing you tell me is that I
can view all of this as, oh yeah, S3 is also compute. And it feels like that is a fast track
to getting a question wrong on one of the certification exams. But I have to ask,
from your point of view, is S3 storage? And whether it's yes or no, what gets you excited about the space that it's in?
Yeah, well, I would say S3 is not compute,
but we have some great compute services
that are very well integrated with S3,
which excites me,
as well as we have things like S3 Object Lambda,
where we actually handle that integration with Lambda.
So you're writing Lambda functions,
we're executing them on the get path.
And so that's a pretty exciting feature for me.
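The get-path integration being described looks roughly like this; a minimal sketch, with the transformation itself a hypothetical stand-in:

```python
import boto3
import urllib.request

s3 = boto3.client("s3")

# Sketch of an S3 Object Lambda handler: S3 hands the function a
# presigned URL for the original object; we transform the bytes and
# return them via WriteGetObjectResponse, so the caller's GET sees
# the transformed result.
def handler(event, context):
    ctx = event["getObjectContext"]
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    transformed = original.upper()  # hypothetical transformation

    s3.write_get_object_response(
        Body=transformed,
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```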
But to sort of take a step back, what excites me is I think that customers around the world in every industry are really starting to recognize the value of data and data at large scale.
I think that actually many customers in the world have terabytes or more of data that
sort of flows through their fingers every day that they don't even realize.
And so as customers realize what data they have and they can capture and then start to analyze,
ultimately make better business decisions that really help drive their top line, or help them reduce
costs, whether it's in manufacturing or other things that they're doing. That's what really excites me: seeing those customers take the raw capability and then apply it to transform,
not just how their business works, but even how they think about the business.
Because in many cases, transformation is not just a technical transformation, it's a
people and culture transformation inside these organizations. And that's just pretty cool to
see as it unfolds.
One of the more interesting things that I've seen customers
misunderstand on some level has been a number of S3 releases
that focus around, oh, this is for your data lake.
And I've asked customers about that.
So what's your data lake strategy?
We don't have one of those.
You have like eight petabytes and climbing in S3.
What do
you call that? It's like, oh yeah, that's just the bunch of buckets we dump things into. Some are
logs, some are assets, and the rest. That's right. Yeah. It feels like no one thinks of themselves
as having anything remotely resembling a structured place for all of the data that accumulates at a
company. There is an evolution of people learning that, oh yeah, this is, this is in fact what it
is that we're doing. And this thing that they're talking about does apply to us, but it almost
feels like a customer communication challenge just because I don't know about you, but with my legacy
AWS account, I have dozens of buckets in there that I don't remember what the heck they're for.
Fortunately, you folks don't charge by the bucket so I can smile, nod and remain blissfully ignorant,
but it does make me wonder from time to time.
Yeah, no, I think that what you hear there is actually pretty consistent with what the
reality is for a lot of customers, which is in distributed organizations, I think that
that's bound to happen.
You have different teams that are working to solve problems, and they are collecting
data and analyzing, they're creating result data sets and they're storing those data sets.
And then of course,
priorities can shift, and
there's not necessarily the
day-to-day management around data that we might expect
if you sort of drew the architecture on a whiteboard.
And so I think that's the reality we are in and we will be in largely
forever.
I mean, I think that at a smaller scale, that's been happening for years.
So I think that, one, I think that there's a lot of capability just being in the cloud.
At the very least, you can now start to wrap your arms around it, right,
where it used to be that it wasn't even possible to understand what all that data was because there's no way to central inventory it.
Well, in AWS with S3, with inventory reports,
you can get a list of all of your storage and we are going to continue to add capability
to help customers get their arms around what they have.
First off, understand how it's being used.
That's where things like storage lens
really play a big role in understanding
exactly what data is being accessed and not.
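Turning on such an inventory report is one configuration call; a minimal boto3 sketch with hypothetical source and destination buckets:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical buckets; reports land in the destination bucket on
# the chosen schedule.
s3.put_bucket_inventory_configuration(
    Bucket="example-data-bucket",
    Id="daily-inventory",
    InventoryConfiguration={
        "Id": "daily-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::example-inventory-reports",
                "Format": "CSV",
            }
        },
        "OptionalFields": ["Size", "LastModifiedDate", "StorageClass"],
    },
)
```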
We're definitely listening to customers carefully around this. And I think when you think about
broader data management story, I think that's a place that we're spending a lot of time thinking
right now about how do we help customers get their arms around it, make sure that they know
what's the categorization of certain data. Do I have some PII lurking here that I need to be very mindful of? And then how do I get to a world where I'm, you know, I won't say that it's ever going to look
like the perfect whiteboard picture you might draw on the wall. I don't think that's really
ever achievable. But I think certainly getting to a point where customers have a real solid
understanding of what data they have and that the right controls are in place around all of that
data, you know, I think that's directionally where I see us heading.
As you look around how far the service has come,
it feels like on some level that there were some,
I guess, I don't want to say missteps,
but things that you learned as you went along.
Back when the service was in beta, for example,
there was no per-request charge. To my understanding, that was changed in part
because people were trying to use it as a file system,
and wow, that suddenly caused a tremendous amount of load
on some of the underlying systems.
You originally launched with a BitTorrent endpoint as an option
so that people could download through peer-to-peer approaches
for large datasets, and it turned out that wasn't really
the way the internet evolved either.
And I'm curious, if you were to have to somehow
build this all from scratch, are there any other significant changes you would make in how the
service was presented to customers, in how people talked about it in the early days?
Effectively, given a mulligan, what would you do differently?
Well, I don't know, Corey. I mean, just given where it's grown to in macro terms,
you know, I definitely would be worried taking a mulligan, you know, that I would change the
sort of the overarching trajectory. And certainly I think there's a few features here and there
where for whatever reason, it was exciting at the time and really spoke to what customers at
the time were thinking. But over
time, you know, sort of quickly those needs moved to something a little bit different. And,
you know, like you said, things like the BitTorrent support is one where at some level,
it seems like a great technical architecture for the internet, but certainly not something that
we've seen dominate in the way things are done. Instead, you know, we've largely ended up in a world
where there's a lot of caching layers,
but it still ends up being largely client-server kind of connections.
So I don't think I would do, certainly wouldn't do a mulligan on any of the
major functionality. And I think there's a few things in the details where obviously we've
learned what really works in the end. I think we learned that we wanted bucket names to really be strictly conformed to rules for DNS encoding.
So that was the change that was made at some point.
And, you know, we would tweak that, but no major changes, certainly.
One subject of some debate while we were designing this year's charity t-shirt,
which, incidentally, if you're listening to this, you can pick up for yourself at snark.cloud
slash shirt, was that is S3 itself dependent upon S3? Because we know that every other service out
there is as well, but it is interesting to come up with an idea of, oh yeah, we're going to launch
a whole new isolated region of S3 without S3 to lean on, that feels like it's an almost impossible
bootstrapping problem. Well, S3 is not dependent on S3 to come up. That's certainly a critical
dependency tree that we look at and we track; we make sure that we have an acyclic graph as
we look at dependencies. But it's such a sophisticated way to say what I learned the
hard way when I was significantly younger and working in production environments. Don't put the DNS servers needed
to boot the hypervisor into VMs that require a working hypervisor. It's one of those, oh yeah,
in hindsight, that makes perfect sense. But you learn it right after that knowledge really would
have been useful. Yeah, absolutely. And one of the terms we use for that as well is the idea
of static stability; that's one of the techniques that can really help with isolating
dependencies. We actually have an article about that in the
Amazon Builder Library, which there's actually a bunch of really good articles in there from
very experienced operations-focused engineers in AWS. So static stability is one of those key
techniques, but other techniques, I mean, just pure minimization of dependencies is one. And so we were very, very thoughtful about
that, particularly for that core layer. I mean, you know, when you talk about S3 with 200 plus
microservices or 235 plus microservices, I would say not all of those services are critical for
every single request. Certainly a small subset of those are
required for every request. And then other services actually help manage and scale that
inner core of services. And so we look at dependencies on a service by service basis to
really make sure that that inner core is as minimized as possible. And then the outer layers
can start to take some dependencies once you have that basic functionality up.
I really want to thank you for being as generous with your time as you have been.
If people want to learn more about you and about S3 itself,
where should they go after buying a t-shirt, of course?
Well, certainly buy the t-shirt first.
I love the t-shirts and the charity that you work with to do that.
Obviously for S3, it's aws.amazon.com slash S3
and you can actually learn more about me on,
I have some YouTube videos, so you can search for me on YouTube and kind of get a sense of myself.
We will put links to that into the show notes, of course. Thank you so much for being so generous
with your time. I appreciate it. Absolutely. Yeah. Glad to spend some time. Thanks for the
questions, Corey. Kevin Miller, Vice President and General Manager for Amazon S3.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform
of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform
of choice, along with an angry, ignorant comment talking about how your S3-compatible
service is going to blow everyone's socks off when it fails.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need
the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill
Group works for you, not AWS. We tailor recommendations to your business, and we get
to the point. Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.