Screaming in the Cloud - SmugMug's Cloud Adventure with Andrew Shieh
Episode Date: February 8, 2024

Andrew Shieh shares the thrilling story of SmugMug's bold leap into AWS's cloud technology, marking it as one of the pioneering companies to harness the cloud for digital photography storage. This episode offers a unique perspective into the type of strategy and groundbreaking tech advancements that catapulted SmugMug's success. Listen to the full episode for a masterclass in innovation and adaptation!

Show highlights:
(00:00) Corey introduces the show & guest Andrew Shieh
(00:54) Andrew shares the story of how SmugMug became AWS's first enterprise customer
(02:17) Discussion on the evolution of AWS's customer service
(04:31) Reflections on the expansion of AWS services
(06:08) The critical role of Amazon S3 in SmugMug's operations
(09:32) SmugMug's cloud strategy and optimization
(12:24) AWS's interest in unique customer stories and feedback
(13:50) Andrew discusses challenges and solutions in cloud adoption
(17:38) Andrew shares his experiences at AWS re:Invent, offering thoughts on the conference's evolution
(21:09) A look into AWS's pricing formulas and business insights
(31:55) Closing thoughts

About Andrew
Andrew "shandrew" Shieh is a multidisciplinary engineer, focused today on making the AWS cloud do what it promises to. Andrew started as an environmental engineer, focused on energy efficiency and air pollution modeling, but quickly got dragged into tech after spending most of college at the help desk of the Unix computer cluster.

Andrew's current interests include sustainability, cost efficiency, and economics. Most AWS service teams are his friends and he enjoys (a bit too much) talking to his SmugMug and Flickr coworkers about AWS. He recently spoke at AWS re:Invent about how his children (9 and 11) helped to teach him the value of trivia as a means of learning backwards. He also wrote a keynote for re:Invent's pandemic year, and has rescued billions of precious photos from extinction.

Links Referenced:
SmugMug: https://www.smugmug.com/
S3 Intelligent-Tiering blog post on Duckbill Group: https://www.duckbillgroup.com/blog/s3-intelligent-tiering-what-it-takes-to-actually-break-even/
Mastodon: https://hachyderm.io/@shandrew
LinkedIn: https://www.linkedin.com/in/shandrew/
Flickr: https://flickr.com/photos/shandrew
Andrew's talk on "Learning Backwards" at re:Invent 2023: https://www.youtube.com/watch?v=od09dD7mc6k
Transcript
I found that AWS, they enjoy these customer stories that are not typical cases.
Welcome to Screaming in the Cloud. I'm Corey Quinn. And my guest today is someone that I
have been subtly and then not so subtly and then outright demanding appear on this show
for a long time now. Andrew Shieh is a principal engineer at SmugMug slash Flickr slash whatever else you've
acquired recently. Andrew, thanks for joining me. Thanks a lot, Corey. It's great to talk to you.
You have been someone who's been a notable presence in the AWS community for far longer
than I have. You've been in your current role for 12 years at this point, and you've been around
the edges of a lot of very interesting AWS problems as a
result. At one point, SmugMug was the largest S3 customer that I was aware of back when S3
launched. Weren't you the first customer or something like that? Yeah, we were the first
enterprise customer of AWS way back in early 2006. There was a cold call from AWS.
I think it was back when people thought of them as a bookseller.
They just cold called a bunch of companies that they thought might be interested in storage.
Hey, kid, want to buy some object storage?
Yeah, it's better than the first service they launched shortly beforehand.
Hey, kid, want to buy a messaging queue?
Because SQS was the first service.
And the correct response from almost everyone
was what the hell is that?
At least storage is something
theoretically people could understand.
And they sold our CEO, Don, on S3.
And we just became their key customer,
especially in those early, early days. I didn't join until six years
later, but I got to see a lot of the effects of it over time. And in the last decade, I've seen
similar changes and all the things that have shipped, all the things that we've had
influence over, some services that we helped to kick off. It's been really, really interesting
to see it as that kind of customer. We're still a small company, but we have this unusual level
of influence in that cloud AWS world. Do you feel like you still do? Because one of the things that
I've found, at least from where I'm sitting, is that the way that AWS themselves engage with customers, regardless of what they say, if you ignore what they
say and pay only attention to what they do, which is a great approach for dating, incidentally,
then you see that there's, at least from my perspective, there's been a significant shift
away from a lot of the historical customer obsession in a good way into what feels like
customer obsession in a vaguely creepy way.
I no longer have the same sense on some level
that as a customer that AWS is in my corner
the way that they once were.
It's tricky to speak from my point of view
because we see, I think,
just as where we live as a customer,
SmugMug and Flickr,
we see the best parts of AWS.
So we generally, we talk to the
top people on the service teams. We get amazing account managers and TAMs. And the people we talk
to are just like the best representative of AWS. They really do talk about customer obsession and
show it. So from my point of view, that's been going really well, and AWS is one of the only companies that I see really living up to what they claim in their values. So,
you know, customer obsession is definitely always there. Anytime I talk to a service team,
they definitely have that kind of empathy that a lot of other companies don't. They
really try to understand what we're trying to do, what our goals are, and put themselves in our shoes. I've seen small companies do that,
but very few companies have that in their culture and actually exercise it.
What have you seen shift over the decade plus that you've been working with AWS?
Because it's very hard on some level to reconcile
the current state of AWS to what it was the first time I touched it. I remember being overwhelmed by
how many services there were, and there was no way I was going to learn what all of them did.
And there were about 12. Now it feels like you take that, add an order of magnitude, double it, and keep going, and you get a lot closer to what's there today.
I still get as overwhelmed as I did back then.
What have you seen that's shifted on the AWS side of the world?
So in terms of services, it definitely long ago went past the size of things I can keep track of. We still try to get good highlights, and there's still a lot of need for, like, hey,
we need to know about all these new things that are coming out. But we're kind of past the point
where we try to track like every new service and try to run every new service and see what it does.
You must be so happy, my God.
We used to do a lot of that. Just like, hey, we'll talk to every service team.
For me, the big tipping point was
ground station, because as much as I advise people otherwise, I still fall into the trap myself
of when AWS announces a new service, that means I should kick the tires on it. And they announced
ground station that talks to satellites in orbit. I'm like, how the hell am I going to get a CubeSat
up there? Followed by, wait a minute, they're building things that are for specific use cases among specific customers. Power to them. That's great. But that doesn't mean it
becomes my to-do list for the next few quarters of what to build. And every service is for someone,
not everything is for everyone. And I don't think, for example, the IoT stuff is aimed at me at all.
I don't have factory size problems. I don't need to control 10,000 robots at this
point in time. So that means that I'm a very poor analog of the existing customers. Some things like
S3, every customer basically uses S3. Some cases they don't realize it, but they are. And other
things get a little far flung out there. And I think that S3 as a raw infrastructure service
really does represent AWS at its absolute best. In every way, S3 represents the best. It's
not just the service itself, but all the people that are on that team from leadership and engineers,
they all seem to have the answers to the questions that we have, even when they've had some problems
and failures, but, you know, they respond to those really well too.
That's really the heart of AWS.
And I hope that is where the rest of the services
are going to.
Honestly, it's the most Amazonian service I can think of,
even down to having a bad name
because S3 stands for Simple Storage Service.
And it has gotten so intelligent
that calling it simple is really one of those areas
that's, okay, now it just sounds like outright mockery.
What, you don't understand S3, but it's so simple.
If you understand S3 thoroughly from end to end,
every aspect of the service,
they have a job waiting for you.
But you're right about the responsiveness.
Back when intelligent tiering launched,
I had issues with it from the perspective of,
okay, that monitoring charge
and a bunch of small objects
that'll never transition is ridiculous.
And the fact that it's going to charge you
for a minimum of 30 days
means that for anyone using it in a transitory way,
it's a complete non-starter.
A year or so goes by and they reach out like,
there, look at our new pricing.
Does that address your concerns?
Holy crap.
I'm just a loud mouth on the internet.
I use it, sure, but I'm not in the top any percentage of customers as far as usage goes on this. And they're right. And the answer was, yeah, mostly the only weird edge case is when you have an object between 128 kilobytes and somewhere between 148 and 160 kilobytes. We have the math on a blog post on our website that says for this
very weird edge case of no changes to this, it will wind up costing you more for the monitoring charge than it will
for the potential cost savings. But that was very difficult to get to, and it hits almost nobody.
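(For the curious, here's a rough sketch of that break-even arithmetic in Python. The prices are illustrative us-east-1 list prices and are assumptions on my part; the Duckbill post linked in the show notes works through the full version, which also accounts for the time an object spends in each tier before transitioning.)

```python
# Back-of-the-envelope break-even for S3 Intelligent-Tiering:
# the monitoring fee ($0.0025 per 1,000 objects per month) has to
# be beaten by the storage saving of the tier an object lands in.
# Illustrative us-east-1 prices -- check the current pricing page.

MONITORING = 0.0025 / 1000   # $ per object per month
STANDARD = 0.023             # $/GB-month, Frequent Access tier
TIERS = {
    "Infrequent Access": 0.0125,       # $/GB-month
    "Archive Instant Access": 0.004,   # $/GB-month
}

for name, price in TIERS.items():
    saving_per_gb = STANDARD - price            # $/GB-month saved
    breakeven_kb = MONITORING / saving_per_gb * 1e6
    print(f"{name}: break-even around {breakeven_kb:.0f} KB")

# Prints roughly 238 KB (IA) and 132 KB (Archive Instant Access).
# Objects under 128 KB never auto-tier, so monitoring is pure cost
# in the narrow band just above that threshold.
```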
It's a great default storage class. Yeah, I think at first, Intelligent-Tiering, because it had the word intelligent in it, it sounded like some automated,
like, hey, we'll figure out what class to put your objects in. But when we looked at it further,
it was definitely like, oh, that makes sense. You trade off some additional costs of moving things back into S3 standard for the read charges. I find it useful for use cases that you might very
well see. I don't know how widespread the use case is,
but back when I used to do work
at an expense management company,
receipts were generally uploaded once
and read either zero or one times,
but you had to keep them forever.
So yeah, transitioning them into infrequent access
made an awful lot of sense.
But what you'll often see, I imagine,
especially if it was your photo hosting site,
is every once in a while, something that has been there for years suddenly gets a massive amount of traffic.
And if you write naive code, not that I would ever do such a thing, you wind up with every read coming from S3 because you don't have caching.
And suddenly it winds up blowing out your bill because, OK, it's a five megabyte photo that is downloaded 20 million times in 24 hours that starts to add up. So the intelligence around that starts to
be helpful. You can beat S3's intelligent tiering if you have an understanding of the workloads in
the bucket and a deep understanding of the lifecycle story, sure. But for most people,
my recommendation has shifted to: if you don't know where to start, start with this. There are remarkably few edge cases where you'll find yourself regretting it. We're definitely in one of those cases. We spend a lot of time
working on optimizing storage, figuring out how to class things appropriately. There's a ton of
caching in our layers. It's all about delivering
all of our photos exactly how our customers want. And the photo model is an interesting one for S3
too, because I think they've talked about it a few times in like reInvent keynotes. But the basic
model is both of our services at SmugMug and Flickr, we store the original files that the customer uploads
because in general, photographers want,
you know, they want their original photo.
They don't want you to compress it down
like Google Photos does.
They want to be able to get back the original photo
that they upload bit to bit.
So we store those,
but they're generally not well compressed.
They're usually too big for display.
So we spend a lot of time processing those, doing a lot of GPU work to deliver them really quickly.
And, you know, minimizing both the S3 costs and also making them deliver across the network as fast as possible.
There's a ton of trade-offs there,
and we spend a lot of time thinking about that.
But when it works, like S3, that's one of the most amazing parts
is how few engineers we really need who have a deep knowledge of S3.
And we can run this entire storage business on top of S3 without actually knowing that much,
without having to touch it very often, except for a few specific things like the cost management and
how to optimize the storage. But once it's up, it runs so much easier than any kind of storage
I've ever used. Could you imagine using an NFS filer for this or a SAN somewhere? And this is also the danger. What you're doing at SmugMug is super interesting,
especially when it comes to S3.
And I feel on the same level,
it makes terrible fodder for keynotes because you are the oldest S3 customer
and one of the largest.
So taking lessons from what you do
and mapping them to my own S3 usage,
which on a monthly basis is about $90,
is absolutely the wrong
direction for me to go in.
It's a, yeah, by the time that the general guidance doesn't apply to you, yeah, you know.
There's a difference between best practices and, okay, you're now starting to push the
boundaries of what is commonly accepted as standard or even possible.
Yeah. At that point, if you tell me
that my standard guidance for S3 doesn't apply to you,
I will nod and agree with you.
But that's part of the trick I found of being a consultant
is that I recognize the best practice guidance
and also where it starts to fall down
because at certain extents,
everything winds up becoming a unicorn case.
I found that AWS likes talking about those.
They really, they enjoy these like customer stories that are not typical cases.
I think they actually talked about storage classing.
Peter DeSantis in last year's keynote or keynote at night, he talked about storage classing.
It was two years ago now. Welcome to 2024.
Yeah. Late night surprise computer science lecture with Professor DeSantis. It's one of my favorite, favorite things at reInvent. That was basically exactly the model we've been working on
with, you know, what to do with all this idle storage and balancing it with the really high
volume stuff, all the high volume uploads, high volume usage,
and then all of that storage volume that just sits there.
While it's not our problem to manage it, we have to help AWS come up with some of these ideas about how to achieve our low-cost, unlimited storage, working with them to make it into a viable business. That's really our low-level goal there. I have to ask, because the only way to discover things like this is basically
community legend and whatnot. AWS talks a lot about the 11 nines of durability that S3 has,
which is basically losing one object every trillion years or whatever it works out to.
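(The arithmetic behind that kind of claim is simple enough to sketch. Eleven nines is an annual per-object loss probability of about 10^-11; the fleet size below is a made-up illustration, not a SmugMug figure.)

```python
# Eleven nines of durability: per-object annual loss probability
# of roughly 1e-11. Multiply by fleet size to get expected losses.
annual_loss = 1 - 0.99999999999        # ~1e-11 per object per year
objects = 10_000_000_000               # say, ten billion objects
print(objects * annual_loss)           # ~0.1 objects expected lost per year
```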
Now, let's talk reality.
I've lost data before because some moron who I'm not going to identify because he's me
wound up deleting the wrong object.
It's always control plane issues and the rest.
It's not actual drive failure, which is why I think that metric gives people a false sense
of security.
They're calculating known possible failures. And DR does not factor into it because
you have the same durability metric for S3 infrequent access one zone, which is going to be
in a single availability zone. And the odds of that availability zone being destroyed by a meteor
are significantly less than 11 nines. So let's be clear here. That was also something that DeSantis
covered in that talk,
which was that one zone isn't necessarily one zone, which is to most people probably a surprise.
But in their interest, if they don't have to move the blocks around, it's cheaper for them to leave
the blocks. So if you're transitioning files out, they might not do anything to them. They may stay
in the exact same spot they were when they're in S3 standard. It just gives them the option to spread their blocks around more and to plan more for their performance
on their individual disks. That service, I think, is somewhat misnamed.
Yeah, it's kind of unfortunate, but it's the truth. And I think it's also an example of how a bunch of different use cases exist for this. I don't know if a lot of people are aware of this,
but my constant running joke of Route 53 is a database.
I changed that joke last minute to talk about Route 53.
Originally, it was S3, but enough analytics workloads live on top of S3.
I do it myself, where using it as a database
is not the most ridiculous thing in the world.
I have one workload that grabs a SQLite file, uses that as its database in a Lambda function, then returns it with any changes it's made every time it runs.
And there's logic in there to avoid race conditions and whatnot.
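(A minimal sketch of that pattern, assuming boto3 in a Python Lambda. The bucket and key names are hypothetical, and the race-condition logic mentioned above is deliberately left out; as written, this is last-writer-wins.)

```python
# Pull a SQLite file out of S3, use it as the Lambda's database,
# and write it back with any changes. Bucket/key are hypothetical.
import sqlite3
import boto3

BUCKET = "example-bucket"        # hypothetical
KEY = "state/app.sqlite3"        # hypothetical
LOCAL = "/tmp/app.sqlite3"       # Lambda's writable scratch space

s3 = boto3.client("s3")

def handler(event, context):
    s3.download_file(BUCKET, KEY, LOCAL)
    conn = sqlite3.connect(LOCAL)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS runs (ts TEXT DEFAULT CURRENT_TIMESTAMP)"
        )
        conn.execute("INSERT INTO runs DEFAULT VALUES")
        conn.commit()
    finally:
        conn.close()
    s3.upload_file(LOCAL, BUCKET, KEY)   # last writer wins without locking
    return {"ok": True}
```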
But OK, that's not a best practice for a lot of workloads, but it works really well for this particular one. And the joke I found is that
using text records and DNS as a relational data store is objectively ridiculous. That is the sort
of thing that no one should actually do. So it makes a great joke. It's when people start to say,
well, what about this other idea? Yeah, you're trying to solve a problem that is better solved
by other ways. Here's what they are. Yeah, and S3 was actually a very cheap database for a while. When it started,
they didn't have any operation costs, so they only charged for storage volume. And so you could build
a very inexpensive database on top of S3. Not today, but back when it started, you could have all these zero byte
files. During the beta, that's why they started charging per request. Because as I recall,
people were hammering the crap out of the front ends as a result. And that wasn't a use case we
built the thing for. So how do we re-architect around it? You aren't going to change human
nature. So, okay, make it cost money. People either stop doing it or you'll have enough money to invest to fix the problem.
So what have you seen as far as,
I guess, the trends come and go?
I mean, you've been to reInvent,
I think almost every time.
What was your take on reInvent this past year?
What bigger trends is it speaking about?
I've been to every reInvent except for the very first one.
And I missed the first one
because I just had my first
child born. So I think there's a good reason to miss it, but I've been to everyone since.
So I think that's 10 or 11, if you count the virtual one. Vegas hasn't changed. Vegas is still
like my least favorite place to visit, but the people at reInvent are why I go. It's become this very kind of business-focused conference
versus what I would prefer,
which is more like a grow your community,
think about things.
For me, it's become more like,
hey, we need to talk to these vendors,
these teams at AWS.
And it's just very packed with business meetings, not so much on the, like, interest side. It's okay, but it's very business and sales focused. This year, I also spoke
at the Expo Hall, which was pretty unusual. I gave a talk at the AWS Dev Lounge all about learning about AWS.
I called it learning backwards because it was about AWS trivia.
I think a topic you know very well.
Yeah, and it feels like that's what a lot of the certification exams started as.
It's like, all right, do you know this one weird trick?
Or can you remember exactly what the syntax is for something?
It's like, that is not how you test knowledge of these things.
Something else you've talked about in that vein,
incidentally, has been the value
of having a broad engineering background
as applied to tech.
And what I always found fascinating
about your approach to it was,
you have a background in civil
and environmental engineering,
but you don't take a condescending approach
whenever you have this conversation. It doesn't come from, oh, yes, because I have this background, I am better than
you at the rest of it. You talk about it being an advantage while also accepting that people come
from a bunch of different places. What's your take on it? So I went to Stanford University,
graduated in 1999. So I think when I graduated, I knew nothing about the industry. And I was just kind
of out there. I probably had that attitude where if you didn't go to college, I didn't think you knew what you were doing. But over the years, just because I've worked with so many great people, our CEO,
Don McCaskill, for example, he didn't go to college. And I've worked with so
many other great people who have been great engineers, and they didn't go to college,
but had other great, really formative experiences that turned them into excellent engineers.
I think one of the things that a broad engineering education gives you is that maybe if you don't have the same kind of opportunities, like getting into some job early on and learning about some weird engineering things, the education gives you a different view into it. But I think my favorite example is that AWS has different pricing formulas for every service. A lot of people get overwhelmed by those; when I look at them, I'm like, oh, it's just another equation. You can toss this into a spreadsheet or a calculator, figure out the cost, and do some
estimates down the line. The fact that it's an equation is part of the challenge. There are very few things you buy where you have that level of granularity and variability in
pricing dimensions that affect others. It becomes a full system as opposed to simple arithmetic.
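(In the spreadsheet spirit Andrew describes, here's a toy Python version of one such formula. The prices are illustrative us-east-1 figures, not authoritative; real bills involve tiered rates and many more dimensions.)

```python
# Toy version of "toss the pricing formula into a spreadsheet":
# a monthly S3 estimate from a handful of inputs. Illustrative
# us-east-1 prices (assumptions; the real page has volume tiers).

def s3_monthly_cost(storage_gb, puts, gets, egress_gb):
    storage = storage_gb * 0.023                      # $/GB-month, Standard
    requests = puts / 1000 * 0.005 + gets / 1000 * 0.0004
    egress = egress_gb * 0.09                         # $/GB, first tier
    return storage + requests + egress

# 5 TB stored, 2M PUTs, 50M GETs, 1 TB out -> about $235/month
print(f"${s3_monthly_cost(5_000, 2_000_000, 50_000_000, 1_000):,.2f}")
```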
But so much of my engineering course load back in the day was like, you know, you learn all these
different equations, let's say like fluid mechanics or something like that. There's
super complicated equations and you have to figure out which ones to use, what goes where, what's applicable. And compared to that, you know,
figuring out AWS costs is really, really much simpler. But you can kind of tell that the people
who actually create those pricing formulas came from, like, finance or engineering education worlds where, in their minds, it's also very simple. But to most people, the pricing formulas are like way
too complicated. That feels like a trend from AWS that I've noticed, which is increasingly,
it's become apparent that every team has their own P&L to which they're held accountable,
which means that every service, even ones that should not necessarily be so,
have to generate revenue.
And they are always looking to make sure
that there isn't some weird edge case
like that zero-byte S3 cheap-database trick
back when it first launched.
So they have a bunch of different dimensions
that grow increasingly challenging to calculate.
And the problem that I've discovered
is the interplay between those things.
Okay, you put an object into S3, great.
It doesn't take much to look at the S3 pricing page
and figure out what that's going to cost.
But now you have versioning turned on.
Okay, what does that potentially mean?
What does the delta look like?
There's bucket replication that adds data transfer.
It causes AWS Config events that potentially cause rule evaluations.
If you have CloudTrail
data events turned on,
then it costs 20 times more
to record that data event
than the actual data event cost you.
And then those things in turn
spit out logs in their own,
which in turn generate
further charges downstream.
And if you're sending them
to somewhere else
that's outside of AWS,
there's egress charges to contend with. And it becomes this tertiary, quaternary, and so on and so forth
level of downstream charges that on a per object basis are tiny fractions of a penny. But that's
a fun thing about scale. It starts moving that decimal point right quick. Yeah, going back to
the engineering education, I think that's another thing is orders of magnitude and understanding.
You know, I look at our S3 usage and, you know, people talk about millions of things.
We have billions, multiple billions.
And when you're casually talking about it, it doesn't sound that different. But what I tell people is like, you know, if your million things takes one day, your billion things takes three years. So people don't usually have a great
way to conceptualize the large numbers like that. Like we're doing, you know, a trillion operations
a year. What is that? So conceptualizing the orders of magnitude, that's another thing I feel like the broad engineering education has helped me a lot.
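(The rule of thumb checks out with trivial arithmetic, shown here as a one-off sanity check.)

```python
# Sanity check on the rule of thumb: a billion is a thousand
# millions, so if a million items takes a day...
days = 1_000_000_000 / 1_000_000                     # 1,000 days
print(f"{days:.0f} days ≈ {days / 365:.1f} years")   # ≈ 2.7 years
```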
Yeah. There's a lot to be said for folks with engineering backgrounds in this space. I've
worked with some who are just phenomenally good people at a lot of these problems and thinking
about it in a whole new way. That's not all of them; it's one of those areas that's indicative and curious. Let's also be clear here, like the old joke: what do you call the last-place student to graduate from medical school? Doctor. Same approach, where different people align themselves in different areas. You take a very odd, holistic
view to a lot of these things that, despite the large number of engineers
from Stanford I have worked with over the course of my career, almost no one else sees things the
way that you do in quite the same way. So I would wonder how much of that is the engineering
background versus your own proclivities and talents. That's a good question, and I'm not
really sure about that. You know, where I ended up learning the most in college was actually I worked at the Unix computer cluster help desk. I did that for
the four years that I was there. So just doing that not only got me more interested, I used to
be a big like Mac evangelist, but this got me into the Unix world. I'm like, hey, well, this,
you know, this has some potential. It's really interesting.
But also just working with all the customers, whether it's other students, CS students working
on some kind of project in the computer cluster and professors trying to get things done. It's a
really good experience, I guess, for just understanding people more, working at a help desk.
There are definitely people out there in our industry
who started out doing help desk support work
at computer clusters who've used that experience.
And because of that,
they've gained just such a great customer focus
and interest in what people do, that kind of empathy.
There's a lot that can be addressed
just from the perspective of thinking about things
from a different point of view.
I think the term itself has become almost radioactive
as far as toxicity goes,
but diversity of thought comes into this an awful lot
where what is the background?
What is the lens you view things from?
I mean, I tend to look at almost every architectural issue these days
from the perspective of cost.
Now, that is not necessarily the best way to approach every problem,
but it's one that doesn't tend to be as commonly found,
and it lets me see things very differently.
Conversely, that means that even small-scale stuff
where the budget rounds up to a dollar for spend,
I still avoid certain
configurations because, oh, that'll be 20 cents of it. That's going to be awful. It's hard to
stop thinking like this. Yeah, you can understand some parts of AWS services. You can understand
what they're going for by their pricing. Like if some service releases a serverless model, you can kind of tell by the pricing whether they want everyone to default to this version of their service, or if it's like a niche product just for small usage, or if it's intended for large usage.
A lot of times the pricing can give that away.
I was very excited about Kendra until I saw it started at $7,500 a month at launch.
It's like, oh, it's not for me. Okay. Yeah. Price discrimination is a useful thing. Honestly,
when I look at a new product, the first thing I pull up is I ignore the marketing copy and go to
their pricing page. Is there something that makes sense for either a free or trial period? Is there
a call me for enterprise at the other end? And in between is a little less relevant. But if they don't speak to both ends of that market, they're targeting something very
special. Let's figure out what that is. Yeah. And in my position, I look at the pricing side
and what it can do for our business is usually the bigger and more difficult question. But you
really learn a lot from the pricing side, which is something I hadn't had to do until I took over the AWS part of our business: you can figure out a lot of things in this kind of reverse way, looking at how their business people think of their services. It's neat to see how that starts manifesting in customer experience too. It's honestly one of
the hard parts for me, whenever I deal with AWS is remembering just the sheer scale of the company.
I come from places where 200 employees is enormous. So that doesn't necessarily track.
And there's no one person sitting there like a czar deciding that this is going to be how they
view it. Instead, it is a pure, it's a pure story of different competing folks working together
and collaborating in strange ways.
It's an odd one.
One last topic I want to get into
before we call this an episode.
I ran a photo scavenger hunt on site at reInvent,
and you won a prize to be released shortly.
But I want to just say how fun it was.
Oh good, people are using this,
and also,
who's that person that's pulling way ahead of the pack? Effectively, despite doing all these
fun things at reInvent, you also walked around with a camera the whole time, which, you know,
working at SmugMug. Oh, OK. I start to get it. You like photography. Awesome. But you got some
great shots that I'll be posting up when we talk about this a bit more in the coming weeks. Great. That was a big surprise to me when you told me. I enjoy photography a lot. One of the
great things about working at SmugMug and Flickr is working in the photography culture, working with
photographers. I love going on photo walks, doing this kind of thing. So that was one of the more fun parts of reInvent. I really enjoyed doing
that. And it was also interesting to see how you actually made that app quickly.
The other thing I noticed was that in Werner's keynote, he actually talked about a photo app
that was uploading entire large photos across the network. And I think he was talking about you because he was
talking about the trade-offs between delivering, you know, not doing any work on a photo and
sending the whole thing to a customer versus shrinking it down and sending it faster.
And your app actually sends the entire photo over because, you know, I imagine developing photo apps is not your business. Yeah, it's partially that. I also
built the thing entirely via Gen AI. I'm curious, as someone who walked around there dealing with
the sometimes challenging radio signals, was that a problem at any point during the conference for
you? The fact that it didn't compress or do any transformation on the photos before it went up?
No, because you only downloaded the photo if you wanted to view it again. So I only did it a few times just to see.
I always poke around photo apps to see how they work.
And I just noticed that because I'm like, oh, this photo is coming down the line really slow.
And of course, it's just the original thing that I sent, you know, which is fine.
In hindsight, that's blindingly obvious.
Yeah, I should have definitely fixed that.
There are existing libraries; it's basically just include one and call it good.
It's not, yeah, it's not like it's something
that would have been a massive challenge.
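(A minimal sketch of that easy fix, assuming the Pillow library; the size cap and quality setting are arbitrary choices, not anything from the actual app.)

```python
# Downscale with an existing library before upload instead of
# shipping the multi-megabyte original. Pillow's thumbnail()
# resizes in place and preserves aspect ratio.
from PIL import Image

def shrink_for_upload(src_path: str, dst_path: str, max_px: int = 2048) -> None:
    with Image.open(src_path) as im:
        im = im.convert("RGB")            # JPEG can't store alpha
        im.thumbnail((max_px, max_px))
        im.save(dst_path, "JPEG", quality=85)
```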
I just used a lot of the AWS Amplify primitives
combined with a couple of generative AI tools
because I don't know front end stuff.
So let's see if the robot does.
And it works surprisingly well.
Then of course I wound up with someone
who is not a robot, Danny Banks at AWS.
He was a principal developer,
sorry, principal technologist, principal design technologist, get titles right, on the Amplify
team. And he was great at, okay, that looks like crap. Let's fix this. Like, oh, thank God,
someone who's good at this stuff. Turns out that one of the best approaches to getting something
done right is to do a really crappy version that offends the sensibilities of experts.
That's a tactic that I take frequently.
I just used that recently.
I really want to thank you
for taking the time to speak with me.
If people want to learn more
about what you're up to
and how you view things,
where's the best place for them to find you?
That's a great question.
I used to be pretty active on Twitter,
but no longer for some reason.
I can't imagine why that would be.
So I would guess Mastodon. I'm @shandrew at hachyderm.io. And also you can contact me on LinkedIn, on Flickr, and anywhere else. I'm usually under shandrew, S-H-A-N-D-R-E-W.
And we will, of course, put links to that in the show notes.
Thank you so much for taking the time to speak with me today.
I really appreciate it.
Thank you very much, Corey.
It was a pleasure talking to you, as always.
Andrew Shieh, Principal Engineer at SmugMug slash Flickr.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast,
please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment about how you don't see the problem
with downloading the full-size image every time someone wants to view it from S3 infrequent access.