Screaming in the Cloud - Episode 29: Future of Serverless: A Toy that will Evolve and Offer Flexibility
Episode Date: September 26, 2018

Are you a blogger? Engineer? Web guru? What do you do? If you ask Yan Cui that question, be prepared for several different answers. Today, we're talking to Yan, who is a principal engineer at DAZN. He also writes blog posts and develops video courses. His insightful, engaging, and understandable content resonates with various audiences. And he's an AWS serverless hero!

Some of the highlights of the show include:

- Some people get tripped up because they don't bring the microservices practices they learned into the new world of serverless, and they face many challenges as a result
- Educate others and share your knowledge; Yan does, as an AWS hero
- Chaos engineering meets serverless: figuring out what types of failures to practice for depends on what services you are using
- An environment predicated on specific behaviors may mean enumerating the bad things that could happen, instead of building a resilient system that works as planned
- API Gateway: confusing for users because it can do so many different things; the right thing to do, given a particular context, is not always clear
- Serverless feels like a toy today, but it is good enough to run production workloads; the future of serverless is continued evolution and more flexibility
- Serverless is used to build applications; DevOps/IoT teams and enterprises are adopting serverless because it makes solutions more cost-effective

Links:

- Yan Cui on Twitter
- DAZN
- Production-Ready Serverless
- Theburningmonk.com
- Applying Principles of Chaos Engineering to Serverless
- AWS Heroes
- re:Invent
- Lambda
- Amazon S3 Service Disruption
- API Gateway
- Ben Kehoe
- DigitalOcean
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is generously sponsored
by DigitalOcean. I would argue that every cloud platform out there biases for different things.
Some bias for having every feature you could possibly want offered as a managed service at
varying degrees of maturity. Others bias for, hey, we heard there's some money to be made in the cloud space. Can you give us some of it?
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they're using it for various things,
and they all said more or less the same thing. Other offerings have a bunch of shenanigans,
root access and IP addresses.
DigitalOcean makes it all simple.
In 60 seconds, you have root access to a Linux box with an IP.
That's a direct quote, albeit with profanity about other providers taken out.
DigitalOcean also offers fixed price offerings. You always know what you're going to wind up paying this month,
so you don't wind up having a minor heart issue when the bill comes in.
Their services are also understandable without spending three months going to cloud school.
You don't have to worry about going very deep to understand what you're doing.
It's click button or make an API call and you receive a cloud resource.
They also include very understandable monitoring and alerting.
And lastly, they're not
exactly what I would call small time. Over 150,000 businesses are using them today. So go ahead and
give them a try. Visit do.co slash screaming, and they'll give you a free $100 credit to try it out.
That's do.co slash screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
Today I'm joined by serverless hero, Yan Cui.
Welcome to the show.
Hey, Corey. Good to be here.
So you do a lot of things.
You are a principal engineer over at DAZN.
You wind up doing your own video course at productionreadyserverless.com.
You blog at theburningmonk.com.
It feels like you're something of a kindred spirit in that when someone asks me,
so what do you do?
I have to figure out, well, okay, from what perspective?
Because there's about 15 different possible answers to that question.
Yeah, I don't know.
Was that a question?
Only if you want it to be.
Okay, so let's start at the very beginning, I guess,
since you have so many things that are, I guess, across the board here.
I guess, who are you?
You sort of burst onto the scene, at least to my awareness,
about a year or so ago.
You were named a serverless hero at the beginning of this year.
You were writing an awful lot of content that I found myself, I guess,
tripping over, for lack of a better term, that resonated very
well with various audiences that hit on different points.
And I always came away with the same perspective of, wow, that's
insightful. But we never really got to have much of a conversation about this until
relatively recently when we tripped over one another at a conference.
Yeah. So I guess, for me, I've been writing for quite a long time now. I find the writing and the blogging a really useful way to remind myself, and also to force myself, to really understand something beyond the basics. And it's amazing that when you force yourself to get to the point
where you are capable and able to explain something to someone else
in words that are a lot easier to understand,
then you really force yourself to reach a level of understanding
that you probably didn't think you needed at first.
And so I've been writing about various different things
around computing and functional programming.
I was really sort of active in the F#
and functional programming scene for quite a while.
And serverless is just a thing that I got really, really interested in,
I guess, 2016, around that time when I joined a social network,
which eventually ran out of money.
But when we were there, we moved a lot of the things
that we were working on to serverless.
And we really learned a lot about, in terms of,
when you run serverless in production,
what are all the sort of challenges that you end up running into?
And I think it's a similar problem that a lot of people are running into now,
whereby it's so easy to go to production
with Lambda, sometimes you
kind of forget that
all the things you learned from the microservices
transition from
monolith to microservices, all the
things still very much apply.
And for some people that are moving straight from
on-premises to the cloud with
Lambda, they are missing
some of that learning from that process.
And a lot of people are getting tripped up
because they weren't ready to think about
how do you bring some of the good practices we learned
during the microservices era to this new world of serverless
and they're finding the problems of,
okay, how do we monitor things?
How do I debug this hugely distributed system
with so many different Lambda functions
and with both synchronous
as well as asynchronous event sources?
So yeah, I find there's a huge amount of things there
to learn and to share with regards to serverless.
So I've been really busy just learning myself
but also trying to share as much as I can.
There's something very valuable about giving back
in the context of having learned something new and going and telling that story and sharing it
with new folks onward. I mean, to some extent, I believe that that's what they base the hero
program from AWS on. What was it like joining that? I mean, I've looked into it from the outside,
but I've never been invited to participate in it. Apparently, actively insulting what they do every week in a sarcastic newsletter isn't the best way to get them to invite you to do things.
Who knew? But it seems to me from the outside that it's based largely upon helping educate
people, helping bring people along for, I guess, the knowledge journey. Can you talk a little bit
about what it was like to be invited and effectively what a serverless hero is?
That is a good question.
I don't know if I really have a good answer for that.
I think the reason why I'm invited is because I'm doing all of these articles
and also doing video courses and trying to share and trying to, I guess,
bring good practices into the serverless community.
So what it's like, it's definitely been really helpful for me personally
in terms of just getting more recognition for what I'm doing,
I guess maybe to some extent bring some authority to what I'm writing
and saying as well that more people think, okay,
this is not just some crazy guy shouting from the rooftop, but if AWS is happy to give
him some, I don't know, unofficial title, maybe he does know what he's
talking about. So from that regard, I think it's been really useful. And also, you get a
free ticket to re:Invent, which I think is pretty awesome.
Sometimes that's really all it's about. Those of us sitting outside of the circle
get to pay for it. So what
was interesting is we wound up catching up relatively recently in Dusseldorf, Germany,
of all places, where you gave a talk on the concept of chaos engineering meeting serverless,
which is fascinating to me on a variety of different levels. I mean, first off, isn't
running on a distributed system similar to Lambda effectively
its own form of chaos engineering experiment?
Yes, and
also, I guess,
I forgot who it was,
maybe it was
you, someone who wrote
recently about how if you run
in US East 1, that's
kind of running a chaos experiment in itself
because US East 1 is so prone to having all kinds of problems that people don't see in many other
regions. Well, there's so much running there too that every slight hiccup winds up affecting
someone. So to some extent, that entire region has a bit of a bad rap. But it's interesting just from a perspective, at least where I sit, of trying to understand how chaos engineering would even look in a serverless context.
Because to some extent, what you're doing is you're writing code.
You're writing your arbitrary code and handing that to a provider.
And at that point, almost everything's happening on the other side, I guess, of the Amazonian wall in that context, where, okay, it's going to go and it's going to do these things.
And if Amazon breaks, it could break in new and exciting and interesting ways that I may not be able to accurately predict.
How do you wind up seeing that?
How do you, I guess, figure out what types of failures to wind up practicing for?
I guess it really depends on what it is that you're doing,
what services that you're using.
And also with chaos engineering,
it's not just about figuring out
what happens to Amazon services.
There's so many different errors
that can happen between your own services
and also within your own services as well.
And I think even though we can probably rely on Amazon
to do a certain degree of chaos engineering to make sure that the infrastructure side of things is well tested and
resilient to many forms of failures, ultimately, as someone who
owns the system and is responsible for the user experience we give to our users,
we have to aim for a level of resilience beyond what we get out of the box with Lambda.
And from that regard, we have to also test failures, and our resilience to those failures,
at the application level.
And that's something the folks at Gremlin, who offer a commercial solution for running
chaos experiments, are also focusing on.
And in fact, recently they have just announced
a new application layer failure injection framework
that you can potentially use from Lambda as well.
I think right now it's only available for Java,
but they're going to make it available for other languages too.
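As a rough illustration, application-level failure injection in a Lambda function can be as simple as a wrapper like the following Python sketch; the decorator and its environment variables are made up for illustration, not Gremlin's actual framework:

```python
import os
import random
import time


def inject_failures(handler):
    """Hypothetical chaos decorator: injects latency or errors into a
    Lambda handler at a configurable rate, controlled via environment
    variables so experiments can be toggled without a redeploy."""
    def wrapper(event, context):
        failure_rate = float(os.environ.get("CHAOS_FAILURE_RATE", "0"))
        max_latency_ms = int(os.environ.get("CHAOS_MAX_LATENCY_MS", "0"))

        if random.random() < failure_rate:
            # Simulate the partial failures seen in production: either
            # added latency or an outright exception.
            if max_latency_ms > 0:
                time.sleep(random.uniform(0, max_latency_ms) / 1000.0)
            else:
                raise RuntimeError("chaos experiment: injected failure")
        return handler(event, context)
    return wrapper


@inject_failures
def handler(event, context):
    return {"statusCode": 200, "body": "ok"}
```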
The challenge almost goes beyond that, from where I sit,
to a point where, I mean, I know they've been beaten up
for this an awful lot, and I'm not trying to belabor the point,
but back when they had their big S3 issue a year or so ago,
that wound up not just taking down S3,
but an awful lot of other things that had baked in
under-the-hood dependencies on S3.
You'll see something similar even with or without serverless
where you're going to have an
environment that's predicated entirely upon certain behavior patterns. But if US East 1, for example,
drops down and your entire strategy is to move a bunch of traffic over to US West 2, in isolation,
when you test that, everything goes well. In practice, there's going to be congestion on the
control plane. A lot of people are going to want to be failing over at the same time. You almost have
a herd of elephants problem where at that point, that's the sort of latency trickle-down effect
that is difficult to predict without a very thorough understanding of how Amazonian systems
behave. So to some extent, it almost feels like you're in a position of having
to enumerate all the bad things that can happen that could possibly go wrong, rather than trying
to build a resilient system aimed at a wide variety of problems. Am I thinking about that the wrong way?
I think with chaos engineering, it goes beyond predicting what can go wrong. Take
the example you just gave, in terms of what happens when a region goes down.
In theory, you may have predicted how things would go wrong,
how you can shift traffic around, but you never really know for sure
whether or not things will play out the way you expect them to
until you actually try it.
In the same way that a lot of companies spend a lot of time
coming up with sophisticated plans for disaster recovery, how they move
different workloads around to different data centers
and so on, but they never exercise them
in reality. So chances
are when something does happen,
your disaster recovery plan
may not work the way you planned.
So one of the practices that people do with regards to chaos engineering
is to actually exercise those scenarios. So for example,
you may plan a game day. Netflix does this from time to
time, whereby they will plan a game day
and actually trigger a region-wide failure and see
whether or not their system is able
to recover from those
regional failures the way they hypothesize that it should.
So part of doing chaos engineering is about exercising those failure modes
and see how your system actually behaves so that you can learn from it.
And I think that's really what it comes down to is learning.
It's, I mean, not to tell Netflix they're doing it wrong,
but I wish it was easy to wave a hand
and see if an issue you're seeing is just that an entire region broke.
It never tends to manifest that way.
Things start working intermittently.
Some services start responding with strange messages.
Some wind up responding at increased latencies.
But very rarely is it a case of everything going dark and nothing being responsive.
Invariably, and I still blame most monitoring companies for this one,
you wind up in a place where, in every single environment you're in,
every person who has an issue pops their head up and says,
is it just me or is it my infrastructure?
I mean, the best early warning sign we still have in some cases is DevOps Twitter.
There's no great way to say, is it my crappy code?
Is it our last deploy?
Or is this a wider provider issue?
Yeah, that's really funny you say that, because Amazon has been traditionally really slow
at updating their service health dashboards.
And oftentimes I find myself asking the same questions.
Oh, is it my infrastructure?
Is it something happening in AWS?
Nothing is updating in the service health dashboard.
And then you go to Twitter and see whether or not other people are also
complaining about AWS being impacted in the region that you are in as well.
It's funny, you're kind of always outsourcing your AWS monitoring to Twitter.
Oh, yeah.
And there are ways to fix this.
I mean, it would be interesting,
for example, if PagerDuty
would wake you up
with a notification that says,
hey, by the way,
we're seeing more than
two standard deviations
of other people in this region
also being paged right now.
It would shave 15 minutes
off most companies' response plans
because they're immediately aware
that, oh, it wasn't someone
pushing bad code
or a disk filling up
or a database falling over.
No, no, this is a provider-level problem. Just getting that first-pass
triage is something most companies can't do themselves. So that's a whole separate thing to rant on.
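Purely as a hypothetical sketch (no monitoring vendor is claimed to ship this), the two-standard-deviations check described above might look like:

```python
from statistics import mean, stdev


def looks_region_wide(current_pages: int, baseline: list[int]) -> bool:
    """Hypothetical check: are more customers in this region being paged
    right now than two standard deviations above the historical norm?"""
    if len(baseline) < 2:
        return False  # not enough history to establish what normal is
    return current_pages > mean(baseline) + 2 * stdev(baseline)


# Example: 40 customers paged now versus a typical baseline of around 5
print(looks_region_wide(40, [4, 6, 5, 7, 3, 5]))  # True: likely provider-level
```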
Instead, let's talk about something else that irritates people to no end, API Gateway.
You've written a fair bit on it lately. You've been going into some depth as far
as how to work with it, various things it can do. And I have to say that whenever I work with API
Gateway, I come away feeling more confused than I did when I started. Is that just me,
or is it really confusing?
It is really confusing, in part because it can do so many different things. And I guess it's not always clear, given all the different options,
what is the right thing to do in a particular context.
And also, there are some, I guess,
peculiarities to how API Gateway works.
For instance, it just never occurred to me the first time I did it
that when you create a custom domain name,
it's going to create a CloudFront distribution,
but for some reason it doesn't enable CloudFront caching.
So if you want to have caching enabled,
you have to do it at the API Gateway layer,
or now you can do it with a regional endpoint
and have your own CloudFront distribution
for that custom domain name.
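As a concrete sketch of that second option, here is what creating a regional custom domain name can look like with boto3; the domain name and certificate ARN are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Create a REGIONAL custom domain name instead of the default
# edge-optimized one; you can then front it with your own CloudFront
# distribution and configure caching however you like.
response = apigw.create_domain_name(
    domainName="api.example.com",  # placeholder domain
    regionalCertificateArn=(
        "arn:aws:acm:us-east-1:123456789012:certificate/placeholder"
    ),
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Point your CloudFront origin (or DNS record) at this hostname:
print(response["regionalDomainName"])
```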
Also, all the different authentication mechanisms that it supports, and when to use which one.
And a lot of that is, I guess, something that I have had to learn myself and through experiment
and also just through different use cases that have come out in my line of work.
I wish there was better documentation, better education out there from AWS
in terms of providing guidelines on when you should use, say,
IAM authorization versus Cognito versus something custom,
like a custom authorizer function, for example.
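As a minimal sketch of that last option, a custom authorizer is just a Lambda function that returns an IAM policy; the token check here is a stand-in for real validation (a JWT library, a database lookup, and so on):

```python
def authorizer_handler(event, context):
    """Minimal custom (TOKEN) authorizer sketch for API Gateway: inspect
    the bearer token and return a policy allowing or denying the call."""
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow-me" else "Deny"  # stand-in check

    return {
        "principalId": "user",  # identifier for the caller
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```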
And also, the sheer amount of things that's included in API Gateway
does very much feel like a Swiss army knife for all the different things
that you may want to do.
And also, it's not a very cheap service either by comparison to, say,
Lambda invocations.
I think for most people in production, API Gateway is likely to cost a lot more
than what they pay for Lambda.
Oh, absolutely. For those who aren't aware, API Gateway acts as an HTTP or HTTPS front end for a variety of Lambda functions.
But you can also put other things behind it.
It effectively is aware of HTTP verbs that you can wind up leveraging.
It can follow all sorts of interesting and convoluted workflows.
It's more or less a networking Swiss army knife,
by which I mean all of the instructions are apparently written in Swiss German
and no one's really clear on how to use it for certain things.
And the feedback from AWS around this service has largely been of the form,
oh, use it however you'd like, which is reassuring but surprisingly unhelpful.
And every time I start using it, I am completely convinced I'm doing it wrong.
But it seems to work for the use case that I have.
So I continue to sit here, and my resentment for API Gateway continues to grow.
And I don't know if you've ever had to interact with API Gateway with its own virtual API to talk to its control plane.
It's one of the most awkward APIs that I've had to work with.
I haven't even gotten to the point of trying to configure API Gateway via direct calls.
Everything I've done with it so far has been through serverless framework.
And that's really the only thing that makes sense
to me. But I do suspect
there's an entire sea of complexity that
I'm not exposed to that could probably solve my problem
in half the time.
It's one of those areas where it's just
a future state
thing. I'll look at that one of these days.
Yeah, I
do the same thing as well.
I mostly interact with API Gateway through the serverless framework,
which simplifies things so much more.
But a few times I've had to, I guess, provision API Gateway with Terraform
and other things that force you to understand how API Gateway resources
are managed and configured, how they link up together.
There's just a whole sea of complexity under the hood
which, frankly,
the serverless framework just
shields you from. Absolutely.
It's one of those areas that I think is still
evolving.
Let's get a little out of the weeds for a minute
and look at big picture stuff.
You're a modern-day thought leader as far as
serverless goes, which means you've been using it
for more than 20 minutes.
Let's look forward at it.
Instead of serverless as it stands today, let's look at what it looks like a few years down the road.
I've been saying for a little while now that today it feels a little bit like a toy in the context of what it's going to look like in a few years. And I've had some people
get very angry at that characterization and say, no, it's not a toy. It's awesome. It's perfect.
We run production on it. Shut your mouth. And other people agree with me. And invariably,
I'm inadvertently starting a war. And then I sneak out the back and take a cab back to the airport
and catch an early flight home. And well, we hope most of those people lived.
Where do you fall on that particular spectrum?
I certainly think that it's good enough to run production workloads on.
And I know many people are running very heavy production workloads on serverless,
with Lambda and other similar function-as-a-service offerings. So I definitely think that it is good enough for production
as for it being a toy compared to what it can look like
in a couple of years' time.
I think that is definitely the case.
What we see today is something that's useful,
that's usable in production,
but it has got many caveats that require knowledge
to work around,
which is why I find the video course I've been doing
or the blog posts I've been writing,
they provide value to the people that are reading them,
that are watching them.
But at the same time, I wish I didn't have to write those things.
I wish more things just work out of the box.
And I believe in the next couple of years,
things will continue to evolve.
Some of the problems people are having today
in terms of the limitations
around VPCs,
ENIs, and cold starts,
all of those will just go away
and it will work a lot better
as a platform.
There will be more flexibility, so that potentially
you can say, okay, I don't want to use
Go, I don't want to use Node, I want to use
Rust or some peculiar
language that I have
just discovered, and you should be able to just bring your own language runtime to the platform
and use Lambda or other function-as-a-service offerings as a more general abstraction over containers and other
compute resources. And I definitely think all the problems that we are seeing today in terms of some of the scaling limits,
all of that is going to go away as well.
And some of the complexities around how you build observability
into your serverless application,
all of that should also be improved dramatically
compared to what we have today,
which is oftentimes many home-baked solutions for shipping logs,
for monitoring, for getting correlation IDs and things like that into our serverless application,
things which for many years now we've been able to just offload to some third-party vendor
to provide out of the box for us.
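As an example of the kind of home-baked plumbing being described, here is a minimal Python sketch of correlation-ID handling; the header name and the JSON log format are illustrative conventions, not a standard:

```python
import json
import uuid


def handler(event, context):
    """Sketch of home-baked correlation-ID plumbing for a Lambda function
    behind API Gateway."""
    headers = event.get("headers") or {}
    correlation_id = headers.get("x-correlation-id") or str(uuid.uuid4())

    # Include the ID on every log line (stdout lands in CloudWatch Logs)
    # so one request can be traced across many functions and async hops.
    print(json.dumps({
        "correlationId": correlation_id,
        "message": "processing request",
    }))

    # Propagate the same ID on any outbound HTTP calls or published
    # events, then echo it back to the caller.
    return {
        "statusCode": 200,
        "headers": {"x-correlation-id": correlation_id},
        "body": "ok",
    }
```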
I would agree with you.
There's a lot of stuff that feels like it's half-baked and isn't done yet.
There is a story about how using
Lambda dramatically speeds up the time it takes to write an application and get it into production,
but I feel like it doesn't really kick in until the third application you write.
The first one, as you learn the caveats and trip over the "wait, it does what?" moments,
feels like it's going to take a lot longer in order to make sense of it.
The gains don't really come until you've repeated it a few times,
where, once you come up to speed,
you've found out where the sharp edges are,
and you understand the model,
now you're going to be more effective.
But I don't get the sense,
and maybe this is just me,
that you can drop it in front of a team of developers
and they will be immediately more productive that week.
Is that naive?
No, I agree with that.
And in fact, I think serverless exposes the development team to a lot of things that they
may not be used to thinking about.
All the operational side of things, in terms of how they set up centralized logging,
monitoring, and, I guess, tracing as well.
A lot of those things that traditionally development teams
have been able to offload to some kind of, I guess,
a platform team or DevOps team, for lack of a better name.
Now they have to think about it themselves.
Now they have to understand how to instrument their code,
how to be able to debug their code in the native environment when they're running in the cloud.
And I think it does mean that a lot of the traditional development teams
who haven't been exposed to that, they now have to up their game
and really learn the operational side of things
and how to make the serverless applications production ready.
Who are you seeing these days as far as people who are building up
serverless applications? Who's using
it, and I guess for what types of use cases?
I mean, we do see the toy problems that wind
up being shown on stage at various conferences,
and I've seen it for backend,
but are you starting to see full-on applications
being written start to finish
using serverless technologies, or
is it more of, I guess, a helper thing in your
experience? I live in San Francisco, so I tend to see a lot of things from a different angle.
Hey, we wrote this thing last night, it does serverless blockchain machine learning at the edge.
And that's great and all, but isn't exactly representative of what the rest of the world sees.
Yeah, I haven't really seen any serverless blockchain AI,
or whatever buzzwords are out there.
What I do see is a lot of people building, I guess,
web applications and backends.
And I guess it really depends on the company that you work in.
A lot of the adoption of serverless, I think,
is driven by the culture in the company.
For instance, I see a lot of DevOps teams adopting serverless
because it makes their life a lot easier
and also makes their solutions a lot more cost-effective
by moving, say, cron jobs to run inside Lambda functions
and being able to do a lot of automation for resources
and monitoring as well, both from the operational
but also from the security point of view
by hooking into all these different events
they can capture with CloudTrail
and then using Lambda functions
to react to them
through CloudWatch events patterns.
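A rough sketch of that ops-automation pattern follows; the event shape matches CloudWatch Events for CloudTrail-recorded API calls, while the specific rule and the alerting are placeholders:

```python
import json


def handler(event, context):
    """Sketch of the ops-automation pattern Yan describes: CloudTrail
    records an API call, a CloudWatch Events rule matches it, and this
    Lambda function reacts (alerting, auditing, or auto-remediating)."""
    detail = event.get("detail", {})
    api_call = detail.get("eventName")
    caller = detail.get("userIdentity", {}).get("arn")

    # Placeholder check: flag security-group changes, e.g. ones made
    # outside your normal deployment tooling.
    if api_call == "AuthorizeSecurityGroupIngress":
        print(json.dumps({"alert": api_call, "by": caller}))
```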
And I also see a lot of application developers
free themselves
of some of the organizational constraints
and the inertia around dependencies
on, say, a DevOps team,
which holds the keys
to the kingdom. And with
serverless, application developers
can take ownership
of more of their infrastructure, of more
of their systems,
without getting
entangled by all these hundreds
of different tools that you use for DevOps.
And I guess another one I see an awful lot is IoT.
A lot of people I've spoken to are in the IoT world,
and for them, serverless seems to just be a very natural fit.
I guess iRobot and Ben Kehoe often talk about how they are using
serverless and really just
take the whole usage
to the next level, but there's a lot of
other smaller companies,
many startups in the IoT world
that are making really heavy use
of serverless. In fact, not
long ago, I was speaking at
a local user group event
in London,
and one of the companies, a very small company,
they have got their own IoT platform,
and yet they were, I think at that point,
one of the biggest users of Lambda in the whole of Europe, and they were easily doing several tens of thousands
of concurrent executions.
And Lambda, and serverless, give you that flexibility and that
scalability pretty much out of the box.
And yeah, so those are some common use cases I've seen
from companies: doing ops automation, building your traditional web applications, as
well as IoT. And there's also a few other places, including a couple of companies
that I have worked for, where we've done that.
We've moved a lot of our analytics pipeline
to run on serverless using Lambda,
Kinesis, Firehose, Athena, and QuickSight,
all of that, that entire stack,
so that we have the whole pipeline
without having to manage and run any server ourselves.
And we only pay for data that we process and we query.
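As a rough sketch of the ingestion end of such a pipeline; the delivery stream name is a placeholder, and the Athena and QuickSight setup downstream is assumed:

```python
import json

import boto3

firehose = boto3.client("firehose")


def handler(event, context):
    """Sketch of the pay-per-use analytics pipeline: a Lambda function
    drops events onto a Kinesis Firehose delivery stream, which lands
    them in S3, where Athena (and QuickSight) can query them."""
    firehose.put_record(
        DeliveryStreamName="analytics-events",  # placeholder stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )
```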
I think that we're starting to see this really gain steam,
and we haven't seen this sort of adoption from the same players historically.
It does feel an awful lot like there's, I guess, more of an enterprise embrace of this
than there has been with previous things.
You didn't see enterprises rushing to the cloud. You saw small companies doing it. Eventually
it turned into a wave. I feel almost to some extent like we're seeing enterprises
embrace serverless before the startups do. Yeah, I'm seeing that as well.
In London, there are a lot of financial
institutions and I have
been seeing more adoption of serverless
in that world than I anticipated.
Companies like Capital One, and I think Goldman Sachs,
are using serverless as well, and a few other big enterprises
which I didn't expect to be on the serverless wave.
And sometimes I feel like some of these companies
are so late to the cloud
that they're just jumping over the whole containerization step
and going straight to serverless, because that was
an easier entry into the cloud
than going into infrastructure as a service and using containers.
I definitely feel to some extent that containers are almost a transitional step,
same with orchestrating them in place.
But that tends to be controversial and is probably a conversation best reserved for another day.
So you do have a talk at reInvent coming up.
Can you tell us a little bit about it?
Yes, it will be an extended version of the chaos engineering and serverless talk I did at SIECon in Europe.
So again, it's talking about the challenges
that we face in the serverless world in terms of building greater resilience
than we are able to get out of the box with AWS.
And some of the many things that we talked about earlier
in terms of how do we then
identify failure cases
and how do we simulate it
to verify that our application
can actually handle those failure modes,
but also try to uncover failure modes
that we are not aware of yet
by running scenarios
that maybe we just don't know
what our systems would do,
but we know it's probably going to be bad
so that we can run those scenarios in an environment outside of production
so that we can learn about our system's failure modes
ahead of it actually happening in production
so that it gives us a chance to then build resilience into the application
so that we can take the principles of chaos engineering
and bring them into the application layer
rather than just applying them at the infrastructure layer.
Which I think opens up an awful lot of opportunities.
I think the version that I saw was fantastic.
I highly recommend that people wind up catching this if you can,
either at re:Invent itself or on the video after the fact.
So if people like what you have to say, where else can they find you?
They can find me on Twitter,
and they can also find me on my video
course as well. If you go to
productionreadyserverless.com,
that will take you to the
video courses page where
you can buy the video or just check
out the first couple of chapters
and you can also find me on my blog
theburningmonk.com
and I also do a lot of writing on
Medium as well as for a
few other companies.
I recently wrote a blog post for Logz.io on serverless versus containers, from a
perspective around control versus responsibility, and vendor lock-in in terms of the risk versus
the reward.
So I'm looking at the current state and the adoption trends
for both serverless as well as containers.
Well, thank you very much for taking the time to speak with me today.
There's always, of course, the conference circuit as well.
If someone's not lucky enough to run into you at a conference like I did,
I absolutely recommend it.
You're incredibly gracious, you're an excellent speaker,
and you tend to tell stories in ways that are very engaging. So thank you
for that. And thank you. That's
an amazing compliment there. Thank you
very much. This has been Yan Cui.
I'm Corey Quinn and this is
Screaming in the Cloud.
This has been this
week's episode of Screaming in the Cloud.
You can also find more Corey at
screaminginthecloud.com
or wherever fine snark is sold.