Screaming in the Cloud - Episode 35: Metered Pricing: Everyone Hates That! Charge Based on Value
Episode Date: November 7, 2018

Did you know that you can now run Lambda functions for 15 minutes, instead of dealing with 5-minute timeouts? Although customers will probably never need that much time, it helps dispel the belief that serverless isn't useful for some use cases because of such short time limits. Today, we're talking to Adam Johnson, co-founder and CEO of IOpipe. He understands that some people may misuse the increased timeframe to implement things terribly. But he believes the responsibility of a framework, platform, or technology should not be to hinder certain use cases to make sure developers are working within narrow constraints. Substantial guardrails can make developers shy away. With Lambda, they can do what they want, which is good and bad.

Some of the highlights of the show include:

- Companies are using serverless as a foundation and for critical functions
- Serverless can be painful in some areas, but gaps are going away
- Investing in the future: companies doing lift-and-shift to AWS are looking at the technology they should choose today that's going to be prominent in 3 years
- Serverless empowers new billing models and traces the flow of capital; companies can choose to make pricing more complicated or simplified
- What value are you providing? Serverless can offer a flexible pricing foundation
- When something breaks, you need to be made aware of such problems; your Amazon bill doesn't change based on what IOpipe does, which is not true with others
- Developers are the ones woken up and on call, so IOpipe focuses on providing them value and help; they are not left alone to figure out and fix problems
- Serverless and event-driven applications offer a new type of instrumentation and observability to collect telemetry on every event
- For serverless to go mainstream, AWS needs to up its observability level to gather data to answer questions
- AWS, in the serverless space, needs to make significant progress on cold starts in other languages, and offer more visibility and easier deployment out of the box

Links: IOpipe, Episode 16: There are Still Servers but We Don't Care About Them, Lambda, Google App Engine, Python, Node.js, Kubernetes, Simon Wardley, DynamoDB, re:Invent, Perl, PowerShell, DigitalOcean
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is generously sponsored
by DigitalOcean. I would argue that every cloud platform out there biases for different things.
Some bias for having every feature you could possibly want offered as a managed service at
varying degrees of maturity. Others bias for, hey, we heard there's some money to be made in the cloud space. Can you give us some of it?
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they're using it for various things,
and they all said more or less the same thing. Other offerings have a bunch of shenanigans,
around root access and IP addresses.
DigitalOcean makes it all simple.
In 60 seconds, you have root access to a Linux box with an IP.
That's a direct quote, albeit with profanity about other providers taken out.
DigitalOcean also offers fixed price offerings. You always know what you're going to wind up paying this month,
so you don't wind up having a minor heart issue when the bill comes in.
Their services are also understandable without spending three months going to cloud school.
You don't have to worry about going very deep to understand what you're doing.
It's click button or make an API call and you receive a cloud resource.
They also include very understandable monitoring and alerting.
And lastly, they're not
exactly what I would call small time. Over 150,000 businesses are using them today. So go ahead and
give them a try. Visit do.co slash screaming, and they'll give you a free $100 credit to try it out.
That's do.co slash screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Hello and welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by Adam Johnson, who's the co-founder and CEO of Iopipe.
Welcome to the show, Adam.
Hey, thanks, and good morning.
It is morning.
One thing I want to start with here is a disclaimer that I am an Iopipe customer. You're
not paying me to say that. You're not sponsoring this episode. You are effectively here because
you're doing interesting things in the world of serverless observability, or observerless,
as I insist on calling it. This is not a paid placement; I just love the service.
And getting you folks involved in what I'm doing is always something I try to do.
I think you're the second person I've had on from IOpipe, with Erica being the first.
Yeah, definitely.
Thanks for the kind words for sure.
Happy to talk today about some of the things we've been seeing.
Perfect.
Before we dive too far into customer stories and specific things that you've seen, I want to start with a question just for my own curiosity. About a week or two ago at the time of this recording, AWS sort of had a stealth announcement that wasn't really publicized. But okay, you can now wind up running Lambda functions for 15 minutes instead of the five-minute Lambda function timeouts that we saw before.
Where do you land on that? That's pretty interesting. I think that feature
and previous features that were rolled
out quietly were probably done for
really specific customers of theirs that had
very specific use cases.
I generally am
happy with them
increasing that time. I don't think
most people are going to ever use it, but it's nice
to do it just so that
people aren't saying that serverless is not useful
because it has this five-minute limit.
15 minutes is a very long time.
And you can do a lot of stuff with that.
And I think it does open up some new use cases,
especially in machine learning,
doing a lot of distributed stuff.
There was recently a paper that came out where they were essentially doing a lot of
machine learning type stuff in a very distributed way.
I forgot who rolled that out,
but it was an interesting use case.
And those definitely weren't as possible
with the five-minute limitation.
So while it may not be common,
I think it's nice to open it up to more use cases.
I'd heard whispers that I was never able to substantiate that they were doing things like
this on a case-by-case basis for very specific customers of extending the Lambda runtime.
I can't obviously confirm that, but that's something that I wound up hearing about.
So on the one hand, it's neat that they're making this available to everyone now.
My concern is that this feels like it's the sort of thing that's going to empower
three or four really helpful use cases and several tens of thousands of absolutely terrible
architectures where, yay, we're one step closer to shoving our entire monolith into a Lambda
function. Now, if only they'd give us more disk, more RAM, more connectivity, etc. And I got to say, I'm a bit
of a skeptic on this. I come from a world where I went through the whole process of naivete as a
developer, where I would build an awesome system that I was sure would fix all the problems of the
systems that came beforehand. And there was no way that people would misuse
this. And then I saw what customers did once they got it into their hands, and that scarred me,
Adam. I mean, there are some things that I'll never be able to heal based upon people implementing
things terribly. This is a finely crafted torque wrench. We're going to use it as a hammer.
Yeah, it's true. But I think at the same time, if you look at the history of serverless,
I would consider platform as a service an early iteration of serverless. And things like Google App Engine, which is a great platform, didn't really take off the way that they had anticipated.
And I think one of the main reasons for that is because it was very opinionated and had too many guardrails for developers.
And with guardrails that substantial,
developers will shy away from using that framework or technology or platform.
And I think that that's why Lambda has been very popular, because it hasn't really been as prescriptive as earlier incarnations.
So it's much more welcomed by developers
because they can do what they want in general.
And that's good and bad for sure.
People are always going to write terrible things
that shouldn't exist.
But I think it should be up to them.
It should be up to education.
It shouldn't be the platform's responsibility
to hinder certain use cases just to
make sure that developers are
working within the narrow constraints
that the provider decides for them.
Very diplomatically put, and I'm certainly
not going to argue that point with you.
One thing I'm curious about, since you're in a better position
to see what the industry is doing with serverless than I am, how are you seeing people use this?
I keep viewing the idea of Lambdas, serverless, all of this as something that in its current state is something of a toy.
You replace cron jobs with it, you can wind up implementing trivial things, but it's not the sort of thing that you would
build an entire business application or SaaS platform on top of. And yes, there are notable
exceptions there. To be clear, I don't believe that's going to be the case forever. I think
we're probably about 18 months away from seeing some transformative shifts in that space.
But I'm curious as to what you're seeing today. I can make naive assumptions all morning long.
I'd rather see what you're seeing here in the real world.
Yeah, absolutely. And I hear that all the time. I talk to
VPs of Engineering and people like that, and a common
comment that I hear about serverless is that Lambda is mainly just for
cron jobs or toy applications. They can't really fathom that it's used for
anything critical.
But we do see a wide variety of stuff with our users. We see companies that are startups
who are born serverless, who are building their startup with serverless as the foundation,
which gives them some advantages over their incumbents. If you build something from the ground up with serverless in mind,
that suddenly opens up a lot of different opportunities that your
competitors don't have. Your competitors may have very limited
ways in which they can charge their users based on how they consume
computing resources that the startup may not.
So I think it's opening up those use cases.
We typically see those startups are doing the most interesting things,
primarily because they're starting with all greenfield.
They are trying to go all in on serverless.
It may be somewhat painful in some areas.
But I think, as you said, over the course of 18 months,
many of those gaps are going away,
just as we're seeing that duration is increased, the cold starts have been less and less of an
issue over time. I think it's still an issue for some languages, but for languages like Node
and Python, the impact is very minimal these days. And this is something that you don't find AWS
talking about, but they are quietly improving these things to the point where it's
extremely minimal. I think on the other side, non-startups, we do
see larger companies like large enterprises
who are traditionally laggards
starting to embrace serverless before the companies
that we would traditionally consider the early adopters.
I think that's super interesting to me, and it was very unexpected to see.
And I think what's going on is the early adopters
jumped on the Kubernetes bandwagon very early on.
And they're deep down that path.
And they don't want to make a change right now
because they've already invested so much into that direction.
Meanwhile, there's all these laggards who are just now going to public cloud.
Still, a majority of the market is not in the public cloud.
So there's a lot of change still to come in the future.
But those companies who are deciding to do the lift and shift
to AWS or other public clouds
are looking at the technologies that they should choose today
that are going to be prominent in three years.
And a lot of them are looking at this and they're making a decision like,
should I invest in containers or should I invest in serverless?
I think most of them will end up doing a mix of both.
But I think they have to place their bets where they think things are going to head in the future.
And a lot of them are seeing serverless as an interesting way for them to leapfrog the early adopter competitors in their
space. Instead of their developers worrying about setting up clusters and coding infrastructure,
they can then just spend their time building and shipping business logic. If they are doing that,
they certainly, in my mind, are going to have an advantage over their competitors who were the early adopters in the coming years.
There's a lot that you just said that we can unpack.
But one thing I want to focus on is the idea that this empowers new billing models.
I don't mean for you to throw anyone under the bus in particular, but the idea of being able to trace the flow of capital through your organization, as Simon Wardley says, is something that's compelling.
And as this accounts for more and more of what workloads a company runs, it enables you to do
that. But it also sort of unlocks a Rube Goldberg pricing chart that is going to scare the crap out
of an awful lot of people.
Well, every time you wind up listing the users, we're going to charge you a quarter of a penny.
Every time you query that, we're going to charge you a tenth of a penny. And it turns very quickly
into this thing where the pricing model does not make sense to a human being. Are you seeing
startups going in that particular direction? Or are you seeing it in a more, how do I put this, human sense?
Definitely the latter. I mean, I think it's possible to do that. But I think it's, like you said, it's pretty obvious that if you have such a complicated pricing structure, it's going to be very hard to convince people to buy into that. But I think what we do see is somewhere in the middle where
they have a lot more flexibility on their margins to either lower their price in general
with simple pricing structures or change it to a different model that's quite flexible,
but not as complicated. For example, if you're using the service, you're consuming compute.
So you should pay for it at that point. But if you're not using the service,
you may not have to pay for it. I haven't seen that prominently happen yet, but I think it's
possible. And I'm interested in seeing what comes out of that. I don't know what the winning pricing
models are going to look like, but it is opening up the use case to them. And I suspect that some
startups are going to realize this and start taking advantage of that to differentiate.
It makes sense that you wind up having a pricing model that's at least loosely coupled to what it
actually costs you. And I think that being able to get that level of granularity into what it
costs to provide a service internally is incredibly valuable, just for a business metrics perspective.
That said, on the other side of the coin, I've always been a big believer
in charging based upon value
as opposed to charging based upon cost.
It feels like the former winds up
in a sort of an escalating chain
the longer you do something.
And the latter tends to generally
lead you into a race to the bottom.
Yeah.
I'm worried that there are going to be
some stories around Lambda that end that way.
Yeah, I agree with that as well.
I'm in the camp of charging based on value as well.
And even for our service, I've had folks at AWS want us to do more metered pricing
based on very specific things around Lambda.
But everyone hates that.
Yeah, exactly.
And yeah, it's missing the point of
what value are you actually providing to teams?
And I think that should be really where the focus is.
But I am in favor of having a foundation.
If serverless can open up a foundation
where that gives you more flexibility
in the choices that you're making on your pricing
and opens up higher margins, that's always a good thing. Because I do think that in the startup
world, startups are pricing their products too low. Everybody starts by pricing them too low.
They're not really understanding the value that they're providing to their end users in the early
days. And that value increases over time. So I think it's just a trend that happens.
But yeah, I'm with you in being a little bit scared by that trend,
continuing and going down to zero.
Because that's just not helping the end users in the long term.
They may be saving money in the short term.
But if that startup or company in general is not getting the margins,
they can't reinvest that into R&D
and creating new technologies and new innovations.
So to me, it's like a world where customers
just jump from vendor to vendor
looking for the next cool thing.
And that's a lot of time spent in switching as well.
Exactly.
There's, again, as one of your customers,
there's a keen appreciation
that I've developed for the way that you wind up pricing things. I think every month I've had you
folks in place, you have cost significantly more than the Lambda functions you monitor because my
Lambda bill hovers somewhere around 60 cents. And that's fine because the value of understanding
what that application does is worth far more to me than $0.60 a month.
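Corey's sixty-cent bill is easy to sanity-check because Lambda's metering is simple. A rough sketch of the arithmetic, using approximate pricing from around the era of this episode ($0.20 per million requests plus a per-GB-second duration charge; treat the exact figures as assumptions and check current rates):

```python
# Rough sketch of Lambda's pay-per-use arithmetic.
# Pricing figures are approximate, era-of-episode values -- treat as assumptions.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.00001667       # duration charge per GB-second

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a monthly Lambda bill, ignoring the free tier."""
    request_cost = invocations * PRICE_PER_REQUEST
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    duration_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + duration_cost

# A modest workload: one million invocations, 200 ms each, at 512 MB
print(round(lambda_monthly_cost(1_000_000, 200, 512), 2))
```

A low-traffic workload like a newsletter signup form, with a few thousand invocations a month, lands well under a dollar, which is exactly the kind of bill Corey describes.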
I care about understanding and seeing what happens.
To be clear, I have several different applications running Lambda,
including the entire production pipeline for my ridiculous newsletter,
Last Week in AWS (lastweekinaws.com), for those who aren't familiar with it.
The sign-up link for that as well leverages two Lambda functions
and an API gateway.
That's the thing that I've set Iopipe to wake me up in the night if it winds up breaking.
If someone can't subscribe to the newsletter, that's a problem and I need to be aware of that.
I think I've seen all of three times where that has gone off in the past four or five months.
And every time it was someone doing something bizarre and not formatting an input correctly,
not operating in good faith. There was a penetration test that I wound up
commissioning that wound up triggering some of it. And that's exactly what I want. It's not
excessively noisy. It's not something that I want to roll into a larger platform that
winds up managing 15 different things that are vaguely correlated. It does one thing, it does it extremely well,
and I can integrate it to the rest of my view of my business.
And that's something that I find incredibly valuable.
Yeah, absolutely.
The idea of trying to tie this into something that varies is nuts.
More to the point, my Amazon bill doesn't change
based upon what you folks do.
And with a lot of monitoring platforms, that's not true.
I've done trials where the monitoring system
cost me nothing,
but it doubled my CloudWatch bill
just out of the blue.
And that tends to be
an intensely frustrating conversation.
Yeah, I agree.
Especially with the trend of using
more and more third-party services,
which I'm a fan of.
It is complicated,
but I think in general,
it helps anybody build things that just weren't possible before.
But it does add that complexity of when you make a change
to this dependency, how is it going to affect the pricing
of everything else?
That's super complicated.
I don't think there's been a great solution around that yet,
for sure.
But I do agree, the value that the companies provide
really should be where things are priced. And for us, it's really about providing more
confidence to developers who ship their code. They don't want to get woken up in the middle
of the night. And you want to make sure that what you're shipping is working. And that when there
is a problem, you want to quickly know, is it my problem or is it one of those
third-party services that I'm using?
Is it a database I'm relying on
or some authentication service or what have you?
You want to get to those answers as quickly as possible.
And one of the trends that we're seeing in serverless
is that developers are almost always the ones
who are woken up and on call
for the functions that they ship.
Even if there is a dedicated ops team or DevOps team,
that is how it works.
And we found that to be the case pretty early on
when building Iopipe.
So we've been focusing on really providing value
to the developers themselves,
to provide them with some service
that acts as another extension of their team
that has their back, that helps it so that they don't have to dig through
mountains of logs to figure out, was this my code acting up,
some code path that I just didn't expect to happen?
Or is it just because there was a network blip
between the Lambda container and Dynamo?
That actually happened to us just the other day.
We got an alert with one of our data pipelines.
We basically have an alert set up for when the number of invocations drops below a threshold.
It's like reading off of Kinesis, so it's pretty flat.
It may go up, but it doesn't go below a certain threshold.
So we got an alert saying, hey, this dropped below the threshold.
We immediately started going in and
digging into
Iopipe, for example, and looking at
what's going on.
And there were unexplained
things going on on the Lambda
side. It pointed us very quickly to
Lambda, possibly having
a networking issue on the container
itself. It ended up fixing itself, fortunately.
That's the nice part about serverless is
when things do happen like that,
they are typically very quickly resolved.
But it is important to have the tools and visibility
so that you can understand, was that their problem?
Or was it my problem?
Is there something I can do to avoid that happening in the future?
And a lot of times, in my past, I've seen ops teams who just don't have an explanation. The
thing fixed itself and they're like, well, I don't know what caused it, but it fixed itself. So
well, hopefully it doesn't happen again. That's not great. So it's really important to have that
level of visibility to understand. Yes, I can see these exact events that came through,
and I can go back and use it like an audit trail to understand
how many of these requests were slow and to which service.
I think that starts answering the questions and pointing the fingers at the right provider.
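The invocation-count alert Adam describes (fire when the number of invocations in the most recent window drops below a floor) is conceptually just a threshold check over a rolling window. A minimal sketch with hypothetical names; IOpipe's actual alerting is configured through its own service, not this API:

```python
# Minimal sketch of a low-invocation alert like the one Adam describes:
# fire when the invocation count in the most recent window drops below a floor.
# Names and structure are hypothetical, not IOpipe's actual API.
def should_alert(invocation_counts, threshold):
    """invocation_counts: per-window invocation counts, most recent last."""
    if not invocation_counts:
        return True  # no data at all is itself suspicious
    return invocation_counts[-1] < threshold

# A steady Kinesis-fed pipeline hovers around 120 invocations per window,
# then suddenly drops -- the kind of dip that pointed Adam's team at Lambda.
history = [118, 122, 119, 121, 37]
print(should_alert(history, threshold=100))  # the drop trips the alert
```

For a flat, Kinesis-driven workload like the one in the anecdote, a simple floor works because the traffic rarely dips; spikier workloads would need a smarter baseline.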
I will absolutely say that there's an incredible level of frustration
with the way that the native tools are positioned
around visibility into a Lambda function. Oh, just set up this complicated thing and tie 15
things together and look in three different places to make sure that the ridiculous log
message that's esoteric and arcane has the data you need. And being able to look in one place
rather than chasing down this giant laundry list of items was incredibly helpful when I was doing early debugging.
And once the application became up and stable
and quote-unquote done,
yes, I know, I know, things are never done,
don't email me,
you wind up in a scenario where at that point
you just want to see anything that happens
that's out of the norm.
And for me, that's either inputs
that aren't valid email addresses,
that's my third-party API acting up
that I'm bouncing off of.
And I'm still annoyed by that.
I'm in the process of replacing
the component in question that does that.
And it winds up getting to a point
where I don't hear from the monitoring system.
I don't think I can point at any other application
I've ever worked on and say,
yeah, it was quiet, except when something was broken.
Dialing in was something that was always a work in progress, never done.
So if, I don't know if that's something that I can thank you folks for,
if that's an artifact of the entire serverless model,
or I just write such perfect code that,
unlike all of the idiots I used to work with previously, I know what's up.
Yeah, I had to teach myself Python for this project. I promise, it is not that one.
Right. Yeah, and I think serverless itself
and event-driven applications in general are
opening up a new type of instrumentation
and observability where you can actually
and you may be forced to collect
telemetry on every event going through the system.
If you look at the previous incarnations of monitoring and observability,
it's really around aggregations.
So if you look at some of the very popular tools that are out there,
if you look at the resolution they provide, it's like they give you one second resolution.
In one second, at a very high volume service,
you may have hundreds of thousands of events or more flowing through a Lambda function.
And if you only have six metrics
that tell you what happened during that one second,
you have no idea what really happened.
You may know that something was slow for five minutes, but you don't know
who was affected. Or if you're processing
email signups or orders,
you don't know which users were affected by
that
in general. So I think that, given the way event-driven
systems, and especially Lambda and Functions as a Service, operate,
tools like IOpipe and others are starting
to collect more
and more of that data, more of that telemetry. And you can go back and use it almost as an audit
trail to go back and see which emails were skipped during this outage that happened,
which orders failed to execute due to a network blip in the container. These were just things that weren't possible before.
And I think there's many reasons for this,
but I think in general, the advances in technology
and the reduced cost of storage over time
has allowed us to start capturing all of that data.
And that, to me, is the foundation
for the next generation of observability tools.
I think all of the existing tools that are just showing you aggregate data
are insufficient in this world.
And once you've started using a tool that provides you with that level of resolution,
where you can see every single event and the telemetry around it,
especially at the business logic level, you can't go back once you've seen that.
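The shift Adam describes, from one-second aggregates to a record per event, can be sketched as a wrapper that emits one telemetry record per invocation. This is a toy illustration of the idea only, not IOpipe's actual agent or API:

```python
import functools
import time

TELEMETRY = []  # stand-in for an external telemetry sink

def instrument(handler):
    """Wrap a Lambda-style handler so every invocation emits one record --
    a toy illustration of per-event telemetry, not IOpipe's real agent."""
    @functools.wraps(handler)
    def wrapper(event, context):
        start = time.monotonic()
        record = {"event_id": event.get("id"), "error": None}
        try:
            return handler(event, context)
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            record["duration_ms"] = (time.monotonic() - start) * 1000
            TELEMETRY.append(record)
    return wrapper

@instrument
def handler(event, context):
    if not event.get("email"):
        raise ValueError("missing email")
    return {"status": "subscribed"}

handler({"id": "evt-1", "email": "a@example.com"}, None)
try:
    handler({"id": "evt-2"}, None)  # a bad signup
except ValueError:
    pass

# Every event leaves a record, so you can answer "which signups failed?"
failed = [r["event_id"] for r in TELEMETRY if r["error"]]
print(failed)
```

Because each record carries the event's identity, the telemetry doubles as the audit trail Adam mentions: you can go back and list exactly which emails or orders were affected by an outage, rather than seeing only an aggregate error rate.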
No, and I don't think there's ever going to be
a putting that genie back in the bottle.
I think that that ship has entirely sailed.
I think that there's no real path forward
for going back to the opaque things
that we used to accept as normal
once you become accustomed to what this unlocks and empowers.
I suspect it's going to be a bit of a long road
to get this into the mainstream,
but we're seeing it around the periphery an awful lot.
Are you seeing something different
in the sense where this may come sooner than people expect?
I think it will take time.
I think in general, there are only a couple of us startups
who are providing this right now.
But I think for serverless to get mainstream,
I think the cloud providers need to provide this themselves out of the box.
The level of tooling that AWS provides around Lambda
is not even close to sufficient right now. And I think that's a big
hurdle. This, of course,
is not really helpful to my startup,
IOPipe. But I think for the sake of serverless in general,
AWS needs to really up their level
when it comes to observability.
They need to start collecting all of these things
and they need to make it so you're not having to jump through
all of these hoops to answer questions.
And they have to give you the appropriate telemetry
to actually answer the important questions, which they're not doing right now. I think that's going to be a big
blocker for serverless until the service providers can step up their game.
Let's move on to the dangerous portion of this episode. Specifically, you and I have sort of
rough ideas from various directions of what might be coming down the pike
in future releases for reInvent at the end of November.
Without violating trusts, confidences, etc.,
what can we talk about?
What are we hoping to see that comes out of AWS in the serverless space?
Are there things you're excited that you hope to see?
Are there things that annoy you that it isn't doing yet?
Or is this such a landmine that we shouldn't even mention
that there's a conference coming up next month?
I mean, I think in general,
there are certain things I can't say,
but I can tell you what I hope exists,
and I have no idea if some of these will be announced or not.
It's just straight from gaps that I think are in the current ecosystem.
I think one of those is seeing them make significant progress
on cold starts in other languages.
I mean, they've already done it for some of the languages,
but if you're still using things like Java,
which a lot of people are doing,
the cold start situation in that world is very painful.
And the sarcastic answer of, oh, just don't use those languages is sort of language bigotry that
I think serves no useful purpose anymore. It's all fun. We all have our favorite teams we like
to bet on. But let's not urge people away from their platform of choice just to prop up something
else. That tends to be a terrible model. So I'm with you on that.
Right. And they probably won't announce anything because
they generally don't talk about cold starts
or even improvements that
they make to cold starts. We notice
the cold start impact getting lower and lower
and they just don't talk about it.
So it's something they quietly fix,
which is fine.
I think in general,
we've seen at every reInvent, they've added more languages to support Lambda.
So I could expect more.
There's always more languages that people want.
Some people want PHP or Perl or whatever.
Let them use those languages.
I think that's an interesting one that may come out.
But we'll see; we could take bets on which languages are supported. The last one,
PowerShell, I would never have won that bet. Definitely not
on my radar, but that's kind of interesting.
If you told me the list of languages that would be supported in Lambda, and you asked me to build a
list in order of likelihood, I'm not sure I would have thought to include
PowerShell at all. I mean, that's one
of those things that winds up just
coming completely out of the blue.
And they did it vaguely quietly, too, which makes that
even more interesting to me.
Yeah, definitely another one of those that was probably
done for a specific customer
or two, is my
guess. But I'm sure there are other
really popular languages out there that
I think a lot of people want to see on Lambda.
So hopefully they're making some progress there. They're already way ahead
of everyone else there. I know the other cloud providers are pretty far behind
in offering lots of language choices. But that's one area.
I think the other area that I'd be interested in is
I would love to see more visibility
out of the box into what's going on. I think that
that needs a lot more
effort. And I'm hoping that
they'll have something to offer there
in terms of debugging tools.
I think that's still kind of a weak story.
I also think that the deployment
side of things is still quite weak.
One of the biggest complaints that we run into in talking to users
is that just deploying is still a pain.
And I think they have a lot of the pieces
in their arsenal to put something interesting together.
So hopefully that's something that they're working on as well.
I will give Amazon credit.
They don't tend to sit and watch customers suffer.
They may seem to at times from the outside.
But I've never yet had a conversation with an Amazon employee
who was made aware of a customer issue and didn't care about it.
Very often, I find that when I come to them with an engineering problem that annoys the heck out of me,
they won't respond with, wow, no one's ever said that before. What they'll say, very honestly, is, yeah, we know.
And because of X, Y, and Z, we're not able to do anything about that right now. We're working
toward it, but it's more complicated than it looks from the outside. And I do believe them.
There are no simple problems when you're dealing at their scale and when you're dealing at this level of complexity.
Right. Yeah, I totally agree with that.
I think the other big component is, while not feature releases,
I think that there needs to be a lot more education happening from them
to get more adoption.
I know they're doing a lot, but I think that they need to spend more time
at the various levels of orgs to help those companies make the decision
if serverless is right or not for them.
And I think that involves from the developer level
all the way to the top of the organization on down.
I would agree. And I think that's probably a decent place to wind up calling it an episode.
Will you be at reInvent at a booth?
Will you be wandering around sadly
looking for scraps of food?
Where can people catch up with you?
Yeah, so we actually are going to have a booth
for the first time.
It's not going to be in the Expo Hall.
We're actually going to be in the Aria.
I think it's near the registration.
There's going to be a little startup area
and we're going to have a little tiny booth there.
To be clear, this is something that's sanctioned by Amazon.
This is not effectively you deciding that,
yeah, they won't give us a booth, so we're going to make our own
and just, I don't know, commandeering a table or something.
We tried that in the past. It didn't work out well.
But this time it's official.
Yeah, their security is on point for this.
It is. They're really good about it.
But yeah, we're going to be in the ARIA, which
I believe is the hotel where all of
the containers and serverless
talks are going to be. So if you're
there and you're going to those talks
and you're in the ARIA, stop by, see our booth.
We're going to have some interesting
giveaways. And we have a really
interesting demo we're putting together with
DeepLens and Lambda
as well to show some kind of interesting things
you can do with observability with video.
Perfect. I look forward to venturing out of the Venetian maybe
and catching up with you over in the Aria.
Sounds good.
Thank you so much for your time today.
I'm Corey Quinn, this is Adam Johnson of Iopipe,
and this is Screaming in the Cloud.
This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever
fine snark is sold.