In The Arena by TechArena - Application Management at the Edge with Avassa
Episode Date: May 11, 2023. TechArena host Allyson Klein chats with Avassa CTO Carl Moberg about how his company is bringing application control to edge environments and how his team has designed solutions for both infrastructure operators and application developers.
Transcript
Welcome to the Tech Arena, my name's Allyson Klein, and today I'm joined by
Avassa CTO, Carl Moberg. Welcome to the program, Carl. Hi, Allyson. Thanks for having me. Thank you
so much. So Carl, I got to know Avassa at the Edge Field Day event hosted by Gestalt IT, and I was so
intrigued by the solutions that you are delivering to the market that I knew I needed to have you on the show.
Why don't we just start with an introduction of yourself and Avassa?
Awesome. Yeah, we had a blast. I really enjoyed that event.
So I'm Carl Moberg, just like you perfectly pronounced my Swedish name.
I am the CTO and one
of the co-founders here at Avassa. We are a Sweden-based, fairly recent software company
with a set of people that have spent quite a large portion of their careers thinking about
orchestration and automation. And as we shall figure out over this, I guess, episode here, we are now fully focused on seeing if we can make a dent and really improve automation and orchestration for applications at the edge.
And we've been going at it since 2020.
We kind of launched commercially in late 2021.
And we're really having a good time.
And that's a really, really interesting phase in the industry, let's just say.
You know, it's an interesting question, which is, you know, I think of orchestration and automation.
And I quickly associate that with cloud.
And think about data centers.
It's where I come from in terms of my career trajectory. But obviously as edge deployments grow, customers are going to want to use the same core capabilities
for their edge deployments.
Tell me about how that is common with what we've experienced in the cloud and what differs
when you look at edge environments and things that you need to think about.
Yeah.
So, I mean, we literally got started
off of the, let's call it the negative energy
that came out of people that have been,
just like you, kind of grown up
and got really used to,
actually, I would even say spoiled by the ergonomy
of a well-designed public or private cloud.
They had organized themselves around a set of tooling
that they
really loved and they had pretty significant automation. And as they eventually got asked
to extend their responsibilities or their coverage to the edge, they were appalled,
I think is an appropriate term, about the lack of equivalent tooling or the lack of equivalent abstractions
for managing applications.
once they stepped outside of the cloud paradigm,
the private and public clouds.
And of course, the big difference then
around this vague term edge,
the way we think about it is
when they were faced with managing applications
in, first of all, many locations
where each location actually carries meaning, right?
There's a point to why an application is running in a particular place.
So you can't just move things around the way you maybe can inside of large data centers.
And the second thing is that each and every location has, of course, far less compute than traditional data center architectures provide.
So I think those are the two things that really trickle up into the way we saw application teams
and platform teams think about how edge is different. And again, of course, the intuition
for many of these teams that we talked about before starting the company was to try and apply the same tooling that they had grown
fond of and went to a number of events around.
They had the t-shirts and the hats and the everything.
And just try to apply that to the edge.
And of course, when we could clearly see that they were struggling, because these tools,
of course, were, again, built for over-provisioned, almost location-independent environments,
maybe one or two locations. That's where we saw the whole idea of starting Avassa: to see if we
can build the appropriate abstractions and then allow them to the extent that they can to reuse
the tooling that they have, but at least get the same ergonomy, you know, and look at the edge with the same warmth as they look
at, hopefully, the clouds that they've built.
So I think the big difference is the distance and the fact that there are many locations
and each location has constrained compute and everything, you know, resources associated
with that.
That is the big difference.
I think that, you know, one of the other interesting things that I've seen is that
as companies are getting more sophisticated with their edge implementations,
they're also wanting to do more sophisticated things in terms of workload provisioning
and deploying new workloads.
And I think that that's where Avassa really has hit a sweet spot in the market. Can you talk about how you looked at creating opportunities for workload
provisioning and automation within your solutions?
Yeah.
So as we founded the company, we literally talked about the two personas that would
interact with a system like this, right?
And, you know, we even have names for them, for people who have seen the recordings from Edge Field Day.
They have names in our presentation.
So one of them is someone we call Patrick, who's in charge of what we call the infrastructure, which usually means the physical aspect of what's running in each location.
Maybe the operating system, maybe a hypervisor, maybe a container runtime.
But that's usually where they stop.
And then we have Applifer, and she cares deeply about the applications
and cares less about the details of the actual infrastructure.
And, of course, what we could clearly see, even when we started in 2020,
was that much of the requirement on a system like the one we're aspiring to build or have built is
going to be driven by the needs of Applifer. And hopefully the Patricks of the world will start
thinking about how can I provide a nice abstraction for Applifer that fits with her
tooling, that fits with the way she thinks about the world. It's just that it's the edge this time.
So what we really did was that we tried to start
with thinking about what are the abstractions
that Applifer would love to see
when she looks at the edge.
And can we build APIs around that
and a command line interface around that
and a web UI around that,
and then treat the rest of the stack,
because obviously you need some central components
and you need some distributed components as the implementation details.
So we've really had our dear Applifer central to our thinking and really tried to build
the system around that.
And of course, she usually thinks in some sort of CI/CD, application-centric worldview.
She doesn't think there are 2,000 locations.
Let me see where my applications are.
She rather thinks, here's my application.
Let me see in which locations they're currently running and how they're doing.
So a term that we will tend to overuse, and I know maybe we need to come up with a new one, is to say application-centric.
The central objects of interest here are the computer programs.
And the supporting infrastructure is, from her point of
view more of a supporting function and of course we really love the fact that and i think we
briefly talked about that at Edge Field Day, the whole platform engineering idea with having IT
and platform teams really think of themselves as delivering an experience to the application teams is
bang on target with the way we think about these things. We would love to be
the right abstraction for an internal developer platform or application team platform for the
edge. So that's what we aspire or what we have built here. Now, if we go under the covers a
little bit, can you tell us about the software choices that you've made in terms of how you're delivering that provisioning?
And how does that compare to what you would find in a cloud environment?
Absolutely. One of the things we did early on was that we realized, maybe to our surprise, how deep and wide the ramifications of
containers have become. And you know how even I at times describe containerization as a solution to
the universal packaging format. So it's just putting computer programs in a platform-independent-ish way.
But when we started thinking about how to build a system here, we realized that there were more shoulders to stand on. I mean, the whole idea with registries and repositories and the fact that there's a somewhat okay-ish versioning paradigm and there's a whole slew of accepted ways of doing rolling upgrades with containers and monitoring containers
in several layers.
So we kind of started with the assumption that this is going to be a solution based
on standing on the shoulders of the container giants.
And that took so much off of the table in terms of things we otherwise would have to
implement ourselves and force people into learning. So that was the number one thing. And then of course, we looked at what is the
biggest at the time, what is the kind of dominant container runtime? And of course, Docker was,
everybody was referring to that, but we do see, I will say, a subtle shift towards Podman as the kind of container backend or container runtime here.
And having said that, we also looked at abstractions for the implementation detail here.
Do we need on-site orchestration?
And actually, of course, the dominant solution at that time and kind of still is Kubernetes.
But we actually opted to not use that because we wanted to provide a full platform experience and trying to build what we have with Kubernetes will pull in so many other components that
it would be operationally unwieldy.
So we stuck with running directly on the container runtime.
And of course, you know, again, the fact that you can run any OCI-compliant application, composed of several containers, is definitely good for us.
And then based on the fact that we have a pretty strong background in this, we could purpose build much of the rest.
And much of the rest is actually largely two components.
One is a distributed scheduler and placement engine, right?
Something that knows how to figure out
where to put applications based off of a sort of filtering language.
Like this application requires a GPU and a camera, for example,
and have the scheduling and placement algorithm figure out
not only which sites or locations support this, but which
hosts in each site can actually support that. So the scheduling and placement was something that
we built in-house because there simply aren't any of those. And the second thing, and this is a
pretty big topic that I'd love to get into, is that I think it's time to think about day two and
day three. How do you actually operate things like this at scale?
And one of the things that we realized early on,
maybe a little bit of a nerdy term,
is that we needed a distributed querying mechanism
to provide the monitoring and observability.
And there's a huge architectural difference
between monitoring complex applications
running in a single data center or two data centers or a somewhat less complex
application, but running in maybe 2000 locations, the whole idea of what is the
meaning of healthy if you're running 2000 locations, how do you provide deep
observability when you know you can't have debug logging on in 2000 times three containers, right?
So we've also built a distributed querying mechanism, kind of a streaming backbone that
we can use for both internal needs, but also for applications to use. So those were the kind of the
early decisions that we had to make in order to find out what do we need to develop in-house and what can we leverage and stand on.
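The two-level filtering Carl describes, first eligible sites, then eligible hosts within each site, can be sketched roughly as a label-matching pass over the fleet. All names and data shapes below are invented for illustration; this is not Avassa's actual API or filtering language.

```python
# Hypothetical sketch of a distributed scheduler's placement filter:
# an application declares required capabilities (e.g. a GPU and a camera),
# and placement selects every site that has at least one host whose
# labels cover all of them. Purely illustrative, not Avassa's model.

def place(required: set[str], fleet: dict[str, dict[str, set[str]]]) -> dict[str, list[str]]:
    """Return {site: [eligible hosts]} for every site that can run the app."""
    placement = {}
    for site, hosts in fleet.items():
        # host-level filter: a host qualifies if it offers every required label
        eligible = [host for host, labels in hosts.items() if required <= labels]
        if eligible:  # site-level filter: some host must qualify
            placement[site] = eligible
    return placement
```

With a fleet of thousands of stores, a call like `place({"gpu", "camera"}, fleet)` would narrow the candidates down to the sites and hosts that can actually carry the workload.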
That's fantastic.
I think that it's interesting.
The observability piece keeps coming up more and more in conversations of being something
that is absolutely needed and in a lot of cases seen as a gap for the edge.
So I'm so glad that your team has worked on that.
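The question Carl raises about what "healthy" even means across 2,000 locations can be sketched as a roll-up: reduce per-container states to a per-site status, then count sites by status. The state names and rules here are illustrative assumptions, not Avassa's actual observability model.

```python
# Hedged sketch of fleet-level health aggregation across many sites:
# instead of one binary flag, summarize 2,000 locations as a handful of
# counts. Status names and thresholds are invented for illustration.
from collections import Counter

def site_health(states: list[str]) -> str:
    """healthy = all containers running; degraded = some; down = none."""
    running = sum(1 for s in states if s == "running")
    if running == len(states):
        return "healthy"
    return "degraded" if running else "down"

def fleet_summary(fleet: dict[str, list[str]]) -> Counter:
    """Aggregate per-site health into a fleet-level view."""
    return Counter(site_health(states) for states in fleet.values())
```

A central dashboard would then show something like `{"healthy": 1987, "degraded": 11, "down": 2}` rather than asking an operator to inspect each location.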
What has the customer response been?
And what do you see as customer sophistication
in deploying edge today?
And where do you think that's going
over the next couple of years?
Yeah, so the customer response has been pretty cool.
The number one thing that happens is that we are, in early conversations,
we get them down from the tree, you know,
because many of them have climbed up in this tree.
Kind of several of the conversations that we've had have kind of started
with the fact that the perception is that the edge is too hard.
It's just too complex for us to approach even, right?
And so the first conversation we usually have is to say, look, we'll show you.
First of all, you will get leverage from the investment you've already made for your
application teams, which is huge because the alternative is to, well, either swivel chair
between what they're already doing in the cloud and some sort of
weird other for the edge, or even worse, spin up two teams. One is wholly focused on weird edge,
and the other one is wholly focused on doing what they've always done, which is the cloud.
And by showing them that, no, there's a wide surface of connectivity here.
If you are a GitHub or GitLab user, or on AWS or GCP or Azure, this is how you connect your
tooling to the edge as a runtime or as a substrate.
That usually calms down the conversation and accelerates the forward motion.
So we've had many, many, many of those conversations.
Let us show you how you can extend your current capabilities without a massive rethink and start deploying to the edge. And we've seen a couple of really interesting customer success stories around that that's been, of course, morale boosting for us.
So I think that is what we're hearing.
And it's also hard to not think about verticals, right?
Which industries are moving the fastest here and which are probably the best
incentivized to do it.
And we've seen some really, really interesting uptake in, for example,
retail and multi-unit restaurants.
Those are two interesting industries in that even I had underestimated the amount of digitalization
that's happening in their locations just to support their core business of serving customers.
And we also see interesting movements in some more conservative industries like manufacturing,
if you like, and also in locations like mining. And for those locations, it's usually driven by machine learning applications,
usually related to security and personal protection and things like that.
So it usually has to do something with a camera or a LIDAR.
And the interesting thing there is that there's just no way they can backhaul
that kind of data that comes screaming at them from these sensors to some sort of a cloud.
They have to make informed decisions locally based on that data.
So it's a pretty strong driver.
So that was a lot of words for your first question.
I kind of forgot the question to that.
Oh, I think that you answered it. But what I wanted to know is where do
you think it's going in the next couple of years?
Do you see, you know, broader vertical plays or do you see, um, a change in
workloads that are being targeted or both?
So by removing barriers, right, all kinds of cool things happen. A water-oriented way of thinking about it is that you're going to get a little flooding first. There's going to be some, maybe even over-rotations, now that we can move some of these applications, followed by a rapid, let's call it, normalization.
And what we're really fascinated about is the emerging conversations about topology. No applications are deployed, I would say, in isolation at the edge.
They usually have counterparts or other parts of the same application topology running either in a regional data center
or in a central data center.
So there's sort of a topology that's emerging here, and with topologies normally will come some sort of best practices and a more mature conversation about what are
the critical components that should be running at the edge, so that it's not
a big thing to move something to the edge, but rather an easy thing to do, and let that kind of gel and sit.
And for example, a very interesting observation or very interesting conversations we've had now a couple of times is that we've seen certain industries kind of over-rotate towards the cloud.
One of these industries, in my experience, is the point of sales industries, coming back to retail.
And many of the large point of sales vendors, or most of them now, have something they call cloud POS.
So they move the entire thing to the cloud.
And we see a number of retailers now realizing that having everything in the cloud is really, really risky.
Because if there's anything between the till or the walkout line and that cloud region that is shaky,
that stops us from actually selling stuff.
So there's this really interesting conversation about what are the actual components
in a point-of-sale system or commerce platform that should reside
in the store.
And we've been part of several of those conversations and they've really changed when we could show
them that it's actually not that hard.
We can help you edge-shift, that's the term we love; we can help you move some of these components
onto the site for some customers, for some sizes of store, for example.
So that kind of conversation, right-sizing based on actual needs and removing the
technical barriers of making the decision about which components should go where,
is where I hope this conversation is going.
And I think it is, I think it is. It's just a matter of getting more
experienced people into the conversation at this point. I love it. It's so exciting to see this
progress and really see the instantiation in different industries. Carl, one final question
for you. Actually, I have two. Never mind. Carl, when you look at that opportunity, where is your team pointed in terms of new development
of core capabilities for Avassa, and what can we expect from you through 2023?
Yeah, so two things that I'm excited about.
One is, again, coming back to the day two and day three,
we've thought long and hard and implemented a very ambitious
monitoring and observability framework for distributed applications,
which we're really, really excited about.
And we're going to roll out significant pieces of that over the coming months.
The second part is the insight that, of course, we're not an island. And,
you know, even as much as you may love Avassa, we have an agent that needs to run
on an operating system that needs to run on some sort of a hypervisor or straight on
the hardware. And there's some interesting development going on around what has been,
for example, a big gaping hole in my mind: how do I manage the operating
system layer, i.e. both kernel and userland, for the nodes that I have in thousands of locations?
And interestingly enough, there has been a lack of solid answers to that. None actually of the
larger Linux companies have raised their hands and said, we think we can do rolling upgrades
of our kernel across thousands of locations without impacting the workloads.
There's a lot of cool stuff going on around that.
And we've worked with some partners on top of immutable operating systems.
We have, for example, announced a partnership with Red Hat around this.
So building with best of breed partners to provide a comprehensive and
manageable solution is something we're pretty proud of. And we're going to talk way more about that,
on top of, of course, particular use cases, which are mostly instantiations of what we have.
So those two things, monitoring and observability at high, high scale and super easy to use,
fully managed stacks together with partners,
I think is what I'm excited about.
That's cool.
I can't wait to see more.
If folks are listening online and they want to engage with your team, learn more about
the solutions you've got in market and learn more about what's coming down the pipe, where
can they reach out to you?
Yeah, you know, I might be a little old school, but I think going straight to
avassa.io is probably the best.
We try to really populate the front page there with anything you can think of.
Of course, we do our best to also keep people entertained on LinkedIn.
And I'm always on Twitter still as @cmoberg, and actually the
same handle on Mastodon.
So if you want to talk to me directly,
either hit me up on LinkedIn or Twitter
and check out our webpage,
including by the way, a free trial
or for that matter, a demo button
if you want to see what I'm talking about.
Well, Carl, it's been a pleasure.
I hope you say hello to Patrick, Applifer,
and the rest of the Avassa team.
It was great catching up with you and learning more about the solutions you've
got in market. Thanks for being on.
Thanks, Allyson. Will do.
Thanks for joining the Tech Arena.
Subscribe and engage at our website, thetecharena.net.
All content is copyright by the Tech Arena.