In The Arena by TechArena - Simplified Edge Deployment with Scale Computing
Episode Date: April 10, 2023
TechArena host Allyson Klein chats with Scale Computing VP Craig Theriac about his company’s vision for simplified edge deployments in any environment and how Scale Computing's solutions have evolved to meet changing market demands.
Transcript
Welcome to the Tech Arena, featuring authentic discussions between tech's leading innovators
and our host, Allyson Klein.
Now let's step into the arena.
Welcome to the Tech Arena. My name is Allyson Klein, and today I'm delighted to be
joined by Craig Theriac, VP of Product Management for Scale Computing. Welcome to the program, Craig.
Thanks for having me on. So Craig, I met you and the team at Edge Field Day and heard all about the cool solutions that Scale Computing is
delivering to power edge environments for enterprises. And I absolutely wanted to have
you on the show. Why don't we just get started with an introduction of Scale Computing for those
in the audience who haven't heard about you and your role at the company?
Yeah, sure thing. So as you said, I'm vice president of product management. So I'll
start with the role. My job is really understanding the direction that we believe the industry is
heading and making sure that our product and the backlog of things we might be working on kind of
meets that trajectory so that we actually provide value to our customers out there that are trying
to deploy these solutions. So Scale Computing as a whole, though, we were founded back in the 2007-2008 timeframe and shipped our first HCI product in 2012.
HCI meaning hyperconvergence, where we're taking virtualization, storage, compute, and kind of combining them together into an appliance form factor.
And since then, what we talked about at Edge Field Day and went into a lot of detail around, we've kind of taken that as a basis for what we're deploying on-premises today.
Because there were some serendipitous things we did with that that make it a really good fit for edge computing and where the industry is going.
But we've combined that with a really powerful fleet management service that we call Fleet Manager, and the overall solution that we call SC Platform, or Scale Computing Platform, combines the two.
You guys have been in it from the get-go in terms of folks looking at generating data and exploiting that data at the edge. The deployment of powerful edge computing platforms is being
driven by creation of data at the edge and wanting to transform that data to business value. And
we see that from across industries and you see it better than most.
How did you get into the business of edge and how did that first product
introduction get you to where you are today with such a strong suite of products?
Yeah, it's kind of fun to take the wayback machine and think back.
So I've been with the company now for 13 years.
So I was here when we
first were launching the storage product and then transitioned into hyperconvergence and now into
edge computing. And it's kind of been fun to watch all these transitions, but just going back,
what we were solving for when we came out with our first HCI product was, we were hyper-focused on small and midsize enterprises. These customers have IT generalists; they didn't really have deep IT expertise, or, even if they did, they didn't have the time and availability to manage and monitor the infrastructure. And so what they needed was something where you just set it up
and forget about it. Just maximum uptime that was, we always said, simple, available, scalable. Those
were our three core tenets of the product that we focused on. And it's because we
had to be. Now, fast forward to maybe three or four years ago, when we started to see what we're
now calling edge computing, there was a multi-billion dollar grocery retailer that came to us
with about 800 sites they were looking to replace. And as they started to go through,
here are the issues that I have in these environments that I need to fix.
It was like this epiphany moment for the company.
It was, you know, we had created a solution that was highly efficient, that provided high availability, and that was really intended for these small and midsize enterprises.
But taking a step back from that, it wasn't really the small and medium enterprises.
It was the lack of IT expertise on premises.
And so anywhere that is constrained, you need something that is kind of autonomous. And that's how we got brought into this market. It was a little bit of luck, but also a little bit of vision, in terms of the hypervisor that we'd chosen, the storage stack, and the efficiency that we'd created in that. And then we built all this AIOps functionality into the product that allows it to run autonomously, and all of that kind of came together.
And now we have this perfect product to go out there to take to distributed enterprises
and remote locations that may not have connectivity,
may not be able to have someone on-premises with IT expertise to deal with the infrastructure.
Now, I think you hit on what makes edge environments different
from traditional cloud computing environments.
The lack of IT is a real challenge. But how did that inform the way that you designed your
solutions? And what did you experience along the way that was like, this is a core capability that
would never be the case in a data center but that is absolutely needed for this configuration?
Well, even before we go into that, let me first say that if you can run your applications in a cloud or a tier one data center, you should; there's nothing inherently wrong with that.
And as long as it meets the needs that you have for latency and throughput and, you know,
regulation and autonomy, all that stuff. Great. Do that. It's when those key drivers behind edge computing are in play that you
basically are forced to run on-premises. And now you're faced with these challenges. The first one,
exactly as you said, limited or no IT staff. That's huge. But other things, such as physical constraints,
right? Just the physical size of where these are going to be deployed, you know, oftentimes in
closets or, you know, people like to joke about them being above the fryers in a quick serve restaurant,
that type of thing. But it's not too far off from the reality of a lot of these manufacturing
floors. And you're dealing with inconsistent power, it's dirty, you've got heating and cooling
concerns, and physical security is a huge one, right? These things are not going to be placed in tier one data centers where it's behind lock and key and maybe a security guard. They're going to be out in the open and accessible.
And so there's just a different mindset, a different approach that has to be taken
to the infrastructure to make it work in that environment. So some of the scale computing
platform benefits and features that we had to address those challenges really are,
it's easy to set up. We have
what we call zero-touch provisioning, which is one of the things we talked about at Edge Field Day,
where you can take these nodes and put them in remote locations and have them provision themselves
without having IT expertise on-site.
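As a rough mental model of how zero-touch provisioning can work, here is a minimal Python sketch of a phone-home flow, where a freshly racked node registers itself by serial number and pulls down its site configuration. The endpoint, payload, and fields here are hypothetical illustrations, not the actual mechanism SC Platform uses.

```python
import requests

# Hypothetical phone-home endpoint; the real zero-touch provisioning
# flow in SC Platform may work quite differently.
PROVISIONING_URL = "https://fleet.example.com/api/v1/provision"

def phone_home(serial_number: str) -> dict:
    """Register this node by serial number and fetch its site config."""
    resp = requests.post(PROVISIONING_URL, json={"serial": serial_number}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. cluster name, network settings, peer nodes

config = phone_home("NODE-SN-0042")
print(f"Joining cluster {config['cluster']} at site {config['site']}")
```

The point of the pattern is that the only on-site step is racking and cabling the node; everything it needs to join its cluster comes down from the fleet service.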
They're easy to manage beyond that. We've got SC Fleet Manager, which provides visibility, orchestration, monitoring, and management across your entire fleet
of HyperCore clusters, as well as a Red Hat
Ansible collection that, again, was one of the other things we demonstrated at Tech Field Day.
It's so cool to be able to interact with these clusters as code, and the infrastructure as code. And that whole mindset is something that may or may not necessarily apply in the cloud, but it is definitely a requirement out at the edge.
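To give a flavor of that clusters-as-code mindset, here is a minimal Python sketch of a declarative "ensure this VM exists" step against a cluster's REST API. The URL path, payload fields, and credentials are hypothetical placeholders rather than the documented HyperCore API; in practice the Red Hat Ansible collection mentioned above would express the same desired state as a playbook.

```python
import requests

# Hypothetical HyperCore-style REST endpoint and credentials; the real
# API paths and payload schema may differ.
BASE_URL = "https://cluster.example.local/rest/v1"
AUTH = ("admin", "admin-password")

def ensure_vm(name: str, mem_bytes: int, vcpus: int) -> None:
    """Create the VM only if it is not already present (declarative style)."""
    vms = requests.get(f"{BASE_URL}/VirDomain", auth=AUTH, timeout=30).json()
    if any(vm.get("name") == name for vm in vms):
        print(f"{name}: already present, nothing to do")
        return
    spec = {"name": name, "mem": mem_bytes, "numVCPU": vcpus}
    requests.post(f"{BASE_URL}/VirDomain", json=spec, auth=AUTH, timeout=30)
    print(f"{name}: created")

# Desired state for a point-of-sale VM at every site.
ensure_vm("pos-vm", mem_bytes=4 * 1024**3, vcpus=2)
```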
A couple of other things that I would say fall into that are, you know, the efficiency that we provide in our software stack. It allows us to run on
really small form factors like the Intel NUC EEC, the Enterprise Edge Compute edition,
and other small form factors like that that make it a physical fit for these types of environments.
And then one of the things that's often not talked about: there's this unpredictable demand out there that IT infrastructure teams
face when they're talking to application teams and third parties that are providing these
software packages. They don't necessarily know what's coming next. And so a lot of times it
comes down to just future-proofing and making sure that whatever platform they select will work with what they think might happen in the next, say, three to five years.
And that flexibility is pretty pertinent in these environments.
And so you need this ability to scale out the infrastructure to really match the needs of the environment over time.
And that's one of the key tenets that we talk about with the SC Platform. When you look at your customer base,
and I know that you talk to customers from across different industries and different geographies,
where do you see the market today? And how sophisticated are customers
in terms of their vision for how to use edge in 2023 and how to manage it?
It's really all over the map, I would say. In general, edge computing is a pretty nascent market, and there are a lot of different approaches that people are taking, you know, near edge and far edge. Some customers are looking at deployment models where they're consolidating a whole bunch of workloads. They kind of have this vision: they know there are going to be additional services they'll be asked for in the future that they want to run on that shared infrastructure, and so they're trying to put it in place today.
That's that future proofing I was talking about.
Others are kind of interesting.
They're looking to the independent software vendors and saying, hey, ISV, I want you to provide your software as a service, which may or may not include the underlying infrastructure to run that.
And they're kind of doing point solutions through that, which is an interesting approach as well. I do believe that over time it's all going to be consolidated, just given the nature of the number of workloads, the amount of data, that sort of thing.
But that's what I'm seeing today.
I'll say one of the most interesting observations that I didn't expect, but I still see today, is that there are a lot of legacy applications out there.
I talked about the drivers for why people are running on-premises.
One of the unsung drivers of running on-premises today is the cost of
refactoring. You've got, you know, that grocery retailer I was referring to, they've got a point
of sale system. That point of sale system used to be a physical machine. Now it's a virtual machine
and it's running an old version of Windows. It's not going anywhere and probably won't for the next
five to 10 years. And they're going to need to run this alongside some of these edge native
workloads that are being developed right now.
And so that's an interesting piece.
I would say there are a handful of customers that are a little further down the edge path, if that is such a term.
And I would say it's those customers that are using AI, machine learning, and computer vision, doing real-time analysis in their environment. Not to focus too much on retail, but it's, you know, queue depth and analyzing customer patterns, or a manufacturer doing predictive maintenance and security- and safety-type workloads with computer vision. Really, every industry has its own apps like that, but I'd say most are trying to future-proof against the unknown versus actually implementing these exact workloads today.
When you think about the portfolio, you mentioned Fleet Manager.
Can you just walk us through the portfolio and how it matches this varied, nascent market?
Yeah.
And how you've developed something that really gives a crawl, walk, run to your customers as they go through their own learnings for deployment?
Yeah, so you can kind of think of SC Platform in two different pieces. There's the software
that runs on-premises, which is what we call SC HyperCore, and that is the core functionality
that is really based around that HCI product I talked about, which we introduced in 2012.
The benefit of that is we've had years and years of hardened work on this
to be able to run autonomously out in the field,
and it's just a workhorse. It's awesome.
The second piece of that is what you're referring to as SC Fleet Manager,
and this is a cloud-based service that allows you to manage
and monitor thousands of your HyperCore clusters out there in the field.
And as a cloud-based service, it provides a number of things.
It allows you to, number one, no matter how big your fleet is, whether that's one cluster or 10,000 clusters out there at all varying sizes, get a seamless experience in a single pane of glass, as much as I hate that term, to view and monitor and see conditions as they arise out there in the field. A node has failed, a drive has failed, something has happened with the infrastructure or the software stack; these conditions will propagate up and give visibility to the end user through Fleet Manager.
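As a rough illustration of how those propagated conditions could be consumed programmatically, here is a hedged Python sketch that polls a fleet endpoint and flags failed-node or failed-drive conditions. The URL, token, and field names are hypothetical, not the actual SC Fleet Manager API.

```python
import requests

# Hypothetical fleet endpoint and token; the real SC Fleet Manager API
# may expose different paths and condition fields.
FLEET_API = "https://fleet.example.com/api/v1/clusters"
HEADERS = {"Authorization": "Bearer <api-token>"}

def report_unhealthy_clusters() -> None:
    """Print every cluster condition at warning severity or above."""
    clusters = requests.get(FLEET_API, headers=HEADERS, timeout=30).json()
    for cluster in clusters:
        for cond in cluster.get("conditions", []):
            if cond.get("severity") in ("warning", "critical"):
                print(f"{cluster['name']}: {cond['type']} -> {cond['message']}")

report_unhealthy_clusters()
```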
And so the crawl, walk, run is,
you can imagine the out-of-box experience
that a customer would have to have with this.
And I actually, I mean,
this is the exact path that we see
almost every distributed enterprise go through,
which is, we're not
sure who we're going to select. Here's some rough criteria of what we're looking at. Let's bring in
some infrastructure and try it out. And so they'll bring in a handful of nodes. They will use zero-touch provisioning through Fleet Manager to get the nodes set up in, you know, 10 minutes or less. Then they use Ansible and start deploying some workloads onto
these and they'll go through
testing all the failure scenarios they might think could exist out in their environments.
And once they're satisfied with that, then they move on to more of a pilot phase. They pick maybe
five sites, or, in the case of manufacturing, you might have five clusters on the same site, but just
kind of a smaller number of clusters to just test the water
a little bit more. And once you're comfortable with that, then you start working on your deployment
plans. And I've been really astounded by just how quickly some of these customers have been able to
deploy hundreds of nodes out there across their entire infrastructure out at these disparate sites
in a matter of months, as opposed to, in some cases, years for some of these
customers, as a result of just some of the ease of use and things we've built in.
Yeah, I would assume that if you're looking at truck rolls and actually deploying with
an IT person on site, that almost makes Edge untenable for a lot of companies because of
just the human lift.
So you've really opened a door for a lot of companies that couldn't do this otherwise, which is very cool. You know, you're talking about them deploying applications and
doing that through your tools. One question that I have for you is, should we look at this as a
static deployment? Or, you know, what is the opportunity for reprovisioning of services,
adding services, as companies realize different ways that they want to use this infrastructure?
Yeah.
As you're talking about that, the first thing that popped into my head was back when VMware first started taking off.
Do you remember the term VM sprawl?
It's the, okay, now you've got a platform in there that allows you to very easily spin up additional workloads, whether it's container-based or you're running Kubernetes or you're running old-school VMs, whatever that is.
Now that you have a platform in place to run these on, now you can hand the keys over to the application teams and say, hey, go for it.
Here are the capabilities, the infrastructure that it's going to be running on.
You don't have to necessarily worry about that. And in addition to coming up with new cool things that they can deploy on top of that,
once it's out there, you know, you have this separate control plane that people can use for
the container management side of it. They want to have it at the end of the CI/CD pipeline and push out new code periodically; as soon as GitHub has the latest, it pushes all the way down to these applications kind of on the fly.
And all that becomes available now because you have an infrastructure
you can rely on out at the edge.
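To sketch what that last hop of the pipeline could look like, here is a hedged Python example of a post-CI step that rolls a freshly built container image out to each edge site in turn. The site list, update endpoint, and image tag are hypothetical placeholders; a real pipeline would drive this from its CD tooling and inventory rather than a hard-coded list.

```python
import requests

# Hypothetical edge-site inventory and workload-update endpoint.
EDGE_SITES = ["https://site-a.example.local", "https://site-b.example.local"]
IMAGE = "registry.example.com/pos-app:1.4.2"  # tag produced by the CI build

def roll_out(image: str) -> None:
    """Ask each edge site to restart the app workload on the new image."""
    for base_url in EDGE_SITES:
        resp = requests.post(
            f"{base_url}/api/v1/workloads/pos-app/update",
            json={"image": image},
            timeout=60,
        )
        resp.raise_for_status()
        print(f"{base_url}: rollout accepted")

roll_out(IMAGE)
```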
Now, as we look forward in time, and this is going to get a little bit,
you know, we're talking about a nascent market,
and I'm going to talk about a nascent vision for that nascent market
in terms of truly distributed computing from edge to cloud and where we get in terms of core capabilities
with the introduction of AI at the edge, et cetera. Where do we go from here in terms of moving to
that model? And how is Scale Computing implementing technology to help us get there?
Well, I think it starts with an aligned vision. And I think what you describe is accurate. You know, our platform is really designed for helping to support distributed computing. You know,
wherever that needs to live, we want to make sure that we are the underlying infrastructure to make
that happen. Now, for the adoption of distributed computing to really take hold in the way you're describing it, it's going to be the application teams and the application developers who are deploying in a model that actually makes use of that. So I think, you know, ideally,
the teams are able to programmatically look at the requirements of a given service or workload and determine, based on the metadata of the infrastructure capabilities, however that is decided: I need this latency, I need access to a GPU, I need this availability for the workload. You kind of go down the list of attributes for a given service or workload, and you say, okay, this can now be run on this subset of infrastructure, which happens to be on-premises, or it could be in the cloud, depending on whatever those resource requirements are.
I think that's the vision.
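To make that placement idea concrete, here is a small hedged Python sketch that filters candidate infrastructure by a workload's declared requirements. The capability attributes and values are illustrative assumptions, not a real scheduler or schema.

```python
# Illustrative capability metadata for candidate infrastructure;
# these attribute names are assumptions, not a defined schema.
SITES = [
    {"name": "store-042",  "latency_ms": 5,  "has_gpu": False, "location": "on-prem"},
    {"name": "factory-07", "latency_ms": 8,  "has_gpu": True,  "location": "on-prem"},
    {"name": "cloud-east", "latency_ms": 60, "has_gpu": True,  "location": "cloud"},
]

def eligible_sites(workload: dict) -> list[str]:
    """Return sites whose capabilities satisfy the workload's requirements."""
    return [
        site["name"]
        for site in SITES
        if site["latency_ms"] <= workload["max_latency_ms"]
        and (site["has_gpu"] or not workload["needs_gpu"])
    ]

# A computer-vision workload that needs a GPU and low latency lands on-prem.
print(eligible_sites({"max_latency_ms": 20, "needs_gpu": True}))  # ['factory-07']
```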
That's fantastic. It's been really instructive to listen to you today talk about what you're delivering. I am sure that folks online are
interested in learning more about scale computing. If they don't already know about you today,
you have a tremendous reputation in the industry for the
quality of your engineering and the quality of your products. But for those folks who haven't
first learned about and second engaged with Scale Computing,
where can folks go to learn more and engage with your team?
Well, first of all, let me say thank you for having me on.
This has been a great conversation.
I've really enjoyed it.
Website is a great place to start.
So scalecomputing.com,
there's all sorts of resources out there.
Of course, feel free to hit me up directly.
I'm Craig Theriac, T-H-E-R-I-A-C, on LinkedIn.
I'm happy to get you in touch
with whoever makes sense to talk to.
One of the things you said there
kind of triggered another thought.
Yes, we are known for simplicity engineering.
That's our tagline.
We're taking really complex things and putting them into solutions that are easy for our customers to use.
And a lot of times we hear from prospects that say, it's just too good to be true.
I don't believe it.
And we have been around long enough now where we can say, go check out the reviews.
There are thousands of these short-form case studies and reviews out there of people actually using this product in the real world.
And I would say start there as well.
Ask your peers.
Fantastic.
Well, thank you so much, Craig, for being on the show today.
It's been a real pleasure talking to you.
Same here.
Thanks so much.
Thanks for joining the Tech Arena.
Subscribe and engage at our website, thetecharena.net.
All content is copyright by the Tech Arena.