Screaming in the Cloud - The Gravitational Pull of Simplicity with Ev Kontsevoy
Episode Date: September 29, 2020
About Ev Kontsevoy
Ev Kontsevoy is the CEO of Gravitational, where he and other engineers build open-source tools for other developers for securely delivering cloud apps to restricted and regulated environments. Besides computers, Ev's obsessed with trains and old film cameras.
Links Referenced:
Gravitational website: https://gravitational.com/
Gravitational GitHub: https://github.com/gravitational
Teleport GitHub: https://github.com/gravitational/teleport
Transcript
Hello, and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.

...those boundaries. So it's difficult to understand what's actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course,
validate how reachable their application is, and of course, how happy their users are. It helps you
get visibility into reachability, availability, performance, reliability, and of course,
absorbency, because we'll throw that one in too. And it's used by a bunch of interesting companies
you may have heard of, like Google, Verizon, Oracle, but don't hold that
against them, and many more. To learn more, visit www.catchpoint.com and tell them Corey sent you.
Wait for the wince. This episode is sponsored in part by StrongDM. Transitioning your team to work
from home, like basically everyone on the planet is?
Managing gazillions of SSH keys, database passwords, and Kubernetes certificates?
Consider StrongDM. Manage and audit access to servers, databases like Route 53,
and Kubernetes clusters no matter where your employees happen to be.
You can use StrongDM to extend your identity provider and also Cognito to manage
infrastructure access, automate onboarding, offboarding, waterboarding, and moving people
within roles. Grant temporary access that automatically expires to whatever team is
unlucky enough to be on call this week. Admins get full auditability into whatever anyone does,
what they connect to, what queries they run,
what commands they type,
full visibility into everything.
That includes video replays.
For databases like Route 53,
it's a single unified query log
across all of your database management systems.
It's used by companies like Hearst, Peloton,
Betterment, Greenhouse, and SoFi to manage their access.
It's more control and less hassle. StrongDM, manage and audit remote access to infrastructure.
To get a free 14-day trial, visit strongdm.com slash S-I-T-C. Tell them I sent you and thank
them for tolerating my calling Route 53 a database.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
This week's promoted guest is Ev Kontsevoy.
Ev, you are the CEO of a company called Gravitational.
First, welcome to the show.
And secondly, what's Gravitational do?
First, thank you for having me. And second, Gravitational enables developers to have secure access to all of their production environments. And it also allows you to run applications on any environment anywhere in the world.
It's definitely the right direction to go in compared to, I don't know,
giving developers insecure access to other people's production environments,
which seems to be incredibly in vogue in some circles these days.
Absolutely.
But it's important to solve problems like this, where I want to access different environments
that, if there's a just and loving God, are actually separated from one another.
It's the old saw of everyone has a test environment.
Some people are lucky enough to have it be separate from their production environment.
It becomes a challenge that everyone struggles with on some level. And everyone who talks about
having solved it is basically lying in my experience. Oh yeah, we have a completely
distinct test environment. That's a high fidelity copy of production. And then you scratch a little
bit and it turns out the,
well, except for the continuous deployment server,
because that thing talks to everything,
and please don't ask us about our security posture there.
So there's a lot of directions you can go in
when we're talking about securing access into environments.
Where do you folks start, and where do you stop?
Well, it's interesting that you used this word environment so many times.
We believe fundamentally that having to maintain computing environments
is a major limitation for us software developers today.
Back in the day, I could just build my software, put it in a box, give it to you,
and that would be the end of the transaction.
You would put it on your laptop or put it on your server in the basement,
and you would just run it yourself,
or the software would just run by itself.
But as we transitioned to this cloud world, now in addition to building your software,
you also need to build and maintain the environment. We celebrate everything
as the infrastructure-as-code movement. But what it leads to
is just enormous complexity. So essentially, we're not
just building our applications, we're also building computers to run
our applications on. Because that's what these environments are all about.
So ultimately, we want just to enable
you, the engineer, to not even think
about the environment. Don't you think that engineers should just do
git commit, git push, and go home, and have software magically run
anywhere in the world where users need it? So that's really where we're going.
But we're starting with access.
Access today is the problem that we solve really well.
And if you're accessing your servers using something else,
you're probably either not doing it well
or you're very inconvenienced.
I talk from time to time about the ridiculous serverless system
that I built that is my newsletter publication system.
And aspects of that do have an "everything that hits Git automatically gets deployed" flow.
Now, it sounds awesome, and it is from a developer perspective.
But let's be real for a second here.
There's no test built into this.
I'm a terrible developer.
Security?
Yeah, there's one user account in this thing.
It's mine.
It has full access to
everything. And my default
security posture is, sure.
It's not so much a posture as it is an
unfortunate slouch.
You know why that is?
Because, again, what you're talking about is right.
It's a colossal pain in the butt,
and I don't have to worry about things like
insider threats. This is a
back-end system at the end of the day.
So if things break,
there is no user that is impacted beyond me
and it gets everything else out of my way.
There's no value to me in increasing the security posture
because in my perspective
and the perspective of many other folks,
security is a continuum
between effectively being completely wide open and being completely unusable.
Where on that spectrum do you wind up wanting to fall?
I bias for getting things done quickly.
Absolutely.
If you don't mind, I will be quoting you in the future.
You basically said there is no value in security.
This is why I don't like when people call Gravitational a security company.
We don't give you security.
We give you instant access to whatever you need to be productive right now.
One of our open source projects is called Gravitational Teleport.
Why did we call it Teleport?
Because it creates this illusion that you can teleport any computing device
your company owns into the same room with you
so you can instantly access it using protocols
like SSH and Kubernetes,
with support for other protocols coming.
So on GitHub, go check it out.
But it completely erases this mental partitioning
that we apply to all these environments
like test, production,
Amazon, GCP, VMware, on-prem, basement, satellite, self-driving car. All of those are going to be
instantly accessible to you without compromising security. Security is almost like when you're
building a bridge over a river. Is security a benefit? Of course, you don't want that bridge
to collapse, right? So it's secure in that sense.
But the benefit is getting there,
getting on the other side of the river.
So that's what we believe we enable
with Teleport specifically.
Unplug the computer and it's way more secure,
but your customers are probably not going to be happy.
Yes, you're losing access.
You're right.
You, I guess, tweaked my quote to say that there's no value in security.
There's a lot of truth to that,
because I'm very reassured constantly by a wide variety of companies
that security is incredibly important to them.
They always tell me this in their announcements of data breaches,
holding my data,
where it was very clear that security was not very important to them. It was
a back burner priority, and now they're doing damage control. It's, oh, what is your intrusion
detection system? The front page of the New York Times. We check that thing every day.
It becomes an afterthought because you're not going to improve security and get to your next
business milestone by doing that. In most cases, it is not directly aligned
with the stated goal of most businesses
to improve security posture.
It's something that has to be done.
It's not a value add.
Look, it's just expensive too.
And there's always the talent shortage to remember.
If you think of major tech companies here in the Valley,
Google, Netflix, Facebook,
go and ask them, how do you SSH into anything?
And you will see that most of these companies,
they build quite sophisticated internal products to do that.
They do not use off-the-shelf, completely unmodified components
like OpenSSH, for example.
OpenSSH is just a building block. Who else can do that? That's really my question.
I went to a couple of dinners with CTOs of other companies here in the Valley,
and they all confessed to me that they're all struggling with it, that it's basically
the Wild West out there in how access to infrastructure is implemented.
And you could even do it yourself.
Go and ask your engineer friends, hey, do you still have access to the production environment
of your former employer?
You know how many of them will say, yes, I do?
Isn't that scary?
There are companies out there running applications, holding data of their customers, and their
former employees still
have access to it.
Yeah, because of this trade-off of security and convenience.
If you don't make it convenient for engineers, they are not going to be productive or they
are going to build backdoors.
So I've seen that happening.
One of the more, I guess, amusing anecdotes from earlier in my career was when I
was unceremoniously fired from a company. That kind of happened a lot based upon, you know,
my personality and everything I say combined with everything that I do. And for the next day or so,
there was a repeated series of outages on the company's service. My perspective is, and always has been,
once I no longer work here,
I don't care about you anymore.
I'm certainly not vengeful or wrathful.
But what I strongly suspect happened,
because remember, I ran the ops teams at these places,
is that suddenly they were having to rotate credentials
across the board for all the shared service role stuff,
which is absolutely the right move.
We wound up letting someone go,
maybe they're bitter, maybe they're vengeful, but oh crap, that person had access to all of
these secrets that are difficult to change. We better get started right away. And frankly,
as a former employee, I want them to do that. Two years later, if there's a data breach there,
I don't want to be even a remote consideration as the cause of having done that. Because it's, I don't work here anymore, please lock me out.
Absolutely. I would even say that rotating credentials is an anti-pattern. You're not
even supposed to have long-lasting credentials, which removes the need for rotation.
Technically, this benefit is called reducing the operational overhead of implementing proper security.
Well, it starts with using proper tools.
I have a somewhat controversial statement, for example,
to say that, hey, if you're using SSH keys today
to access your infrastructure,
even if you're storing those keys in a secure vault,
you're doing it wrong.
You're not supposed to be using SSH keys to access servers.
You're supposed to be using certificates, and certificates need to be issued with automatic expiration for you every day.
So it's just as convenient to use, but it removes the need
to rotate anything. So if you just don't show up for work next day,
your access will be automatically revoked. So that's the way you do SSH
security, for example. And this list goes on and So that's the way you do association security, for example.
And this list goes on and on and on.
If you do it right, or if you are using a solution that does this right by default without you having to configure thousands of config files all over the world, then the operational
overhead and pain kind of go away.
And that is something that we, with our open source project,
are trying to promote and enable.
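As a rough illustration of the short-lived-certificate idea Ev describes (this is not Teleport's actual implementation; the TTL and names below are invented for the sketch), the whole trick is that credentials expire on their own:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a certificate carries its own validity window, so
# "revocation" is just the clock moving on. No rotation step is ever needed.
CERT_TTL = timedelta(hours=12)  # issued fresh at the start of each workday

def issue_cert(user, now):
    """Issue a certificate that stops working after CERT_TTL."""
    return {"user": user, "valid_after": now, "valid_before": now + CERT_TTL}

def is_valid(cert, now):
    """An expired certificate simply fails the check; nothing to revoke."""
    return cert["valid_after"] <= now < cert["valid_before"]

issued = datetime(2020, 9, 29, 9, 0, tzinfo=timezone.utc)
cert = issue_cert("ev", issued)
print(is_valid(cert, issued + timedelta(hours=1)))  # True: normal workday
print(is_valid(cert, issued + timedelta(days=1)))   # False: didn't show up, access is gone
```

In real deployments this is roughly what OpenSSH's certificate support (`ssh-keygen -s` with a validity interval) and Teleport's certificate authority automate: the validity window does the revoking.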
The challenge I run into is,
I agree with everything you just said
from a high-level perspective.
But then it turns into almost conferenceware on some level,
where the idea of what you say
versus the real-world consequences
slash implementation of that winds up breaking
down. For my exact use case, for example, I have an EC2 instance that I use as my development
environment. When I'm traveling, I only take an iPad with me, and I use an SSH client called Blink
to get into this thing. It also speaks Mosh, which piggybacks on the SSH handshake, but that's
neither here nor there. It does not support certificates. And it's an iOS app,
so getting it wedged in is going to be a little bit of a challenge. So I could see in an environment
where I'm doing that, yeah, we don't use SSH key pairs, except for that thing. Same story with
GitHub repositories. To my understanding, they also don't support SSH certificates and whatnot.
So this turns into the edge case exception territory pretty easily, where, yeah,
we generally believe as a best practice that there should not be SSH key pairs. But here's the long
list of things that require it, so we do it anyway. Absolutely. You could also talk about network
equipment, how many routers you've purchased even for your own house or apartment that have a baked-in
SSH server that only supports keys.
Someone has fancy network equipment,
mine is still stuck on Telnet.
Oh yeah, that's even worse.
But seriously, this is why we are doing it in the open.
Because at the end of the day,
if you have this vision for how the future is going to look,
and that's a better future than the present,
you need to find as many like-minded people as possible.
And the best way to do it is just to put everything you're building in the open out there
and make it easily accessible.
So if someone is working on the next generation of iPad SSH client,
they could just go and use our code to make it support certificates.
Or if someone is struggling to set up certificate authentication for SSH
with existing open source tools, here's another open source alternative.
And it does certificates by default, so there is no complexity associated with it.
So by doing it in the open and interacting with community
and sitting here chatting with you,
that's how I think we will proceed moving forward
to just slowly reshape the future towards simplicity,
ease of use, and then security compliance
and everything else that comes with it.
So tell me a little bit about what the getting started process looks like.
It's one of those ideas where, on some level,
we see this on conference stages all the time,
where the one that really stuck with me was I was reading an AWS blog post
about how the whole point and value of Kubernetes,
where anything they say after this that isn't
"it's good on your resume"
is kind of a lie.
And that's a hill that I understand is unpopular,
but also completely correct.
And they say part of the value here
is that you never have to SSH
into your environment ever again.
And that was great.
And when I finished reading the blog post,
I checked what else had come out that day.
And oh, now they've launched this new integration
with Systems Manager Session Manager that lets you get a shell inside your containers so you don't
have to SSH into them anymore. It's, oh, that's right, the things we say versus the things we do.
What's the getting started process look like that helps make the ideal city on a hill version
a little bit closer to reality? So before we go into getting started,
I think that the question of either you should or should not
SSH into pods or SSH into machines that Kubernetes is running on,
it's really up to you.
It's up to your organization.
It's up to your operational philosophy.
I don't think there is a single answer or industry best practice that you just go out
and say you should always do that.
And when companies come out with these messages,
you're right, it just feels not genuine.
There is a way to do it simply and securely.
And if you want to SSH into the same infrastructure
that Kubernetes is running on, go ahead and do it.
However, make sure that you use the exact same credentials.
Make sure that they are consistent.
So for example, Kubernetes has role-based access control.
SSH at its core does not have role-based access control.
So you should use an SSH implementation that enables you to set the same kind of roles
and same permissions.
For example, you could say, developers must never touch production data. So your SSH layer
needs to understand what is the production and what is staging.
And then when you're accessing a machine, your SSH layer
needs to be aware if there is any customer data on that machine
or if that machine gives you access to customer data.
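A toy model of that rule (this is not Teleport's real role engine; the role names and labels are made up for illustration) shows why the decision has to live in one place, above the protocol:

```python
# One shared role table consulted for every protocol, so SSH access and
# Kubernetes access can never drift apart. Entirely hypothetical names.
ROLES = {
    "developer": {"staging"},           # developers must never touch production
    "sre": {"staging", "production"},
}

def can_access(role, environment, protocol):
    """Same answer whether the request arrives over SSH or Kubernetes.

    The protocol argument is deliberately ignored: that's the point.
    """
    return environment in ROLES.get(role, set())

print(can_access("developer", "staging", "ssh"))            # True
print(can_access("developer", "production", "ssh"))         # False
print(can_access("developer", "production", "kubernetes"))  # False: same rule, same answer
```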
Traditional open source SSH tools, they're just too low level
to understand this modern cloud complexity.
And this is why some companies just say, no, we're going to disable SSH
completely. You should only use Kubernetes. And then they run into
other issues when they do that.
So going back to our project, how do you get started?
Well, go to github.com slash gravitational slash teleport.
And look at the readme. It's a very small readme. And it tells you that Teleport is a single binary, same as sshd.
So you put it on your servers, and then you give it
a very little configuration.
And it gives you all of these things.
It gives you the proxy that you use
to access all of your infrastructure.
It gives you role-based access control.
That's coming up in the open source version, by the way.
It gives you integration with single sign-on.
So you could do something like GitHub authentication
to get into your infrastructure or Okta or whatever. And it does it in
the same way as you access Kubernetes. So if you are
a member of a developer's team on Kubernetes RBAC, you are going to be
a member of a developer's team on your SSH RBAC.
And the same rules that you set for developers will be applied for
both protocols.
So you could have your cake and eat it too.
Yeah, then you click the download link to play with it.
And then there is documentation with a quick start.
It's basically the same experience as we're all accustomed to when playing with well-packaged open source solutions.
A lot of marketing on your site talks about using this to get into clusters
of machines, or
of course, Kubernetes, which is the third rail
I am not touching at the moment, because
oh, dear stars, do I get letters whenever
I do. And that's
great and all, but what my primary
use for most of what I do with EC2 these
days is using it as a developer
environment. I don't have a cluster
because it turns out that some of the EC2 instances I use are really big
and they keep making bigger ones,
so the problem gets way easier.
It's kind of interesting you mentioned this.
Let me step down from my CEO of Gravitational role here.
As an engineer, I do find it quite interesting
that we are now getting these enormous machines
with 64 cores and terabytes of RAM.
Thanks to AMD stepping up their game.
So it is kind of questionable:
do you even need an entire environment
with this kind of auto-scaling stuff
if computing is now so cheap?
And engineering your application to run on a single box
is actually much simpler.
So part of me looks at this whole thing
and I'm thinking, how many startups are out there?
How many just web applications would do way better
running on a single box?
And you could have the other one for failover,
but the point is that I'm quite fascinated
with the progress we're making on computing again.
But going back to your question on what is a cluster?
Do you even need a cluster?
This is basically a question about the language.
It's the word environment I like to use. You have an environment you want to go to.
It could be a single machine, it could be two, it could be 2,000.
But then you have these things like regions.
You could have a single node in one region on AWS,
then you could have another node in another region.
So what do you call those?
And then you have, for example, systems like Kubernetes,
and they do use the word cluster.
So we try to use language that is as agnostic as possible.
Just think of cluster as just like a collection of machines.
And a single
node is a cluster. And that's another problem that we're struggling with. What do you call a server
these days? If you use the word server, some people will say, oh, no, no, no, I don't need
to access servers. I need to access VMs. Or they will say, no, I don't need to access VMs. I need
to access computing instances.
So what is this thing you're accessing?
Or is it an instance or maybe it's a Kubernetes pod?
So we try to use this language that's kind of neutral. So we use cluster to describe a collection
of any computing devices you may have.
And we use the word node to describe
anything you can SSH into.
It could be a pod, it could be a VM, it could be an instance, it could be a server,
it could be a Raspberry Pi or a self-driving vehicle.
So it's all node from Teleport's point of view.
In what you might be forgiven for mistaking for a blast from the past,
today I want to talk about New Relic.
They seem to be a relatively legacy monitoring company,
and I would have agreed with that assessment up until relatively recently.
But they did something a little out there.
They reworked everything.
They went open source, they made it so you can monitor your whole stack in one place,
and, most notably from my perspective,
they simplified their pricing into something that is much more affordable for almost everyone.
There's even a
free tier with one user and 100 gigs per month, totally free. Check it out at newrelic.com.
To be clear, does it have its own standalone installer, or am I in npm hell if I want to be using it?
It's a single binary. You don't need installers. Again, we as a company, we are addicted to
simplicity. You almost misspoke there as "we're addicted to complexity."
And yeah, I actually have a list about 15 companies
I could absolutely put that as their tagline.
And you understand why this happens,
especially in the open source world.
If you make your product so easy to use and so dead simple,
then people will just start saying,
you know what, why would I pay you money?
Like it's open source, it's just this magic dust
that could sprinkle it all over my infrastructure
and call it a day.
So then open source companies,
they're basically forced to make their products more complex
and they say, well, there are these 57 features
that exist only in the enterprise version,
and you really need some consulting help to set them up.
So I could see why that is the case sometimes.
But in our case, I think it kills the value proposition.
If you have stamina and talent to deal with complexity today,
you could build yourself a fantastic access solution
using OpenSSH.
Go ahead and do it.
But if you want something that requires
as little as possible time commitment and even
expertise, you want the right thing by default. So go and download Teleport and see how easy it is.
It's a single binary. It's the simplest thing we could possibly think of.
Getting up and running quickly and easily is helpful. I absolutely agree with that.
This is one of those stories where I am in no way,
shape, or form an expert in the area that you have built an entire company around. However,
I'm an overconfident white guy on the internet, so of course I'm going to make unfounded,
wild speculation and present it as fact. My experience with the open source world has
always been that people are thrilled to pitch in on open source projects and get them to a point where they scratch their own itch. But a lot of what makes software usable
and approachable by various folks is accessibility. It's UX, it's polish, and that's not really fun
for people to pitch in on, in most cases, on a volunteering basis. So I've always sort of taken
the perhaps overly charitable position that the reason that so much open source software is crappy
is because the stuff that makes it easier to work with isn't the fun stuff to build. You need to
start paying people to do those things. Oh, that's absolutely true. And I do understand that I'm
conflating a bunch of things that don't necessarily agree.
Open source does not mean volunteers only.
A lot of people are paid to work on open source.
There's a variety of different governance models, etc., etc.
Please, please, please don't write me letters on this one.
I was expecting you to say something more controversial,
but honestly, everything you said, I think most of us will agree with.
Yes, it is true that what motivates people to begin open source projects is to scratch their own itch.
For example, why we started Teleport.
The previous company this same team built was Mailgun,
an email delivery service; you've probably heard of it.
And after Mailgun's acquisition by Rackspace,
we joined Rackspace, this much, much larger cloud company.
And Rackspace, understandably, they told us,
well, you have to migrate Mailgun from SoftLayer,
which is now IBM, to Rackspace data center.
So think about it.
So you have this cloud environment that you set up
and you just need to take it and move it somewhere.
How long do you think it took us? Wild guess.
I'm going to guess, well, that's the problem. There's two different directions I could take that in.
I could come out with something, oh, 30 seconds. And then it's like, well, no, we're not that
good, which is never a good thing. Or I could go, 18 years.
And the answer is, no, no, no, we're a defense contractor. It took us 40.
So there's no good answer as far as how to come up with that.
So I would hesitate to even hazard a guess.
But why didn't you say 18 seconds?
Because if you think about it,
someone asked these guys to move a bunch of software.
So what is software?
Software is just text files.
So how big are those files?
I don't know, like a megabyte, five megabytes?
How much code can you type in three years?
So even if it's, let's say, five megabytes,
you take those five megabytes and divide them by the speed of your internet connection.
That's how long it takes for the software to travel between data centers.
So why isn't it seconds?
That's the same question to ask.
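The arithmetic Ev is gesturing at is easy to write out; the 100 Mbit/s link speed here is just an assumed figure for illustration:

```python
# Why isn't moving software between data centers a matter of seconds?
# If software were only its text files, the copy itself would be trivial.
size_bits = 5 * 1_000_000 * 8   # 5 MB of source code, in bits
link_bps = 100 * 1_000_000      # assume a 100 Mbit/s link between data centers

seconds = size_bits / link_bps
print(seconds)  # 0.4 — the files move in under half a second
```

The other six months, as he goes on to say, is the complexity attached to the environment, not the bytes.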
And my non-technical friends, when they ask me,
so what are you doing post-acquisition?
And I said, well, we're moving our software from one data center to another. And they said, well, how long does it take to copy a
bunch of files? Why is it a project? So how long did it take you? And I said, six months. And people
were like, wow, why is it six months? It's because of that. It's because of all of this complexity
that we've attached to all these environments. And that's why we started to work on Teleport
once we were out of Rackspace.
Because even setting up similar infrastructure security
takes you a while.
So yes, we did scratch our own itch.
And the second open source project we're also working on
is called Gravity.
Gravity allows you to take all of your software,
like your entire Kubernetes cluster.
So that's one important limitation.
Gravity only works with Kubernetes clusters.
So it takes your Kubernetes cluster.
And I personally really hope that the next words out of your mouth
to finish that sentence are,
and throw it in the garbage.
But I have the sneaking suspicion that is not the case.
You know what?
It makes it easier.
But let me finish that sentence.
So it takes your
Kubernetes cluster and packages it into a single file, similar to a Docker image. It says, this
file is your software. So if you want to throw it into the garbage, you can literally drag and drop it
into the garbage on your desktop. But if you want to drag and drop it into a different data center, you could do that too. So now when the CIA comes to you and they say, we want to use your software,
but we don't want your SaaS, we want your software on our own top secret cloud,
and we're not going to let your DevOps people touch it. So then you will use Gravity to give
it to them and say, here is a file, and that file is the software you want. This level of simplicity is what I've personally been missing since I moved from more traditional
server programming to this cloud world, where modern cloud applications, they don't even
feel like software sometimes.
They feel like it's an advanced form of configuration for your environment. It's this thin layer of stuff that you're spreading across
many, many instances on Amazon or something.
And then you can't even tell where my software is.
It's everywhere.
It's 15 different repositories and a couple of Docker registries
and no one really knows how to collect it together.
So that's what Gravity does.
It allows you to say, this file is my software.
Then it will just run anywhere by itself.
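As an analogy only (Gravity's cluster images are far more than an archive, and nothing below is Gravity's actual format), the "this one file is my software" idea can be sketched with a plain tarball:

```python
import os
import tarfile
import tempfile

# Pack a pretend "environment" into a single file, then list it back out.
with tempfile.TemporaryDirectory() as workdir:
    app = os.path.join(workdir, "app.py")
    with open(app, "w") as f:
        f.write("print('hello')\n")

    # The single artifact: drag it to another data center, or to the trash.
    image = os.path.join(workdir, "software.tar.gz")
    with tarfile.open(image, "w:gz") as tar:
        tar.add(app, arcname="app.py")

    with tarfile.open(image) as tar:
        names = tar.getnames()

print(names)  # ['app.py']
```

The point of the analogy is the handoff: one file, one checksum, one thing to move, instead of fifteen repositories and a couple of Docker registries.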
So yes, going back to your original question,
we did build these things to scratch our own itch.
But the ease of use is probably the most important
internal metric that we share when we work on this project.
Simplicity.
And that is because it just so happens that making things
simple and making them management-free is also
scratching our own itch. Just think of Gravitational's engineers.
We've all done our share of DevOps and system administration in the past
and we are also engineers, developers.
So we don't really like babysitting hardware.
Because when you are babysitting your environment,
that's really what you're doing.
So my ideal version of any kind of software product,
it needs to be unmanaged.
You know what people say?
Oh, buy managed Kubernetes, managed database,
managed this, managed that.
Why do we need to manage software?
We're dreaming about having self-driving cars.
You see the irony here?
So we want cars to be self-driving,
but we want to manage software.
Why can't we make software that's self-driving first
before attempting something even riskier?
Oh, absolutely.
It's the ideas of things we claim to want
and things we actually want are worlds apart. Great. We have this complete CI/CD system. We need to add a step
where a human being can click a button for our audit compliance. Yeah, that's not great. And
then they automate something that hits a key every 20 seconds on a keyboard with some physical IoT
robot to get around that. And it's okay. At this point, you've built a ridiculous
Rube Goldberg contraption.
But that's every CI/CD system on a long enough timeline.
Let me give you one little example.
Sometimes investors would book
a meeting with us, and they would ask
questions like,
do you have any plans to add an advanced
user interface, or a user interface for this or that?
Sometimes I just give them the straight answer: if you make a feature robust enough
and it just works, you don't really need to manage it, so you don't really need a user
interface for it.
But I like to give them this joke: look, you have an SSD controller in your laptop right
now.
Every single employee at your company has that SSD in their machine.
SSDs have controllers.
Controllers run complex pieces of software on them.
Do you look for a single pane of glass
to manage SSD controllers across all of your employees' machines?
Of course you don't look for that.
It's silly.
Why? Because it just works.
So why don't we make our cloud software
behave like SSD controllers?
Then we wouldn't need these massive control panels
with knobs and charts and graphs and logs.
That is the future that we're optimizing for.
And I can't wait for this to happen.
I'm personally tired of managing environments.
That's, I think, something that everyone wants to say.
They're tired.
They're tired of doing the stuff that is garbage
and doesn't add direct value.
Working with AWS Organizations, which is where I tend to focus, really emphasizes this.
I spend more time setting up subordinate accounts for isolation of workloads or teams than I often do working within those accounts.
It's a painful process.
It's boilerplate, and solutions are slowly evolving in that direction.
But if I didn't have to do a lot of that stuff, I wouldn't.
Which means that this ties back to the whole problem
of what are we trying to achieve?
And if whatever I'm working on now doesn't directly align with that,
I'm probably going to half-ass it as soon as humanly possible.
Yes.
Honestly, I thought that Heroku-like environments
were going to be the future.
It's been almost 10 years since Heroku launched,
and I don't feel we have made enough progress in that area.
Some people call it serverless,
but what I see is this serverless movement
is being hijacked by cloud providers
by simply saying,
hey, we're going to manage this serverless framework
on top of servers, basically, for you.
But ultimately, that's something I don't want to think about.
I want to think of this entire environment,
an AWS region behind a single access point,
as my computer.
So I'm going to push my software into it,
preferably as a single file.
It just makes it easy for me to reason about this way.
And just have it run there by itself.
That would be the dream.
And I don't want to know what a load balancer is.
I don't even understand.
If my application has a defined entry point,
why can't this thing make it
accessible for me?
Why do I need to understand different types of LBs
and auto-scaling groups?
I think we're pretty close to closing
this gap in our abstractions. So I think it's about time, and Gravitational is working on it.
Before we call this an episode, there's one more thing I want to talk to you about.
Now, for folks who have not been on this podcast before (which I'm told is still more people than
have been, because I don't have that many episodes yet), one of the things I do in the background is start a quick conversation before I whack the record button, and I ask a very short list of questions.
One of them, which leads to fascinating answers, is what do you want to make sure that we don't talk about?
It's not that I only tell the stories people want told. But if I sit here and
beat someone up over PR missteps their company has made, for example, it's not a good
episode. It's awkward, it's uncomfortable, and no one likes it. So I want to avoid those things
if possible. And I've gotten a range of hilarious answers over the years of asking that question. But you gave me a great one,
which is specifically,
don't ask you to shit on other companies' technology.
I love that answer.
Tell me more.
Well, I believe that we are much better off
when we build on top of lessons
that we learn from each other.
No single company, even as powerful as Microsoft, Google, or Amazon,
is capable of solving every single problem in the best possible way.
So if I make a mistake, I don't mind that some other open source project will come in and correct me for it.
That keeps me honest.
And I will do the same to other companies working in the open-source space.
And also as an engineer, I love stitching the best open source tools
from different authors, from different vendors,
to assemble a solution that works for me.
So by criticizing each other's projects,
we're not really helping anyone make these choices.
But what I do think is fair is criticizing approaches.
For example, what I said earlier about SSH keys:
it's just not a very scalable way of doing it.
And I could make very technical arguments for that.
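To make that scalability point concrete, here is a back-of-the-envelope sketch (my numbers and framing, not from the episode): distributing static public keys means an entry per user on every host, while certificate-based access, the approach tools like Teleport take, leaves each host trusting a single CA key and each user carrying one short-lived certificate.

```python
def static_key_entries(users: int, hosts: int) -> int:
    # Static SSH keys: every user's public key lands in every host's
    # authorized_keys file, so the entries to distribute (and rotate,
    # and revoke) grow multiplicatively.
    return users * hosts


def certificate_entries(users: int, hosts: int) -> int:
    # Certificate-based SSH: each host trusts one CA public key, and
    # each user holds one short-lived certificate issued by that CA.
    return hosts + users


print(static_key_entries(50, 200))   # 10000 entries to keep in sync
print(certificate_entries(50, 200))  # 250
```

Even at a modest fifty users and two hundred hosts, the static-key approach leaves ten thousand entries to keep consistent, which is the kind of technical argument Ev is alluding to.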
But generally, I do believe that every open source project,
every product out there deserves
some attention.
And ultimately, we should be
trying to integrate with each other,
hopefully using open standards.
That leads to this outcome
that I'm dreaming about.
People think that a lot of my brand
is built on crapping on companies' technologies.
And they're kind of right, but I'm careful to punch up.
There's a reason that I own twitterforpets.com.
If I make fun of someone's actual small startup, I'm a jerk,
because that's people's hopes, dreams, etc.
Most of the companies I make fun of, other than a few very egregious examples,
are either multi-billion dollar entities or are publicly traded. At that
point, you've opened yourself up to scrutiny and criticism. And frankly, you can weather my slings
and arrows in a reasonable way. If people are listening to what I say and feeling bad as a
result, I've failed somewhere. I also think that when companies try this, building their entire marketing
brand and persona
on crapping on other people's work,
it doesn't look good at all.
I agree.
Look, even going after larger companies,
you have to remember that the companies are not monsters.
Take Amazon, for example, famous for its two-pizza teams.
So if you're picking a particular Amazon offering
and you're going after them,
there are basically 10, 15, 20 people behind it.
So that's really the group you're having an argument with.
It's not all of Amazon. They all have feelings,
and we all work hard, and I do believe that at the end of the day
we love technology, we like computers, we like what we do.
So creating drama unnecessarily,
that's not something I can be excited about.
At that point, the feud becomes
the story rather than the actual value
of what it is that you've built.
Exactly. Too much of that
is happening in the open-source space, and
that is unfortunate. It's the rage-fueled
equivalent of, we don't have much useful to say,
so we're going to throw a big party at a conference.
Yeah.
One last question to get this slightly back
to topic before we wind up calling it a show.
Do you think that as we look at what's happened over the course of the industry, progressing from
running things on mainframes to the whole data center story to thin client, thick client,
and back again, and now we're in a world of cloud, do you think this is the end state, I guess,
the pinnacle of computing?
Nothing to go beyond this, shut it all down, we're done.
Where do we go from here?
I think there is definitely going to be another thing that will come to replace the cloud.
And I hope that soon we will be able to say that, oh, if you're doing cloud-native computing
or cloud-native application, that's the legacy way of doing things.
I don't know what this next thing is going to be called,
but I do like to think about it a lot, what it will look like.
And I think one area for improvement is for us to close this gap.
We keep saying the data center is the new computer.
The data center is a computer,
and Kubernetes is an operating system for the data center as a computer.
There was DC/OS from Mesosphere, remember?
But it just hasn't happened yet.
Kubernetes still feels like a collection of primitives
to manage a bunch of containers, plus a million other things.
It just feels like you're dealing with drivers.
If you compare Kubernetes to an operating system,
I say that drivers are too exposed.
If I'm building an application for Linux or Mac or Windows,
I don't think about USB drivers
if I want to take sound from a USB mic.
But modern cloud environments,
they make us think about load balancers, volumes,
and all this other stuff that's just too low level.
So I believe that we will arrive at this post-cloud world
where data center truly becomes a computer.
Where the process of creating and publishing an application
for macOS or iPhone or AWS will be extremely similar.
Where you say, here's my file, this file is this image, it's my application.
You could put it into your AWS account and it will run. That's what I believe the post-cloud
world is going to look like. And it's almost like Gravitational's vision for serverless.
Because today, when we talk about serverless, it's just basically another framework on top of
something like Kubernetes. But we believe that serverless is when the process of building an application
ends with a single build artifact.
Like, this file is my software.
Where it will run, I don't want to care.
I don't want to know.
That's true serverless to me.
So hopefully, once that happens, we could finally say goodbye
to the cloud-native world.
Welcome to this post-cloud world
that Gravitational is trying to enable.
And you're doing a better job of articulating that vision
and that story than the certain large company
that did a cloudless hashtag that resulted very quickly
in being effectively cyber-bullied off the internet
because the entire premise was ludicrous.
I think that you're right.
I think there is definitely something that has to come next.
If not, then what are we all doing here?
We could not have imagined 15 years ago
a lot of the things that we take for granted today.
And I imagine that that trend is not likely
to slow down anytime soon.
I feel like that is tempting fate
to make that observation in 2020,
but I mean it from an optimistic point of view.
Agreed, agreed.
I'm not going to talk about those other companies
because that's specifically something
I asked you not to ask me about.
Exactly.
We're not going to name names.
It's fine.
But if it helps anything,
they're worth hundreds of billions of dollars.
Again, I don't consider it punching down,
which is really, I think, shorthand
for what I view this as.
I learned you can punch down at big companies
when they're trying something new
and you're crapping on them,
or they're talking about their journey
and how they wound up going somewhere.
I got that one wrong in the early days.
And "well, I'm sorry, I'll do better"
is sometimes the only thing you can say.
Sounds like a plan.
So if people want to learn more
about what you have to say,
what you have to show them,
what you have to sell them in some cases,
where can they find you?
They could go to gravitational.com.
They can click on teleport to learn how we do this magical access to everything,
or they can click on gravity to learn about packaging applications as a single file.
Or they could go on GitHub and just dive straight into the code.
It's github.com slash gravitational.
Excellent.
Ev, thank you so much
for taking the time to speak with me.
I really appreciate it.
Thank you for having me, Corey.
Ev Kontsevoy is the CEO of Gravitational.
I'm cloud economist Corey Quinn,
and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts.
Whereas if you've hated this podcast, please leave a five-star review on Apple Podcasts,
along with a comment explaining why everything Gravitational is building is overly complicated and unnecessary, because all you really need is Telnet.
This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at screaminginthecloud.com
or wherever Fine Snark is sold.
This has been a HumblePod production.
Stay humble.