The Infra Pod - Security problems are.... infra problems! Chat w/ Mike Malone from Smallstep
Episode Date: September 4, 2023. Ian (Snyk) and Tim (Essence VC) sat down with Mike (CEO of Smallstep) to talk about how software security will change as we see more security problems become infrastructure problems. How does the industry potentially change if we think about security problems from the infra angle? What are the hurdles and benefits? Listen in to learn more!
Transcript
Welcome to the pod.
This is yet another infra deep dive with the podcast runners, the Yet Another Infra group.
So again, Tim from Essence VC and I'll let you introduce yourself, I guess.
Awesome. I'm Ian, helping Snyk turn from a series of tools into a platform.
And I'm super excited today to be joined by a good friend of mine, Mike Malone.
Mike, could you tell us a little about yourself, what you're working on, and all that fun stuff?
Yeah. Hi, Tim, and good to see you again, Ian.
I'm Mike Malone. I'm CEO and founder at Smallstep.
We do end-to-end encryption for devices, workloads, and people.
End-to-end encryption for everything, everywhere,
using certificates, cryptographic identity, TLS.
My background, though, I'm a software engineer.
I'm a nerd, so I'm here to have fun.
And what's the background on your company, on SmallStep?
What's the two-line sentence on why you started it? I'm
curious to hear. It's the typical "felt the pain, built the business" story. I've been in cloud native,
microservice, agile, dev sec ops, basically my entire career at startups. And security has been
a perennial problem. My most recent gig prior to Smallstep, I was the CTO at a company
called Betable. It was a platform for online gambling. And we used to joke that we were
regulated like a bank that sells liquor. So got to experience compliance and real security issues,
being the custodian of customer funds, actually caring about that. I don't want to lose people's
money. And what does that mean when you're trying to have engineers on call, but also make sure that a malicious insider can't steal
money? It's hard. And at Betable, what was your solution? Traditionally, the way I framed security
is oftentimes it's about detecting and mitigating risk versus trying to design up front to prevent
risk in the first place. At Bettable, was that your experience or were you designing security in from first go to
reduce that like monitoring risk?
We were being scrappy.
So both corners were definitely cut and it was a higher trust environment than would
be ideal at scale.
But that is the story of a startup, right?
So my observation of the development of what I would call modern security solutions that are amenable or compatible with modern technologies and techniques is that modern technologies and techniques, they came out of Flickr and Twitter and a bunch of Web 2.0 companies in the early 2000s.
And a fundamental concept of security is it doesn't make sense to spend more on security than the value of the thing that you're protecting. And startups don't have a lot of value to protect
just realistically. And they're high trust environments. Like you have 30 employees,
you know all of them by name, you know where they live, you're friends with them all, you trust them
and it's reasonable to trust them. But then as you grow in scale, things get more complicated.
And it was only then that folks started to think about,
okay, how do we secure this without undermining it?
From your experience at Betable, but also curious about the customers you see at Smallstep: have practitioners been more on the detect, monitor, remediate side, or has it been more, okay, we're going to go change something from first principles in the way that we actually build software or the way that we design our infrastructure?
I would say you can draw an analogy here between like the lift and shift era of cloud versus cloud native. So V1 is like, how do I model my old world in the new world so my brain doesn't hurt so much? And then once you get comfortable with the new world, it's like, oh, modeling the old world in the new world doesn't make sense. Let's actually
think about this from first principles and do it the right way. And I think we're mid-journey on
that. The whole identify, protect, detect, respond cycle, I think, is just fundamental to security.
But ideally, you're identifying earlier and earlier
in the software development lifecycle.
Ideally, you're not identifying things
when they're already in production, let's say.
Yeah, that makes total sense.
One of the things I've been thinking about recently, and I was telling Tim and Mike about it before we started recording: I saw a message in the ad group a while ago where someone said security is principally an infrastructure problem.
And it took me a while to really sit back and consider, what are we actually saying
with that statement?
And my interpretation, and this is where I think some of the zero trust movement fits
in, is that a lot of the mitigations we put in place, controls we put in place, are the
result of the fact that security hasn't been designed for infrastructure at all.
And so we have to add mitigation layers around it and controls around it to make that possible.
I'm curious, Mike, on your perspective on this, considering that you're primarily building tools that help people secure their software and be more productive at the same time.
I'm curious to get your perspective on the idea that security is an infrastructure problem.
I think that's valid.
I think a lot of security is an infrastructure problem.
Maybe not all of it.
I guess a little story. I was at RSA a few months back and I was talking to a CISO there
at a big company. And I was sort of talking to them about their job and what it means to be a
CISO. And the reality is you're a lot more like a chief counsel than an actual individual contributor
who knows a lot about security.
Like his take was, you know, my job is mostly risk management and enablement.
You think like I run security or own security.
I don't own security.
Developers own AppSec.
IT owns endpoint security and operators own NetSec.
I don't own anything really important security related other than just sort of overseeing it and providing sort of risk management guidance.
So the security is an infrastructure problem idea, I think, is related to that.
Because as these infrastructures are rolling out, the first goal, I think, is I don't even know what's going on.
So I'm going to buy a bunch of risk management tools that just scan networks and tell me what's screwed up.
And then I'm going to yell at people
or like hopefully enable them.
But ultimately the solution is handle security
the same way you're handling everything else.
Automate it, get it into your GitOps workflows,
get it into your normal development lifecycle,
test it early,
make sure your pre-production environments
look like your production environments.
All of those things need to be applied to your security program as well.
There are so many benefits if you build infrastructure
to enable security by default.
I think this is kind of the dream of everybody.
I have a totally safe environment that's free from all kinds of attacks,
free from all kinds of mistakes.
But obviously, we don't live in that world, right?
So therefore, we need more infrastructure and we have more things.
I mean, probably starting with your example,
I think everybody's done this so differently across industry.
You look at different companies: does everybody use the same tools?
Does everybody use the same practices?
Not really, right?
A few things are similar, but the overall practices are quite different. And I wonder,
what do you think is the hardest part of trying to enable a company to treat maybe the security
problem you're solving, you know, just to start as an infrastructure problem where we have
infrastructure built up. We have infrastructure
practices, right? Rather than adding more people, everybody being more careful, you know, more checklists,
whatever other alternatives are out there. Like, what makes infrastructure hard to implement
in a company from your point of view? Well, you're changing the foundation. It risks a lot
of disruption. Like if you're talking about changing infrastructure, think about changing out the database that everybody's using, right? There are a lot of stakeholders, a lot of things that could go wrong. And fundamentally, that's it. So you gotta get buy-in from a lot of people, you gotta get support from a lot of people, and you gotta have the guts to do it. You know, that takes a special type of team, I think.
I'm curious, just thinking about Smallstep and your experience and the broader thing you're trying to do, which is, one, enable zero trust, and we should probably talk about what that means in a minute, but two, make it really easy and accessible to a core persona. And I think your core persona is kind of the platform engineers. Traditionally, what would 10 or 15 years ago be called the sysadmins, the ops folks.
Right, yeah. And more recently we call them platform engineers, or some places in the world call them DevOps.
That type of person, we're in the bits and bobs enabling the path to production, and we're also putting a box around the path to production. I'm curious how much you've thought about the different types of people and how a security value prop around
specific software, like how that accelerates, decelerates and who resonates with that and who
doesn't. Like you're in a very specific space where you're solving an identity problem. I'm
curious if, you know, in your travels and in your thought process, does the security piece also
resonate with other types of buyers and other types of personas across the organization? Because it would be amazing as a
buyer on my side to say, huh, I can get great infrastructure and the CISO is happy with it.
That's great. But I don't think that's commonly how the conversation goes.
Yeah. I mean, in the best case it does, but it's rare. There's often a lack of urgency selling security.
There's not a lot of pushback with the sort of thing that we do.
Everyone can agree.
Yeah, it would be better if I had end-to-end encryption everywhere.
So it's more of this softer sell of the problem isn't whether or not I think I should do it or a customer thinks they should do it.
It's whether it's worth doing now.
The ROI calculation can get tricky.
So it helps to have something that's more of a painkiller.
Often that's like bringing a security solution to a workflow or something
that's a required workflow that is important,
that also needs a certain base layer of security.
It can also be compliance related. That's pretty common or governance policy related.
Those things are often really annoying, you know, so automating things like access reviews is just really appealing to the people who have to do access reviews. So there are a few different paths, I think. But yeah, it can be tricky. Selling better security, broadly speaking, is an uphill battle.
What do you think is the hardest thing for you to sell to companies right now?
For Smallstep to succeed? Is the tool pretty specific, something everybody's really looking for?
Or do you think you also have that sort of similar persona that you're looking for,
like somebody willing to take on this as an infrastructure
problem, somebody willing and able to bring in more of an infra approach rather than piecemeal tools.
Do you view that as you're selling a platform and trying to align people on the best practices?
Or just like, hey, here's the tools, use however you want to, you know, and we're just kind of
replace some outdated stuff in your stack?
Both. We're definitely selling a platform and a framework for thinking about security from first principles, sort of fundamentally based on cryptographic identities that we believe is
simpler and better than the alternative of using, say, network-based controls that don't really map to the logical units of a system anymore.
The vision certainly is build trustworthy connections
between people and tools to let good thrive
and make businesses move faster.
But the entry point for an organization
is usually much, much smaller, right?
The entry point might be like, hey, I have a MySQL server and I need to turn on TLS.
And I know to do that, I need to meet certain operational requirements.
And it's sort of black magic.
I don't even know what an X.509 certificate is.
Help me.
And that might be an entry point.
Or like, I have on-prem Active Directory and I'm moving to Azure AD and I don't have Active Directory
certificate services anymore,
but I still have a requirement
to have certificates for my Wi-Fi. Help me, right?
So those entry points, again,
are like much smaller, more niche,
but then there's a huge expand play to like,
you know, this problem is pervasive
across your entire organization. Wouldn't it be nice to have sort of a fabric that covers all of that?
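To make the MySQL entry point Mike describes a bit more concrete: on the client side, "turning on TLS" against an internal CA mostly boils down to trusting that CA's root and letting the X.509 chain check do the rest. The Go sketch below is illustrative only, with placeholder file names and a hypothetical hostname; it is not Smallstep's API.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	// Load the internal CA root that signed the server's certificate.
	// "internal-root-ca.pem" and the hostname below are placeholders.
	rootPEM, err := os.ReadFile("internal-root-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		log.Fatal("failed to parse internal root CA")
	}

	// Dial the internal service; the handshake fails unless the server
	// presents an X.509 certificate that chains to our internal root
	// and matches the hostname.
	conn, err := tls.Dial("tcp", "db.internal.example:443", &tls.Config{
		RootCAs: roots,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	fmt.Println("server identity:", state.PeerCertificates[0].Subject.CommonName)
}
```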
Is this the typical approach for security infrastructure? Like, from my perspective, Smallstep, along with things like Tetrate and some others, falls into what I consider to be security infrastructure. It's like they are solving a problem for the
developer or for, you know, the ops person. They are solving a problem for a user, but we're also buying them because they check a lot of
really important initiatives for the security organization. And the trigger point might be
FedRAMP. What we're going to do is go through FedRAMP. FedRAMP has certain policies and things
you have to check. I guess to go back, my question really comes down to, it sounds like the entry
point for you is usually triggered by some larger initiative.
They run into problems and they come find you, like, oh, you've built the automation layer or you built this thing, and then you get an upsell opportunity.
The question is, is that a common go-to-market motion?
I think it's pretty common, even outside of security.
It's a land expand motion.
It's like the Slack motion, get one team on and then grow
to the entire company. So I think it would be great now to take a step back and ask some
fundamental questions. One question I must ask you before we move into the spicy futures, where
this is all going, what is zero trust and why should we care? And why is zero trust a great example of
security infrastructure? Zero trust is a stupid marketing buzzword at this point, but it does
mean a thing. So to me, it represents a shift in security philosophy mindset from, hey, I'm going
to build a fence around my stuff and everything inside of that fence can talk to everything else.
And it's sort of like a big bag of trusted stuff that can do whatever, to protecting each item, you know, each service, each person, sort of putting a micro-perimeter, if you want to use the old model, around every individual item and being a
lot more granular about controlling those interactions. Fundamentally, trust is this binary relationship,
like X trusts Y, right? And zero trust, to me, really means from a security perspective,
the organization doesn't trust the network. You assume that your networks have bad actors on them,
either because your perimeters aren't perfect and
intruders will get through them or because you have malicious insiders. Well, both really,
and any number of other issues, right? It's leaning into the assumption that your security
is not going to ever be perfect. So control your blast radius.
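As a rough picture of what "don't trust the network" looks like in practice, here is a minimal Go sketch of a service that authenticates every caller with a client certificate instead of trusting whatever can reach its port. The CA, certificate, and key file names are placeholders, and a real deployment would add authorization on top of the verified identity.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical paths: the internal CA plus this service's own cert and key.
	caPEM, err := os.ReadFile("internal-root-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// The "micro-perimeter": every caller must present a certificate
			// issued by our CA, regardless of what network it came from.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  clientCAs,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The verified peer identity is available to authorization logic.
			peer := r.TLS.PeerCertificates[0].Subject.CommonName
			w.Write([]byte("hello, " + peer + "\n"))
		}),
	}
	log.Fatal(server.ListenAndServeTLS("service.crt", "service.key"))
}
```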
All right, Mike, let's hope you love this question now. So we're going to jump into a spicy future.
What we're trying to do here is to really just talk about what you believe should be happening in the future.
And we're going to probably do near future, three to five years, and then the far future in 10 years.
Where do you hope the security landscape lands at?
Yeah, three to five years.
Like we have the technology now
and the know-how to truly move away
from these perimeter-based security mechanisms
and to really harden the entire
sort of production infrastructure environment.
The stuff that we're doing at Smallstep, like end-to-end encryption sort of between these logical components; it's stuff like Chainguard and Snyk are doing around identifying vulnerabilities
and hardening your software supply chain.
That stuff is really compelling and important.
And there are tools and technologies that are enabling that that now exist
and people need to read up on them and start using them, right?
So stuff like Sigstore and the ability to sign kernel modules in Linux
and verify signatures and scan containers and identify vulnerabilities.
And then hardware components like TPMs,
Trusted Platform Modules, that are now ubiquitous.
Like a lot of people probably don't even know
that like they're virtual TPMs on all of the major clouds now.
You can spin up a Nitro TPM and attach it to your AWS VM
and you can run attested code in there.
You can use it to get an attested identity
and enroll a VM using cryptographic identity
all the way down to the silicon
into Smallstep, for example,
and get an X.509 certificate that way.
That stuff in three to five years
needs to become ubiquitous.
So that's my three to five years.
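Mike glosses over the verification half of that enrollment flow. As a simplified illustration, and not the actual Smallstep or AWS interface, the relying party's job is essentially an X.509 chain check against a hardware or cloud root it already trusts; the file names below are placeholders.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// verifyDeviceCert checks that an identity certificate presented by a device
// or VM chains back to a root we trust (e.g. a TPM vendor or cloud
// attestation root). File names here are placeholders.
func verifyDeviceCert(certPath, rootPath string) (*x509.Certificate, error) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return nil, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return nil, err
	}

	rootPEM, err := os.ReadFile(rootPath)
	if err != nil {
		return nil, err
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		return nil, fmt.Errorf("no roots parsed from %s", rootPath)
	}

	// Verification fails unless the cert was issued under the trusted root,
	// which is what lets "identity down to the silicon" mean something.
	_, err = cert.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	})
	if err != nil {
		return nil, err
	}
	return cert, nil
}

func main() {
	cert, err := verifyDeviceCert("device-identity.pem", "hardware-vendor-root.pem")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("trusted device:", cert.Subject.CommonName)
}
```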
Should I move into the 10-year?
Actually, maybe we'll stop there
because I think that already has some nuggets.
We want to dive quicker
given we're all in front of technical people here.
One statement I've kind of been intrigued by is the use of TPMs or any sort of secure enclave. Like, yeah, it's already there, and we're seeing larger companies start to pick it up in some fashion. Let's say in three to five years everybody uses it by default; that definitely could happen.
What do you think the industry changes? Do we see companies and products go away, or certain things go away with it now? Like, what big change does that enable?
Yeah, a lot of security scanning tools, you don't need them anymore. You know, you're only running stuff that's trusted. You don't need the virus scans that everyone hates anyways.
How does this change the way that people buy cloud software?
If cloud software is based on an enclave, does that change some of the trust assertions
in the way that we think about what data we allow the cloud to see?
Like if the TPM is a device,
I get what you're saying on device,
but I'm thinking about just like usage and the enclave and building cloud software.
Now we have a place we can attest to and be like, oh, okay, that's a hyper-enclave.
How does that change the way we build stuff? I think the enclave question is
very interesting. You can say, oh, it should be secure, or at least I believe
it should be secure. It does a thing that's almost
impossible to do. It's a bottom turtle thing, right? Apple just made this change. So to give people a flavor of the bottom turtle solution here: managed device enrollment has forever used this protocol called SCEP, and basically it's a password flow. The way it works is an MDM, like Jamf or Intune, gives a laptop a password, and the laptop uses that password to get a certificate from the CA. And then that certificate is used to get further instruction from the MDM. How do you know you're giving the password to the right laptop? It's either some manual process, or you don't know and you're hoping. So Apple launched a new protocol. A dude named Brandon Weeks at Google actually developed the spec for this. Brandon's awesome. It's an extension to ACME, which is a certificate management protocol, called ACME Device Attestation. And it's now available in macOS, iOS, tvOS, iPadOS. The entire Apple ecosystem now supports ACME Device Attestation.
And instead of that password flow, what happens is you have an Apple Business Manager account.
Apple gives you a list of devices that your organization has purchased.
There's an API for that. And then any Apple device can attest its identity
using a private key that's bound in the secure enclave at manufacture time. It can prove I am
device with UDID 12345. And then you go and look at the list of devices that your organization owns
that you got from Apple that say like device 12345 is your device you bought on this day, and you bought it for Ian.
And it's a much, much stronger security mechanism. You're solving a real bottom turtle problem.
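To make the contrast with the SCEP password flow concrete, here is a toy Go sketch of the policy decision on the issuing side, assuming hypothetical serial numbers and inventory data. The hard part, the hardware attestation itself, has already established that the serial number is genuine; this only shows the inventory check that follows.

```go
package main

import "fmt"

// Device is a toy stand-in for an entry in the org's device inventory
// (e.g. what Apple Business Manager reports you purchased).
type Device struct {
	SerialNumber string
	AssignedTo   string
}

// shouldIssueCertificate captures the decision described above: the device
// has already proven, via hardware attestation, that it really is the device
// with this serial number; all that's left is checking our own inventory.
// This is illustrative logic, not the ACME Device Attestation protocol itself.
func shouldIssueCertificate(attestedSerial string, inventory map[string]Device) (Device, bool) {
	d, ok := inventory[attestedSerial]
	return d, ok
}

func main() {
	inventory := map[string]Device{
		"C02ABC123": {SerialNumber: "C02ABC123", AssignedTo: "ian"},
	}

	if d, ok := shouldIssueCertificate("C02ABC123", inventory); ok {
		fmt.Printf("issue certificate: device %s belongs to %s\n", d.SerialNumber, d.AssignedTo)
	} else {
		fmt.Println("refuse: device not in our inventory")
	}
}
```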
And it's hard to build good security on bad security. So a lot of better security ideas
are waved off because it's like, well, I mean, I'm enrolling my laptop using a password, so who cares?
Right. But now we've solved that. And TPMs, like they do a thing that was basically impossible to do before.
So you can use it to attest all sorts of things. You can use it to get like a real guarantee that your data is in a certain place or is encrypted or that a particular version of an application is being run remotely on
your behalf. It opens up a world of new and interesting things.
Yeah, it's really interesting, because before, we had a design for a world where we're actually not quite sure who we're talking to. The password is the only unique identifier we really have to say, well, this is a thing a person knows, and we don't know who's going to be along the way, and we certainly don't know what's sitting at that bottom layer. But now we can say, okay, there is a bottom layer here, and that's a bottom layer that establishes a trust point, a bottom turtle. And that enables all this new interesting stuff where we can say, well, I know who I'm trusting. And we couldn't necessarily say that before. We could with some network traffic, firewalls, TLS, but we couldn't
say what was going on in that one device.
What was going on, and I think enclaves,
like with being able to attest an enclave,
be like, okay, I am talking to an AWS Nitro enclave,
and that enclave helps me understand what it's running,
so I can trust that both the enclave is an enclave
and that the enclave is running code I trust.
Therefore, I know that sending data to this enclave,
yeah, I trust sending the data here, right?
And that fundamentally changes the security paradigm
of how we build cloud software.
It fundamentally changes the security paradigm
of how we think about trust in general.
It upends stuff.
So that's pretty incredible.
So yeah, I love that.
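The trust decision Ian describes reduces to something like the following hedged sketch: don't release data to the enclave unless its attested measurement matches the code you expect. The types, field names, and digests are placeholders, and real attestation documents are signed by the platform and must be verified against its root of trust first.

```go
package main

import (
	"crypto/subtle"
	"errors"
	"fmt"
)

// Attestation is a toy stand-in for the document a secure enclave (e.g. an
// AWS Nitro enclave) returns: a measurement of the code it is running.
type Attestation struct {
	CodeMeasurement []byte
}

// releaseData sends secrets to the enclave only if it is running exactly the
// code we expect, which is the trust decision being described.
func releaseData(att Attestation, expectedMeasurement []byte, secret []byte) error {
	if subtle.ConstantTimeCompare(att.CodeMeasurement, expectedMeasurement) != 1 {
		return errors.New("enclave is not running the code we trust")
	}
	fmt.Printf("sending %d bytes to attested enclave\n", len(secret))
	return nil
}

func main() {
	expected := []byte{0xab, 0xcd, 0xef} // placeholder for a real image digest
	att := Attestation{CodeMeasurement: []byte{0xab, 0xcd, 0xef}}

	if err := releaseData(att, expected, []byte("database encryption key")); err != nil {
		fmt.Println("refusing:", err)
	}
}
```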
Tim asked me to talk about what my three to five year
spicy future prediction is for security.
And I want to talk about it from the flip side for a second
and talk about the rise of these LLMs and where they fit in terms of the developer workflow
and where attestations and things fit as well. Five years from now,
it's very clear that LLMs and the developer workflow are not going to go away.
No one's going to give up that productivity improvement, especially not as the models get better.
That's insane. We're going to continue to use and generate amazing things.
The question is, how do you know where the code came from?
And how do you know that the code that is generated from said LLM is secure and valid?
And so three to five years from now, I don't believe we'll ever get away from some form of determinism in the way that we test our code.
And we need to know where that code came from.
And so the importance of supply chain security is very already important in general,
but it's going to be tied all the way back to the developer,
attested to the laptop, attested to the model.
Then there's going to be an attestation
of what's scanned and validated.
And that scan and validation will be the code
that's generated, which will live across
the software development lifecycle.
So it'll be in your CLI, in your IDE.
It's going to be inside your pull request.
It's going to be checked again at the time you deploy to your Kubernetes cluster. It's going to then be checked again inside your secure enclave. So you
have a full end-to-end attested, signed pathway of where this code came from, who wrote it,
how do we know what's correct across all the different binary compilation artifact creation
steps. And it's going to be deterministic and rules-based. One of the things that I believe is we'll never be able
to remove some level of indeterminism from the LLM. There's only so much context you can give
to it. So you'll always need a level of determinism to say, hey, if you take this patch that comes
from the LLM and patch this, is that still secure? So you'll always need a system on the other side
of that validating what's coming out there as you apply it and as you think about how that modifies your production. And it's the production context of how
code runs that will be the most important input to that generate validate loop.
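One hedged way to picture a single link of that attested pathway, using only Go's standard library: verify a signature over the generated patch before anything downstream consumes it. The key handling and format here are placeholders, not Sigstore or any other real supply chain tool.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"errors"
	"fmt"
)

// verifyProvenance is a toy version of one link in the attested pathway:
// before CI accepts a patch (LLM-generated or not), it checks a signature
// over the patch's digest from a key we already trust, e.g. one bound to the
// developer's machine.
func verifyProvenance(patch []byte, sig []byte, signer ed25519.PublicKey) error {
	digest := sha256.Sum256(patch)
	if !ed25519.Verify(signer, digest[:], sig) {
		return errors.New("patch is not signed by a trusted key; reject it")
	}
	return nil
}

func main() {
	// In reality the public key would come from an attested enrollment,
	// not be generated on the spot; this is just to make the sketch runnable.
	pub, priv, _ := ed25519.GenerateKey(nil)

	patch := []byte("diff --git a/main.go b/main.go ...")
	digest := sha256.Sum256(patch)
	sig := ed25519.Sign(priv, digest[:])

	if err := verifyProvenance(patch, sig, pub); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("provenance verified; hand off to scanners and policy checks")
}
```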
I love it. I can see it. I believe in it. I think some interesting thoughts on that. So
you're bringing up LLMs, which is like a trigger word for me, AI in general, and connects back to some thoughts I have on hardware.
One like hot take I guess I have on LLMs is like, I think they're surprising.
Like they surprised me.
I've been like I'd say an AI bear for years, you know, but the LLM technology is like really interesting and impressive.
Right.
I also don't think it's like on a path to AGI or anything like that in any short period of time.
Because fundamentally, it's still an NP problem.
It's NP hard.
And the improvements don't scale linearly with data set size.
They scale way sub-linearly. So like from a first principles perspective,
it seems unlikely that you're going to get like,
you sort of iterate to an order of magnitude better tool.
And we'll probably just see sort of incremental improvements
over many years, sort of the same way we've seen
with self-driving cars.
And the reason I believe that is like,
fundamentally, it's a hardware problem.
AI is a hardware problem.
Like, I don't believe the sort of computer we have today will ever be able to simulate a brain because the hardware is different.
I don't believe that for the same reason I don't believe a person would ever be able to do math arithmetic as quickly as a computer.
Our computers are deterministic, sequential, extremely rapid sort of sequential processors. And brains are completely different hardware, right?
They're parallel, non-deterministic machines.
They're like pattern matching machines.
So simulating one and the other only gets you so far.
Before we started, you mentioned that your long-term goal is to write a new operating system.
It's actually deeper than that.
Like, can we jump into like 10-year now?
Yeah.
Okay.
So if I'm king of the world, it's not just the operating system.
Like, yeah, like Linux sucks.
Every operating system, macOS sucks.
Windows sucks.
They all suck.
But it's deeper than that.
Like x86 sucks. The whole like von
Neumann machine sucks. Von Neumann was a genius, but like the von Neumann architecture, he developed
like over a summer. Like I doubt he would even say it's the thing that the entire world should
be running on top of now. And security problems run that deep.
A lot of our programming languages are x86 isomorphisms,
which are von Neumann isomorphisms.
So going back to what you were saying about this whole trust chain,
what if we didn't have to?
There are two ways to trust code.
You can trust who wrote it and who reviewed it or you
can trust the code directly because you know what it does but we can't do the latter right now
because the languages and the operating systems and the hardware don't allow for it and there's
deep and interesting computer science here right because like you can't do that for every piece of code either.
Turing proved that, right?
You can't prove termination, so you can't prove really anything, certain programs.
But you can for a huge subset of programs that are interesting and relevant and important.
And right now, we barely even try.
So that, to me, is an interesting 10-year idea.
How do we do more of that?
We should all use RISC-V and just fundamentally change in a big way?
No, it's worse than that.
RISC-V sucks too.
Interesting.
I feel like researchers and academics like to play this role.
What if we could fundamentally change things? But none of those papers actually end up going anywhere, because you need such a big force.
Yeah. Google or somebody completely taking over. And then you have to propagate down.
I mean, since you started with a 10-year future, I'm kind of alluding to the question, like, how will it happen? What change do you want to see? I guess to be more specific, how do you think that might happen? Is it Google taking on something else? Do we need another open source foundation? Is it the Linux Foundation? A startup? It doesn't seem like any of these communities is enough on its own, right?
Well, it's been tried many, many times. Honestly, I don't know that it does happen in 10 years.
I think it eventually happens, maybe in our lifetimes.
But it would take an enormous amount of investment.
But the competition, I think, has an enormous amount of investment.
I mean, one thing that's just true is that the best technology doesn't always win.
And that a lot of money can also fix pretty crappy technology.
I mean, look at Meta slash Facebook and PHP.
When they started using it, PHP,
I think even the people who were core PHP developers
would not have compared it favorably
to a lot of other programming languages.
But from what I've heard, it's pretty decent now
because Facebook has probably paid a billion dollars
to make it pretty decent.
So with a billion dollars, you can cover a lot of sins, you know.
And if you look at the status quo sort of ecosystem around all of the tech stack that I was just describing, like there's a lot of money there.
So disrupting that would take a lot.
To get started, you need a hook, right?
Every platform has its sort of like core loop and killer app or
killer feature, right? Going back to Facebook, like Facebook wouldn't be Facebook if they hadn't
done people tagging in photos, right? So we need like the people tagging in photos version of why
people should care about this stuff. And I think the question I ask myself is for whom.
This is tangential, but it is related, I promise.
I spent a lot of time in the real-time,
offline-first app space.
And the problem there is it's never enough of a value prop
to deal with all the UX consequences
of CRDT re-merging together.
And that world has gotten better as we've had more technology.
But the fundamental problem is how do you deal
with the merge conflict?
And what's the right merge to take?
Like who wins?
That's still fundamentally the problem
in that UX space.
But the place that's driving
a lot of that research
and driving a lot of new dollars,
like real revenue dollars
that you can create real businesses,
the people driving that
is the US military.
Because what happens in a world
where you have a connected soldier,
but then electronic interference,
electronic warfare kills all of your radio.
Well, you still need the soldier, the tank,
the airplane, the drone to fly.
I often think things like this,
you know, reinventing CPU architectures,
the way computers work,
it usually comes down to governments
and whoever would have such a huge leg up. You know, the US military talks about overmatch, and there's a lot of, you can talk
political pros and cons of this. It doesn't matter, but they're one of the only institutions
in the world where you could see fundamentally reinventing the computer would give them such a
huge, massive potential of mass security upgrade and such a fundamentally different security model
that would give them the ability to overmatch the cyber domain or something like that.
So I could see that actually happening.
Probably.
And they're operating on 100 plus year time horizons.
They take a long view of things.
And going back to my comment on Von Neumann, who he was working for when he developed the
Von Neumann architecture, it was the U.S. military. You know, it was part of the Manhattan Project. The cover story was that it was weather simulation, but that was BS; it was simulating atomic bombs. So yeah, I mean, it's capitalism, right? Like, you need the resources, and you need somebody with those resources to see that vision and decide to invest.
Simple as that.
So there is hope for your ultimate dream.
And this is basically your plan.
Hope is coming from the U.S. military.
I still want to maybe go back to TPM.
What do you think will need to happen for everybody to use TPMs?
We just assumed in an earlier segment that it will.
But right now, nobody does. So few folks actually use it in production. What do you think are the biggest frictions or enablers? Almost like the Docker moment, right? Where everybody just starts using containers. Is it just the tooling?
It's just the tooling issue. People, it's conceptually not that complicated.
The tools are getting there and then people will use them.
I mean, already people are starting to use them, and they probably don't even know.
You know, passkeys are effectively the same technology.
It's just a hardware-bound private key in a physical device.
So they make sense.
They're logical.
I think it's just a matter of the tooling catching up to where the hardware is and exposing people to the benefits and how they can
leverage them in their day-to-day.
One of the biggest barriers for security and privacy as a value prop is getting people to understand why it's valuable to them. Certainly, Signal's made a good use case there. I'm curious how you think about it from a developer perspective: how do you get them
to understand the value and what do we need to get them there? There are varying use cases here.
But the one that I'm obviously most familiar with is what we're doing at Smallstep. It doesn't take
much for people to start to understand the value. You explain what a TPM is, the notion of like a
private key that can't come off of this piece of hardware. And now you can use it to identify that
piece of hardware, you know, there are a bunch of toasters out in the world.
This lets you tell which toaster is yours, right?
From nothing.
You need no other security mechanisms.
It can be over the public internet.
That's powerful.
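As a rough sketch of why that is powerful: the core interaction is a challenge-response against a key that cannot leave the device. The Go below is illustrative only; real hardware-bound keys live behind TPM or Secure Enclave APIs and are usually RSA or ECC, and ed25519 is used here just to keep the example self-contained.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"
)

// This sketches the "which toaster is yours" check as a challenge-response:
// the verifier sends a random nonce, the device signs it with a private key
// that never leaves its hardware, and the verifier checks the signature
// against the public key it has on record for that device.
func main() {
	// Stand-in for manufacture time: the device's keypair is created and the
	// public half is recorded by the verifier.
	devicePub, devicePriv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Verifier side: issue a fresh nonce so old signatures can't be replayed.
	nonce := make([]byte, 32)
	if _, err := rand.Read(nonce); err != nil {
		log.Fatal(err)
	}

	// Device side: sign the nonce with the hardware-bound key.
	sig := ed25519.Sign(devicePriv, nonce)

	// Verifier side: a valid signature proves possession of the key, and
	// therefore identifies the physical device, even over the public internet.
	if ed25519.Verify(devicePub, nonce, sig) {
		fmt.Println("this is our toaster")
	} else {
		fmt.Println("unknown device")
	}
}
```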
And you start thinking about like why that's important, what that does.
Well, I mean, passwords suck, right?
We all know that.
Secrets suck.
API tokens suck.
And like people will sometimes counter argue that like,
oh no, an API token is really simple.
It's an intuitive thing.
It's intuitive, but it's anything but simple.
Like, it's deceptively complicated, right?
Like it's easy to generate one and to use it,
but it's effectively impossible to operationalize.
That's why we have all of these like Rube Goldberg machines
for secret management,
because they're basically impossible to keep secure.
They're a huge compliance pain.
The toil that goes into passwords is absurd.
We don't need to do that anymore.
We don't need passwords.
You don't need it.
We don't need it.
We have the technology.
Yeah.
Every developer that is
currently building a React app
would love to hear that. I don't need to
ever have a .env file of secrets ever again in my
life. I think every CISO
in the world would say the exact same
thing. It's just we don't have the infrastructure.
Someone's got to take us on that journey.
We're building it.
Small steps are going to take
big steps in this industry, I'm sure.
Cool, sir.
Well, hey, we got so many spicy takes on this podcast.
I think we can go on forever on this.
But it was really cool.
I think we got a huge amount of what we wanted to go for.
So thanks so much, sir.
I think we will definitely have you on again to do even more spicy security infra futures takes, if you're up for it.
Awesome. I'd love to.
Thanks so much, Mike.
And we always give our guests an opportunity
to promote themselves.
How can people find you?
And hear more about your desire to build the future operating system.
Smallstep.com.
Find me there.
There you go.
All right. Thank you so much.
Thanks, guys. It's been fun.
Bye.