The Changelog: Software Development, Open Source - From Tailnet to platform (Interview)
Episode Date: March 11, 2026
Adam talks with Tailscale co-founder and Chief Strategy Officer David Carney about where Tailscale is headed next: TSIDP, tsnet, multiple tailnets, and Aperture. They get into clickless auth (via TSIDP), tsnet apps, multiple tailnets for isolation and control, and Aperture, Tailscale's private AI gateway for API key management, observability, and agent security.
Transcript
What's up, friends? I'm off the grid this week on vacation with my family. Spring break is here.
I'm enjoying my life. And this week I have a show for you with the chief strategy officer from Tailscale. His name is David Carney.
We're talking about where Tailscale is heading: TSIDP, tsnet, acronyms all over the place, multiple tailnets, Aperture (their AI gateway), clickless auth, and so much more.
Big thank you to our friends and partners over at Fly for having our back.
They support the show.
They make it happen.
I'm so thankful for that.
Check them out, fly.io.
That is the home of changelog.com.
If you didn't know, learn more at fly.io.
Okay, let's do this.
Well, friends, I'm here with my good friend, Chris Kelly, over at Augment Code.
Chris, I'm a fan.
I use Augie on the daily.
It's one of my daily drivers.
Now, I use Claude Code.
I use Augment Augie.
And I also use Amp Code and others, but Augie, I keep going back.
to it. And here's where I'm at. I feel like not enough of our audience knows about Augment code,
not enough about Augie, the CLI. It's amazing. I love it. What can you share? Yeah, we often say
Augment is the best coding assistant you've never heard of. And that's both frustrating to someone that works there who is very proud of the work we've done, but also, like, inspiring.
Like, we want to go and sort of punch above our weight, because we aren't Anthropic and we aren't OpenAI. And so the quality of the product itself, you know, with our context engine,
once you do touch it, people are like just blown away by that. And so like that keeps me going
every day. So not to bury the lede here, but this is a paid spot. You are sponsoring this show to get this awareness. Now, at the same time, we're selective, and I love to use your tool. But there's a lot of competition in the world. So a lot of developers look at the space and they say, okay, well, how long can this work? How long is this sustainable? In the case of a Cursor or a Windsurf, you pick the name, and you think discounted tokens. Help me shape a lens for the audience.
I think it's a lot of awareness, right? Like, Cursor got a lot of publicity early on for, like, fast revenue growth, which was well deserved.
I think, you know, frankly, some of the media gets the story wrong, in that, like, if I gave you $1.50 for every dollar you sent me, I'd be the fastest growing startup in the valley. And so when you're selling discounted tokens, yes, of course you're going to go very
fast, but all that money plus more goes to the model providers. So I think the real story is
the story of Anthropic and, you know, being an API provider, I think the market has just
moved so fast and there's so many pieces of competition out there that it's just hard to get
noticed. So friends, I love augment code and I love using Augie. And I highly recommend
you use it. I love using Augie. I can hand Augie a well-defined spec, a well-defined PEP as I call them in my world, an agent flow, and it executes flawlessly. So the cool thing about Augie that I love most, really, is that context engine. I can hand it a task,
and it can just churn away on my well-defined plan and just never bother me and accomplish the mission.
It is so cool leveraging the latest models, the context engine, and all the fun things behind the scenes in that awesome CLI.
So yes, go try it out.
augmentcode.com.
Right in the top there is a CLI icon, a terminal icon.
Click that, install it, and change your world.
It's going to be awesome.
augmentcode.com.
Well, friends, we're here with David Carney, co-founder and Chief Strategy Officer of Tailscale. Friends, you know I'm a big fan of Tailscale.
So, David, welcome to the show.
Yeah, thank you. I'm glad to be here.
That's a big role.
I mean, that would shake me in my boots
if I was Chief Strategy Officer of Tailscale.
What a big platformer building and a lot of moving parts
and a lot of direction you can go.
It is a big title.
And I just want to be clear,
I'm certainly not the only one thinking about strategy at Tailscale.
There's a lot of moving parts.
But being the top of it all,
what does it take to lead strategy
inside an organization like Tailscale?
Well, I wouldn't even say I'm leading like the holistic strategy.
The stuff I'm working on is in close partnership with my co-founder Avery and a new VP of Product and other parts of the team. I'm focusing largely on, I guess, the strategy at the edge of Tailscale, which is something that's sort of come about in the past year or so.
So for the uninitiated, then, describe Tailscale from the non-edge, and then take me to the edge and what that means.
The simplest way to think about Tailscale.
And sometimes people ask me like, well, what are you building?
What do you do?
And so explaining it to like a layperson is very helpful.
And so first and foremost,
Tailscale makes it possible to connect any two devices anywhere in the world
with strong guarantees of the identity of the user
and the device at either end.
And if you can do that for any two devices,
you can do it for an arbitrary number.
So you can start adding like one device, one user, one server,
whatever at a time until you have basically what looks like a mesh network
and it's completely private.
Then you can layer on things like
access controls. So you can be like, oh, only these people should be able to access
these servers or devices. For instance, the engineers can only access production, the accountants can only access the finance servers, and so on and so forth. But fundamentally, what you can do with Tailscale is create private networks. And so when we launched the product, I guess we launched maybe six and a half years ago now, it was pitched as a VPN alternative
or zero trust kind of replacement. It does a lot more than what a VPN is, but at the heart of it, it's a connectivity platform that lets you build private networks that are fully secured, sort of using a mesh overlay pattern.
So that's the core of what Tailscale is, and it has been for a very long time.
And we've been continuing to build and build and build on top of that.
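To make the access-control idea concrete, here is a minimal sketch of a Tailscale ACL policy file in the HuJSON format Tailscale policies use; the group names, user emails, and tags are hypothetical, not from the conversation:

```jsonc
{
  // Named groups of users (identities come from your external IdP).
  "groups": {
    "group:engineering": ["alice@example.com", "bob@example.com"],
    "group:accounting":  ["carol@example.com"],
  },

  // Who may talk to what. Anything not explicitly accepted is denied.
  "acls": [
    // Engineers can reach production servers on any port.
    {"action": "accept", "src": ["group:engineering"], "dst": ["tag:prod:*"]},
    // Accountants can only reach the finance servers.
    {"action": "accept", "src": ["group:accounting"], "dst": ["tag:finance:*"]},
  ],
}
```

The default-deny behavior is what makes the "only these people should access these servers" pattern David describes fall out of a few lines of policy.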
There's this motion that's started in the past year or so where, I guess, the use cases for Tailscale have gone from just internal within a company. So it's like, I'm using it to replace my VPN, or I'm using it to access production, or I'm using it to deploy infrastructure within my company. So people are starting to use Tailscale to deploy infrastructure to their customers.
So for instance, there's a couple of cloud provider startups that are using us in a way where they bring up a tailnet, as we call it, per customer. And so they connect a bunch of, say, bare metal GPUs with cluster infrastructure, and they spin up one of these per customer, and
then they manage it. There's this ability to create multiple tailnets inside of your organization now.
So people are starting to do that to build say like a staging tailnet or a testing tailnet or a production
tailnet, that kind of stuff. What I've been working more on over the past, I guess, year now,
is building, I guess, applications and services on top of this platform. So showing people like,
oh, you can create a tailnet, it has these very interesting primitives where identity and connectivity
and security are baked in. Well, look how easy it is to build, like, applications on top of
these tailnets and deploy them within a bit of private infrastructure. And that has gotten me, I guess,
more involved in things like agentic workloads and that kind of stuff, where you want really tight boundaries on who can do what
with identity associated with all those actions and some kind of compliance trail and all that
kind of fun stuff.
This fringe, this edge, you call it the edge, I call it the fringe.
What are some of the things that you've thought of?
How did you go with either yourself or other folks from the team to sort of go into a room
and think, okay, what is the true edge?
What are the applications we can build on top of our own platform?
I'm assuming that's how you positioned this.
What are some of those things?
I know we mentioned, you know, authentication is one of them, obviously.
So you have your Aperture product you recently launched.
But what are some of the edges of that?
What products are you thinking about?
Well, the thing we started with earlier last year is that we revived a project, an internal project called TSIDP, which, like a lot of things that we've built at Tailscale, we don't do a great job of telling the world about. TSIDP was in our community projects repo. We'd done a bunch of work on it a year or two prior.
And for those of you who don't know what it is,
and it's probably most of your listeners.
That's me too.
Yeah, yeah.
So TSIDP, you can think of it as a reflection
of your identity provider inside of your tailnet.
So it's almost like a locally hosted version of your identity provider
that's private to your network.
The way it works is that it leverages the fact that every connection in Tailscale has your identity baked into it already. When you provision a tailnet, you basically have to say, like, oh, I'm going to authenticate with Azure or G Suite or Okta or whatever.
We don't have our own IDP.
We just hook into all the ones that are commonly used out there.
Well, once we can start generating keys based on like a handshake or an interaction with
your external IDP, every connection has got your identity baked into it.
And so if you're sitting inside of a tailnet, you know everything that is connecting to you.
And so you can actually build a small little application that just knows everything,
or it knows the identity of everybody.
And so with that, you can actually create effectively an OIDC provider.
So that's what TSIDP is.
You can think of it as, like, a locally hosted private OIDC or OAuth endpoint.
And that allows you to do all sorts of neat little things.
Like you can start plugging MCP clients and servers into it.
You can build little gateway patterns where, if you need to do token exchange or if you need to do dynamic client registration, you can basically do it with TSIDP. So you can keep all this interesting identity management stuff private to your tailnet and not expose it to an external IDP.
That's interesting.
I mean, we've talked about OIDC recently with Nicholas Zakas around npm's security. And that's one of the things that npm required, you know, of modern maintainers of this age: to essentially have one blessed way to publish to npm. And they had this issue with, like, rotating keys, and that secure layer was largely brought on by OIDC.
That was the first time I'd started to dip my toe in. I'm not an authentication nerd too deeply, besides that I like to authenticate with things.
I like to have an identity when I go around.
I like my SSH keys.
I like to be me where I need to be my me.
But I feel like I've been hitting rocks together by comparison, even though I've been a Tailscale user for so long. I feel like every day I learn something new about Tailscale. So help me understand what this enables. What kinds of applications does it mean?
So when I authenticate and I have a tailnet that gives me a mesh network across whatever devices I want to connect,
whether it's a home lab or enterprise or prod or staging like you mentioned,
what are some ways that enables a developer to not have to like shell out to somebody else's OIDC,
but to be their own within their own tailnet?
What does that do?
Yeah.
So also for instance,
have a home lab. I have a Proxmox server on it. Fantastic. And so when I first started using Proxmox, yeah, I set it up on Tailscale. I'd hit the internal IP on it so I could access it over my tailnet wherever I was in the world, but I'd still have to log in with my credentials. There is a way to set up authentication with Proxmox that uses TSIDP so that it basically just automatically authenticates you. Just by virtue of the fact that you're on the tailnet,
it knows who you are, right?
So I don't have to type my username and password into a modal and hit submit.
I basically just visit my Proxmox instance locally.
And I'm in.
I'm listening very closely now.
Yeah.
So Alex, yeah, he's published a lot of videos on all Tailscale tech. He actually has a video on this, and I think an update for it, too. After we brought TSIDP up to the MCP spec last year, he refreshed his video on how to set up TSIDP to work with Proxmox.
But yeah, it's super convenient like that.
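For the curious, wiring Proxmox VE to an OIDC provider is done by adding an OpenID realm. A configuration sketch along these lines should be close, though the realm name, issuer URL, and client credentials below are placeholders you'd swap for your own instance's values:

```shell
# Sketch: add an OpenID Connect realm on a Proxmox VE host (run as root).
# Issuer URL, client ID, and client key are placeholders, not real values.
pveum realm add tsidp --type openid \
  --issuer-url https://idp.your-tailnet.ts.net \
  --client-id proxmox \
  --client-key REPLACE_WITH_SECRET \
  --username-claim email \
  --autocreate 1
```

After that, the Proxmox login screen offers the new realm, and because the issuer is only reachable over the tailnet, the "already logged in" experience David describes follows from the tailnet connection itself.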
At Tailscale, we use it internally.
For instance, we have it set up so that our revenue team, if they need to access
Salesforce, they don't have to type a username and password.
They can just visit Salesforce. Salesforce is configured basically to jump through our local TSIDP instance instead of having to do an OAuth flow.
So this is like clickless login essentially.
Like this is just, I'm me.
And I can just go around and do my business inside my business,
or in my home lab, which is kind of like a mini playground for most people, and not have to even rely upon something like 1Password to put a password in.
You're just you and you just go there and you're just logged in.
How does that work?
Yeah.
I mean, you're already logged in.
I guess that's the thing with Tailscale.
Like, when you're on the tailnet, you've already done an OAuth flow.
Right.
And so why do it again is basically the question we asked.
Right.
So every connection can assert your identity.
So we just leverage that.
So obviously there's a bit of work you have to do, say, with Proxmox or whatever to set it up and point it at TSIDP. You've got to set up TSIDP inside of your network.
It's pretty straightforward.
We're working on making that easier too.
But once it's set up, it's just like this magical experience where people just forget about the fact that they need to log in anymore.
But under the covers or under the hood, it's all safe and secure.
It's just like you're doing the full OAuth flow. It's just silent.
So if I had a Proxmox server, or maybe a database server, or, you know, an Incus on top of, like, a Proxmox VE, just anything like that where I have a network of other machines, what is the process to support TSIDP?
Like is that well published?
Is that burgeoning?
How mature is that for a developer to pick that up today and just start implementing that in their infrastructure?
Yeah, well, it is, like the code is open source.
It's on our community projects page.
I think it's github.com/tailscale/tsidp now. It does support, I believe, OAuth 2.1 nowadays.
We brought it up to speed, basically to keep pace with the evolving MCP spec last year.
So we added a bunch of other stuff to it.
But people are using it.
You know, a lot of home labbers are already using it.
We've got a bunch of what we call Tailscale Insiders that are using it too.
Yeah.
You know, again, we just haven't talked about it enough,
so I don't think enough people are using it or even aware of it.
But it's there, and it's open for contributions, too,
if people want to add or extend or make more requests.
Is it something that is just inherent to using Tailscale, that it just comes along with using Tailscale and authenticating to it? Or, you know, the OIDC part of it, does it have to be, like, a hosted server, run locally, that you authenticate against?
Yeah, you need to run it as an instance.
I have it running in an LXC on my Proxmox server, for instance.
That just like starts on boot.
What's involved in that?
Like, is that running on Ubuntu?
Pretty common, pretty easy via systemd.
Can you walk me into some of the details potentially there?
I think I'm using Alpine for it. I'd have to double check.
But it's just a Go binary.
Gotcha.
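Since it's a single Go binary supervised by systemd in David's setup, a bare-bones unit file could look something like this; the binary path and the TS_AUTHKEY environment variable are assumptions about a typical install, not taken from the conversation:

```ini
[Unit]
Description=tsidp (OIDC provider for the tailnet)
After=network-online.target
Wants=network-online.target

[Service]
# An auth key lets the embedded tailnet node join unattended on first run.
Environment=TS_AUTHKEY=tskey-auth-REPLACE-ME
ExecStart=/usr/local/bin/tsidp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now tsidp` and it starts on boot, which matches the "just starts on boot" behavior described above.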
So, yeah, TSIDP is built on top of this other little piece of tech we have called tsnet. You can think of tsnet as a complete user-space Tailscale stack.
It's like a user space networking stack.
Again, it's a Go library.
So you can compile tsnet into existing Go applications, for instance. And we've done a bit of work on bindings for other languages, but the Go one is the most mature by far for this.
But what using tsnet lets you do is, any Go application where you've compiled it in,
you can have it show up so it looks just like a node on your network,
just like any like laptop or iPhone or whatever.
And so it shows up as a node,
gets its own IP address in the CGNAT range, which we're using internally.
You can apply ACLs to it, you know, policies, all that kind of stuff.
You can build applications with that.
That's what aperture is as well.
It's fundamentally a TSNET application, right?
So it just shows up as a service.
So the nice thing about tsnet is that, yeah, you can turn any kind of service into effectively what appears to be a device.
And then you can apply rules in terms of who can access that with what level of permissions and all that kind of stuff.
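To ground that, here is roughly what a minimal tsnet app looks like in Go, modeled on Tailscale's published tsnet examples. It needs a real tailnet and an auth key to actually run, and the hostname is arbitrary, so treat this as a sketch rather than something the episode walks through:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"tailscale.com/tsnet"
)

func main() {
	// The server joins your tailnet as a node named "hello",
	// using an auth key from the TS_AUTHKEY environment variable.
	s := &tsnet.Server{Hostname: "hello"}
	defer s.Close()

	// Listen on the tailnet, not the public internet.
	ln, err := s.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}

	// LocalClient lets the app ask "who is this connection?" and get
	// the tailnet identity baked into it, no login flow required.
	lc, err := s.LocalClient()
	if err != nil {
		log.Fatal(err)
	}

	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		who, err := lc.WhoIs(r.Context(), r.RemoteAddr)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "Hello, %s\n", who.UserProfile.LoginName)
	})))
}
```

Once it's up, the service shows up as a node on the tailnet with its own IP, which is exactly why ACLs and policies apply to it like any other device.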
I feel like you're the coolest, most under-known crazy tech.
Like every time I peek behind the scenes even further, you know, I've talked to Alex many times.
He and I are friends, Alex Kretzschmar.
Our audience knows Alex.
And obviously you do too.
But every time I go a little further and a little deeper into my networking nerd world, I just learn more about what I can do with Tailscale. And it's so wild. Like, really, when we're done with this, the first thing I'm going to do, it's a Wednesday, I'm hoping maybe by next week I will get this spun up, and I will not have to authenticate to my Proxmox.
What does a tool like Proxmox, or other services that one might run, either in their enterprise infrastructure or in their home lab, which is kind of a snapshot version or a playground for most folks, maybe not so much enterprise, but team-based, or just running infrastructure. Like, who isn't doing that these days? A lot of people are. What does it take for a Proxmox to support that? Is that something that Proxmox has to do? And, you know, what are some of the protocols required to support this?
In terms of just running, like, TSIDP as a service inside of your network?
Well, like, how does Proxmox support TSIDP?
Oh, it's just another OIDC endpoint.
Right.
So there's built-in support.
So long as it supports OIDC, then they're good.
That's the, that's the ticket in.
Yeah.
Or, yeah, and we've added, I mean, there's OAuth 2.1 support, or 2.0. I mean, we brought it up to almost OAuth 2.1 as the MCP spec was evolving last year. And then we paused it a little bit. But yeah, OIDC, OAuth 2, it should just work.
That's it.
Yeah, that seems pretty simple then.
I've never messed with this at all.
I'm going to mess with this.
It's a new world for me on that front there.
Yeah, please do it.
I'll give you feedback.
Good.
So this is an example of the edge.
So you got TSIDP.
As long as you support OIDC or OAuth 2 or 2.1,
then you can use this as an OIDC provider,
which essentially is the effectiveness of Sign in with Google or Sign in with GitHub. Is that right?
Is that what you're putting down?
Yeah.
Yeah.
One of the reasons we spent some time working on TSIDP last year is because a lot of the existing
IDPs, like the big ones out there, didn't support some of the things that MCP was calling for,
like dynamic client registration, for instance.
So we built that into TSIDP, so it effectively let us bolt on these missing capabilities.
So you can not only continue to use your existing external identity provider, but you can augment its capabilities with TSIDP inside of your private network.
Take me into that world, because there's a lot of folks who are down with MCP and not down with MCP. They say it's a fad. They say it's here to stay. I think it's all about how the context window gets impacted for the user. I think everything in this world is burgeoning, and, like, what was true yesterday is not tomorrow. I mean, you've been steeped in this for probably 12 months or more, building Aperture and this edge you're talking about. What's your stance on MCP from an implementation standpoint, and leveraging it?
I was getting deeper and deeper into MCP last summer and definitely into the fall,
going to a lot more conferences, talking with many more people,
and it seemed like things were just getting bigger and bigger and crazier and crazier all the time.
It seemed like iterations of the spec were coming out.
People were reinventing things.
It got to the point where I think a lot of companies or organizations were like, you know, we're just not going to implement this right now, because we're worried the spec is going to change again.
It's too fraught, yeah.
Yeah, and it got very fraught.
And we actually pulled back a little bit from it because I think in September I could definitely feel this fatigue just creeping in.
I was talking with more people and they sort of started pulling back from going to conferences related to it.
I think everybody was just getting tired of trying to keep up with the spec in the evolving landscape.
I think Simon Willison had this great quote, which is that a lot of people were just adopting MCP for lack of their own AI roadmap. Something along those lines. I forget the exact wording, but I'm paraphrasing. But I got that sense too.
A lot of people were sort of chasing this thing because they thought it was the right thing to chase for them to actually come up with an AI strategy.
And it just got more and more complex for them.
And I think a lot of people started pulling the chutes, pulling the cords, slowing down to regroup a little bit into the fall.
And that's sort of how we felt too.
It's just like the more we sort of dug into it, the crazier things seem to get.
So we're like, you know, we just need to take a step back and simplify our own thinking
and focus on like a couple of problems that we want to solve as opposed to trying to chase all of them,
which is what MCP felt like it was doing.
Now, in my experience with these kinds of standards, there's usually like this explosion
and then things coalesce into something a lot more sane. I think that is going to happen with MCP. It's just going to take more time than we thought.
Dig me further into the dynamic term that you mentioned. I get that you're down with MCP.
You said it was calling for this feature set that wasn't there,
and that's why you had to go that route.
Oh, dynamic client registration, DCR?
And I will admit, I don't know all the technical details of this. So you'll have to forgive me on it.
Well, don't go into the details necessarily about that, but what does it do? What does it enable for the MCP spec?
What does it enable for the MCP search?
Why is it in the protocol?
It basically lets things like MCP clients and servers
wake up and register themselves against like an endpoint
without requiring a lot of manual steps or human interaction.
So it removes a lot of friction in terms of getting things like MCP
deployed across an organization.
And you need to support that, and that's why you went that route.
Yeah, well, MCP was calling for it.
Right.
And so it was like, well, this is where the spec is going.
And there's other implementations. There's other parts of, like, OAuth 2.1 and even some experimental stuff and a few RFCs that people were referring to. Dynamic client registration was definitely one that a lot of people were talking about, and we were like, look, we can build this.
Like it's not exactly rocket science.
It would be hard for an existing identity provider to retrofit this in because being an identity provider is a horrendous amount of work.
Tailscale doesn't want to be an IDP.
We've known that from day one.
but we can extend the functionality of existing ones.
And DCR was pretty straightforward for us, largely because with Tailscale, you know the identity of things on the network.
So, you know, if you have this trust in these assertions you can make
in terms of like who is connecting to what,
then like client registration becomes a lot more straightforward that way.
Is it the point of dynamic client registration to enable the MCP server to spin up whenever you launch it or initiate it or instantiate it, to attach itself to the tailnet?
Is that the point of this dynamic client registration?
Or is it the individuals coming into the MCP, maybe through a CLI or other agentic workflows?
I mean, in my limited experience with this, again, it's been more about us just allowing... You know what, I don't have a great answer for you on that one.
That's okay.
That's okay.
I'm trying to pick it up with you because this is, you know, when you're in the land of burgeoning,
You kind of have to navigate some seaweed and some tall grass and you got to get your hatchet down.
You're like, you know what?
I don't really know where I'm at right now.
This is moving so fast.
What is the point of this need?
That's where I'm curious about because it seems like that might be the case.
Like, I see a lot of folks delivering a CLI and an MCP in one. You know, a lot of Go applications are doing this, or Go CLIs are doing this, where they'll deliver a CLI and an MCP server in one single thing. And you can launch it via the CLI, which is pretty easy. And I'd imagine you want that thing to authenticate with a tailnet. Like, if I'm spinning up a network, I want it to have an identity: be the MCP server for XYZ server or service, and that's who you are, and you've got ACLs and identity attached to you.
Yeah. And I mean, that's how we've been showing it off to people. It's like, oh, I'm going to create a server, I'm going to create a client, I'm just going to have it automatically join the network. A big part of Tailscale, again, is you want to give these things identity, you want to pass them around. You don't want people to have to make a manual or static configuration for this kind of thing.
If I'm on a tailnet, for instance, I should be able to spin up an MCP server.
I should be able to launch it.
I should be able to grant access to that to certain people.
I should be able to tell like my colleagues and maybe some agents that are in like a
tailnet somewhere like, hey, you've got this new resource available.
You can go access it safely and securely.
That's why dynamic client registration is like super important for us in general.
And I think it's why it's so important just to the MCP spec as well.
Take me back into TSIDP.
You mentioned not wanting to be an IDP.
These are acronyms, everybody, okay?
I think it's... what does IDP stand for? Identity provider?
Something.
What does that mean?
Oh, identity provider.
It's, I mean.
Identity provider.
Yeah, like G Suite, Azure, Okta.
There's a lot of them.
Like, Keycloak.
Okay.
You know, a third-party service that you trust to assert the identity of other things, right?
Right.
Or yourself to other services, broadly speaking.
So, you know, well, not everybody has a Gmail account, but everybody logs into services. Like, they enter your username and password, or, you know, log in with SSO, or log in with Facebook or whatever.
These are all authentication flows.
Something has to know
who you are
and be able to assert that to other things
that people trust.
That's basically the job of an IDP.
Here's where I'm going with this.
And maybe this is just hypothesizing this edge that you're navigating, you and your team, thinking through these things.
I'm thinking like if you don't want to be an IDP,
that makes sense.
But you want to enable folks to create their own IDP,
which is TSIDP.
It's this thing I can use to build my own essentially.
And I can sort of carry my own identity around with me. It seems like I could run my own instance in my home lab on my Proxmox server, but it could be me everywhere if I wanted it to be. That's what it seems like it could be. Or I can build my own thing on top of that. I feel like we're going into this world
where maybe the next layer of a lot of folks' stacks, whether it's internally in a home lab,
whether it's a small team building the next big thing, or maybe a medium-sized team that's already got motion in place. And now agentic is thrown into this world,
and they're bolting on and kind of rebuilding their platforms on top of essentially AI,
I feel like this world is moving towards self-hosted.
And the reason why I'm asking this question is like, is the idea for TSIDP to enable me
to self-host my own identity provider, so I don't have to log in with Google or log in with GitHub or log in with whomever? Because the thing I forget most is, like, which one did I log in with? And now I've given you my stuff,
and I've got to trust you as my SSO, my single sign-on provider. All these acronyms in this world, it's just ludicrous, basically.
But is that, am I picking up what you're putting down?
Is that the direction you're trying to take things?
Kind of.
So just to be clear, you still need an external identity provider when you're using Tailscale. And specifically if you use TSIDP.
Like, it leverages the fact that you've got an external thing that you trust.
Because that's what generates the identity.
And that's what we use with the encryption keys, so you can figure out, like, oh, this thing is connecting to me, I know who it is. And so however you're using Tailscale right now, when you create that tailnet, you're plugging it into whatever identity provider is out there that you currently use.
What I think is really magic about TSIDP is that it lets you not only manage identities sort of privately and internally,
so you can bring OIDC to all of your internal apps. You don't need to configure them to go external.
It lets you start thinking about your network
as more of an extension of your identity,
not just individual devices.
And so you can actually start treating like a tailnet
as a collection of identities or perhaps just one identity.
And so it lets you, yeah, I guess, have a pocket of identity that's privately your own, that you can start to manipulate and share things in the world with.
Now there's a lot of, I think there's a lot of interesting ideas
and maybe a philosophy we can get into on this kind of stuff.
And I'd love to explore it.
I know my co-founder, Avery, he's thought a lot about this kind of stuff, too.
It's very early on for us.
It's a very interesting and academic piece of tech.
This is, I guess, I can talk more about some of the journey last year, if you want.
But we started going to conferences and talking about TSIDP to people,
especially in the AI space, because we're like, oh, this is very fascinating.
From an MCP point of view, it facilitates ease of use inside of a tailnet.
It helps you keep things private.
Identity is a first-class thing.
TSIDP is a way of showing that off.
And people were very interested in it,
but then they kept on coming back to just more concrete problems
of like, oh, I need to get access to a customer network.
Or, I'm dealing with API keys. Much more tangible, you know, first-order problems.
Whereas TSIDP is like three or four steps down the line for them,
which is why we pivoted last year sort of a little bit away from it.
But I guess in terms of projects, it's still very active, there's a lot of active interest, and we use it internally, but the stuff I've worked on has shifted a little.
Big thanks to our friends at NordLayer for sponsoring this episode.
So you 2FA your GitHub org.
You rotate your API keys.
Maybe you run Dependabot on every repo, or something like it.
And then you onboard a contractor by sharing a VPN config over Slack and forget to revoke it four months after the contract ends.
And that person still has a tunnel into your internal systems, maybe even right now.
Go check it.
All this from a laptop you don't control on a network you can't see.
Well, Nordlayer is a network security platform built for businesses that actually operate the way modern teams do.
Distributed, remote first, and moving fast.
It combines VPN, access control, and threat protection into a single platform based on zero trust.
Only the right people get access to the right resources under the right conditions.
No implicit trust.
No access lists.
First, it deploys in minutes, not months.
NordLayer runs on NordLynx, their VPN protocol built on top of WireGuard, and works across every platform: macOS, Windows, Linux, iOS, Android. No hardware to rack, no complex configs. You get granular control over who accesses what, from where, on which device.
And when that contractor's engagement ends, well, you know what? Revoke the access you granted on day one from one dashboard, and it's done. Right then and there. Plans start at eight bucks a month. You get up to 22% off NordLayer right now on yearly plans, plus an extra 10% off with the coupon code changelog-10-nordlayer.
Try it risk-free with a 14-day money-back guarantee.
Check it out at Nordlayer.com slash the changelog.
Again, Nordlayer.com slash the changelog.
What are the things that are most active for you now then?
I think this might be one of them given apertures announcement.
And this is sort of the underpinnings of all that.
But like, what are some of the other things that are more active?
Like even with API keys, those are being thrown around everywhere, and Tailscale's kind of, to some degree, solved most of that.
And like you mentioned, it's hard to tell everyone about the cool stuff you have.
And now you have a chance.
Yeah, and thank you for that.
Yeah, so Aperture is definitely the evolution of a lot of that exploration last year.
And so for those of you who aren't aware,
Aperture is basically an AI gateway built on top of TSNet,
which I mentioned earlier,
that works inside of your tailnet.
And you can expose it.
Well, there are ways to expose it externally.
But essentially, it's a private AI gateway
that lets you consolidate all of your API keys inside of it.
And it just looks like a node on your tailnet like any other would.
After spending months and months going to various AI conferences
and showing off TSIDP and just talking with all sorts of people,
like engineers, CSOs, people in IT, what have you.
Time and time again, people were saying things like,
Oh, TSIDP, it's super fascinating.
Interesting that you guys are working on all this kind of stuff.
Like I see the merits of Tailscale. Multi-tailnet, which is this other thing we've been working on internally, is like super neat.
But what I'm really struggling with is just trying to figure out how to manage API keys
because they're all over the place.
I can't claw them back because it'll potentially like disrupt production or some engineering workflow.
You know, we're trying to go really fast as a business.
And, you know, it's just dangerous.
You've got API keys all over the place.
People trade them.
They get exfiltrated or they get checked in,
and they have very large, I guess, well, accounts or credit cards associated with them.
And so it's very hard in some cases to track usage because a lot of API calls are inherently anonymous.
We were sitting around at a company offsite back in November,
just talking about all the things we'd been learning over the past few months.
And we're like, well, wait a minute.
If we built a gateway and we used TSNet for that,
the gateway already knows exactly who you are, because everything that connects to it over Tailscale has identity built in. So if we put the API keys all inside of the gateway, then you wouldn't need to share an API
key with anybody inside of your company. You could just say, oh, if you have a coding agent,
just point the coding agent to use the gateway instead. The gateway knows who you are.
So then all the API accesses have identity associated with them. All the API keys can be
withdrawn. And so you end up with a single point of, I guess, observability, control, and access for your entire team. So engineers get onboarded faster, because they can just tell their peers, like, oh, just point your coding agent at the proxy. Everything just works if you're on Tailscale. Then the security team can say, like, great, we can get rid of all the API keys that are all over the place. We can just tell people to go to an HTTP endpoint. For us, it's http://ai.
And then every API call has your identity associated with it.
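The flow David is describing can be sketched in a few lines. This is an illustrative stub, not Aperture's actual code (Aperture is a TSNet app written in Go): the `whois` function here stands in for the identity lookup tsnet performs on every tailnet connection, and the addresses, user, and key store are all made up.

```python
# Illustrative sketch of an identity-aware AI gateway (NOT Aperture's real code).
# On a tailnet, every inbound connection already carries a user identity;
# the gateway holds the upstream API keys, so clients never see them.

UPSTREAM_KEYS = {"anthropic": "sk-ant-...redacted..."}  # hypothetical key store


def whois(remote_addr):
    """Stand-in for the WhoIs lookup tsnet provides: tailnet address -> user."""
    known_peers = {"100.64.0.5": "alice@example.com"}  # made-up mapping
    return known_peers.get(remote_addr)


def forward(remote_addr, provider, payload, audit_log):
    """Forward a request upstream, attaching the server-held key and an identity."""
    user = whois(remote_addr)
    if user is None:
        return {"status": 403, "error": "unknown peer"}
    # The client never held a key; the gateway attaches its own.
    headers = {"Authorization": "Bearer " + UPSTREAM_KEYS[provider]}
    # Every call is logged against a real person, not an anonymous key.
    audit_log.append({"user": user, "provider": provider, "payload": payload})
    return {"status": 200, "headers": headers}


log = []
ok = forward("100.64.0.5", "anthropic", {"prompt": "hi"}, log)
denied = forward("100.64.0.99", "anthropic", {"prompt": "hi"}, log)
```

The point is the inversion: clients authenticate just by being on the tailnet, the gateway holds the only copies of the upstream keys, and every call lands in an audit log attached to a real identity.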
So you can just log everything, right?
And so we use Aperture internally right now.
And I think we've got, I don't know how many tens of thousands of API calls
across a big part of the company at this point.
We've just got every interaction with all of our coding agents
going through Aperture right now internally.
So we have full visibility into how people are using AI,
like all the requests, all the responses, like the full logs.
We're mining it for tool calls. And our security team can go and review it if they need to.
And we're working with third party providers
to start doing analysis of the logs in real time and sort of after the fact.
There's like all this stuff that is unlocked for us.
But initially it just started with the simple idea of, we just want to solve the API key problem. I'm meandering a little bit, but one of the reasons I love it so much is that it builds on the fact that Tailscale makes things so much easier: it takes identity and encryption and brings them way down into the stack, right at layer three. And if you do that, it can simplify so much stuff on top of it. And Aperture is just an example of that. Anybody can take TSNet and build an application that looks like a node in your network, and you get identity and connectivity and security baked in for free. It's such a joy to work with, because
as a developer, for years I spent so much time, I had so many headaches building authentication systems, managing infrastructure, opening up firewall ports, and dealing with whitelisting IPs. And now with Tailscale, you just don't need to do that. And with applications you build with TSNet, you don't need to do that either. So it's just been a joy to work on this project over the past while.
As a home labber, I'm thinking about having it. Is Aperture available to anyone?
Is it a product?
Do you have to pay for it?
How do you deliver it to someone?
It's, I mean, it's an early alpha right now.
There is a self-serve flow.
We launched it just, well, we quietly launched it a couple of weeks ago.
It's not quiet now.
Yeah, yeah, yeah.
We did a little, we did more of a push just the other day on it.
Right.
You know, but if you go to aperture.com, you can sign up.
There's a bit of, like, we manage it as a wait list just because we don't want the service to get overloaded.
But the idea is that we're going to open it up, like, very quickly for everybody as soon as we're sure that things are just going to scale.
Just fine.
But basically, you know, we will provision an instance.
You authenticate it as a node into your network.
It just shows up.
You can start using it right away.
it's, you know, in line with what we already do for Tailscale.
Like it's, you know, free for home lab use.
Like, it's bundled.
We're going to be announcing how we're bundling it as part of the free plan,
just, you know, free for home use, that kind of thing, just like we do.
Obviously, you know, we're planning on it being a paid product for enterprise, you know,
but we're still exploring pricing and all that there.
But yeah.
Yeah, but I want, you know, I want every home lab, you know, anybody who's playing with LLMs and API keys and stuff at home,
they should just be using it.
It just makes things easier.
It's sort of like the Tailscale way.
Is this self-hosted then?
Or is it not self-hosted?
Because you said provision an instance.
Yeah.
So we are hosting these instances for customers right now.
There are plans and talk about self-hosted versions, and certainly enterprises,
like some of them would insist on that.
There's varying degrees of what that might mean.
Some customers might be like, oh, I want to bring my own cloud, you just write the logs there, but you can host the actual, you know, the stuff that's taking up the CPU. Or some customers might just want, no, we have to have everything on-prem.
But right now, we wanted to get aperture into as many hands as possible, as quickly as possible.
And the easiest way to do that, and I think one of the safest ways, frankly,
is just to let us host the instances at this point.
The reason I asked that question is, one, I said earlier that I feel like the world is moving to self-hosted.
As a home labber, I already felt that way years and years ago, but I feel that way more and more now, because when the cost to produce an application that's bespoke to me goes to near zero, except for my ability to specify it and my ability to describe intent,
then everyone theoretically can be a builder in this new future we're going towards.
And why not self-host a lot of things that I'm going to do?
Because I'm already a home labber that makes a lot of sense.
But I imagine a lot of teams, a lot of tech companies, a lot of non-tech companies that are now tech companies, they think the same thing. They want sovereignty over their things. They want to control their CPU costs. They want to trust the cloud less and, you know,
still leverage cloud native type things, but in their own controlled way, especially when it comes
to identity and especially when it comes to all tool calls and all responses, et cetera.
I mean, because as an individual AI user, a team of one, basically, when it comes to like the
things I'm building, one of the things that I have anxiety about is, or just, I suppose,
not anxiety, but like, I just wish there was a record.
It sounds like with aperture, I can gateway my way into all my AI and have my prompts
and the responses stored there versus the compacting that happens and goes away.
And you even have, you know, in Claude Code, for example, you have an export where you can export
the conversation, basically.
It's like, let me snapshot what we've talked about.
So worst case, I can walk away with the context of the conversation, maybe not the context of everything underneath that we described and, you know, some of the world we explored.
I feel like that to me, like I would want personally, maybe this is direct feedback, but like as a
homelabber, I would want to self-host that, especially because of how secure or exposed I might
be, you know, with those. It's not like self-hosting your email; I think today you definitely don't do that. But in this case, it's such sensitive, or could be such sensitive, information,
I personally would prefer to self-host it.
And I'm curious why,
given that you largely haven't done a lot of infrastructure
in the history of Tailscale,
like you've pretty much been a pointer in a lot of cases
and not a lot of infrastructure required to build what you've built,
why now?
Why build out instances and hosting and, I guess, responsibility?
Yeah, or liability.
Liability too, all the abilities, you know?
Yeah.
No, it's a very good question.
Accountability, responsibility, liability, all those things.
Yeah, no, no, it's a very, very good question.
And I agree with you.
I think a lot of people will want and will expect to self-host it.
And we do want to provide a path for that.
Yeah.
It's just, yeah.
So early.
I get that.
Yeah.
It's early.
It's early.
And it's, you know, this little team of mine right now that built Aperture over the past few months,
it's cliche, but we're operating very much like a startup.
And it's like,
well, we just need to choose one path and go really fast on that path.
And it's like, what's the quickest way we can get feedback from customers,
get people, you know, get people experimenting with it,
getting to the point where we actually have, like, traction.
Like, you know, do we have product-market fit? That is the fundamental question.
Once we have more product-market fit, then we can start branching out and figuring out where that takes us.
And frankly, internally, I also wanted to experiment, like, well, you know,
I think there is a world, to your point,
I think data sovereignty and data governance
is extremely important to people.
But so is speed.
And I think there is a lot to be said for motions
where it's like, oh, Tailscale can help you provision instances.
Like maybe we hook into other cloud providers
or maybe we get to the point where, I don't know,
like people could start sharing their infrastructure
and compute with other people.
And we can make it possible for people to, like, I don't know,
create their own cloud providers or some kind of like,
democratic cloud system using Tailscale infrastructure.
But I wanted to experiment with the idea of, well, what would it take, or how much would customers resist? Like, I just want to launch an instance into my network, right? Just make it easy for me. Just give me one click, drop a node in, right? Because there are still guarantees: we can't just simply start injecting nodes into customers' networks. People have to authenticate them in. So there are a lot of guarantees that we already have in terms of our ability, or the lack thereof, to take over customers' networks.
And I think a lot of customers just want the speed and convenience just to try something out.
So we made a decision.
It's working out really well so far.
I've been actually surprised with how many people have just been willing to say, like, yes, spin it up, add it to my network.
At some point, we might want to have a conversation about self-hosting, but for the purposes of experimenting with it now,
like, this is totally fine because we trust you guys.
But, yeah, it's early days for us, too.
You know, there's something else.
This thread I'm pulling on, too, with this self-hosted world is, you know, especially given your history. And just go with me here and think out loud if you don't mind. Put all your cards on the table, David. I've been thinking about this
world of self-hosting. We just had a really good conversation with the founder of tldraw. And one of the things that tldraw basically does is sell an SDK. So it's not even, you know, full-blown software.
It's the SDK to build on top of, you know, it's the concentrate. You just add water essentially.
which is one kind of fascinating idea.
Great business, a lot of upside.
No infrastructure.
You know, high margins because there's no servers to spin up.
There's no attachment to AWS or GCP or whichever cloud you pick, where you're now competing with them at some point
because you've got software that you're putting out there in the world as licensed software,
whether it's truly, you know, MIT-licensed open source or it's source-available code, not to blend the terms there, just to be clear about that. But you got this world where you
have, I think we'll have a lot more people wanting to self-host, and those people can still
be customers. Have you all explored or have you even thought about yet if you went that route,
how you can license it and still sell licenses of it, where you can provide essentially a source
available version to the open source world that, hey, if you're in a home lab environment,
like you do now, totally free.
This license applies to you.
But if you're in production or you're in an enterprise or you're in this kind of business class,
the source is available.
Contributions are not necessarily what you're trying to achieve,
but you do want to have your code out there so people can see it.
But then you're also saying here's a license to you or here's a license per node.
You want to have an aperture instance in your stack.
You want to have multiple.
I don't see why you'd need multiple, maybe just one is fine, but you got maybe a per-user license or a per-node license.
You're telling me. I don't know how it actually permeates into the network, but have you explored that world yet? Are you planning to? Yeah, well, we're planning to. We've definitely
started talking about it. Even some of these bigger ones. We've had a lot of enterprises show up that are very interested in Aperture, and they've been bringing these things up with us as well.
I think there's a world of possibilities for us there. I'm a big fan of open source in general,
and I'm a big fan of self-hosting things.
I like to play around with it.
I like to tinker with it.
I like to feel like I'm in control.
I like to be able to just turn it off when I want to.
You got a home lab, David, of course.
I mean, that's where we bake all of our ideas first.
We take them, we do them in the home lab,
we figure out how it works,
we nerd out, we learn, we explore,
and then we take it to wherever we're going.
That's kind of cool stuff.
Yeah, exactly.
And, yeah, I think for me, the bigger thing that just keeps me going as a founder is, like, I want everybody in the world to be using Tailscale, because it's just a better way to do networking.
And so anything that's going to encourage that, I want to pursue it.
And I think, obviously, having a self-hosted version of Aperture is one way that will enable that.
Getting more people just to build TSNet applications is another way to do it.
So everything's on the table right now.
For the purposes of, I guess, just speed and getting feedback and figuring out where we need to go next,
yeah.
This model that we have right now, I think, is the best one, because frankly, we're making updates and rolling out features so quickly that trying to manage those with self-hosted installations, and worrying about database upgrades and all the security and everything behind them... We sort of did a bit of the math. We're like, we need to choose one path, keep it super simple, and just manage things aggressively.
So it makes sense in this case, then, to go cloud-first, not because you're against self-hosting, but because you need to maintain velocity.
Yeah.
And the only way you can really maintain reliable velocity in this case is to have your own instances.
But in the future, if someone says, hey, you know what, I love this, I'm down for it, except I want to self-host.
Then when the product has proven itself, that makes more sense later on.
That makes a lot of sense.
Yeah.
Because I mean...
That's where I definitely, that's where I'd like to go with it, for sure.
It's easy to say that.
It's easy for me to say, hey, David, just, you know, make this a self-hosted instance, right? I mean, it's easy just to say those words, but then to actually support it is the challenge, because then you have versions out there. And I suppose you do have database migrations. But, I mean, if you know which version you're on, you can pin to a version, and if you do a good job with engineering and database migrations, then that can largely be collapsed to almost no pain. I get that it can be painful. And then you also have containers. Docker is certainly one of the things you can easily do that with. They've rolled out hardened images. I'm sure they would easily roll you into an Aperture hardened image that you can just, boom, it's there, you're using the latest version of it, kind of thing.
So there's a lot of things you can do to simplify it from a hosted standpoint or a self-hosting standpoint.
That doesn't have to be.
Here's my code.
It is a binary too, right? You said it's a Go binary. You're largely a Go shop. So, I mean, Go binary plus systemd or Go binary plus container are pretty easy worlds to navigate for the most part.
Yeah.
Yeah, it is designed to be portable like that.
Yeah.
I mean, I'm excited about all the stuff you're mentioning.
It's definitely stuff we've talked about internally as well.
The moment you offer it, I mean, I'm going to try it no matter what. But, I mean, I'm a self-hoster.
And in fact, a little side note here, I want Alex to bring back Self-Hosted so bad, because I feel like now is the time to bring back the podcast Self-Hosted.
I know he's got a big job to do for you all there.
And maybe he's just too busy.
Can't think about it.
But I like the idea of it because there's just so much happening there.
Yeah.
So Aperture is built on TSNet. You mentioned TSNet applications.
What are some other things I can go build on TSNet?
If I'm a builder, what ideas can you give this world to say,
you know what?
We can't do it all.
Here's some ideas.
Do you want me to build?
Do you want people like me to build on TSNet?
Or do you want to be the only application builder?
Oh, no, no, no, no.
I mean, yeah, the bigger arc here is like, I think,
Tailscale can and should become a platform.
I think every platform needs a few killer apps.
Yeah. And it needs to sort of lead the way. It needs to demonstrate to people, like, look, this is viable. This is why you should spend your time on this kind of thing, because there's an ecosystem that you can plug into. And Aperture is definitely one of those projects for us. Now, if somebody else goes and takes an existing AI gateway and retrofits it on top of Tailscale, that's a win-win, as far as I'm concerned. More people are using Tailscale, customers have better security, and more people are just
familiar with TSNet, which is fantastic.
When you start thinking about
Tailscale in terms of, like, oh, I can create
a private network and I can add things
to it and then I can start controlling access to it.
And it's just like it's a very abstract,
I guess, model.
There's a lot of different kinds of opportunities
and problems that it can help to solve.
And I will say, there is something that we've started talking about more in the past 12 months, something called multi-tailnet.
We have a blog post or two about that,
which is the ability for you to create
multiple independent tailnets inside of your org or your team or your home.
And there are API-only ones right now, which are basically for machine-to-machine use cases.
Like there's not a strong notion of identity, like a user identity in that.
And then there are ones that are, I guess, more directly tied to the user identities.
So we've got some customers that have like a staging tailnet and a testing tailnet.
And then like user identity is a first class thing.
I believe those, I guess those tracks of development will converge at some point.
But there's already some really incredible stuff you can do with multi-tailnets, for instance.
And that's one of the reasons I think things like Aperture and TSNet are so fascinating, in that I could create a separate tailnet and make sure that nothing can escape from it. It can go off and do its thing. I can run coding agents in, like, YOLO mode. I can have them connect to MCP servers. And I have permissions that can fly around inside of the network, and it's all nice and contained. And you can use Tailscale for that kind of thing.
And those are the kinds of areas where I'm really excited for people to start experimenting with and bringing up ideas of just like, oh, like, if Tailscale did this for me or if it let me export this kind of thing or if I could transport this kind of policy or if I could connect to these different things that I could achieve X.
And I want to hear more of those ideas so that we can start building more stuff and more features at the edge so that people can start using Tailscale more as a platform.
Yeah, I'd love to explore the idea, not so much in this podcast, but something that's on my plate is network isolation.
like spinning up an instance, let's just say in Incus. Are you familiar with Incus, by any chance?
Did I say that earlier?
Yeah, I'm not.
Sorry.
Canonical used to have LXD. So, you know, you've got LXC, and I don't even use those.
and I don't even use those.
I'm not super versed.
This is where I'm exploring and learning more about.
So I'm sure my audience will be like,
Adam, get yourself up to speed here.
Okay.
I'll attempt to follow this here.
Incus is like system-level containers. So versus Docker, where it's a protocol, it runs on systemd.
It spins up instances, and it is a Go binary.
It's built on Go.
The person who invented it for Canonical, Ubuntu's parent company, when it was called LXD... I think they had a license snafu. I don't know what happened there, but something happened licensing-wise that made Canonical change the direction of it and move it away from, I believe, linuxcontainers.org.
If I can recall, let me just double-check my notes here, so I don't jack that up. Yeah, linuxcontainers.org, which if you go there, you will see Incus, you'll see IncusOS, you'll see LXC, which is something you use in Proxmox, and Distrobuilder, which can do some cool stuff, like building distributions.
But Incus, let me actually go to their homepage, so I don't jack up what they say. Incus is a next-generation system container, application container, and virtual machine manager.
So much like a VM, where you want to actually have control of the kernel, or a container, where you're borrowing the OS-level, the host-level kernel, whereas with a VM you want to have your own kernel. So it can do both of those in easy exchange. And unlike Proxmox, which is very much lacking from a CLI standpoint. They don't really have a good CLI in the Proxmox world. So if you want to spin up a new VM or a new LXC, you've got to do a bunch of clicking inside the web UI. Painful. Whereas Incus has got a really great CLI.
You can script a lot of this stuff.
And a container and a VM, from a scripting, API, CLI standpoint, are the same. Like, the same kind of commands, only slightly different whenever you spin them up.
Why am I telling you this?
Okay.
The reason I'm telling you this is I want to spin up different instances in Incus, a different container or VM.
And I want them to be in an isolated state.
I want that particular container to know nothing about anything else around it.
As far as it's concerned, it's in a black space.
Like it's literally, there's no star around it.
There's no asteroid.
There's no planet.
There's no moon.
Like, it's just in a sea of abyss.
And all it can do is internet traffic out, pull down updates, send traffic back.
But to its peers, there is no peer.
You could do that with typical networking, but it sounds like with a multi-tailnet or some tailnet foo that I just don't have yet, that I'm still learning about, I might be able to jail one of those containers or VMs in ways I just wasn't able to do before.
So that's why I'm camping out.
It's like this world of like network isolation,
put that new instance,
that new VM or new container into network jail essentially.
Basically, yeah.
I mean, a lot of the stuff you can do with multi-tailnets, you can do with just modifying the policy file in a single tailnet.
You know, if your rules are aggressive enough, for instance.
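As an illustration, here's roughly what the "network jail" idea looks like in a single tailnet's policy file. This is a hedged sketch, not a recommended production policy: the tag name is invented, and it relies on Tailscale ACLs being default-deny, so a tagged node that appears in no rule can't reach, or be reached by, anything else on the tailnet, while its ordinary internet egress never touches the tailnet at all.

```jsonc
{
  // Hypothetical tag for jailed workloads (the name is made up).
  "tagOwners": {
    "tag:jail": ["autogroup:admin"]
  },
  "acls": [
    // Members can talk to members. tag:jail appears in no rule,
    // and Tailscale ACLs are default-deny, so jailed nodes can
    // neither reach nor be reached by anything on the tailnet.
    { "action": "accept", "src": ["autogroup:member"], "dst": ["autogroup:member:*"] }
  ]
}
```

Multi-tailnet gets you the same isolation without having to trust that nobody ever adds a stray wildcard to this file.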
Yeah.
But there are definitely a lot of people that were like, no, I don't want to mess around with a complex policy file.
It's too risky.
Or I'm dealing with multiple customers and my customers demand, you know, complete guarantees
on isolation between them.
Like, I don't want to be managing something where I accidentally add a star somewhere
that all the customers can see each other, for instance.
And so, yeah, multi-tailnet definitely facilitates that significantly.
A lot of it's just peace of mind.
Like, oh, no, I have this particular tailnet.
It's used exclusively for this.
It's not like it can move laterally to a different tailnet.
Like they're completely isolated from each other.
Right.
So it gives a lot of people just peace of mind.
And frankly, it's like a divide and conquer kind of problem.
It's like, well, why have one big, complex tailnet when I can have two simple ones?
Or I have my own that I've authenticated with because you said you have to have an OIDC provider.
I've got to authenticate with Tailscale with one of those providers.
Could be Google, could be GitHub, could be whomever I choose.
And so that establishes my tailnet.
And then beneath me, building on Tailscale,
I want to be able to have a whole separate tailnet
that is not, I guess, even tied to mine,
but something I can talk to, but it's isolated.
That's where I think you provide some networking
that is just like dark arts, essentially.
Yeah, well, with the API-only tailnets, basically, when you create one, you get an OAuth client back.
And then with that OAuth client,
you can do things like add nodes,
create auth keys, provision stuff within that particular tailnet.
So it is still tied to what you might call the primary tailnet.
Like it is associated with that.
But, you know, for all intents and purposes, it's a separate network.
Right.
Well, for me to spin up another sub-tailnet, I have to have a tailnet, which means I have to authenticate to Tailscale, right? So I have to have a tailnet to begin with to spawn sub-tailnets that are basically, you know, they're just blue oceans. They're by themselves. They don't have a clue who else is around them. But I have to have a Tailscale account and a tailnet to create sub-tailnets.
Yes, exactly.
Yeah.
Yeah.
Well, friends, this episode is brought to
by our friends at Squarespace,
the all-in-one website platform
for building your online presence
and running your business.
Here's something I've learned over the years.
The best developers out there know when to build
and when to buy.
And you could hand-roll a booking system,
wire up stripe,
build an invoicing workflow, stitch together email marketing, and honestly, you'd probably do a great
job doing that. But that's weeks of your life spent on infrastructure that isn't your actual thing.
Sometimes the smartest move is choosing the right tool so you can focus on the work that actually
matters to you. That's Squarespace. It's the all-in-one platform that handles the business side
of whatever you're building for yourself, for someone, for a friend, whomever. And two things,
I think, are worth noting. First, offering services. If you're doing,
consulting, workshops, freelance dev, mentoring, content.
Squarespace brings scheduling, invoicing, online payments, and email marketing together in one single place, one dashboard.
You list your offerings, clients book and pay,
and you skip the part where you are playing accounts receivable in your DMs,
and it's a real business.
It's a business workflow, not a pile of SaaS subscriptions that are kind of duct taped together.
Second, selling content.
If you've got your course ideas, your video tutorial ideas,
or deep technical content,
you've been meaning to package up.
Squarespace lets you gate it behind a paywall.
One-time fee or subscription, it's your call,
recurring revenue from expertise you've already built.
That's a good trade.
And the point isn't that you can't build this stuff,
it's: should you?
Maybe not.
Okay, head to squarespace.com slash changelog for a free trial,
and when you're ready to launch,
use our offer code,
changelog, to save 10% off your
first purchase of a website or a domain.
That's squarespace.com slash changelog.
And when you use our link, of course, you're supporting the show, and we love that.
Once again, squarespace.com slash changelog.
We got there by talking about different applications you think can be built on top of TSNet.
And I think where we may have dropped off at, or not given a good trail for, and I know the listeners
may be potentially upset if we didn't do this yet, is like, can you give me a couple ideas?
Like, if you were
in the hallway of a conference right now
and you were just talking to some folks,
you know what?
Here's what you can go and build on your tailnet, a TSNet application.
Here's a place you can go play.
If an enterprise came to you and they're like,
you know,
we're just leveraging our tail scale,
our tail net in these ways,
what are some problems they're coming to you with that you're like,
we're not going to get to that for a while?
These are things you should build.
What problems are some of your largest customers
just,
you know,
on your case about?
What's the loudest cry in the woods of,
this is what I want to build on top of my TSNet?
One of the bigger issues I've seen coming up more recently, especially with larger companies,
is they have a traditional network, a traditional VPN.
It's like one monolithic thing.
And they're trying to bring up MCP servers
and they're trying to bring up MCP clients.
And they've got these notions of,
like they basically are thinking like,
oh, I've got now agents
that are trying to operate inside of the network the same way
humans used to,
right, except
well, I don't need to go on about the dangers
of sort of letting an agent run amok
on an internal corporate network.
But it's becoming, yeah,
it's becoming obviously more and more
of an issue for, like, especially bigger organizations
that have, like, traditionally dealt with security
from, like, a very centralized, monolithic,
I guess, perspective.
So I think there's definitely been this push of just like, okay, well, how do we start,
I guess, segmenting or isolating or subdividing our network?
How do we start decentralizing it more so that we can enable these different teams and
pockets of the company to work more independently with each other or within the, like,
sorry, to work independently to solve like particular domain problems while still,
I guess, enabling the velocity and freedom of access to like required information.
So I think Tailscale is actually a way that companies can do that.
I think there's just lots of different approaches, but Tailscale definitely makes it easy.
It's just like, okay, well, instead of, like, one gigantic monolithic network,
maybe you start building, like, one tailnet per workload, for instance.
And we've seen this kind of thing with people who are playing around with Kubernetes.
Right.
And they have, like, custom internal tooling, like they need some kind of internal monitoring system.
It's like, oh, no, when we bring up a cluster, we've got these applications that need to sit inside of it.
It has to sort of moderate or govern certain kinds of
networking access, like we do a bunch of log analysis. Maybe we do a bunch of real-time stuff.
Like, how do we insert these applications that we've built into these networks, you know,
in an isolated way? And so I think there's a lot of internal applications inside of companies
that need to stay internal and private. Tailscale's good for that kind of thing. You can retrofit
them inside of TSNet. And then you can encapsulate sort of entire workloads inside of tailnets
themselves. And it helps people reason much more easily about safety and security
that way.
Where does someone get more details about multi-nets then?
Because this is...
A multi-tail?
Yeah.
I mean, there's been a couple of...
Multi-tailnet, yes.
Let me be on brand with that.
I like that.
Yeah.
Multi-tailnet.
Yeah.
I just shortened it to multi-net because everything else is something-net.
TSNet, so naturally, multi-net.
I was just trying to follow your lead there, David.
No, no, it's good.
And in fact, I'm sure our product team will watch this video and maybe think more about the name.
We've been debating it internally of how to call these, like, what to call these things.
There have been
a couple of blog posts even over the past year.
So if you were to Google for, like,
Tailscale multi-tailnet, you would find one.
It is, I believe, in
beta right now.
I'd have to double check that.
But it is accessible.
Yeah. To home lab users?
Yep. Sweet. Okay.
While you're doing that lookup here, I'll fill some air with this.
I think for the brainy team or marketing team, whoever's
listening, our listeners as well, I think the better
name actually is multi-tailnet, because you already say tailnet. It would not make sense to
shorten that to multi-net. I messed up. And so I'm putting my branding head on, thinking about how
I would actually frame it in my own mind if it were my product. And I would call it a multi-tailnet
because that's truly what it is. That's what makes the most sense to me as well. So if that's
what you're thinking, good job. Yeah, I just double-checked. Yeah. So our marketing team did a great job
with this fall update we had back in late October
and there's a blog post called one organization,
multiple tailnets. It talks about this
over a page or two and then just how things are
at least at the time of this blog post in an alpha program.
But we've been steadily working on the features
related to multi-tailnets over that time.
I'm super interested in this. I'd love to see where this goes
because I think there's a lot of opportunity
with subnets, multi-tailnets,
you know, this problem that your, you know, your customers are bringing to you.
I can concur with that, because there's things that I'm doing inside my network that
essentially are on my flat VLAN. You know, I got my traditional network that's across,
you know, my 192 space. And then I got my tailnet, of course. And there's things I want to
isolate that I'm just not, you know, and I guess I'm just crossing my fingers because I'm moving
at a velocity that doesn't let me slow down enough to be secure. And that's the job of
Tailscale, I feel. That's where,
you know, that's why I use Tailscale. That's why
I keep, you know, investing more and more of my knowledge
into what you are doing, because you guys are, like,
just networking wizards. And you love Go.
I love Go.
Yeah. If you're not going to be at
GopherCon this year, David, you all should reconsider
that. GopherCon is the best
place to be. Yeah, I just think this
is super cool. This idea of multi-tailnets.
So I want to like dig into this between
this and
T-S-I-D, what was it?
Oh my gosh, so many acronyms.
I mean, my notes here.
Don't tell me yet.
We need better names for these things.
It's okay, though.
I mean, it is a TSIDP.
That does make sense: Tailscale identity provider.
That totally makes sense.
There's just so many acronyms in this world.
It's hard to keep them all on the tip of my tongue.
So you got a lot to cover there.
So those are two things I'm taking away from this for me personally on homework.
Because I don't want to log into my Proxmox anymore.
I don't want to log into certain things I'm self-hosting,
everything I self-host that has a login prompt, even my TrueNAS box.
If my TrueNAS box can be OIDC compliant and let me use this TSIDP, gosh, I'm on it.
Yeah.
Then that'd be great, you know, because I don't have to log in anymore.
That'd be awesome if my identity could truly just follow me.
And I think the challenge, I think, for you all is it's a constant, I would say, marketing battle,
but I think it's like you have so many home labbers out there,
so many people are probably playing with your stuff.
You need more people just naturally sharing this stuff.
And I know Alex does a great job with your YouTube channel.
I mean, just tremendous job on that front there.
I think you're going to constantly have this problem, David.
I don't envy your position at all with constantly having to update people.
But there's just a sea of great stuff underneath Tailscale, from multi-tailnet to the things we've just talked about.
Those are things I'm walking away with.
You know, I do want to circle back to, if you don't mind, this idea of Aperture, and really just this, less about the product, but more about this idea of having an AI gateway.
Help me understand why you feel so strongly about it.
I imagine you do, because I'm kind of having this feeling too. It's that I'm not sure if I'd apply it at my home.
I think maybe I would.
but certainly in a burgeoning team,
a small team,
definitely in a smaller business
or an enterprise
where you're sort of like
mid, small to medium size.
But this idea of an AI gateway,
one for security,
two for API keys and just like the
concerns there,
but even just me,
not having to have this anxiety of loss
when it comes to transactions
I'm doing with a generative AI,
where I'm knee-deep in a world
where we're exploring it.
You know, it's largely just prose.
It's largely defining specifications, largely defining intent.
It's largely even learning about things, building toy applications just to learn.
Not so much to build something to ship it to the world, but to learn about, you know, even authentication.
Like when I go play with multi-tailnets later on, I'm going to go build a toy application and I'm going to work with an AI to do that.
Awesome.
But it sounds like the gateway can help me have a transactional history.
Help me understand that world of an AI gateway.
and what people can do with it.
Yeah.
So, I mean, we're definitely not the first people to build an AI, like, an AI gateway.
You know, it's a, I mentioned this in a blog post just the other day, but, you know, I was
talking with another founder, like months and months ago.
And they're just like, we're just talking about the proliferation of gateways.
And they're like, oh, it's the obvious idea.
It's like, and I agreed at the time, you know, but then later on, I realized, oh, no,
there's a Tailscale spin that can make things much simpler.
And so we should just show the world that.
But aperture in particular.
Anything.
Maybe I should talk about it, but the reasons I love working with it.
Sure.
Because I think it'll resonate.
You've been, like, knee-deep in 12 months of history.
Or research, you must have, if you don't love it, we've got problems.
Yeah, you know what?
I love it because it's kind of like Tailscale.
The first time, you're just like, oh my gosh, everything just got easier.
Yeah.
And, you know, with Tailscale, seeing that, wow,
that realization of just, like, wait,
here's all the work I didn't have to do kind of thing.
And Aperture is kind of the same thing.
It's like, oh, if you have a gateway, if you have a coding agent, rather, that you can point at a proxy, you can use Aperture.
Right.
If you're using API keys.
And so I've run Aperture on my home lab.
And the reason being is just like, oh, no, I've got a server.
I like to run everything through it.
I like to have all my logs, you know. Like, I just want one spot that, you know, if I'm working on
my laptop or elsewhere and I need to connect to my tailnet.
I just have a single path.
It's something where I just know I'm just going to route everything.
So I can take my tailnet anywhere.
That's one of the nice benefits of it all.
From a team perspective, it lets us have full visibility into the logs of other members of our team.
So it's like, oh, how does, like, you know, my teammate Ben,
like, how does he write prompts?
Why do things work better for him?
He does things much more efficiently than me.
I'm curious, like, oh, I want to learn from Ben.
Like, how is he constructing prompts?
How is he interacting with these LLMs?
It lets us have access to all sorts of, I guess, different backends.
So, you know, we use Claude Code extensively.
You can get to Opus, like, through a variety of means.
You can go directly to Anthropic.
You can use Bedrock.
You know, so we've got both of those configured inside of Tailscale.
And so, like, you know, there's been times where, like, Anthropic or Bedrock
have had issues, and we were able to just quickly, like, switch over
to the other one.
It gives us, obviously, I guess, metrics into, like, token usage across the team.
So, like, input, output, cache tokens, reasoning tokens, who's using what, that kind of workload.
It gives our security team just visibility because like every, most people don't realize this.
Like every API call is stateless, right?
Which basically means that entire context window is getting shipped back and forth across
the wire every single time.
Like, we log every single one of those.
And so then we've got some tech in there so that you can consolidate those into
sessions, right, so that you can actually, I guess, like, go through all the API
calls in a given coding session and sort of get context and understand what's going on with that,
which is really helpful for visibility. It's like, oh, like, you know, what was Carney
working on at 2am, for instance? And then from, I guess, maybe more of a, like, a legal and a
compliance standpoint, you could actually start, you know, you could start pointing your Git
histories back to individual coding sessions. They're like, oh, like, this code was developed
in conjunction with this developer, like, and here's the proof of why and who contributed
to it and how, because I know that's an issue
that some legal teams have brought up.
Our security team, you know, we can export
the logs. There's a lot of
like post hoc analysis you can do in those kinds of things.
We're working with integration partners to do like sort of real time
investigations of like tool calls, for instance,
or like after the fact
log analysis.
So there's a bunch of fascinating stuff there.
Could you block things?
Because you'd like to? Yes. I mean, the short answer is yes.
Something like a firewall, like an AI
firewall, even, too.
Yep.
And yeah.
That's what I think is really interesting.
You could do so much
with this gateway.
Yes, you can.
And there are,
so we mentioned them in this,
in a blog post just the other day,
but there's some integration partners.
Oso is one,
Servos is another.
Yeah, for instance.
Yeah.
And so, yeah,
they've been great,
like to work with over,
just over the past few weeks,
but yeah, we have an integration with them
where we've got some,
we have an API, for instance,
you know, tool call hooks,
like they can intercept them,
they can analyze those.
And you can start to do things like where you can dynamically adjust your network
or your security policy based on effectively real-time signals.
Yeah.
Like activity.
What's being called?
What's being prompted?
What's,
is that what you mean about that?
Like if I'm trying to do not so much nefarious things,
but let's just say dangerous things.
Yes.
I could be a new developer,
new builder.
Or just maybe have ulterior motives, who knows what, but like,
could you pay attention, essentially,
and, like, do different things and react?
Yeah, or you have, or you have, like,
an agent that is starting to get a little bit more fast and loose with the rules
and starting to access things that maybe it shouldn't be.
Yeah.
Okay.
Yeah.
And so there's, there's ways, you know, we can send signals out to those tools,
and those tools have, like, their own kind of like,
I guess, application level, like, network policy.
Right.
That they can start to adapt in real time based on signals they're getting
from Aperture, because it's all centralized.
So you can say like, oh, this thing, like this agent or this person or whatever, like,
they're deviating a little bit off of the regular kind of behavior.
Maybe we should start locking down or auditing more and slowing down their interactions
with these kinds of resources.
Is there latency involved in this if you're doing like tool level?
Is it literally like, are you, does the call go to the gateway back to the agent to allow it?
Or is it just paying attention and sniffing it?
Well, right now, it's just the sniffing part.
We actually are working on, I guess, adding an approval loop.
Well, like, rm's, like, you don't do rm's that are, like, dangerous rm's, for example.
Like, yes, do some of those, because you have to remove files and add files and take things away.
So, yes, do that.
But if I can intercept a really dangerous RM or a SQL injection or a database drop or a table drop or, you know, who knows what,
like I would want that.
That's where I would want the gateway to, even as a
developer, give me the ability to go in the dangerous zone of using the agent, because I want it to
have free rein. I really hate sitting there babysitting the yeses and the nos, and like, oh, yes,
please commit and push the code. I don't want to tell you to commit and push. That's inherent.
That's what we do here, okay? It's part of, like, getting in and out of the door. Let's do it.
I don't want to babysit you doing that. So I almost live in the dangerous world. I do live in a
dangerous world. I want to say, like, I do the dash-dash-dangerous flags a lot on, pick your,
you know, Codex, Opus, whatever.
But I feel like this gateway could be my security policy
and maybe even my own personal big brother in a way.
Yeah, it's, like, Tailscale,
we're not going to build that entire ecosystem.
It's one of the reasons we wanted partners so early,
to demonstrate the fact, like, oh, no, we want to work with a lot of people.
We're not going to, like, we're not going to build everything ourselves
by any stretch of the imagination.
There's a lot of really incredible tech out there that, you know, we
want to be able to plug into. A lot of great teams doing deep research on this kind of stuff.
We think of Tailscale as generally, like, a very broad, horizontal, like, connectivity platform.
It's, like, pretty deep in the stack. There's a lot of great tech that can be built
on top of that. It's also just part of a security solution. I mean, there's no way I'm going to
run a coding agent on my machine and, like, you know, with dangerously-skip-permissions, for instance,
without that in a sandbox. You're not going to do that? No. I'm doing it. I'm doing it. I'm not doing it locally.
I have a sandbox. I have a sandbox in a cloud provider that can't get out. And I don't put,
I don't put my SSH keys on that thing. I just, I push things to it. You know, and I think there's,
I know my colleague, Avery, our CEO, he's been experimenting a lot with this kind of stuff too,
just, like, yeah, because, like, velocity is super important. You know, these agents nowadays, and just,
like, the LLMs are just improving
week over week. I want to give them, I want to really leverage that safely.
And I think aperture is part of that safety solution, but it's not the whole answer.
So you built the AI gateway with identity baked in,
API keys not having to be passed around. There's a lot of benefits here. You mentioned the desire
for partners. Give me an idea of what you mean by that. You said, we're not going to build all that.
And you alluded to all that. We talked through a bunch of stuff. I mentioned agent policy and stuff
like that. What do you mean by all of that, and how can partners step in? How can an individual,
a team of one, step in and become a partner of Tailscale and build out what you're not going to build
out? Well, let me think. I mean, they can contact me. Happy to have a conversation with
companies and individuals working in the space. What's the best way to reach you? You want to,
say your email address here, or is there another way to get a hold of you? Oh, just aperture at
tailscale.com. I'll see it.
Yeah. That's you.
Yeah. Well, it's, it's me and the team.
It's U plus, yeah. It's the proverbial you.
Yep. And Aperture is A-P-E-R, is that right?
Or A-P-U-R?
E-R. Yeah. Aperture. I always misspell it. I mean, listen,
ever since Aperture, Apple had this software called Aperture for photographers, if you know this,
you know, 15 years ago. It was amazing. They don't make it anymore.
I've never been able to spell Aperture properly.
It's like I got to remind myself, you know, like, every single time my brain doesn't get it, okay.
So please spell aperture, if you don't mind.
Aperture at tailscale.com.
Yeah, A-P-E-R-T-U-R-E.
Put that in the show notes for everyone to get a hold of you.
Because I'm really curious about this.
I think this is cool.
I mean, when I first heard about it, I was thinking, oh, my gosh, big brother.
But then we kind of need a big brother in a way because agents can't be trusted.
Until we could trust them, we sort of have to big brother them.
because they're already big brothering us in a way.
And logging all the things to me gives me some peace of mind because I want history.
I want my own history.
And from an enterprise standpoint and maybe a peer standpoint,
and it might be a little weird to see what David's coding or how you're prompting.
But I don't know.
I think we're going to have to be just okay with that to some degree.
But I'm going to want to learn or peek over your shoulder and say, well, David's clearly like 10xing my own 10x here.
What is he prompting here?
You know, how is he, what is this magic he is doing here?
Maybe it's just like, do it.
Is that your prompt, David, just do it?
Oh, I've been tempted.
Sometimes that's my prompt.
It's like, yeah.
There's that's an idea.
Do it.
Yeah, I like that.
Sounds great.
That's an amazing idea.
I gave you the original idea.
You morphed it.
Now it's amazing.
Yeah, do it.
Do plan one.
I love that.
Yeah.
I spent a lot of time in planning mode with Claude.
Yeah.
Yeah.
It is crazy, though, like, how fast those tools can, like,
move and the stuff they can do if left unchecked.
You've got to be careful.
I think, you know, I guess I have comments on a couple of things you just said.
Like, there is, we've got some basic permissions with Aperture right now.
You know, like users can only see their own stuff, for instance.
Admin can see the world.
Like, we've got a lot of stuff to implement there about, oh, maybe we only want a team
to be able to see, like, you know, logs within their team.
Maybe we only want a manager to see the team members.
There's all these different kinds of, I guess, access models that we need to explore with customers on this.
And we are doing that.
So there's a lot of room for new features there.
But again, we're just trying to keep things simple at this point.
In terms of one of the neat hacks that we've done internally, and I really like this one,
is that you can actually point a coding agent because we have an API.
There's a whole bunch of endpoints inside of Aperture.
And one of the endpoints inside of your tailnet is that you could actually get your own logs
out of it.
So you can point a coding agent and say like, oh, I want you to explore like basically
how you've worked in the past.
Right.
And so you can start to get like these recursive, I guess, like learning or feedback loops
with a coding agent, like reviewing how it's worked in the past or reviewing its previous
logs.
And I think you can unlock, I think there's a lot to unlock with that.
You know, we've just only started to scratch the surface.
But it's yielded some really interesting insights for us as we've been digging into, like, oh,
how do these protocols work, for instance?
Like, how do these coding agents tend to function?
Like when they start delegating stuff
or using sub-agents, like how does that kind of,
how do those mechanics work?
And so I think for a home labber that if you want to explore more
about just like the protocols and the request headers and the bodies
and like how these coding agents are sort of like,
you know, how the API calls evolve and how the context evolves,
it's super neat.
I like the fact that you have access models.
That gives me more peace of mind because, you know,
gateways are great.
You already have it at the network.
So you can't hide from that.
and IP addresses, DNS, those reveal a lot about an individual.
But, you know, an LLM and your interaction with a model like we do today reveals, you know, a lot more because you may say, hey, I'm new to this scenario.
And like, maybe your team thinks you're really steeped in there or whatever.
They think you're more advanced.
And it's not so much that judgment happens, but you may show or have to be more truthful with your awareness and level
of understanding of something.
And that may either be embarrassing or a fireable offense or, I don't know.
But then you start to think about like, okay, who has access to that information?
I realize this is all work stuff.
But I have to be, and we're almost getting more and more,
I don't want to say the word intimate, but it kind of is the word a little bit more relational,
a little bit more intimate with the other side.
Let's just say the machine behind the machine.
Because it feels a lot like we're talking to something, a peer.
And in a lot of ways, it acts as a peer, as an educational peer.
Education is dramatically changing, let alone code level understanding.
Like, navigating a codebase is so much different today than it was. Like, not even a 180, like, a 720 difference in terms of, like, spins.
It's just dramatically different to navigate a codebase and have an understanding in a world where you have agentic properties available to you.
And in a lot of cases, you might be more forthcoming with the agent,
because you have to be, you know, to get it to give you what you need.
You kind of have to be like, you know what?
I don't know a lot about this subject matter.
You know, can you, can you steep me in it?
Can you school me?
What do I need to learn here?
Can we build a toy application?
And then you get launched into it.
But it can be a little revealing, sometimes maybe too exposed.
And that's where I, that's where I was like, you know what?
I like the fact that you have guardrails because I was a little, a little apprehensive myself.
In my own self-hosted version of it, yeah, that's cool, because I'm not judging me.
But somebody else might if I'm in an enterprise.
Oh, totally.
I have, like, when I started using coding agents, like, the prompts were like, I almost, I'm
scared to go back because of what it might be.
Right.
You're like, gosh.
It's like, oh, I was burning a lot of tokens on useless things.
Like I didn't know how to use this tool properly.
But the thing is we're all trying to learn how to use these tools properly.
And the thing is they change every like few months.
Like I think back to, you know, what, like, earlier Sonnet versions back in September versus
what I'm using now.
Yeah.
It's night and day, the capabilities.
Like, teams really have to stay on top of this stuff if you want to keep up with it.
So, you know, privacy, obviously, like, security is key.
I think people have to be pretty open and candid and transparent if they actually want to learn in this space and, like, really leverage it and get advice from other people, too.
Which is another one of the reasons I want to, I'm so excited about aperture just in terms of, like, the transformation, I think it can help our company make.
Because, you know, we're seven years old.
We've got a lot of incredible engineers and other people working on all sorts of technical stuff.
When we started the company, LLMs basically didn't exist.
And look where the industry is now. I dread to think where it's going to be a year from now.
We have to adapt and learn more about these things.
But a world where people are just like just clicking yes all the time is not a world I want.
I want us to be like leveling up.
And I think to do that, you have to sort of look at what you did.
It's kind of like code reviews, just much more aggressive.
It's like prompt reviews and just understanding
how do these tools work?
How do our workflows need to change?
Like where did this, like, where did this agent go wrong?
Like, where did it make the wrong kind of assumptions?
How do we have to shape our code base?
So it doesn't do this in the future, that kind of stuff.
Like there's so much, I guess, knowledge that we have to uncover
of how to work with these tools going forward to make them effective.
I love the fact that prompt review and PR,
pull request, just naturally
translate. I don't know if that was, like, serendipitous 17 years ago when GitHub established
itself and said, okay, pull requests are the thing. Fork yourself kind of stuff.
Yeah. That was, like, the founding detail of GitHub, being able to fork a code base and
submit a pull request. It was brand new territory. And now look at us. We're just talking to
our code bases. I still can't believe it. Still can't believe it. I have to pinch myself every day like,
that's crazy.
I don't know about you,
but GPT 5.3 Codex is really,
I've just never worked with an agent that was that advanced.
It's like working with the most advanced engineer ever.
And in some cases,
I'm a little pissed because it's like it knows way more
than I think I would ever know.
And I'm almost like, yeah,
I know a lot about this,
but like, wow, you clearly,
you're speaking a whole different language
and you're moving way faster
than I can ever do it.
And it's a little scary.
It's a little scary.
But I think this gateway idea
has got some merit,
especially in the field you're trying to go to,
which you need a killer app, right?
You need a killer app on,
on Tailscale, that is beyond the VPN.
So this is taking you beyond the VPN.
It's taking identity to a whole new place.
It's securing our API keys
in ways we've never been able to before
in a world that is burgeoning
and very tumultuous in terms of security.
So I'm all for this.
My only desire would be self-hosted at some point.
And whenever you get to that, obviously, I'll check it out no matter what between now and then,
but I'm going to be in a world where, and this is just my own personal opinion.
I've just been in this world where these are the kind of things that I want to have sovereignty over
and I already have compute.
So why not dedicate my own compute to it?
I'm happy to pay a licensing fee or the business model however it is to go.
But I'm in this world where I want to self-host a lot more than I ever.
wanted to before. And it's not, it's not that I don't trust the world. It's that I want to have
more control over the world I'm building. And they're already interconnected. I've already
used Tailscale. I already have my tailnet. I haven't tapped into multi-tailnets yet. But I'm going to.
That's where I'm at is, give me the self-hosted version of it. Because that's going to be,
that's going to be fun. I hear you. Yeah. I hear you. What's, uh, what's left? What have,
What have I not asked you, David, about your journey with Tailscale, what you think people do and don't know?
What is the biggest myth?
If you could debunk a myth about Tailscale, what might that be?
Do you often have to debunk myths about what you do and what you don't do and how deep of a well you all have?
Well, one of the myths is that we're just for home labbers or small teams.
That's not true.
Like we've got a significant number of enterprise customers.
but I've often heard on calls,
or, like, you know,
we'll bump into people at conferences
who were like, oh, so-and-so was saying that,
you know, you're just for home labs,
or you're just for small teams.
You're just, like, a hobbyist kind of thing.
It's like, no, we're serious.
Like, you know, having a free plan...
Some of the most advanced engineers I've ever known.
Like, you've got some really talented people in your team.
I'm very fortunate.
I know you agree with that.
I mean, like, you really do.
You have some serious internet talent.
Yeah, we do.
Yeah, I'm very grateful for that.
It's, yeah, it's an incredible team.
Brad Fitzpatrick is one of the ones I'm thinking of.
Like, he's been on our podcast Go Time.
We used to host this podcast called Go Time.
That's why I'm so emphatic about GopherCon even, too, and Go the language.
And Brad's been on that podcast.
And I think, I'm not sure who all is still there over all these years, but a lot of folks that I've just paid attention to and have, like, leaned on for wisdom on how to morally navigate the software we're building, even from a technical level. Like, just bar none,
some really awesome people there.
Yeah.
Yeah.
I count my blessings with the team we have.
But yeah, to talk about the myths, you know, we are, we're never going to give up having a free plan.
You know, I think free forever.
Yeah, free forever.
Like we're going to have, we're always going to have something and we're going to be pushing more and more into that over time.
That's really important.
To Avery, me, and so many other members of Tailscale in general.
But, you know, we are building more and more stuff all the time.
And a lot of that has enabled us
to just take on bigger and bigger and bigger customers
to help, you know, frankly, pay the bills
and help us grow and help us expand.
And it just makes Tailscale a better product for everyone.
And so we are definitely like enterprise ready.
And that has been one of the myths that over the past couple years,
we've done a lot, I think, to establish that.
But there's always more, I think we could do
because we've come bottom up.
And so there's always that, you know, people who learn about you in the early days,
that's how they think about you.
It's like the curse of SaaS.
Like once somebody adopts you,
you have to spend so much time re-educating them
because the product's changed and evolved a lot.
So we had a lot of early adoption
and we have to go back and just talk more
about what we did and the technologies we've built
and where we're at
and the kinds of customers and stuff.
That's one of the myths.
The other one is that we're just a VPN.
Tailscale's more than a VPN.
A lot of conventional VPNs,
it's all about IPs and connections.
Like, Tailscale bakes identity in.
It's, yeah, it's a fundamental guarantee of every connection.
Like, you know who and what is connecting.
Like, if it can connect to you, it's already authorized,
and, you know, you can tell who it is immediately.
Like, you know who it is.
And that's more of a paradigm shift,
where people think about, like,
connectivity and identity
being sort of the same thing,
or paired very closely together,
which is not how most people think about,
like, networking.
They think about,
oh, I've got this connection, and then I've got identity that's, like, way on top somewhere,
like layer seven.
It's like, oh, no, no.
But with Tailscale, it's baked in.
And if it is, you can just do so much more with that.
You don't have to think or worry about identity.
And I think that once people get their head around it, like you can see these lights go off.
That's the real sort of, like I said, paradigm shift I want to bring to people.
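For context, the "identity baked in" model David is describing shows up directly in a tailnet's policy file, where access rules are written against users, groups, and tags rather than IP addresses. A minimal sketch, with hypothetical group and tag names:

```jsonc
// Hypothetical tailnet policy fragment: access is expressed in terms of
// identity (users, groups, tags), not IP addresses.
{
  "groups": {
    "group:devs": ["alice@example.com", "bob@example.com"]
  },
  "acls": [
    // Only members of group:devs may reach SSH on prod-tagged machines.
    {"action": "accept", "src": ["group:devs"], "dst": ["tag:prod:22"]}
  ]
}
```

Because the policy is enforced before a connection is established, a machine that isn't authorized simply can't reach the destination at all, which is the "if it can connect, it's already authorized" guarantee.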
I do not know what you can do differently, because I would say I'm a pretty heavy user of Tailscale.
And even some of the things you're saying now about these myths,
they're myths even to me as a daily active user of tailscale.
I don't know what you do or what you can do to solve for that problem.
But I agree, not just a VPN.
It's unclear to me all the ways that my identity is attached to my connections.
It obviously makes sense when you explain it, but it's just not obvious in my daily use of it,
even when I
SSH around my network
I don't do it via
Tailscale, I don't think.
I will SSH via Tailscale with, like, maybe,
let's say, if I have a machine that has a hostname of
Cineplex. So my Plex machine
is called Cineplex. I like it like that.
I just SSH Cineplex.
I don't know if I'm using any special
niceties of Tailscale besides maybe
hostname mapping. That's about it, probably.
I'm not using my Tailscale identity.
I don't think.
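A quick aside on what's happening there: the bare hostname resolves because of Tailscale's MagicDNS, and Tailscale SSH is the feature that would attach tailnet identity to those sessions. A sketch of the difference, using the Cineplex machine from the conversation:

```shell
# Plain SSH over the tailnet: MagicDNS resolves the short hostname,
# but authentication still uses your own SSH keys.
ssh cineplex

# Tailscale SSH: tailnet identity and ACLs authenticate the session
# instead of managed SSH keys.
tailscale up --ssh        # on the destination machine, enables Tailscale SSH
tailscale ssh cineplex    # from another machine on the same tailnet
```

With Tailscale SSH enabled, who may reach which host is governed by the tailnet policy, so there are no per-machine authorized_keys files to manage.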
And that's maybe, and that's maybe my own fault.
Maybe it's your fault.
I don't know.
But I agree.
Like, I feel like I learn more and more about what Tailscale can do for me, because you do so much.
And I don't know how you explain to folks more differently than you already do.
Besides just keep, I guess keep trying.
I don't know.
It's, yeah, I mean, yeah, good infrastructure gets out of your way really quickly.
And that's been a guiding principle for us ever since day one.
Yeah.
You know, so you don't want infrastructure to make noise.
At the same time, it would be awfully nice if more people sort of, I don't know,
like, just understood all the cool stuff that we were building.
So it's a very tricky balance, because we try to be very sort of quiet,
just works, like, always works in the background.
Do we want to make a lot of noise about a new feature?
Like, it creates tension, I'd say.
We try to be very cautious with that.
Like, even with Aperture, I mean, just thinking out loud here, I think, you know, one of the ways you can show off a lot about it is not so much building the applications, but showing off the cool things you can do with it.
Like, this podcast is one example, diving into it.
There's just so much you could do with an AI gateway, which I think is worth exploring, that the way you can explain things to folks is really just to show it off, demo it.
And maybe that's conferences.
maybe that's in the hallway track,
maybe it's via YouTube,
and Alex's team and what they're doing there.
But I think there's so much you can do
with identity on your network that I'm just now thinking about
that I personally care deeply about on a daily basis
because I'm building things that require it,
and I'm not leveraging any of this tooling.
I'm doing it the hard way,
despite being a user.
I'm almost angry at you, the proverbial you, for not doing that,
and maybe you do.
And maybe Alex is like,
Adam, just watch my YouTube.
and maybe that's the easy button there.
Alex, there's a lot of content.
Yeah.
Right.
There is.
Yeah, there's a lot of things.
But, you know,
um, yeah,
don't put an LLM on a public port.
You know, like, don't put an LLM on the public internet.
Don't do that.
Right.
But you can,
you can share that with,
like you can create a private network
and you can share that with your friends with Tailscale.
Like there's like so many little things like,
it's like, oh,
I just,
I didn't realize there's sort of an easier,
better,
more secure way.
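One concrete version of that easier, more secure way: keep the model server bound to localhost and publish it only inside your tailnet with `tailscale serve`. A sketch, assuming a local LLM server already listening on localhost (for example, Ollama's default port 11434):

```shell
# The LLM server listens on localhost only; nothing on a public interface.
# Publish it to your own tailnet over HTTPS, in the background.
tailscale serve --bg http://localhost:11434

# Inspect what is currently being served.
tailscale serve status
```

Sharing that machine with a friend's tailnet is then a node-sharing step in the admin console. The separate `tailscale funnel` command is the one that would expose a service to the public internet, which is exactly what David is warning against here.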
I'm telling you,
I need to know,
the easier, better, more secure way, the Tailscale way, the tailnet way, the multi-tailnet way.
I can't wait to play. Oh, my gosh, I punned and I rhymed at the same time.
David, it's been so awesome talking to you going into the details of this.
I'm a big fan, as you already know.
I'm looking forward to the self-hosted version of it, as you already know, because I've said that a couple of times already.
I think that's where the future is at in a lot of cases here.
I can see why you're spinning up instances to achieve velocity, but I think ultimately
sovereignty is the key, in my book at least.
So there you go on that front there.
Anything left in closing?
What else you want to say?
Aperture at tailscale.com.
We'll put that in the show notes, of course, to reach out to you to become a partner or a
builder who's got some ideas on top of TSNET and the things you're building there or maybe
even aperture itself.
But what else?
I invite people to reach out, whether it's, like, for partnerships or ideas or feature requests
or whatever.
I'm here.
I'm accessible.
Like, I don't want people to think that,
just because Tailscale's a few years in and we're sort of the size we're at right now, that
I'm unreachable. And, you know, we definitely don't have all the best ideas. We've got
some good ones, but we want to work with a lot of other people and help them get their ideas to market
quickly, and in a more safe, secure, and expedient way. So yeah, please, like, reach out. I'd love to
hear from people. Maybe some office hours for you? Maybe you could
do some office hours. Would you entertain that?
Yeah, yeah. No, let's come up with something. That's
definitely, I think, once we get through
a bit more of this push on Aperture.
I'd record it too. I'd turn it into some content,
because, I mean, I would pay attention
to the behind-the-scenes of that. I think that's,
like, peer-to-peer is where, I think,
a lot of developer, like, that's
one of the ways we really educate
folks that
listen to our pod. And, you know, we have a lot of
sponsors who sponsor our stuff, but one of the
directions we take
is not just throw an ad out there, but
something that goes behind the scenes.
It's informative and it's
peer-led. And you can kind of
see, okay, well, this one team
over here has got this idea for how they can leverage
Aperture or an API net, or,
sorry, an AI net, an AI gateway.
And now I can
see what they're using it for. Now I've got
some ideas as well, kind of thing. Yes. I think
office hours could be kind of fun. And to
speak to folks at the top, like you and
your team members, that'd be kind of cool, to
bring some questions, dig into how that works out,
throw some ideas out there.
Let it be a little loose,
but also a little structured.
And yeah, that'd be cool.
I could see that happening.
Yeah, it'd be a lot of fun.
I'm, uh, yeah, I just,
I love helping people get their ideas to market and removing some of the pain.
Like, it's just such a fantastic experience.
And I just want to see and help with more of that.
Yeah, very, very,
very happy to talk to people about their ideas and, uh,
work on collaborations and partnerships and stuff like that.
All right.
Well, David,
thank you so much.
Appreciate you. Awesome meeting you. Awesome conversation. Yeah, thank you. We'll see you again soon.
Well, that's it. The show's done. Thank you for tuning in. Big thank you for being a listener of this podcast.
If you haven't yet become a member, it is free. Yeah, you can go to changelog.com/community.
Free to join. Hang with us in Zulip chat. Everyone's there. Everyone's welcome. And you are welcome.
And I want to see you there. And if you love this show just a little bit more than you love every other show
out there, and you want to go deeper, get bonus content, get closer to the metal, drop the
ads, support the show.
We have a membership that is not free.
It's called Changelog++.
Learn more at changelog.com/++.
It's better.
It is better.
You know why it's better?
Because you get bonus content, you get the little extras, you get closer to the metal,
and like I said, you support the show.
Big thank you to our sponsor for this show today.
Thank you to BMC, Breakmaster Cylinder, for being our Beat Freak in Residence.
My gosh, those are awesome beats.
And thank you to you for tuning in to this show.
That's it.
We'll see you again soon.
