The Changelog: Software Development, Open Source - Observing the power of APIs (Interview)

Episode Date: November 2, 2023

Jean Yang's research on programming languages at Carnegie Mellon led her to realize that APIs are *the* layer that makes or breaks quality software systems. Unfortunately, developers are underserved by tools for dealing with, securing & understanding APIs. That realization led her to found Akita Software, which led her to join Postman by way of acquisition. That move, at least in part, also led her to join us on this very podcast. We think you're going to enjoy this interview; we sure did.

Transcript
Starting point is 00:00:00 Jean Yang's research on programming languages at Carnegie Mellon led her to realize that APIs are the layer that makes or breaks quality software systems. Unfortunately, developers are underserved by tools for dealing with, securing, and understanding APIs. That realization led her to found Akita Software, which led her to join Postman by way of acquisition. That move, at least in part, also led her to join us on this very podcast. I think you're going to enjoy this interview. I know I did. But first, a quick thanks to our partners for helping us bring you darn good developer pods
Starting point is 00:00:48 week in and week out. And how do we bring them to you? Fastly, of course. Check them out at fastly.com. And our open source Elixir app servers are powered by fly.io. Check them out at, that's right, fly.io. So I'm here with Ian Withrow, VP of Product Management at Sentry.
Starting point is 00:01:19 So Ian, you've got a developer-first application monitoring platform. It shows you what's slow, down to the line of code. That's very developer friendly and is making performance monitoring actionable. What are you all doing that's new? What's novel there? Traditionally in errors, the strength of Sentry is that we haven't just taken a stream of errors and said, hey, go look at this, all these error codes are flowing in. Instead, we actually look at them. We try and fingerprint them and say, hey, we've
Starting point is 00:01:45 actually grouped all these things. And then we give you everything you need within Sentry to go and solve that error and close that out. And that's, I think, driven tons of value for our users. And traditionally, if you look at performance, it's not that thing. It's looking at certain golden signals, setting up lots of alerts, maintaining those alerts, grooming those alerts, and then detecting them. And then maybe you have a war room and you try and look at traces, or maybe you realize, oh, it's this engineering team that owns it. Maybe they'll look at logs, whatever they have available. Performance is very oriented around detection and then isolating to where the problem may exist, and root causing is often an exercise
Starting point is 00:02:27 left to the user. Good performance products provide a lot of context and details that an experienced engineer or DevOps professional can kind of parse and make sense of and try and get to a hypothesis of what went wrong. But it's not like that Sentry error experience, where it's like, here's a stack trace, here's all the tags. Oh, we see it's this particular segment of code, and Ian did the commit that changed that code, and do you want to file a Jira issue and assign it to Ian? It's not like that crisp, tight workflow that we have for errors.
Starting point is 00:03:03 This is breadcrumbs. Right. And we said, hey, maybe there's no reason why we couldn't do this for performance. Let's try. Okay, so you took a swing, you tried. Describe to me how that trial works. If I go to my dashboard now and I enable APM on my application, what are the steps? Largely, because we kind of encourage you to set up transaction information when you set up Sentry, as a user, you
Starting point is 00:03:27 probably don't need to do much. But if you skip that step, you do need to configure to send that data in your SDK. And what happens is we start now looking at that information. And then when we see what we call a performance issue,
Starting point is 00:03:41 we fingerprint that and we put that into your issues feed, which is already where you're looking for error issues. Right, so it's not a separate inbox? This is the same inbox. The same inbox, yeah. Now, we obviously give logical filters, and if you just want to look at those, we do that. And for a newer user, sometimes we detect, hey, you've probably never seen this before, and we do things, because we know we build for the mass market, that bring your attention to it. But it's the same workflow you have for errors today.
Starting point is 00:04:08 So you don't have to learn something new to take advantage of these things. So you asked for the experience. So last fall, we did the experiment, the first one, which we called N+1. And we didn't know how it was going, honestly. But people liked it. We kind of know people like it when they start tweeting and saying nice things about it. And so, yeah, it got traction. Very cool.
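To make that concrete, here is a minimal Go sketch of the N+1 query shape such a detector fingerprints: one query for the parent rows, then one extra query per row. The table names and SQLite driver are illustrative assumptions, not anything from Sentry.

```go
// Sketch of the classic N+1 query pattern (hypothetical schema).
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver choice is illustrative
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Query 1: fetch all posts.
	rows, err := db.Query(`SELECT id, author_id FROM posts`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var postID, authorID int
		if err := rows.Scan(&postID, &authorID); err != nil {
			log.Fatal(err)
		}
		// Queries 2..N+1: one extra round trip per post. A detector
		// fingerprints this same repeated query inside one request.
		var name string
		if err := db.QueryRow(`SELECT name FROM authors WHERE id = ?`, authorID).Scan(&name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(postID, name)
	}
	// The fix is a single JOIN:
	// SELECT posts.id, authors.name FROM posts
	//   JOIN authors ON authors.id = posts.author_id;
}
```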
Starting point is 00:04:37 So if your team is looking for a developer-first APM tool to use, check out Sentry. Use our code to get six months of the team plan for free. Use the code CHANGELOGMEDIA. Yes, CHANGELOGMEDIA. Six months free of the team plan. Check them out at sentry.io. Again, sentry.io. That's S-E-N-T-R-Y dot I-O. Bye. Today we are here with Jean Yang with Postman, formerly Akita Software. I want to hear about that as well. Jean, thanks so much for coming on the show. Thanks for having me. I'm super excited.
Starting point is 00:05:43 I'm excited as well. I've wanted to have you on the show for a while. I think when I first came across you, it was Akita Software. And now you're with Postman. Do you want to tell us, let's start off with the story. Let's hear about what you're up to at Akita and how you ended up at Postman. Yeah, so at Akita, we were building the fastest, easiest way for software teams to see what endpoints they have and what endpoints they might want to pay attention to. The motivation is more and more developers don't have a handle on what's running in prod because of the rise of SaaS, the rise of APIs, and just the aging of software systems in general. Software isn't 10, 20 lines of code that you write and pass around anymore. It's these complex, living, breathing systems
Starting point is 00:06:29 with lives of their own. And so at Akita, we felt like if a developer lost control of their production system, so if they didn't keep up to date with monitoring or they didn't keep up to date with documentation, it could quickly spiral out of control. They fall off the wagon. They don't know what's going on anymore. And we were building to allow every
Starting point is 00:06:48 developer to really get a handle, not the deepest handle, but kind of what you need to know, a quick rundown of what's going on with your APIs within minutes of signing up for us, basically. That was the goal. And so we had made good progress at Akita. We were converging a lot with what Postman was doing because we were taking a very API-centric view of the world. We were operating on the thesis that the rise of APIs has caused all these problems, and APIs are also the solution. So showing people what's going on with their APIs can go a long way. And Abhinav, the CEO of Postman, had reached out to me in 2021 and said, look, it looks like we're converging and we're only converging more. You should think about becoming part of Postman.
Starting point is 00:07:40 At the time, I said, that's not what we're about right now. We really were just heads down working, you know, maybe one day we'll become part of Postman, but it's not time yet if that's the outcome, because we really just need to figure out what we're doing. And so now it's 2023, we have continued converging with Postman on an API centric view of the world. And it became clear that also that joining Postman meant that we would have a bigger platform to start off with in building what we're doing, in hooking in with other features that our users could use once they got their APIs into our system. And just Postman has a very much bigger machine in terms of user funnel coming in and platform support. And they've already got identity built and all this other stuff. And so for me, it was always about building for users the thing they needed and not necessarily about building an independent company or building the biggest company in terms of the number of people. It was really about the product. And so
Starting point is 00:08:42 I've been really happy that I've gotten to focus on the product and the users now that we're at Postman even more than before. That's cool. So we had Abhinav on the show, remember Adam? A couple of years ago. Way back. Way back. About four years. Very impressed by him and enjoyed that conversation quite a bit. So he was convincing to me in most of the things that he had to say to us. I could see how he would be convincing to you. Was acquisition something that was in there from the beginning with Akita?
Starting point is 00:09:09 Was it something that you eventually thought would happen? Or were you trying to build something bigger or smaller? How did that come across? I know he convinced you over time, or has it made sense? But had you thought eventually you were going to get acquired by somebody, whether it was Postman or not? I was open-minded. That's a good question. Like I said, it was really about what's best for the users and the product. I started Akita after leaving academia. So I left academia to start
Starting point is 00:09:36 Akita because I felt that starting a company was a better way to serve the user need of API chaos than staying in academia and writing papers, even though I had a pretty nice job at Carnegie Mellon University. And for me, I really kept in mind this goal of I want to do what's best for developers. I want to do something that provides real value to developers. And if it's building an independent company, that's great. And as I've written about before, there actually aren't that many independent companies that succeed staying independent as dev tools. You see a lot of developer tools innovation coming out of bigger companies. So when I was coming of age, Google and Microsoft were two of the biggest centers of developer tool innovation. And it was
Starting point is 00:10:26 hard to do a lot of innovative stuff in smaller developer tools companies. We see a lot more developer tools companies as startups these days, but I was always very open to there's the best place to build every tool. And it could be a startup at some points. Once the startup sort of finds its place, acquisition could be the right outcome for it. And it's always, for me, been about what's best for providing value to developers. How did you begin? What was day one for Akita?
Starting point is 00:10:58 Did you have a network built already? Did you have seed funding? What was day one through day 120? I don't know, something like that. I did have a network built already. So I did my PhD at MIT and then I became a professor. And during all of that, I had been quite interested in entrepreneurship. And so I had actually started an accelerator with my friend called Cybersecurity Factory. So my research had been programming languages and security. So I was doing a lot of security stuff ahead of starting Akita, actually.
Starting point is 00:11:35 And in 2015, I had started an accelerator with Frank with Highland Capital. And so it was a summer program where we gave people a small amount of initial funding, took a percentage of their eventual fundraise, and gave people a security network of industry experts to help them get started. And so some of the advisors to Cybersecurity Factory had been other founders I looked up to, for instance, Max Krohn, who founded OkCupid and Keybase and now runs security at Zoom. And so I had known these people from before, from starting Cybersecurity Factory. And in seeing the first batch of companies go through, I saw what the beginning part of starting a company looks
Starting point is 00:12:20 like. You do a lot of discovery calls, you talk to potential users, you segment the user base, and then you figure out what the product might be. And so I started this accelerator in part to see what it would look like if I wanted to start my own company one day. And my friend Frank, who I started it with, convinced me to participate in it in 2018 as a team when I was thinking about starting my own company. And the original incarnation of Akita was an API security company. Well, it was generally a general security data discovery company that quickly became API security. And I pivoted in 2020 out of API security into API observability after we realized that developers were much more interested in what we built as a non-security tool.
Starting point is 00:13:13 I will say that programming languages was always the primary part of my research and security was the application area. And so for me, I was always a very developer tool-oriented security person, which depending on how familiar you are with developer tools that are security-focused, many people may not be because there really honestly aren't that many. Developer concerns and security concerns have quite often had tension with each other. But that was how it all started.
Starting point is 00:13:39 Interesting. So API observability, I guess you found that the developers weren't super interested in the security side. The piece I first read of yours that made me want to bring on the show was why aren't there more programming language startups? And as you said, your background is in programming languages. And it's just interesting that you also had a startup, but then you didn't have a programming languages startup yourself, even though you're interested in the topic of programming languages. Can you unpack that in brief so we can discuss more and then maybe how your research fed into where you thought you might create a product as a startup?
Starting point is 00:14:16 Yeah, it's actually really interesting because people used to reach out to me when we were in stealth mode. And this was a big part of my reason of getting out of stealth mode because they wanted to work on a compiler or they wanted to work on a programming language. And I will say, even though I was doing research in programming languages, a lot of the big questions in the field of programming languages were not what people thought. So I think when people think programming languages, like everyone wants to do the next Python or, you know, they have, this is, this is how I program. And I want to make a language just for me. And, um, even, even in the research field, that's not
Starting point is 00:14:52 what a lot of the research was about. There's a lot of, this is how we prove software systems to be correct. This is how we analyze large software systems. This is how we build tooling to, um, do weird things in software. And this is actually a thought experiment. And so even in the field of programming languages research, there's a lot of different stuff. My work had always been more systems-y. So I was doing research on this is how we enforce what's called information flow policies across software systems. I quickly realized you need to do that not just in the application layer if you're really going to build systems with it, but in the database
Starting point is 00:15:29 layer and across web applications in a variety of ways. And so my work had already escaped the programming language layer itself, if that makes sense. And so APIs to me were the next thing. And actually, one of my last papers that I published from when I was a professor was about enforcing the security policies at the API layer, at the REST API layer across an application and a database. So one, I guess, context piece is that my work had never quite been, you know, what people on the outside might think of as
Starting point is 00:16:06 programming languages. Because I saw the field as really about how do we build software systems, and how do we do software development, and how do we ensure that when we throw code over the fence and cross our fingers really hard, that it's not just, you know, really causing huge problems and causing people to die and cars to crash into each other, etc, etc. And one thing I think is really funny, is that 10-15 years ago, I got into programming languages for a lot of the reasons people are afraid of AI right now. So people are saying, how do we know what the AI is doing? And how do we know the AI doesn't have a life of its own? And it's really running all of our lives. And for the last 10, 15 years, I've been saying the same thing about software. We don't know what software is doing. We don't know that it's doing the thing that we told it to do. It has a life of its own.
Starting point is 00:17:00 Any software bug can take down so much stuff right now. And to me, that's what the field of programming languages was about. That's why I'm interested in developer tools. And so I think the people who are interested in developer tools because they have certain aesthetics about this is how I want to program, that's a different reason than me. I want software to be better quality. I want it to be easier for people to build software that does what it's supposed to do. Sometimes there's no thing the software is supposed to do. That's maybe a problem. I want people to be able to get their head around their software systems. And so to me,
Starting point is 00:17:37 what we're doing at Akita is very much, or what we were doing at Akita now at Postman, is very much in line with that. I think that a lot of the future of quote-unquote programming languages, so the people who are interested in software reliability and software development, that's going to end up in systems areas like observability or AI. So a lot of what people are talking about with AI safety, AI explainability. I was interested in very analogous ideas when it pertains to all of software. Because look, what runs AI? Software. What's gluing pieces of AI together? Software. And I think a lot of the things that people are worried about, like what if the AI does something that we didn't expect and how can we trust it?
Starting point is 00:18:22 We should be asking these questions of all of our software. Well said. I think that does help. A lot of good points. Yeah. Help paint a picture of where you landed with Akita with API observability, specifically your interest in reliability and security and building systems that we can know why they work the way they work.
Starting point is 00:18:42 I think API observability in that sense makes a lot of sense, especially because so many of our software systems cross the API boundaries, whether it's just internally via microservices or externally. I mean, we have here at changelog, we have an open source CMS. It's like the smallest little software system you could possibly imagine, right? Like it's running a small business that publishes podcasts. It's not complex software. It's very straightforward domain space. And yet if you had to go through my list of API integrations with third parties,
Starting point is 00:19:13 it's double digits. It might be two dozen integrations for a very simple CRUD application. And so that's just a small app. Yeah, yeah. And that's so common. Yeah, everything people are saying about AI, they should be saying about APIs.
Starting point is 00:19:27 AI taking over the world. APIs are taking over the world. Is AI developing a life of its own? APIs are developing a life of their own. You get your AI via an API these days as well. So like you said. Yeah, exactly. Yeah, people should be much more afraid of APIs.
Starting point is 00:19:44 Much more afraid. Wow. You're making me afraid of APIs. Much more afraid, wow. You're making me afraid of APIs. Can you share a shortlist, Jared, of integrations? Five or six? All 12? Just to name a few. Buffer, GitHub, Slack, Mastodon, TypeSense, Cloudflare, Fastly, Campaign Monitor, S3, Sentry. There's a few, just scrolling through my list of API wrappers.
Starting point is 00:20:15 And so, Jean, on larger, non-simple software like we've got for running our business, what are the integrations on like postman for example i actually so we focus on internal first party apis and so i haven't uh paid as much attention to third party apis but other ones you often see like we have segment we have all of our analytics we have sun grid uh the the email. So those are just our team. I haven't even looked at the rest of Postman. But pretty much any functionality that we need from the outside is an API. So why should we fear the APIs?
Starting point is 00:20:56 Well, I think we should fear them and love them. But I think right now people are not really talking about APIs that much. So there should just be more discussion. But here's what's happening. In the last 10 years, the rise of APIs has meant that it's easier to build software than ever before. An example I have is I was judging a university hackathon a few years ago. And I myself had gone to hackathons when I was a student. And back then we were like, oh my gosh, look, it's the end of the weekend and our Lego robot can now bang into the side of the wall.
Starting point is 00:21:30 Great. That's about how much you could achieve over the course of the hackathon. And the students that I was judging, they were sending text messages based on your bank account. They were doing things that based on where you were geographically, playing different kinds of music. And it all came down to APIs. They're using the Capital One API,
Starting point is 00:21:53 the Google Maps API, the Twilio API. But that was the building blocks that the APIs provided was what made all of this possible. And so the trend of APIs taking everything over has been incredible. People can build way more quickly than before. People aren't building as much in-house. And you can spin up a whole functioning company in a week now because of everything that's available out there. And so that part people have been talking about. People say, oh, this is really
Starting point is 00:22:22 cool, et cetera, et cetera. API companies are getting a lot of funding. But the flip side is that this is now a huge pile of software that is not known end to end to any individual user. And all of the tooling that most developers are using these days are built for software that a developer has built to first party end to end. And so if you think about what people think about when they think developer tools, it's, you know, what's in my IDE? What's in how do I do my integration tests? How do I do my end to end tests? What's my build, deploy, release process like. There is some stuff people use for monitoring and observability, but often it's, you know, did I log everything I intended to log because this is my system and I'm optimizing 99th percentile tail latency. And so what I believe is
Starting point is 00:23:19 missing is a space of tools that accommodate the reality that people are building software systems that they don't have full control over. They're building software systems that are evolving in ways that they're not determining themselves. And they shouldn't be expected to be monitoring these systems end-to-end or know what it means to get low-level logs on parts of these systems.
Starting point is 00:23:45 And so in terms of the questions that I said people should be afraid of, what is my software doing? Is it doing what I'm supposed to do? We should be really afraid of APIs because we're not able to answer those questions anymore. The lack of awareness, the lack of knowledge, really. Is it common to get logs from an API you might be using? Is it these low get logs from an API you might be using? Is it common to request access or get access to some of the things to understand how your data is consumed and transposed
Starting point is 00:24:14 once it's behind the API and then comes back out the other input output? That's a great question. No, it's not common. And I don't think it should be common. But the common way of debugging is based on low-level logs. Ask anybody how they figure out an issue and they say, get the logs. How do you get the logs when you're interacting with a system you don't control? How are you supposed to debug in that case? And so I believe there needs to be a zooming out from logs of how people are thinking about their systems. When we debug these days,
Starting point is 00:24:47 we're not going and printing out the assembly code anymore. In the same way, I think logs aren't going to be the end-all be-all forever of how people are dealing with these systems. There are going to be new ways of figuring out what's going on with your systems. And we will get there, but we're just not talking about it nearly enough yet. What's that path look like from where we are? Which is really, okay, it's different if we talk first-party APIs, third-party APIs. But I mean, when I talk to GitHub API third-party,
Starting point is 00:25:21 I poke at a black box and I hope it returns what the docs say is going to return, right? Yeah, that's a really good question. So I think we should be thinking about all APIs more like these third party APIs. And this is a big part of why we built things the way we did at Akita. There's this illusion of control for first party components of your system that, hey, they should be documented. I should be able to talk to the person that wrote it. I should be able to fully understand what's going on there. No, none of that is true anymore. What's happening is things actually are not documented. So many of the
Starting point is 00:25:52 teams that we talk to say even their own stuff isn't documented. You're not going to get another team to document their stuff. There's more churn than ever before on software teams. So the chance that you're going to be able to talk to someone that wrote a software component is decreasing by the day. And the number of software builders is increasing, which also means the number of junior new software builders is increasing. And so we're dealing with these large populations of people who are pretty new to software development in general and the systems that they're working on. And so I think the path forward is actually doing a lot more stuff in a black box way, just like abstracting out from assembly is the only way that we're able to enable the large numbers, millions, billions of software builders that we have and are going to have today. Zooming out from low-level debugging
Starting point is 00:26:46 is how we're going to enable large-scale debugging. And so a lot of previous DevOps observability work before Akita was really about, here's how we trace everything in great detail. Here's how, if you have full control over the system and you're optimizing it, this is how you figure out what's going on. And we took the exact opposite approach. We said, look, we're going to drop an agent in. We're going to watch API traffic, anything that's observable from the outside, essentially. And our conceit is we're going to tell you as much as we can based on what we can see in a largely black box way. And so I really believe that this is a main part of what's missing going forward. Do I think it's the only thing? No.
Starting point is 00:27:34 Do I know what else needs to be there? Also, no. I think accepting black box and accepting that we're going to have to zoom out and giving up the illusion of control are going to be really important parts of the path forward. Okay, that makes some sense. So you drop an agent in, this is code that's running on my side of every API conversation, correct? As a developer. Exactly, yes. So we have an agent. So at Akita, we built the Akita agent. It's now the Postman Live Collections agent. The goal is to drop in as seamlessly as possible into your system. And so it uses what's called BPF, Berkeley Packet Filter, the agent, to watch all of your network traffic and see all of your API calls.
Starting point is 00:28:21 And then the agent does some post-processing, ships data off to the cloud. But the idea is we don't need any information from the developer if we have this agent. That's the goal. So it doesn't matter how legacy your system is. It doesn't matter how undocumented your system is. It shouldn't matter how little knowledge you might have about your system before you install the agent. The goal is for anyone to be able to install that agent and start getting insights about their system. What layer of the stack is this operating on and what is it reporting on? So am I seeing like TCP packets going back and forth? Am I seeing API calls? What exactly is manifest?
Starting point is 00:29:01 Yeah, that's a really good question. So what we watch is the network traffic we reconstruct. So we do packet reconstruction. For the networking nerds out there, we use GoPacket and Go to do the packet reconstruction. And then we infer API endpoint structure from the reconstructed packets. So I guess theoretically, we could spit out the raw API calls if people wanted to. We actually, for security purposes that are somewhat historical of reducing friction, we don't look at the payloads themselves right now. But we will infer API endpoint structure, error information, latency information, and some other information like types. We also infer data types from the reconstructed packets.
Starting point is 00:29:50 And so what we present to the user is here are your API endpoints. Here are your API endpoints with errors. Here are the ones that are slow and might have errors soon. And here are trends over time. So that's everything we had in Akita. Now we're building this up as part of the Postman Live Insights product and working with a very targeted group of alpha users to figure out our MVP on the Postman side.
Starting point is 00:30:13 Does this live in production, this agent? Or is this like in dev? Where does the agent live? The ideal place is to live in production because this agent does the best, the more traffic it sees. And what we learned is dev doesn't see very much traffic. Staging sees very little traffic for most companies. Quite less than dev. Yeah, even less than dev sometimes. And production is where the traffic and the insights come from because it's really about unknown unknowns here. And so if you're testing something in dev, you know about it probably, or someone knew about it at some point, but there's a lot of stuff in prod that
Starting point is 00:30:50 people do not know. They do not know about. Wow. Does this introduce any latency to the application? Does it spike the CPU? Like what's the resource required? What's the footprint? That's a great question. So because we use Berkeley packet filter, it is less invasive than using a proxy and some other approaches, but it requires, well, the agent needs to eat too, basically. So the agent itself requires some memory and the agent itself needs, if it has its own core to run on, it doesn't affect the latency as much. But the agent is not in the path of traffic and so shouldn't introduce overhead that way. But by contending for resources on the machine it's running on, that's where the agent affects potentially the performance. If there's enough memory allocated, the agent should be fine. So a gig or something like that? Something reasonable?
Starting point is 00:31:46 It really depends because it depends on how much traffic there is and how much processing our agent needs to do on the traffic. And so this is not super optimized yet, I'll have to admit, because in Akita we were in open beta on medium, just starting to hit large customers. And now at Postman, we're targeting small to medium companies. And this hasn't come up as an issue yet. But when the time comes,
Starting point is 00:32:18 we know there's a lot of optimization to be done. For sure. With scale, you'll have to eventually. When the agent processes information, does it write to a database? Does it do an API call itself? How does it collect this information and then store this information? That's a really good question.
Starting point is 00:32:34 So the agent batches data and then sends that data back to the cloud, to our cloud in increments. So it writes to something local and then sends later on. So the agent looks at the traffic locally, does some processing on the local side, and that's where it takes up memory and CPU.
Starting point is 00:32:53 So to be more specific, what the agent does right now is it off-use skates out payload data so that our cloud never sees that. So in order for our cloud to see that, we would need to increase our security in various ways. It's all doable. And I think it's likely we do that sometime in the next year or so.
Starting point is 00:33:12 It's just not something we've done so far. And then it also infers type information because that's something that you need while you have the payload data. And then it collects error latency data and then ships all of that obfuscated request response metadata off to the cloud in batches. What are some insights that I would gain as a developer looking at your dashboard or whatever it is you're reporting tools in order to observe? What might I find? I assume this API is slower
Starting point is 00:33:43 than you thought it would be. Seems like an obvious one, but what else? The main insight that was surprising to me, and a side note that I'll say, is that in leaving academia and getting into what do real developers need, it's just been a process of realizing that software development requires much more basic information than I think tool builders want to believe and definitely than the software developers themselves want to believe. And so the biggest insight we've really provided to teams is what are my API endpoints? And so this is the thing that very often surprises teams. They discover API endpoints they didn't know about, or they discover there are fields of those endpoints they didn't know about, or those fields are being sent data types that they didn't know about. There's often something about the APIs
Starting point is 00:34:37 themselves and or how they're used. So the data that's getting sent or which fields are actually being used, that is surprising. So I'll say that, you know, it's not quite traditional discovery, but what are my APIs and what's actually getting sent to them? That's actually the most common and basic insight. Then which endpoints are slow? People often didn't realize or, you know, which endpoints are throwing errors. So the way we get, we've gotten some of our users is they get an alert somewhere else that, hey, you have errors, but you didn't monitor the endpoint that the errors are coming from. So where our solution wins is they can install us within minutes of install,
Starting point is 00:35:19 they can start seeing this is the endpoint with errors. So where is stuff going on is something that we help with. So what do I have and where is the action that I need to be paying attention to? Are the two major classes of insight. That's super interesting. I think that's unintuitive probably to me, a developer. But it makes sense once you explain it, how some things seem so basic, and yet so many of us lack the basic necessities to do our jobs. And when you provide one back, it's just like, oh, wow, I didn't know.
Starting point is 00:35:54 Because there's always some hidden box somewhere that's talking about something else that somebody set up six months ago, and then they left, that kind of stuff. Also, I've discovered that if you read the documentation for a tool, they'll say, oh, we give you X, Y, and Z. For instance, you can get this kind of debug information from your front end, then hop on over to your back end, and then you get this thing, and then we help you correlate with that. And then there's a really big caveat, which is
Starting point is 00:36:20 if you've taken the time to set us up everywhere. And there's usually also some amount of maintenance work. Like you have, you know, every time you do this, you update your code, you do this corresponding update of your monitoring. And what's happening in the real world is that developers just don't have the bandwidth to necessarily do that. And so if you don't have fully up-to-date monitoring states, you're not actually getting everything that's on the box with your tools. And so that ties into what I've written about 99% developers and the needs of real software developers. But I really came into this not assuming that developers were doing anything, in part because I came from academia. So I was
Starting point is 00:37:12 like, who am I to know this is what, you know, my team did before or something like that. But I just kept asking developers, what's it actually like? And I realized that it's never like what they say on the box. What's up, friends? I'm here with Vijay Raji, CEO and founder of Statsig, where they help thousands of companies from startups to Fortune 500s to ship faster and smarter with a unified platform for feature flags, experimentation, and analytics. So Vijay, what's the inception story of StatSync? Why did you build this? Yeah, so StatSync started about two and a half years ago. And before that, I was at Facebook for 10 years where I saw firsthand the set of tools that people or engineers inside Facebook had access to, and this breadth and depth
Starting point is 00:38:05 of the tools that actually led to the formation of the canonical engineering culture that Facebook is famous for. And that also got me thinking about like, you know, how do you distill all of that and bring it out to everyone, if every company wants to like build that kind of an engineering culture of building and shipping things really fast, using data to make data-informed decisions, and then also informed what you need to go invest in next. And all of that was fascinating, was really, really powerful. So much so that I decided to quit Facebook and start this company. Yeah. So in the last two and a half years, we've been building those tools that are helping engineers today to build and ship new features and then roll them out. And as they're rolling it out, also understand the
Starting point is 00:38:50 impact of those features. Does it have bugs? Does it impact your customers in the way that you expected it? Or are there some side effects, unintended side effects? And knowing those things help you make your product better. It's somewhat common now to hear this train of thought where an engineer developer was at one of the big companies, Facebook, Google, Airbnb, you name it. And they get used to certain tooling on the inside. They get used to certain workflows, certain developer culture, certain ways of doing things, tooling, of course, and then they leave and they miss everything they had while at that company. And they go and they start their own company like you did. What are your thoughts on that? What are your thoughts on that kind of tech being on the inside of the big companies
Starting point is 00:39:35 and those of us out here, not in those companies without that tooling? In order to get the same level of sophistication of tools that companies like Facebook, Google, Airbnb, and Uber have, you need to invest quite a bit. You need to take some of your best engineers and then go have them go build tools like this. And not every company has the luxury to go do that, right? Because it's a pretty large investment. And so the fact that the sophistication of those tools inside these companies have advanced so much and that's like left behind most of the other companies and the tooling that they're they get access to is that's that's exactly the opportunity that i was like okay well we need to bring those sophistication outside so everybody can be you know benefiting from these okay the next step is to go to statsig.com slash changel. They're offering our fans free white glove onboarding, including migration support, in addition to 5 million free events per month.
Starting point is 00:40:35 That's massive. Test drive Statsig today at Statsig.com slash ChangeLaw. That's S-T-A-T-S-I-G.com slash ChangeLaw. The link is in the show notes. I definitely wanted to ask you about this 99% developers concept. And it kind of plays into something, Adam, that Kurt from Fly talks about. I think he calls them blue collar developers or the ones that get forgotten and left behind and that aren't targeted by a lot of the sexy startups or the big dev tools are going after
Starting point is 00:41:19 this certain group of online developers. I don't know, influencer developers, people probably, honestly, who listen to shows like the changelog, they try to keep up with what's going on and adopt new tools and stuff. There's a lot of us that don't have what some of us assume that they have, right. And so there's a whole set of people who, for who the future hasn't arrived yet, so to speak, right. And a lot of them are being ignored by tool creators. Is that a decent gist of your synopsis there? Yeah, and I would say that it's not even about the future not arriving yet. It's that some'm, there's this notion that everything
Starting point is 00:42:08 trickles down from a small set of companies that are doing best practices. And this set of companies tends to be like very large, well capitalized, very profitable companies. You know, the thing, Facebook, Amazon, Apple, Netflix, and Google being, being, you know, the thing, Facebook, Amazon, Apple, Netflix, and Google being, you know, the models of this is what needs to happen. But it's not actually trickling down and not because people are slow to adopt or because, you know, they're lazy or they just don't understand the good solutions. But if you think about it, Google has a set of constraints for their processing like no other company.
Starting point is 00:42:51 How many companies actually need to process at the rate of Google in terms of data, in terms of requests, in terms of many other things? Most websites aren't going to get that many hits, you know, in 10 years, what Google gets in a day. And also, you know, there's other things like if you're not set up that way, then it's not that you don't have the luxury of having 10 teams to work on, you know, optimizing certain things or developer productivity. You don't have the need to do that. And so it's kind of like, you know, if luxury cars were like really lightweight race cars that were actually dangerous for most people to drive, like you, like the, you know, that's not a luxury vehicle. That's just something you
Starting point is 00:43:39 don't need. And so, um, I, I, I think that, you know, a lot of the influencers talk about they tell great stories. They tell stuff that would be great for engineers starting out, like any junior engineer learning about how Dropbox did their distributed systems. That's great education for learning how to do distributed systems better. But most companies don't have problems of that scale. They don't need to solve them in the same way. And if they tried anything similar, they're just overbuilding. So there's a quote unquote common wisdom
Starting point is 00:44:16 among a lot of investors that if you saw it at Facebook or you saw it at LinkedIn and you spin it out as a company, it's gonna be successful. I think it's really worth questioning that because most companies don't have problems at that scale. They have problems at a different scale. And so if what you need, so I'm, you know, I had a really a big realization moment recently when I was talking with one of my team members and he bought a motorcycle. And in my mind, I'm like, oh my god, motorcycles are so dangerous. Why wouldn't you get a car?
Starting point is 00:44:48 He said, I live in Bangalore. You can't get anywhere with a car. Everyone rides motorcycles. It's totally different. It's the only way to get from point A to point B. I think there's a similar reaction sometimes in dev tools when it's like, oh my god, you haven't set up this kind of cluster or you haven't set it up this way. What are you doing? But at the level of requests that you actually need to serve to be profitable and to hit your targets as a company, maybe you don't need to be doing it that way. And actually doing it that way slows you down is impossible. So I think that
Starting point is 00:45:24 even calling these people blue collar workers, it's just I think most developers are not Google. I think people have written a lot of things that have the exact title. You are not Google. And that's OK. But I think we should stop having this idolization of a small set of companies that have problems that no one else actually has. People should stop feeling bad that they're not solving those problems or having those problems.
Starting point is 00:45:51 I think it's also a side note, a little bit strange that in school we're teaching people like the cutting edge of algorithms. And I think one reason people get really drawn to this is they learn an algorithms class. This is what computer science is. And then they're like, wow, Google is actually applying all of the things I learned in algorithms class to all their problems every day. We should be doing this too. But maybe actually there's other skills that should be taught too. And side note, but yeah, it's just software development is a variety of things. Most of it doesn't look like what people learn in algorithms class, and that's okay. That's reality. And it's not about catching up to the future, that this is the future. This is the present, and the future is going to be more of that. It's not necessarily writing distributed systems and assembly code that can move at the speed of light.
Starting point is 00:46:41 Yeah, I think there's two categories there. I think they're related. So for instance, what I was referring to was like, okay, Facebook publishes React. Everybody at Facebook is using React. And then everyone who's attached to that ecosystem starts to adopt React. And 80% of the web is still jQuery for many years. And slowly jQuery fades and React takes over. And so certain technologies do get distributed down over time but there's absolutely also things that
Starting point is 00:47:10 facebook and google and you know name your your big tech company publishes that are solutions to their problems and then we are out as regular joe developers grabbing looking for solutions to a problem that we have and we see a solution by a very impressive company who has very impressive engineers, and we say, ah, yes, I will adopt their solution. But their solution never solved my problem in the first place. It solved their problem, and so now I have a mismatch. So I think that's the second category that you're talking about.
Starting point is 00:47:39 It's never going to solve my problems. Yeah, there's this interesting phenomenon, which you're alluding to, which is that a lot of programming tools development does come out of these big companies because they are the only companies that can afford to have whole teams developing programming languages to make their own developers more productive. So you see really good language development and tooling development coming out of Facebook, Google, Microsoft, and Microsoft monetizes a lot of it too.
Starting point is 00:48:05 And that has to do with other stuff I've written about, about why does no one pay for that stuff and why does it have to come out of these big companies. But that doesn't mean that everything coming out of these big companies translates to other people's needs. You just made me think about a compiler I had to buy when I was in college. It's dating myself there. But back in my day like i remember my first year of school i was going to take c++ and like step one was to get the book and to go buy the compiler yeah and and it's really
Starting point is 00:48:35 interesting these days because people don't expect to pay for compilers they don't expect to pay for python but i mean dropbox is funding essentially Python development by paying a salary to the benevolent dictator of Python. And, you know, I think this is a bigger topic for another time. But if you look at the main maintainers and creators of a lot of these programming languages, they're being bankrolled by single ones of these companies. And this is in part how this culture develops around like, oh, well, Google is the force behind Go, so everything coming out of Google,
Starting point is 00:49:14 if we like Go, we must like everything else. But that's a really interesting cultural and ecosystem thing around not paying for programming languages. Yeah, and open source plays into that as well. But yeah, that's a big topic my mind is kind of raising, just thinking of all the places we could go. Let's focus back in now on APIs, because that seems to be the thing that you're most interested in, even though
Starting point is 00:49:40 lots of these topics are very interesting. So API observability, this is one of the things, at least the thesis is, this is one of the things that will take us to the future of understanding our software better and treating it like a black box because ultimately you're going to have to. Even your non-black box is going to turn black box eventually
Starting point is 00:50:02 when you switch jobs or something. It sounds like a really great way to onboard folks or to come on to a new business and say, install the agent. And now I understand really not just how it works conceptually, but how this software actually operates because I get to see it doing all of the things it does. Yeah. And what I believe is most people would benefit from having a black box analysis. The illusion of white box is an illusion most of the time. Is it called white box if it's not black? Or is it clear box where you can see inside?
Starting point is 00:50:36 I think there's gray box. Gray box. White hat, black hat, gray hat. Semi-opaque. I don't know, something like that. I was thinking about a conversation we had at strangeloop recently and it's this may be relevant directly or not and you can correct me if i'm wrong but we were standing where our booth was next to vonage and i'd kind of forgotten
Starting point is 00:50:54 about vonage and vonage they describe themselves as basically twilio and they said that some well known delivery service uses both. They use both Vonage and they also use Twilio. And it's mainly for cost purposes and latency and resiliency in their system. And the fact that they're both black boxes, they can't control the APIs they're calling. What is it like when you have that scenario? You have a company at scale using essentially a copycat of each other, but not the same software, but roughly the same function.
Starting point is 00:51:29 Is that part of that black box must have two scenario where because I can't control one, I can't observe one, and I can't tell if it's going to be down, I have to have two for failover and also potentially financial failover when one is cheaper than the other if they have sliding scales of cost. Yeah, that's a really good point. And again, I'll just say that we primarily focus on first party APIs and not third party. And so my views here are not fully expert. But I think we're seeing this a lot where people are relying on software components that they don't have control over more than ever before. So we have these new patterns of redundancy.
Starting point is 00:52:10 We have new patterns of defensive programming. And there are, you know, there's just new things that people are starting to do as a result of working with so many APIs. So we haven't really dug really deep into that yet we're still you know at a much more basic level of what we provide but definitely um you know what you're talking about really reflects like a paradigm shift and how people are developing software and um i i think that the tooling hasn't reflected this shift yet. Back to the logs. I mean, I think in that scenario, if they pay one of those companies slightly more versus just having two,
Starting point is 00:52:54 maybe it's better to have two. I don't know. For downtime purposes or just sheer scale in numbers, maybe it does make sense to have two. Always have two if you can. I wonder if logs or some sort of deeper relationship could give them more information just to have one versus two. Yeah, I think this is the business manifestation of the don't have a single point of failure. And people talk about this a lot too
Starting point is 00:53:13 with depending on APIs for AI. So people say, for instance, what if OpenAPI becomes a lot more expensive? What do I do? And so I see a lot of people having their tools depend on multiple AI APIs. I think there's a lot of unpredictability when it comes to both third party and first party APIs. Yeah, we still don't have necessarily best practices. I think the best practice is use many of them if you can. And I guess keep an eye on if anything changes with them.
Starting point is 00:53:47 Changes are a big thing that people seem to want to know about. Round robin failure, being able to choose which one to use based upon latency and other factors. Well, you mentioned Segment earlier on, as you guys are a user of Segment. And I mean, that company, which is basically the adapter pattern for your tracking scripts and interactive scripts and stuff is like evidence that the trend is more api is not less right like we have to have an actual thing that swaps in and out our connections to these things like we're just we're not saying the the that we're going to trend towards less apis like it's clearly more oh absolutely for us even um Akita, we use Segment for both Intercom
Starting point is 00:54:28 and Mixpanel because we needed to track and then we also needed to talk to our users. And I knew for the different purposes of different things we wanted to track eventually, it was only going to be more things. And in the beginning, one of our engineers flagged, hey, you know, why do we need so many different tracking platforms? But each one does a really specific thing. And so I can see, you know, for every purpose, like having something feed out, you know, then I can see actually having like a Twilio Vonage adapter at some point if there's enough of these companies. And there's some other thing that provides the same services.
Starting point is 00:55:08 Yeah, exactly. The beautiful thing about Segment is that it is tailored to a set of marketing APIs, but you don't have to worry about one being better or worse than the other. You just pipe all of your data. Yep, and toggle them on or off. Commodity. With the click of a switch.
Starting point is 00:55:27 Yeah, it really is a sweet idea. I'm curious about the acquisition process, not the business side of it necessarily, but more the product direction of Akita to Postman. I was also taught to call it Postman, so I keep calling it Postman. I know some people say Postman, some people say Postman.
Starting point is 00:55:44 So just so you know, that's why I say postman that's my that's my thing i'm sticking to it but when you were acquired was it hey come keep doing exactly what akita did but here rename it how did the product direction you know did you continue on the same path are you going down the same paths like how did the product direction change or not change? Yeah, that's a really good question. So when we were getting acquired by Postman, we actually talked to a set of companies to make sure we were exploring all of our options and really explore what are the different ways
Starting point is 00:56:19 we could fit into a company. And so for us, we were still fairly early along, like we had just launched open beta. And so we were too early to really have a company just drop us in and be like, here's our, you know, really next big product line. But there were a set of companies that were interested in taking our tech, our product vision, or both, and integrating it into their product in a way that made sense for their product. And with Postman, the conversation was, we know we want API observability. That was the direction they had already set off in, and they had set off in an SDK-based approach. They were really compelled by two things. One was our agent-based approach, which led to a very smooth onboarding, or Postman had a very specifically smooth onboarding.
Starting point is 00:57:08 Some of our users were still working on their onboardings. We got lucky in the case of their onboarding. I think the CEO had told his head of platform, if you can get onto the system within 30 minutes, we're going to consider acquiring this product. Otherwise, no go. He told me that too. We were very nervous because we're going to consider acquiring this product. Otherwise, no go. And he told me that, too. We were very nervous because we're like, all right, most of the time it's really fast.
Starting point is 00:57:30 But sometimes it's slow. Who knows? But they got in under 15. And they were able to poke around, get a bunch of stuff. And so we started a conversation from there. And our initial starting point, I would say, was actually further from the Akita product than where we've landed now. Because for them, Postman has been primarily dev time before. And so they're like, all right, we have collections.
Starting point is 00:57:56 What we announced first with the acquisition was we were going to extend collections with the agent and populate collections with new endpoints from our agent and then see where we went from there. And since then, we've ended up developing a product called Live Insights, which is now in alpha, which is here are your endpoints, here are the ones with errors, and everyone's been asking for latency. So that's something that's coming out too. And so a big part of what we've been exploring, and I'm really glad we've taken the time to do it, is if we went from essentially first principles and looked at what does it look like to build the best API observability platform for Postman users, what is it? And what are the needs we're solving? Instead of saying, hey, we were Akita. We did a bunch of stuff that worked for our users. We're just going to transfer that over. And so my first few months
Starting point is 00:58:49 were talking with our users, talking with collections users, surveying the people who had signed up for our alpha and really getting a sense of what do they need and what makes sense for us to build here. So that's the product. What about the software? Did you start over? Did you bring it all in and spruce it up? Yeah, so I mean, there's no way we could have launched anything if we had to start over. It takes too long to build this stuff. So we spent the first couple of months porting the back end.
Starting point is 00:59:19 And so a lot of what we're working on is iterating through different incarnations of the front end with our users. Did you feel like it was a success to be acquired or did you feel like there was, from a founder standpoint, was there any emotion? Obviously you chose the direction, so there's clear opt-in to the direction. But did you feel any remorse or mourning of a key to, Postman lives kind of situation? Like, how did you feel with that choice? So strangely, I felt less sadness or, you know, grieving for Akita than maybe my team did. Because I think my team was like,
Starting point is 00:59:58 oh, like, you know, so fun to be Akita. Now we're part of Postman and it's an adjustment. You know, we have a different job now. I think for me, I was just very focused on what's the best thing for the product and our users. And the minute I joined Postman, I was like, wow, we have such a bigger platform to build on top of. We have a megaphone, whereas we had a little microphone before. And we have this whole marketing team now. We have all this data to dig into. We have all these users that we can survey. There was just a lot of work to do. And so for me, we weren't done with the job at Akita.
Starting point is 01:00:34 And we're still not done with the job of defining the category of API observability at Postman. So in some sense, I think I'm an anomaly here in that I'm just like, cool, we were doing a thing. We're still doing the thing. We're not done yet. So we're just going to keep doing it. And I'm really excited about how much more resources we have now and how much bigger of a platform we have. So for me, it's really been a win so far. I think if you ask some of our team, they're like, man, we were going great. And now we had to spend like two months integrating. It feels like a step back. Although, you know, I think intellectually they know it is for the best, but we've had to slow down. We were in open beta. We're now back in early alpha with
Starting point is 01:01:20 a much smaller number of users. We're, you We're redoing all of our monitoring in the new postman system. We're redoing all of our runbooks. And we had really good ways of doing user support before where we had our whole setup, our whole data, our whole intercom automations and everything. And in some sense, we don't have some of that. But in terms of the ultimate impact that we're going to have, I think it's not hard to feel just what an opportunity it is. And I think in some sense, some founders are like, man, I'm not in control anymore or something like that. For me, I'm just like, there was so much stuff that was all on me
Starting point is 01:02:02 because I was a solo founder. So anything with the data that we have, the marketing that I now have access to. I had been trying to hire a designer for years and Postman was just like, here's a designer. She's been great to work with. But there's a lot of things that I knew was on me and would be kind of, even if we had the resources, would take a long amount of time to get right. I feel like we pushed fast forward on a lot of these things and we got a lot more non-engineering resources when we joined Postman. Resources are good. To have somebody to call upon that's like, hey, you're just there.
Starting point is 01:02:43 I didn't have to go survey and find and vet and look and scrutinize. You just gave having someone who's living, breathing your UX really just takes you to the next level really quickly. So I think there were just like a few things that I had been, I knew we were missing. I knew it was on me to build up and I knew each one was going to take a lot of time and effort. And so it's really, to me, it's really setting us up for an acceleration. So I've been really excited about it. What about the, you mentioned defining API observability. What is the maturity level of that definition or the current status quo of tooling available to API observability? Well, we were named the Gartner Cool Vendor earlier this year
Starting point is 01:03:48 in API Observability, and I would say it was before our open beta. So that gives you some idea. Not much competition, I guess. I told our team, this is a great honor, but there's a lot of work to do in the whole field if that's the case. I think there's other players in the space. Datadog acquired a company called Secret a year or two ago. There's Emosef, there's API Metrics.
Starting point is 01:04:14 I think that a lot of people know they need API observability, but the category hasn't been defined yet. People talk about category creation, category definition. We don't have to convince anybody that API observability is a thing, like this is a term, and people ask about it. Does anyone know what it is? No. If you ask 10 people on the street, they'll probably all say something slightly different. Like, what's an API? Yeah, yeah, exactly.
Starting point is 01:04:39 Depends on the street. They'll say like, AI, I've heard of that. So you want to observe the AI. People often drop the P when you talk to them about APIs. That's what I noticed. You said earlier, it may have been a Freudian slip. You said open API. I think you may have meant to say open AI, but I don't know. Oh yeah, I didn't mean it.
Starting point is 01:04:56 See, you added the P. She's all about adding the P though, man. She wants the APIs. Yeah. That's right. What's up, friends? AI continues to be integrated into every facet of our lives. And that remains true because you can now index your database with AI. You can Thank you. And that's the focus of the three-part season opener of the award-winning podcast called TraceRoute Podcast. You can listen and follow the new season of TraceRoute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. And this show is all about the humanity and the hardware that shapes our digital world.
Starting point is 01:05:56 In every episode of TraceRoute, a team of technologists seeks to untangle the complex question, who shapes the internet? Seasons one and two gave us a crucial understanding of the inner workings of technology while revealing the human element behind tech. And season three tackles not just AI questions, but also how can we use technology to preserve the earth? Who influences the technology that gets made? And what happened to the flying cars we were promised? I think it's safe to say that the future of AI
Starting point is 01:06:23 is both exciting and terrifying. So it's safe to say that the future of AI is both exciting and terrifying. So it's interesting to hear the perspectives of experts in the field. Listen and follow this new season of TraceRoute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. Do you have a demo instance or a video? I would love to see an action. I'm just now going based off your description, but I'd like to see how it works or see it working. So we're not ready to show it to the world.
Starting point is 01:06:54 Like I can show it to you guys, but... Okay. As long as I can see it, I don't care about the world. Yeah, yeah. I can show you guys. Cool. All right, let's demo off beta. Let me see what I can do here.
Starting point is 01:07:07 A few minutes later. That's cool. So because Postman already has all of these concepts inside of it in terms of the collections with the endpoints and the data and stuff, you're really kind of piggybacking that UI by building this into it by saying we're going to take the insights drawn from the agent and collect it into the cloud, and we're going to display it to you as if it was like a pre-populated
Starting point is 01:07:29 Postman collection. Yeah, yeah, absolutely. So that was one of the compelling aspects of partnering with Postman, because for us, we were just having to build up everything ourselves, which is both time-consuming and expensive. And a lot of our users were asking for integrations with something like an API platform, essentially. Always cool to see the inside.
Starting point is 01:07:51 I think you should demo AlphaSoft more often. It's just fun to see the beginnings, to see the rough spots in some ways and the thought behind just getting to their user saying, okay, can I have this? Can I have that beyond errors and how that manifests as an initial screen and what that initial screen has and how it evolves. I think it'd be a cool kind of video series, actually.
Starting point is 01:08:14 Wouldn't this be cool? Like, you know, we do fixer uppers, you know, like the before and after. Like people watch TV shows where they take a house and fix it up and there's like a project. Yeah. And then we see the end result. It'd be cool with brand new product screens and stuff. Here it is, just spitting the data out.
Starting point is 01:08:30 And then the after would be like, here's the finished, well-designed, shined up, spit-polished end result. Kind of cool. Problem is people are usually embarrassed by their in-progress works, and so they don't want to share those things. But we appreciate you showing it to at least yeah when when a user sort of this is an alpha currently right even in postman it's enough and you were in a beta scenario in akita is that right i'm trying to just map yeah so we had launched our open beta in march we got acquired in late j. And so we rewound to early alpha to give ourselves time to integrate our
Starting point is 01:09:09 backend into the postman environments and to really make sure we're building the right product on the postman side. Okay. So I just wanted to make sure I mapped that correctly. So still not GA, but people are using it. Orgs are using this. What are some of the, even in its early state, what are some of the impacts to developers having these insights, having this observability, the error even, or even just knowing where their endpoints are and what's getting the most traffic and what kind of error responses are?
Starting point is 01:09:41 Yeah, so developers are saying they're happy to get this information because they aren't getting it from anywhere else yet. I will say we just shipped the pages that I showed you, so I think it's too early to tell what the impacts are. What we do know is just populating the endpoints, they're like, the impact to us is low until you ship these next screens. And so what I showed you
Starting point is 01:10:00 actually isn't even shipped to users yet. So we showed it to them and they said this will have impact. But I think the last screen I showed you is actually in the middle of release right now. It's going through end-to-end testing or something like that. This is a very, very early demo. Can you hypothesize impact with me? Can you hypothesize some?
Starting point is 01:10:22 I'm thinking like we talked about earlier, you mentioned how often there's churn in organizations. So there's a lot of new developers coming into a team, so they're learning the system. So this is a mechanism for learning an API that they have, right? Yeah, so our target is smaller teams. So it is teams with engineers somewhere between 10 and a couple hundred, where a lot of them are moving fast, getting things off the ground. The impact that they told us that they want from this is it's easier to keep an eye on their systems. They get a central source of truth where they didn't have one before.
Starting point is 01:11:01 They can more quickly find and fix issues than they could before. That's a good impact. And so with our Akita users, for instance, we were a part of ops review for our best users. We were a source of, they had turned off their other alerts and they kept their Akita alerts on, basically. I think it's TBD. I'm trying to stay open-minded about, you know, this is actually a different user base. This is a different platform that we're becoming part of. But I learned from Akita, there's definitely a need for people to get easy to use, lightly configurable API level insights about their performance and errors. All right, last question from me at least.
Starting point is 01:11:46 You mentioned this future where we have better understanding of our software systems. They're more reliable. We can build higher quality systems. We're not afraid of our APIs anymore. We should be today, but in this future we will not be. And API observability and specifically the tools that you're building is like one thing that you said is going to help us get there. You don't know all the things that we need, but do you have any other ideas that you're not working on?
Starting point is 01:12:15 Things that would help us get there along that path? Maybe it's a good idea, maybe it's a bad idea, but it's something you've thought of that would be another thing somebody else could work on or try that would get us closer to the future that you're talking about in addition to the work that you all are doing. So something I'm really excited about is low code with APIs, because part of me is like, let's just all be really honest about what we're doing here, basically gluing together APIs. So I've been a big Zapier fan for many years now. And I'm also a really big fan of Postman's new low-code product called Flows. But as a programming languages person, it's always about if your language or your builder abstractions are at a higher level of abstraction, it's always easier to analyze what's going on. And so from my point of view, like we have to do all this stuff
Starting point is 01:13:05 with API observability right now because we have to back engineer all the API traffic and, you know, like re-piece together all of the API interactions. But if you're just straight up using a low code tool, that's just right there. And so that's something that's really interesting and compelling to me. I think that that's very clean from an abstraction standpoint and also just enables more software builders, which I think is very cool. So to me, that cleaning up. So like right now, you know, calling APIs from low level code kind of feels like you're mixing like assembly with some other stuff right now. Like you're at like a low level of abstraction. So lifting the whole abstraction layer to something that's API centric is very exciting to me. And then you would only need something like us for like the messy stuff that you customize or something.
Starting point is 01:13:58 You know what I mean? But like all the other stuff, it's cleaner to begin with. So that's something that's really exciting to me. And then there needs to be a better solution for legacy stuff. So legacy subsystems today are like toxic waste. They're just sitting there waiting for a big bug or vulnerability to really cause things to spill over. And the work we're doing is one piece of what allows people to make inroads into legacy software. I think
Starting point is 01:14:33 there's some work that Docker is doing that's really interesting, helping people containerize legacy software. So the reason I'm excited about that is if you have legacy software that's just kind of like sitting somewhere running on your own systems, like on a nonstandard tech stack, it's really hard to make sense of it. But the minute you like virtualize it, you can start poking and prodding on it, at it in a black box way like that, that supports some of the stuff we're doing, actually. So we can only watch things if they're sufficiently virtual, or, you know, like we could also, this is, this is a gray area, but we could also install our agent like on bare metal,
Starting point is 01:15:10 et cetera, et cetera. But, you know, the minute things get containerized, things are easier. So the push to containerize and standardize infrastructure, I think will help some of the legacy problem. But a lot of software tools discussions really gloss over the fact that we have growing amounts of legacy code that are never going to be part of this future that they're describing.
Starting point is 01:15:36 And what do we do with all of that code? Good point. I imagine you're using eBPF, which is, I guess, modern Linux kernels. So is that some of your, if you said like we can't get underneath or get further back than certain places, is it basically like if your machine or virtual machine or container doesn't have a modern-ish Linux kernel, then your agent doesn't work? So we're actually more flexible than this.
Starting point is 01:16:01 It's really about ease of install for users. Okay. more flexible than this. It's really about ease of install for users. So we use BPF, so we don't use any of the extensions of BPF. And so this was a conscious decision. I didn't want us to be kernel specific for exactly the reason you said. It's really, especially if we want a drop in experience, it's a lot of work to determine kernel versions and convey that to users. And what we found is we're building for a user that doesn't read. Not saying they can't read, but they're in a hurry. We don't actually expect them to read our docs.
Starting point is 01:16:39 We don't expect them to read our onboarding. We expect them to basically copy and paste commands and click buttons. And so if that's what we're working on and we want working off of, and we want them to onboard within 15 minutes, the E part of eBPF is just out of reach right now. Like we don't know how to make that easy to use yet. And similarly, we actually support raw, like bare metal installs, but we haven't figured out a way to do it. If we assume the user isn't going to read, if that makes sense. So we've set a very high bar for usability or low bar for people actually internalizing any of our product, if that makes sense. And so for me, the Docker instructions have been way easier to convey. Because here's the thing, if you're on Linux, are you on Debian? Are you on Linux versions like this and later or this and earlier? Because how BPF interacts is different,
Starting point is 01:17:39 even though it's not eBPF. And so to me, from a developer experience point of view, that's just terrible. You shouldn't have to know all these things about your system just to get started. And that's why we've stayed away from supporting, you know, every bare metal install under the sun. But also it's not, it's not, it's not just bare metal. Some of these legacy systems are on, people are migrating off delphix people are on some pretty like like if you're if you're modern up to a certain point we work but you know earlier versions of stuff just have stuff working differently and to know to have to figure out which early version you're on and like for us to extend stuff to support it
Starting point is 01:18:21 it seems like a a zone that we are not ready to go into right now i feel like an idiot because uh i didn't transpose back berkeley packet filter to bpf earlier i just was like it's a new thing i just didn't connect it back because i never expanded the acronym i've you know been aware of eppf and whatnot but i'm like i just didn't expand it to the to the full thing. So that's just people in and out there. Yeah, it's just been packets the whole time. People have been talking about eBPF.
Starting point is 01:18:52 That's hardcore. Not even the eBPF, just a straight up BPF. It's just BPF. Yeah. Yeah, very old school. No extensions here. Cool stuff, Gene. Adam, any other questions you have before we let her go?
Starting point is 01:19:04 Maybe just one more layer on no-code, low-code. Do you have any, not so much prescriptions, but pontifications of where we're going to go with this? What might happen with API developers, those who maintain APIs and the future relationship that's inevitable with low-code, no-code tooling to hackathon our way to the next startup? Yeah, I think if we look at the future of low-code and no-code, it is APIs. That's the whole reason we're able to do interesting things with low-code and no-code.
Starting point is 01:19:36 And I don't do that much hands-on coding these days, but I have so many zaps. And I guess our CEO would prefer it if I had so many postman flows. In fact, he's told the flows team, you got to really onboard Jean. She's just making more zaps every day. Um, but, um, you know, I, I think that we're, we're in a really exciting time, especially with AI stuff. So I can now log into Zapier and within 15 minutes of logging in, I've been able to make zaps for doing things like I want you to generate me a template for a weekly retro doc every week and put it in
Starting point is 01:20:10 Confluence and then message in the Slack channel and tag my team that it's up and they should fill out the retro doc. I have automated weekly messages to my team, like asking them questions. Do they respond? Not always, but you know, that part can't get automated. But there's been like pretty complex things that I've been able to automate. And compared to two years ago, it's actually crazy. So I have another automation. So I was losing track of who was on call. And I think no one really, well, okay, other people than me were tracking it, but many people maybe did not know who was on call. So again, within 10 minutes, I made a Zap that goes to PagerDuty, looks up who's on call, looks up that person in Slack, tags them, and posts it in our Slack channel every day. And so these were all things that took a lot of code to do before.
Starting point is 01:21:01 And now, you know, it's like five, 10 minutes of Zapier in large, it's APIs plus AI, because the APIs are what make it possible to get that information. Zapier has done the work of, you know, making authentication really easy. And like, I can just click a thing within a minute, I can get authenticated, it manages the tokens for me. So actually, two years ago, I had to like put in the tokens by hand, I had to put in the API call and the values by hand like pipe it through that was maybe like two hours of work now like they've set up the APIs automatically I don't really have to even know how to use Zapier anymore so I just say like hey like you know I'm just hanging out there's a thing I want to automate like some team process another thing was I want every time I add this emoji to this channel, I want like
Starting point is 01:21:45 this Jira ticket to get created. And I wanted to like do all this stuff. Like I was able to do that. It was very slow zap because it had like 10 steps. So I had to turn it off. But, you know, it was it was pretty good. I was able to do that in like 45 minutes. And it's really incredible. I think it's really APIs plus AI that have really made this easy because the API part is someone has had to do all the work of making it easy to authenticate and call and pipe the values from the APIs and get the responses back so you can pipe it again. And then the AI part has just made it so that, you know, you want to do these five things, just use these five components and you don't even have to have learned
Starting point is 01:22:25 what the components are, but it's really crazy to make these systems maintainable. Oh yeah. Like my 10 steps app. I have no clue why it stopped working. I have no clue why it was slow. I don't know how to make it faster. That's where being able to understand those systems, get observability, get some kind of, um, understandability of the underlying workings will be good. But in terms of getting off to the races, I think that even my team now knows, well, Gene, you know how you made an automation for that other thing? Well, we're having a process issue here.
Starting point is 01:22:58 Can you just make a zap or something? That is pretty cool, though, to make zaps like that. Yeah, it's really the future. Yeah, it really is. Yeah, one reason I'm really excited about it is it's not one of those unattainable features. Anyone can just get onto Zapier and do it. And I think Zapier's not going to be the only one. Postman has Flows.
Starting point is 01:23:18 They don't have as many APIs as Zapier does in there yet, which is a little ironic. But Zapier's really smooth this whole process. But I think they're at the cutting edge of something that's just going to be ubiquitous. Yeah, the whole if this, then that kind of situation too. I don't know what that platform, how closely it compares to Zapier, but I know they were closely aligned for a while.
Starting point is 01:23:41 Yeah, well, so if this, then that. I mean, they had like if the weather you know like a very fixed set of things now exactly i can connect up slack with pager duty with confluent with jira with um you know like any like data dog um we were sending like we had some pretty complex apps that were like going to octa doing some some stuff calling out to our own lambdas and you know writing metrics to mix panel and segment and so um yeah there's like it's it's all it's all apis i think there's there's just like so much of code is apis that's wild it's been so awesome like i will echo what jared said at the top of the call i've been uh you know a fan from afar and i don't
Starting point is 01:24:22 uh follow you so closely i know everything about your, but I've seen you out in the sphere and have been a fan and have been just excited to eventually have you on the pod. And here you are, and here you are, and that was awesome. Yeah, same. Yeah, it's been really fun. I've also been a fan from afar. Cool. Fan from afar.
Starting point is 01:24:39 Thank you so much. Cool. All right. Well, thank you so much for having me on. So there you have it. Treating all your APIs like black boxes and observing them via installed agents for discovery, for monitoring, for helping us build better software.
Starting point is 01:24:56 Are you sold or not so much? Do you also love Zapier? Have some amazing Zaps to share with us? Leave us a comment. There's a link in your show notes. Or write about it on your blog and send us the link. We love hearing what you all have to say in response to these conversations. Oh, and we will be at KubeCon next week.
Starting point is 01:25:16 So if you happen to be one of the 20,000-ish attendees, hit us up and let us know so we can say hi. We're changelog at changelog.social on Mastodon and changelog on Twitter. Thanks once again to our partners, fassy.com, fly.io, and of course, typesense.org. And to Breakmaster Cylinder
Starting point is 01:25:38 for hooking us up with so many amazing beats that we started producing albums and putting them out as changelog beats on spotify apple music and the rest speaking of bmc we'll be speaking to bmc on friday's changelog and friends so stay tuned all right that's it this one's done please do spread the word about the changelog if you enjoy the show and hit us up with a five-star review if you love it we'll'll drop that BMC episode in
Starting point is 01:26:05 your feed on Friday. Thank you. Bye.
