Software Huddle - From Academia to Startup Founder and Successful Exit with Jean Yang
Episode Date: February 6, 2024. Today on the show, we have the founder and CEO of Akita Software and now head of product at Postman, Dr. Professor Jean Yang. Jean has a super interesting background, a former computer science professor at Carnegie Mellon University with a focus on programming language research. She then went on to found Akita Software, which was focused on solving hard problems around the API observability space. And last year, the company was acquired by Postman. And during the interview, we covered a lot of ground talking about Jean's academic experience, motivations for starting a company, and the problem Akita set out to work on. 01:05 Intro 06:40 Software as a Social Problem 12:10 Over engineering 20:44 Motivation 25:22 The problems 32:10 Existing methods to solve 36:21 Some other similar systems 36:21 Packet to Reconstruction 39:43 Aha moments for customers 41:33 Why sell to Postman 47:23 Would you do it again? 52:03 Rapid Fire
Transcript
If you could invest in one company that's not the company you work for, who would it be?
I think Common Room.
I feel like I still believe DevTools are having a moment.
I think Dev communities are really interesting.
And tracking, not in an enterprise way, but tracking individual developer adopters of DevTools is very powerful.
And I am optimistic about the future of dev tools startups in general. So yeah, Common
Room. What tool or technology could you not live without? I like to think that I can live without
anything, but I can't live without most things. I would say it's probably my super high resolution monitor. So I think that I like,
I'm five times more productive with my monitor.
I, whenever I work without it,
I get eye strain, I get less done.
It's very great.
Yeah.
Which person influenced you the most in your career?
Hey everyone, it's Sean Falconer.
And today on the show,
we have founder and CEO of Akita Software
and now head of product
at Postman, Dr. Professor Jean Yang. Jean has a super interesting background, a former computer
science professor at Carnegie Mellon University with a focus on programming language research.
She then went on to found Akita Software, which was focused on solving hard problems around the
API observability space. And last year, the company was acquired by Postman.
And during the interview,
we covered a lot of ground
talking about Jean's academic experience,
motivations for starting a company
and the problem Akita set out to work on.
It was hard to keep everything focused
on a single topic with Jean
because she's done so many interesting things
and has lots to say.
She's a great guest, super chatty,
strong opinions, hot takes.
I really enjoyed this one.
One last thing before I kick it over to the interview. My apologies for my voice. I was
struggling through a cold during the interview, but hey, there are no sick days in podcasting.
So I soldiered on. I'm a professional, but I might not sound the way I normally do. Maybe
I sound better. I don't know. All right, let's take things over to my interview with Jean.
Jean, welcome to Software Huddle.
Thank you for having me.
Yeah, thank you so much for doing this.
So as I was preparing for this conversation, it turns out that you and I actually have quite a bit in common in terms of the start of our careers.
We both did PhDs in computer science, postdocs in medical schools, eventually became entrepreneurs and sold those companies.
Can you walk me through your past and your journey in academics to founding and growing Akita to where you're now sort of fresh off this acquisition and your position at Postman?
Yeah, no, I also looked you up and it does seem like we have a lot in common.
For me, I've always been driven by this
idea that I should be working on a problem that's important to humanity. I also, I grew up in a
software family. That's sort of what I've always known. I feel like that's where my natural skills
and experience lie. And a lot of my career has been about reconciling how do I work on software while working on something that's good for people.
And so I feel like you can sum up my career in terms of most of the time I was happy with my decision that developer tools are really important.
Software is what I believe to be the biggest social problem no one's talking about.
Everything's coming to run on software.
Software bugs are becoming huge vulnerabilities in critical infrastructure at the country level, at the everything level. And I feel like every now and then I have this interlude where I'm like,
man, am I really saving lives? And then I try to apply software
to medical things. And then I come back and I'm like, no, no, no, no, no. Dev tools, that's where
it's at. So I did my undergrad in computer science. I got my PhD at MIT in programming
languages. So with a focus on a programming model to automatically enforce
security policies. I did a brief one-year postdoc, a "pre-batical" of sorts. So I had gotten a
tenure track job at Carnegie Mellon in the programming languages group. And something
that's in fashion is to take a postdoc once you have the job just to hang out for a bit. And so
that was my, do I want to be working on software for
protein modeling to cure cancer, or am I still really committed to dev tools? And what I realized
was, no, I'm really committed to dev tools. I'm a dev tools person still. I was at Carnegie Mellon
for a couple of years. And then I was getting really into this idea that I felt like
APIs were a big part of both the problem and the cure for a lot of what I was seeing.
So I can get more into that later. But APIs weren't really a hot topic in programming
languages research, but in industry, they're all over the place.
And so I concluded the best way to work on tooling around APIs was to leave. And I ended up starting
Akita. And now I am head of product and observability at Postman working on very
similar things to what I did at Akita. Awesome. And then, you know, one of the struggles I had
when I was doing my postdoc, and I had similar motivations where I wanted to be able to use my technical skills to work on something that was going to benefit humankind, and that's what attracted me to working in bioinformatics. And it felt like I was doing something meaningful there. But I think one of the struggles I had was things just felt like they moved so slow that I wasn't actually having as much impact as I could if I actually went and started something myself.
Was that something that you kind of ran into as well?
Yes and no.
I do love the pace of things in startups.
I think it's amazing that you operate on the timescale of days and weeks rather than months and years.
And in fact, I had an academic friend who said, whoa, I'm really glad I'm in academia because
just watching how things are unfolding for you over the course of days, that's too much for me.
So the speed definitely is something interesting. I feel like being in programming and being in
systems, in academia, it's a little bit faster because you can have open source systems.
And I think the pace of publishing is slow.
But I think a lot of people didn't let that hold up the pace of dissemination or anything like that.
But there was one big bottleneck to me, which was the pace of tech transfer was extremely slow because what you had was you could
write a paper or you could do something and then you either waited for this to make its way into
a mainstream programming language or a mainstream tool that people design, which could be upwards
of 15 years. And so, you know, a lot of the new types that we saw in the Swift programming language
in the last five years, they've been around since the 70s. Or you could wait until someone at Amazon or Google picked up what you
were doing. And this is a very specific narrow subset of all the research papers. And so then
you're tied to both tech transfer curves and also big FAANG timelines.
And so for me, starting a company and now joining Postman seemed to be much faster ways
of taking my ideas and directly putting them out for developers to start picking them up.
You mentioned when you were discussing your background, how you saw software as one of
the largest social problems that exist.
Can you elaborate a little bit on that?
What specifically do you mean in terms of software being a social problem?
Yeah, that's a great question.
So I read a fantastic quote a few years ago about data, how data is like a corrosive acid where it's just sitting there. It can only implode or explode, but it's like a
ticking time bomb. And not to be super negative, because I think software is great. It's what
allows for so many things. But at the same time, the legacy systems and the legacy subsystems that are running a lot of our lives,
they are also, you know, huge, huge vulnerabilities. And so I saw this interesting
story that I didn't post for reasons that it might be very political. And I'll say it now,
you can cut this if it's, you know, not appropriate. But I saw this story yesterday about how
this Ukrainian hacker hacked the lights of a Russian apartment building to show the Ukrainian
flag. And now the apartment building owner is in big trouble. And so like these kinds of things are always top of mind. Like you're only able to
do something like that if software is running it. Well, you're only able to control lights very well
these days if software is running it. You can only hack into it if there's a vulnerability in the
software. But if we think about our entire lives from our homes to our banking systems, to our cars, to what's running all the commerce
that's producing what we are consuming, our supply chain, our social networks, these are all software.
And it's, I mean, not to take too military of an angle on it. These are like, you know, national security risks. They're also just
like, you know, our lives are just very dependent on software. And people talk about, you know,
many other things like climate, or they talk about food shortages or all these things. But my take is, hey, look, there are a lot of risks and there's
a lot of potential for control of our lives that software has created that, you know, most of us
aren't thinking about, most of us aren't talking about. Although I will say a lot of people have started talking about this in the context of AI.
I have the view that AI is not that different from other software.
We've been letting software control our lives for a very long time.
And if we weren't scared of software then, we shouldn't be as scared of software now.
But the baseline level of fear should have maybe always been higher. Yeah, I mean, I think in particular, I've talked about this in previous shows as well, like, I think technology feels somehow closer to something that
humans can do, whether it's AI or a robotic system or something like that. And it's complicated and
it's hard to explain, and people don't fully understand it, so people have more of a fearful reaction to it for some reason.
And also, I think, you know, media feeds into that to a certain degree. But I think you make
some excellent points in terms of the scale of all these different software systems. It's very
different than, you know, 40 years ago when we lived in a disconnected world. We might use software
in our day-to-day lives if we worked in an office or something like that.
But the sort of impact of a problem, which is much more constrained, is kind of the difference
between, you know, if I go to a bar in San Francisco and they ID me and I show
my driver's license, like in theory, if that person has a photographic memory or
something like that, they're going to get some level of information
about me: my address,
a little bit of my biometrics
and stuff like that,
and my date of birth.
But that's very different
than me taking a picture
of my driver's license
and posting it on Twitter,
which I would strongly advise against.
So the scale, essentially,
of these types of problems
is much, much bigger now
because we have all these connected systems as well. Yeah. And similarly, before your lights
just worked as lights, but now if they're controlled by IoT, that's something anyone
with access to that system can now do something about. Before, if you're just getting food from
the grocery store and making it or going to
a restaurant, now there's subtle things that software can influence you by if you're ordering
all your stuff via apps. And so there's sinister things. And then there's just small ways software
just changes how we live our lives. And I feel like in the research community, I worked on a lot of this
from the point of view of correctness and reliability. I had colleagues who worked on
this from the point of view of something closer to media studies or things like how social media
influences people's behaviors or elections or things like that. But there's just a lot
that software is doing these days. Yeah, absolutely. So I know that you have some strong
takes as well on, I guess, the industry obsession or focus on programmers, kind of,
or software trying to emulate what's coming
out of the big FAANG companies. Essentially, like, you know, if Google does it, then I'm
starting a company today and I need to architect the system the way that Google architects
the system. So do you think that we tend to over-engineer products by sort of over-indexing on
what we're seeing coming out of these marquee brand companies, which might
actually not be a fit for whatever we're doing or building? Yeah, absolutely. And I think
it's not necessarily people's fault. But if you look at what, well, there's two things. One is,
if you look at what computer science students are learning in school, it's closer to, you know, what the
big companies are doing. So at least when I was a professor, we were teaching things like, this is
how you scale a system a lot. Or, you know, here's how you do like crazy algorithms to get stuff like,
you know, down to this. And I think that, you know, a lot of people when they come out of school,
and they've learned all this stuff, it can be very jarring to realize, hey, exponential algorithms are not terrible most of the time.
And in very few situations in your life do tweaks on big-O complexity make a huge, huge difference, unless you're processing things at Google or Amazon scale. So I think it's not even necessarily
like a Google or Amazon scale problem. But when you learn algorithms, or when you learn data
structures, what actually feels like you're making use of these lessons is this hyperscale stuff.
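To make that concrete, here's a toy Python sketch (everything in it is invented for illustration): at the input sizes most products actually see, the "naive" quadratic version of an algorithm is effectively indistinguishable from the optimized one.

```python
# A deliberately "unoptimized" O(n^2) duplicate check versus the
# textbook O(n) version. Names and data are illustrative only.
def has_duplicates_quadratic(items):
    """Compare every pair -- quadratic time, trivially correct."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """The 'optimal' version using a set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# At five users, both are instant and give the same answer:
users = ["alice", "bob", "carol", "alice"]
assert has_duplicates_quadratic(users) == has_duplicates_linear(users)
```

For a handful of users, the quadratic one is also the version you can write correctly in five minutes, which is the trade-off being described here.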
And then I think another part of it is, if you look at the well-written blog posts out there, they often come out of the big
companies who can afford to spend time having an engineer, you know, spend, spend the month
cleaning up that blog post. They have the marketing department to, to disseminate that, or, you know,
there there's enough people with enough bandwidth to really, you know, provide polished content for the world about this is how we did
this, this is how we did that. And so I think it's not necessarily people's fault that they say,
well, so and so at Google did it this way. You know, it both resonates with their sensibilities
about this is what a cohesive case study looks like. And it also might resonate with what they
learned in school or boot
camp. And so of course, people are going to gravitate towards that. And I think that there's
also a sense of if you're not doing things at that scale, is it a blog post to scoff at? Or is it not?
And I think there hasn't been an analogous genre of, you know, I wrote this very simple exponential-time algorithm.
It took me five minutes, but look, it got me, you know, 10 million users. And I mean, I would
love to see more of those posts, or maybe not even 10 million users, but look, you know,
I'm a software developer at a mom and pop shop. Like, this is all we need. I think that that genre
of blog post isn't necessarily valued the same way, and it really isn't a thing yet. Yeah, I mean,
you're really seeing like the Instagram version of what's going on within these companies anyway,
because, as you mentioned, they're creating the very sort of polished
asset that they're going to put out there that helps with their recruiting and promoting whatever
they're doing, but it's not necessarily telling the full story of even what's going on within
the company. Do you think also that there might be a challenge in terms of, you know, people copying
things that they see without like fully understanding what they're copying and just
assuming that that's the way that they should be doing things? Yeah, so there are also great blog
posts. I think there are multiple blog posts out there if you search "You Are Not Google."
People have written blog posts about that. And I think that, again, it's partly blog posts aren't
like, oh, well, this is how much data Google processes in a day.
And so here are all of our stats for how we decided to do it this way.
And then part of it is people are just saying, hey, this is what we have to do.
I think the more senior engineers I've worked with have been experienced enough to qualify,
well, this is what we have to support.
It doesn't make sense for us to do it that way.
But I think that it is very tempting
for a lot of engineers to reach for patterns
that are well-known.
Yeah, I also think that there's a certain,
especially if you're coming out of
sort of a class of at least you could train
in a computer science program,
you might sort of fall in love with certain approaches or something like that
where, like you mentioned, I want to get the optimal big O from this thing.
Whether it makes sense for what you're trying to do is like,
oh, I actually only have five users.
I can probably just hack this thing together.
It doesn't matter whether it runs in log n or n squared or exponential at that point. Yeah, yeah, yeah. And I mean, we never
taught that in school. You know, if I look at the entire curriculum that I was familiar with,
there was never a problem that said, hey, you have five users. You have to ship this in two days.
What algorithmic shortcuts do you take to do this? You know, that just
wasn't part of any class that I took or taught. Yeah, yeah. When I talked to Joe Reis, who's
well known in the data engineering space, one of the things that we talked about was
how now people kind of fall in love with this idea, this like complicated modern data stack
where we're putting together
like hundreds of different tools
when a lot of times you use something really simple
like a Python script and an Excel spreadsheet
might actually solve the problem.
So it's kind of getting back
to this like practical computer science
versus and looking at it
in terms of what do I need to accomplish now versus, you know,
what do I need to do if I'm serving billions of people or something like that?
Yeah. Yeah. And I think it also goes back to people love to see complex setups. They love
to see kind of like the sci-fi version. And so it's really hard to say, here's my setup.
I have two tools. It takes a certain level of reputation and confidence
for the person who's posting that
not to just expect to be laughed off of the internet.
And it also takes an amount of reputation and confidence
for people not to laugh that person off the internet.
Yeah, it's hard to write like a blog post
about architecture,
which is like a super simple thing
where you just have, I don't know,
like a front-end application running in an EC2 instance
with a database on the same server
or something like that.
People are going to be like,
oh, you know, what is this?
Like, where are your, you know,
50 different AWS services and so forth.
Absolutely.
I think that, you know, a lot of the small apps I see, you're maybe using
SQLite, you start out using SQLite, then you move to Postgres. And I think there's a lot
of temptation to say like, okay, let's do something really fancy now. But, you know,
I think people feel ashamed or they just don't feel like there's an audience for people showing their super simple setups.
Or, yeah, I just think that from building a tool that's targeted towards earlier companies, from building a tool that's targeted towards companies that don't have the bandwidth or the resources to really set everything up,
there's some patterns I've seen that every time I talked about them,
people come out of the woodwork and they all say, that's my setup too.
But just nobody else talks about it, which is too bad
because I think people could really help each other out
by trading notes about their SQLite setups.
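As a hedged illustration of how simple such a setup can be, here's a minimal sketch using Python's built-in sqlite3 module; the table, columns, and data are all invented for the example.

```python
import sqlite3

# A "boring" persistence layer: one file, one table, standard library only.
# An in-memory database is used here; in practice you'd pass a file path
# like "app.db" to get durability.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```

Nothing fancy, no server to run, and it's often enough until the day you actually need Postgres.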
Yeah. All right.
Well, for those listening, you heard it here.
Share your setup, regardless of whether you think it's too simple or not.
Yeah, yeah.
I think people are just like, well, I'm not at scale yet.
And it's like, come on, most of us are never going to be at that scale.
It's fine.
It's really helpful to people to share where you are now.
Absolutely. So I want to talk a little bit about Akita
and your time and motivation to become an entrepreneur.
So maybe we start with motivation. You talked about wanting to work on
things that do good in the world and have
impact. But there's lots of ways of doing that. Why become an entrepreneur?
Why take on that headache?
There's a lot of problems beyond just having impact that come along with being the founder of a company.
Yeah, starting a company was not my first choice of how to go about this problem.
But I had always been prepared to start a company if I needed to. So, you know, before starting Akita,
I had done the requisite entrepreneur-curious things. Like, I took the entrepreneurship
class when I was in grad school. I'd spent a year thinking about starting a different
company with a friend when I was in grad school; she was in business school,
and I learned a lot from doing
that. I met a lot of other entrepreneurs. That ultimately didn't go anywhere, but I got a sense
of this is what it would take to start a company. And then I also started an accelerator in
collaboration with Highland Capital in 2015 with a friend. That friend went on to become a proper VC for a few years,
but it was mostly a project where he had this idea for the accelerator
and I felt that he needed someone to help him do it.
And I was curious to learn a little more about it too.
It's called Cybersecurity Factory.
There's like a couple dozen companies that have gone through it,
including the early stages of Akita.
And so prior to starting Akita, I had a decent idea of what it would take to start a company.
I didn't really want to do entrepreneurship for the sake of entrepreneurship.
That's what I learned from the year I spent
thinking about starting a company in grad school. I realized that I'm pretty tied to the technical skills that I have, the technical
perspective I have, the technical insights I have. When you spend over a decade developing that,
it's, first of all, it's a shame to throw that away. But also that's just what you live and
breathe. You know, like it's just, it's exciting to be, it's exciting to have insights about something. And I realized that
that's, that's just the things I have insights about. Although that said, the kinds of technical
insights I had around developer tools and programming languages, those usually get put
to use on platform teams or internal tools teams or programming language research
teams within much bigger companies. And so at the time, I was thinking about I am interested
in working on API-related problems outside of academia. The first thing I did was go to my
friends at Google, go to my friends at Facebook, see what they were working on, and see what kind
of match there was. And what I realized was Google and Facebook are working on Google- and Facebook-scale problems.
And so most of the super technical problems that Google and Facebook work on, and even Amazon,
I had my friends at Amazon as well, they were like super large scale scaling problems. I've
never been a scaling person. I've always been more of a, like, here's an innovative programming model, or did you think about the world this way kind of person.
And I felt like, you know, like putting me on a scaling problem wasn't the right fit at the time.
What I really wanted to do was think about there's so many APIs, how can we better understand
systems with heterogeneous APIs? How can we better incorporate legacy systems into what we're doing?
And doing this as a startup would be harder because it's kind of like a multi-year effort,
like multi-decade effort in some ways. But it would give me the time to really test out these
ideas with smaller customers than all of Amazon or all of Facebook,
even one team within Amazon, which is needing to serve requests at the scale of Amazon,
if that makes sense. And so once I realized that, well, I looked down the path of starting a
company and most startups fail, most dev tool startups definitely fail. But I was okay with the fact that even if we were independent for a few years, I could work out some of these ideas enough to carve out a niche so that we could continue the vision as part of a bigger company.
And that's effectively what's happening now at Postman. So when you were starting, you mentioned you're focused on trying to figure out
a way to understand what's happening with all these different APIs and heterogeneous APIs.
So what was the specific problem that you wanted to try to address and solve for?
Yeah. So here's what I noticed when I talked to every single software team. So when I was a
professor, I spent a lot of time just visiting industry teams,
talking with people I met when I spoke at industry conferences. And what struck me was over and over again, they'd say, Jean, the ideas you talk about sound fantastic. If we lived in an ideal world,
that would be great. But here's what we're dealing with. And even my advisor once joked that some of
the techniques that I was talking about were like,
if you went to the doctor and the doctor said, well, you kind of have some problems,
but if you just ate healthy since the day you were born, you wouldn't be having those problems
right now. It doesn't help at that point. And so there are a few specific topics that I was
interested in.
One was that I was working on programming language design and program analysis, like static analysis, dynamic analysis.
And all of that was about, you know, how do we have airtight guarantees on languages at the application layer?
And what I realized was software was so much bigger than a single application layer.
It was, you know, the application layer and then everything below.
And it was the application layer and everything you're calling across the network.
And so this is where API calls really came into my consciousness.
And in fact, one of the last papers I wrote in academia was about how to enforce security
policies across REST APIs.
Because there are a few things motivating that paper that I can get
into at some point. But what I realized from this paper was that REST APIs were very simple.
REST APIs were kind of a boundary that you could start enforcing things. And it was often
much simpler to look at what's happening at the REST API layer, or, you know, it doesn't have to
be REST, it could be gRPC, it could be GraphQL, it could be something else. But it's often much
simpler and impactful to look at what's happening at the API layer, instead of every single line of
code. Because what I was running into was, you know, I had drunk the Kool-Aid of if you use
types everywhere, it's great. And, you know, anytime you interface with an untyped language, what happens?
Or if you use this nice, safe language, it's great.
But anytime you make an API call, what happens?
And so I felt like, if what I was interested in was software reliability and
getting a handle on this is what's going on with our software systems,
this is how we have cross-system guarantees, then moving up a level and operating at the REST API layer was going to be more impactful.
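To sketch what "operating at the API layer" can look like in practice — this is a hypothetical illustration, not Akita's actual implementation — here's a tiny WSGI middleware that records method, path, status, and latency at the REST boundary instead of instrumenting every line of code:

```python
import time

# Hypothetical sketch: observe traffic at the API boundary.
class ApiObserver:
    def __init__(self, app):
        self.app = app
        self.calls = []  # (method, path, status, seconds)

    def __call__(self, environ, start_response):
        captured = {}

        def recording_start_response(status, headers, exc_info=None):
            captured["status"] = status  # intercept the response status
            return start_response(status, headers, exc_info)

        start = time.perf_counter()
        body = self.app(environ, recording_start_response)
        self.calls.append((
            environ.get("REQUEST_METHOD"),
            environ.get("PATH_INFO"),
            captured.get("status"),
            time.perf_counter() - start,
        ))
        return body

# A trivial app to wrap, purely for illustration:
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

observed = ApiObserver(app)
observed({"REQUEST_METHOD": "GET", "PATH_INFO": "/users"},
         lambda status, headers, exc_info=None: None)
print(observed.calls[0][:3])  # ('GET', '/users', '200 OK')
```

The point of the sketch: everything it records is visible at the boundary, without touching the application's internals.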
So that was the first part of it. And the second part of it was I had been living in the world of
airtight guarantees. I want to be able to prove this bound on the number of times I access this memory resource,
or I want to prove that the memory never gets touched in this way. And I sort of inverted
how I felt about software systems. So previously, it was about, I know exactly what guarantee I want
to enforce. And every tool that I'm working on, every action that I'm working toward is in the name of enforcing guarantees.
What I realized was people actually don't know a lot of the emergent behaviors that their system
is about to do. And a lot of making sure your software is making sense is about eyeballing
if the emergent behaviors are okay. If those emergent behaviors are manifesting themselves in the form of log lines, that's a little hard. And so one of my goals was to give people
a way to understand what their software was doing at a higher level than logs and based on things
they didn't necessarily think to log. And again, watching software at the API layer was very
appealing to me there.
So going back to your question of what problem was I trying to solve, there was this idea that
we don't have a handle on what our software systems are doing. We're not able to guarantee
or have some sense of confidence that the software is doing what it's supposed to.
But there's a bigger problem, which is what is the software supposed to do has become more unclear.
So you talked about these like emergent behaviors. So can you give an example of what you mean by an emergent behavior in a software system?
Yeah, a very simple one is permission systems. So you might think that a lot of permission
systems are very cut and dry. Like Alice can see this, Bob cannot. And this is because
Alice is my friend and Bob is not.
And if you talk to people responsible for enforcing these permission systems in practice,
there are much more complex interactions. So if we think about just Facebook or Instagram,
there's friend relationships, and there's friends of friends, and there's pages,
there's all kinds of different components. So what I actually am and am not allowed to see depends on how the components are interacting,
how permissions got implemented across these components.
So if I just wanted to prove,
my friends can always see this and other people can't,
that might not actually be provable
because there's like a thousand
exceptions, if that makes sense. And so this is actually the example that made me realize, hey,
like any property that you might care about, about a software system, it's not so cut and dried.
It's actually super implementation based. And you kind of have to have some way of looking at what's actually going on.
And so Mark on my team said he once saw a talk by someone who said that at a very major
retail company, the way the software team understood the permissions on their site was they just
went and documented: these are all the cases where people can see different things.
So that's one example.
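A toy sketch of that kind of emergent permission behavior (all names and rules here are invented): once several components each contribute rules, "my friends can always see my stuff" stops being cleanly provable, because exceptions win.

```python
# Three interacting "components", each with its own rule:
FRIENDS = {("alice", "bob")}                  # friend relationships
BLOCKED = {("bob", "alice_vacation_album")}   # per-resource block list
PAGE_POSTS = {"alice_fan_page"}               # pages are public

def can_view(viewer, owner, resource):
    if resource in PAGE_POSTS:
        return True                           # page component: public to everyone
    if (viewer, resource) in BLOCKED:
        return False                          # block-list component overrides friendship
    return (owner, viewer) in FRIENDS or (viewer, owner) in FRIENDS

# Bob is Alice's friend, yet cannot see one specific album:
assert can_view("bob", "alice", "alice_photos") is True
assert can_view("bob", "alice", "alice_vacation_album") is False
```

The property "friends can see everything" is false here, not by design but as a consequence of how the components compose, which is the emergent behavior being described.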
Another example is I think a lot of teams, if you ask them up front, like, how many errors do you tolerate on this endpoint?
Or, you know, like, how many, like, timeouts do you tolerate?
It's not zero.
But the actual number, like, there's no, like no platonic number of errors you tolerate. It's
really like, what was it like last week? Was everything roughly okay? Were people able to
refresh and things kind of worked out? And so I think this model, that software rules can all be derived philosophically, is becoming increasingly less of a fit with reality.
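That "what was it like last week" style of reasoning can be sketched as code. This is a hypothetical illustration, not anything Akita shipped: instead of a fixed platonic error threshold, you compare this week's error rate to last week's with some slack:

```python
# Hypothetical sketch: flag an endpoint only when this week's error rate
# drifts well past last week's, instead of using a fixed threshold.

def error_rate(errors, total):
    """Fraction of requests that errored; 0.0 when there was no traffic."""
    return errors / total if total else 0.0

def looks_wrong(last_week, this_week, slack=2.0, floor=0.01):
    """True if this week's error rate exceeds both an absolute floor
    and `slack` times last week's rate (the 'roughly okay' band)."""
    baseline = error_rate(*last_week)
    current = error_rate(*this_week)
    return current > floor and current > slack * baseline

# 1% errors last week, 1.2% this week: roughly okay, don't page anyone.
print(looks_wrong((100, 10_000), (120, 10_000)))  # False
# 1% last week, 5% this week: that is worth a look.
print(looks_wrong((100, 10_000), (500, 10_000)))  # True
```

The `slack` and `floor` values are made-up knobs; the point is that "acceptable" is defined relative to recent history, not derived from first principles.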
So how are companies, you know, outside of using something like what you were providing at Akita, or what you're providing there now, how are they traditionally trying to solve these different problems? Is it just sort of combing through log files, if they're doing anything at all?
So today, if you want to make sure your system is generally reliable and robust, and you're a more mature company, you would get something like Datadog, you'd get something like New Relic. You would spend a bunch of time setting it up, getting graphs, and doing what I was describing: eyeballing things week over week. When there's an incident, making sure you look at the right graphs. There was a period of time I was asking people to walk me through their Grafana
the right graphs. There was a period of time I was asking people to walk me through their Grafana
setups. And one guy showed me his 200 Grafana dashboards. I'm sure it takes non-trivial time
to set up 200 Grafana dashboards.
But I think you can take the time and put together enough tools to eyeball it.
What we were building with Akita, and what we're building with Postman Live Insights now, the goal is to give people an understanding of their production system in a box. For dummies, in a box.
Within 15 minutes, you get to see something about your system.
And I know this sounds vague, but what people are doing now is there's stuff they care about
in their system.
This actually differs from company to company.
There's a set of endpoints.
They sort of eyeball them week over week and see like, is anything wrong? And then if something's wrong in your system, you
want to look at, you know, a bunch of other graphs to see what might be causing things to go wrong.
And so I would say we don't call what we're doing alerting or monitoring, because our top value is not providing the alerts themselves,
but it's really allowing people to see what's going on with their system.
And so in DevOps, observability serves this function.
A lot of people associate observability with very specific functions
like logs, traces, and metrics.
What we're doing is spiritually observability
because we're giving people this understanding,
but it's at the API level,
which is why we call it API observability.
Okay.
And then in terms of how it works, is this similar to a lot of modern logging systems, where there's essentially an agent model sitting on these production systems that's pulling in and monitoring what's happening at the API level? Yeah, yeah, exactly. So we modeled our agent insertion
very similarly to Datadog. The big difference between us and some other systems is our agent
requires no code annotations. That was really important to us because in order to get into a
legacy subsystem or code that someone
doesn't necessarily have a lot of ability or bandwidth to change, we want to be able to drop
in. So we are based on something called BPF, Berkeley Packet Filter. We do passive traffic
sniffing essentially. And we're not the only company that uses BPF or eBPF, but we're the only company I know of that's put in a lot of work to make this data accessible.
So there are a lot of companies out there with an agent. Many of those will require some amount of code changes from the developer. Then of those companies that don't, like us, that are using something like traffic sniffing, almost every other company out there is really focused on: okay, this is some really high-tech, really high-cardinality, detailed data.
What do you, the expert programmer, want to do about it?
And we flipped that notion on its head and we said, look, this is an awesome technology, being able to watch systems without knowing anything about them. The challenge we set out
for ourselves was, can we actually tell people stuff about their systems that are interesting
that they didn't know without them giving us any information about their system besides installing?
So how do you actually go from essentially packets to reconstructing what's happening at the API layer?
Yeah, so we use GoPacket. It's pretty standard packet reconstruction there.
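For a sense of what that reconstruction step involves, here's a toy HTTP/1.x parser over a reassembled TCP payload. The real pipeline uses GoPacket and handles stream reassembly, chunking, and much more; this sketch only shows the basic shape, and all names in it are illustrative:

```python
# Toy illustration: once a TCP stream is reassembled, an HTTP/1.x request
# is just bytes, with a line-delimited request line and headers.

def parse_request(payload: bytes) -> dict:
    """Extract method, path, and headers from a raw HTTP/1.x request."""
    head, _, _body = payload.partition(b"\r\n\r\n")
    lines = head.decode("latin-1").split("\r\n")
    method, path, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"method": method, "path": path, "headers": headers}

raw = (b"GET /users/42/items HTTP/1.1\r\n"
       b"Host: api.example.com\r\n"
       b"Accept: application/json\r\n\r\n")
req = parse_request(raw)
print(req["method"], req["path"])  # GET /users/42/items
```

Getting from here to something useful is the hard part; parsing the bytes is, as she says, pretty standard.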
I would say the more interesting piece of that is how do we go from the reconstructed packets to actual API endpoints. And there we have heuristic algorithms that we worked on for years that look at individual
API calls and consolidate all the ones that are like users/0, users/1. That's actually the same user path. And so instead of saying, hey, you have 3 million different endpoints that are like user/1/item/21, or, you know, item/2/thing/4, it's like, all right, it's the user variable and then the item variable and then the thing variable. To people who aren't using APIs a lot, that sounds very boring and low level. But being able to collapse 3 million calls into here are your 10,000 endpoints, or 1,000 endpoints, that actually turned out to be the biggest win. What we're really doing this for is inferring properties and patterns
that we see about APIs.
And we've essentially stopped there, because just inferring the endpoints alone tells people something they don't know about their system, like 90% of the time. Which is, hey, did you know you have this endpoint in your system and you weren't thinking about it?
And doing it with some level of fidelity
has turned out to actually be super hard.
It's still a work in progress for us to get the long tail of that.
And even maybe the middle tail of that.
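The consolidation idea can be sketched with a deliberately crude heuristic: treat path segments that look like IDs as parameters, so concrete URLs collapse into templates. Akita's actual inference took years of refinement and handles a far longer tail; this is just the shape of it:

```python
import re
from collections import Counter

# Crude heuristic: a path segment that is numeric or UUID-like is probably
# a path parameter, not a distinct endpoint.
ID_LIKE = re.compile(
    r"^(\d+|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})$",
    re.IGNORECASE,
)

def to_template(path: str) -> str:
    segments = path.strip("/").split("/")
    return "/" + "/".join(
        "{id}" if ID_LIKE.match(s) else s for s in segments
    )

def consolidate(paths):
    """Collapse observed concrete paths into endpoint templates."""
    return Counter(to_template(p) for p in paths)

calls = ["/users/0", "/users/1", "/users/1/items/21", "/users/2/items/4"]
print(dict(consolidate(calls)))
# {'/users/{id}': 2, '/users/{id}/items/{id}': 2}
```

Four observed calls become two endpoint templates; at production scale, that's millions of calls collapsing into thousands of endpoints. The long tail is exactly the paths where "looks like an ID" is not this clean.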
So essentially you're reconstructing the endpoints, and you're able to collapse them based on these patterns. So you're not reconstructing the body of the payload, just essentially the URL structure.
Yeah, just the endpoints.
So our agent also pulls some information out of the body.
We have a type inferencer.
I will say that I feel like people are always saying, oh, that's really cool, that's really cute.
But I don't know that anyone looks at the types.
The structure of the endpoint seems to be the most useful.
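A type inferencer in this spirit might walk observed JSON bodies and record a schema-like summary per field. This is purely illustrative, not Akita's implementation:

```python
def infer_type(value) -> str:
    """Name the JSON type of one observed value."""
    if isinstance(value, bool):      # check bool first: bool subclasses int
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    if isinstance(value, str):
        return "string"
    if value is None:
        return "null"
    if isinstance(value, list):
        return "array"
    return "object"

def infer_schema(body: dict) -> dict:
    """Map each top-level field of an observed body to its inferred type."""
    return {key: infer_type(val) for key, val in body.items()}

observed = {"id": 42, "name": "alice", "active": True, "tags": ["a", "b"]}
print(infer_schema(observed))
# {'id': 'number', 'name': 'string', 'active': 'boolean', 'tags': 'array'}
```

A real inferencer would merge schemas across many observations and recurse into nested objects; this shows only the single-body case.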
And then response code and the basic request response metadata has turned out to be way more useful than I expected starting out. And then all of this, presumably, is surfaced in a dashboard where I can analyze it and
see the breakdown of potentially errors by endpoint and so forth.
Yeah, yeah.
So in our Postman Alpha, we're looking at which dashboards do our Postman users want
to see now.
With Akita, what we had was your errors over time, your performance
over time, and the ability to slice and dice your endpoints by slowest, fastest, most used, least
used, that kind of thing. And so we're looking at what dashboards do our alpha users in our
Postman Live Insights product want to see now. What were some of the big aha moments you observed for some of the customers you worked with that were adopting the software? Were there things that they immediately recognized that they had no idea were a problem? Yeah. So I had always operated under the mantra: show users something they don't know, and show it to them as quickly as you can. And I had thought this would be something very fancy. Like,
did you know you had this pattern about your systems? And I feel like the one lesson I keep
learning is that the simplest thing is often the most useful thing. And so I'm simply showing users
their endpoints. Often, you know, what they say is, oh, I knew about these
five. Wait, what's this endpoint doing here? Or once we started adding error and latency
information, oh, I knew that these five were slow, but what's the sixth one doing being so slow?
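Surfacing that kind of insight, which endpoints are slowest or erroring most, falls out of simple aggregation over observed calls. A sketch with hypothetical data:

```python
from collections import defaultdict
from statistics import median

# Hypothetical per-call observations: (endpoint, latency_ms, status_code).
calls = [
    ("/users/{id}", 40, 200), ("/users/{id}", 55, 200),
    ("/users/{id}/items", 900, 200), ("/users/{id}/items", 950, 500),
    ("/health", 2, 200),
]

def summarize(calls):
    """Per-endpoint call count, median latency, and 5xx error rate."""
    by_endpoint = defaultdict(list)
    for endpoint, latency, status in calls:
        by_endpoint[endpoint].append((latency, status))
    summary = {}
    for endpoint, obs in by_endpoint.items():
        latencies = [lat for lat, _ in obs]
        errors = sum(1 for _, status in obs if status >= 500)
        summary[endpoint] = {
            "calls": len(obs),
            "median_ms": median(latencies),
            "error_rate": errors / len(obs),
        }
    return summary

stats = summarize(calls)
slowest = max(stats, key=lambda e: stats[e]["median_ms"])
print(slowest)  # /users/{id}/items
```

Sorting that summary by median latency, call count, or error rate gives the "slowest, fastest, most used" views described above.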
And so that was one big aha moment. And then another lesson I kept learning was users know best about their own systems.
So I think initially we were all about like, we're going to automate all of path parameter
inference, or we're going to automatically show people like here are the top five things
you care about.
And what we realized was it's different for different users. Our users are not dumb, and they're very practical. And so sometimes they care about slow, sometimes they care about this, sometimes they care about that. If you give them the right tools to search for the right part of the information that you have, that's much more useful than trying to do everything for them. And so that lesson I learned in many forms over the years.
And then last year, in the last six months or so, you were acquired by Postman. So why sell to Postman?
Yeah, that's a great question.
So I would say I'm maybe an unusual entrepreneur in that, again, I got into the whole situation not to have my own company forever, but because I felt like this problem needed to be solved, this product needed to exist. What this product looks like obviously evolved over time. Side note, we actually started out as an API security company, checking APIs for security vulnerabilities, and it evolved into a dev tool.
But to me, I was always open to the idea
that we could be better off as part of a bigger company
if simply for the reason that most dev tools live within
bigger companies. It is really hard to be a dev tool as a startup. And I'm not saying dev tool
startups don't exist. Shout out to all the dev tool startups out there. I have a lot of respect
for all of you guys. I think it's great that you're all fighting the good fight. But if you had asked me to predict what bigger company we would end up part of, I probably would have thought Microsoft.
Because I think Microsoft is one of the best dev tools companies in the world, you know.
And, you know, the biggest platform just like, I don't know, Microsoft absorbs all dev tools at some point.
But, you know, as of last year, I felt like we'd made really great independent progress in terms of building out the tech, defining the initial user base, really building something that seemed to be resonating with our early users.
What we were missing was sort of what happens after the initial experience.
So we built something, you could drop it in, within 15 minutes, we show you some stuff.
And then there was the question, and then what?
And so our early users were like, cool, we're using this in weekly ops review, but what happens when we mature a little bit? Can you build us this? Can you build us that? Do you integrate with Datadog next? And for us as a
small company, if we just integrated with Datadog, we're kind of better off just maybe being part of
them, you know? And so at that point, we were also fundraising at the
time. And so I felt it was prudent to think about, should we be part of a bigger platform? And what
kind of bigger platform should we be a part of? And so Postman had approached us previously,
Datadog, you know, the usual suspects had all talked to us before. And what became really clear was that we shared a point of view. We actually have a pretty bold stake in the ground when it comes to API all day, API all the way. And so Postman was
like, great, you know, what we can offer you is like, we have this platform of people who like
use APIs all day. They want to know everything about their APIs. And there seemed to be just a lot of places
we could collaborate there. And so it made a lot of sense. And I think that to me, what was also
compelling is Postman is a high growth startup. So it wasn't like going to a company where
the pace of development or the pace of release grinds to a halt. We're still releasing on about a weekly
cadence. We have the freedom to be an alpha. We don't have super onerous security reviews at this
point in our alpha. Postman has really gone above and beyond to make sure we can preserve a lot of
the velocity and innovation that we had as an independent startup. And that was what was promised to us.
And, you know, it's been delivering so far.
Yeah, that's awesome.
Yeah, I mean, it feels like there's a natural alignment
with the focus on APIs to a company like Postman.
And I don't know, I have no data to back this up, so I could be wildly wrong. But I feel like there's a lot of history and examples of startups that get acquired by really big entities where those companies kind of just die on the vine to some degree, because they aren't given the resources or the time of day to actually continue the work. So it looks great as an entrepreneur, but might not necessarily be a great outcome for the product, versus being acquired by a fast-growing startup where they're not going to waste time just buying something for the sake of hiding it or not doing anything with it.
Yeah, yeah, absolutely. I feel like Postman is in a situation where it was way too focused on actually growing to acquire a company for vanity reasons, or just in case, or to quiet a competitor or anything like that.
I think it was very much the CEO said, look, if you join, this is our bet in API observability.
And here's how we could work together. And that was very compelling to me because, like I said, the mission and vision
to me are more important than remaining solo or, you know, making certain amounts of money or even,
you know, like any other reason. So now that you have this experience of being an entrepreneur under your belt, and actually even having a successful exit, do you think you'd ever do it again?
I think it really depends.
I realized from this whole experience
that I'm really a DevTools person.
There's very little else in the world I want to do.
So I have, I'm friends with some other exited founders
and I'll catch up with them.
And they're like, I'm really into this space now.
And I'm just like, nope, still into DevTools,
gonna still be into DevTools.
So I think I learned a few things about DevTools.
I think once you do it once,
you either decide, oh, it's not that hard,
or you decide, oh, this is actually pretty hard.
And I think my conclusion is,
it's actually really hard to start a self-sustaining, long-term DevTools company.
But I thought this from the beginning. I think that if there is an existing company that allows
you to do the DevTools thing you want to do, you save yourself a lot of time in some ways,
because you can directly work on it. Because in terms of setting up the company, getting the right stuff built to start doing the cool dev tools stuff, it takes a while. So I also think that, even though the tide of investment has turned recently, DevTools was having a good moment in terms of getting investment, in terms of there being way more DevTools companies than before. I think it definitely is possible to start a long-lasting,
freestanding DevTools company. You know, HashiCorp is a company I super look up to.
Docker had its ups and downs, but I'm a big
fan of what they're doing these days. But I think it's really hard. I think if you're a database
company, people pay for databases. I think people's relationship with money and dev tools
is a little more fraught. So I think I would have to have a pretty good idea of like, okay,
this is why this should be an independent company and not just part of another company, to do it. I don't think I have the gene of, well... I feel like you have only so many productive working years of your life. You kind of have to think about how many years do I want to spend setting something up versus executing on something. And, yeah, I think it would really depend on whether I found the right thing to start again.
Yeah.
Yeah.
That makes sense.
I mean, I think that if your goal is to work on dev tools, and you can find a place where you can work on dev tools and have some level of creative license to do that, then you don't have to do all the other things that come with entrepreneurship: fundraising, go-to-market, and the other parts you have to think through and own that take away from the time you might spend actually building the product you care about building.
Yeah, yeah, absolutely. And that was a very compelling part of joining Postman.
I feel like if there had been more DevTools companies at this stage when I was thinking
about starting Akita, it would have been very compelling to think about joining them at the time.
To me, I feel like you start something if there's nothing out there that does the thing you want.
Well, also, I think there are multiple reasons.
Because if there's something out there that could do the thing you want,
you don't want to start a company anyway.
You're going to get crushed.
But I think you have to believe there's significant advantage
in starting something to do it.
I guess you could start something that already exists, provided you've got a unique take on it, or there's existing pain with the thing that people are using.
Yeah, yeah, yeah, yeah.
Or, you know, like, I guess you feel like you're uniquely positioned to do
something because I think that also once you start thinking through, like, this
is what my company should do, you think through whether Datadog could spin up a
team to do this or, you know, if so-and-so could spin up a team to do it, it's kind of not the best use of your time either.
And so, yeah, I think the reasons to start a company, for me, are not different from the reasons someone should start a company in general. But I think that when I first started Akita,
I was more optimistic about what can't be done at a different company. And once I started really
thinking through strategy and things like that, I sort of realized like, hey, actually, you have
to be interested in like a pretty narrow set of things for this to work. So as we start to wrap up,
I want to go quickfire here. So I'm going to ask
you some questions. This is something new we're trying. So don't think too hard. Just give me the
first answer that comes into your mind. So first off, if you could master one skill you don't have
right now, what would it be? Being a better sleeper. I feel like sleep is the foundation
of all the rest of life. I'm a terrible sleeper. I have to work so hard at it. If I could sleep
better, more easily,
go to sleep faster, stay asleep,
like my life would be so much easier.
If you could invest in one company
that's not the company you work for,
who would it be?
I think Common Room.
I feel like I still believe
DevTools are having a moment.
I think Dev communities are really interesting
and tracking not in an enterprise way,
but tracking individual developer adopters of
dev tools is very powerful.
And I am optimistic about the future of dev tools, startups in general.
So yeah, common room.
What tool or technology could you not live without?
I like to think that I can live without anything, but I can't live without most things. I would say it's probably
my super high resolution monitor. So I think that I am five times more productive with my monitor.
Whenever I work without it, I get eye strain, I get less done. It's very great. Yeah.
Which person influenced you the most in your
career? I would say probably my undergrad mentor, a person named Margo Seltzer.
Well, she taught me what imposter syndrome was. She taught me not to make a big deal out of making mistakes. And she taught me to keep going about
stuff. And it was Margo and another undergrad professor I had, Radhika Nagpal, who told me to
start finishing things. Because she noticed that I sort of assumed that I wasn't smart enough to
finish some of the big projects I took on. And she observed that, no, you're just actually not
finishing them. Like being smart has nothing to do with it. You're just not doing them.
I think it was the combination of those two things. Because somehow I had this notion that, you know, I'm not a genius, so if I work on something for a day and it doesn't work, I should just give up. Or I would say, oh, I'm not smart enough; if I ask a question and it was the wrong question, I should just stop asking questions.
And they both took me aside. And they were like, Who do you think you are?
You need to actually put work in. You need to be wrong. Sometimes you need to,
you know, do stuff that takes longer than a day. And I think teaching me to actually put the work
in, I think sometimes lack of confidence is an excuse just to do nothing, which I was clearly
doing sometimes. And I think they taught me not to do that slash called me out.
What's the probability that AI equals doom for the human race?
The people who worry that AI can take everything over,
that's so much hubris. That's saying that humans can create something so much bigger than ourselves
that kills us. And here's what I'm seeing. So I have this blog post I wrote a few years ago about
what I call the spam filter apocalypse. So we have this friend who didn't come to this event
we organized because we got spam filtered in her email. And then I talked to some other people, and I realized, hey, spam filters are actually really bad. I talked to some people who were starting a startup, and they actually almost died because no one was receiving their emails, because they were getting spam filtered. And I was like, oh my god, spam filters are so powerful. Not because they're
smart, like they're Bayesian filters. It's, you know, like one of like the dumbest AIs you could
have out there. They're powerful because we allow them to be a
layer between us and our email. And so I think it's one of those things. I feel like in life, people say people can only make you feel bad if you let yourself feel bad. I feel like AI can only take stuff over if we let it take over. Because people are like, oh my god, these machines, they're going to attack. You can turn off your computer. The only reason your computer is not turning off is because you are letting it run. We're letting software have all this power that it doesn't need to have, and we're not talking about the power it's having.
And maybe this is what people mean, but I don't think so. I feel like people are actually
imagining like the AI like jumps out of the monitor or something like that. Like that,
like I'm like 0% afraid of that. But you know, I read a really good book a few years ago called The Fires by this guy named
Joe Flood, very ironically. And it was about how a very progressive government in New York City,
because of algorithms by the Rand Corporation, actually ended up allowing the destruction by fire
of a bunch of Black and Puerto Rican neighborhoods in New York. And the danger there was, you know, they trusted an algorithm.
The algorithm told them to under-allocate firefighting resources and, you know, maintenance
of infrastructure in those neighborhoods.
And it led to outcomes that were against what they stood for as a progressive government.
And so I feel like those are very powerful cases.
But that's not AI, man. That's algorithms from the 70s. And sure, you could argue that's AI, and spam filters are also AI. But I think it's very different from how people are thinking about the doom today.
Awesome. Anything else you'd like to share? And how can people reach out to you? So I'm on X a lot. I'm Jean
Kassour on X and
that's pretty much the only way.
I mean, email kind of works, but
I get a lot of spam.
Maybe due to not trusting spam filters.
Well, Jean, thanks so much for coming on
the Software Huddle. I think we covered
the gamut of topics today.
I'd love to have you back for another round.
I'm sure there's a ton of other things we could get into.
Cool. Yeah, thanks. This was super fun.
Thank you.
Cheers.