The Data Stack Show - 13: Building Open Source Products at Scale with Reza Shafii from Kong Inc.
Episode Date: November 6, 2020
This week on The Data Stack Show, Reza Shafii, vice president of products at Kong Inc., discusses open source projects and products with Kostas and Eric. Kong is a cloud connectivity company best known for being the creator and primary supporter of Kong, the most widely adopted open-source microservice API gateway.
Highlights from this week's episode include:
Being a self-proclaimed middleware geek (2:17)
Middleware explained (5:41)
Kong as a company, open source project, and a brand (10:44)
Drawing the lines between the open source and proprietary parts of a SaaS platform (24:22)
Dealing with the extra friction in adopting middleware from the bottom up (33:02)
The Data Stack Show is a weekly podcast powered by RudderStack. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Transcript
Welcome back to the Data Stack Show.
We have a really incredible guest for you today.
Little background, Kong makes all sorts of different products,
but a lot of products for APIs,
and they really exemplify sort of the API-first world
that we live in today.
Reza from Kong is joining us and we have so many
interesting technical questions we want to ask him, but his experience with open source, I think
is most interesting to us as RudderStack because we're an open source company as well. So Kostas,
I think you're going to drive this conversation because you're our head of product. What
questions do you have brewing in your mind for someone who has so much experience,
both on the technical side, but also building companies in the open source world?
Yeah, absolutely.
I mean, Reza is a veteran in the space, I would say.
You can see from his resume that he's a person who went through companies like CoreOS and Red Hat,
which are like the definition of companies
that are working in the open-source space,
with huge, huge experience with anything
that has to do with middlewares
and the API economy in general and products around that.
And of course, open-source, as we said.
So I think we are going to have
a very interesting conversation with him
on the intersection of building products
on top of open source projects. And I think there will be a lot of lessons to learn from his experience.
Hopefully we'll also get a bit deeper on what Kong is doing and what their next plans are.
Pretty recently they also had their virtual conference, so I'm pretty sure that he might have some insights to share
from that. And super excited to chat with Reza and see what his opinion is and share some of his
crazy knowledge on building tech products on top of open source projects.
Great. Well, let's dive in and I will try not to get in the way.
Hi, Reza. Welcome to the podcast recording today of the Data Stack Show. I'm very happy to have
you here. Would you like to start by sharing with us some background information about yourself and
also Kong? Hi Kostas. Yeah, it's great to be here. So yeah, background information about me.
I call myself a middleware geek, which is a weird thing to say.
But over the last, God, I don't even want to say it, but 20-plus years, I guess, that
I've been in my professional career, at the onset I was thrown into the world of distributed
computing, back in the CORBA world.
And I started kind of falling in love with this field, starting with the products we were
using back then, IONA and so on.
And so, you know, went on to actually join my dream company, which at the time was BEA Systems, which was acquired by Oracle, and then left for MuleSoft.
I was the first product manager hired at MuleSoft, helped the team there, worked together to bring on the Anypoint platform and their API offering.
So I also fell in love with the whole API space back in that role.
Then had the fortune of joining CoreOS as their VP of product.
That was an amazing experience.
Just great people to work with over there and some really amazing technology that they created.
As you know, CoreOS got acquired by Red Hat and I had the opportunity to lead their platform services on OpenShift, which is everything above Kubernetes on OpenShift.
Things like Knative, their PaaS layer, their Istio layer and so on.
And then here I am at Kong. I've been at Kong a little bit over a year.
It's been an amazing journey so far, and I'm really looking forward to the things we're doing.
Oh, that's very interesting, actually. I find it extremely interesting
that you said you fell in love with middleware because of CORBA. Most of the people
that I hear talking about it usually have horror stories around CORBA. It's funny because, first of all, when I say I love middleware and
I'm passionate about middleware, everyone looks at me funny most of the time because it's a weird
thing to kind of be excited about and fall in love with. But back in the CORBA days, we used to have
this thing called IDL. I think it's called Interface Definition Language.
And the very first project that I got as a full-time person,
someone gave me an IDL for an interface and said,
look, this is the interface you're programming to.
Go write the C++ code.
And then the UI side, which was a Swing-based Java UI,
is going to consume this.
You just write the back
end. What I didn't realize at the time was that we were basically doing spec-based API design,
right? Spec-first API design, but we're doing it with IDL. So these concepts have been around for
a long time, right? Yeah, absolutely. I don't have experience with CORBA. I mean, I know the acronym.
I started working, I think the first similar technology
that I interacted with was RMI.
Then I've done some work with SOAP.
And yeah, actually, it's very interesting,
the whole evolution of middlewares.
But in your mind, in terms of the products that we see
and interact with today,
what would you put under the umbrella
of middleware?
What's the definition of middleware?
And can you share with me a couple of products
that you think are representative
of middleware?
It's interesting because the term
is no longer used in some ways
and it's kind of disappeared.
So in some ways, you know, you can't really say middleware anymore, but the concept that we used to call middleware exists.
Now, of course, back then the application server was the quintessential middleware.
You know, back when we had the WebLogic and WebSphere type application servers.
And what really it stood for in my mind is that you as an application developer,
whether you're a backend service developer or a frontend developer consuming these services, you want to focus on business logic.
And you want to focus on the capabilities of your application
that really are domain specific to the problem you're solving.
But you end up having to deal with all kinds of sort of miscellaneous problems where you have to reinvent the wheel all the time.
How do you communicate with, say, a publish-subscribe queuing mechanism in a reliable way?
How do you secure your front end? How do you talk to a
database in a transactional way? And these types of problems can be solved in a generic manner,
right? Now, back then it was the application server that did it. Nowadays, you've got a panoply of components that do it. You've got
the Kafka layer, you've got the gateway layer, you've got probably the most famous of them all
now, Kubernetes layer. And all of these tools are really there so that theoretically, at least,
you don't have to worry about how all of the sort of plumbing works.
And you can just focus on your business logic.
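To make that plumbing-versus-business-logic split concrete, here is a minimal sketch in Python. It assumes a Kafka broker reachable on localhost:9092 and the kafka-python client; the topic name and order fields are made up for illustration. The application code only states the business fact, an order was placed, while delivery, retries, and acknowledgements stay in the messaging layer.

```python
# A minimal sketch, assuming a Kafka broker on localhost:9092 and the
# kafka-python client (pip install kafka-python); topic and field names
# are hypothetical. The business code only says "an order was placed";
# delivery, retries, and partitioning are the messaging layer's job.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",   # durability handled by the client/broker, not by business code
    retries=5,    # ditto for transient failures
)

def place_order(order_id: str, amount: float) -> None:
    # Business logic would validate and persist the order here (elided),
    # then simply announce the event.
    producer.send("orders.placed", {"order_id": order_id, "amount": amount})

place_order("o-1001", 42.50)
producer.flush()  # block until the event has actually been handed off
```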
Yeah, yeah, I totally agree with that definition.
Actually, it's very interesting because you're right that, okay, the term like middleware has been used for quite a while.
And now that I remember, maybe I was abusing the term a little bit, but during my previous company, like Blendo,
we were building an ETL solution, a cloud ETL solution.
So the whole idea was we want to pull data out of cloud applications,
like Salesforce, for example,
and put this data into a cloud data warehouse like Snowflake.
And I caught myself trying to describe the product.
And after a while, I started using a lot of the term middleware for that.
And I was doing that.
I mean, it wasn't so much on the technology level, as when you were referring to Kafka and Kubernetes
and all these software components that give this kind of abstraction
where you don't have to worry about anything beyond the business logic, as you said.
But on a product level, I mean, for me, it made a lot of sense to describe
the product as middleware, mainly because it was an abstraction layer between
different services.
You had, from one side, the web applications where you want to pull the data out from,
and on the other side you had to push the data into the data warehouse where the analyst
would go to work with the data. And you didn't have to worry about like all the complexity of
implementing and taking care of this infrastructure. So somehow, in my mind at least, it made a lot of
sense to use the term middleware. I know that's a bit far away from what the people who designed CORBA
had in their mind about middleware.
But I think it made it a little bit easier
for people to understand at least
what's the point and the position of the product
compared to other products out there.
I think the term originated, I'm not sure of this,
but I think it originated because it's the thing
that sits
in between, in the middle, between your app code and the operating system. I think that's how it
started out. Instead of directly writing to the operating system, you write to this middle layer,
the middleware. And I think that's how it came about.
Yeah, makes sense. Makes sense. And of course, when was CORBA first introduced? Do you remember that?
Well, when I saw it first was in 1996, maybe 95, 96. That's when it was really popular. Everyone
wanted to be a CORBA programmer back then. Yeah, it's been a while. And I mean, okay,
I think the paradigms remain. It's just that as we add more and more abstraction layers,
how we build technology,
we just have to redefine a little bit the terms.
But at the end, the meaning remains the same.
Anyway, we repeat it all the time in our space, right?
We take the same problems and reinvent it just a little bit better.
That's true, that's true.
That was a very interesting bit.
OK, moving forward, can you share a little bit more
information with us about Kong and what the product is? I mean, as we continue we will get
deeper into the products of Kong, but can you give a high-level description of what Kong is,
how it is used, and by whom? Yeah, yeah.
So Kong is interesting because the term Kong refers to the company, to the open source
project, and it's also used as a brand for our products, right?
So it stands for many different things.
But Kong, the project is the most famous part, I think, in that it's the open source project that the team at Mashape, Augusto, Marco,
and the team created because they had a problem themselves in the service that they had called
Mashape. And they needed to expose these APIs in their API marketplace, and they needed to
manage these APIs. They needed to make sure they're rate limited, they're secure, and they needed to do it
in a performant way, right? And in a highly automated, lots of changes going on at the same
time way. And that's how Kong came about. And as luck would have it, it also came about at a time when Docker was becoming very popular.
And so it was created with that mindset, right?
Which is everything has to be programmatic.
Everything has to be declarative config driven.
Everything has to be highly dynamic.
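As a rough illustration of that programmatic, config-driven style, here is a minimal sketch against Kong's Admin API. It assumes a local Kong node with the Admin API on its default port 8001 and the Python requests library; the service name, upstream URL, and limits are made-up values, so treat it as a sketch rather than a verbatim recipe.

```python
# A minimal sketch, assuming a local Kong node with the Admin API on its
# default port (8001) and the `requests` library; names and URLs are
# hypothetical. Everything is plain HTTP, so the same calls can be driven
# from scripts or CI rather than clicked through a console.
import requests

ADMIN = "http://localhost:8001"

# 1. Register the upstream service Kong should proxy to.
requests.post(f"{ADMIN}/services",
              json={"name": "orders", "url": "http://orders.internal:8080"}).raise_for_status()

# 2. Expose it on a public path.
requests.post(f"{ADMIN}/services/orders/routes",
              json={"name": "orders-route", "paths": ["/orders"]}).raise_for_status()

# 3. Attach rate limiting with the bundled rate-limiting plugin.
requests.post(f"{ADMIN}/services/orders/plugins",
              json={"name": "rate-limiting", "config": {"minute": 60}}).raise_for_status()
```

Consumers would then call the route through Kong's proxy listener rather than hitting the upstream directly.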
And so they created this gateway layer that way. Now, of course, the gateway space has been around for a while.
You know, we can go all the way back to the Oracle API gateway and then the next generation
of API gateways with Apigee and MuleSoft.
But what Kong was, was basically the beginnings of this next generation gateway, which is
super lightweight, high performance, and declarative config driven, sort of cloud native,
right? And so that project has been and is still the nucleus of everything we do at Kong
in terms of our product strategy and everything we built on top of it. Let me pause there and see
if there's any thoughts or questions before I talk about the rest of what we're doing at Kong
and the rest of our product offering. Yeah, just like a small question.
I mean, I know what Mashape is, but I think it would be interesting to give the complete
context to everyone who will listen to our conversation.
Mashape, if I remember correctly, was more like a marketplace for APIs, right?
It was a place where you could find APIs and actually consume these APIs through
Mashape.
Is this correct?
I believe so. Now, we'll take that with a grain of salt because I wasn't there at the time.
Aghi and Marco did Mashape. But from what I understand, yes, that's what it was. Basically,
it allowed you to commercialize or productize your APIs and publish them to a marketplace.
And it sort of took care of all the things you need to do in order to have productized APIs.
Yeah, yeah.
So it also makes a lot of sense why the guys back then
had the need for something like Kong to integrate
and interoperate with all these different APIs
that had to live behind the marketplace.
So yeah.
At scale, right?
At scale.
They had to do it at scale
because you got thousands of API owners
wanting to publish their APIs on the marketplace.
So doing it at scale was key. Yeah, yeah, absolutely. That's the only thing that I wanted to make a little bit more clear, so we can have a bit
of better context around how Kong came up as a product. Yeah, let's move forward and
get into more detail around the products. I know that, I mean, if someone visits the Kong website, there is quite an extensive catalog of different products
there. Can you tell us a little bit more about these different products and how they relate to
the core Kong product that you mentioned earlier? Yeah, for sure. So the genesis of Kong the company is the Kong open source gateway project.
And of the products that Kong the company offers on top of the Kong gateway, the primary product today is Kong Enterprise.
And Kong Enterprise is basically two things.
First, it's that gateway runtime
with capabilities to operate it at scale
with higher level of security,
higher level of governance,
and also enterprise support.
But also think of it as functional modules
on top of that,
that enable more advanced end-to-end
lifecycle of service management. So being able to publish APIs to a portal, being able to check for
anomalies, being able to design APIs at the onset and publish them directly to a portal from a
developer side, things like that. So that's what sits on top of it.
That's the platform services that we have. And then there is the enterprise version of the gateway,
which we call Kong Gateway Enterprise. That's today. And by the way, we just announced at our
summit two weeks ago, the platform that we've been working on, which is called Kong Connect. And that is a new era for us.
And I can talk more about that.
But our current main product is Kong Enterprise.
That's great.
So quick question about the difference between the open source version and Kong Enterprise.
I assume that, I mean, the open source version is something that someone can go
to GitHub, clone it, install it on their own environment, run it and maintain it on their own,
right? The enterprise version is usually packaged and delivered as a cloud service,
or there are different options there? No, it's not. It's not delivered as a cloud service. So,
I mean, let's just actually
decouple ourselves from Kong in general. And I know, Kostas, you wanted to talk about the whole
open source and open source monetization motions, right? And we can talk about it in that context,
maybe in that framework, and that'll help, right? You know, at Red Hat, it was great because I learned
quite a bit around how powerful open source can be and how the whole company built on top of open source works.
Right.
Yep.
And one of the things that Paul Cormier used to say, Paul is now the CEO of Red Hat.
He was the president of the R&D operations back then. He would say, look, Red Hat is an open source project company
that builds open source products, right? And this difference between project and product is
critical, and it's a very tricky, hard one to manage, right? So you want to
enable open source projects that are thriving.
So thriving with the number of users, obviously, but thriving also in terms of contributions to it by the community.
There's win-win goodness on both sides.
Now, how do you make that a product?
Now, the playbook for that has been largely figured out, right?
The idea of how you make a product out of successful open source projects is that you, first of all, need to generally aggregate a number of different projects to bring higher value, right?
So for us, for example, Kong Enterprise includes a project called decK.
And decK enables you to do full command-line declarative config,
CI/CD maintenance of Kong clusters, right?
That is part of the project.
So when we release a Kong Enterprise version,
we test it with decK, we make sure it all works together, right?
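For a sense of what that command-line, CI/CD-friendly workflow can look like, here is a hedged sketch in Python that generates a decK-style state file. The field names follow the decK declarative format as best recalled, and the deck diff / deck sync commands mentioned in the comments are likewise an assumption to verify against the decK docs.

```python
# A minimal sketch, assuming PyYAML is installed; the field names follow the
# decK declarative format from memory, so verify them against the decK docs.
# The idea: the whole gateway configuration lives in a file that a pipeline
# can version, diff, and sync against a running cluster.
import yaml

kong_state = {
    "_format_version": "1.1",
    "services": [
        {
            "name": "orders",
            "url": "http://orders.internal:8080",
            "routes": [{"name": "orders-route", "paths": ["/orders"]}],
            "plugins": [{"name": "rate-limiting", "config": {"minute": 60}}],
        }
    ],
}

with open("kong.yaml", "w") as f:
    yaml.safe_dump(kong_state, f, sort_keys=False)

# A CI/CD step would then run something like `deck diff -s kong.yaml` to
# preview changes and `deck sync -s kong.yaml` to apply them (command names
# and flags are an assumption; check the decK CLI help).
```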
If you take a bigger project like OpenShift, right?
OpenShift is an aggregation of dozens of different projects coming together.
That aggregation and making sure all of these things work together is kind of like,
you know, a car, right? Like you could say that a car is a wheel and a chassis and, you know,
an engine and so on. You can go take all of those and build it yourself. But someone taking all that
and making it a car, there's huge value in that just by itself. Right. Yep. So that's one thing.
The other thing is providing support. Of course, that's the other part of the playbook.
You have the experts who know that project well. So when you buy the product, you get sort of this support level, with support guarantees that the company can stand behind.
Right. That's that's another big pillar of it. And then the
last pillar, if you think about it in terms of three, you got the aggregation and the testing
and bringing this together type thing. You got the support. And then you got services that,
and this is the so-called open core model, that are not going to be open source, but that provide
value on top of this open source project. And so Kong Enterprise brings together these services on top of that as well.
And those services are the portal, Vitals, which is API analytics,
Immunity, which is anomaly detection.
Those are the main services that we bring on top of the Kong open source capabilities
and that are included in Kong Enterprise. But again, all of this is
pretty standard way that open core companies work. Now, Connect is our SaaS platform that
enables you to run these runtimes, right? The Kong Gateway runtime and delivers the functionality
module on top of it as a service. The runtimes themselves, you can run it anywhere
you want yourself, so-called self-managed, but the functionality modules in Connect are going
to be delivered fully as a service, multi-tenant services. And one more thing, what's really
interesting about Connect is two things there. One is it's not just gateway, but it also supports
mesh, in that it supports Kuma mesh instances, as well as ingress controller instances, and kind of brings the connectivity
layer together.
So we're able to provide richer capabilities on top of it,
but also it's going to be fully declarative config driven and sort of a cloud
native SaaS platform, if you will,
because you can do everything through a command line declarative config
interface, just like Kubernetes. Does that make sense?
Absolutely.
Extremely interesting, actually, to hear all that
and how you move from the project to the product
and the platform at the end
and how you also implemented that on Kong.
It's extremely interesting.
Fascinating.
So you said that it's very tricky to make this differentiation between the
product and the project. OK, can you elaborate a little bit more on that? Why is it hard? And
I guess that most companies reach the open core, let's say, state as a company, but
in many cases, if not most of them, everything starts as a project, right?
You have an open source project, there might be adoption there, there is interest
from people to use it. How is this migration achieved at the end? How does the mindset in
the company change going from the open source, sorry, from the project to the
product? Does this make sense, what I'm asking? Absolutely.
Because there is a natural sort of life cycle, right, to this. You start with a highly
successful open source project and everyone is super excited because, wow, look, the project we
created is getting all these stars and it's getting all this traction. That's awesome, right?
And then you get into, okay, well, you know what,
all these people are using it. We can't just handle it with forums anymore. They want some
kind of like formal support with SLA guarantees. And we can't deliver on that unless we actually
like, you know, get paid and are able to eat ourselves. So we're going to, we're going to
create a support model around this. Right. So you start selling support, right? And then you find the sort of synergies with other projects. And so you
aggregate them and you build these sort of aggregated builds. That's when you deliver the
car instead of the wheel and the engine, right? And then you get into the points where you're
like, okay, well, now I'm starting to see patterns around what people are building on top of this themselves.
And these are usually companies and organizations, bigger ones.
So I can go and help build these and package and deliver them out of the box.
And that's where you become an open core company with proprietary capabilities on top of it, really designed for the bigger organizations, which we call enterprise.
And then finally, the last phase is when you go, well, okay,
we've done all this successfully. We can deliver a bunch of this stuff as a service, or we can help operate the runtime as a service for you. And this is where I think things actually
get less tricky when you enter that stage, because at that point, sort of the project
value is proven itself and everyone understands it and you can start
delivering value into the SaaS services and it's very clear the value that you as a company are
providing, if that makes sense. Yeah, yeah, absolutely. That's very, very interesting. How is it right
now? If I understand correctly, I mean, the way that you describe it,
for Kong at least, you are on the third phase right now.
You have gone all through these phases.
You had the beginning, the gateway that went open source.
There's a lot of traction.
And you have reached right now the point
where you can have a SaaS platform
that will deliver the value on top of that.
At this stage that you are now,
how would you describe the relationship
between the product and the project?
And when I say the project,
I mean the open source and the community over there.
And how is this managed
and also, let's say,
used and balanced by you
and the product teams at Kong?
Yeah.
So I think that relationship will probably be different
for every company, depending on the nature of what they do.
For us, there's a natural fit, right?
In that, if you think about Connect,
it's sort of the connectivity central nervous system, right?
It's the thing that has all these little sort of interceptors of data in motion
running everywhere. And it can use these interceptors to understand what's going on
within your organization. And then based on that, allow you to have, say, a single system of record
of all of your services and be able to slice and dice that within different groups and search across
them and things like that. Or be able to allow you to do sort of service map analytics across the whole thing.
That's what Connect is.
Now, it depends on these agents, on these data in motion agents, for lack of a better
term.
And so for us, those runtimes, that's where it's very easy for us to say, well, that needs to be open source.
And even the protocol between these runtimes and the platform needs to be open source. And even the
way that policies are executed on these runtimes needs to be open source. Because from our
perspective, the only way we can make sure that these runtimes are evolving the right way is by
ensuring that there is a wide
community of users using them and they're giving us feedback and they're themselves
contributing to it and enabling them. It's like the scale that we get out of having these be
open source is amazing because we can basically cluster the brain of all developers and sort of
leverage the collective insight of everybody who's working on these projects to enable to have these most powerful agents out there, right?
Now, with agents being out there,
you can run them by yourself on their own
as an open source project, and that's totally cool.
But once you connect them to Connect,
the network effect takes in,
and you're able to see as an organization now,
not just as a group or as a user
who's just focused on one product and project,
but as an organization,
which is the target of our commercial products,
you're able to see so much more
and you're able to actually get a lot of value
by seeing all of these agents come together.
So that's our strategy.
But again, every company will have a different sort of approach. The
lines between the open source part and the proprietary part of the SaaS platform will be
different for every company. Yeah, absolutely. I mean, at the end, it's a relationship and
every relationship is unique at the end. I mean, it depends on like the community,
the product itself, the nature that it has. So it
makes total sense. So now that you have the SaaS offering available, I mean, I was
browsing the website, and it's very clear on the Kong website and the product offerings there. I
mean, there's a distinction between the enterprise and the open source. And when we use the term
enterprise,
we are usually having in our minds
like pretty big companies
that they are using like the product.
And also from what you said
about like the network effects
inside the company and all that stuff,
it made me like wonder
from your perspective at Kong,
like what's the, not the size,
but like let's say the maturity
of an organization
to start using technologies like Kong, the open source at the beginning and migrating
and start using the cloud offering.
Do you see there that there's like some kind of maturity level that the company needs to
achieve, like from a technological point of view before they start using these technologies?
And when does this usually happen in the organization?
Yeah, of course, I'm glad you asked that question
because it just brings about a really important question
and change that I think we're going through right now.
Typically, if it was six years ago,
the answer I would give you is that,
look, the end users pick up the open source components
and use that
because that's the easiest, least-friction way of starting. And then when there's enough critical
mass of that, the company needs to actually purchase the product because that's when
the product satisfies needs that the project itself cannot satisfy, right? That's six years ago. I think what we've
seen is that the world has changed quite a bit in terms of sort of how software in general,
and now enterprise software is being consumed, right? And I talked a little bit about this in
my summit keynote. The best example I saw of this is an article out there, I forget who it's from,
but they talk about, I believe, Oracle versus Salesforce versus Slack, right? And if you think
about those three companies, Oracle was a product of the CIO era. The people who bought Oracle were the CIO groups. They bought it and sort of brought it to
the entire company and made it available to them. And their main decision criteria was
IT ecosystem compatibility. And the way the purchase was made was over steak dinners or
golf courses or whatnot. And then a big contract was signed, right? Salesforce changed
that in that the sort of buying criteria was much more around the sort of business unit level
and the executives there, the sales cycle became smaller and you could buy it as SaaS. So you
didn't have to install the thing on-prem to run it, right? But still,
you know, it was closer to the end user because the business units were making
the decision. And by the way, just as a piece of trivia, one of the biggest challenges that we
saw at MuleSoft through our integrations was Salesforce-to-Salesforce integration,
just because each business unit ended up buying Salesforce and had this sort of division of data across business units.
But that change was fundamental, right?
Because now you get closer to the users.
And then Slack is the end user era, right?
Because what happens with Slack typically is that end users start using it in the company.
And they love it so much.
And then they go to their bosses and they go, we really need to buy Slack and enterprise Slack,
buy that. And then the bosses go, okay, this is making the productivity of my users better. I'm
going to buy it, right? Very different, totally bottom up. Connect is an end user era platform,
right? It's not designed for that top-down motion, even though it does satisfy the enterprise architect's needs as an end user. And it can be used to have this end-to-end view of the connectivity landscape when you are a large company. But it's designed to be consumed bottom up from end users and to provide that value to a single user, single developer,
single architect, single infrastructure operator, with a piecemeal approach. And then as the end
users start using it more and more, there are of course capabilities in there that enable you to
then have that wider, sort of, for lack of a better term, governance perspective on top of things, if that makes sense.
Yeah, it makes total sense.
I was wondering, as you were saying,
the difference, the transition from the Oracle model
to Salesforce model to the Slack model.
And I mean, the Slack model makes total sense.
I mean, probably everyone has experienced Slack
and how they started using it
and then we got it inside our organization.
And pretty much right now, we start a company and, at the same time,
before we even start the company, we start a Slack channel or something.
But there is a difference, I think, between Slack as an application
and how much friction there is to introduce it in the company
compared to something that is middleware, as you said, or infrastructure, or something that is going to be used in some way
that's more critical and probably also exposes the organization to some kind of threats,
especially when you have to do that with services, data, and the rest of the
infrastructure the company has. So how do you deal with that extra friction that we see there?
I mean, okay, of course, the adoption probably is not going to be,
the velocity is not going to be the crazy velocity in adoption that Slack has.
But still, there are differences there.
What's your experience from Kong with that?
And how do you deal with that?
Yeah, my sense is that, yes, there's a major difference in terms of volume because everybody, almost everybody will need Slack, right?
And the end users is almost everyone in your company, whereas not everybody will, every end user needs an API gateway, right?
So that's very, you know, astute perspective. But I would argue that how easy it is to get and run an API gateway just to get started should not be any different.
It should be just as easy.
It's not today.
I think Kong makes it the easiest.
It's one of the places where we make it the easiest.
But it should be easy for the end user involved. If I'm a
developer or I'm an infrastructure operator, I shouldn't have to go through five hoops and
sign five legal terms and sign, you know, three different forms to make sure I've got some MQL
limit reached on the site before I can get to my gateway. It should be really easy,
right? And that's what it is about being an end-user platform.
Do you think that also being open source helps towards that?
Does it make it easier?
Increase the velocity and makes it easier,
especially for developers to adopt and try these products?
Oh, very much so.
Because I think being true open source helps you really
already have that mindset of, look, the community is the priority and making ease of access to the
community be one of your number one goals for the success of open source is, of course, important.
So in some ways, and you're making me think
here, this is good brainstorming, open source is end user era, it already has been end user era.
And what this whole Slack end user era for SaaS platforms says is, like, let's do the same thing
for your paid software as well. Yeah, makes total sense. It's great.
So how do you, I mean, you mentioned
it's still not that easy to set up
and run like an API gateway
as it is to use like Slack.
You have done like a great job so far with Kong,
but can you share with us
like some of your plans in the future
of how you can improve that in general?
Like what's next?
Now you have deployed
Connect, so this is something huge, major, but what's next for Kong? What makes you excited
about the future around Kong as a product? Yeah, what makes me really excited is that to me,
Connect is to the connectivity fabric of your organization. So all the gateways, all of the edge gateways and the
mesh gateways or mesh sidecars, what Connect is to that is what the Kubernetes API server is to
your pods, right? And if you combine that with a declarative config driven sort of GitOps model
and the ability for our sort of API server, which is Connect, to have
full historical sort of GitOps perspective of things, really interesting and great things
can happen.
You can effectively go in and say, like, you know, connectctl, get me the history of changes
across these five services, and then revert back to a snapshot with one command, you know,
connectctl -f apply, and go back to a snapshot from, you know,
two weeks ago to correct the problem.
This is the type of thing that we're shooting for with Connect
and the beginnings of it, you know, we showed at Summit.
That's super exciting to me.
That's great.
And by the way, just to let everyone know that the summit happened like a couple of
days ago and everything is online, right?
So people can go and like watch the keynotes if they didn't attend.
Is this correct?
That's correct.
The keynote is online.
I think the rest of the material will be online.
I'm not 100% sure.
But definitely you can go check out the keynote by going to konghq.com. That's great. Reza, thank you so much
for your time today. It was great chatting with you. It's very exciting to hear about Kong,
you and your perspective around products and open source. And yeah, I'm looking forward to
chat again in the future and learn more about how we progress
and chat again about open source communities and projects.
Thank you, Kostas. It was a pleasure talking to you.
Well, that was pretty incredible.
It's always interesting to hear from someone
who has both deep technical knowledge, but so much business experience
in the tech space. And I just always appreciate the unique nature of someone who can speak really
clearly to both sides of that in the same conversation. Kostas, what stuck out to you?
I really enjoyed the conversation with Reza. First of all, I think that, like part of our
audience, we learned some new things today, some relics of technology in some cases. I found it
very entertaining, and also with a touch of nostalgia, to be honest, to be starting the
conversation about CORBA and what the definition of middleware is. We learned a lot about Kong, the evolution of Kong, how they
started. They have like a crazy interesting story behind them and what's the offering today and how
they made the progress from an incredibly successful open source project to a cloud
offering that was announced a couple of days ago. So very interesting to hear about that on our conversation.
And of course, we are talking about a person with crazy experience and
very deep knowledge with anything that has to do with open source.
I think he made very clear the distinction between what the project
is and what the product is, how you can start from a
project which is open sourced and how you can end up with a product, how you can build and how you
can keep maintaining and building this in harmony without having issues between the different
communities around this. For me at least, it was an amazing experience to hear from someone like Reza about all that stuff. And I'm really
looking forward to chat with him again in the future because I'm pretty sure that we can be
discussing together for endless hours and learn more and more about open source and building products.