PurePerformance - AI-Native: The Next Revolution after Cloud Native with Pini Reznik
Episode Date: September 15, 2025
Defining AI-Native in 2025 is like trying to define Cloud Native back in 2014! We are in the early stages of understanding what AI really means to us. The ecosystem is just evolving, and many organizations are still struggling with re-architecting their digital systems to cloud native patterns! To learn more about the current transformational wave—the AI-Native Wave—we have invited Pini Reznik, CEO and Co-Founder of re:cinq. We will discuss what we can learn from previous "waves of innovation," why the business must care, and why the primary AI use case should not be just cost-cutting! Make sure to get a copy of his book or catch his talk from Cloud Native Munich.
All links we discussed here:
Pini's LinkedIn: https://www.linkedin.com/in/pinireznik/
The Next Transformation Mini Book: https://re-cinq.com/mini-book
Cloud Native Munich Talk: https://www.youtube.com/watch?v=CHb3TLEV8ZU
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everyone, and welcome to another episode of Pure Performance.
My name is Brian Wilson.
And as always, I have with me, my very wonderful host, Andy Grabner.
Andy, how are you doing today?
I'm good, and I feel like I need to grow a little more hair.
But unfortunately, my hair doesn't grow that well anymore, so I won't be able to color it.
You know, I was thinking about this.
You know, Mark Tomlinson, the father of our podcast, dyed his hair at one point.
Now I did.
So you're going to have to grow your hair and dye it at one point.
Hey, but you know what?
This is, I'll throw a little bit of trivia out here.
before we get started. This is episode number 242. And back at the end of high school, in my early college days, it was the early 90s, and I was an angry young man getting into industrial music. And there was a band, Front 242, that I used to listen to, a Belgian industrial band. So I didn't listen to them a whole bunch, but as soon as I saw it was episode 242, I was like, oh, there you go. There's a reference that maybe one person listening will get.
Well, the time from the 90s to now has changed quite a bit. I think there were multiple different iterations of styles of music. Some maybe call it a transformation in the music, right?
Today... well, VR was the big wave that was going to kick in in the 90s, but that one never really came in. But it was a big wave, right?
And today is also all about
waves, waves of innovation, waves of
transformation. I had the pleasure
in Munich, at the Cloud Native Summit Munich, to watch Pini Reznik.
I hope I pronounced the last name correctly.
At least this is how I would pronounce it as an Austrian.
Well, I am Austrian.
That's the way I pronounce it.
And Pini did a great presentation.
It was called AI Native, the next revolution after Cloud Native.
I think it was really bold to stand there at a Cloud Native conference
and say kind of like, hey, the next thing is already coming.
Watch out because you're still struggling with Cloud Native.
And now there's a new revolution coming, a new transformation wave. And I want to show you, give you a couple of things to make sure you can ride the wave and not drop off it or be swallowed by it. So without further ado, Pini, thank you so much
for making time to be on the podcast. I have a lot of questions, especially around your mini book.
Folks, if you listen to this, all the references to his talk, to the mini book, to everything
else, there will be, as always, links in the description. But now Pini, over to you.
Please do us a favor: for those that don't know you, who are you, what drives you, what motivates you, what's your background?
So thanks for inviting me. It's great to be here. And yeah, it was probably the first time I presented this AI-native transformation at a conference, on the stage in Munich. So it was the first time for this talk in this format.
I come from an engineering background. I finished my computer science studies at the end of the 90s, actually in 99, so just in time to experience a bit of the dot-com bubble and the burst, and then a long career in development, in operations, in management, in engineering roles.
And about 10 years ago, I started a consultancy called Container Solutions, which was there basically from 2014, from the first days, the early days of Docker. We did all kinds of things around that. We were at the DockerCon conference and the Software Circus conference and did a lot of different things to promote cloud native before it was actually called cloud native. And when we moved on from Container Solutions, which was sold at some point, about a year and a half ago we started a new company called re:cinq,
which originally was all about sustainability in IT,
but gradually moved more into AI and AI native transformation.
And now we're trying to figure out what actually AI native means
and how we are transforming towards that.
I think that's sort of the main story.
And I wrote an O'Reilly book about cloud native transformation about five, six years ago, and now I am finishing the AI Native Transformation book, which will be out in early October.
And I can only encourage everybody to start with the mini book. Judging based on that mini book, the rest of the book will be just as insightful, but also thought-provoking, or thought-inspiring, because there's a lot of food for thought in that mini book.
What I would like to start with,
because you kind of asked the question already,
what is AI native?
Have we found a definition?
What does this mean?
Actually, I don't think so.
And I'm actually actively working on this right now.
Just today I had a conversation with my colleague, Daniel, about this.
And we are trying to figure out what the main concept of "native" is. Something native, AI native or cloud native, means that we are natively building something for the new technology.
In the cloud native space, the argument was it's not about running in the cloud,
but actually architecting your systems to natively fit the cloud concept.
So microservices, containers, dynamic scheduling, team topologies,
all kinds of things.
We have a quite consistent view of that.
So now we're trying to figure out what is that in AI native.
So it's obvious that it means AI at the heart of the system, so architecting for AI, but we don't really know what that means. It's obviously about data, about collecting data and managing it. But then it's also about actually using AI in different spaces. It's not just about infrastructure; it's also about building products which are natively built for the AI age, but also using AI in internal operations, like in HR and legal, in finance, in all kinds of other departments.
There are all kinds of technological principles that we are trying to figure out. I mean, there are data lakes and, you know, LLMs and MCP and agentic architectures. So there are all kinds of principles that are clearly emerging now. But I still struggle to clearly define those three, four core technologies or core principles that define AI native.
And I think you actually, I want to paraphrase you,
and I want to read something from your book, right?
And you said agreeing on the definition of AI native as we write this in 2025
is like trying to define cloud native back in 2014.
And that, I think, hits the nail on the head.
I mean, that's perfect, right?
because back in 2014, as you mentioned,
nobody knew what cloud native really means, right?
And it has evolved a lot.
The ecosystem evolved, the standards evolved,
the first people built things.
And I think also in your mini book you stated that there were different ways of how people adopted the technology. Some were moving their monoliths over, just containerizing them, and then trying to learn a little bit, which was a good step. But then they really figured out that to truly build something for that new technology, for that new wave of technology, you need to rethink how you build and architect applications to fully leverage its power.
And, you know, I've been there when Cloud Native was starting, right?
We worked very closely with Ken Owens from Cisco.
He was a member of the technical steering committee in the CNCF. He was the one to drive the original definition of Cloud Native. That was a very interesting experience, right? So containers, microservices, dynamic scheduling, that's the original definition. And there were so many arguments. Everyone wanted to do something different.
It's now easy to say how correct it was.
And by the way, it also changed over time many times.
But I think what's important for us is how those two waves of innovation are similar: they are not just changing or introducing a new piece of technology, because that's happening all the time, all over, in different places.
The way we see the waves of innovation is as fundamental changes. And typically those changes are driven by significant architectural changes. Like, for example, Cloud Native was mostly a transition from monolithic architecture, vertical scaling, large servers, client-server architecture, layered architecture, all these specialized teams, what we call today waterfall, a gradual transition, with the move to the Internet and then later containers and cloud, to something else: horizontal scaling, microservices. And what's interesting and important to understand is that architecture, according to Conway's Law, also defines the team structure. So when microservices were introduced, we had to change the team topologies away from specialized teams. And that is actually what drives the transformation, what actually forces organizations to change their organizational structure, to hire differently, to manage teams differently, to entirely rebuild their own delivery processes.
And I think what people don't fully understand is that these organizational, transformational changes need to be managed in a different way, different from incremental changes, and that's where most of the problems come from.
Yeah, absolutely fascinating. And I was thinking, as you two were starting this topic here, I really liked the analogy going back to the beginning of cloud native and how it transformed. And I think what we saw, you know, your point about cloud native in 2014, a lot of what made Cloud Native, Cloud Native was learning from all the mistakes that were made in the early days, right? Oh, let's just lift and shift everything again.
Several years ago,
we had a client who was trying to move
a monolithic .NET application to Kubernetes.
I'm like, you have to re-architect it, you know.
But I think a good example for our listeners,
and correct me if I'm wrong here, Pini,
I know there's a lot of different aspects
that we could talk about with Cloud Native,
but I think one that really makes this idea concrete is what you currently see across organizations everywhere: we're trying to fit AI into existing systems, right? We just came up with a really cool one using agentic AI. Someone on my team created it, Niche, for anybody who knows them, if they're listening, where it's connecting to all the different Dynatrace documentation, so that if you want to see, can Dynatrace be used for this, instead of having to go and open up all the documentation, the agentic component just reaches out to all of our different types of documentation and brings back an answer for you. But what we're doing there is we're fitting AI into those existing systems. Those systems were not designed with an AI interface in mind. They're not organized and structured for that readability, for that accessibility by the AI.
So when you have existing systems that we're trying to plug into AI, it sounds like, similar to cloud native, at least one part of it would be building those systems so that they're by nature friendly to AI, or parts of AI might even be built into them. It's just going to be a much more integrated setup, as opposed to retrofitting AI into the existing systems. And that is where you need these sorts of standards, or this AI native, so that we get that right and we move that forward and it all works very seamlessly.
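A minimal sketch of that fan-out idea, purely illustrative and not the actual tool Brian describes; the source names and search functions below are hypothetical placeholders, and a real agentic setup would use an LLM to summarize what comes back:

```python
from concurrent.futures import ThreadPoolExecutor

def search_product_docs(question: str) -> str:
    # hypothetical placeholder for one documentation backend
    return f"[product docs] nothing found for: {question}"

def search_api_reference(question: str) -> str:
    # hypothetical placeholder for another documentation backend
    return f"[api reference] nothing found for: {question}"

def search_community_posts(question: str) -> str:
    # hypothetical placeholder for a community/forum backend
    return f"[community] nothing found for: {question}"

SOURCES = [search_product_docs, search_api_reference, search_community_posts]

def answer(question: str) -> str:
    # fan the same question out to every documentation source instead of
    # opening each one by hand, then combine whatever comes back
    with ThreadPoolExecutor() as pool:
        snippets = list(pool.map(lambda source: source(question), SOURCES))
    # here we simply join the snippets to keep the sketch self-contained
    return "\n".join(snippets)

if __name__ == "__main__":
    print(answer("Can the product be used on platform X?"))
```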
And again, going back to cloud native, this is exactly what we experienced 10 or 5 years ago.
And you've been in this ecosystem for the same time.
So the lift and shift is people saying something like, we will take the current applications, we'll put them in the cloud, and then later we will refactor them. But the later never comes, right? You need to use the crisis of shifting away from your own data centers as an opportunity to re-architect and restructure your teams and your applications. If you don't do that, then you basically have a monolithic application running in waterfall processes in the cloud. That is not cloud native.
And in a way, that's exactly what you're saying. If you have an agent which is very smart, it's actually totally okay to have an agent which is sort of like a microservice, or like a group of internal applications. It's totally okay to build something like that and connect it to existing systems. That allows you to experiment, research, learn the technology, and stay connected to the old systems.
We actually talk in the book about this.
We're talking about six stages of innovation.
And the first one is research, right?
So figuring out how AI works.
You need to learn and experiment, so learn by doing. And this typically happens on the side, in separate teams, in a skunkworks-style team, in a sandbox without connection to production environments.
But when you decide to actually go for full transformation and become cloud native,
the first thing you do is building an MVP and connecting it to the old world.
Because at least in our experience doing this for many years,
is that every time you build a new system
that is not connected to old world,
these projects don't succeed.
So this is a very essential step.
Build a small MVP, which is functional.
It helps you to learn how it works.
It helps you to connect to the old world.
And from that point, you scale the new
and scale down the old.
And this way, there is no disruption to current business.
So in a way, it's a good thing to develop an MVP as a self-contained agent that actually helps you. But it has to be part of a longer process of transformation, which has a particular structure.
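To make the "scale the new, scale down the old" idea concrete, here is a minimal sketch, our illustration rather than something from the book; both handler functions are hypothetical placeholders for the legacy system and the connected MVP:

```python
import random

# Share of requests the new, AI-native MVP handles; start small, raise it as confidence grows.
NEW_SYSTEM_SHARE = 0.05

def handle_with_legacy(request: str) -> str:
    # hypothetical stand-in for the existing ("old world") system
    return f"legacy handled: {request}"

def handle_with_mvp(request: str) -> str:
    # hypothetical stand-in for the new MVP that is connected to the old world
    return f"mvp handled: {request}"

def handle_request(request: str) -> str:
    # gradually scale the new up and the old down instead of a big-bang cutover,
    # so there is no disruption to the current business
    if random.random() < NEW_SYSTEM_SHARE:
        return handle_with_mvp(request)
    return handle_with_legacy(request)

if __name__ == "__main__":
    for i in range(5):
        print(handle_request(f"order-{i}"))
```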
Folks, the six stages of innovation,
everything is covered in the mini book.
Just again, a reminder to read that mini book.
The link is in the description.
You brought two examples when you talked about what happened in the past,
what can we learn from the past.
And you talked about Nokia, Netflix,
and the way I kind of extracted the information is,
don't become the next Nokia, become the next Netflix, right?
Because obviously, Nokia completely missed the transformation from, you know, dominating the mobile market. And when the iPhone and Android came out, they were too late in the game. Netflix on the other side did it right, and they completely destroyed Blockbuster, because they understood, they were looking ahead at how this new technology allowed them to also provide a different business model for streaming, which meant that they started with streaming.
In preparation for this podcast, I asked you to send me a couple of questions that you would like us to discuss. And one of them is actually: why are people still struggling with these transformations? What are some of the common barriers? Why are organizations struggling? What makes somebody follow the Nokia path versus the Netflix path?
First, before I answer the question, actually a couple of comments on Nokia and Netflix, because you need to look deeper into this. It's not just that those failed and those succeeded. So, Nokia actually failed. I've been in the business, right, actually doing my job throughout the success and failure of Nokia. Nokia in the early 90s was a super successful business. Like, I had Nokia phones for years.
Same here. Actually, I had a
smartphone from Nokia, right? So
the Symbian phone.
Which was terrible, right?
But it was amazing and terrible.
But the interesting part is that they saw the transformation.
So they built the first smartphone in collaboration with other companies, which was Symbian, which was really, again, a breakthrough in technology. And then they started building a new operating system based on Linux, which was not that far away from modern Android. It was called MeeGo. Earlier it was called something else, but at the end it was called MeeGo. And when I was working, my first job in Holland was at TomTom. At that point, Nokia was hiring everyone around Europe. Thousands and thousands of people were relocating to Finland or other offices to build MeeGo. And they just didn't build it in time.
And Android came just about a year before they were finished.
So it's not like they didn't see it coming.
It's not like they didn't put the money in or effort.
It all was in place.
The only difference was that they couldn't build it in three years
and they spent four instead of three.
And that's the difference between succeeding or failing.
On the other side, Netflix. I personally know Adrian Cockcroft, who was leading a lot of their cloud initiatives. They invested in cloud way, way before everyone else. So they effectively had a cloud-native setup based on VMs way before the modern cloud-native infrastructure became popular. So they actually prepared for scale before the scale happened. So in this sense, this exactly shows the difference.
So Nokia became this traditional corporate monster that changes very slowly.
They said, if we put enough money, it will be fine.
Eventually, we will succeed.
No one can affect our growth.
We are invincible.
Netflix came from entirely different perspective of underdog and fighting for their place.
And I think this is very normal.
So the startups, they're used to this agile way of working, changing direction.
Because if they don't, they don't survive.
They don't get the funding, they don't get revenue, they disappear.
But companies like Nokia or high street banks, like any modern and traditional enterprises, they treat this transformational change like it's business as usual. It takes them five or ten years to do a transformation, which means: we'll just install Kubernetes in some place, you know, and we run some AI on top of it, and then we will do something else. But they don't actually transform. Most of the companies, if we go to the traditional enterprises, we see that they're still mostly waterfall. They're still mostly releasing once a month. Even if it's every two weeks, it's just faster waterfall. They're not doing proper microservices, they are not doing proper observability, they are not doing proper anything, right? So they call themselves cloud native. But in reality, they never finished the transformation. They never got to the maturity.
And if you look into companies like Monzo or Starling Bank in the UK, they are amazing, right? They're fast. They can change very quickly. They can change direction quickly, and they can build value very quickly, right? And engineers are really, really efficient there. So five years ago,
you know, who cared about them?
Who knew that they would have 10,000 employees
and they would dominate the banking system?
The question really for them is: are they going to continue like Nokia, or will they preserve their capability to change and jump on the next wave of innovation?
I think they will, by the way.
Yeah, yeah.
And I'm just looking at it, I have both your book open and also the presentation you did in Munich. In Munich you called it the six modes of operation, and in the mini book you call it the six stages of innovation. So basically, if I hear this right, what you call pioneering, right, exploring new ideas fearlessly, this is exactly what made the difference between Netflix and Nokia, because Netflix was continuously looking into the next thing, and they were early adopters.
Obviously, if you're constantly doing it, there will also be failures, right?
If you're constantly looking at the new thing and investing time and resources,
not every path will be successful.
But if you don't do this, then somebody that finds a successful path with a new technology
and comes up with a better way, with a new business model, will just bypass you.
I think what's important to understand is that if you have like a scientific method,
if you have a hypothesis and you test it and it doesn't prove what you wanted to prove,
it's not a failure.
It's a success, right?
A test is a test. You cannot cheat it. If everyone knew what they were doing, then, you know, there would be no need to test anything.
Yeah, so these six stages are basically based on the product lifecycle model. So when the product starts, you really don't know anything about it, right? You're trying to figure out who the audience is, what the technology is, how to build the architecture. Everything is new.
And this is correct for a new product.
It's also correct for transformation, for adoption of new technologies.
So the first step, actually even before you start the transformation, is the research. If we talk in terms of the old McKinsey three horizons model, every company needs to invest in Horizon 3, which is long-term research, to be ready for whatever comes. You always need to have, depending on the size of the company, anywhere between 1 to 5% of your development budget going into pure innovation, without it directly going into active products. Then Horizon 2 is something for the mid term, and Horizon 1 is something that you are actually selling now.
So, first, research. It's just looking around and learning new things, because that's the problem: Nokia actually did look around, but many enterprises are just not ready, right? So they're not investing enough in research. But once you identify that there is something that is actually ready for your adoption, and there is a business case, and it's the right timing for your organization to do it, then you appoint a transformation champion, you get executive commitment,
you create a core team
and then you start developing
the MVP.
All of this is buzzwords, right? But they all have meanings.
You have to start small
and entirely independently
from the rest of the organization.
Because if not,
the old organization
will push their culture
on this new team
and it will not work out.
And then you build the MVP
and then you start gradually scaling
by inviting another team
and then another team,
and you gradually build in this new environment
where there are new processes,
new infrastructure, new architecture,
new ways of working,
most importantly, new culture.
And it's very important that this new culture grows
without the possibility
that the old culture will destroy it.
So it has separate management.
It has separate processes.
And gradually, it gets the momentum
when it's dominant enough
and can protect itself.
And then gradually the old legacy goes away
and this becomes the new norm.
This is the only way to go through transformation.
There is only one way.
There are many books about that.
I have many of them here.
So a lot of them went,
the stories went into the full book.
There is no other way,
but there are many, many ways to do it wrong.
Many ways.
Sounds a lot like our transformation, doesn't it, Andy?
Yeah, multiple transformations.
We actually did. We separated off a separate team and they were on their own. And we did it twice, right? We did it from AppMon, then Ruxit, which we incubated and let build our second generation, and then we did it again with Grail.
Yeah, it's definitely the way. And again, it was on the executive level, it was: you all go off and do this and come back with results, right? And the lessons and things that those teams learned also transformed the way we do the rest of our business, right? Because our old product was waterfall, two releases a year, right? And then when we had our Ruxit product, which is mostly what we have now, but with big changes in the back end, they started adhering to a lot of these ideas of, you know, quick releases. And then, once they had a handle on that, that then moved into the rest of the organization. So I think it's a very poignant idea that
when that new success is found
that new wave is coming
the organization has to be
open and ready to adopt that new wave
and then continue to
modify it and improve
upon it
And there are ways to do it wrong. Just a few examples. One is not doing it.
Right, that's Blockbuster. They didn't do it, they weren't even in the game.
Yeah, exactly. So it's just ignoring the change, that's one way. That's the easy one, right? This is the easiest one. This is the big banks ignoring Monzo and Starling, right? But then at some point, they're panicking: oh, they're getting big. They're eating our money, right? They're taking our customers now. Then: okay, let's put 30 million pounds in to build an alternative.
Oh, that's the other way to do it wrong. Because if you put too much money in it, you have to hire too many people. So you have to start building things, like, for example, observability. Observability is not essential for POCs in the early stage, right? Because you need to observe something that is already working, right? If you start investing in that, in a way you start dominating the architecture, or you're affecting the architecture, and you are restricting the growth of the evolutionary architecture.
So there are certain things that in the early stages, they need to be flexible and open,
but it's very important in later stages
to introduce them properly.
So putting in too much money means that you have a team of 50 or 100 people. Everyone is busy. Everyone is trying to do something. But no one knows what, right? So then there are project managers that are creating very long backlogs of useless tasks. Everyone is trying to basically solve the problems. It means that no one knows the architecture in the beginning, because it didn't grow up yet.
Right?
You see that this defines a failure even before the project starts.
And the annoying part is: the more money you put in, the worse the failure is going to be. At some point it's going to be too big to fail, and then you keep it going and keep it going, right?
So we have a customer, I can't say the name, but it's a big bank in the UK, and that's exactly what they did. They put a 30 million pound budget in to create a competitor to those new banks, and after two years they just closed it down, after spending all the money and basically wasting the money. And then they went and bought some other company. So this is just a massive waste of money.
The transformations just don't work that way. Because when you put in 30 million pounds, you typically put a very senior executive in charge of this transformation, who comes from the old culture and brings the old culture wisdom and starts, like, telling people what to do. And then what you realize is that nothing is working fast, nothing is agile, right? And they basically created a weird variation of what they already had in the past. Because you cannot solve new problems with the same mindset that created them.
Yeah.
Yeah.
I love that.
Hey, I got two questions, I mean, thoughts, and I want to just get your take. Like you, right, I finished my education in the late 90s, I remember, in 99. We were all hired, obviously, because of Y2K. And then the dot-com bubble. I actually opted in. So I remember when I finished high school, which was specialized in software engineering, and when we finished our last year, it was 98, they came with career buses to the school, and they basically put us everywhere. We were 18 years old, they put us in the bus with the big video screen, showing how cool they are, and they were looking for COBOL programmers and everything, right? And then we had to sign the contract, and back then it was a lot of money that we got offered. I ended up not taking this offer, I went with another company. But those were interesting days. And then, shortly after, right, the whole dot-com boom.
So there was a lot of money being thrown into IT. And we all know what happened, right? The bubble burst eventually. And so my first question to you is: is the AI boom that we see right now kind of analogous to the dot-com boom, and are we about to see the bubble burst? Or is it more that a lot of people have learned that they missed out on the cloud native transformation, and they don't want to miss out on the next transformation, and therefore so many more people are jumping on this train early on, and that's why there's so much money? Or is the answer somewhere in the middle?
It's both. I think it's both. So, first, the dot-com boom. I had the luck not to work at something like Pets.com, but at Check Point, which is a firewall company. So it didn't have, I mean, it had an effect on the stock, but not on the company.
So the dot-com boom, or the creation of the Internet, was actually the right thing. What people underestimated is how long it would actually take to build the ecosystem. Because they thought, in the early 2000s, that it would take five years, but it took 15 years. But 15 years later, the world did look very different. We have social media. We have everything on the Internet, everything communicating with everything. This is not something that you could imagine in the late 90s. So the Internet didn't fail. It changed our life drastically. It created qualitative change and many, many other things. So I think the underestimation was to put the money in expecting the payout within five years, while it took 15 or 20 years.
And I think this is exactly what's going to happen with AI. There's this weird assumption, which I was also struggling with for a while, that the value of the AI capabilities will continue growing exponentially forever. I have never seen a technology where the capability grows exponentially forever. It's always growing exponentially and then it slows down and it finds its natural limitations.
I don't know if we are close to finding AI limitations.
I don't know yet.
I think we are probably close to finding LLM limitations.
I think the next wave will probably be SLMs, small language models,
and there will be distribution and all kind of other things.
There's a lot of innovation we can still develop in AI.
But I really think that getting to real value,
it's going to take longer than a year or two that people expect.
Yeah, and we are now building trillion-dollar data centers.
We're talking about like terawatts of energy.
I think there are physical limitations we're going to hit
that will not let us actually get there in less than 10 or 15 years
because that's the normal timeline it takes for new technologies to actually
become usable
But it's also that we don't actually know what AI is good for. Like, LLMs are just a beginning, right? We don't really know. I'm doing it for a few years now, and I'm trying to figure it out, except, you know, finding some answers in some documents that I have in there, you know, in my company. It's the same like no one could imagine TikTok at the end of the 90s. What is the TikTok of AI? I don't know.
I think a lot of the issues with the bubbles is that it's investor driven, right? So I was working at CNBC, which is a financial news channel, during that time. I worked in TV initially. And I'm glad you mentioned Pets.com, because every website that came out with a flash was a huge deal. I was working there when Red Hat went public, right? Now Red Hat, as you say, is more the 10-15-year investment, right? But it wasn't flashy. It lost a lot of value after the IPO. I mean, it skyrocketed up at first and everything. But you had Pets.com, you had all these flashes in the pan that investors went for, the shiny thing.
And I think we're seeing the same thing in AI
where there's a lot of stupid, stupid, stupid AI
and really stupid uses of it.
That's gaining a lot of attention.
But on the flip side,
there are some really amazing uses, right?
And I think once that starts to settle in,
that's when everything will settle.
So I think there's definitely going to be a big burst, because everyone's trying to use it. Like right now, even companies are telling all their employees, use AI wherever you can, because everyone's trying to figure out where it can fit, where it can drive innovation, right? They're trying not to fall behind on this wave you mentioned, right? Because maybe they can get an advantage in some way, but it's not organized, right? It's: use this and see what happens, right? So we have to go through that research and experiment phase, but we're all the guinea pigs in it.
And one thing, I think you allude to this in the deck that was sent over, because I saw one of the slides about, you know, high-skilled workers, right? We see all the time people with Kubernetes clusters that are an absolute mess, right? They're just a mess. Part of that reason is, number one, it could be that they're operating under the old paradigm of management, where they're not getting to spend that time on it. But I think another part of that reason is there is a finite number of people who really know how to manage those systems. The skill level to get Kubernetes rolled out properly, there's a learning curve, right? If you're working at an organization that says, we're going to move to Kubernetes, but we're not going to bring an expert in, you're going to have a lot, a lot of trouble.
To me, that is a great place for this new wave of AI to come in, to help assist and recommend how to do this. Like, recently I've been thinking of Akamas, Andy, right? All the Java tuning, right? Where you can use AI to look at your system and it'll tell you: change this, this, this, this, all this infrastructure, the infrastructure side of Kubernetes might change. Just using that as an example: that would be a really groundbreaking and fantastic use of AI, because it takes the people out of that. Let them focus on the product and everything else, right? But I'm curious, you kind of had some ups and downs while I was going, so I'm curious what your thoughts there are.
Let me give you a different angle on this. If you think about it, in the best companies you have engineers and sort of all of them are okay, like all of them know Kubernetes. Then you go to poor companies, and you maybe have somebody who knows Kubernetes, but most don't. Why is it per company and not per person? Like, why is it that when you get to a good team, everyone is performing well? Right? The good teams, they are obsessed with quality. They're obsessed with continuous improvement. It's like: you're going down the hallway of your company and you see a piece of garbage, are you going to pick it up or not? Are you going to fix that bug or not?
Now, I personally think that all these failures of people not knowing how to run Kubernetes... Kubernetes is not a super complex piece of technology. I mean, it's complex, but it's not more complex than other things. The fact that you have entire teams that don't have the skills is a managerial problem. It's a people problem. It's not that difficult to educate and train and create the environment where they continuously improve. Now, assuming this is correct, give them AI tools. What do you think
is going to happen? Are they going to be amazing? I think the teams who are already great,
they will be even better. This is the original idea, actually, Bill Gates said it: automation is going to amplify what you have already. If it's terrible,
it will make it worse. If it's good, it will make it better. So in this sense, I think,
it's too optimistic to think that AI automation will fix human problems.
I think humans have many, many ways to destroy technology.
It's very interesting because there's also another side to that.
I saw a report on, and I didn't dig too deep into it,
but I read a little bit of it,
a interview with a bunch of CIOs and the discussion of employees
employees
sabotaging AI efforts
so you have that to contend with as well
right because there is a big
scary you know I saw I saw on your
decking of the idea of will will AI replace developers
and all this right and I think
there is a happy ground
happy medium ground
where there's an augmentation
right the idea as you said
amplifying what you do well
but we also know that
I think a lot of companies will
try to just like, how do we save money with this and not maybe do the best thing?
And that's just the nature of corporations, right?
That's going to happen in places and not.
But getting past that point, right, it's going to be the difficult piece.
I think, no, I agree with you 100%. I think there is a misunderstanding. The first thing is that currently AI is perceived as a cost-saving exercise, which is a bad idea. I mean, it's not wrong, but it's not the point. There is also a concept called Jevons Paradox: when you make something easier to use, the use increases. So actually, new technologies have never reduced the cost; they always increase the cost by increasing the use. So in this sense, I think saying it is purely a cost-reducing exercise is a mistake.
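As a rough illustration of Jevons Paradox, with made-up numbers rather than figures from the episode: the cost per task falls, usage rises even more, and total spend goes up rather than down.

```python
# Illustrative numbers only (not from the episode), just to make Jevons Paradox concrete.
cost_per_task_before = 10.0   # e.g. handling one support request without AI
cost_per_task_with_ai = 4.0   # AI assistance makes each request cheaper to handle
tasks_before = 1_000
tasks_with_ai = 3_500         # cheaper and easier usually means far more use

print(cost_per_task_before * tasks_before)    # 10000.0 total spend before
print(cost_per_task_with_ai * tasks_with_ai)  # 14000.0 total spend after: higher overall
```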
And we had this, the best example I can give: they didn't become a customer, but a company we spoke to in the UK, there was a big push, somebody from the board came to the CTO and said, you have to use AI, because we have a massive call center, a support call center, and we want to fire half of them.
I mean, this is a terrible message, right? This is not going to work very well.
Because first, they have the domain knowledge. Second, most AI tools are sort of snake oil today. They don't really work very well. What you really want is collecting all the data in a consistent way, creating tools to empower the support people to do better work, right? You want to augment existing experts. It may lead to a reduction of staff. That's inevitable. It's like a tractor leads to a reduction of people digging in the field. Of course it's going to lead to a reduction. But it's not a direct replacement, right? It's enablement or empowerment of a small number of experts that will use the power tools to do a better job. I think that's what AI is going to create.
And as a result of that, there will be a reallocation from some jobs to some other jobs. Like, for example, translators.
They're all gone now.
Like, it makes no sense.
But seeing it as a cost-saving exercise is really, really a big mistake.
Yeah.
I think my last thought to wrap that up, before handing it to Andy: in thinking of this as a wave and all that, right, the goal of AI should be innovation. How do we innovate and how do we get in front of that wave, right? Just like, if we're talking about observability, you look at your end user and make sure that experience is good, and that then dictates everything that goes on in your system. If the edict for AI is how do we innovate and make it all more usable, more enjoyable, all that kind of stuff, that's going to dictate everything else. That being your goal, as opposed to, let's go in and save money.
And that's the way it catches on. Yeah. It's really about creating new value. And we don't yet know. I think the most obvious place where it will create new value is mass customization. Like, we can sort of connect the economy of scale, producing a massive amount of things, but those things are not all the same, they're all different, all personalized, both on the software and the hardware level. But this is the most obvious thing that we can see. Once we are over this hurdle and understand the technology, and we remove the limitations of old thinking, I think we will uncover new uses of AI, and I think we are not there yet. So I think the TikTok of AI, the big new applications that will change everything, I think they didn't arrive yet. We don't even know yet what they will look like.
I also wanted to bring one kind of positively framed example. I was just in India with customers and partners. And one of the arguments they had is that they have a massive amount of engineers and they want to use AI to make every engineer a certain percentage more efficient. On the one side, using the AI as an assistant to make sure that they don't need to repeat the same tasks all the time, but are just more efficient and therefore produce more output. The other one: they have a lot of new people that they hire on a continuous basis. And so for them, the AI is also used to teach people how we code here, what the best practices are, giving feedback, right? And I think you also have that in your presentation, right: will AI be eliminating junior developers? No, AI will help to make junior developers better, more productive, right? If you use it right, we will in the end produce better talent, right?
And it's essential to understand that, well, it's not about junior or senior. It's about a change of technology. I think the best metaphor I used, I had in a presentation, was about engines, right? When tractors replaced horses, it really did change the way people did things. I mean, now we only have 2% of people working in agriculture, and 200 years ago it was over 70%. So, yes, it pushed people into the cities, into industrial work. But the people who joined the agriculture jobs after that, they still were juniors, but they came with skills to operate machinery. And I think this is what's going to happen with the next generation of developers.
So right now we have senior developers and the juniors that are still not educated on AI.
And those are not very efficient because they don't have the tooling and they don't have the domain knowledge.
And the senior developers, they have the domain knowledge.
So at least they can direct the AI.
What's going to happen next is that there will be a next generation of junior developers that will know very well how to use AI. And they will gradually replace the others. They will gradually get into the workforce, they will acquire the domain knowledge, and they will push away everyone else. So five, ten years from now, we will have a new generation of software developers.
My son is 20 years old.
He's helping us with marketing.
For him, AI is second nature. Everything he does is with AI. He is doing the work of an entire marketing team on his own, almost.
Right. So that's what's going to happen. So he has the tooling and he gradually gains the expertise and domain knowledge. So what's going to happen with him five or ten years from now? He's going to compete with those old style marketing people that were educated five, ten years ago. And he's going to dominate.
He's AI native. So we have one definition of AI native. Yeah, in a way. Yeah. Yeah.
Hey, Pini, unfortunately, we're getting close to the top of the hour here,
which means we're getting to the end of the recording.
I had another thought: you had a posting four days ago on LinkedIn, which I also encourage everyone to look into, where you discussed the misconception that an LLM is often mistaken for an entire agent.
Maybe we will invite you back for another episode because this could be an interesting
discussion as well because there's a lot of misconception.
So open invitation if you want to come back for another episode.
I would love to chat about this.
I will be very happy.
Very busy with the book, but once it's all out,
I think there is a lot of other things to talk about.
It was great to be here.
It's a really interesting conversation.
I hope people learn something from it.
I think we both did.
That's one of the reasons we continue doing this podcast after so many years: it's just a chance for us to learn as well.
You know, we hope people listening get it and it's great to have fantastic guests like
you on. Definitely, please, when you're done with the book, let us know. We'll have you back on.
We can discuss some of that and some of these other topics as well. I think this is,
it'll be interesting to see how this wave transforms over time or how it develops over time.
So we'd love to have you be on periodically to update.
Sounds good.
Happy to join whenever you're ready.
Perfect.
All right.
Brian, you want to close it out?
Sure, I guess I sort of just did.
But yes, thanks everybody for listening.
And Pini, thank you so much for being on.
I think this is a great topic.
Very different topic than I would have expected from an AI conversation.
But I think it's a fresh perspective and a really fresh way to look at it: looking at this wave in terms of a wave, how it's being used now, or how it's being leveraged now, and managed, and some of the pitfalls.
Yeah, there's just a lot to unpack here.
So hopefully people who listened
well, I think my brain is still spinning from it all.
So really, really appreciate you being on today.
And good luck with the book and the publishing of it and all.
And I hope to have you back on soon.
Thanks very much, everyone.
Again, thanks for the invitation. Thank you. Bye.
Bye.