PurePerformance - AI‑Native: Building Faster Than We Can Spec with Wolfgang Heider & Benedict Evert
Episode Date: March 16, 2026

AI is transforming software engineering—faster than many teams can adapt. In this episode, Andi talks with Wolfgang Heider and Benedict Evert about what it really means to build "AI‑native" software, where prototypes turn into production apps in minutes. We explore why good engineering fundamentals still matter, how multi‑agent workflows mirror traditional roles, and why testing, governance, and clarity of intent become more important—not less. We also discuss the future of junior engineers, the risk of everyone reinventing the same solution, and why value—not code generation—is becoming the real differentiator.

Links we discussed:
https://www.linkedin.com/posts/wolfgangheider_productmanagement-softwareengineering-ai-activity-7425746505883607042-D1OZ
https://www.linkedin.com/pulse/machines-making-wolfgang-heider-5mvsf
https://www.linkedin.com/pulse/i-built-app-between-final-stranger-things-episodes-wolfgang-heider-5penf/
https://futurelab.studio/ora/
https://futurelab.studio/htmlctl/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Welcome everyone to another episode of Pure Performance.
And as you can tell, the sexy voice of Brian Wilson is not greeting you today.
It's just a regular voice of Andy Grabner.
I hope you can live with this.
But you know, this is a change, and change is upon us.
And this is also something that we have observed over the last couple of months,
especially on Pure Performance: while we have been talking a lot about performance engineering, site reliability engineering,
The overarching
pressing topic
that is on everybody's mind is obviously
the new tools that are
given to us through AI
and what this means not only for us as performance
engineers but really in general
in software engineering.
How does this all change software
engineering as we know it?
How does this change the roles?
How does this change the output, the expectations?
How do we deal with quality
with governance? There's so many topics.
and I thought I will invite two people, Wolfgang Heider and Benedict Evert.
Grüß euch.
Hi, how are you guys?
Servus.
Hi.
The idea of this podcast actually came from Brian, who cannot be here with us today
because we had some time constraints.
And he actually read an article from you, Wolfgang, which was titled Show, Don't Spec:
Building is now faster than writing about building.
And I would really like to dive into this, so really what has changed?
But before we get started, Wolfgang and Benedict, please do me a favor.
Maybe Wolfgang, you start: a quick background on your person, what you do, where you come from,
and why the AI topic matters to you. And then the same for Benedict.
But let's start with Wolfgang, please.
Yeah, sure.
I'm a product manager at Dynatrace.
And well, with product management, as you mentioned already, things are changing.
Things are changing a lot.
there is always a lot of communication, synchronization, and alignment needed: what to build, how to build it,
how to build it with as few dependencies as possible.
So, well, and now having tools at hand where I can really build out stuff to look at it, to almost touch it.
The problem we've had with software forever: we cannot touch it. But we're almost there now.
So it's way better than iterating on a lot of text, trying to explain what I have in my head.
Well,
now I can simply visualize
it. I can do it without tool
knowledge, actually. I don't
even need tools for painting,
for crafting any design.
I simply explain what I have in my
head and I have something to show
and to discuss. Yeah.
I have a follow-up question
on this for you, but I will take notes
on this. Before
we do this, Benedict,
please also a quick introduction from your end.
Yeah, I think for me the real background is I'm not a software engineer, right, by training.
I have some background in programming.
I understand a little bit of systems in place.
But for me, that is really what AI is for me about, that I can basically learn and use AI as a guide to build my visions into prototypes.
and hopefully into real products later on.
But yeah, I'm sort of a vibe coder since day one.
I tried the first GPT-3.5 Turbo, I think it was, in 2022.
And we've come a long way.
And with the models nowadays, I think you can really accomplish astonishing things.
And yeah, for me, I really try to incorporate this mindset as much as possible: not just adopting AI,
but really learning AI engineering.
And I've since become way more technical and knowledgeable
because of the things that I'm learning through these tools.
And I really see there's a big learning experience.
I think a big learning experience for us all
because it's going to shape our work life going forward in general.
And also I see there's a real unlock for everyone's creativity,
because you can just be creative,
work with your agent and bring your
visions to life. And I think that's really
an awesome vision
of the future, a very empowering one
and yeah, I'm super excited about it.
Cool. And also,
I know you've done some internal
enablement within Dynatrace
engineering to show people how to leverage
these new tools. Thank you
also so much because you helped me quite a bit
for my preparation at
Dynatrace Perform this year.
But I want to go back to something, Wolfgang, that you said,
because I want to play a little bit
devil's advocate here.
I remember when I started software engineering back in the 90s, when I was in high school,
we started with assembler, then C++, and then all of a sudden in our last class of high school,
we were introduced to Microsoft MFC and Oracle Forms; with Oracle Forms especially, a 4GL environment
where, with point and click, I could very easily click together my UI and then connect it with SQL.
I don't remember exactly what everything was called back then.
But it was a game changer from writing applications in C/C++ and Assembler to this.
Now, fast forward, I also know that prior to this whole hype of AI and agentic and code agents,
you especially in product management, there's a lot of product managers using tools that allowed you to do visual prototyping.
You could create visual prototypes with different drawing tools, click through UIs.
and that was also already a great way.
So now my question to you is, what is different now?
What is different to kind of this productivity boost I saw in the 90s
versus to the productivity boost?
I think everybody saw when we had these rapid prototyping drawing tools
to what you have right now.
What does AI native now really mean?
So for me personally, the main difference is just,
also thinking about what I did in private life, I really built an app.
So I published it on the marketplace, a fully functional thing.
But it took the same effort I would have spent on simply doing some designs for that app.
So in the end I spent just a few minutes typing a little bit on my phone:
what features I would like to have, how it should look.
And I wrote an article about that with the funny title,
well, while watching the final episodes of a famous Netflix series,
I was building that app and I did not end up with simple designs.
Yes, going back to my original introduction, that could also give a vision of what I have in my head.
But it is the working thing that I ended up with.
So of course, and I know that's a highly discussed,
or quite often discussed, topic now:
is it then a product?
Is it just a prototype?
Is it something to throw away?
Or is it really an AI engineered product?
In my experience, what I built in my private life,
well, that is in the end a product.
I have it in the marketplace.
It's working.
My kids are playing that game that is on the iOS marketplace.
So it really depends on to what extent you work that way, how far you drive it.
And my honest
belief is that
yes, the world is changing
to the point of where
it is not just vibe coding.
I quite often say
that wiping it away was always
in mind when talking about vibe
coding: you would throw it away.
No, not anymore.
It's not just a prototype. It's
something that is working.
And yes, there are huge risks
to it. A lot of things to be
considered when it comes to compliance, governance, a lot of policies that should also apply,
of course, for that piece of software, where we now have a lot of people interacting with
the source code in a sense of reviewing it, changing it, adapting it.
That can also easily be done with AI.
So to your question: I'm not ending up with a design; with the very same effort,
I'm now ending up with something already working.
Yes, it might be a prototype.
It might need some more love to make it a product,
but it's way beyond what I had with simple designs.
Thanks for that clarification.
Also, Wolfgang, do me a favor.
You mentioned the app, that game your kids are now playing.
If it's on the marketplace,
we should definitely make sure that people that listen to this podcast
actually see this app coming to life.
So just quickly remind me, what's the name?
It's a time killer
because I built it
for simply killing time
in the free time.
So it's time killer,
a little puzzle game.
My kids love it.
If you like emojis,
if you like flags or number games,
just for killing the time,
give it a try.
Very good.
And we will definitely make sure,
so folks: Time Killer,
a puzzle game.
We will also make sure to add
links to the description of the podcast.
Benedict.
I know that you said you are not a software engineer by trade,
but really AI has helped you to unlock your productivity.
And you said, I think, that you've been playing with this technology since 2022.
What has changed?
Or let me ask you in a better way,
what have you learned over the last couple of months,
especially on how to better leverage that technology?
and not just use it the classical way, the way I started with it, right?
Like everybody: please create that script for me, or please create that function for me, or show me how to get started.
Can you tell me a little bit more about what is, if somebody just starts or is in the early steps,
what is the right way to use this technology to really become productive with the tool that we've been given?
Yeah.
So I also built my own applications and tried to launch them.
And Wolfgang actually was an inspiration for that for me as well.
So just a shout-out to him.
Because I've also now published my first macOS app, a voice interface
that allows you to control your Mac, which is actually working really well.
And I started that just before Christmas,
and that connects really well to your question.
Because I think there was this aha moment,
or another quantum jump in these models' capabilities,
when Opus 4.5 was released in December.
And since then, we've seen GPT-5.3 Codex
and now already the newer versions, Opus 4.6 and Codex 5.4.
So it's hard to keep track of all of that.
But these models allow you to really produce great code.
I think that they are really awesome in understanding the system you're working on,
and they are really good at figuring out sort of the architecture of your product.
And they are very smart about it.
They know how to investigate your code base.
They know a lot of the stuff that already works out of the box.
And I would say that almost pushes the boundary of products you can build
without taking too much care of the surroundings, right?
Without taking too much care about the overall architecture,
the overall vision, maybe even you don't need to think that much yet about
where you deploy this, what's the production infrastructure and stuff like that.
But no matter how good those models get,
once they hit the real world, the product that you're building with them
faces a lot of constraints different from whatever works on your machine.
So when I go about building my product, I almost employ the same kind of mental model that classical software engineering uses.
I just do it with AI, right?
Like I build story documents and epics.
I try to define as well as I can before I build it what I want, what it should look like, what the constraints are.
I try to write detailed architecture documents so that my
AI later knows what to do and how to do it;
it can then reference these documents.
I build story specs.
I do documentation and all of that stuff.
And I review all of that multiple times over with different personalities.
So basically, I used to have to instruct them very, very precisely on what to do and what to check.
Now that these models are way smarter, they can do it on their own.
They already have a lot of procedural knowledge baked in.
They have design knowledge.
they know architecture really well.
So you don't need to do as much handholding anymore.
But I still do these passes.
I still review all my stories.
I review the architectural plan and all that stuff
in dedicated passes when I come up with my product design.
And then the same thing happens.
So I go to implementation; the agent writes my code, writes my product.
But I also do the post-implementation passes
that classical software engineers do.
I do a code review.
I check.
And then I check also once I deploy it.
I do sanity checks and checks against my architecture.
So to really make sure, like, end-to-end that the system works
and works in a sustainable fashion.
So that I don't run into surprises later on.
And, yeah, AI has really made huge strides in automating a lot of that more and more
without hand-holding.
But yeah, the big quantum jump I see is just the models' capabilities,
and that they have a lot of knowledge baked in,
so that you don't need to instruct them, at least procedurally, as much anymore.
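To make the spec-first workflow described here a bit more concrete, here is a minimal Python sketch with stubbed-out agents. The phase names, the gating rule, and the stub functions are illustrative assumptions, not any real tool's API: each phase produces a written artifact, and every artifact needs a dedicated review pass before later phases may run.

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """A written artifact produced by one phase of the workflow."""
    phase: str
    content: str
    reviewed: bool = False


class SpecFirstPipeline:
    # Illustrative phase order; real projects will differ.
    PHASES = ["requirements", "architecture", "story_specs",
              "implementation", "code_review", "deployment_checks"]

    def __init__(self):
        self.artifacts = []

    def run_phase(self, phase, agent):
        """Run a phase only once every earlier artifact has been reviewed."""
        idx = self.PHASES.index(phase)
        reviewed = {a.phase for a in self.artifacts if a.reviewed}
        missing = [p for p in self.PHASES[:idx] if p not in reviewed]
        if missing:
            raise RuntimeError(f"unreviewed prerequisites: {missing}")
        artifact = Artifact(phase, agent(phase))
        self.artifacts.append(artifact)
        return artifact

    def review(self, artifact, reviewer):
        """A dedicated review pass, ideally by a different 'personality'."""
        artifact.reviewed = reviewer(artifact.content)
        return artifact.reviewed


def stub_agent(phase):
    # Stand-in for a model call that writes the phase's document.
    return f"document for {phase}"


def stub_reviewer(content):
    # Stand-in for a reviewing agent; accepts any non-empty document.
    return bool(content)
```

The gate captures the point being made in the conversation: the review passes are not optional extras, they are what keeps later phases (implementation, deployment) anchored to reviewed artifacts.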
I think, because Wolfgang and I have some experience now
with a very specific programming language on Apple devices,
namely Swift, that there are programming languages, actually,
that the models don't know that well
and where they need more technical knowledge.
So you still need to provide good context there, I would say.
But when it comes to the more procedural tasks
like design or architecture,
they have a lot of knowledge already baked in.
And it's been, yeah, it's been a joy,
because it's way more fun,
and you are way faster now at implementing stuff
and building products and bringing them to life.
Cool.
You bring up a very good point
because you said in the very beginning,
I think, that you have to apply the same good principles
of software engineering, right?
Everything that we have learned as software engineers over the years,
how you write a good requirements document,
how you really understand what problem you want to solve,
and then writing good requirements,
speccing it out; and obviously the lifecycle
of software does not end at the end of the prompt.
It goes much further.
How do you scale this?
How do you validate this and all this?
For me, I just thought about,
I'm not sure if this is the right time,
but if you remember the T-shaped persona that we talked about,
so T-shaped meaning really broad.
So for me, one of the capabilities of an engineer
who can truly become very efficient
with these new tools is somebody who
has a very broad understanding, the T, from the creation of an idea until it's in production,
until you retire it.
But you don't need to be a deep expert in every single aspect, but you need to have the full end-to-end understanding.
If I remember this correctly, I think we call them T-shaped personas.
I really like this idea a lot.
I honestly didn't know the term T-shaped personas, but I think if you're a generalist, it's really,
really helpful.
And it even goes further than that.
I think because think about it.
At the moment, you have software engineering teams building products end to end.
But, for example, how do you go about go-to-market and these things?
Well, with AI, you can.
Because you get a change log.
You have the product vision.
You know exactly what you want to build.
So you just instruct the agent, with the right context obviously, and you tell it:
hey, craft me a go-to-market strategy for this, and it can do it.
So I want to even say it goes further than just the software engineering part nowadays.
you can really think beyond boundaries and automate a lot more and communicate a lot more.
AI just allows you to do a lot more things than you previously could have.
Wolfgang, any thoughts from you on that?
Yeah, two points to what we've heard.
The T-shaped persona: yes, I fully agree you become a generalist, as long as you can do proper prompt engineering.
So you need to learn how to instruct any other individual, in this case,
by coincidence, an agent, an AI: how to instruct it to do what you want it to do.
So you become more or less a people manager at the end.
You become someone who needs the very same skills as of now to properly delegate,
to properly formulate, to properly craft the idea into written words,
what you want to have.
And yes, maybe the other individual already knows you very well,
has memory files.
Well, in our case in real life,
the people, the longer
you work together, the more beers you
had together already, the more they know
what you're really talking about when you
just very briefly throw
something in, what you want to have.
But yes, agents need
a kind of memory systems.
We all know that currently
there is a lot of effort going into that part
so that it
better understands what you want to have,
even with very small, very short prompts.
But what you need in the end at enterprise scale, in bigger companies with software products
that really serve critical business systems, well, what you want is to make it
reproducible.
You do not want to have that one agent with all of that memory and that one individual that is
the only one who can properly prompt that agent, you want to have everything documented in a way
that you have any failover, you have other people who can take over work.
So, back to all of the process steps Benedict mentioned that you still need in crafting
business-critical software systems: yes, you need the requirements spec, you need a design,
you need an architecture. You need proper runbooks for, yeah, when things go south in the running
system. And to have that whole chain, you cannot simply do a bit of prompt, next prompt,
next prompt, and you have a fully operational product.
You still need all those artifacts.
And also the work of creating those artifacts doesn't go away.
It's just way easier to now do it with agentic work to leverage the knowledge that the models
have, as Benedict explained, the models know exactly how you properly write the use case,
how you properly write a design specification.
that can be then read by any other agent or any other system to generate code,
but you still need those artifacts.
You need to run almost the same processes.
Even thinking about design, for example,
yes, an agent can generate
UIs or software exactly in the design system you tell it to use,
but at the end you want to fine tune.
And fine tuning, well, how do you do it?
If you do not have any intermediate artifacts that describe your UI,
how do you do the fine tuning?
So just one anecdote that I think everybody can relate to: whenever you write a prompt to get a picture,
like in ChatGPT, you tell it, create me a picture of this and that.
So it's maybe a good first shot, then you want to tune that picture.
And if you do not have a very precise
description of what is in the picture, it's really hard to fine-tune it.
So what is way better for that example: first do some iteration on the prompt.
Tell it what you want to have in the picture, let it ask you for details, fill in the details
of that picture. Then you have a written form of the picture you want to generate, and then
you generate. And if you don't like it, you know exactly where in that prompt description
you need to change something to get the picture tweaked.
And that analogy, I think, is something that is very much applicable also to the whole software
engineering process.
So any AI software development lifecycle is, yeah, it needs all of those artifacts and not
just a bit of prompting and you have a fully operational product.
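The "iterate on the description, not on the image" approach described here can be sketched in a few lines of Python. The field names and prompt format are purely illustrative assumptions: keep the picture's contents as an explicit, editable structure, render the prompt from it, and only regenerate once the description is right.

```python
def render_prompt(description: dict) -> str:
    """Render a deterministic image prompt from an explicit description."""
    parts = [f"{key}: {value}" for key, value in sorted(description.items())]
    return "A picture with " + "; ".join(parts)


# An explicit, structured description of what should be in the picture.
description = {
    "subject": "a lighthouse on a cliff",
    "lighting": "golden hour",
    "style": "watercolor",
}
prompt_v1 = render_prompt(description)

# Fine-tuning now means editing one named field of the description,
# so you know exactly what changed between generations.
description["lighting"] = "stormy dusk"
prompt_v2 = render_prompt(description)
```

The design choice is the one Wolfgang argues for: the written description is the intermediate artifact, and the image (or the software) is just its rendering.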
I really like that example, Wolfgang, because so many times I have
prompted for an image and then had to go through many, many cycles, and many wasted CPU cycles and costs, because it always regenerates the whole image.
And I like the idea of first optimizing the prompt or the definition of what you really want.
I also, on that same topic, I had a discussion last night with friends.
And maybe you're aware of this.
I was not.
But it seems that when you're prompting and creating an image, some of these models, some of these SaaS offerings, have a second agent
that then validates the output.
Is this really what was generated, what was asked for?
And then they're basically going back and forth
until really something comes out before it actually gets pushed back to the end user.
And I really like the idea when you think about software engineering,
when you think about you describe what type of UI you want,
what type of user flow you want, right?
Maybe we then also need to create agents, and we need to think about functional testing
and UI validation as an agent too, one that automatically
gives feedback to the agent that generates the code:
hey, is this according to our specs,
according to our colors,
our design principles, and so on?
And does everything work?
So I really think, and I know, Wolfgang,
that both of us have a big background in software quality,
in testing.
I think testing is more important than ever,
but I think the methods on how we test also need to change
because there's so much more stuff being generated
and therefore we can no longer assume
we can simply update test scripts by hand, because we cannot update them fast enough.
But we need to think about how can we use agents that validate.
And I think this is, I'm not sure, maybe this is already all out there.
But if not, this could be obviously the way we're leveraging agents across the whole
T-shaped software delivery cycle, right?
I mean, it's...
Yeah, Andy, I'm already running experiments in my private life, with hundreds of euros going into them,
with 17 agents actually simulating a company with all of the different roles.
And I started with one agent, with one assistant I was talking to, which then was simply
flooded with too much of context, too much of instructions.
So I needed that kind of divide and conquer.
So I needed one agent that was specialized, that was instructed to do proper coding.
one that was instructed and had memories and all of the context to do proper requirements
engineering.
And also then the next one that was doing the testing, the next one that was instructed
to do the proper observability, to write some policies regarding quality, regarding
performance requirements and all of that.
What I experienced is it is like working with a set of people.
You cannot give one single individual
all of the instructions, starting from a business idea down to doing the proper testing and
then operating all of that.
That's simply too much for one individual to first of all know all of that how to do, but
also even if I would have the knowledge, I would be not sleeping well thinking about all of those
steps all the time.
So I need that divide and conquer with humans.
And the very same applies to those agents I'm experimenting with right now.
They need a certain focus, a task, so as not to have context overflow, and to have a really small and narrow focus on what they should look at.
And that's exactly what you are also describing now with agents that are doing testing.
Yeah, that should not be the same agent, maybe not even the same model, to Benedict's point about how the models are trained.
It should be a different model that does the testing, because it would have different
things in mind: in memory, in context, in how it does all of
the thinking when it decides how to properly test.
Benedict, I think you have a lot of experience with that.
Yeah, actually, I wanted to ask you, because I'm a control freak in a way, especially
when it comes to my agents.
I actually want to know what they're doing
and why they did it,
and I want to know the reasoning.
And I want to learn along the way, basically.
And every time you tell me that you have like this fleet of agents running,
I'm like, oh, my God, like he must have sleepless nights
because all these agents run off and do stuff.
So I just want to ask you, like, how do you keep control of them?
And especially how do you make sure that they don't get into each other's way?
Well, they do.
That's part of the whole experiment.
to find out where the boundaries are, where I need to divide and conquer for certain roles,
where I need more restrictive rules and instructions to keep agents from,
yeah, working out of their bounds, out of their guardrails.
And yes, I mean, I do not want to make any advertisement for my employer,
but yeah, you need a lot of observability to that fleet.
And I would really be lacking observability
if I did not apply the principle that I need to know all of the traces, the thinking processes,
all of the events when an agent really does something, like filing a pull request.
I need all of that visibility to keep track of what is actually going on.
Where did my hundreds of euros flow now?
I mean, I see a website at the end, I see a software product at the end,
but yeah, what happened in the meanwhile?
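A minimal version of the fleet observability described here might be nothing more than structured events with cost attribution. This is a Python sketch; the event schema and euro amounts are illustrative assumptions, and a real setup would export traces to an observability backend rather than keep them in memory.

```python
import time


class FleetTracer:
    """Records every significant agent action as a structured event."""

    def __init__(self):
        self.events = []

    def record(self, agent: str, action: str, cost_eur: float = 0.0):
        self.events.append({
            "ts": time.time(),     # when it happened
            "agent": agent,        # which agent acted
            "action": action,      # e.g. "filed pull request"
            "cost_eur": cost_eur,  # model spend attributed to the action
        })

    def cost_by_agent(self) -> dict:
        """Answers 'where did my hundreds of euros flow?' per agent."""
        totals = {}
        for event in self.events:
            totals[event["agent"]] = (
                totals.get(event["agent"], 0.0) + event["cost_eur"])
        return totals
```

With something like this in place, "I see a software product at the end, but what happened in the meanwhile?" becomes a query over recorded events instead of guesswork.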
Very interesting.
And one fun fact that I experienced, and yes, I did not check all of the details,
I just saw the final result where my agents in my experiment, they share one GitHub user.
So they all worked with one user on the GitHub repositories, which ended up in, they got confused.
Because on GitHub, there was one issue assigned to my AI user.
And I had not properly flagged which agent should pick up that issue.
It was the wrong agent who picked that issue.
It was a coding task.
It was a coding issue.
But my marketing agent took that task, took that issue.
With the funny outcome that, well, the marketing agent, as it was instructed,
did not produce code.
It created a README file with a to-do inside: needs implementation.
So, fun fact, it ended up as a simple file with a to-do remark inside, because it was the marketing agent.
And yes, that happens.
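The fix this anecdote suggests can be sketched as explicit routing: issues carry a role label, and an issue without exactly one recognized label goes to a human instead of whichever agent polls first. The labels and agent names below are made up for illustration.

```python
# Hypothetical role registry; in the experiment described above,
# each role would be a separately instructed agent.
AGENT_ROLES = {
    "coding": "coder-agent",
    "marketing": "marketing-agent",
    "testing": "tester-agent",
}


def route_issue(labels: list) -> str:
    """Route an issue to exactly one agent, or escalate to a human."""
    matches = [AGENT_ROLES[label] for label in labels if label in AGENT_ROLES]
    if len(matches) != 1:
        # Unlabeled or ambiguous issues go to human triage, so a coding
        # issue can never end up with the marketing agent by accident.
        return "human-triage"
    return matches[0]
```

The design choice is to make the assignment rule explicit and auditable, rather than letting a shared GitHub user and first-come polling decide which agent takes the work.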
That, Benedict, to your point, is exactly where, as of now, as a manager of a group of people,
you can have a lot of trust, because they have empathy.
They depend on the job and the money they get from the job.
So in the end they are also reliable employees.
But how can we make the agents feel or behave similarly?
Because, yes, human also might act out of bounds
and maybe even with bad things in mind.
Agents could do the same.
But at the end, there is a lot of trust to people
because there is all of the social relationships you have.
So many aspects to the human-to-human relationships that we simply don't have with agents.
So how to replicate that to bring more trust into what the agents do.
An interesting aspect.
To be honest, I have no experience with that for my agent fleet.
I simply know I need a lot of observability to at least see what they are doing.
So if I can quickly recap on this: it feels like we are replicating processes
and roles that have worked over the last decades, right?
A good software development process starts with a good understanding of what problem you need to solve,
good requirements, good definitions of how you're implementing these features, good testing.
On the other side, instead of the T-shaped persona
that is responsible end-to-end,
we also have, I think they're called the I-shaped, the ones that go deep.
We have the individual personas that really know their stuff really well,
whether it's speccing, whether it's coding,
whether it's testing, whether it's writing marketing material.
And we basically replicate this with different models
and different agents, right?
I have a question now, and I think maybe this goes to Benedict.
Because I have, over the last couple of weeks,
it feels like it could be a strange perception,
but it feels like a lot of people are reaching out to me on LinkedIn,
especially based on the pictures that I see,
they are rather young engineers.
And they're reaching out and they ask me for advice,
what should they do in the time of AI?
Because they're afraid, right?
What does this mean?
Does this mean that they will not have a job at all anymore?
Or do they need to, what do they need to learn?
What do they need to survive in this day of age?
And I just wanted to get an understanding of what do you see out there?
How do we make sure we're not neglecting the next generation of juniors,
which will then become the T-shaped engineers that we need for the future software development?
Yeah, that's a really good point, Andy.
And I think there is also out there, I think, a lot of miscommunication.
Because I think there is a tendency to see the glass half empty on that.
I see that internally to people coming to me or designers saying,
well, what do we now need
designers for?
Because the agent can just design
for you, right?
But I think it's the wrong
perspective to look at it. I think the right
perspective is that this thing is an
enabler. Like you can do more now.
As a designer, you don't need a programmer and
a product manager; and
as a coder, maybe you don't need a product
manager and a designer anymore. You can just do it
yourself. So I think
my message would be
a very encouraging one: engage with these tools and learn them. AI engineering is a skill
that has to be learned. Usually people test it out, get all excited,
and then they end up building products that are Frankensteins, unmaintainable systems;
or the AI content explodes and produces way more than you could ever review.
You need sound engineering principles for AI development. And I think that's the really important
point there. Develop your skill, maybe not the same way as you did before, but at least add
AI engineering, add that skill set to your repertoire of skills, because I think this is the most
empowering technology, and not one that rationalizes all of us away. I think that you can do so
much more with this, you can really bring your visions to life. And in order to do that, you need to
experiment, you need to engage with it.
I think it's completely the wrong approach to just say,
now I'm checking out, because
everything is done already. I think the very
opposite is true. I think that software becomes
a lot cheaper to produce, but that also means
that everybody can do it, and everybody can throw a product out there,
go to market with it and experiment with it.
And I think that that would be my big vision for it.
And I try to enable as many people as I can internally as well as hopefully externally too
with this because I think for me it's one of the most exciting technologies and one of the
most empowering ones for the individual.
I want to, I have a follow-up question, but I first want to recap because also when I get
these messages on LinkedIn, I typically tell them: you know, A, learn the basics; have an
understanding of why in 2025 we have certain development practices:
why we talk about CICD, about quality gates,
why do we need to have good tests,
why do we need to have good observability and resiliency?
Because if you don't understand this,
independent on where you are right now
and in which direction you're expanding,
I think it will be very hard for you
to really then take ownership of this end-to-end process.
So cover the basics, understand this.
But now I have a question for you.
You just said, obviously, software engineering
will become more affordable, cheaper,
more people can create software.
What I observe, and I'm sure you've seen this as well,
with our customers, within our organization,
the same problem is all of a sudden not solved once,
but five, ten, twenty times,
because everybody can solve the same problem
by creating yet another tool that does the same thing
or in our case an app that does the same thing.
So how can we make sure that in the end we are
not ending up with 10,000 time-killer puzzle games,
where every game is only used once, by its creator?
How can we make sure we're not ending up with 10,000 versions of OpenClaw,
which doesn't create any good ecosystem
because we don't find any big community to actually drive it?
How do we make sure that the open source communities
that have built and found themselves over the last decades,
and that were driving standards,
are not lost
because we all,
all of a sudden, individually go
egoistic: I just solve
the same problem myself and I'm good,
and I'm not collaborating anymore, not trying to
find a common solution. So,
trying to play devil's advocate a little bit here.
It's great that we can solve the same
problem 200 times, cheaper,
but how can we make sure
this is not backfiring?
Yeah, that's a great question, I think.
And it's not an easy one to answer either.
I think there's probably
no simple solution to this problem.
I think that obviously this is currently a hype,
and a lot of people are just doing what you just said,
well, I can solve this problem myself.
Why do I need other products now anymore?
And I think a certain part of that is actually also true.
I think that AI can also help you to instrument open source solutions
where you would maybe in the past have used the paid product.
So, I mean, these things are not new to AI.
These problems have always existed in a way.
And there is a free open market that has always weeded out the products that were unfit, right?
So this has always been a topic. But internally at the company, what at least we are trying to do is have forums,
have discussion groups, engage with the technology, and then share everything
we can, having good practices in sharing not only our knowledge but also the things we built,
the things we've learned along the way, and then collaborate on it. Because ultimately,
I think the one thing you said is true: people can build
products a lot more easily. But I think the bar for quality products will increasingly be higher.
So figuring out how to collaborate and building better products is actually
the key to all of this.
At the end of the day, our products will be
significantly more capable in the future
and the bar will be way higher
for people to actually adopt them.
And I think that is the end game;
this is where
we will be driving, because
you're right, like nobody will
be interested in the millionth version
of OpenClaw, but everybody
will be interested in the super powerful version
that our colleague Peter
is developing at OpenAI.
And that has to be
the really solid product that is sustainable.
Thanks for that. Wolfgang, any final thoughts from you on this topic?
Yeah, fully, fully agree. I would have simply said, we don't know and we will learn on the way
what that really means as we learned with many other domains and markets in the past.
And just to underline what Benedict said, yes, there is competition. And I think what now comes
much more in focus of competition is not the actual technology, how to build software and how
does it look, but really what value it provides. In the end, with software products arising everywhere,
you pay much more attention to what it really solves for you. What's the real value? And I think
the easier it becomes to create software products, the more important it becomes what they provide:
how does it really save my time, or how does it really make my life better?
And yes, competition, to be honest, will be hard, will be hard for many companies.
But I think a lot of the market will remain open to broad creativity.
Because if there hadn't been Peter (Benedict, you mentioned him), he said himself
that he was wondering why the big vendors hadn't already come up with the ideas he
had. So he was building it with the community, and suddenly it became kind of a game changer
for many. And I think that will happen much more often; it's not just the big vendors
who are capable of disrupting something. Given the power of AI, crafting software products
becomes a commodity, but the best ideas will survive.
And for everybody that has escaped all the hype around OpenClaw:
when we talk about Peter, that's Peter Steinberger, an Austrian software engineer
who obviously made big news and made it to Silicon Valley with his OpenClaw project.
Guys, thank you so much for this.
I know we are just in the beginning phases of this revolution.
It's no longer just an evolution.
I think it is a revolution, because it will change a lot of things.
We can obviously see it every day with the new models, with the new tools, with new products
coming out.
It's really hard to keep track of everything, to be honest with you, from my perspective.
I'm sure you see the same thing.
But I really like the encouraging words of Benedict.
I think we are, we need to look into the positive side of all of this.
What this enables us to do: it really enables us to fulfill our dreams, to build our own.
Not to steal
the tagline of one of the
auto manufacturers, but "build your dreams"
also becomes true now very easily
with software.
And I like your quote, Wolfgang.
I think you said
the one that will win
is the one that can show the value of the software,
not how it was built.
What is the value that you provide? Which
problems do you solve? In the end,
nobody cares how you built it; we will all be able
to build it, obviously much more efficiently.
There's probably a little side note to that, because it's not just going to be the value to the human,
but also the value to the agent in the future. True, yeah, that's a good point. Cool. Guys, thank you so much
for your time. Also, thank you to Brian Wilson, who couldn't be with us today, but he was the one
who triggered this whole discussion; he said he wanted to have this conversation. So Brian, hopefully
you were able to listen to this episode to the end, even though you
were not co-hosting. And Benedict, Wolfgang, thank you so much. Keep us posted. I know,
And any other links, anything else that you wanted to share, just let me know and we'll
add it to the show notes. We do. Thank you, Andy. Thank you very much, Andy.
Thank you. Goodbye. See you. Bye. Bye. Bye.
