PurePerformance - Don't babysit your AI Agents to keep them on track with Lukas Holzer

Episode Date: April 27, 2026

AI coding agents are fast—but speed alone doesn't guarantee quality. In this episode, Andi Grabner talks with Lukas Holzer (Straion) about why large context files and "almost right" AI code create new risks for engineering teams. You will learn about the "Lost in the Middle" syndrome and why many organizations are not getting the promised 10x engineering boost right now! Andi and Lukas also explore rule adherence, dynamic context generation, enterprise readiness for AI-first development, and how software engineering roles are evolving in the age of AI. Tune in to learn more.

Links we discussed:
LinkedIn Profile: https://www.linkedin.com/in/lukas-holzer/
Straion Website: https://straion.com/
90 Percent Rule Blog: https://straion.com/blog/90-percent-rule-adherence-straion-coding-agents/
1 Million Tokens Blog: https://straion.com/blog/1m-tokens-wont-save-your-engineering-standards/

Transcript
Starting point is 00:00:00 It's time for Pure Performance. Get your stopwatches ready. It's time for Pure Performance with Andi Grabner and Brian Wilson. Welcome everyone to another episode of Pure Performance. As always when something like this happens, it's not the sexy voice of Brian Wilson, but just the average regular voice of Andi Grabner welcoming you to another episode. Unfortunately, Brian can't make it today. But you know, it is the time, or I guess it's the time
Starting point is 00:00:44 of our industry right now that we are optimizing things away. Maybe we're just optimizing Brian away with AI. I hope not, obviously, because we want the human touch. Talking about human touches, I have invited somebody
Starting point is 00:00:58 that actually doesn't live too far away from me. Lukas Holzer. Servas, Lukas. Hey, nice to meet you. Hey, hey, Lukas. Obviously I knew that you are also based in Linz.
Starting point is 00:01:10 I didn't know that you are living really close, just maybe two kilometers away from here, or a kilometer and a half. Lukas, the two of us, we had a time when we both worked
Starting point is 00:01:21 at Dynatrace, but you have now started a new adventure. Could you quickly just remind me a little bit about your own background
Starting point is 00:01:31 before we jump into the topic, which obviously has to do with AI. So it was not a joke to start it like this. AI is upon us, and you are doing some really cool stuff
Starting point is 00:01:41 at Straion. But first, maybe some background. Who are you, and why do you think you're here today on my podcast? Yeah, thank you. As I said, Lukas is my name. I'm one of the three co-founders of Straion.
Starting point is 00:01:55 So my background is originally not from the IT field. I studied graphic and communication design at school, and I somehow managed to get into IT and get a good job at Dynatrace, where I learned a ton about engineering and software development. And yeah, at some point the CEO of Netlify, a big Silicon Valley company, just landed in my inbox and asked me if I wanted to build a build system there, which was super exciting. Things were moving very quickly then, and I've always been in sync with my other two co-founders,
Starting point is 00:02:33 because all three of us worked at Dynatrace together. That's Fabian Friedel, who worked at Dynatrace until the end, and my other co-founder, Katrin Freyhofener. She had a longer stint at Elastic. Probably everybody knows it from Elasticsearch, kind of a competitor of Dynatrace back then.
Starting point is 00:02:56 Yeah, and all three of us have seen this problem in the AI space, especially when it came to spec-driven development. So that was where we started the whole journey with Straion, in the field of, you know, back then everybody wanted to build software with AI from a two-liner, and this doesn't quite work.
Starting point is 00:03:16 And this was the time when specs and RFCs and everything were quite a thing. And we've seen that at Dynatrace, Elastic, and Netlify. This was one of the early starting points of it all. It's so funny when you say "back then" about what we thought about AI coding or vibe coding. Back then is not that far in the past. So this also shows us how fast things are changing. Now, you mentioned you founded Straion back in 2024? Yeah, end of 2024.
Starting point is 00:03:47 We officially founded it. And we were pretty lucky to get a huge governmental grant from the state, basically, so we could all work full time from the beginning of 2025. Cool. Hey, when I look at some of the material that you've put out there, you have some really cool slogans. And I think this is also a great frame for our discussion today. Now, we're not going to use this as a sales opportunity today.
Starting point is 00:04:16 Rather, I want to discuss with you which real problems you are solving. We are just a couple of years into the whole AI-native coding era, or whatever it's going to be called in the future. You basically say AI coding agents are fast, but sometimes they're going off in the wrong direction. And I've experienced this myself: it's amazing
Starting point is 00:04:43 how productive you can be, but sometimes you're just producing stuff that actually either nobody needs or stuff that doesn't work as expected. You also have some really interesting stats that you are showing, and I think these are stats, you also have the links, from Stack Overflow, from CodeRabbit, from Pullflow, that a lot of developers are very frustrated. I'm quoting here:
Starting point is 00:05:02 66% of developers are frustrated by AI code that's "almost right." We see a 23,500 percent increase in incidents per pull request year over year. And 70% more major issues in AI-authored PRs. And this feels like a reality that we didn't anticipate, huh? I mean, I kind of did. I think if you check Reddit or those developer forums where you see
Starting point is 00:05:39 people using AI and how they're using AI, it kind of explains why GitHub has probably the worst uptime ever, and why big companies like Amazon have really big outages. Because, you know, as a developer, I had so many pull requests in my life that I just waved through or rubber-stamped because they looked almost right, and I didn't look very deeply into every detail, because it almost looked right. And I think that's the same now with AI producing code that doesn't adhere to your engineering rules.
Starting point is 00:06:19 But let me ask you a question. I've been in the industry since 1999, right? I started my engineering career as a software quality engineer. So while I was a software engineer, I had to start in the software quality department, which taught me a lot about software quality. Over the years, I advocated a lot for automated testing and good CI/CD. Shouldn't we have all the tools available that can automatically detect almost right but not completely right code, whether it's written by a human or an AI?
Starting point is 00:07:01 I mean, that's a really good question. Like, to be honest, I think the point is who defines what good quality code is. I think that's the first thing. And I think if you're a solo dev that works on a side project, you're probably really
Starting point is 00:07:20 satisfied with what Claude gives you. Like, if you have a side project and you just ask it to build something. I mean, I have friends that are not in the IT industry at all. They don't know how to program, but they are currently capable of building dashboards
Starting point is 00:07:36 and small apps to solve some quick use cases. And if you would ask them, they'd say it's perfect. They are now capable of building things that they previously couldn't do. And it looks good and it feels good. They can deploy it. And it works. I even know a friend, a very good startup founder friend, that has a finance startup
Starting point is 00:08:00 with significant revenue that is 100% vibe-coded. And they have customer traction. And it kind of works. It's just, I think the problem gets bigger in that the quality definition of larger enterprises is quite a different one from small or mid-sized companies. Yeah, and I want to play devil's advocate here a little bit, so maybe you can tell: I want to poke some holes into what I have been trying to advocate for for the last 20 years, right? This is investing in big foundational aspects of software engineering. And as you said, I think many large organizations that have grown their software engineering practices over the years,
Starting point is 00:08:50 some of them are really good, but some of them also now, I think, feel it even more where they didn't invest in good automation, in good CI, so continuous integration, in good CD, in good automated testing. Because nowadays, as you said, there's the promise of the 10x
Starting point is 00:09:17 productivity gain. So if we are using coding agents and we're producing 10x the code, 10x the functions, the features, but we have holes in our CI/CD, it's going to be really hard to catch all of this. It's probably impossible, because we were not able to catch it before, and before, the output was smaller. Now it's going to be really challenging.
Starting point is 00:09:32 I think the point is, even if you have the best CI/CD, you cannot catch everything. That's something we focus on a lot with Straion: there are so many rules that are not lintable or testable, because
Starting point is 00:09:48 normally an engineer would review it based on their experience and their knowledge and tell you, hey, that's the way we write software here. And that's not lintable via an abstract syntax tree of the code,
Starting point is 00:10:04 or it's not testable because it's such a detail. For sure, there could be some tests, like every login flow needs a social login or something like that. That could probably be tested very well. But I think the nuances there are pretty hard. And I mean, test times in CI have probably risen or spiked a bit lately.
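To make the distinction concrete, here is a toy sketch, in Python, of a rule that is mechanically checkable via the abstract syntax tree Lukas mentions, say, "never call print() directly, use the shared logger." A rule like "that's the way we write software here" has no such check. The rule and the code are illustrative only; they are not from Straion.

    # Toy AST-based check for one mechanically enforceable rule:
    # "no direct print() calls; use the shared logger instead."
    import ast

    def find_print_calls(source: str) -> list[int]:
        """Return line numbers of direct print() calls in the given source."""
        tree = ast.parse(source)
        return [
            node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "print"
        ]

    print(find_print_calls("print('debug')\nlogger.info('ok')"))  # -> [1]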
Starting point is 00:10:28 Yeah, that's true. So then, Lukas, let me ask you the question. If coding agents promise the 10x productivity, but now engineers spend time keeping them on track. That's also one of the lines I stole from your deck and your website; you've flagged it as the hidden cost. What can we do? And what can especially those organizations do that have been around for a while
Starting point is 00:10:56 and are now facing the challenge that they have to become an AI-first company, right? I think that's the big challenge: everybody wants to become AI-first. How can they make sure they're not driving in the wrong direction at high speed? I think there are multiple angles. I would first start with tracking progress. There are some cool DORA metrics that really tell your engineering organization something. There are companies, I don't want to advertise, but Jellyfish, for example:
Starting point is 00:11:25 they can really give you very good insights into how long it takes from creating an issue until it's really deployed. And those metrics you should probably have in place up front, so that you have something to benchmark against, and can see whether the 10x productivity gain that they are all advertising really holds up later.
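As an aside, a minimal sketch of the lead-time metric Lukas refers to, issue created to deployed. The timestamps below are invented for illustration; in practice they would come from your issue tracker and deployment pipeline (tools like Jellyfish aggregate this for you).

    from datetime import datetime
    from statistics import median

    # Hypothetical issue records: creation time and deployment time.
    issues = [
        {"created": datetime(2026, 3, 2, 9, 0), "deployed": datetime(2026, 3, 5, 16, 30)},
        {"created": datetime(2026, 3, 3, 11, 0), "deployed": datetime(2026, 3, 4, 10, 0)},
    ]

    # Lead time per issue in hours; the median is the baseline to compare
    # against once coding agents are rolled out.
    lead_times = [(i["deployed"] - i["created"]).total_seconds() / 3600 for i in issues]
    print(f"median lead time: {median(lead_times):.1f}h")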
Starting point is 00:12:04 And then I think requirements engineering got so much more important as a daily job. We are talking to a lot of big companies, and often we hear that the engineering standards or rules, first of all, are scattered across multiple sources, or in the worst case, they're just in the heads of single individuals. And I think that's a big problem if it's just in the head of someone, because if this person leaves your company, this knowledge is completely lost. And it's completely lost for an AI agent as well, which then can't stick to these rules. And I think you need a good documentation policy, where you have one central place where all the rules are and everybody knows where to find them. Because that's the next thing: otherwise everybody cooks their own thing, and every team does their own documentation or builds their own knowledge base.
Starting point is 00:12:44 That's not the good way either. I think having one central place where all these rules are stored is already a great start, for humans and for AI agents later on. So then, a couple of months ago, I started down the track of not only using AI, but also trying to figure out how I can enable others. As you may know, I'm typically trying to do some advocacy for engineers on observability best practices. I have a background in performance engineering.
Starting point is 00:13:17 So I try to tell people: if you are a performance engineer and you have your experience of 5, 10, 15, 20 years, the best you can do right now is, as you said, write this down in a way that not only benefits the next junior engineer that you're hiring and getting on the team, but also the next AI. So I've basically been advocating for creating your
Starting point is 00:13:41 agent files, your skills, your prompts, and so on; I guess I'm not up on the latest terminology. But I think the idea would be: if I am good at my job over the years, I should make sure
Starting point is 00:13:58 to not only train the next human, but also write it down in a way that an AI can also benefit from my knowledge. So my question to you now: am I on the right track by advocating to write everything into one big, large agent file, and then I'm done with it? Or is there still something that I need to take care of? I mean, I think it's a very good practice
Starting point is 00:14:20 to have it written down somewhere. I think advocating for that is really good. The point is, if you then have huge, gigantic context files and you just pass them to Claude and think it will magically follow all those rules, sadly, this is kind of not the truth. So I'm not sure if you've seen the recent hype on LinkedIn, Reddit, or Hacker News about the ETH Zurich paper saying you should basically delete those
Starting point is 00:14:49 context files because they are making things worse. In general, there is a big problem with those transformer models, and it's the lost-in-the-middle syndrome. Those models are very good at pick-and-choose: if you have a huge context that's full of information and you know you need one single thing, this pick-and-choose works pretty well. They can find it. That's kind of the needle-in-the-haystack test, which those models are really good at. The other part is if you take a look at how capable models are of retrieving parts of the information
Starting point is 00:15:25 based on their placement in the context. If you map the whole context window onto an x-axis, from the beginning until the end, which is the user prompt, then models are really good at recalling the first parts of the axis, where the system prompt is. And they're very good at recalling what was the last addition to this context, the user prompt. But the further the information sits toward the middle, the middle 50%, the more their capability of retrieving all of this information drops off. So I don't want to say that big context sizes are bad per se,
Starting point is 00:16:10 because I think a one-million-token context size is great if you can dump a whole code base in and just say: give me an architectural overview. It basically elevates the flight level. But in terms of following all your engineering rules, I think it's the wrong thing to do, to be honest. By the way, I'm not sure if you also have this on your website, but the PDF that you sent me, is this something we can also share? Yeah, the deck you can already share.
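To illustrate the U-shaped recall Lukas describes, here is a minimal sketch of one common mitigation: keep the critical rules at the edges of the context, where recall is strongest, and restate them right before the task. The prompt-assembly function is invented for illustration; it is not how Straion or Claude actually builds prompts.

    # Minimal sketch: place critical rules at the start (system prompt) and
    # restate them at the end (just before the task), since recall is weakest
    # for content buried in the middle of the context window.
    def build_prompt(critical_rules: list[str], bulk_context: str, task: str) -> str:
        system = "Follow these rules:\n" + "\n".join(f"- {r}" for r in critical_rules)
        reminder = "Reminder: the rules listed at the top still apply."
        return f"{system}\n\n{bulk_context}\n\n{reminder}\n\n{task}"

    print(build_prompt(["Use the shared logger."], "<huge repo dump>", "Add a login flow."))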
Starting point is 00:16:42 We just use it when we're talking to clients, to basically have some information there. Yeah, because you also mention this here: exactly, the U-shaped form, the ETH Zurich research. Obviously, you're doing a great job in educating people. The AGENTS.md and CLAUDE.md files don't solve the real problem. You have this big drop in accuracy depending on the size of the context and on where information in the context is stored. So if I hear this correctly, Lukas: instead of me taking my 25 years of experience
Starting point is 00:17:21 and dumping it into a large agent file, I should rather think about breaking it up into individual pieces and then making these individual pieces better findable and searchable. How does this work? Is there a good or best-practice way for me to structure this? I think there are multiple evolutions of this. Having it written down is already great, because with this information you can already go one step further. I think the poor man's approach, which works in a lot of cases, is breaking it down into skills, as a simple solution. We always say: if there are not too many rules, for smaller companies, you don't need Straion, for example.
Starting point is 00:18:11 You can easily achieve that with a skill, because if it's just 10, 15, 20 rules or something like that, you can put them into a skill, and the skill will be loaded based on a one-line description. The point is, this is already a bit fragile. For the audience, if you don't know how a skill works: it's basically a collection of Markdown files, that's the context, and then you have a description for this skill.
Starting point is 00:18:38 And this is a one- or two-sentence paragraph, and based on this paragraph, which is always loaded into context, the agent will decide whether it loads the other Markdown files into context or not. That said, if you have a lot of skills, all those two-line descriptions will always be in your context on every prompt. And if you've seen the latest Claude Code leak, where the source code was leaked: these descriptions even get re-added after every prompt to enforce that the model knows them ahead of time, precisely to fight against this lost-in-the-middle syndrome.
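For listeners who want to see the shape of this, here is a minimal sketch of a skill on disk. The directory layout and frontmatter follow the Claude Code skill convention Lukas describes (an always-loaded description that gates loading of the Markdown body); the skill name and rules are invented for illustration.

    from pathlib import Path

    # Lay out a minimal skill: a folder with a SKILL.md whose frontmatter
    # 'description' is the short, always-in-context trigger text; the body
    # (the actual rules) is only loaded when the agent decides it's relevant.
    skill = Path(".claude/skills/engineering-rules")
    skill.mkdir(parents=True, exist_ok=True)
    (skill / "SKILL.md").write_text(
        "---\n"
        "name: engineering-rules\n"
        "description: Our coding conventions. Use when writing or reviewing code.\n"
        "---\n"
        "\n"
        "- All public functions need doc comments.\n"
        "- Use the shared logger, never print().\n"
    )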
Starting point is 00:19:11 So this is already a good approach for smaller things. And for larger enterprises, that's why we built Straion. Because what we do is dynamically load the right rules at the right time. We basically have a matching algorithm, a machine learning pipeline, that is capable of saying: hey, that's my scratchpad in Claude, or that's the task; out of 5,000 rules, what are now the five important rules for this one single task? And by that, you don't have to worry about when to use which rules,
Starting point is 00:19:51 and you don't have to load backend rules into a frontend task. Or we can go even way deeper: if you have a React frontend, or you have some other frontend rules, we just know, hey, now just those three or five React rules are important. So our bet is a bit contrary
Starting point is 00:20:09 to the whole industry, because we say: the smaller, the sharper, the narrower the context, the more the models will follow your instructions. Well, it also makes sense. If I pick an analogy: if I had recipes for every dish in the world, and, let's say, recipe suggestions for every spice in the world,
Starting point is 00:20:35 then I could take all of these and create different skills, maybe thousands of skills, because there are thousands of spices. But if I am here in Linz, Austria, and I know I need to create a dish for someone that has always eaten pork in his life and doesn't eat anything else, then I can probably get rid of 90% of the recipes, and of certain spices or ingredients that I don't even need. So basically the context would be: who do I need to create something for? That already
Starting point is 00:21:08 tells me what to then look at and also what to exclude. I think the important part is also what to exclude and what to focus on. So you're basically saying: a pre-selection of what is important for the task that is at hand. And you are doing this by continuously doing machine learning on all of the skills and all of the instruction files
Starting point is 00:21:29 that are out there. And then when instructions are needed, when context is needed, then you are dynamically creating the right context for that problem. Do I get this right? Yeah, exactly.
Starting point is 00:21:42 So we had a great partnership here with the University of Applied Sciences in Hagenberg and their AI lab. Together we developed this algorithm, which is our proprietary software. And that's our USP, to be honest, because competitors don't rely on this smart matching. They rely more on you having to place things in certain folders or directories. But we just think engineers shouldn't worry or care about where those rules live.
Starting point is 00:22:16 And it should just work. Cool. I've got a question for you then. So, if I go back to what I learned today: I understand that the 10x productivity gain promise doesn't work for most
Starting point is 00:22:31 organizations, because the context got so big. We're creating a lot of stuff, but the quality of what we're creating is not great, because the context is too big. Also, large enterprises may not have all of the checks and balances in place,
Starting point is 00:22:47 the good CI and CD that can validate and see that this is actually not good stuff. Now there's your approach, where you dynamically generate the right context. If I now think about this: how do I know which context was used when I created certain code? Because maybe later on I need to somehow figure out how this was generated. Do you keep track of what you generate? Do you generate it every time, for every prompt? So can you give me some technical details on, A, when you're creating this? And also, is the generated and provided context somehow persisted and stored in my session,
Starting point is 00:23:32 so I can also go back in history and understand why certain things were created? That's a really great point. Actually, that's something we're currently working on, which is not released in the product yet. Our team is heavily working on it; it's called trajectory files. That's kind of a new standard that is establishing itself, which collects all the information about your session. And we are currently in the process of visualizing that and showing you exactly those traces, so that you see exactly: hey, in this session, for this pull request or for this code change, those rules were taken into consideration, and those rules were not needed at all.
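As a rough illustration of the idea, one entry in such a trajectory file could look like the record below. The schema is entirely hypothetical; Straion's actual trajectory format was not released at the time of this conversation.

    import json

    # Hypothetical trajectory record: which rules were offered to the agent
    # in a session, and which of them it actually applied.
    entry = {
        "session_id": "abc123",
        "change": "pull-request-4711",
        "task": "Add social login to the signup flow",
        "rules_considered": ["auth-001", "react-007", "logging-002"],
        "rules_applied": ["auth-001", "react-007"],
        "rules_unused": ["logging-002"],
    }
    print(json.dumps(entry, indent=2))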
Starting point is 00:24:13 For us, we see this especially in the beginning, where we have to gain trust in the quality of our system, so that we can really provide this trust in this form to the engineers. Cool. Yeah, for me it obviously makes a lot of sense, because thinking of all the organizations that I've worked with, I think this would be one of the questions that comes up. Because if I have all of my instruction files, my agent files, in my source code repo right now, I basically know at which time I had which instructions.
Starting point is 00:24:53 It's easier for me to audit and to basically also show to the compliance department what the input was. But yeah, that's a cool thing. I mean, the great thing with what you have built here is that there are clearly studies that show the technical reason, the technical problem, why these agents are super fast but sometimes go off in the wrong direction, and you have a very nice solution. Now, technically, can you still explain to me: when does your generation kick in?
Starting point is 00:25:24 At which point does it kick in? So, we said we want to keep it as simple as possible. All you have to do is have all the rules in Straion, and the developer just installs a CLI, and the CLI is invoked via a skill. On setup, we just create the skill, and the skill is just a wrapper that tells Claude,
Starting point is 00:25:44 Cursor, or GitHub Copilot how to use the CLI. So that's basically all. So we are using a skill as well, but a skill for instructing the agent how to work with the Straion CLI. And the Straion CLI then has different commands,
Starting point is 00:25:57 like get rules, find rules, etc. And these commands are then used, basically, in Claude: you can tell Claude, hey, let's work on
Starting point is 00:26:11 this issue, and please follow all the rules. So it will pick up the Straion skill and give it everything it has in the scratchpad. Claude has kind of an internal memory scratchpad. And this will be passed by an API call to our backend. Our backend then runs our pipeline, passing that in and giving you back the five, ten rules that are now important for planning this whole thing. And with these rules, it will start the plan. And then later on, it will validate the plan with a different hook, where it gives the finished plan, or
Starting point is 00:26:46 the spec basically, back to our backend, and it will again pass back the rules that are needed. And the cool thing is: we don't do the validation, we don't do the judgment, we just do the matching. So your coding agent,
Starting point is 00:27:02 your Claude instance with your own model, is doing that. So this works very well with bring-your-own-model: you don't have to pay us for token usage or whatever, because it runs solely on the plans that you already have in your company, on Claude or GitHub Copilot or whatever works best for you. And Claude is just getting the right context to accomplish a good result. And that's it.
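Schematically, the round trip Lukas describes could look like the sketch below: the agent sends its scratchpad or plan to a matching backend and gets back only the handful of relevant rules, while all judgment stays with the agent's own model. The endpoint, payload shape, and function name are invented for illustration and are not Straion's actual API.

    import requests

    def fetch_matching_rules(scratchpad: str) -> list[str]:
        # Send the agent's current task/scratchpad to a (hypothetical)
        # matching endpoint; only the top-matching rules come back.
        resp = requests.post(
            "https://api.example.com/match-rules",  # placeholder URL
            json={"task": scratchpad},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["rules"]  # e.g. the 5-10 rules for this task

    # The coding agent then plans with these rules in context and later sends
    # the finished plan back through the same matching step to re-check it.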
Starting point is 00:27:29 We don't do the validation, Claude is doing that. We just give Claude all the information it needs to get to the right result. Do you also keep track, if it's possible, of which type of context that you pulled in
Starting point is 00:27:49 really made sense, and which context was never used? That's exactly what I meant with the trajectory files. We do self-improvement here, on: hey, this was helpful, or this was not helpful.
Starting point is 00:28:04 Think of reinforcement learning for this. That's cool. And what we've especially seen so far in our benchmarks as well: it really is important how a rule is formulated. You know, the articulation of a rule can heavily influence how well it is picked up or followed by a coding agent. And this is something a lot of people underestimate when they are just writing those
Starting point is 00:28:32 files on their own, because they think: hey, I'm just writing it this and that way. And often the results are very bad, depending on the language they choose, for following these rules. And that's a huge thing we're focusing on here at Straion as well: to benchmark and validate what a really good way is to express this in a machine-readable format. So, what's the best way to pass all this information to the coding agent so that it really follows all those rules? Now, you mentioned you have some benchmarks. Do you have any numbers on what kind of improvement people see with your approach? Yeah, to be honest, I'm not the biggest fan of just putting out numbers, because you see that all over the place in the industry,
Starting point is 00:29:18 that people just drop numbers, random numbers. So we just open-sourced our benchmarks instead, so that people can run them on their own repos. Our approach to benchmarking is quite different from what the industry does with SWE-bench or others, because those often focus on a very small problem that is single-turn, like: just add a flag to this Go CLI. We think that's not a real-world example.
Starting point is 00:29:47 So instead, what we did is we took a huge repo out there that is open source, in this case Elastic's Kibana or Argo CD, which probably a lot of your audience knows. And we just picked an open source contribution there. There are issues and pull requests, and the issue is our starting point for the agent. Then we extracted rules out of this repository: coding guidelines, conventions, etc. And we have three scenarios. One without any additional context,
Starting point is 00:30:22 one with a handcrafted AGENTS.md file that contains the exact same rules as we have in Straion, and one with all the rules in Straion. And then we checked out the commit from before this pull request got merged, so we have the perfect testing ground, and we ran the three scenarios. And in the case of this Argo CD or the Elastic Kibana example, we could get to 90% rule following with Straion, and 5% with the AGENTS.md. But as I said, I don't like dropping some random numbers here. The best thing is, people should just check out our blog post and the open source repo, and they can set up the benchmark on their own to see it on their own code.
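For a sense of the shape of such a harness, here is a schematic sketch of the setup described above: check out the commit just before a real merged pull request, run the agent under each context scenario, and score how many of the extracted rules the resulting change follows. The two stub functions are placeholders for wiring up an actual agent and rule checker; the real, open-sourced harness on Straion's blog will differ.

    import subprocess

    SCENARIOS = ["no-context", "agents-md", "straion"]

    def run_agent(repo: str, issue: str, scenario: str) -> str:
        raise NotImplementedError("wire up your coding agent here")  # stub

    def rule_followed(diff: str, rule: str) -> bool:
        raise NotImplementedError("wire up your rule checker here")  # stub

    def benchmark(repo: str, base_commit: str, issue: str, rules: list[str]) -> dict:
        # Reset the repo to the state just before the real PR was merged.
        subprocess.run(["git", "-C", repo, "checkout", base_commit], check=True)
        results = {}
        for scenario in SCENARIOS:
            diff = run_agent(repo, issue, scenario)
            followed = sum(rule_followed(diff, r) for r in rules)
            results[scenario] = followed / len(rules)  # rule-adherence ratio
        return results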
Starting point is 00:30:49 That's cool. And I think you shared the blog post with me as well. So, folks, if you're listening to this: on the straion.com website, on the blog, there are a couple of cool blog posts discussing this.
Starting point is 00:31:18 I think this was the one written by Katrin. Yeah, exactly. March 26th, the 90% rule adherence. Yeah, that's cool. It's impressive, and also great, obviously, that you do this on public projects, so people can definitely follow what you do. Hey, so you were founded at the end of 2024.
Starting point is 00:31:43 So now it's, let's say, 18 months into your journey, and a lot of things have already changed. Are there things that are worrying you, things where you say: this, as an industry, worries me? Yeah, to be honest, I would be super afraid if I were now a student or a graduate somewhere, deciding on software development, because how fast this job has changed is crazy.
Starting point is 00:32:22 If you talk to people that love to actually code and write code themselves: I think what this job looks like will dramatically change in the next year, to be honest. I don't say that this job will be replaced; there will be software developers. I think it just changes how this job works. And I think it will be way harder for juniors to get into this industry. And I think it's kind of our responsibility as companies
Starting point is 00:32:56 to find a way to integrate and train juniors and help them get up to speed, because otherwise we will have a problem in 10 or 15 years if there is no one coming after. Yeah, I think we had this discussion, the same arguments, with our previous podcast guests as well: obviously we need to make sure that we're not forgetting about the next generation of software engineers, or whatever we will call them. I mean, in the end, it's just a new tool that has been given to us, but a tool that is also dramatically changing the way we do things. I'm not sure if the analogy makes a lot of sense, but before the automobile, we had a lot of people that were
Starting point is 00:33:37 riding horses and horse carriages, and they had to figure out how to get a horse from A to B, feed it, and everything, right? And then the automobile came and dramatically changed everything. Now self-driving cars are upon us, so that will also change a big industry, big time. And so we're seeing this change here as well. But I mean, if you take a look at an SRE: you have great AI tools.
Starting point is 00:34:04 If I get an incident, if I get a page, I am now mostly capable of finding the root cause in five minutes through AI, because I have an MCP server that can analyze or grab the right data, the right errors, in the right moment. I don't have to dig through some dashboards
Starting point is 00:34:22 or whatever. I just give it a lot of observability data and it will point me to the right things in really no time. And even with just Kubernetes read-only access on clusters, this is amazing. But you still have to know what to do here. I wouldn't currently give AI automatic access to my cluster to fix something.
Starting point is 00:34:46 So I think there is still a lot of human creativity needed here. And I think creativity in terms of architecture will still be a huge thing. Yeah, especially, right, coming back to your own stats that you pointed out: if I'm just using AI now in these cases where systems fail, and I only have a 5% success rate in the AI following my rules, because I'm not smart with how I define my rules, then I may get the wrong answers and I may do the wrong things as an SRE. So I think, in the end, what I take away from all of this:
Starting point is 00:35:25 it's amazing how fast we can move, but we need to move in the right direction. And for that, we need to make sure that the AI or the agents get the right rules at the right time and are not overwhelmed with too many rules, because then they will pick the wrong rules a very high percentage of the time. To be honest, if you try to argue with an AI... I had this situation where I was super convinced by the statements the AI was providing. And I felt: yeah, that makes total sense, that makes total sense. And then I let the AI challenge itself,
Starting point is 00:35:58 and it then convinced me of the opposite point of view. So I think it can get very tricky to find the right arguments, and therefore it's really important to have the right context in there. Hey, Lukas, to end on a positive note, because I asked you earlier
Starting point is 00:36:15 what are you worrying about? But maybe let's end on a positive note: what are you excited about? I mean, I think we are in the greatest era now. People can build their dreams. They don't need huge agencies to build their stuff. If someone wants something, they build it. And I think this era of builders, I would call it, where everybody is empowered to build what they have in mind, will probably boost our creativity, because a lot of great ideas that were buried for lack of skills
Starting point is 00:36:52 in software development can now be turned into maybe great startups, companies, and great products. It is a really good future to look forward to. And let's also hope we figure out how to enable the next generation of engineers to leverage the tools that bring their creativity to fruition. That's really good. Hey, Lukas, really all the best with Straion. Folks, if you're listening in, there are a lot of links,
Starting point is 00:37:22 obviously, that we are going to post, whether it's the website or the blog posts, also in the document. We'll see how we share this document, or if we share some of the links that you have in there. I always say to people that start their own companies: I'm always impressed by anybody that makes the bold move to start their own company, because it's not an easy thing to give up the safe haven of a regular job. So all the best for you and your adventure. And I'm pretty sure we'll have you back at some point to see where Straion is going, where the agents are going.
Starting point is 00:38:04 I do hope you're moving fast and always in the right direction. Kind words. Thank you very much for having me. Stay on track. Exactly. And Brian, next time I hope I'll have you back. But I hope you are good with everything we discussed today. See you next time.
Starting point is 00:38:21 Bye. Bye.
