PurePerformance - From Vibe Coding to Vibe Architecting with Abhimanyu Selvan
Episode Date: December 22, 2025

It started with the prompt: "Create an Uber Clone"! Several iterations and some months later, Abhi presents his lessons learned from vibe coding a ride-share platform for robotaxis at Cloud Native Days Austria! "Commit to one tool and go deep. Don't get distracted by all the options you have. Treat your agent like a human! Get better at expressing what you really want!" Those are among the many lessons learned on Abhi's journey applying the potential of the latest AI agents available to software engineers. Tune into our latest episode and understand what Abhi means when he says: Context is important! Give it macro context and do micro incremental improvements!

Links we discussed:
Abhi's LinkedIn: https://www.linkedin.com/in/abhimanyuselvan/
Cloud Native Austria Talk: https://www.youtube.com/watch?v=VjMPHWjawxM&list=PLtLBTEzR4SqU9GwgWiaDt10-yOVIN0nzM&index=9
Cursor AI: https://cursor.com/
OpenSpec: https://openspec.dev/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Welcome everyone to another episode of Pure Performance.
As you can see... well, you cannot see, because it's a podcast. But as you can hear, this is not the awesome voice of Brian Wilson. You've got to deal with me, today only: this is Andy Grabner. Our host Brian unfortunately had some stuff that came up. We talk about AI today and vibe coding, and unfortunately Brian couldn't vibe code himself out of the situation to make it in time. But Brian, I know this topic is most likely something that will come up more often in our future episodes.
So I'm pretty sure you will have a chance to discuss this topic.
But now I want to introduce my guest.
My guest is Abhi, for short.
I don't dare to pronounce the full name.
But Abby, before I let you give us a little bit of background about yourself,
we met each other at Cloud Native Days, Austria.
There was a conference here where you were presenting, and I'm reading out loud:
Beyond Reactive Systems: Event-Driven Architectures for AI Agents.
I had the pleasure of welcoming you on stage
because I was moderating that conference
and I thought this is a really cool topic and talk
and I want to just get you on a podcast.
Now, Abby, over to you.
Please let us know who you are, what is your background
and what brought you into building that event-driven app
with AI agents.
What does vibe coding mean to you? Those are some of the things I want to use to kick off this conversation.
All right. Hey, hey, everybody. Thank you, Andy, for having me in this podcast.
And Brian, a virtual hi from here; we'll surely catch up after this. I'm Abhi. My full name is Abhimanyu Chitra Selvan, but you can call me Abhi.
I have a background in embedded systems and robotics. I did my bachelor's in electronics and communications.
and then I moved to the Netherlands 15 years ago almost
where I did my master's in embedded systems
and began my career as a software engineer programming
cockpit display systems for aircraft, Airbus A320s and Boeing 787s, and then eventually moved into autonomous, or rather automated robots (autonomous is an overkill), robots that are running in industry for clients like Porsche and Mercedes-Benz; I worked with a couple of German automotive clients.
Eventually got myself hands dirty in the cloud-native world with Kubernetes,
moved more into cloud infrastructure and DevOps,
and became the head of software engineering for a medical IoT startup,
working with patients with sleep apnea and helping them, you know, sleep better.
And then I joined a four-person Kubernetes startup
in the Netherlands, which later got acquired by Akamai.
And I joined there as a developer advocate.
But as a developer advocate in a four-person company,
you can do a lot more than advocacy.
That was really nice.
Eventually I found my way into DigitalOcean, a cloud infrastructure provider, where I started as a Kubernetes advocate and eventually ended up heading the developer advocacy team at DigitalOcean.
Last month, I made the decision to resign from my role,
and I started pursuing some independent projects that I was interested in
and also taking a step back to understand more about AI systems.
That's pretty much my background.
And, yeah, I had the pleasure of speaking
about a project that I was building
and the whole reason for that is the whole buzz, the AI buzz, that kick-started two years ago. At the beginning of this year, when I looked at it, I didn't have a background in AI/ML, but I had a cloud infrastructure background. So I was like, how do I use these LLMs, and how can I use my background, my skill sets, to adopt these tools and technologies and build something cool?
And as a result of that, I went through this whole exercise of working with different
coding agents and cursors of the world, GitHub co-pilots.
And as a developer advocate, right, you needed to do two things: one, you wanted to inspire people about what is possible on our platform, and two, to enable them with the right set of tools, SDKs, and content.
So when I started this project, I was working at DigitalOcean, and we are a cloud infrastructure provider. So I was like, how can I build a really complex or advanced project that resembles some real-life scenario, but at the same time acts as a framework to showcase how these AI agents can talk to one another at scale?
So the question I was trying to ask myself is: yes, there are these AI systems, but what would the infrastructure look like when multiple AI systems are deployed at scale? How will they communicate with one another? What would that look like? This question drove me into the project that I presented, which was a ride-share platform, a clone of Uber. My goal was to have that platform as a framework where I can validate my idea, whether it is right or wrong, whether event-driven architectures are going to power these AI agents' communications. So that's the long story short, but I'm sure we're going to go into detail about how we built that. But that's how I ended up, you know, building this clone of Uber.
I vibe coded it, or should I say vibe engineered it?
Vibe engineered, yeah.
I mean, I got so many questions now.
But first of all, folks, if you want to see Avi's talk, then, you know, we always put the links
into our podcast description
the talk that you gave
at Cloud Native Vienna is on YouTube
so it's called Beyond Reactive Systems
Event Driven Architectures for AI agents
In hindsight, I'm actually wondering if you should have probably called the talk something like vibe engineering, right? Vibe engineering: how I vibe architected an event-driven system, maybe.
That's pretty much it, yeah.
I gave this talk at like two, three places, and as and when I was giving it, I felt like this is much more of value to people, particularly in this era where, you know, you're adopting tools like the Cursors of the world. So, yeah, that was good feedback, and it's something I definitely should have considered, now that I think about it in hindsight.
Yeah, yeah.
And, you know, we talked about this earlier: while Cloud Native Austria 2025 is over, Cloud Native Austria 2026 is coming, and we just opened up the CFP.
Oh yeah, you've got the pin here.
Yeah, stay cloud native.
It's a pretty... we can't show it, but I have it.
So, what I'm interested in now: obviously there's a lot of stuff happening in the cloud-native space, but also in the AI space, right? You said you started vibe coding earlier this year, so early 2025. And this feels like almost an eternity if you think about the advancements all of the models and the AI agents have made. Did you feel this as well while you were going through the project, that maybe with every week, with every month, things just get better? Or did you get better at using the system? So is it a combination? How can you quantify that a little bit?
Yeah, it's an excellent question, actually, and I think a very valid and pertinent question in today's software development. I think I picked up Cursor when it launched, like mid last year, when it was getting traction. We experimented as developer advocates, right? You know, what are the new tools out there? Our team at DigitalOcean was one of the first teams to try out Cursor, and we were just mind-blown by that initial "oh, wow" moment.
I could just ask this and, you know, get something up and running.
And we advocated for it within the company.
But as an organization, the engineering teams were already using, for example,
GitHub Copilot.
And we did use GitHub Copilot.
And at that time when I used it, I don't know how good it's become right now.
At that time when I used it, it was not that great.
GitHub Copilot was not giving me the right code.
and it was not able to understand the right context.
Whereas Cursor was able to understand all the directories that I had in my workspace
and produced much better results.
And the whole experience was really nice.
It was just a fork of VS code, right?
I could just import all my existing plugins and the seamless integration was really nice.
So we started that whole thing.
And of course, we had all these new tools coming up as well, like Claude Code. But one thing I was very clear about: I wanted to stick to one tool. I wanted to commit to one tool and go deep into that one tool. Because oftentimes, yeah, you have Warp, you have Claude Code, you have so many new tools, Kilo Code and all this. I didn't want to deviate and distract myself into trying out multiple different things, because Cursor was already giving me what I wanted.
So I just pick one tool and that's something that I also recommend people as one of my
learnings is instead of getting fascinated by all the new tools coming out, just pick one
tool and commit to it.
If you're more of a Vim person, maybe a terminal user interface is your thing; maybe Claude Code is better.
Eventually, I think all these tools are going to produce similar results, and all these models are going to hit a plateau where it's just a matter of preference at a certain point, right? I think we've pretty much hit that point. And a tool is only as good as its user, right? The more you can extract out of these tools, the better. It's not that if you give someone Claude Code, they can outdo someone with Cursor. It depends on how well you use the tool and how you can extract more out of it. So I think it's a combination
of committing to one tool, but also trying to go deeper and learning how these tools operate.
But yeah, if you want a concrete example, right, it was funny, because when I started, I just gave a single prompt: build me a clone of Uber. And it threw out a bunch of scripts. It didn't work. But what I got out of it is that it gave me some ideas about event-driven design and how I can build this system, right?
If you think about it, these AI agents are pretty much like humans. They have to operate independently, asynchronously; they don't have to wait. So let's say I needed some input from you, and you're working on a project, but maybe it's taking you some time. I'm not going to sit and wait there: did you give me that? I'm going to do my other stuff, and when you come back, I'm going to pick it up. I think we have to think of agents pretty much like humans. And I believe event-driven design, and let's forget about the tools, whether it's Kafka or some other message queue, the technology stack, but the design in itself, event-driven, is going to power these AI systems at scale.
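That "agents behave like asynchronous humans" point can be sketched in a few lines of Python: two independent agents doing their own work concurrently, neither blocking on the other. The agent names, delays, and payloads below are invented for illustration, not taken from Abhi's actual services:

```python
import asyncio

async def billing_agent(ride: dict) -> dict:
    # Simulated slow downstream work (e.g. a city-based price lookup)
    await asyncio.sleep(0.2)
    return {"ride_id": ride["ride_id"], "fare": 12.5}

async def dispatch_agent(ride: dict) -> dict:
    # Finishes quickly and does not wait for the billing agent
    await asyncio.sleep(0.05)
    return {"ride_id": ride["ride_id"], "driver": "driver-42"}

async def main():
    ride = {"ride_id": "r-1"}
    # Both agents run concurrently; neither blocks the other
    return await asyncio.gather(dispatch_agent(ride), billing_agent(ride))

dispatch, billing = asyncio.run(main())
print(dispatch["driver"], billing["fare"])
```

In an event-driven system the `gather` call would be replaced by each agent consuming from its own topic, but the non-blocking behavior is the same.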
So it gave me that starting point, and I went deep into trying to understand if people have done it,
and there are some people like at Confluent
who have been advocating event-driven design
like Sean Falconer and a couple of folks in the industry.
So I've been reading their blogs and their tutorials
and trying out their projects.
So I took a lot of inspiration from that
and said how can I apply that to my coding experience?
So that's kind of how, like, I started.
So that means, just to recap, you started with a very blunt prompt: build me an Uber-like clone, right? Obviously, it didn't work. It gave you some ideas, though: if you want to build something like this, you need to think about the architecture, because it's a complex system.
That brought you to event-driven architectures, and then you did your own research on that particular topic. Did you then go back with all of your lessons learned and the knowledge you gained about that topic, back to the AI agent, basically rephrasing your prompt with all of your knowledge, yeah?
That's how I worked. Yeah, 100%.
That was just like
I wanted to see what it's going to do.
It's not like I wanted to take that path of just
build me a clone and I'm going to work on it.
I just wanted to see how far it's going to go
and maybe the same
prompt nowadays can maybe build a better version of what it did like one year ago. But it was
just like a fun exercise that I did. I didn't take any code from that version at all. The only
thing that I took from it is the inspiration that it gave for event-driven design. And it suggested
a tool like Mapbox. So Mapbox is an external map service where I can basically, you know,
load a map and perform some roads and things like that. So I did not know about it. So those two
are the fundamental things that are still there
but the code is not there. So what I did
next is
I took pen and paper, and this might sound, I don't know, pretty old school, but I literally... I also have my notes, I can probably show you. I literally took pen and paper and started writing out
like what I want this system to look like
what is the business logic flow basically
So when you're building, I wanted to think from a customer and business standpoint rather than an engineer's, right? Usually engineers get stuck with a tech stack, "I want to optimize the code," but I wanted to move past all that. If I want to launch a product, what would that be? And I decided that, yes, I'm going to do a clone of Uber, but I did not know anything about it, right? So
I started writing the business logic. What are the components that we need? We need someone like
a rider, someone requesting a ride. You need some sort of an interface where these requests are
accepted. You need some sort of a centralized system that processes these requests and is also aware of where the taxis are.
So you need some sort of a taxi simulator.
So I started drawing these out. And if you see my talk, I also have a flow diagram there where I talk about it. So if this sounds abstract while you're listening, you can cross-reference it with the talk, where I have the slides.
So I started drawing this business logic flow.
And if you think about it, this event-driven design should actually empower your business logic flow. A request is an event; something processing it is an event; a billing agent taking that ride information and calculating city-based prices is an event. So everything started flowing as events. I started drawing out these events, and once I had this on paper, I drew them out on my canvas in certain tools and had them as images, architecture diagrams, you know.
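The "everything is an event" idea from that flow can be sketched like this: each business-logic step becomes a small event envelope destined for a Kafka topic. The topic names and envelope fields here are assumptions for illustration, not the project's actual schema:

```python
import json
import time
import uuid

# Hypothetical topic names; the real project's Kafka topics may differ.
TOPICS = {
    "ride_requested": "rides.requested",
    "ride_matched": "rides.matched",
    "ride_billed": "rides.billed",
}

def make_event(event_type: str, payload: dict) -> tuple[str, str]:
    """Wrap one step of the business-logic flow as an event for a Kafka topic."""
    envelope = {
        "event_id": str(uuid.uuid4()),   # unique ID so consumers can deduplicate
        "event_type": event_type,
        "timestamp": time.time(),
        "payload": payload,
    }
    return TOPICS[event_type], json.dumps(envelope)

# A rider requesting a ride becomes an event on the "rides.requested" topic
topic, message = make_event("ride_requested", {"ride_id": "r-1", "user_id": "u-9"})
```

Any Kafka client could then publish `message` to `topic`; the design stays the same regardless of the broker technology, which is Abhi's point.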
I had that as the first piece of context that I wanted to give Cursor.
The second thing I wanted to do: I started determining the data structures. What would the payload look like? What are we talking about? What does a ride request mean, right? What information should it have: a ride ID, a user ID, the pickup location as lat/long, the drop-off location. So for each of these components, I started writing down the payload, what the structure looks like, basically as JSON for the Kafka topics, which will come later.
Writing them down, and then drawing the sequence diagram. So I had three more inputs, right? I had the high-level business logic flow; I had the data structures, the payload for each of these events for the different components, what a ride would look like; and then I had the sequence diagram written as an MMD file. Basically, with all this information, I used Cursor to generate the MMD file: once I had the whole sequence ready, I gave it to Cursor, it gave me the MMD file, and I converted the MMD file into an image. So I had all this information now, and then I gave it to Cursor.
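For listeners unfamiliar with MMD files: Mermaid sequence-diagram source is plain text, so an agent can generate it and a tool like the Mermaid CLI (for example `mmdc -i ride.mmd -o ride.png`) can render it to an image. A hypothetical sketch for this kind of flow, not the actual diagram from the talk, might look like:

```mermaid
sequenceDiagram
    participant Rider
    participant API as Ride API
    participant Dispatch as Dispatch Service
    participant Driver as Taxi Simulator
    Rider->>API: ride_requested
    API->>Dispatch: publish rides.requested
    Dispatch->>Driver: ride_matched
    Driver-->>Rider: driver en route
```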
Listen, this is all that I have right now, and this is far more context, right?
It had much more context now, and I gave it all of that. That was my first serious iteration, if you want to call it that. At the time, I think I was using the Claude Sonnet 3.5 model when I first tested it out. It gave me a lot of Python scripts, it understood things better, and it automatically started designing Kafka topics. I was pretty impressed with how far it could go, but it didn't work. It gave me the code, the startup scripts and everything, but when I ran everything, it didn't work. And then I realized
a huge lesson back then: context is very important, but these AI systems, these AI agents, work far better when you give them micro increments. So you have macro context and micro increments.
So that was kind of like my second learning: context is important, but you don't give all the context at once; you break it down into planning and tasks.
have this mode called plan mode
and task mode where
you can have like checklist
of each of these how do you want it?
But when I started it, it was not available.
But this was like a hard
This was a hard, self-learned lesson back then. I broke things down into different micro tasks, and then I said: okay, do this task; after you've completed it, we do a test, we verify it, and only then do we move to the next one. And that took me a lot further in the process of building this. So that was my first serious iteration.
So if I recap: fascinating, thank you for those answers. What I take away from this is that it feels a little bit like, you know, being good at writing good product requirement documents.
Because you say it, right, just saying I want an Uber clone, that's not enough.
So you need to really figure out what is the problem you really want to solve,
what are the components involved, what are the flows.
And I think if you are a product owner or a product manager,
you're designing a product, you need to get very good in describing really what is the problem
you want to solve for whom?
And how should that particular, you know, consumer then in the end really walk through your software system to get this stuff done?
And then I think the other thing is with like breaking things down into smaller increments.
That's essentially the same as how we do software engineering, right?
We always break things down into smaller pieces.
Now the benefit, however, and correct me if I'm wrong, and I think this is what I hear out there,
the benefit of now having these AI systems, if I am non-technical, if I'm non-technical,
but I know what problem I want to solve, I can use these systems now to get fast iterative
feedback because I don't need to wait on a software engineer that may have time next week
or next month. I don't need to pay somebody to get this done and give me feedback and then
try it out; I have very rapid feedback loops to validate if what I try to build actually feels good, not only for me, but I can also give the prototype
to somebody and then get rapid feedback, right?
I mean, that's...
I think it's 100% true.
I think it's good that you mentioned product managers. I think the whole product manager category is now slowly changing into product engineers. So basically you need to start thinking more about, yeah, how the PRD is written, but also rapidly prototype. Instead of having a wireframe and such things, you can rapidly prototype something and share it early. Let's say you want to build a feature and you have a team of engineers. As a product manager, until, I don't know, a couple of months ago, people were writing specs and Jira tickets, breaking things down into tasks and epics and all that. But now, with all these tools that you have, I can still have my PRD and my spec written down, but also have a short proof of concept ready and attached to it.
And I said, okay, this is exactly what I want or this is something that I want. And the engineers
can then take that as inspiration and, you know, work and make it more robust and production ready.
But this will rapidly also reduce the time it takes for engineers to build.
but also, you know, rapidly launch new features.
And with lean and mean team, I think with like less people,
you'll be able to go further.
And this is, as you clearly mentioned, redefining how you're going to do software development. They now have these frameworks like OpenSpec and things like that, where from Cursor itself you can say: with this spec template and this Jira issue, create something. And so we're redefining the whole developer workflow.
I'm a big fan of analogies, and one of the analogies I want to bring up: two years ago, my wife and I moved into a new apartment. It was a house that was under construction, and we bought the apartment before it was built, so we had a little bit of a chance to architect it, right?
And the beauty of it was,
we worked with somebody who built a 3D model of how our place would look, and we had a chance to walk through that apartment in 3D and actually see: this doesn't make sense, this doesn't feel good, this feels good, that color looks strange.
so it feels the same way, right?
The person that did the 3D rendering and the design had a chance to do rapid prototyping with us, to validate it, before they gave it to the real builders to say: now you're going to build this, and we know this is going to be good, because they have told us this is what they like, because they've seen it in the prototype 3D model.
Yeah.
So it's the same thing. I mean, that's the whole power. When they say AI is going to replace people: AI will probably replace people who are not using AI with people who are able to use AI to provide more value.
So now this designer that you and your wife met is showing you more value, and he, or she, whoever it is, is taking you through that journey of experiencing it with all these modern tools. And this is why I'm bullish, in the sense that I'm not against it. I think we should embrace AI, and of course validate it, verify it; that's a different problem. But I think you need to embrace it so that it definitely is a production... sorry, a boost for us.
Yeah, it's an efficiency boost. It shortens feedback cycles. It allows us to experiment more, right? It allows us to go into multiple different directions at once, or to validate and then come back with the right path forward. And, as I always say, we can make better-informed decisions to move forward, because we have more valuable feedback from different attempts, because we can move faster. Yeah, yeah.
And now they launched all these background agents. One thing, particularly, that I learned: I had like 10 microservices, right? In my architecture I split it into a couple of microservices, and at first it didn't do a good job. So I containerized one service myself and then gave it as an example: this is how I want it containerized. And then it did a far better job of containerizing all of them, giving me a Docker Compose example; it was able to do it. But when I just said, okay, now turn these Python services into Docker containers, no, it didn't do a good job. And this was needed because I wanted to validate each and every service, right? Instead of a bunch of scripts, I wanted to have these modular services that I could stress test.
So what I'm saying is, you need to work with it hand in hand, try to teach it, and you learn from it. It's sort of like your buddy. Truly, I think it's like your programming buddy that you can brainstorm with and, you know, rapidly build something together.
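That "containerize one service by hand, then let the agent copy the pattern" step might start from something as small as this; a minimal sketch, where the file layout, dependency file, and entry point are assumptions rather than the project's actual setup:

```dockerfile
# Minimal container for one Python microservice (illustrative layout)
FROM python:3.11-slim
WORKDIR /app
# Copy and install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the service code itself
COPY . .
CMD ["python", "main.py"]
```

Given one worked example like this, an agent has a concrete pattern to replicate across the remaining services instead of inventing ten divergent ones.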
And I also think, you mentioned it: now they're building these background agents, or whatever you want to call them, right? Basically agents that are listening in, trying to see what you do, but only show up when they have something to say, when they want to contribute.
So this could, for instance, be an architecture agent that is sitting in the background and sees what's happening in your code base as you're working with your coding buddy,
then at some point the architecture agent could come up and say,
hey, I wouldn't do it like that because this is a bad practice, right?
You're not thinking about resiliency.
You're not thinking about failing over.
There might be a better pattern for this particular thing, right?
You should hide this behind the feature flag and things like that, yeah?
No, it's an excellent point that you bring up. This is something that even yesterday, at an event in Utrecht, I was talking about. Someone suggested: why don't I have one agent per service, attached to it? So you have these agents.md files, right, where you give the rules for how you want to build. Let's say you have a Rust backend and a couple of Python services; you define in these rules how you want each of these services to behave, and you give that to the agent. And as you clearly said, you have one master orchestrator agent, a super agent, in the background that has the overall context, and then you have micro context for each of these services.
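Such a per-service rules file could be as small as this sketch; the service name, topics, and conventions are invented for illustration, not taken from Abhi's project:

```markdown
# agents.md — rules for the dispatch service (hypothetical example)

- Language: Python 3.11; all I/O is async (asyncio), no blocking calls in handlers
- Messaging: consume the `rides.requested` Kafka topic, produce to `rides.matched`
- Every new consumer needs a unit test and a health-check endpoint
- Do not change event payload schemas without updating the shared data contracts
```

The super agent keeps the macro context of the whole architecture, while each service's agent works only within rules like these.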
And I think that's something I want to try now: with this whole architecture, I've just done it with one agent. I'm going to now split every service and have an agent for each of these services, and, as you mentioned, I'm going to have a super agent as well, and see how that's going to work. Because I learned it the hard way: you try and build something, but you stress test it and validate it without having proper observability in place. I ended up losing a lot of money as well, which is something that, in retrospect, I went back and fixed; I then started adding instrumentation to a couple of these services.
But yeah, I think that's a very good point. And that's something that I'm going to take from
this talk for sure is to try this out, this approach.
Yeah, because if you think about it, coming back to the house analogy, you could have a general contractor, right, that is basically taking care of the whole end-to-end project and has the oversight.
Then you have a company that is responsible for building the structure.
You have a company that is responsible for the electricity, for the plumbing, for the interior.
So it's the same thing, right?
In software engineering, if you do it for the individual services, then you may have an agent that knows, and is trained because you prompted it, what this service should be, what it should do, how it should act under load, and what service it provides. And then the question in the end is: do we use agents to build these systems, or will the agents in the end become the service? Do we, in the end, no longer build any type of microservices or systems anymore? Software on the fly. Yeah, we just build agents, and every agent then gets triggered through events.
Because in the end, the agent is then doing its job, right?
So we'll see where this goes.
It's really fascinating.
Indeed, it's really fascinating. But I mean, for the listeners and people who want to experiment with it, I would just say: go through the process of building something. I think that's the only way you can learn. Reading about it, watching YouTube videos, and trying out some simple projects, yeah, that might be just a start. But if you want to truly experience the power of these software tools, I think now is the time. Back then, when I had an idea, I was not a front-end person, so to even think of building an app like this was not possible. Now you can also play to your strengths: if you're really good at back-end, maybe you can allocate a couple of your agents to build the front end for you, and focus on the back-end and, you know, work together. But unless you go through this process... you'll learn a lot, and you'll be inspired, I think, to build something.
Yeah, I have one more concept that I want to quickly bounce off of you, because I'm
currently co-authoring another book. After we wrote Platform Engineering for Architects a year and a half ago with two other authors from the CNCF space, one of the authors, Hilary, asked me to do another book with her. This one is about AIOps, kind of like modern observability in the age of cloud native and AI native.
AI really mean in context of observability. And you brought observability up in what you said earlier.
And so I'm actually talking about, I bring some examples on, you know, I'm in, let's say,
in cursor or I'm in Visell Studio code. And I'm using GitHub co-pilot and say, you know,
So how can I, where do I need to optimize my code to be faster, to be more performance?
And then co-pilot could reach out to the backend observability system through an MCP agent and then make some suggestions.
And now my point that I make in this chapter is if I just use a general trained model that doesn't have any experience in performance engineering inside reliability engineering, of course I will get benefit out of it.
but it's like working with an apprentice.
An apprentice is somebody that has a basic understanding of the world,
but it's not an expert yet in that particular domain.
And then I talk about how we can elevate that apprentice to master,
because in the end you want to have a master agent,
mastering meaning knowing the trade of this particular area.
And I can do this by training that model on what it actually means to build a resilient and performant app. You can do this through instruction files, or you could obviously build your own agent that uses your own trained model. But I wanted to see: this idea of going from apprentice to master, mastering a trade, is this something you have experienced? The more you worked with Cursor, the better it got, because of the additional context and because it also learned from you?
Yeah, yeah, for sure.
So initially, yeah, it's a two-way thing, right? It's like trying to work with someone who has joined the team recently and you want to work closely, pair program together. They're trying to understand what you want to say, and vice versa. So earlier, maybe my prompts were not good enough;
the way I provided
my context engineering skills
was not good enough in the beginning
but also at the same time I think the model was not
so it's like working hand in hand
and as in when the model was also getting better
and like I said from the beginning of the year
I've been fully working on this
so it got better
And right now, I think,
Cursor launched one of their own models,
the Composer model.
I've not gone into the details
of how they've implemented it.
But when I compare that with the results that Claude is giving me,
I find the Composer model from Cursor gives me code that is much more relevant to my project,
much more impactful for my project, and with fewer errors.
So I think it's a two-way thing.
I also try to learn how to better provide context,
but these specialized models, coding models,
are also getting better, which is why,
in Cursor, if you hit a limit and you switch
the mode to auto or some of these
lower models, it just creates
a mess. So
what I started doing is I have
three tabs now. For any questions that
I want to ask, or planning or something, I
use a lower model. But when I
move into the agentic mode, I make sure that
it's on the Composer model,
so that I don't
waste my tokens
on Composer
asking it basic questions.
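Abhi's three-tab habit boils down to a simple routing rule: a cheap model for questions and planning, the specialized coding model only for agentic edits. A toy sketch of that decision logic is below; the model names and task labels are illustrative assumptions, not Cursor's actual API.

```python
# Toy router mirroring the workflow described above: send expensive
# agentic work to the specialized coding model, everything else to a
# cheap model. All names here are hypothetical, not a real Cursor API.
AGENTIC_TASKS = {"implement", "refactor", "fix-bug"}

def pick_model(task: str) -> str:
    """Return the model tier to use for a given kind of task."""
    if task in AGENTIC_TASKS:
        return "composer"  # premium coding model, reserved for code changes
    return "auto"          # cheaper model for questions and planning
```

With a split like this, the token budget on the premium model is spent only where it actually changes code.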
Cool. That's an awesome suggestion, right? Because obviously, you can still probably also find
some information just by good old Googling. It feels strange to say "good old Googling".
Yeah, exactly.
So this is the cheapest option and you still get an answer. But then you
always have to weigh what you want to get out, how fast you want to get it, how impactful it should
be, and then you can use the different
models that are charged differently.
Hey, Abby, really, really cool.
I know folks
will obviously have a chance to watch
the recording of Cloud Native Days Austria.
You just mentioned that you're also speaking at different events, right?
I guess it was like yesterday, or two days
ago, that was my last event for this year.
For this year, yeah, yeah, yeah, cool.
No, it's fantastic.
Maybe, just to wrap this up, a little bit of an outlook: where is your journey going?
What's happening next year?
Yeah, so basically what happened was, when I built this whole Uber clone, I started giving it into the hands of people.
And it's all simulated, right?
It's not like a real taxi is going to show up at your doorstep or something.
But the whole idea is to validate these distributed systems and how you're going
to do it. And then I also got a lot of good feedback from the folks at Cloud Native
Days Austria, a very good crowd, a highly intelligent crowd, and they were also
critiquing some of the choices that I made, like Kafka and things like that. But
Kafka was deliberate, it was intentional, because I also wanted to promote one
of our product offerings, Kafka, as part of my talk. So it was a bit of a
strategic move as well.
So I gave it into the hands of people.
And then, how they started using it was, a couple of them started using it to explore
their neighborhoods and the geography through taxis.
So they'd say, oh, pick me up from here, drop me off there.
And some of the parents had kids.
And then on a Sunday I got a message on WhatsApp saying, hey, why is your app not
working? Because my son wants to go to the beach and he's waiting but his taxi doesn't show up.
And that got me thinking.
I'm like, wow, okay, there is a different angle to this application.
I'm building a clone of Uber, and competing with Uber is never going to work;
it's far more complex and has a lot of logistics issues, and,
as a solo developer, it's going to take me a lot of time.
So what I thought of doing is taking all the learnings from building this.
And right now I'm focused on building a game.
So it's going to be hardbite.io, it's there in my slides, and my aim is to launch a basic version end of January, early Feb. It's going to be on Android and Apple, and it's also going to be a web app. It's still going to be maps and exploring your neighborhoods, but in a far more fun way, and you're going to have a lot more agents in the background working together to give you that experience.
So that is where I'm moving forward.
I'm not going to spend more time
on this particular robot taxi platform,
but I'm taking that and building a game
so that anyone across the globe can play.
Awesome. I will definitely try it out for Linz, Austria.
So I'll explore my neighborhood.
Yeah, for sure.
And maybe, I think we talked about this in the preparation,
this could be your next talk for Cloud Native Days Austria.
Yeah, I still need to figure out what's the story that I want to tell there, but definitely, yeah, it's been selected.
It'll be a live demo, and everybody in the audience can start playing the game, and that would be a really interactive session.
But that's where I'm heading.
Of course, I'm also working on other side projects and helping other companies where I can with my skill set,
but my focus now is to get this game up and running.
Yeah, cool.
That's awesome.
All the best for this and keep us posted.
Definitely looking forward to this.
Also, thank you so much for this discussion.
It's always great to hear from somebody who has really had longer experience.
So I really liked what you said in the beginning.
You said, commit to one tool in the beginning.
I think that was a really nice thing to say.
Also, the way you started with kind of like a very
vast prompt,
like "create an Uber clone",
and then how you then
refined it. You really have to think about
what you really want,
because this is most often
the toughest thing,
especially for very technical people,
because we always like to solve
technical puzzles and problems,
but we don't really think about
how to eloquently
describe what the real
problem is that we want to solve,
because the solution, how to solve
it, should be secondary
versus what's the problem,
what's the problem we really want to solve,
and then figure
this out, yeah. I think these tools are just making us become clearer thinkers. So if you're a
clear thinker and you exactly know what you want, these tools are mainly executing it for us,
and that's it. Yeah, yeah. Be a clear thinker. Yeah. Awesome. Hey, Abby, again, thank you so
much. Folks, if you listen to this and you want to follow up with Abby, the link to his LinkedIn
is there, also to the presentation that he gave at Cloud Native Days Austria, and we will make
sure that anything else we discussed today will make it to the links section of the podcast.
And to Brian: sorry that you couldn't be here. Hopefully everything is good. We'll see you in the next
recording. And Abby, I hope I see you next year. Thank you. It'll be a pleasure. Definitely, we'll meet up.
Thank you. Thank you.
