Orchestrate all the Things - Foundations or vibes? Lessons learned from using AI in software engineering. Featuring Greg Foster, Graphite Co-Founder & CTO
Episode Date: January 29, 2026. Software engineering is being transformed by AI faster than any other domain. Unpacking how this transformation is playing out may offer a glimpse into the future. Greg Foster's journey into software engineering began in an unlikely place: a Nevada high school where he couldn't land a job at Starbucks. Instead of serving coffee, he taught himself software development. Foster developed what he calls "a lifelong obsession with the craft of software engineering". That obsession evolved from building apps to building for other builders. From Airbnb to co-founding Graphite, where he now serves as CTO running what he calls "a dev tools team for the entire industry". Foster doesn't just have a front seat to watch how AI is changing software engineering - he gets to shape the change. We caught up and talked about the past, present and future of software engineering. The takeaway? There is a world of difference between vibes and solid foundations for software engineering at scale. Article published on Orchestrate all the Things: https://linkeddataorchestration.com/2026/01/29/foundations-or-vibes-lessons-learned-from-using-ai-in-software-engineering/
Transcript
Welcome to Orchestrate all the Things.
I'm George Anadiotis and we'll be connecting the dots together.
Stories about technology, data, AI and media, and how they flow into each other, shaping our lives.
Software engineering is being transformed by AI faster than any other domain.
Unpacking how this transformation is playing out may offer a glimpse into the future.
Greg Foster's journey into software engineering began in an unlikely place,
a Nevada high school where he couldn't land a job at Starbucks.
Instead of serving coffee, he taught himself software development.
Foster developed what he calls a lifelong obsession with the craft of software engineering.
That obsession evolved from building apps to building for other builders.
From Airbnb to co-founding Graphite, where he now serves as the CTO,
running what he calls a Dev Tools team for the entire industry.
Foster doesn't just have a front seat to watch how AI is changing software engineering. He gets to shape the change. We caught up and talked about the past, present and future of software engineering. The takeaway? There is a world of difference between vibes and solid foundations for software engineering at scale. I hope you will enjoy this.
If you like my work on orchestrate all the things, you can subscribe to my podcast,
available on all major platforms, and my self-published newsletter, also syndicated on Substack, Hackernoon, Medium, and DZone, or follow Orchestrate all the Things on your social media of choice.
I'm optimistic. I think there's never been a more fun time to get involved in software engineering.
There's never been more things to build. In the late 2010s, most things were incremental, like a lot of polish on existing ideas. Suddenly, everything's blown wide open. There's so much overhang. There's so much opportunity to build really novel ideas. For anyone
listening, I really truly
deeply believe there's never been a better, easier,
more fun time to get involved in creating
software engineering, building apps, building
companies. So I very much encourage people to go
try it. I think it's,
we're entering a world where code review has never been more important, because
I used to be able to trust, George,
if you and I were coding together and you submit
me a code change at work, I used to be able to trust
that at least you read it
and thought about it really deeply. Heck, you had to. You wrote
it. And I'm there and I'm going to
double-check you, but I have some trust in you as a human and as a teammate. Now, in 2025,
I don't know. Maybe you thoughtfully wrote it, or maybe you were watching, you know, a TV show and you asked Claude Code to write it. Or maybe, heck, maybe Codex or one of the background agents just created this code change and no human has ever laid eyes on it. Now it really matters that at the code review step, someone's grokking it, someone is thinking deeply about it and testing it and taking ownership of the impact of that code change as it goes out the door.
It's interesting. The purpose of code review, I think, is really becoming magnified in its importance in pedagogy, in accountability, and some of these deep-rooted meanings.
It's so fun to observe. I think DevTools has never been experiencing more change and upheaval than it has in the last two years.
And that's a blast. It's a blast being at the center of that. If I look over the arc of DevTool companies over the last 20 years, you had very little action
for a long time. And then the very first kind of meaningful DevTool companies in my mind
were that of GitHub, Stripe, if you can call Stripe a DevTool company. These are in the kind of
mid-2000s. You start getting the birth of Heroku and Datadog, GitLab in 2012. It's starting to become
a little bit more popular. But there's also a long time, for a long time, a skepticism of dev tool
companies by venture capitalists of how big can they really get? What's the total market size?
You constantly had this fear that open source would be undercutting the, you know, the total amount you could bill for these dev tools. And also, for a long time, engineering teams had very low purchasing power. I remember when I was working in the mid-2010s at Airbnb, and it would be like a whole thing to go and expense IntelliJ. I'd be like, hey, I really want to expense this IDE for 20 or 40 bucks, and it was non-trivial for me as an IC to go make that argument. And it's not until 2025, or really now, that it's like, oh no, actually,
the most insane, leading, fastest-growing, highest-revenue companies in history happen to be dev tool companies, the Lovables and Cursors and Claude Codes of the world.
So it's just been fascinating to watch that growth
as a founder of the space.
Cool.
I'm Greg.
I'm one of the co-founders of Graphite.
I lead engineering here.
But before working on Graphite,
I've been obsessed with software engineering for,
at this point,
more than half my life,
hopefully my whole life as it keeps going on.
I spent most of my time growing up in Reno, Nevada, which is, you know, a really fun place.
Not a lot of engineering going on.
My father was in mechanical engineering.
My older sister got into mechanical engineering.
I was the black sheep who just liked playing with computers.
In high school, my parents were like, oh, you know, you're turning 15, you've got to go get a job.
We're not going to pay for your gas money.
And I looked around for a job.
I applied to all the local Starbucks locations with a high school resume that had no actual work experience on it, as one does.
And this was just after the financial crisis.
There were people with college degrees who couldn't get a job at Starbucks, let alone me as a high schooler.
But I turned my attention actually at the time to iOS development.
I had already been really interested in computer design and Photoshop and CAD.
And that was very fun.
And I was really interested in coding.
I didn't know much about it.
And I had a family Intel computer, one of those towers.
But I was able to pirate VMware onto it.
And then I was able to pirate an image of macOS onto it.
And then from there, I could do Xcode.
And from Xcode, I could run an iOS simulator.
So I was like several layers deep of simulation.
But it was enough that I could actually start coding iOS apps.
And I didn't know much about coding.
It was a lot of Googling.
And I read coding for dummies.
I watched an iTunes U, like, university video on learning how to do iOS coding.
And I remember, and I made my first iOS app.
Luckily, I was pretty good at graphic design,
so I was able to make it look good.
And I made, like, a pros and cons app. But it was early enough in the App Store that it was actually one of the only pros and cons apps.
You could make small utilities and sell them for a dollar.
And it would do okay.
And sure, it had a lot of memory leaks and would crash after, you know,
20 or 30 minutes, but that was fine.
In mobile, you know, you could actually get by.
Like the apps were destructible enough.
They got the functionality there.
And I had a lot of fun doing that.
So I kept making iPhone apps.
And I would walk around my high school and I would pay my friends a dollar.
I would actually carry a stack of, like, you know, 20 one-dollar bills.
I'd give my friends a dollar.
Like, hey, give me your phone.
I'm going to buy my app.
I'm going to rate it five stars.
I'm going to give you back your phone.
I'm going to give you a dollar.
And I would use this to bootstrap, like, the first 20 or 30 reviews.
And then it would snowball a little bit.
So I found a lot of love of coding and an entrepreneurial kind of attitude.
Then I went into college and I studied CS proper.
That was all on the side.
I still had to do my high school classes.
So college was so exciting because it was like, oh, you can actually
just make your studies and your passion, your hobby of coding, one thing. I didn't have to
keep rushing to finish my homework so I could go do coding. I could do the same thing.
And then it was even better after I graduated because like, heck, you could actually make your
full-time job and your full pay, just doing some great coding. So I joined Airbnb after I graduated.
I was doing internships. So I joined them after I graduated. And it was really fun. I got to go really deep.
I was hired. It was so funny, actually. I was hired as an iOS engineer. But then I immediately
got reorged onto an infrastructure team working initially on all their notification infrastructure,
all the emails, SMS push notifications. And from there, I got reorged eventually onto dev tools
and building out their internal CLI, their deployment infrastructure, some of their feature flagged
stuff, their test data creation systems. I was really deeply embroiled in dev tools and infra.
At this point, I've been coding for many years. And so I was reading all these textbooks about,
like, what is the best way to engineer stuff: Clean Code, Clean Architecture, all the O'Reilly books, and was, like, really becoming obsessed with the craft of software engineering.
And that played into being a DevTools team member as well.
You could get really deep on that.
But this is kind of what took me to Graphite where if you spent enough time being obsessed
about the craft of engineering, not just how to build something well, but how should all people
engineer things really well?
And what are the tools we can build, becoming a tool builder for other engineers?
With that obsession, you're kind of limited within an individual company. At Airbnb, I could serve a thousand other engineers, but it was kind of limited.
The core of Airbnb is not to build great dev tools.
It's to build a great hosting platform for travel.
And I was really excited by the idea of like,
what if I was able to work at a company where like the core of the company
is just building the world's best dev tools for all the other software engineers?
And what if I could do it with the best people I know in my life at the craft,
which were my co-founders now, some people I knew from college, some of my best friends.
That was such an exciting idea for me.
And that was exciting enough that it was like, hey, I know startups are risky.
I don't care.
We're going to take a leap of faith.
And I'm going to be able to go all in on this obsession. So my walk of life has been just deeper, deeper, deeper engineering, and deeper and deeper and closer to my obsession of, like, building for other engineers. And here we are. And now I get to, it's funny, I get to serve all these Airbnbs, you know, many companies. I get to be in Slack channels with so many wonderful engineers, hundreds of companies who I would be so excited to work at myself. But I get to be building for them and be in Slack with them and go talk to them at user meetups. And I feel like I get to be a DevTools team for the entire industry. So I'm living the dream.
Yeah. It does sound like, I mean, it sounds like coding started as a kind of passion project for you, and then you gradually managed to get more and more involved.
So I guess a lot of people will sort of identify with that.
Not everyone gets to found a startup, but this is a path that I think many people will identify
with to some extent.
So my question to you, as someone who has been along some parts of that path, I think. I never ended up founding a startup, at least not a successful one.
But regardless, so I guess what many people would naturally tend to ask is like,
okay, fine.
So if you love coding that much and now you're a CEO and founder, do you still get to code?
Well, for the record, CTO.
I don't want to short-change my co-founder.
CTO.
But it's actually nice, because being CTO, I have a slightly better excuse than a CEO to code.
You know, it's evolved over time.
Initially at Graphite, it was just the three of us co-founders,
myself, Tomas and Merrill, and me and Tomas did a lot of coding.
We were like the engine of the company.
We just sat and coded all day.
And then, you know, there was no meetings.
Your meetings were just like taking a walk so you could take a break from coding.
And then you should code more.
This is like 2020, 2021.
And then, you know, we were able to bring a couple engineers onto the team,
but I still got to code a ton.
You know, I did a very little amount of recruiting,
a little bit of sales, a little bit of management.
But I was really just still coding most of my time.
If you have two engineering teammates, you're still just coding a lot.
And then things got bigger.
And then we got up to like six or seven engineers.
This is, you know, it's approaching 2023 and beyond.
And I coded less.
Now I was coding maybe like only half my time or less.
I was still helping on key projects, but it was not the most important thing I was doing.
The most important thing I was doing was trying to ramp up and enable these other amazing software engineers and help recruit and bring on more folks.
And also join in on sales meetings and help on founder related activities.
and coding became something I was still passionate about.
And I still had kind of high leverage because I had so much context with the code base after three years.
I could still be quite impactful on key moments and key projects.
I was also doing a lot of the SRE style responsibilities for a long time.
So permanently on call for the company, permanently helping on every incident
and helping teach other people how to roll back the system and recover the infrastructure.
But it became less and less impactful.
And I think I really took on a mindset of how can I replace myself?
because I can do a good job, but to be honest, even if I'm amazing,
I'm not that much better than two other amazing teammates I can bring onto the team.
If I'm doing a really good job hiring and I'm bringing on incredible people,
maybe I'm 10% better.
I don't know.
I don't want to flatter myself, but I don't think I can beat two other amazing people.
I think my time is better spent building out the team, enabling other people, getting an engine going.
If I get sick, I want there to be redundancy.
I want this team to be able to operate and that's going to free me up to do more ambiguous tasks.
The other thing I find about software engineering is, as we've gone further and further at the company, the software engineering work that's critically impactful is less and less ambiguous.
And I think that as a co-founder, my time is often best spent dealing with projects and problems that are just, we have no idea how to even get started.
We don't know what the answer is.
It's very unclear.
It's so messy.
And that's actually a really good use of my time, to go in there and try and apply structure and figure it out and experiment and debug.
If there's systems and projects that are pretty clear cut, but just challenging,
I think it's much better use of my time to help enable and set other people up for success to execute those projects.
So that shift has really happened over the years.
And nowadays, now in 2025, I spend most of my time actually recruiting, building out the team. We're 4x-ing the engineering team this year. Actually, we've just over 3x'd it so far. And by the end of the year, I hope to 4x it, from 10 going on to 40 engineers.
And I'll tell you what, it's a relatively straightforward job, but there's a lot of time and energy that goes into meeting hundreds of candidates running so many interviews,
helping issue offers and convincing folks to join,
and then helping ramp them up once they're in the room,
actually getting them set up for success
and explaining the history of the company
and showing them the architecture we have going
and sending them up onto teams.
That is just absolutely a full-time job for me.
And it's one I'm happy to do.
It's actually really fun because I actually like meeting people
and nerding out about our technology.
But I do this year, because I've been so obsessed with that expansion,
I've done very little coding.
And if I'm coding, it's often for my own guilty pleasure
and not because it's, like, the most impactful thing I could do for the team. There's like a little bit of a tear-rolling-down-my-cheek situation, where there's so many projects where I'm like, ah, this would be so fun to get in there and code. But you know what, I'd rather hand it to an amazing teammate, let them have the fun, and, you know, I'll keep doing whatever job needs to be done to keep scaling up the company.
Yeah. I think this sort of dilemma, let's say, or career path choice, is again common for many people who start with a purely technical background. And then, later in life, they sort of evolve into different roles as well. I also remember very clearly
about myself when I was kind of crossing that chasm, let's say. Part of me was like feeling really
guilty for not contributing that much to the code base, but then I realized what you also seem to have realized. So, you know, I can make more impact in other areas. But I think it's a little
bit like, you know, players turning coaches. There's a saying that goes something like: you have to kill the player within you if you want to be a successful coach.
And I think it's a bit similar in this situation.
Part of the mindset I adopt is, you know, I think about myself as an engineer and a builder.
I really have that identity in myself.
But maybe what I'm building is no longer the code base, but I'm building the team and the company and the community.
And I can still take pride in that.
I can still look at that like an engineering project.
But it's not about shipping a bunch of lines of code.
It's about helping build out a new team or a new org.
And there's still problems to be solved.
Absolutely, I'd be naive to say there's not, like, interesting problems to solve within that. So I do still, but I view, like, the thing I'm creating at this point as more the company than just the lines of code of the project.
Yeah, yeah, that's right. Okay. So the next question: I wanted to get your opinion on how you have seen the craft, the art and science of coding, throughout the years that you've been exercising it, you know, whether you're actually coding yourself or in your current role. And I got an interesting start, let's say, to this,
because I was going through your LinkedIn profile just before we connected and, you know,
in LinkedIn, the things you share sort of get frozen in time, basically.
So at some point, you shared the link to the website of your company. And through that, I got the opportunity to check the motto that you had at the time it was shared, which I guess was maybe a couple of years ago. And it went something like: Graphite, the end-to-end developer platform.
Graphite helps teams on GitHub, ship higher-quality software faster.
which is pretty succinct and gets the message across.
But then the interesting part is, if you fast forward to today, that same motto has evolved to: ship higher quality code faster. Graphite is the complete AI code review platform built to keep you unblocked.
And I think that's very telling because it sort of reflects the fact that, you know,
within a couple of years really,
coding and AI, or AI-assisted coding, has become sort of the norm, I would say. So how have you experienced that? Because obviously, you've been around for much longer before, you know, the ChatGPTs and Claude Codes of the world. So how have you
experienced this transition? It's so fun to observe. I think DevTools has never been
experiencing more change and upheaval than it has in the last two years. And that's a blast.
It's a blast being at the center of that. If I look over the arc of DevTool companies
over the last 20 years, you had very little action for a long time.
And then the very first kind of meaningful dev tool companies in my mind were that of GitHub,
Stripe, if you can call Stripe a DevTool company.
These are in the kind of mid-2000s.
You start getting the birth of Heroku and Datadog, GitLab in 2012.
It's starting to become a little bit more popular.
But there's also a long time, for a long time, a skepticism of dev tool companies by venture
capitalists of how big can they really get?
And what's the total market size?
You constantly had this fear that open source would be undercutting the total amount you could bill for these dev tools.
And also for a long time, engineering teams had very low purchasing power.
So I remember when I was working in the mid-2010s at Airbnb, and it would be like a whole thing
to go and expense IntelliJ.
And I'd be like, hey, I really want to expense this IDE for 20 or 40 bucks.
And it was non-trivial for me as an IC to go make that argument and do that.
And it would be like, oh, you know, you can get Vim. You have all these tools for free. There was really, like, an aversion to paying for stuff.
You compare that to like being a account executive on the sales team or in marketing.
And you need expensive stuff.
Man, those are pretty generous budgets in order to get the correct tooling you need,
like Salesforce and other stuff to do your job.
Even GitHub on the per-seat model is not the most profitable. They really made a lot more money, I think, on selling compute and testability. But the per-seat model was not the highest-revenue situation. And that persisted for a long time.
Coming into 2020, that was a little bit the world we were entering, where, you know, it was believed that you could have great companies as dev tool companies, but it was not the most lucrative.
You know, the most lucrative and interesting ones would be FinTech or crypto, stuff like that.
It's not until 2025, or really now, that it's like, oh, no, actually, the most insane, leading, fastest-growing, highest-revenue companies in history happen to be dev tool companies, the Lovables and Cursors and Claude Codes of the world.
So it's just been fascinating to watch that growth as a founder of the space.
The other observation I have is that, you know, for the longest time, to create a successful
Dev tool company, I think there was actually quite a stable formula. The formula, in my opinion,
was you go and look at all these companies, all these companies who have large engineering teams,
Facebook, Google, Amazon, also mid-sized ones, Ubers and Twitters and Airbnbs, and you say,
great, what internal systems have you all built? PagerDuty, you know, this app that makes a phone call or texts you when there's an incident, came from inspiration from tooling that was built internally at companies like AWS. A lot of the best graphing software, a lot of the best deployment infrastructure, you know, Airflow, all these kinds of systems, they were built first within
large companies. And then some entrepreneur would say, oh, that's quite a good idea. Let me take that
and let me bring that to the rest of the market. Let's not lock up that wonderful idea at a single
company. Let's bring it out in a way that everyone can adopt it even if they don't have a large
engineering team to just build it internally. I think it's been really the formula of
dev tools for quite a long time.
And part of the reason I think this formula is successful is because
dev tools are really unique.
Engineers are able to solve their own problems.
This is not always the case in, again, in marketing, sales, recruiting.
It's not like a recruiter is like, oh, I'll just build my own recruiting software.
But dev tools, engineers will just try and build and solve and hack together their own
solution.
So if there is not an existing tool somewhere, locked up in some company, then you've got to question whether there was ever a real problem for any of the engineers in the world that you'd be selling back to.
So I think this formula existed.
What's unique, what's different about the times we're living in now, you ask, like, you know, about the value prop changing at Graphite: with AI, that formula is breaking down a little bit, because there's so much innovation happening so fast that you can't just say, oh, for five years Google has had this amazing tool, let me just bring it to the rest of the market. You say, hey, AI got better two months ago, and we're going to immediately try to innovate and productize that. We're going to try and bring that to the rest of the market. Finally, you know, dev tool companies are kind of getting ahead of what's being built internally at these private companies, or at least staying neck and neck. So that formula is breaking down a little bit.
And then the last thing I'll say, you talk about the value prop changing on the graphite website.
For the longest time, we were kind of applying that formula. For us, that formula was in the world
of code collaboration and code review, we could take good ideas from Google and Facebook and other companies.
We'd say, hey, they've done amazing things for how they stack their code changes, how they have inboxes to find those PRs, the interfaces that they use to review those, the systems they use to merge that stuff in. We don't have to get too innovative. We can come up with new ideas, but we don't have to. We can actually take a lot of tried and tested patterns and technologies, and just unlock them from Google and Facebook and bring them to the rest of the industry. That was a generally successful enough strategy. What's shifted with AI is we've had to get a lot more innovative. We'd say, okay, what is the bleeding edge of this? Let's try and nudge that forward. We can no longer just take inspiration. It's not like all these patterns have existed for five years. We actually now need to be the innovators ourselves, too, and intermingle that. Now, we're very lucky because we have quite a good platform. It's great to have a platform that you can layer AI innovations onto.
You know, you talk about that shift in the H1 header of our marketing,
and I think that shift has been felt internally, too, in our product development cycles.
Okay.
So how are you also seeing your users adopting and adapting to this change?
So you spoke about the fact that, you know, the cycle of innovation is really, really fast these days.
But are you also seeing the same thing of the adoption side?
I think so.
I think it's actually really helped us.
You know, as I've watched the selling and distribution of Graphite over the last five years,
in the first couple of years, it was a good tool.
It was very useful.
But there would always be that hesitancy from engineers like,
oh, can I really expense a $20 tool, even if it makes me 20% faster?
And you think, you know, hey, you're paid a lot of money.
You should absolutely expense $20.
Like that is the easiest deal in history.
But people would be hesitant around making that expense.
And they'd also be hesitant around security.
There was a lot of questions around how secure and trusted, you know, should you be about your code base.
It's tricky.
It's not user data.
So it doesn't follow all of the really locked-down protections of, like, exposing your user payment information or something like that.
But there is a general belief, like your code is kind of sensitive.
You should be quite protective about that.
But I feel like in, you know, 2020, 2021, people just hadn't quite made up their minds.
We were really helped by the wave of AI tooling in distribution and adoption, because
suddenly every engineer was expensing like five random AI coding tools.
And every AI coding tool they're using, every plugin, every trial they're running on their laptop, is also indexing their entire company's codebase, for better or for worse.
Some of the companies, I think, do a very good job with the security, and I'm very trusting of them. But I'm sure some of those side projects, some of those, like, scrappy startups, are probably doing a bad job.
And companies don't really have a way to limit this or are not currently applying a way to limit this because the codebases,
because of Git and because of Git's portability are all on the user's laptop.
And there's not a lot of restrictions of what an engineer can randomly install on their laptop to help them do their jobs.
So suddenly, you know, you have the code bases are a little bit less secure.
People are expensing more.
And it has incredible top-down support.
It was really interesting.
This wasn't like a rebellion of the engineers against management.
It's actually, you know, you read this in Hacker News, it's actually a lot of management kind of pushing this down.
Or you have these like executives saying, hey, why are we not adopting more AI tools?
Why are we not trying?
Why are we not using, like, Claude Code and Cursor?
I keep hearing it's making these other companies faster.
They're actually kind of pushing their engineering teams to do this,
and they're pushing past security and past procurement.
So the adoption massively picked up,
and it just was very helpful for us, because we'd been sitting there for years,
creating value, being able to measure it,
being able to have great success and reputation.
And suddenly the industry was much more open-minded
to adopting dev tools like ours,
with or without AI, but it really kind of opened the door.
That's great.
And that's how it should be.
And I'm so supportive of this because, you know, again,
engineers are incredible craftsmen and practitioners.
If you're a dentist and you're like, hey, I need to expense a $20 tool here and a $1,000
tool here.
If you're a car mechanic, of course you have reasonably expensive tools that help enable you
to do your job well.
I think engineers should be the same way.
I think that absolutely people should be expensing on the order of magnitude of $10,000 of tools to help them do their jobs.
Assuming the impact is there, assuming it's real, 100%.
It was shocking for so long that engineers didn't actually have that budget to expense and adopt tools.
Yeah.
Well, I think the key word there is assuming.
So, you know, when we talk about developer productivity, basically,
I think that's a conversation in and of itself, really,
because that's, we need to sort of come to an agreement on how to measure this, you know,
infamous developer productivity before we can talk about, you know,
the impact of AI or really anything about, about that at all.
So I'm pretty sure you must have your own opinion and your own references that you go to there,
because this is pretty core to what you do.
So I'm curious to hear what your take is on developer productivity.
I think that developer productivity, as you point out, is extremely hard to measure.
At the same time, I'm convinced that the wave of AI dev tools we're seeing right now, on average, in general, especially the good ones like Claude Code and Cursor and so on, are absolutely increasing developer productivity.
So I think it's incredibly hard to measure,
but I believe the value is there and being had.
It's hard to measure because, you know,
engineering in many ways is very much like an art form,
you know, for better or worse.
You know, if you work with sales, you work with recruiters,
you can get kind of numerical around the outcomes.
You can be like, hey, you know, if you're an account executive, what's the deal volume that you closed? How much revenue did you close? If you're a recruiter, it's like, you know, how many people did you hire?
We all know within engineering, you can't really just be like, oh, how many lines of code did you change?
That's a pretty tough way to measure it.
Now, you can look at the aggregate statistics.
And I actually think that people underrate that; in general, the statistics do tell a good story.
If I look at how many lines of code a person changed, the median lines of code per PR, the number of PRs they merged in, the average time to merge, the number of code reviews they gave.
If I look at like enough of these metrics, the reality is I'll actually see a reasonable correlation between a strong engineer and a weak engineer.
It's not the full story.
Of course, if you're working on different parts
of the code base, if you're a data scientist,
you're working on infrastructure,
you're shipping a front-end feature,
there's going to be variations within those sets.
Sometimes though, I think people are a little too dismissive.
Fundamentally, like if you're doing a ton of good work,
it will actually reflect and show in the metrics.
And likewise, if you're taking a week off randomly,
like it's going to show in the metrics.
But it's not the full story.
And I think you do have to really build in
the qualitative side of this story as well.
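The kind of aggregate signals described here can be sketched in a few lines of Python. This is a toy illustration: the records, field names, and numbers are all made up, not Graphite's actual data model.

```python
from statistics import median

# Hypothetical per-PR records for one engineer; the field names are
# illustrative, not any real tool's schema.
prs = [
    {"lines_changed": 120, "hours_to_merge": 4.0,  "reviews_given": 3},
    {"lines_changed": 45,  "hours_to_merge": 1.5,  "reviews_given": 2},
    {"lines_changed": 300, "hours_to_merge": 12.0, "reviews_given": 1},
]

def summarize(prs):
    """Aggregate the coarse signals mentioned above: PRs merged,
    median lines per PR, average time to merge, reviews given."""
    return {
        "prs_merged": len(prs),
        "median_lines_per_pr": median(p["lines_changed"] for p in prs),
        "avg_hours_to_merge": sum(p["hours_to_merge"] for p in prs) / len(prs),
        "reviews_given": sum(p["reviews_given"] for p in prs),
    }

print(summarize(prs))
```

No single number here identifies a strong or weak engineer, which is the point being made: it is the correlation across enough of these metrics, plus the qualitative side, that tells the story.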
But these are tools. And I think about, like, okay, let's say we treat engineering more like an art form and a little bit less like a factory line. Okay, but look at artists, you know,
would you rather have a digital camera or not have a digital camera or just only have a
paintbrush? Sure, yeah, if you give me only a paintbrush, I can still do amazing things
and create amazing art. But maybe the majority of people would like a digital camera. Maybe the majority
of people on average can do wonderful work with the digital camera. It's going to up level them
and enable them. So even if we view it more as an art form and less as like a factory line,
there's a really valid case for innovative tools to up level the productivity and create new forms of creation.
And the last thing I'll say is like there's kind of a gut test here, which is like, you know, if this stuff was not real, if it was kind of fake value, I would expect a lot of engineers to not actually use it.
And I expect a lot of managers to not push it.
But if I have both managers pushing teams to adopt it and I know every engineer I work with uses some form of AI auto-complete,
Cursor-style agentic editing, or even full, like, kind of headless Claude Code stuff.
Everyone I know is using something like this.
So they're using it voluntarily.
People who are self-managed and are doing, like, their own projects and companies are using it.
And managers are pushing it.
Okay, that's a lot of qualitative energy and data points pushing to the fact that like, oh, this must actually be quite productive and useful.
So I think the argument that this is kind of like, you know, pseudo value.
I don't really buy it.
I think it's real.
Okay.
Well, I think there are a couple of counter arguments that one could bring to this conversation.
So, you know, first the top down, let's say, push for adoption.
I've heard stories from organizations where executives are like, literally, please do something with AI. That doesn't necessarily mean that, you know, they have the most well-informed view, or that they know exactly what they mean. They feel peer pressure. And as you also very correctly pointed out, software engineering
is something that stands out as a prime candidate for adopting AI for a number of reasons,
because the tools can have a lot of impact, because there are many advanced tools as well, and because there's a lot of code out there for the model to train on, and so on.
So, you know, top-down pressure does not necessarily equate to value in that case.
So I would argue that maybe the opposite signal could be more telling.
So having people spontaneously adopt these tools.
And that's my question.
It's like, you know, if you yourself or people you know, are they using these tools in their
personal projects?
If they're starting companies, where they themselves are their own managers, are they using
these tools or not using these tools. And for me, everyone I know in these examples,
they are using Cursor, they are using Claude Code very voluntarily. That makes me think,
okay, hey, like there's actually a real connection here. Yes, the management team wants
it, but also like the ICs are voluntarily picking this up, even in cases where no one is
pressuring them to use it. And I think the reality is it creates productivity. Now, who doesn't
want to use this or who are the people who make an argument against adopting these tools?
What I've come to realize is that not all engineers want to be maximally productive.
And I don't say that, like, with a ton of judgment.
I don't think that's like a moral sin or anything.
Like, you don't have to live your life by trying to maximize productivity.
As much as engineers are also the same people who love like biohacking and like eating like the optimal nutrition.
Like, look, you don't have to live your life like that.
You're allowed to actually just say, no, I like the way I do it.
I don't like change all the time.
I don't want to throw away all these things I've learned.
It's kind of scary.
I think there's a valid argument there.
And again, I would equate it back to like if you're an amazing painter.
And like people are coming out with digital cameras.
You can make a lot of arguments against the digital cameras
of why you don't want to adopt it,
why it's not perfect in all cases.
In low light,
it's not the same.
You can come up with all these kind of edge cases.
But I do think if you are obsessed with productivity
and you are trying to maximize that output and productivity,
then I do think you'd be lying to yourself to really be against adopting
and trying some of these tools.
And yes,
you might have to learn new patterns or change the way you work
or get a little bit creative.
But that's a choice.
And that's a willingness.
So the engineers I see who are most resistant these days,
now that it's late 2025, who are most resistant to AI tools,
I think are folks who care more about their enjoyment of coding,
a little bit more about the craft,
who are a little bit more afraid of change.
And that's okay.
Again, I don't want to say like that's negative.
But if we're talking about developer productivity,
you're talking about maximizing it,
I think it would be a little bit silly to not be trying and experimenting
and adopting some of these tools.
Well, definitely, I can definitely see how, you know, this type of persona that you described would react to that.
Like, okay, basically, this is the way we've always done it.
I enjoy, like, coding manually.
So I'll just say no to these things altogether.
But I think, and I'm also, you know, kind of in watch and learn mode here because I have minimal exposure to these tools myself.
So I'm just trying to pick up the signals here and figure out what's really going on. But also, you know, we tend to forget that we're only a couple of years into this whole thing. So I think we're all in learning mode still.
You know what's the one thing I'll throw in as well? This stuff is fun. I love it, it's so great.
You know, it's fun in a way that, like, coding's not always the most fun. We've all had times debugging where it's like you're at the end of your rope. It's not fun. And you read all these articles, and everyone's so afraid of, like, you know, the artisan who used to be really good at handcrafting, and now this factory line is going to come and take it away. And I do think that's a scary hypothetical. But what I see in practice is my weekend
projects have never been more fun. Like before I'd be like, okay, I don't really want to like figure out
this new technology or play with Rust or something like that. Now it's like, you know what,
hey, I'm going to have, it's the weekend. I'm going to put a sports game on the TV and I'm going to
open up, you know, Claude Code or Cursor. And I'm just going to, you know, wing it. And I might even
finish this side project over the weekend.
It's unlocked a new funness to software engineering that didn't exist for me before. I get more flow to it, I don't hit as many bugs.
So I'm very thankful that there's also an element that like it's not completely soul crushing.
It hasn't fully removed all the aspects.
In fact, as an experienced coder, I kind of really enjoy it.
Okay.
Okay.
I mean, definitely having fun while doing that is a bonus.
Nobody would deny that.
What I was trying to get to is like, okay, so we're still trying,
all of us are still trying to learn what these things are capable of, you know,
and what the good use cases versus the not-so-good use cases are.
I would like to refer here to a very, very recent use case that Andrej Karpathy
shared with the world.
And for people who may not have heard the name, he's a very famous AI engineer
and also the person who literally coined the term vibe coding.
So sort of, you know, this interactive flow state with your AI assistant that helps you get code out of the way, so to say.
So he basically said, okay, so here's my latest project.
I tried shipping that using, you know, vibe coding and AI assistant, and it completely failed.
And he also interestingly gave the reason why, in his opinion, that failed because it was a new project.
It was, you know, the architecture behind it was elaborate and innovative, you might as well say.
So what he found was that AI didn't work in that case.
It's something that required, like, you know, genuine innovation.
And it was something that probably had not been encountered previously in the training
dataset for his AI assistant.
So I think that probably tells us something: that, you know, there are use cases where you can get maybe a good boost from AI, and there are others that are maybe not such a good fit.
I think here's the difference.
And vibe coding is really fun.
I don't want to be too negative on vibe coding,
but I don't think vibe coding is very good
in meaningful professional
workplaces, for example.
Like, you know, you want to create a fun app
or game on the weekend.
Absolutely, go vibe code it.
But if you care about collaborating on a team,
you care about maintaining a codebase for multiple years,
having enterprise-grade reliability,
and uptime, I think vibe coding is, as it currently stands, a pretty bad option.
To me, one of the biggest differences between vibe coding and normal coding, even if you're
using AI tools, normal coding, is do you deeply understand and have opinions on the actual
code that's being written and architected?
In your vibe coding, I don't really care.
This thing is a black box to me.
And I'm just going to take every error message.
I'm going to feed it back into the AI machine until it eventually fixes it. I'm going to black-box the system, and then I'm going to go look at the outputs and kind of QA test it.
Like, okay, does this app or game kind of feel right to me?
That's vibe coding.
In software engineering at a company, we're doing things like code review.
This gets to my expertise like code review, which is, okay, hey, you, however you created
this code change, whether you asked an AI to make it or you hand typed it, at some point,
around the 100 to a thousand line diff mark, someone's going to review it and read it and think
about it carefully and think about how it integrates into the overall system and debate
whether this architecture is right or wrong or if you're using the right patterns.
And that still matters a lot.
And why does it matter?
Because, you know, people start caring a lot around the uptime, the bugs,
the maintainability, the extensibility.
People are going to have to perform DevOps.
People are going to have to be thoughtful on how they roll it out, how they provide a database
upgrade or migration.
What happens if, like yesterday, AWS was down for half a day?
It really matters that engineers know, like, what's running on those production-grade systems
so they can figure it out and debug it and help restore the internet for the rest of the world.
That's where vibe coding, to me, falls apart: it's the day that engineers don't really understand the code that's running in their systems, and they don't understand how to iterate on it or have opinions on the strengths and weaknesses of those architectures. And to me, that's actually orthogonal to raw AI coding. I think you can use auto-complete, you can ask, you know, Claude Code to help you with a code change, but do you understand what you're doing? And are you still code reviewing it and testing it and being thoughtful around the rollout, very incremental? That's what really matters to me. Vibe coding, I think, will remain for more experiments and weekend projects and whiteboarding stuff.
I think the crucial point is precisely this.
Basically knowing what you're doing, you may take hints or entire code blocks or modules
even that are suggested by your AI assistant, but you should always be in a position to actually
dissect the code and decide, well, okay, does it do a good job?
Could I possibly change it in that way?
Will it maybe have this negative side effect that I'm going to trip over down the road and
so on. So basically
use with caution would be
I don't know, my one line on this. I think we're actually in a world where code review has never been more important,
because I used to be able to trust
George, if you and I were coding together
and you submit me a code change at work,
I used to be able to trust that at least you
read it and thought about it really deeply. Heck, you
had to. You wrote it. And I'm
there and I'm going to double check you, but I have
some trust in you as a human and as a teammate.
Now, in 2025, I don't know.
I don't know, maybe you thoughtfully wrote it,
or maybe you were watching a TV show and you asked Claude Code to write it,
or maybe, heck, maybe Codex or one of the background agents
just created this code change and no human has ever laid eyes on it.
Now it really matters that at the code review step, someone's reading it, someone is thinking deeply about it and testing it and taking ownership of the impact of that code change as it goes out the door.
It's interesting.
The purpose of code review, I think, is really becoming magnified in its importance, in pedagogy and accountability and some of these deep-rooted meanings.
Yeah, and well, I think you also just described one of the side effects of using AI in general. So one of those is, well, the non-determinism aspect, at least, you know, in the transformer-based AI that we mostly refer to as AI these days. So, you know, as a previous guest on the podcast said, well, you get it perfect, like, 99 times out of 100. And then the 100th time, it just goes completely off the rails, and it's like a random bug that you'll never be able to figure out.
So do you think that this non-determinism aspect in a profession
that used to be precisely founded on the idea that things may be complex,
but they are deterministic?
If you dig deep enough, you can find the root.
and you know, you can fix the bug and you can see working code.
And do you think that adding this element of non-determinism changes the overall picture radically?
I don't worry too much about it.
Here's my reasoning.
To me, if we're thinking about AI codegen, either you, my teammate, wrote the code, or an AI bot wrote the code.
Both are fallible.
You can make a mistake.
Look, we're human.
look, we make, we write bugs.
If I write 100 code changes, yeah, you're right.
One out of 100 is it going to have a bug?
AIBot can make a mistake too.
It's like self-driving cars.
Now we can debate, like, which one makes more bugs and stuff.
But in both cases, these are fallible systems.
Someone's going to make a bug.
My question is, what are we doing about it?
How are we thoughtfully testing, reviewing that code?
How are we thinking about the architecture around that code to say, hey, let's assume there's
unknown unknowns in the system.
How are we protecting the blast radius of that?
You know, what happens if this new module, this new library, errors? What's going to happen to the overall system? Are we architecting things in a graceful way around that?
Are we rolling it out incrementally, watching the errors and then performing DevOps and conducting a
rollback if something goes wrong? I already act as an engineer, assuming that any code change
might accidentally have an issue in it, and yet we achieve good uptime. And the reason that works
is because there's a lot more that goes into engineering than just writing the code. We kind of bake that in.
So I think in the era of AI codegen, the old practices are still quite useful: modularizing my code to limit blast radius, being thoughtful around error handling, doing incremental rollouts, being thorough about my testing.
All of this used to matter with humans and now matters with the AI.
The mistake to be made is to assume that this AI is so much better than a human that it's not going to make bugs, and to throw away all those practices.
And, you know, as we lower the bar to generating that code, there is a risk that people forget
to apply all the other classic practices, but I think they're just as important, if not more
important now.
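What "limiting blast radius" can look like in code is easy to sketch. This is a minimal, hypothetical example: the feature flag, the ranker functions, and their names are all invented for illustration, not from any real codebase.

```python
import logging

# Incremental-rollout flag: flip off (or roll back) if errors spike.
NEW_RANKER_ENABLED = True

def new_ranker(items):
    # Imagine this is the freshly generated module with unknown unknowns.
    return sorted(items, key=lambda x: -x["score"])

def legacy_ranker(items):
    # The old, battle-tested path we can always fall back to.
    return list(items)

def rank(items):
    """Route through the new module, but contain any failure."""
    if NEW_RANKER_ENABLED:
        try:
            return new_ranker(items)
        except Exception:
            # A bug in the new module shouldn't take the feature down:
            # log it, degrade gracefully, let DevOps decide on a rollback.
            logging.exception("new_ranker failed; falling back")
    return legacy_ranker(items)

print(rank([{"score": 1}, {"score": 5}]))
```

The structure assumes any code change, human- or AI-written, might have an issue in it, which is exactly the mindset described above.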
And what's great, you know what? We got more time. Everyone keeps saying, what are we going to do when we get all this time back that we used to spend writing code? Well, I'll tell you: let's go into the outer loop of software development. Let's go spend our time becoming really, really good at the testing and the rollouts and the validation and the DevOps side of that, because at least today there's not a clear short-term path to automating all that away. So I think there's some great work and energy to be done there. But you ask, you know, am I worried about the fallibility and the occasional mistakes of these things? I don't worry about it too much.
Okay, so how do you see, how do you actually apply this mentality and these actual capabilities that the AI models give you in code review, in Graphite?
So are you using it as a sort of, well, possibility generator or a sparring partner when people submit their code reviews?
The way we use it in Graphite today, we have a pull request page, so a way to kind of stare at the diff and review it. It's actually one of the most popular ones in the industry outside of GitHub and GitLab themselves. And the way we use AI, we've kind of walked a very logical path. The first thing we did is we got the LLM to leave comments on
your PR. And if you want to be a fancy marketer, you call this AI code review. If you want to be
a logical engineer, you might call this like AI testing or AI linting even. We're
running the code through a complex pattern of LLMs, but with a vein of like, hey,
read this and tell me if there are some minor mistakes,
call out some issues, style, functionality, security, you name it.
But try and scan my code and attempt with some accuracy to leave some inline comments.
That's what we started with.
And now this is very common across the industry.
There's a lot of companies and tools and open source patterns that can do this.
I think it's great.
I think this is like a new pillar of testability and validation on the pull request set of things.
You got your unit test, your integration test, your linters, and now you have your AI reviewers.
But we're going to add that in.
It's going to be a nice layer.
The second thing we did, I think this was a little more innovative, is we observed some of the patterns that companies like Cursor were applying
at the IDE and giving you a chatbot to ask questions, research, reason about, and even mutate
the code. We said, great, let's bring that into the PR page. Because when you're reviewing code,
wouldn't it be great to have this assistant that can help explain things to you if you're new,
that can help pull in context from outside the code change from other PRs, other parts of the code base?
And also, if you're the author and someone's left you a bunch of comments and you need to fix some stuff up,
Wouldn't it be great if you could just resolve those right in line on the PR page without having to check out the code, fix it, rerun it, and push it back up?
So now we have chat on the PR page. I think it's been quite a level-up, thanks to AI.
And then the third thing we've been developing, which we're alpha testing right now, is the ability to trigger any of these agents you want from that pull request page as well.
So whether you like your Claude Code or your Codex or whichever coding agent, I think it's fantastic to immediately be able to execute those in a sandbox from the PR page.
Maybe you want to resolve merge conflicts.
Maybe you want to clean up some comments and retest and rebuild that code.
Maybe you want to break that code into two separate PRs.
There's a lot of larger operations you might want to do at that code review, that pull request stage.
And I want to give you the ability to execute those and trigger those right from the PR page in a coding sandbox.
To me, again, we can put fun marketing spins on this, but I think it's the evolution of CI in a way.
Because for a long time, what CI was, was, I'm going to give you a sandbox of compute.
I'm going to clone your code into it, and we're going to run some scripts you have.
Agents running on your PR is roughly the same thing, but instead of running a deterministic script,
I'm going to run a non-deterministic Claude Code or Codex, and I'm going to take the output of that and then push it back to the PR.
So it's the evolution of CI in a way, but allowing you a lot more power, and all in the vein of helping speed up this workflow, so we can keep that outer loop as fast as the codegen side of things is moving.
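The "agents as the evolution of CI" idea can be sketched roughly like this. Everything below is an illustrative stand-in, not Graphite's actual API: the job shape, the step functions, and the simulated agent are all made up.

```python
import random
from typing import Callable

def run_pr_job(checkout: dict, step: Callable[[dict], str]) -> dict:
    """Same shape as a CI job: sandbox + checkout + run a step + report."""
    sandbox = dict(checkout)   # isolated copy of the PR's files
    report = step(sandbox)     # a deterministic script OR an agent
    return {"sandbox": sandbox, "report": report}

def lint_step(files: dict) -> str:
    # Classic CI: deterministic, same input gives the same verdict.
    long = [name for name, src in files.items()
            if any(len(line) > 80 for line in src.splitlines())]
    return "ok" if not long else f"long lines in {long}"

def agent_step(files: dict) -> str:
    # Stand-in for invoking a coding agent: it may mutate the sandbox,
    # and its output varies run to run, so the result is pushed back to
    # the PR as a proposed change for humans to review, not merged blindly.
    files["README.md"] = files.get("README.md", "") + "\n<agent edit>"
    return random.choice(["resolved conflicts", "merged cleanly"])

print(run_pr_job({"a.py": "x = 1\n"}, lint_step)["report"])
```

The only structural difference between the two steps is determinism, which is why the non-deterministic one feeds back into review rather than straight into merge.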
That's how we're leveraging AI within Graphite today. I'm sure we're going to keep finding new and fun ways to apply it.
There's AI code tours.
There's filling out your descriptions.
There's lots of fun things we can do.
Those are the three big ones we're using.
Okay.
So how are you seeing users respond to those features?
I mean, how are they being adopted?
And what are they besides, you know, I don't know if you have any metrics that you can share,
but how, what kind of qualitative feedback are you getting?
Are they finding this useful or maybe confusing in some situations or what's the uptake?
No, I think people love it across the board.
Now, you know, it's important that you make this stuff an opt-in.
So the big caveat, we're not forcing this on anyone.
You don't have to run an AI reviewer on your code.
You don't have to use chat and ask questions.
You don't have to trigger agents.
This is all opt-in.
So I think we would get harsher feedback if we somehow, like, forced this upon all the users.
But given that it's an opt-in world, I think people really like it.
It's not perfect. So the hardest one is AI code review, which is, no matter how good you do it,
there are false positives in the current state of being. And people get really
frustrated when they see a comment that's left where it's like, oh, I see why it left that comment, but it's not quite right. It's a little bit of a hallucination,
a little bit of false positive there. That really frustrates people. Likewise, people get really
frustrated when there is a clear bug and the AI doesn't call it out when it has a miss as well.
So that one, I think, is particularly tricky. Chat, I think, gets very positive feedback because it's a relatively straightforward feature, to be honest.
It's a relatively tried and tested UI pattern at this point.
And we've just brought it to a surface area and given it a bunch of powerful tools.
Some people choose not to use it because they don't like the novel UX of it.
And that's totally fine.
You don't have to.
But the people who choose to opt into it and who are a little bit more early adopters in the AI technology, they seem to love it.
I don't know.
Quantitatively, I think we've run like tens of millions of PRs through AI code review at this point.
We've left millions of comments.
We look very closely at the acceptance rate of those comments.
We look at a lot of metrics, like are they thumbs up or thumbs down?
But one of the things we look at is the acceptance rate.
By acceptance rate, what I mean is, given that a comment was left on a line,
what are the odds that that line is updated before it's merged in?
It's not perfect, but it's actually an okay proxy to say that action was taken based on that comment.
And what's really nice is we can compare human comment acceptance rate to AI comment acceptance rate.
And we can say, is this about as good as human comments are creating action?
And in reality, the answer is yes.
It varies week to week.
But it hovers around 50%, both for human comments and AI comments.
The odds that if one is left, it creates action on that PR.
And to me, that's really valuable.
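The acceptance-rate proxy described here is easy to sketch. The comment records, authors, and changed-line sets below are made up for illustration; they are not real Graphite data.

```python
# Comments left on PR lines; the "ai" vs "human" authorship is invented.
comments = [
    {"pr": 1, "line": 10, "author": "ai"},
    {"pr": 1, "line": 22, "author": "human"},
    {"pr": 2, "line": 3,  "author": "human"},
    {"pr": 2, "line": 5,  "author": "ai"},
    {"pr": 2, "line": 9,  "author": "ai"},
]
# (pr, line) pairs that were updated between the comment and the merge.
changed_before_merge = {(1, 10), (2, 9), (2, 3)}

def acceptance_rate(comments, changed, author):
    """P(line updated before merge | a comment by `author` was left on it)."""
    mine = [c for c in comments if c["author"] == author]
    hits = sum((c["pr"], c["line"]) in changed for c in mine)
    return hits / len(mine)

print(acceptance_rate(comments, changed_before_merge, "ai"))     # 2 of 3 comments acted on
print(acceptance_rate(comments, changed_before_merge, "human"))  # 1 of 2 comments acted on
```

It is a proxy, not proof of causation, but computing it the same way for both authorship classes is what makes the human-versus-AI comparison meaningful.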
And what we see with the users, you asked about qualitative feedback.
Like a lot of users, they'll turn on these technologies.
And they'll be like, oh, you know, I don't know.
Is it useful?
I'm not sure.
It has left comments on a lot of my PRs. It left some; those are okay.
And then it'll catch something fantastic.
After a couple of days, it'll catch, like, a really subtle, deep, important bug.
And then suddenly the user's like, oh my God, like, totally worth it.
In fact, I'll sit on this all day.
If once a week this thing catches like an amazing, deep, subtle issue, absolutely worth it.
You know, like, I'll actually take a couple false positives.
I'll keep this thing running.
We find that, like, there's really that fall-in-love moment.
So I think, with or without Graphite, we are but one player in the space, but I am very convinced that everyone in the future will run some AI code review on every one of their PRs.
I think this is here to stay as a pattern.
Okay, so how do you actually see the software engineering profession developing in the future?
And we talked about a number of ongoing things so far.
And I think to me, what all of this seems to point to with pretty much certainty, I would say,
is that it's going to change.
I'm not exactly sure in what way, but people have come to rely more and more on AI assistance.
But the thing is that it looks like to be able to productively use this,
you actually have to know the basics really, really well.
If you go towards the vibe coding side of things, like, okay, hack this thing for me while I'm having my coffee,
I don't think it's going to end up very well.
So I think it's really, really important that, I don't know, maybe people start
their journey into the profession without actual exposure to those tools so that they get to learn
to code using their own ability and then they can gradually adopt these tools.
Is the question, you know, like how do I think modern software engineers should enter the field
and think about using these tools and, you know, is there a risk of using them too much as a crutch?
I'm on the fence. I mean, we're pontificating, but I think on average people should be using these tools.
I think you should be thoughtful and careful about it.
You know, I could compare this to other trends in the history of coding.
You could say, should new engineers learn Python, or should we force them to use C so that they understand memory and pointers and have to think about, you know, stacks and heaps? Or should we just let them, like, use Python and HTML and make some websites?
You can push it even further.
You say, hey, should new engineers write in C, or should we really make them use assembly so they understand, like, the deep inner workings of how computers work?
You can make an argument.
But my bigger issue is, coding's already pretty hard. If coding's too hard,
the bigger likelihood is that someone's going to not learn it.
Someone's going to like try it out,
not hit the activation energy,
and just be like,
oh,
this kind of sucks.
I don't like doing this.
I'm going to go do a different profession.
I'm going to not fall in love with the craft of software engineering.
I don't have that much sacred holiness around how people conduct their engineering.
It's kind of funny.
As a tool builder,
you would think I would.
I'm obsessed with the craft.
But at the end of the day, I do see that this is a means to an end.
And I think a lot of people are trying to make a nice iPhone app or they're trying to make a fun website.
And the tools they use to get themselves there, there are a means to an end.
Not everyone needs to be like really deeply understanding and obsessed about this.
I think you'll learn it as well as you have to learn it to pull off the job.
I think about myself in high school learning iOS programming.
And I was hacking my way through Objective-C, and I was learning some concepts.
And I was skipping some other concepts.
And I had a lot of memory leaks, and it was still good enough to get off the ground,
and I could learn deeper principles around memory allocation later on in my career.
So I think it's okay if people pick up and start using AI codegen
in order to help them fall in love with engineering and creativity
and creating cool technologies,
and then over time, their apps crash and they get bit by bugs,
and then they have to learn a little bit deeper patterns to fix that.
But maybe the way that they learn about then how to fix that is also a very AI-pilled version.
Maybe they're using tools to help them learn, help them debug,
I want to be supportive of that.
I think it would be way too curmudgeonly or conservative to say, no, you've got to learn it the way I learned it. You've got to cut your teeth on the raw stuff and then layer in the AI later.
I think the new generation of engineers,
they're going to be using AI from day one
and they're going to be very skilled at it.
And I think they'll learn the fundamentals anyways.
They'll find a different path to it.
Yeah.
Well, you know, I think you're right.
I mean, it does very much depend on the situation.
If you're just, you know, like a solo coder
working on a fun side project, it doesn't matter all that much, you know, if you have memory leaks and, you know, if maybe after 30 minutes your software crashes, it's fine.
If you're working in an enterprise setting, it matters really, really a lot.
So you have to be very thorough about that.
Yeah.
I do think it's funny.
I do, I do look around.
I'm like, man, I am essentially that C programmer who is looking at a new generation of Python
programmers being like, oh, you guys don't understand how the real stuff works.
And I don't want to be that guy.
I don't want to fall into that trap.
I want to be supportive of new waves here.
Okay, cool.
All right.
I think that's a nice message to wrap up the conversation, actually,
because it's both forward-looking and kind of optimistic, which I like.
I'm optimistic.
I think it's never been a more fun time to get involved in software engineering.
There's never been more things to build.
In the late 2010s, we were mostly polishing; things were incremental, like a lot of polish on existing ideas.
Suddenly, everything's blown wide open.
There's so much overhang.
There's so much opportunity to build really novel, exciting ideas.
For anyone listening, I really truly deeply believe there's never been a better,
easier, more fun time to get involved in software engineering, building apps,
building companies.
So I very much encourage people to go try it.
Cool.
I think, you know, I think you're pretty convincing, so I expect a huge uptick in developer sign-ups after this podcast.
Amazing.
Thanks for sticking around.
For more stories like this, check the link in bio and follow Linked Data Orchestration.
