Screaming in the Cloud - The Transformation Trap: Why Software Modernization Is Harder Than It Looks
Episode Date: August 21, 2025

In this episode of Screaming in the Cloud, Corey Quinn talks with Jonathan Schneider, CEO of Moderne and author of books on Java microservices and automated code remediation. They explore why upgrading legacy systems is so hard, Schneider's journey from Netflix to building large-scale code transformation tools like OpenRewrite, and how major companies like Amazon, IBM, and Microsoft use it. They also discuss AI in software development, cutting through the hype to show where it genuinely helps, and the human and technical challenges of modernization. The conversation offers a practical look at how AI and automation can boost productivity without replacing the need for expert oversight.

Show Highlights
(2:07) Book Writing and the Pain of Documentation
(4:03) Why Software Modernization Is So Hard
(6:53) Automating Software Modernization at Netflix
(8:07) Culture and Modernization: Netflix vs. Google vs. JP Morgan
(10:40) Social Engineering Problems in Software Modernization
(13:20) The Geometric Explosion of Software Complexity
(17:57) The Foundation for LLMs in Software Modernization
(21:16) AI Coding Assistants: Confidence, Fallibility, and Collaboration
(22:37) The Python 2 to 3 Migration: Lessons for Modernization
(27:56) The Human Element: Responsibility, Skepticism, and the Future of Work

Links:
Wiz Cloud Security Health Scan: https://wiz.io/scream
Moderne (Jonathan Schneider's company): https://modern.ai
LinkedIn (Jonathan Schneider): https://www.linkedin.com/in/jonathanschneider/
Transcript
And these are all the sort of basic primitives.
And then, you know, at some point we said, well,
recipes could also emit structured data in the form of tables,
just rows and columns of data.
And we would allow folks to run those over thousands or tens of thousands
of these lossless semantic tree artifacts and extract data out.
This wound up being the fruitful bed for LLMs eventually arriving.
It's that we had thousands of these recipes,
emitting data in various different forms.
And if you could just expose as tools, all of those thousands of recipes to a model and say,
okay, I have a question for you about this business unit, the model could select the right
recipe, deterministically run it on potentially hundreds of millions of lines of code,
get the data table back, reason about it, combine it with something else.
And that's the sort of, I think, foundation for large language models to help with large-scale
transformation and impact analysis.
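The tool-calling loop described in that cold open can be sketched in miniature. Everything below is invented for illustration (the recipe name, the data shapes, the registry); it is not Moderne's or OpenRewrite's actual API, just the shape of the idea: deterministic recipes that emit rows-and-columns data, exposed by name so a model can select one and run it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataTable:
    """Rows and columns of data emitted by a recipe."""
    columns: list[str]
    rows: list[tuple]

def find_log4j_versions(files: dict[str, str]) -> DataTable:
    """A toy 'recipe': deterministically extract log4j versions from sources."""
    rows = [
        (path, line.split(":", 1)[1].strip())
        for path, text in sorted(files.items())
        for line in text.splitlines()
        if line.strip().startswith("log4j:")
    ]
    return DataTable(columns=["path", "version"], rows=rows)

# The registry a model would see as its available "tools".
TOOLS: dict[str, Callable[[dict[str, str]], DataTable]] = {
    "find_log4j_versions": find_log4j_versions,
}

def run_tool(name: str, files: dict[str, str]) -> DataTable:
    # The model's only nondeterministic job is selecting `name`;
    # the recipe itself runs deterministically over the code.
    return TOOLS[name](files)
```

The model then reasons over the returned table, possibly combining it with the output of other recipes, while the extraction itself stays deterministic and repeatable.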
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
And my guest today has been invited to the show because I've been experiencing, this may shock you,
some skepticism around a number of things in the industry.
But we'll get into that.
Jonathan Schneider is the CEO of Moderne.
And before that, you've done a lot of things, Jonathan.
First, thank you for joining me.
Yeah, thanks for having me here, Corey.
Such a pleasure.
Crying Out Cloud is, in a way,
one of the few cloud security podcasts that's actually fun to listen to. Smart conversations,
great guests, and zero fluff. If you haven't heard of it, it's a cloud and AI security podcast from
Wiz run by CloudSec Pros for CloudSec Pros. I was actually one of the first guests on the show,
and it's been amazing to watch it grow. Make sure to check them out at wiz.io slash crying-out-cloud.
We always have to start with a book story because honestly, I'm envious of those who can write a book.
I just write basically 18 volumes of Twitter jokes over the years, but never actually sat down
and put anything cohesive together.
You were the author of SRE with Java Microservices and the co-author of Automated Code
Remediation: How to Refactor and Secure the Modern Software Supply Chain.
So you are professionally depressed, I assume.
Yeah, I mean, like most software engineers,
I hate writing documentation, so somehow that translated into, you know, writing a full-scale book instead.
I honestly don't remember how that happened.
A series of escalatingly poor life choices is my experience of it.
I think no one wants to write a book.
Everyone wants to have written a book.
And then you went and did it a second time.
Yeah, a much smaller one, that second one, you know, just a 35-pager, luckily.
But, you know, still, it's always quite the effort.
So one thing that I wanted to bring you in to talk about is that the core of what your company does, which is, please correct me if I'm wrong on this, software rewrite, software modernization.
Effectively, you were doing what Amazon Q transform purports to do before everyone went AI crazy.
Yeah, it started for me almost 10 years ago.
Now, at Netflix, on the engineering tools team, where I was responsible for making people move forward, in part. But they had that
freedom and responsibility culture.
So I could tell them, you're not where you're supposed to be.
And they would say, great, do it for me.
Otherwise, I got other things to do.
And so really that forced our team into trying to find ways to automate that change on their behalf.
I've never worked at quite that scale in production.
I mean, I've consulted in places like that, but that's a very different experience because
you're hyper-focused on a specific problem.
But even at the scales that I've operated at, there was never
an intentional decision
of someone's going to start out today and we're going to
write this in a language and framework that
are 20 years old. So this stuff has
always been extant for a while. It has
grown roots. It has worked
its way into business processes
and a bunch of things have weird dependencies
in some cases on bugs. People are
not declining to modernize
software stacks because they haven't
heard that there's a new version out.
It's because this stuff is painfully
hard because people and organizations
that they build are painfully hard.
I'm curious in your experience, having gone through this at scale with zeros on the end of it,
what are the sticking points of this?
Why don't people migrate?
Is it more of a technological problem, or is it more of a people problem?
Well, first I would start, and hopefully with a sympathetic viewpoint for the developer,
which is like pretend I haven't written any software yet, and I'm actually starting from today.
And I look at all the latest available things, and I make the perfectly optimal choice.
for every part of my tech stack today, and I write this thing completely clean.
Six months from now, those are no longer the optimal choices.
Oh, God, yes.
The worst developer I've ever met is me six weeks ago.
It's awful.
Like, what was this idiot thinking?
You do get blame, and it's you, and wow, we need not talk about that anymore.
But, yeah, the past me was terrible at this.
That's right.
And always will be.
Future you will be the next past you.
So there's never an opportunity where we can say we're making the optimal choice
and that optimal choice will continue to be right going forward.
So I think that paired with one other fact,
which is just that the tools available to us
have essentially industrialized software production
to the point where we can write net new software super quickly
using off-the-shelf third-party open-source components.
We're expected to because you have to ship fast.
And then what do you do when that stuff evolves at its own pace?
So nobody's really been good at it, and I think as the more authorship automation we've developed for ourselves, from IDE rule-based intention actions to now AI authorship, the time that we spend maintaining what we've previously written has continued to go up.
I would agree. I think that there has been a shift and a proliferation, really, of technical stacks and software choices.
And as you say, even if you make the optimal selection of every piece of the stack, which incidentally is where some people tend to founder, they spend six months trying to figure out the best approach, pick a direction and go.
Even a bad decision can be made to work.
But there are so many different paths to take that it's a near certainty that whatever you've built reflects one particular combination of paths that you've picked.
You've effectively become a unicorn pretty quickly, regardless of how mainstream each individual choice
might be. That's right. That's just the nature of software development.
I am curious, since you did bring up the Netflix freedom and responsibility culture,
one thing that has made me skeptical historically of Amazon Q's transformation abilities, and of the many
large companies that have taken a bite at this apple, is they train these things and build these
things inside of a culture that has a very particular point of view that drives how software
development is done, like how many people have we met that have left large tech companies to go
found a startup, tried to build the exact same culture that they had at the large company and just
founder on the rocks almost immediately because the culture shapes the company and the company
shapes the culture. You can't cargo cult it and expect success. How varied do you find that these
modernization efforts are based upon culture? I'm glad to say that for my own story, I had a degree
of indirection here. I didn't go straight from Netflix to founding something. So I was at Netflix.
I think that freedom and responsibility culture meant that Netflix in particular had far less
self-similarity or consistency than, say, a Google that has a very prescriptive standard
for formatting and everything in the way they do things. And so I left Netflix. I went to
pivotal VMware was working with large enterprise customers like JP Morgan, Fidelity, Home Depot,
et cetera, working on an unrelated problem in continuous delivery and saw them struggling with
the same kind of problem of, like, migrations and modernization, like everybody does. And what struck
me was that even though they're very different cultures, J.P. Morgan much more strongly resembles
Netflix than it does Google. Netflix's lack of consistency was by design or by culture
intentional. And J.P. Morgan's is just by the sheer nature of the fact that they have 60,000
developers and 25 years of history and development on this.
And so a solution that works well for dissimilar by design actually works well in the
typical enterprise, which is probably closer to Netflix than it is to Google.
Yeah, a lot of it depends on constraints, too.
J.P. Morgan is obviously highly regu... sorry, J.P. Morgan Chase,
they're particular about the naming.
People are.
They're obviously highly regulated and mistakes matter in a different context than they do when
your entire business is basically
streaming movies and also
creating original content that you then cancel
just when it gets good.
Right, right, right.
Yes.
So there's, there is that question,
I guess, of how this stuff evolves,
but taking it a bit away from the culture side
of it, how do you find
that modernization differs
between programming languages?
I mean, I don't know if people are watching this on the video
or listening to it. We take
all kinds, but you're wearing a hat right now
that says JVM. So I'm just going to
speculate wildly that Java might be your first love, given that you did, in fact,
write a book on it.
It was one of my first loves, yeah. I'm technically a Java Champion right now, although, you
know, I actually started in C++, and I hated Java for the first few years I worked on it.
But I actually think...
Stockholm syndrome can work miracles.
It sure can. It absolutely can.
I don't know that the problems are that different.
There's, you know, a lot of different engineering challenges, how statically typed is something,
how dynamically typed is it, how accurate can a transformation be provably made to be?
But in general, I think the problems are, the social engineering problems are harder
than the specifics of the transformation that's being made.
And those social engineering problems are like, do I build a system that issues mass
pull requests from a central team to all the product teams and expect that everybody's
going to merge them because they love it when, you know, random things show up
in their PR queue? Or do product teams perceive that like unwelcome advice coming from an in-law, and
they're just looking for a reason to reject it, you know, and then they would prefer instead to have
an experience where, you know, when they're about to undergo a large-scale transformation that
they pull or they initiate the change and then merge it themselves. So like those are the things
that I think are highly similar regardless of the tech stack or company that's because
people are people kind of everywhere.
You take the suite of Amazon Q transform options, and they have a bunch of software modernization
capabilities, but also getting people off of VMware due to extortion, as well as getting
off of the mainframe, which that last one is probably the thing I'm the most skeptical
of.
Companies have been trying to get off of the mainframe for 40 years.
The problem is not that you can't recreate something that does the same processing.
It's that there are thousands of business processes that are critically dependent on that thing,
and you can't migrate them one at a time in most cases.
I am highly skeptical that just pour some AI on it
is necessarily going to move that needle in any material fashion.
I think that there's a two different kinds of activities here.
One is code authorship, that new authorship.
That's what the co-pilots are doing, Amazon Q's doing, et cetera.
It's really assisting in that respect.
And then there's code maintenance, which is,
I need to get this thing from one version of a framework to another.
maintenance can also include
I'm trying to consolidate
one feature flagging vendor to
or two feature flagging vendors to one
but when I
think of something like a COBOL to
a modern stack
JVM or .NET or whatever the case might be,
I honestly see that less
as a maintenance activity and more as
an authorship activity
and you're writing net new software
in a different stack and a different set
of expectations and assumptions
and so I'm skeptical
too. I don't think there's a magic wand, but to the extent that our authorship tools
help us accelerate in that new development, those problems, the cost of those problems goes
down, I think, over time. Yeah, that does track. It makes sense of how I tend to think about
these things. But at the same time that the cost of these things goes down and the technology
increases, it still feels like these applications that are decades old in some cases are still
exploding geometrically with respect to complexity?
That's right. Yeah.
How do you outrun it all?
Well, to me, there's not just one approach here, but I feel like, you know, for my own
sake, where my focus is, is really trying to reclaim developer time in some area so
that they can refocus that effort elsewhere.
And I think one thing I hear pretty consistently is that because of that explosion in software under management right now,
a developer is spending like 30 or 40 percent of their time just kind of restitching applications and keeping the lights on.
And that's something we need to, like, get rid of a bit, or at least minimize as much as possible, so that, you know,
the next feature they're developing isn't just a net new feature but is actually, you know,
pulling some, like, old system into a more modern framework as well.
It's just another activity that can go back onto their plate, something they can do.
That does track.
I guess the scary part, too, is, having lived through some of these myself, where we know that we need to upgrade the thing, to break off the monolith, to modernize the whole stack, et cetera, et cetera, et cetera.
And it feels like there's never time to focus on that because you still have to ship features.
But every feature you're doing feels like it's digging the technical debt hole deeper.
It is.
It is.
Yeah. So, I mean, this is what I mean: if we can take the assets that we have under management right now and, like, keep them moving forward, then we have, like, less drift and less, you know, complexity to deal with overall.
It's an important piece of that puzzle, I think.
As you said, you've been working on this for 10 years.
Gen AI really took off at the end of 2022, give or take, well, into 2023,
and I'm curious to get your take on how that has evolved.
I mean, yes, we all have to tell a story on some level around that.
Your URL is modan.aI, so clearly there is some marketing element to this,
but you're a reasonable person on this stuff, and you go deeper than most do.
I think a lot of what our team has developed over the last several years
has been accidentally leading towards this moment where we've got a set of tools
that an LLM can take advantage of.
So the first thing was, you know,
when I'm looking at a code base,
the text of the code is insufficient.
I think even the abstract syntax tree
of the code is insufficient.
So things like Tree-sitter, you know,
and I won't mention all the things built on top of Tree-sitter,
but if it's just an abstract syntax tree,
there's not enough information often
for a model to latch onto
to know how to make the right transformation.
And the reason
I started OpenRewrite at the very beginning, 10 years ago, was because the very first problem
I was trying to solve at Netflix was moving from blitz4j, an internal logging library,
to not blitz4j. We were just trying to kill off something we regretted. And yet that logging
library looked almost identical in syntax to SLF4J or any of the other ones. And so just looking
at log.info, well, that looks exactly like log.info from another library. I couldn't, you know,
narrowly identify where blitz4j still was, even in the environment.
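The ambiguity being described can be shown in miniature. In this sketch, Python's `ast` module stands in for a Java parser and the module names (`blitz_log`, `slf4j_like`) are invented: two different logging libraries produce textually identical call sites, and only resolving the imports, a crude version of what a compiler knows, tells them apart.

```python
import ast

SOURCE_A = "import blitz_log as log\nlog.info('hello')\n"
SOURCE_B = "import slf4j_like as log\nlog.info('hello')\n"

def call_sites(source: str) -> list[str]:
    """Return call expressions as text, the way a pure AST view sees them."""
    return [
        ast.unparse(node)
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
    ]

def resolved_targets(source: str) -> list[str]:
    """Resolve `log` back to the imported module: crude 'symbol solving'."""
    tree = ast.parse(source)
    aliases = {
        a.asname or a.name: a.name
        for node in ast.walk(tree)
        if isinstance(node, ast.Import)
        for a in node.names
    }
    return [
        f"{aliases[node.func.value.id]}.{node.func.attr}"
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Attribute)
        and isinstance(node.func.value, ast.Name)
        and node.func.value.id in aliases
    ]
```

`call_sites` sees the same text for both sources; `resolved_targets` distinguishes them, which is the extra information a type-attributed tree carries over plain syntax.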
So I had to kind of go one level deeper, which is what does the compiler know about it?
And that is actually a really difficult thing to do.
To just take the text of code and parse it into an abstract syntax tree, you can use Tree-sitter.
To go one step further and actually exercise the compiler and do all the symbol solving,
well, that actually means you have to exercise the compiler in some way.
Well, how is that done?
What are the source sets?
What version does it require?
Which build tool does it require?
And so on.
This winds up
being this like hugely complex decision matrix to encounter an arbitrary repository and build
out that LST.
And so we built out that LST, or Lossless Semantic Tree.
And then we started building these recipes, which could modify them and those recipes
stacked on other recipes to the point where, like, a Spring Boot migration has 3,400 steps
in it.
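That stacking idea can be sketched as recipes composed from smaller recipes. This is a deliberately crude illustration operating on raw strings; real OpenRewrite recipes operate on Lossless Semantic Trees, and all names here are invented.

```python
from typing import Callable

# A "recipe" here is any deterministic source-to-source transformation.
Recipe = Callable[[str], str]

def replace(old: str, new: str) -> Recipe:
    """Smallest possible recipe: a literal substitution."""
    return lambda src: src.replace(old, new)

def composite(*steps: Recipe) -> Recipe:
    """A larger recipe is just an ordered pipeline of smaller recipes."""
    def run(src: str) -> str:
        for step in steps:
            src = step(src)
        return src
    return run

# Tiny stand-in for a framework migration built from smaller steps.
upgrade = composite(
    replace("javax.servlet", "jakarta.servlet"),
    replace("SpringBoot2Config", "SpringBoot3Config"),
)
```

In a real system each step would itself be a composite, which is how a migration grows to thousands of steps while each one stays small and testable.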
And these are all the sort of basic primitives.
And then, you know, at some point we said, well, recipes could also emit structured data
in the form of tables, just rows and columns of data.
And we would allow folks to run those over thousands or tens of thousands of these
lossless semantic tree artifacts and extract data out.
This wound up being the fruitful bed for LLMs eventually arriving,
is that we had thousands of these recipes, emitting data in various different forms.
And if you could just expose as tools, all of those thousands of recipes to a model and say,
okay, I have a question for you
about this business unit. The model could
select the right recipe, deterministically
run it on potentially hundreds of millions
of lines of code, get the data table
back, reason about it, combine it with
something else. And that's the
sort of, I think, foundation for
large language models to help
with large-scale transformation and impact
analysis. This episode
is sponsored by my own company,
The Duckbill Group.
Having trouble with your AWS
bill? Perhaps it's time to renegotiate
a contract with them. Maybe you're just wondering how to predict what's going on in the wide
world of AWS. Well, that's where The Duckbill Group comes in to help. Remember, you can't
duck the Duckbill bill, which I am reliably informed by my business partner is absolutely not
our motto. To give a somewhat simplified example, it's easy to envision, because some of us
have seen this, where we'll have code that winds up cranking on data and generating an artifact,
and then it stashes that object into S3
because that is the de facto storage system of the cloud.
Next, it then picks up that same object
and then runs a different series of transformations on it.
Now, from a code perspective,
there is zero visibility into whether that artifact
being written to S3 is simply an inefficiency
that can be optimized out and just have it passed
directly to that subroutine,
or if there's some external process,
potentially another business unit,
that needs to touch that artifact for something.
Reporting and quarterly earnings
are a terrific source
where a lot of this stuff
sometimes winds up
getting floated up
and it is impossible
without having conversations
in many cases
with people and other
business units entirely
to get there.
That's the stumbling block
that I have seen historically.
Is that the sort of thing
that you wind up
having to think about
when you're doing these things
or am I contextualizing this
from a very different layer?
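The pattern Corey is describing can be reduced to a toy, with an in-memory dict standing in for S3 and every name invented. Nothing in this code reveals whether anyone else reads the intermediate key, which is exactly the problem:

```python
import json

storage: dict[str, str] = {}  # stand-in for an S3 bucket

def crank_on_data(records: list[int]) -> None:
    """Step one: generate an artifact and stash it in shared storage."""
    artifact = {"total": sum(records)}
    # Looks like a removable inefficiency you could pass in memory instead...
    storage["reports/artifact.json"] = json.dumps(artifact)

def transform_artifact() -> int:
    """Step two: pick the same object back up and transform it."""
    # ...until you learn that a quarterly-reporting job in another business
    # unit also reads this key. That dependency lives outside the code.
    artifact = json.loads(storage["reports/artifact.json"])
    return artifact["total"] * 2
```

The code works either way; only a conversation with the other business unit tells you whether deleting the intermediate write is a refactoring or an outage.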
I do think of this process
of large-scale transformation
and impact analysis
very much like what you're describing
is like a data warehouse ETL type thing,
which is, you know, I need to take a source of data,
which is the text of the code,
and enrich it into something that's everything
the compiler knows, all the dependencies
and everything else.
From that point, once I have that data,
well, that's a computationally expensive thing to do.
Once I have that, there's a lot of different applications
of that same data source.
I should point out that I have been skeptical of AI
in a number of ways for a while now.
And I want to be clear that when I say
skeptical, I do mean I am middle of the road on it. I see its value. I'm not one of those,
It's just a way to kill trees and it's a dumb Markov chain generator. No, that is absurd.
I'm also not quite on the fence of this changes everything and every business application should
have AI baked into it. I am very middle of the road on it. And the problem that I see as I look
through all of this is it feels like it's being used to paper over a bunch of these problems where you have
to talk to folks.
I've used a lot of AI coding assistants,
and I see where these things tend to fall short and fall down.
A big one is that they seem incapable of saying,
I don't know, we need to go get additional data.
Instead, they're extraordinarily confident and authoritative and also wrong.
I say this as a white dude who has two podcasts.
I am conversant with the being authoritatively wrong point of view here
as sort of my people's culture.
So it's one of those, how do you meet in the middle on that?
How do you get the value without going too far into the realm of absurdity?
Well, I do think that these things need to be, they need to collaborate together.
And so it is with Amazon Q Code Transformation, that's working to provide migrations for Java and other things.
You see that Amazon Q Code Transformation actually uses OpenRewrite, a rule-based, deterministic system, behind it to actually make a lot of those changes.
An open source tool that incidentally you were the founder of, if I'm not mistaken.
That's right. Yeah, and that's what our technology is really based on as well. And it's not just Amazon Q Code Transformation. As we've seen, you know, IBM's Watson migration assistant is built on top of OpenRewrite, Broadcom's application advisor is built on top of OpenRewrite, Microsoft GitHub Copilot's AI migration assistant, I think is the current name, is also built on that. And they're better together. I mean, you know, that tool runs OpenRewrite to make a bunch of deterministic changes and then follows that up with further verification steps.
That is the golden path, I think, is trying to find ways in which non-determinism is helpful
and to stitch together systems that are deterministic at their core as well.
I hate to sound like an overwhelming cynic on this, but it's one of the things I'm best at.
The Python 2 to Python 3 migration, because Unicode, and no other real discernible reason,
took a decade, in no small part because the single biggest breaking change was the way that
print statements were then handled as a function.
And you could get around that by importing from the __future__ package, which affected a lot of two-to-three migration stuff.
But it still took a decade for the system tools around the Red Hat ecosystem, for example,
just around package management to be written to take advantage of this.
And that was, and please correct me if I'm wrong on this, a relatively trivial straightforward uplift from Python 2 to Python 3.
There was just a lot of it.
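For reference, the bridge being discussed, as a minimal sketch: `from __future__ import print_function` gave Python 2 code the Python 3 function form of `print` ahead of migrating, while on Python 3 the import is a no-op.

```python
from __future__ import print_function  # Python 2: makes print a function; Python 3: no-op
import io

def report(stream):
    # The function-call form, including keyword arguments like `file=`,
    # is valid on both sides of the 2-to-3 migration; the old
    # `print >>stream, "migrated"` statement form is Python 2 only.
    print("migrated", file=stream)

buf = io.StringIO()
report(buf)
```

Code written this way ran unchanged under both interpreters, which is what made the language-level part of the migration mechanical; the simultaneous library churn was the part that was not.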
Looking at that migration to anything
that's even slightly more complicated than that
feels like, past a certain point of scale,
an impossibility. You clearly feel differently,
given you've built a successful company
and an open source project around this.
Yeah, I think actually one of the characteristics
that was difficult about that Python 2 to 3 migration,
is there were things like the one you described
that were fairly simple changes and that were done at the language level.
But alongside that came a host of other library
changes that were made.
Not really because of Python 2 to 3,
but because there's an opportunity.
They're breaking things, we'll break things,
and everybody said, let's just break things, right?
And so a lot of people got stuck on not just the language level changes,
but all those library changes that happened at the same time.
And that's an interesting problem because it's kind of an unknown scoped problem, right?
Well, how much breakage do you have in your libraries?
Very much depends on the libraries that you're using right now.
So I mentioned earlier, like, the Spring Boot 2-to-3 migration OpenRewrite recipe right now has 3,400 steps.
I promise there's some part of 2-to-3 that we don't cover yet.
I don't know what that is, but somebody will encounter it.
And for them...
In production, most likely.
Yeah, they're going to be trying to run the recipe.
They're going to find something that, oh, you don't cover Camel or something.
Great.
You know, like, and so, and that's fine, you know, and we'll encounter that.
And probably if they use Camel in one place, they use it in a bunch of places.
And so it'll be worth it then to build out that additional recipe that deals with that Camel migration and then, you know, boom.
And then, you know, that's sort of contributed back for the benefit of everybody else.
I think what makes this approachable or tractable is really that we're all sort of building on the same substrate of third-party and open source stuff.
It's from J.P. Morgan all the way down to a tiny, like, you know, 15-person engineering team,
Moderne, like...
Oh, we can't overemphasize just how much open source has changed everything.
Back at the Bell Labs days, 70s and 80s, it was...
Everyone had to basically build their own primitives from the ground up.
Yeah, it was all completely bespoke.
Yeah. Now, it's almost become a trope.
Sure, implement a quicksort on a whiteboard.
Like, why would I ever need to do that?
It's okay.
I guess another angle on my skepticism here is I work with AWS bills, and the AWS billing ecosystem is vast, but the billing space is a bounded problem space, unlike programming languages that are Turing-complete. You can build anything your heart desires. Even in the billing space, I just came back from FinOps X in San Diego, and none of the vendors are really making a strong AI play. And I'm not surprised by this, because I have done a number of experiments with LLMs on AWS
billing artifacts, and they consistently make the same types of errors that seem relatively
intractable. Go ahead and make this optimization. Well, that optimization is dangerous without
a little more context fed into it. So I guess my somewhat sophomoric perspective has been,
if you can't solve these things with AI in a bounded problem space, how can you begin to
tackle them in these open-ended problem spaces? I'm with you, actually. And there's a counterpoint
to this, which is that I think that all the large foundation models
are somewhat undifferentiated.
I mean, they kind of take a pole position
at any given time, but...
Right, two weeks later,
the whole ecosystem is different.
Yeah.
They kind of all roughly
have the same capabilities,
and there are some, like we said,
they're very useful things they can do.
There's some utility there.
You know, there are places
where nondeterminism is useful,
and to the extent that you can apply
that nondeterminism,
then great, you know,
like that's fantastic.
But I'm not in a position
where I think a
Spring Boot 2-to-3 upgrade or a Python 2-to-3 upgrade applied to five billion lines
of code is going to be nondeterministically acceptable, either now or six months from now or a year
from now. And maybe I'll be a fool and wrong, but I don't think so.
Honestly, this whole AI revolution has turned my entire understanding of how computers
work on its head. Like, short of a rand function, you knew what the output of a given
piece of code was going to be given a certain input. Now, it kind of depends. It does.
It really does. Yeah, the problem I run into is, no matter how clever I have been able to be,
or how clever the people I've worked with, who are far smarter than I am, have been able to be,
there's always migration challenges and things breaking in production just because of edge and corner
cases that we simply hadn't considered. The difference now is instead, because there's a culture in
any healthy workplace about not throwing Stephen under the bus,
Well, throwing the robot under the bus is a very different proposition.
I told you AI was crap, says the half of your team that's AI skeptic.
And it's not the AI's fault, says the people who are big into AI business daddy logic.
And the reality is probably these things are complicated.
Neither computer nor man nor beast is going to be able to catch all of these things in advance.
That is why we have jobs.
I've noticed this just even managing our team.
You know, I catch people when they say, you know, but Junie said this. But, you know, don't pass
through to me what your assistant said. You're the responsible party when you tell me something.
So you have a source. You check that source. You verify the integrity of the source. Then you
pass it to me, right? You can outsource the work, but not the responsibility. A number of lawyers
are finding this to be the case when they're not checking what paralegals have done, or at least
when they're blaming the paralegals for it. I'm sure. Exactly. Always has been.
Always has been.
I also do worry that a lot of the skepticism around this,
even my own aspect of it, comes from a conscious or unconscious level of defensiveness,
where I'm worried this thing is going to take my job away.
So the first thing I do just to rationalize it to myself is point out the things I'm good at
and that this thing isn't good at the moment.
Well, that's why I'll always have a job.
Conversely, I don't think computers are going to take jobs away from all of us in the foreseeable future.
The answer is probably a middle ground.
in a similar fashion to the way the Industrial Revolution
sort of did a number
on people who were independent artisans.
So there's a, it's an evolutionary process
and I just worry that I am being too defensive,
even unconsciously.
I think that sometimes too.
I really do feel like this is just a continuum
of productivity improvement
that's been underfoot for a long time
with different technologies.
And I mean, I remember the very first Eclipse release.
The very first Eclipse release is when they were providing, you know, rules-based refactorings inside the IDE.
And I remember being super excited every two or three months when they dropped another and just looking at the release notes and seeing all the new things.
And what did that do?
It made me faster at writing that new code.
And, you know, here we've got another thing that has very different characteristics.
It's almost good at all the things that IDE-based refactorings weren't good at.
But I still guide it.
And, you know, unlike... yeah, I think the Anthropic CEO said, or CTO, that IDEs will be obsolete by the end of the year.
I don't believe this at all. I think we're still driving them.
I am skeptical in the extreme on a lot of that.
I, because again, these, let's be honest here, these people have a thing they need to sell.
And they have billions and billions and billions of other people's money riding on the outcome.
Yeah, that would shape my thinking in a bunch of ways, both subtle and gross, too.
I try and take the more neutral stance on this, but who knows?
I think it's not just neutral, it's a mature stance, and it's one that has a lot of experience behind it.
I think that you're right.
I don't think we're anywhere close to being obsolete.
No, and I also, frankly, I say this coming from an operations background, sysadmin-turned-SRE type, where
I have been through enough cycles
of seeing today's
magical technology become
tomorrow's legacy shit that I have to support
that I have a
natural skepticism built into
almost every aspect of this
just based on history
if nothing else. You know what vibe coding
reminds me of? It reminds me of model-driven
architecture about 25 years
ago. Just produce a
UML diagram and don't worry,
like, the tool
will just generate the rest of the
application. Or it reminds me of behavior-driven development, when we said, oh, we'll just
put it in business people's hands. They write the tests and, you know, we don't want
engineers writing the tests. You want business people.
many times in various forms. Maybe this time's different. I don't think so. And to be honest,
I like to say that, well, computers used to be deterministic, but let's be honest with ourselves.
We long ago crossed the threshold where no individual person can hold the entirety of what even a
simple function is doing in their head.
They are putting their trust in the magic think box.
That's right.
Yes, that's absolutely right.
So I really want to thank you for taking the time to speak with me.
If people want to go and learn more, where's the best place for them to find you?
I think it's easy to find me on LinkedIn these days or, you know, go find me at
Moderne, M-O-D-E-R-N-E, either place.
Always happy for people to send me a DM.
Happy to answer questions.
And we will, of course, put that into the show notes.
Thank you so much for your time. I appreciate it.
Okay. Thank you, Corey.
Jonathan Schneider, CEO at Moderne.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an insulting comment that maybe you can find an AI system to transform into something halfway literate.
Thank you.