Screaming in the Cloud - AI's Impact on the Future of Tech with Rachel Stephens
Episode Date: May 7, 2024

In this episode of Screaming in the Cloud, Corey Quinn is joined by Rachel Stephens, a Senior Analyst at RedMonk, for an engaging conversation about the profound impact of AI on software development. Rachel provides her expert insights on programming language trends and the shifts in the tech landscape driven by AI. They look into how AI has reshaped coding practices by automating mundane tasks and offering real-time assistance, altering how developers work. Furthermore, Corey and Rachel examine the economic and practical challenges of incorporating AI into business operations, aiming to strip away the hype and highlight AI technology's capabilities and constraints.

Show Highlights:
(00:00) - Introducing Rachel Stephens, Senior Analyst at RedMonk
(00:28) - The Humorous Nemesis Backstory
(03:42) - AI, focusing on its broad impact and current trends in technology
(04:54) - Corey discusses practical applications of AI in his work
(06:00) - Rachel discusses how AI tools have revolutionized her workflow
(08:12) - RedMonk's approach to tracking language rankings
(10:29) - Public vs. Internal Use of Programming Languages
(13:09) - Rachel and Corey discuss how AI coding assistants are improving coding consistency and efficiency
(15:55) - Corey challenges the purpose of language rankings
(20:51) - AI tools affecting traditional data sources like Stack Overflow
(26:28) - The challenges of measuring productivity in the AI era
(29:21) - The macroeconomic impacts on tech employment and the role of AI in workforce management
(36:33) - Rachel and Corey share their personal uses and preferences for AI tools
(39:25) - Closing Remarks and where to reach Rachel

About Rachel: Rachel Stephens is a Senior Analyst with RedMonk, a developer-focused industry analyst firm. She focuses on helping clients understand and contextualize technology adoption trends, particularly from the lens of the practitioner. Her research covers a broad range of developer and infrastructure products.

Links Referenced:
RedMonk: https://redmonk.com/
Rachel Stephens LinkedIn: https://www.linkedin.com/in/rachelstephens/
Sponsor Prowler: https://prowler.com
Transcript
Are we comparing it to our ideal selves, to what people were kind of doing before?
What we're doing now?
We're flawed in every way, I guess, is where I'm at.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
My returning guest today is Rachel Stephens, senior analyst at RedMonk, and more importantly,
my personal nemesis because she's horrible.
Rachel, thank you for joining me.
Hello, Nemesis.
How are you?
I was so good, and then I'm talking to you.
As RSA Week unfolds on Last Week in AWS, let's redefine cloud security with Prowler Open
Source.
In a landscape cluttered with rigid and opaque tools, Prowler shines with its community-driven, customizable platform.
Trusted by top security leaders,
Prowler empowers you with transparent, adaptable security checks
that fit your unique cloud environment.
Don't just secure your cloud.
Own your security strategy with a tool that evolves with you.
Embrace a future where cloud security is open,
transparent, and under your control. Don't think too hard about something called Prowler
talking about being an easily recognized shining beacon of something. Seems not what a Prowler does.
That's not important. Join the open cloud security movement at prowler.com and secure your cloud your way.
Let me give a little context here because otherwise people think I'm just being hostile
for absolutely no reason.
And I absolutely have the greatest of reasons.
A couple of years ago, we were at the excellent Monktoberfest in Portland, Maine.
One of the only things that will get me to go back to the state that I grew up in.
And getting out of Maine, which is the best part of Maine,
you and your husband, I believe,
booked the last set of first class seats on the flight out of Portland.
And note, "flight," singular; there's only one.
It's Maine.
Not a lot of people go there on purpose.
So I had to sit on that flight in the back of the plane
like some kind of agrarian farmer.
And I have never gotten over it, and so you became my nemesis. And what's more galling is you're good at
it. Well, what happened was that you declared me your nemesis, and I don't really give off nemesis
vibes to most people. And I was so excited that I felt like I did not really have good evil villain
vibes when I accepted my nemesishood.
And so then I felt like I really had to lean into it.
And so then I carried on with my nemesis efforts, which included lots and lots of shipping.
You understood the assignment.
I'll grant that.
You sent flowers to my wife on Valentine's Day.
You sent me a holiday card that says,
sleep in heavenly peace on the outside of it. And inside, happy holidays, nemesis. May your
remaining days be merry and bright. It's great. Suddenly I look like the complete jerk in this
story. And I'm not. I'm the aggrieved passenger prince who had to sit in the back like a person. My God. Sorry.
I apologize.
You gifted me a bunch of Lego body parts, obviously, in a lovely container.
And it is apparently an homage to a movie that I'm not brave enough to watch.
Seven, I think it was called.
That was great.
You've done all these things here and there.
And it is. Yeah. Like, not only are you my nemesis, self-declared here, but you're beating me at it.
And I don't know what to do about that. I've got a couple of ideas for this,
and my editorial committee keeps shooting them down, like, no, no, arson is not cute and funny.
That's dangerous. Okay, great. We're going to have to solve for this problem.
But yes. So let me just begin by saying how much I despise you. Now, let's talk about work things. Yes. Well, I am delighted to be here and I'm
delighted to be your nemesis. So it's all good. Exactly. It feels like we should at least talk
a little bit about AI because you are actual analysts and I am not an analyst. I just basically
go wherever people pay me, but I don't view myself as being particularly
analytical. I mean, I guess you could call hot takes a rapid form of analysis. Generally not.
You folks actually do the work, which is why when people reach out and say, hey,
we'd like some analyst work done, have you talked to RedMonk? They actually enjoy it.
And that's usually the way that things wind up playing out. But these days, all anyone can seem to
talk about at industry events is AI, start to finish. Which, sure, I'm glad they solved the
whole, you know, pesky cloud infrastructure thing so none of us need to talk or learn about any of
those things. No, it's all AI.
That's true. Also, I don't think you're giving yourself a totally fair
shake, because you read faster than anyone. Like, you absorb information better than anyone I've ever met in my entire life. And so I think you do the work too; you just do the
reading work in particular at, like, light speed. So, you know, you're in there. But yes, AI. AI really
has taken all of the technology messaging by storm. It is all we collectively talk about sometimes, it feels like.
It is.
And it is a neat technology and there is clearly value there.
I don't want to come across as being overtly cynical.
I use it myself in a variety of ways as part of what I'm doing.
Usually it's ad hoc questions, or generate this particular image because I need
it for a slide or something like that.
Or, okay, here's a blog post I wrote.
Give me 10 title options and I'll tweak number six because that's the right one.
And things like that where a human reviews it before it sees the light of day, awesome.
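(As an illustration of the title-brainstorming workflow Corey describes, here is a minimal, hypothetical sketch using the openai Python client. The model name, file name, and prompt are stand-in assumptions, not his actual tooling.)

    from openai import OpenAI

    # Assumes OPENAI_API_KEY is set in the environment.
    client = OpenAI()

    # Hypothetical draft on disk; swap in your own path.
    with open("draft.md") as f:
        post = f.read()

    # Ask for ten candidate titles; a human still picks (and tweaks) the winner.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Give me 10 title options for this blog post:\n\n" + post,
        }],
    )
    print(response.choices[0].message.content)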
It's when people start slapping these things on the front of their website as a chat bot,
the least efficient form of getting information to customers.
And then it just tells lies that, if any human told them,
they'd be fired on the spot.
And everyone's talking about this as a revelation.
It's like, it's not just inaccurate and annoying.
It's also horrifyingly expensive.
And I get it.
It feels like we are dramatically chasing hype at this point.
And that obscures the very real value that's there.
Yeah, I think it's very much a both can be true.
We are 100% in a hype phase for this technology.
And I also think this technology has merit
that will last beyond the hype.
And so it's trying to figure out
kind of how this all goes together.
So I used to work as a DBA in a past life forever ago.
And in theory, I know how to write SQL queries.
I don't write SQL queries anymore. I just go to
ChatGPT and I have it figure out, what is it that I need to do here? And I don't have to
figure out how to do inner joins. I don't have to do some of that tedious work
that I can do, but I'm rusty at it. I haven't done it professionally in a long time,
and so it's really nice to have a tool there that can help me figure out what I'm trying
to do and kind of offload some of that skill that's buried deep down somewhere that I don't
necessarily have to dig back up.
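(For readers as rusty as Rachel describes, a minimal sketch of the kind of inner join in question, runnable with Python's built-in sqlite3 module; the tables and data are hypothetical.)

    import sqlite3

    # Throwaway in-memory database with hypothetical tables.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
        INSERT INTO orders VALUES (10, 1, 42.0), (11, 1, 7.5), (12, 2, 19.0);
    """)

    # The inner join itself: match each order to its customer, then aggregate.
    rows = conn.execute("""
        SELECT c.name, COUNT(o.id) AS order_count, SUM(o.total) AS total_spend
        FROM customers AS c
        INNER JOIN orders AS o ON o.customer_id = c.id
        GROUP BY c.id;
    """).fetchall()
    print(rows)  # e.g. [('Ada', 2, 49.5), ('Grace', 1, 19.0)]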
And so it's great for the things you're talking about, like brainstorming, trying to figure
out how to do reviews, trying to summarize or synthesize things. Those are all great.
And it's also great for some of that boilerplate
stuff. So it's got really clear applications. But I feel like, at the same time, some of the things
that we are seeing come out of, not necessarily even the products, but the way that we're talking about
the products, it very clearly does not feel like we have nailed how we're discussing the technology.
One thing that I'm curious about: when I say that you are a real analyst and I'm not,
one way I mean that,
I'm not disputing your assessment,
which is very kind and thank you,
that I absorb information quickly,
but it's the output side of it.
Very often I'll be asked by people to do benchmarks.
I refuse to do that because generally speaking,
when you put out benchmarks,
the company that comes out in the front
is very happy about them. And everyone who didn't argues with you about your methodology until the end of
time. And I don't need the headache of dealing with that sort of thing. You folks do something that
is benchmark adjacent, I think we'll call it. And that is the language rankings that you put out.
I'm sorry, is it quarterly or every six months?
It's every six months.
Okay. Time, it's sort of a flat construct on some level. And it, effectively... I forget its exact methodology, which
I'm not...
Yeah, let me explain your own work to you.
No, I'm trying to explain it for folks who may not
have been exposed to it, but I shouldn't be doing this. Please, what are the language rankings? I can
make a pig's breakfast of it; you can be accurate.
Fair enough. So you're right.
RedMonk doesn't really play the benchmark game either, in terms of assessing the technical capabilities of specific products or companies. That's not where we play.
But one of the things that we have found is we take a lot of qualitative information into our
conversations from customers, from non-customers, from talking
to people who are practitioners, from leaders. We get a lot of things where we can kind of start
to triangulate what are people in the industry talking about, what are trends that we see.
And that's great. And that's a lot of what we do. But when and where we can, we'd like to see,
can we actually back up any of these qualitative trends with something quantitative? And for a long time, since we started this
process in 2012, that has meant looking at public usage of programming languages,
both in terms of how they're being used in public places like GitHub, and then how are people
talking about those languages in terms of asking questions, answering questions, things like that on Stack Overflow?
And for a long time, those were two really big publicly available data sets that were well trafficked by programmers.
And so we could kind of see like, what can we triangulate from these data sources in terms of what languages are being used?
Is it a perfect metric? Absolutely not.
Does it capture all of the languages that are being used in enterprises that don't use GitHub? Like, no way. Are there conversations that are happening outside of Stack Overflow? Absolutely. So that shapes what we use the language rankings for: we 100% do not say this is a definitive set of rankings
of the best programming languages to use.
One, because that doesn't exist.
Two, the data is very flawed;
it's skewed by the nature of the sources.
And three, the best programming language
is going to depend on a whole lot of different factors
that are going to be internal to your organization.
So don't think of it as a one, two, three, four, five
ranking of the best languages, but more just like some data points that
we've tracked over more than a decade to try to trend, like how are things moving in the industry
in terms of what languages people are using? I do wonder increasingly how, like the obvious
question I have on this, and I don't know if there is an answer to it, but if you look at the
languages people use publicly for things, and you look at the languages people use internally at corporate jobs,
I see some misalignment. I don't see too many things written, as I'm working on GitHub and
various projects, written in Java, for example. A bit of .NET, but not a lot. However, in enterprises,
those are the bread and butter of everything that winds up getting written. So there's a question
of selection bias there. Surprisingly,
unless you're some very large analyst firm, enterprises don't generally like to bring you in to run analytics on their internal code base, for some unknown reason. Can't imagine why that might be.
No, so you're absolutely right. There's selection bias in the fact that it's just based off of
public GitHub data. So Java 100% is going to be underrepresented. Like, COBOL runs all of the
financial institutions
in our entire world.
It's not really reflected in the language rankings,
because that's not something
that you're going to see.
No, and there's also so much
that was written so long ago
that it's not exactly under active development either.
So at some level,
is this accurate as a perfect representation
or a model of the ecosystem?
Almost certainly not.
As they say, all models are wrong, but some are useful.
I think the value of it in many ways comes not from the raw data it spits out,
but from watching the delta from reporting period to reporting period.
And given that we're now, what, three language rankings in
since ChatGipity burst onto the scene, have you seen changes?
We have seen changes.
So like I said, the data comes from two primary sources.
We look at Stack Overflow and GitHub.
And I would venture to guess that most of your audience
is going to understand that Stack Overflow
has seen some very significant changes in that time.
But there's also been some interesting changes
from GitHub as well.
Computers are better at copying and pasting
from randos than I am. Who knew? I think about this, though, because I program just
enough to get through things like a script, but not super well. And it's not a core part of my job.
And the number of times I've run into questions where I'm like, what am I doing wrong? What
am I missing here? How have I done this wrong? That comes up all the time. In
the prior-to-ChatGPT era, I felt like my question had to be researched thoroughly
enough that I knew what I was asking and had made sure I hadn't missed an alternative version of
the question. You don't want to show up on Stack Overflow and get yelled at for being new, or
have your question marked as a duplicate.
People yell. So making sure that you've done the legwork, making sure that your question
is not a dumb question,
for me, that would take, like, half a day.
Whereas now, if I have a question, I can just go to ChatGPT.
Oh, it's terrible.
And I do want
to call out and self-correct myself as well, because I just pulled up the latest rankings, and
Java's number three on the axes of both Stack Overflow and GitHub. So clearly there is a strong
Java GitHub community out there. It's just not something I encounter. There's strong Java,
but it would be even stronger if you looked internally. Yeah. Oh, yeah. Which would turn
a bit of that on its head on some level.
So I want to correct myself so people aren't sitting there saying,
ha ha, Corey doesn't know how computers work.
I don't, but I at least try not to blatantly spread misinformation.
Well, I don't think you're wrong.
It's a strong performer in our language rankings.
And also, it is absolutely more prevalent in the enterprise
than what we see in public
data.
Like you weren't wrong about that.
I also want to call out as well.
I've worked in jobs where this was not fully understood by my supervisory chain, where
in many cases, people start looking at activity and rankings and the rest, and they start
trying to assign metrics to things like number of pull requests or lines of code. That is not a great way to measure most things that you would naively assume it was. Otherwise,
you wind up doing things like, well, why would I submit this as one change when I could make it 12
and boost my metrics? People will optimize for what they're measured on. So number of pull
requests, numbers of lines of code. One of the programming weeks that I'm proudest of
had a net result of adding three lines of code,
because it was a really hairy regular expression SNMP thing
that took me a week of solid work and research to get right.
And at the end of it, I was happy, the client was happy.
Well, I was happy.
This was 10 years ago when I was writing a regular expression
to work with SNMP.
How happy could I possibly have been?
But it was, everyone was satisfied
with that being my output for that week.
And so that again is one of the limiting factors
of how we look at this
is because the way that we look at activity on GitHub
is we look at non-forkable requests by language,
the primary language of a repo
and then aggregate them.
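(A rough illustration of the aggregation Rachel describes, not RedMonk's actual pipeline; the records below are hypothetical stand-ins for per-pull-request rows pulled from a GitHub Archive-style dataset.)

    from collections import Counter

    # Hypothetical per-PR records: each carries the primary language of the
    # repo it targets and whether that repo is a fork.
    pull_requests = [
        {"repo": "a/app",  "primary_language": "Python", "fork": False},
        {"repo": "b/site", "primary_language": "CSS",    "fork": False},
        {"repo": "c/copy", "primary_language": "Python", "fork": True},
    ]

    # Drop forks, then count pull requests by the repo's primary language.
    counts = Counter(
        pr["primary_language"] for pr in pull_requests if not pr["fork"]
    )
    for language, n in counts.most_common():
        print(language, n)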
And so, yes, absolutely. I think our general hope in this is that at that fine, minute level,
where you're evaluating a person or a team, lines of code and pull requests
are going to be really terrible metrics by which to judge anyone's productivity.
We're trying to look at the industry-wide trend.
Like at some point,
you kind of just have to go with the metrics
that are available.
And also you're hoping that like,
I don't think anyone's trying to game
the RedMonk metrics.
We are just trying to assess what has happened.
So it's one of those ones where yes,
is it imperfect?
Like, absolutely, it's imperfect.
But it's one of the ones where we can try to look at it and
see trends over time. I do want to ask, then: what is the purpose of the language rankings?
Because I have to say, maybe this is exactly as intended. Maybe it's a refutation of some of
your premises. But when I'm trying, when I'm building something new, I want to move in a bit
of a hurry. What language do I write this in? Never once have I decided to look at a language ranking, yours or anyone else's,
to make that determination for me. There are definitely things, looking at it on the other
side, there are definitely things I've seen in your language rankings that have reflected trends
that I have gone through. For example, I used to do an awful lot of work in Perl. This may surprise people,
but Perl, last I checked,
was not at the top of the language rankings,
though it is in the upper quartile.
Yes, so I think it dropped off the top 20,
but yes, Perl has been in some decline.
So part of it is those trends that we can watch and see.
Part of it is, well,
we found with our developer-based audience,
you either play into
their confirmation bias or you play into their outrage that their chosen language has not done
well. Some of it is just that people like to engage with it, because people like to see
how their chosen language is performing. But I think a lot of it, in terms of the purpose,
is that we'll have clients who want to do something like: I need to expand my
ecosystem support here; of the languages I'm currently evaluating,
can you help me figure out where it is that I should be investing my resources going forward,
and why? So something like this is a data point that we can feed into that. So, yeah, it
doesn't necessarily have, like, a hard purpose. It's,
again, data points and trends.
Everyone can take issue with these
things. If it were a little bit more
objective, I would say, then
clearly the number one
language, both by number of
pull requests and weird questions involving
it and things people are doing, would be YAML.
People might say that YAML is
a configuration language,
that it is not for programming.
Oh, I miss the days of being that safe and insulated from the horrors of reality on that one,
with Kubernetes taking over the world.
And CSS is listed, which, yes, CSS is a programming language.
I know there are purists who love to argue the
point, but gatekeeping what a computer language is or isn't, isn't really my bag.
Yes, so CSS comes up every single time we publish. There's a bunch of configuration languages and, like, SQL
variants that GitHub has started adding into their things, which we have pulled out, primarily just because, for over-time trends,
it's odd to have something that wasn't included, like,
four years ago, and then all of a sudden HTML showed up, and
no, no, no, we're not going to add that one.
So it's one of those ones where people take issue with just about every
aspect of this project.
Oh, absolutely. It's the perfect navel-gazing thing. You can see, no matter who you are,
you can find a problem you have with something on this. Dear God, you list SaltStack as a language.
I wrote part of that. You also do some great things. You've included, well, zsh, KornShell,
csh, Bash. You've just lumped them all
together as shell, which is the only way to do this and stay sane. Because, yeah, crappy Bash was my
primary programming language; now it's been replaced by brute force mixed with enthusiasm, and that
works out.
So GitHub has the Linguist project, and they tag all of their
things. So wherever we can, we let the language data groupings come from GitHub itself rather than us
trying to make those editorial decisions, though in some places we've had to.
For the most part,
we try to just take the data as it's reported and show
what it says.
Every once in a while we have to intervene,
but for the most part,
the things that are in there are the things that come out of GitHub Archive.
I love things like this because it is right in the sweet spot of things I find
interesting to kibitz about.
But also, I don't know if it's a combination of skill set
or personality attributes,
but I am never going to sit down
and do the data crunching
to come up with something like this.
There are people who are very good at this.
I'm talking to one right now.
For whatever reason, with me,
I don't work that way.
It's one of those things where every time I try,
I sit and I get stuck
and I get frustrated
and I spin my wheels, and it's sad.
So it turns out
that you can hire people
who are good at these things.
Or even better, when you folks
do things like this,
I don't have to,
because it exists in the world now
and we can talk about it.
So I'm glad it exists, and I will never in a million years do something like this.
I'm happy to
provide this for you. But yeah, I do want to tie this back
into the AI discussion that we're having.
Yeah, because that's the lead. This is all
buildup to that. What has changed because of AI?
Yes. So obviously, we've seen a huge fall-off in Stack Overflow participation.
We touched on that one.
So Stack Overflow as a data source,
its long-term viability for this as a measurement,
I'm not sure how we're going to handle that.
But as people take their questions out of public forums
and into coding assistants,
as a user, it's a much better
experience for me. It's like, you don't get the judgment from Stack Overflow. You don't have all
of these issues. You get an easy answer. You get answers that are formatted to what you need. Like
It's a better user experience. But from a public data perspective, it's kind of a sad
loss. It's a worthwhile trade-off, though, if everybody is having a better experience. But I'm not sure how we're going to change or replicate what we're doing to account for the migration of
where people are actually interacting. When it's time to secure their cloud environments, AWS itself
recommends Prowler Open Source. Prowler gives you the tools to oversee and secure your AWS
environment openly. Why hide behind closed doors
when you can empower your team
with a security tool that's open for all
to see and improve?
Embrace a transparent approach to AWS security
with Prowler Open Source at prowler.com.
I've noticed that the way that I write code has shifted
since I started using some of these things.
I can tab complete through a bunch of boilerplate,
which is awesome.
Sometimes it'll suggest something that clearly won't work
and I'll run it for a laugh.
Holy crap, it worked.
What did I not understand about the nuances of this?
But I have noticed that whether it's ChatGipity,
whether it is GitHub Copilot,
which is my coding assistant of choice
for something embedded in the editor,
and that includes VI. I live in Vim most of the time. And yes, it does tab completion there,
which is gnarly. The problem I have with it, though, is that at different times when it does
these things different ways, it comes back with nothing that could even remotely be considered a
consistent coding style. So everything I've done is very, how do I solve this one discrete task here, spaghettied together with everything else.
And to be clear, that is not a unique to AI problem.
You have people writing code like that all the time when they start doing the copy and
paste from stack overflow thing, when people find different ways to do it, or it's been
a long time since they worked on a given section of the code base.
People drift as far as preferences and how they wind up writing things.
But this does seem to exacerbate that in some key ways.
For sure, for sure.
I think for me, it exacerbates it in that it's not at all consistent with what
I have in there. But the copy-paste
from the Stack Overflow website
wasn't even close to consistent either.
So are we comparing it to our ideal selves, to what people were kind of doing before,
what we're doing now?
We're flawed in every way, I guess, is where I'm at.
I will say that when I punch an error message
into one of these things and ask it for an answer,
it's better than the Stack Overflow experience
by a landslide.
Because yes, while on Stack Overflow,
it will often give me context and competing ways
to solve it that people have responded to, I've lost count of the number of times where it has been the first result for an error message or something I'm trying to do, and I click on it, and it has been closed as off-topic.
Okay, great.
So you've decided that you don't think this adds value, yet you're not going to delist it from the search engine ranking, so it's sitting at the beginning of the first search results for the error message. The thread has been closed
and not allowed to be updated since 2018.
And there is new information here
that I might hypothetically want to contribute back
as I have solved this thing and as people find it,
but now I can't because it's locked.
I've never yet had ChatGipity do an outright refusal
on the grounds of that's off topic.
It has refused at one point to tell me how to kill processes.
It's like, okay, you need to understand that kill is a term that has multiple meanings.
And in this case, it is not a particularly problematic one
until I'm doing it in production for funsies, which is a separate problem entirely.
But I don't need judgment from a robot on that.
My performance review is enough.
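(For the record, the multiple meanings here are mundane; a Unix-flavored sketch of the entirely unproblematic kind of process-killing in question, using a throwaway child process.)

    import os
    import signal
    import subprocess
    import time

    # A throwaway child process that we own; nothing sinister about "killing" it.
    child = subprocess.Popen(["sleep", "300"])

    # Ask politely first: SIGTERM lets the process clean up and exit.
    os.kill(child.pid, signal.SIGTERM)
    time.sleep(1)

    # If it were still running, SIGKILL would end it unconditionally.
    if child.poll() is None:
        os.kill(child.pid, signal.SIGKILL)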
The worst is when you've Googled a question and had a Stack Overflow result come up.
And yes, that's exactly what I need.
And then it's your own question from, like, multiple years ago.
That's the most disheartening.
Oh, I personally love when someone's asking how to do a thing and the responses are all,
well, why do you want to do it that way?
This other entirely different approach is better.
Okay, great.
I appreciate that if someone is coming at this from the naive question perspective in a vacuum, yes.
But very often, I don't want to have to send you my entire code base.
If I'm saying, assume that you have this input and need this output, well, you should restructure your inputs differently.
Terrific. Thank you for that completely unhelpful answer.
Give me some credence here and just get there.
I understand this is not ideal, but what is?
Yeah.
So it's all tricky, but I love the way that the code editor
has incorporated in the coding assistant experience
to help kind of answer these things in line.
And I think it's gotten,
at least my experience has gotten a lot better
and my productivity has gotten a lot better.
But that takes me to the other side of the data, which is the GitHub side. Because I think one
of the things that is the common understanding is: we are going to incorporate
AI-based coding assistants, and our developers are going to be able to do everything so much better,
and nobody is going to have to write boilerplate code anymore, and we're all going to be super productive, amazing humans. Which is great. I love this. And I have found myself
that I can move faster; I can do different things. My core job is not coding, though, so
I absolutely do not measure my success in this role by code output. Can't imagine why.
But there are people whose job it is. It's like, we need to be moving with velocity.
Like how many times have we heard
in this digital transformation era?
It's like people are trying to move faster.
We need velocity.
So we have this industry that is obsessing
in and around velocity of code deployment,
code development.
We have these tools that are saying
we're going to help all of our developers move faster.
But if you look at our core data out of the GitHub side, just in terms of total overall pull requests
that were made into public GitHub non-forked repos, we saw the number actually go
down from 2022 to 2023. And I don't understand how to square that circle. I mean, there's a ton of
potential confounding variables there that we could potentially go into. But it's also one of those things: when we're in an era where we're saying that everyone's
going to be more productive, wouldn't you expect to see that at least directionally trend up?
You'd think so. I found that when people are talking about their internal experiments with
coding assistants at enterprises, something that comes up repeatedly has been: we found that we
absolutely cannot judge their efficiency on a
daily basis because something that I was sort of surprised to learn is that there are an awful lot
of engineers that very frequently will spend a day writing zero lines of code. And it turns out
that in some cases, management is very perplexed by this. Like, are these people lazy? No, it turns
out that it's not just about output. It's
about understanding things and
figuring out and gathering requirements and having
conversations with people before you
start writing code that isn't
going to solve the actual problem.
Like, that's a sign of maturity in many cases.
But you can start looking at it on a weekly basis.
Like, if you haven't written any code in a day, great,
fine. If you haven't written in a week, you start to wonder. It's been
a month. Okay, what exactly is your job again?
Because I'm clearly not understanding something.
Yes.
And so I think for me, again, I do not want to get us back into a place where we are judging
people's performance by lines of code or by total pull requests.
Those metrics individually are not good metrics to use.
But if you're looking year over year, industry-wide,
in the face of having introduced a new technology that's supposed to be making us more productive,
it just feels, trend-wise, looking at all of these variables together, like we
should be moving into a place where we're seeing increased output rather than decreased output.
It's just where I'm coming at it from.
I get that there's been layoffs.
The macroeconomic climate has changed.
We have maybe peaks from the pandemic
that have made year over year stuff weird.
I know GitHub changed how they do two-factor auth.
Like there's a whole bunch of different other things
that could be impacting it.
But I think, and this maybe plays into the AI hype cycle,
it's like the way that we've talked about AI
and its ability to change how we're working, it's surprising to see that that's not actually
reflected in the numbers, at least this specific really confined set of numbers.
I have trouble identifying large swaths of roles that can be removed by AI. I
can see a bunch that can be assisted by it.
But the closest I've got is that I use a lot less stock photography
when generative art works for slide deck purposes.
And that is, on the one hand, I do feel bad for the artists and photographers
whose work these things displace.
On the other, it saves me so much time because, yeah,
there's a bunch of stock photography of data center hallways.
And there are a bunch of stock photographs out there that I could purchase of giraffes, which, by the way, are not real.
But that's OK. I don't expect people to understand that.
But there's nothing that has a giraffe in a data center aisle looking at a server.
And I'm not good enough with Photoshop to do that in less than an hour.
So it turns out that Chad Jippity can spit that out very quickly, and boom, now the slide is at least that much funnier as a result. And that is an awesome
change. But I don't see that anyone I hire for things is going to be replaced
by AI anytime soon. Even for small individual projects that I find developers for on Upwork, for example,
like, well, careful, computers can take their job away from you. You don't think I asked some of
these systems to build the thing for me and had them fail utterly before I bothered
to write out a job spec? Of course I did. And it didn't work very well. So yeah, there's still
roles for people out there. I can't see a scenario where we're just going to fire a third of Team X and replace them with AI. I just don't see that the technology is there, ignoring the human impact entirely.
I mean, I guess we've seen a ton of scenarios where we're just firing a third of Team X and not replacing them with anything, such is the nature of late-stage capitalism and layoffs. Oh, yeah. And part of it, too, is piling the work on other people.
There was a comment in an earnings call, I think yesterday, as of this recording,
where the CEO of, I want to say Spotify, but it might have been Shopify.
I can't keep them straight.
They sound too alike.
Whatever it was, said that they let go of 1,500 people
and that impacted their operations more than they would have expected.
It's like, well, okay, what did you expect exactly when you did that?
Did you think you had over 1,000 people sitting there twiddling their thumbs,
doing nothing?
People do things unless you're the most incompetent corp in the world.
And sure, maybe not all of them are basically pushing it to the max every week,
and maybe some of them do spend a significant amount of time
not focusing on their core job.
But yeah, you fire off a significant portion of your workforce,
you're going to be impacted by that.
The fact that you're surprised by that
says a lot more about you than anyone else.
So now we just put it on people,
people with hero complexes, who keep working on these things.
Yes, that's true.
And I think for me,
when I'm thinking about the reason for the AI hype,
I think part of it is that transition
from where we were a couple of years back
where we were at like great resignation
and there was skyrocketing salaries in tech.
We had a lot of competition for tech labor
and people were able to make their own calls.
And now we're getting to this point
where we're seeing mandates for return to office.
We're seeing these layoffs.
We are seeing people having kind of just increased desire to have more control over their workforces.
And I think part of that is macroeconomic climate that we're seeing.
And I think we're also using AI as a scare tactic for people.
It's like your jobs are less secure than you think they were.
But I think part of the AI hype
is very much a labor
versus capital pendulum force.
And people are using AI in that fight.
It's wild and also fun
to see these things unfold,
to really, I guess,
get a deep sense
of what the zeitgeist is around these.
Because you can't get it.
You cannot get it from keynotes anymore,
from the corporate environment.
I've seen it with all three of the majors now,
where they are filling their entire keynote,
start to finish, with nothing but AI.
And that's great,
but customers care about other things beyond it.
I'm sure there are reasons that I don't fully grasp. Yes, part of it's that
they're talking to investors, not just customers. But I feel like I'm the one missing something
because believe it or not, they don't generally staff these companies with fools. So there's
clearly something that I miss. What annoys me is that I've been looking for it for six months and
can't find it. I think part of these hype cycles is there's definite FOMO of like, I don't want to be seen as being left out of this trend.
That is like, and even though it is a hype trend, like I do think that this is a step change in terms of how we are operating as an industry and as a technology that's going to matter.
And so you don't want to come across as the people who don't have the capability to do that. So you have to have some
of that forward-looking projection of like, yes, I am ready to help you into this new era and I can
shepherd you in. And all of my customers who are already customers are going to be ready to join
me on this whole transformation journey. I get where it's coming from, but I do think that
most enterprises are still at a place where they need to do the work on their data and data structures rather than on the AI.
I think we're very much in a place where a lot of the enterprises who are wanting to take advantage of this are not necessarily in a place where their data is going to let them do that easily. So if you were making these keynotes, though,
about like, hey, we've been telling you
for several decades that you should probably
have a data organization and governance strategy,
that's not quite as exciting for the CEO
to get on stage and talk about
as it is to talk about all these wonderful
LLM-based things that they're doing.
It's, yeah.
On some level, I'm starting to worry it is hype
in that this stuff is extortionately
expensive to run. I think that that is underappreciated at large. And two, okay, great.
Everyone's talking about the upcoming value that's getting unlocked. What is that value?
Coding assistants. Terrific. Great. I pay, or would if I weren't an open source maintainer,
for GitHub Copilot in a blink, because I will never miss the money,
and it has saved me at times
when I least expect it.
Not intending to use Copilot,
it will suddenly auto-suggest
the completion of a sentence
that I'm working on
in a note or document or something.
It's, okay, this is really neat.
That adds value in ways
I did not expect it to.
And that is worth having, no argument.
But do I need seven of them from different companies? Do I want to? I'm not going to think to ask the robot how to do
every aspect of my day-to-day life. And every example that I see these things give involves
people buying things or whatnot. The Google Cloud Next keynote talked about someone on a website
wanting to complete a transaction. It was so contrived. It's that if I'm on your website, I'm trying to buy something.
The solution is then, like, reach out through a chatbot?
No, I'm going to go to your competitor who can make a functional checkout experience.
Although in fairness, I 100% used ChatGPT to help me buy my mother-in-law's Christmas present.
So demographics, her hobbies, what am I
getting? Yeah, the problem
I run into with it is that in many cases,
it sometimes likes
to self-censor before it gets to the really
unhinged stuff. Not that I'm actually going to follow
through on the unhinged aspect, but it gets
the creative juices flowing on my end.
Because that's what a comedy writer's room
is. It's people playing off
one another and yes-anding. And even though someone will say something
they know will never in a million years
see the light of day
and would get people canceled if it did,
it's, okay, that's bad,
but it gives me an idea
for how to take it somewhere in a new direction.
And that's the value.
And you can't do that when you have different robots
arguing about how to best frame a refusal.
I'm really loving it for brainstorming.
I think brainstorming is my favorite use case.
And every one of these things is ad hoc.
I don't need to do these things in large scale.
I have a couple of things
that I wind up programmatically interacting with
as part of a pipeline system.
And that's awesome.
But nothing on that ever sees the light of day
or other humans without my review first
because I'm not a lunatic.
Everything else is these weird one-off ad hocs.
And yeah, okay, great.
I can either pay for the ChatGipity Plus
or use one of the increasing number of websites
that winds up implementing almost the exact same experience
and just hooks the API on the backend
and in turn winds up saving money, which is kind of wild.
But there are different ways to get there.
And I'm not here to optimize over $15.
I'm sorry.
The only time I do that is my AWS bill.
One of the things that's interesting is, I think a lot of the companies that are coming up as trying to be competitors to OpenAI are talking about model choice and kind of trying to do this openness play.
And what openness means in this ecosystem is something we should not, at this stage in the podcast, get into.
I think I opened a can of worms.
But I do think that it's an interesting thing to assume that people are going to care significantly about which foundation model they're using, or even just which general model they're using.
Because I feel like there's a lot of people who are going to get to a place of good-enoughness in this:
I will use the tool I know,
and I will use the model that is working.
And I think that a lot of the industry
is assuming there will be significant amounts
of time spent tweaking and optimizing.
And that's probably very true if you're a data science team,
but I'm assuming that a lot of the general users
are not going to optimize for those things. They're not going to optimize for small dollar changes. They're not going to
optimize for small performance or output changes. And they're just going to use what they know.
And I think that's going to be interesting to watch. I'm curious to see how it goes.
It's one of those areas where it almost feels perilous to talk about this stuff on a podcast
because there is a little bit of production delay between us speaking these words and their seeing the light of day.
Oh, yeah.
We might be all wrong by the time this comes out.
Right.
Exactly.
It's a, well, why didn't you comment on this thing that happened this morning in today's episode?
Gee, professor, I wonder. If that happens, we'll just have to have another conversation,
I guess.
Exactly.
And we should.
I want to thank you for having this one with me.
If people want to learn more, where's the best place to find you?
Redmonk.com.
In this era of fragmented social media, that's probably the easiest place.
Come find me there and then you can branch out.
I hear you.
Thank you so much for your time.
It's appreciated.
Wonderful.
Have a good day.
Rachel Stephens, senior analyst at RedMonk and my personal nemesis.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, leave a five-star review
on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star
review on your podcast platform of choice, along with an angry, insulting comment that's written
in a disgusting enough language that even the language rankings have never heard of it.