Screaming in the Cloud - Complex Tech, Public Learning, & Impostor Syndrome with Kyler Middleton
Episode Date: June 25, 2024

Kyler Middleton, a Senior Principal Engineer at Veradigm and co-host of the Day Two Cloud podcast, joins Corey in this Screaming in the Cloud episode to talk about how tech careers are changing and the big impact of AI on starting in tech. Kyler, who once wanted to be a librarian, tells her story of becoming a tech pro. She highlights the importance of learning and sharing what you know, especially in tech. Corey and Kyler also get into how AI is changing the game for new techies and what that means if you're starting. Kyler's take on using and teaching tech offers some really helpful tips for anyone looking to get into or move up in the tech world.

Show Highlights:
(00:00) - Introduction
(01:49) - Kyler describes her multiple roles
(03:21) - Discussion on the realities of 'Day Two' operations in cloud environments
(07:38) - Insights into technical debt and the concept of 'Day Two' in DevOps
(13:54) - The importance of sharing knowledge and learning in public to benefit others in the tech community
(20:07) - The use and limitations of AI in professional settings
(26:41) - Debate on the overreliance on AI technology in decision-making processes and its potential consequences
(32:05) - Closing remarks & where listeners can connect with Kyler

About Kyler:
Kyler grew up in rural Western Nebraska, fixing neighboring farmers' computers in exchange for brownies and Rice Krispies. Then she was going to be a librarian to help people find the information they need. Then she discovered computers were a real job, and more than just a fix for her munchies, and she's now been a systems, network, call center, and security engineer, and is now a DevOps lead and software engineer. She speaks at any conference that will have her, hosts the Day Two Cloud podcast from Packet Pushers, and writes up cool projects with approachable language and pictures as part of her Medium series, Let's Do DevOps, with the intention to upskill anyone of any skill level. "I have an insatiable curiosity and desire to help the folks around me succeed and grow. So - Let's Do DevOps."

Links Referenced:
Day Two Cloud Podcast: https://packetpushers.net/podcast/day-two-cloud/
Kyler on LinkedIn: https://www.linkedin.com/in/kylermiddleton/
Kyler's Blog on Medium: https://kymidd.medium.com/

Sponsor:
Panoptica: https://www.panoptica.app/
Transcript
imagine that as a human person that you're hiring for a job and you would say,
of course not, you're not allowed in my company, I'm not giving you any authorization over my
systems. And yet with AI, we're just like, eh, someone will catch it, probably. That is concerning.
Welcome to Screaming in the Cloud. I'm Corey Quinn. I periodically have lamented the fact that the path
I walked to get into the role that I'm in, which by the way is not something to aspire to so much
as be a cautionary tale, has long since closed. Where does the next generation come from when
we're talking about technologists? Sending people down the same technological path that I went down
isn't viable for a variety of reasons.
Here to talk about that and several other things as well is Kyler Middleton, who is a senior principal software engineer at Veradigm.
Kyler, thank you for joining me.
Absolutely. I'm really excited to be here. Thanks for having me.
This episode's been sponsored by our friends at Panoptica, part of Cisco. This is one of those real rarities where it's a security product
that you can get started with for free,
but also scale to enterprise grade.
Take a look.
In fact, if you sign up for an enterprise account,
they'll even throw you one of the limited,
heavily discounted AWS Skill Builder licenses they got
because believe it or not,
unlike so many companies
out there, they do understand AWS.
To learn more, please visit panoptica.app slash last week in AWS.
That's panoptica.app slash last week in AWS.
So let's start at the beginning for folks who have no idea who you are, what place you occupy in our ridiculous ecosystem.
Where do you start?
Where do you stop?
Oh my goodness.
I am collecting jobs.
So that's a great question that I ask myself each day too.
So my day job is a senior principal DevOps engineer at a healthcare company in the United
States.
So I'm writing automation with GitHub.
I'm trying to help the software team develop and deploy their software
and have it actually work
and do interesting stuff.
What a novel concept.
Oh my goodness.
I also do consulting on the side,
just direct hours consulting
to help people with stuff,
mostly because I get a little bored
and have ADHD
and it helps me stay entertained.
And job number three
is hosting a podcast
with Ned Bellavance for Packet Pushers
called Day 2 DevOps. And you should totally come listen to us. We're awesome. And also just writing
blogs. I generally try to do my work out loud and release all the stuff that I can legally release
as open source tools and open source content. Let's start with the podcast piece. And I think
that's probably the easiest point of entry because every time I've seen the name,
I can't help but snicker.
Amazon has famously said, it's always day one.
What does day two look like?
Someone once asked Bezos and he gave an answer
that looks suspiciously like the Amazon of 2024.
And great, like they're in denial that it's day two,
but it's very much day two.
I'm curious what day three looks like.
But yeah, day two DevOps in many cases
seems kind of like what is set in over there
in a bunch of different respects.
That said, that is certainly not how you intend
the podcast to come across
because most people don't live their lives
playing a game of inside baseball
with Amazon corporate references.
What is day two in your context?
I think it's run.
I think it's in the crawl, walk, run sphere. This is run. You
are in the cloud, you've deployed, and now you have to keep the damn thing on. I hope cursing's
okay. I think it is. Oh, I think we'll allow it. Yes. Sweet. So I just think it's interesting.
And I think it's really a science that we're discovering together collectively as we go. How do you control your costs?
How do you control your security?
When anyone with a credit card can build their own VPC or VNet and deploy software to it,
and maybe even connect it to your corporate network, how do you secure everything?
Generally, after the fact, in my experience, people care about these things in a reactive
context a lot more than they do proactively.
It's like buying fire insurance for your building. People care about it right after they really wish they cared more about it. Same as backups. Yep. Flood insurance sales go
nuts after a flood, but not before, of course. I've always felt that the day two approach of
how to run things has been dramatically underserved because there's a universe of blog posts of varying quality on here's how to set
up a thing ranging from a vhost in an Apache configuration all the way up to Kubernetes itself.
Great. Okay. Now it's up and running. How do I maintain this thing on an ongoing basis? Now
something has gone wonky. How do I troubleshoot what that is? And historically, in the roles that I've had,
the way that this was addressed was they hired ops people who were generally a little older than a
lot of the developers who were around there, because there's no such thing in my experience
as a junior DevOps person or junior sysadmin. The answer you want to hear when there's a weird
problem is, oh yeah, I've seen this before. This is what it is, and here's how you fix it.
As opposed to, this is an interesting problem, which is scary when you, oh, I don't know,
work at a healthcare software company, for example. I previously worked for a security
startup called IAM Pulse, which was focused on IAM in the cloud, which is foundational security.
It's how absolutely everything works for authentication and authorization. And if you
Google any of that to go learn it on the internet, which is how we all teach
ourselves this job anyway, right?
You will find all of these examples that say, this is how it works.
And then a little note at the bottom that says, don't do this.
It's totally insecure.
So we're teaching all of our newbies, which are all of us at one point when we're learning
something new, the wrong way to do it.
We're not teaching them the way to run the cloud. We're teaching them the way to set it up in a terrible
way. So that's just endemic to a lot of these fields. But it's worse and stupider than that,
because now not only do you have the blog posts where in the tiny type at the bottom for a human
to read, great, oh yeah, go ahead and just don't ever do it this way. We're just doing this because
it's expedient to get it out there. Yeah, you know what else doesn't read that thing is these AI
large language models that are trained on everyone else's work and that, oh, okay, this must be how
you're supposed to do it because surprise, virtual dumbasses tend to lack context, almost like actual
dumbasses. Hi, I say that as a dumbass myself. But the problem then is that you wind up with this very confident, also wrong robot that's doing its best damn impersonation of a white guy in tech that I can imagine, because being confidently wrong is my job, god damn it. I feel like I should have a union. Maybe I shouldn't have fought against it for all of those years, the white guys' local. No, the problem that you have, though, is that now you have all these bad examples out there, and they're terrible, and people learn from them.
To their credit, AWS has gone back and fixed virtually all of those blog posts historically where we're just going to grant star permissions on everything, because if we don't do it that way, the first half of the blog post is setting up permissions, at which point I freaked out and yelled at them. How about that? Because that's what everyone does. Like, yeah, I want to get this thing up and running. I'll go back and fix the security later. Spoiler: later never comes. And that TODO in the comments becomes load-bearing.
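(For a concrete sense of the shortcut being described here, a minimal sketch, not from the episode itself, of the difference between the grant-star-on-everything policy a quickstart might hand you and a scoped-down version of the same thing. The bucket name and actions are illustrative assumptions only.)

import json

# The "just get it working" policy many quickstarts used to hand out:
# every action on every resource. Fine for a demo, load-bearing in prod.
wildcard_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# The "fix the security later" version, written now instead: only the
# actions the app actually needs, only on the bucket it owns.
# (Bucket name and actions are hypothetical, not from the episode.)
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }
    ],
}

print(json.dumps(scoped_policy, indent=2))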
Absolutely. I have reviewed so many pull requests that are, oh, I'm just going to do it this way
for now to get it running. And I swear that's a
lesson that we all learn eventually, but it's years down the road and we have built so much
crap that our production now relies on. Really, we got to start teaching that to our entry-level
folks. Do not do stuff just for now. It'll never go away. Yeah. Tomorrow never comes. And technical
debt is something that tends to get a bit of a bad rap there. You're always going to make trade-offs and compromises in favor of getting to a goal that matters to
the business. It's not necessarily a bad thing, but you have to understand what you can give up
to get there, what should not be addressed, and what at some point you need to go back and service
some of those things and go back and clean some of them up. Amazon, to its credit, does have the
concept of one-way doors where don't casually make decisions that are going to be very difficult and painful to
unwind. Most decisions don't look like that, but recognizing them from where one starts and the
other one stops, that becomes a bit of a challenge as far as identifying them in advance. And that's
often where I think experience plays in. Absolutely. And I like to think that's what
the podcast Day 2 DevOps is about. The Day 2 is we're interviewing people that have been through
those painful exploratory journeys where they've done it wrong and they've come here to tell you
what not to do. Sometimes they even tell you what to do. But even if you're getting started,
you want to listen to the folks that have done it wrong before and noticed because they'll help you
avoid those same pitfalls. And that's all we can really hope for, right? That's part of the
challenge I keep running into is people, for whatever reason, are reluctant to talk about
their own failures. They're reluctant to talk about things that they have done going down a
path that didn't work out. I still have a strong memory early on in my conference speaking career where I was
watching another presenter give a talk about how they had this amazing infrastructure where they
worked. And I turned to the person next to me like, wow, I would love to work at a place that
ran infrastructure like that. And they said, yeah, I would too. And I looked at their badge and they
worked at the same company as the presenter. It's the, everyone gets up and tells a modified, glorified version of a story.
And understand, I do not believe in mindless adherence to literal truth when giving a conference talk.
Sometimes you have to embellish a story to make it make sense or dramatically abbreviate it
because you don't want to tell a joke, effectively, that takes 20 minutes of setup in order to get to a punchline, usually.
That's, so you have to make the story work.
But by going in the other direction and just hand-waving over as if everything were perfect,
everyone else starts to feel inherently bad about their environment.
Now, I've been a consultant to an awful lot of big companies.
And I have worked in a variety of environments over the 20 years I've been in this space.
And I have never yet found an environment that wasn't on some level a dumpster fire internally.
And I'm sure that some of my clients are going to be upset if they're listening to that.
Like, hey, our environment's awesome.
It's like, is it though?
What about X, Y, and Z?
And they start going, wow.
And my point is not to name and shame anyone.
It's to name and shame everyone. Because every single environment, including the stuff that I built six months ago, is
trash.
And there is technical debt there.
And I would never do these things a second time the same way.
But that's the way that infrastructure inherently works.
That is the nature of the reality.
And Amazon, Google, Microsoft, all the giants, they don't have this magical Valhalla-style
infrastructure. They have a different kind of problem in some cases, but they very much have
infrastructure fires. Absolutely. There are thousands of engineers at all of those providers
running around putting out fires all day, all the time. And when your webpage loads for Facebook,
you think, oh, their infrastructure is perfect. Nothing ever goes wrong. That's not the case. Stuff breaks all the time. Thank you, Andy Jassy. But you do need to just put out the fires,
learn and help scale. We have the same problem with people, especially when you're a junior
engineer. You look at your seniors and think, oh my goodness, they're geniuses. They've been the
smartest people in the room their entire careers. And that is such a silly idea because I am the
smartest person in the room sometimes. And I have done so many dumb things and broken
so many systems. And that's how I've learned. I've broken a ton of stuff and that's why I
know stuff today. So I think it really sets our learners back when we're not upfront as
senior engineers that, you know what, I've done a lot of dumb stuff and I'm here to tell you about
it and what not to do. Oh, I've done that a few times now. I have always found being unassuming about those things is incredibly important, because otherwise it's like, huh, why aren't the slides working? Someone's going to chime in with, just hang on a second, the smartest person in the room forgot to plug in the projector. Like, great. Pride goes before the fall. It always does. Now I want to be
clear. I, I introed you with the idea of talking about where the next generation comes from and where we find the next generation, bring them up.
That is not you.
Again, you are a senior principal engineer.
And unless title inflation has gotten bizarrely out of hand where you work, you are very clearly not a junior person.
But the reason I say that,
the reason I allude to you in that sense is that you have done a lot of learning in public. You
are passionate about passing on the things that you learn, learning in public every chance that
you get. And my gripe about it is just that this is such a rare thing. I feel like people are
terrified that they're going to be discovered as a giant fraud that they think they secretly are. Yeah, you and everyone else. There's a support group for this.
It's called All of Us, and we meet at the bar. And I feel that way all the time still. I have
done so much in my career that is incredible. And still, just about every day, I have a moment where
I'm like, how did I trick all these people to let me on shows like this, to let me lead meetings
like this? It's ridiculous.
Yeah, I've done as much as I can to be introspective and think about when I was a
new learner and I knew nothing because you do when you're new to anything. I Googled stuff
and I read people that released information for free because you don't have a lot of money when
you're getting started and you're young and you're just trying to survive. And that stuff only existed to get me here because people like me made it free, made it available,
put it on the internet, took the time to write it and speak it. And so as much as I can, I'm
releasing all the software that I write that I can legally get away with. Hello,
Veradigm lawyers, if you're listening to this, I do that. And also I just write everything
that I can down. And I put it on Medium to pay for coffee, all the caffeine that I imbibe that
lets me write all those blogs as much as I can. And it's useful. It's the sort of thing where, like, I have lost count of the number of times I have gone looking for how to do a specific thing and discovered a great blog post that explains exactly how to do the thing, explained in terms from someone who is clearly smarter than I will ever be. And then I look at who wrote it, and it was me five years ago or something. And it's, oh, huh, I guess I have forgotten the nuances of those things. I mean, half of the blog posts I write are basically notes to my future self because I'm probably going to come back this way around again. There's a lot to be said for knowing how to look
for things and how to figure out the answer to something you don't know. If people were to ask
me when, if they're getting into tech as a whole, or honestly, most things, what's the first thing
they should learn? One of the things that I would come back with almost instantly is how to ask
questions in productive ways. This used to be a problem in IRC; it's been a persistent problem with condescension. People make fun of folks in really obnoxious ways over on Stack Overflow, but you see it again and again and again. "It's not working" is the worst bug report in the world. I always liked the approach of breaking it down into a small
reproduction case.
Okay, I'm trying to do X and I'm not seeing it.
Instead, I'm seeing Y.
The documentation says this, but I'm not seeing it.
And over half the time when I'm putting together that minimal reproduction case,
I solve the problem.
It's, oh, I forgot something simple.
And occasionally, sometimes, oh, the documentation is wrong.
Or, huh, I found a really weird bug.
What's going on?
Because even if I discover these things,
I am never the only person to make that mistake.
I just have zero problems saying,
hey, I'm a fool.
You know, commas are super important.
And I remember learning stuff.
Anytime I have a problem, I go to Google and I find a Stack Overflow question that is, oh, it's my problem.
That's great.
And the most upvoted answer is,
only an idiot would ever ask this. You should never be doing what you're doing here. And it makes me,
I have been mad for 15 years about those types of responses. So I am out there to seed the world as
much as I can with stuff that says it's okay to not know. In fact, it's probably better. That's
where you should start. Ask those questions and educate yourself. And if someone doesn't know, teach them. Don't make them feel bad. You were there once too.
Few things are better for your career and your company than achieving more expertise in the cloud.
Security improves, compensation goes up, employee retention skyrockets. Panoptica,
a cloud security platform from Cisco, has created an academy of free courses just for you.
Head on over to academy.panoptica.app to get started.
Anyone who comes back with, oh, you shouldn't ask that question.
It's only something a moron would ask.
It becomes ridiculous.
Who answers a question like that in good faith?
Now, I will absolutely answer bad faith questions like that,
but you've got to do a lot of work to convince me
you're asking something in bad faith.
And to be honest, the signs are pretty freaking obvious.
Absolutely.
Someone just saying, it doesn't work.
Here's a picture.
Like, I get it.
You probably need to put a little more effort
into giving me your situation and context
and architecture and error messages.
But still, sometimes that's people who are busy. It's easy to give people grace and it's free to give people
grace too. Remember today, someone else doesn't know something or is having an issue or whatnot,
but tomorrow it's going to be you. And how do you want to be treated when that happens? Spoiler,
it's probably not with a bunch of sarcastic jokes when you're in the middle of a production
outage. What is it? I'm curious, what got you into the idea of effectively learning in public and writing
everything down as you discover it yourself? Was this just something that you came by naturally?
It's just a part and parcel of who you are? Is it a habit you had to train yourself to do?
It's a funny story. When I first started doing computers as a young teenager,
they kind of just made sense to me. The color-coded connectors, and it's very logical,
and that's how my brain works anyway. So it's great for me. And I didn't realize until college
that that is a career because it doesn't come easy to everyone. And until that point, I was
planning to be a librarian because I like to help people. When I was a kid, the librarian was always there and they were always helpful and they
never judged me for my dumb questions.
And so I mixed that with just absolute chaos ADHD that can't remember anything.
And so wanting to learn with not remembering anything, I combined everything into writing
everything down, doing as many podcasts as I can,
because before it leaves my brain, which is about two weeks' time, I need to write it down or I will
not remember it. I am in the same boat. I wind up, I've tried the ADHD urge to set up a system
of note-taking and the rest. Setting up a system is fun. That's day one. Day two, using the system,
nope, I have dozens of them all over the
place. The things that I have found that work for me are I can basically have a heap of notes. I use
Drafts historically. I'm getting into Obsidian. I'm sure I'm using it wrong, but it's a bunch of
markdown text files. I know how to handle those. But you know what's gotten really, really good
over the years is searching through a bunch of text files with my old buddy Grep. Who knew? So I can
find the thing in the reference. It's not like a room where I have a big pile of components like,
now where did that hammer go? And then I'm trying to find it and I can't. And well, I guess
technically, isn't anything a hammer if you hold it right? And that leads to disaster and a replaced iPad. But yeah, there's this entire approach of, oh, I'm just going to have this system that works.
The only system I found that works is letting computers do the things that I'm inherently bad
at. Absolutely. And that's their job. They're dumb. They're not terribly creative, but they're
incredibly fast. So use them for that. You're the creative one. And that's why I'm not terribly
afraid of AI. And I suppose if I'm eating my words and eventually I'm put out of business or enslaved by an AI, like I guess we all will be. But for now, you should be using AI
to help you. It is there to assist. It's going to be dumb and it's going to be not creative,
but it can totally help you get there. And I similarly have, I think, 2,500 Apple Notes
in my little notes app, and I just search through them
when I need to remember how to do something. I do want to talk about AI because in 2024,
I'm legally required to in every conversation I have, apparently. But I was a big skeptic of
machine learning and AI for a long time. And it took a single instance, I think with GitHub
Copilot, to radicalize me. Which was, I
asked it for funsies. Okay, you think you're
good at writing code? Try this one.
Go ahead and query
the AWS public pricing
API to come up with the
hourly cost per
region of a managed NAT gateway
and then display it in a table going from
most to least expensive.
And it did it and it
worked with very little tweaking, which was, okay, there is actually something of value here.
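(For the curious, a rough sketch of the kind of script being described, assuming Python with boto3 installed and AWS credentials available. The Price List field names used here, productFamily, location, pricePerUnit, follow the layout the API usually returns, but treat them as assumptions; this is an illustration, not the exact code Copilot produced.)

import json
import boto3

# The Price List API only lives in a couple of regions; us-east-1 works.
pricing = boto3.client("pricing", region_name="us-east-1")

rows = set()
paginator = pricing.get_paginator("get_products")
for page in paginator.paginate(
    ServiceCode="AmazonEC2",
    Filters=[{"Type": "TERM_MATCH", "Field": "productFamily", "Value": "NAT Gateway"}],
):
    for entry in page["PriceList"]:
        product = json.loads(entry)  # each entry is a JSON string, not parsed JSON
        region = product["product"]["attributes"].get("location", "unknown")
        for term in product.get("terms", {}).get("OnDemand", {}).values():
            for dim in term["priceDimensions"].values():
                if "Hrs" in dim["unit"]:  # hourly charge, not per-GB data processing
                    rows.add((region, float(dim["pricePerUnit"]["USD"])))

# Most to least expensive, as a crude text table.
for region, price in sorted(rows, key=lambda r: r[1], reverse=True):
    print(f"{region:<35} ${price:.4f}/hr")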
This would have taken me a few hours to do myself because I am bad at data structures. And this is
the AWS pricing API because they are worse at data structures. And it looks like JSON and often is
not, but it's an annoying layout here. This would have taken a couple of hours for
me to do easily. And I showed this to a senior engineer I work with, and his immediate response
was, well, that's great for like the easy stuff you throw to like a developer on Upwork, but
there is always going to be a place for senior engineers. And I thought that was an interesting
response on a few axes. First, and obviously, was the defensiveness. Interesting. I didn't expect
it, but I guess I'm not surprised that that does make sense.
Is this thing coming for my job?
Followed as well by, okay, but let's be honest with ourselves for a second here.
Senior engineers don't just emerge fully formed from the forehead of some god.
They, what is a senior engineer?
A junior engineer who's fucked up enough times that now they know where the sharp edges are. If you outsource all of the easy low-end stuff, quote unquote easy, let's be clear here, nothing is easy if you don't know how to do it. But then where do these people
come from? And I've seen this in the ops world enough. I started off doing support and that was
a gateway to doing really interesting things and moving up and expressing curiosity. Today, those
jobs don't exist the way they once did.
They're largely metric'd to death.
They are effectively, you will quit the same job that you entered.
There's not nearly the level of upward mobility that there once was
as the industry becomes increasingly stratified.
So I don't know what it would be like to be entering the sector in 2024.
I don't have advice that is useful.
In fact, I worry that most advice I would have would be actively harmful in this era. And I don't want
to give boomer style interview advice. Oh, hit the bricks, print out your resume on fancy paper,
ask to speak to the owner, have a firm handshake. You'll have a job by dark. I don't want to give
bad advice, but I do not know where I'd even start if I were entering the space today.
What's your take on it? I have the exact same perspective.
I get asked all the time,
well, how do I get started?
I'm a janitor or something.
I don't use computers for my day job.
And I want to,
because look at all the money there.
And there is, there's so much money in tech today.
And my advice is the same.
It's go out and be a tier one,
do support, learn how networking works.
But when AIs are doing those
jobs, which they soon will be, right? We're going to start to have licensed full-time employee seats
at larger companies that handle tier one. I have no idea how we get folks to tier two without them
being able to do tier one. And my best advice is go to school for it. But I'm a little bit worried
that if you get a two or a four-year degree, by the time you're there, AI might have taken tier two as well. And so it's
concerning for entry-level folks and for just overall the health of the ecosystem here that
leads to senior engineers. I don't have the answer here and it's concerning.
I'm very interested in the idea of sending the elevator back down. I find the attitude of,
well, I got mine, kids will figure it out for themselves, to be largely abhorrent. It's, I got supremely lucky in the course of my
career. I am enough of a statistical aberration that I should not exist. And when people ask,
oh, how do I, how do I grow my career that way that you do? My immediate response was, oh my
God, don't do that. Don't do that. Part of the reason I'm good at thinking on
my feet and telling stories quickly is because I was great at getting myself fired from jobs.
And when rent's due in six weeks and you don't have it, because I was also bad with money in
my twenties, you learn to tell the story they want to hear during a job interview and get the
offer quickly. I don't suggest then as a result, well, how do I get to be more like you?
Walk in tomorrow and call your boss an asshole
and get fired.
That's step one.
Don't do that.
It is counterproductive.
I think of AI as an assistant
that makes a lot of mistakes,
but is very fast.
It's a tier one that is just kind of bad at their job.
But if you don't have the exposure
in your career path,
your ecosystem to call it out when it does dumb shit, then I don't know what you do. I think you
learn to trust it. And that problem will only get worse as it gets smarter and it starts to make
mistakes a smaller percent of the time. So today it's maybe right 90% of the time. And that means
like intuitively, I don't trust it.
If 10% of the time someone tells me a lie,
I'm going to check the 90% that are right.
Exactly.
If I'm interviewing a candidate
and they make up an answer to a technical question,
which by the way,
if an interviewer asks you a technical question,
they probably know what the right answer is.
And the candidate is confidently wrong.
I can't trust anything that they tell me.
The correct interview answer, by the way, is, I don't know, but if I had to guess, and then speculate wildly. If you're wrong,
you've already disclaimed it. And if you're right, you've shown an ability to pick up concepts
quickly and reason your way through a problem. Either way, it's a win. But I guess my challenge
right now is that I see AI as being terrific at delivering a surface topical level of answer to things.
But as soon as I start asking it
about anything that I'm more than passingly familiar with
and questioning it,
its answers fall completely apart.
And it's, okay, this is a thin veneer of bullshit
on some level.
And the disturbing part is the realization
just how much of the world functions
on a thin veneer of bullshit.
And that's not to say it doesn't have value.
It is useful.
It is terrific at taking my emails,
which are very terse, which codes as rude,
and turning them into, make this polite and friendly.
And it adds four paragraphs.
And people are like, oh, it was such a lovely email you sent me.
And my prompt was simply like, make this polite.
Give me the file.
Great, make that polite.
Good.
And that's fine, but the danger of it, the hallucination problem, I think, is endemic. I don't know that there's necessarily going to be a fix there.
And at some level, I can't shake the feeling that companies are over-indexing on AI and its solution
to all things way more aggressively than any aspect of the technology currently deserves.
I've seen so many jokey style posts that are like from people
outside the industry saying tech leaders see this technology that hallucinates 10% of the time.
And they're like, oh, great, let's put it in everything. I'm sure nothing bad can happen.
But of course it can. In the interview scenario you set up, I look for the exact same thing.
I want candidates to say, I don't know, because that is very important to
stop and evaluate when you're on a project and you don't know the answer. I would rather you go look
or ask for help than just make it up because make it up is where you destroy stuff. You break your
databases, you delete your data, you bring down production. So unless we can have AI say, I don't
know, I need help, I'm going to have a hard time ever trusting it. And I feel
that we should have that response and we don't. As an industry right now, we're just accepting
that AI is going to lie to us and AI is going to make up stories. And I don't think we should.
I think that's concerning. I know they're banking on AI will be fixed, whatever fixed means by the
time that it rolls out broadly. I sure hope that's true. I don't have the answer
to that one either. I've seen no indication that the hallucinations are getting better.
What I have seen in some example tests that I run, because I have a whole library of fun prompts I
like to hit things with from time to time, and I usually don't share unless the answers are
outright hilarious, but I have noticed an increased tendency of AI to double down when wrong.
Imagine that as a human person that you're hiring for a job and you would say,
of course not, you're not allowed in my company. I'm not giving you any authorization over my
systems. And yet with AI, we're just like, eh, someone will catch it, probably. That is concerning.
Something you learn pretty quickly as a consultant, when a client says something to you
that you know for a fact to be wrong, you don't exactly win points by contradicting them outright. You say, oh,
that's interesting. That doesn't match my understanding and experience. Let's look it
up together as a quick detour. Oh, look, it turns out it does work this other way. Huh,
I guess now we both know. And it's polite. It makes people feel like you've been along with
them on the journey and you didn't just call them out in front of their boss, which is helpful. But there's a human element to this. And I think that there is
an increasing direction of AI drift toward, nope, I'm going to choose my own facts. And basically,
if you don't like it, you're wrong. Absolutely. And I'm concerned. I'm very concerned about that
part. I don't exactly think it'll take all of our jobs, but I am worried that we're going to pivot
so heavily into a technology
that imagines its own reality
that we're going to start to over-depend on it
and under-develop our own skills.
And I see that as the future of AI
is we're going to be shepherding the AIs
and catching their bullshit
and trying to correct their mistakes.
And I don't know if I want
that job as much as I like just building technology. Regardless, that is potentially the future in five
or 10 years. We're automating that AI. I have had such a mixed bag experience with so much of the AI
stuff that it's just, it's great. But if you send this output to the outside world to speak on behalf of your company, or as you, without human review at the least, you're a fool. I've
done a number of experiments to see, can it analyze AWS product releases in the tone of voice that I
use and the insight that I tend to apply to it? And the answer is, on a terrific day, maybe 60% of it. It is tricky to get there.
It misses a lot of it.
And sometimes it goes in horrifying directions,
like brand destroying directions
if this stuff saw the light of day,
which is why it never does.
It's great.
You've given me some awesome turns of phrase.
Thanks, AI.
And you can create hilarious images
when I basically bully you into it.
But that's about it at the moment.
I'm sure this will change, but today at least, I'm not willing to put my entire professional
future in the hands of a robot. I've heard an engineer, I can't remember their name now, say
that AI today is the internet in 1999. And sure, a 15-year-old will break into the FBI a couple of
times and destroy a telco.
That stuff's gonna happen.
So put your safeguards around your AI,
but that doesn't mean stop using it.
That means learn to use it better,
learn how it works, develop it.
And I think for better or worse, that's our future.
So that's the skillset you should be learning
is how to use it and what its limitations are.
I don't disagree.
It reminds me on some level of the math teachers
back in the 90s when I was in school telling me, oh, you won't have a calculator in your pocket
all the time, so you've got to be able to do it all by hand. And I think the better, more realistic
answer is you have to understand enough about this to know that when the calculator gives you an
insane answer, there's an error somewhere in there and maybe just don't blindly trust it.
But there are definitely scenarios where it almost feels like it's a protectionist thing,
like, well, I wouldn't do well in an AI world, so therefore I'm going to pooh-pooh it, the whole thing.
No, I think it would be terrific if this stuff worked as advertised. I am tired of it being shoved down my throat in every product under the sun, and it taking all the oxygen out of the room
compared to, you know, things in infrastructure that I really care about today that are not
touching AI. But I feel like that does get fixed
in the fullness of time.
I hope anyway.
I hope so too.
And I don't have the answer.
I don't think anyone does
because we're still collectively building it together.
And they're sure, you know,
collecting all the data that I put on the internet.
So, hey, I'll do my best to help out.
Exactly.
I really want to thank you for taking the time
to speak with me about all of this.
If people want to learn more, where's the best place for them to find you?
Totally.
I have the hilarious URL kyler.omg.lol, which is a real website that you can go to
fantastically.
I'm very active on LinkedIn.
Please connect and message me on there.
And also on Medium, where I'm writing Let's Do DevOps and Packet Pushers,
Day 2 DevOps with Ned Bellavance.
And we will put links to all of that in the show notes.
Thank you so much for taking the time to speak with me today.
I really appreciate it.
Thank you so much.
Kyler Middleton, Senior Principal Engineer at Veradigm.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment that says in small text at the very end
that this is not actually how you're supposed to do it, that I'm sure everyone will read.