Screaming in the Cloud - Piledriving the GenAI Grift with Nikhil Suresh
Episode Date: July 23, 2024

While we can’t repeat the title of his blog post here, Nikhil Suresh recently gained notoriety for his scathing takedown of the hype surrounding GenAI. On the surface, it appears his anger lies with the tech, but that’s not the case. In this episode, Nikhil explains to Corey why his frustrations are targeted at a predatory bubble swindling young professionals and investors. You’ll hear their thoughts on the correlation between AI and crypto grifts, why most tech keynotes are just fluff and buzzwords, and when industry catch-all terms start to lose meaning. While GenAI may still show some promise, this week’s episode breaks down why you shouldn’t believe the hype.

Show Highlights:
(0:00) Intro to episode
(0:41) Backblaze sponsor read
(1:08) The origins of Nikhil’s viral article
(4:20) The disconnect between buzzwords and work
(5:26) Throwing money at AI
(7:17) AI vs. craftsmanship
(13:36) The rush to get AI tools out the door
(16:12) The telltale signs of bad AI content
(18:50) AI, crypto, and GPU grifts
(20:33) The fallout of Nikhil’s blog post
(22:34) Firefly sponsor read
(23:10) The practicality of GenAI
(26:24) GenAI presentations vs. reality
(29:07) Predatory hiring practices and tech’s current barrier for entry
(32:03) Sturgeon’s Law in the industry
(35:22) Consequences of the hype cycle
(38:48) The fantasy land of “conferenceware”
(42:01) Where you can find Nikhil

About Nikhil Suresh:
Nikhil is one of the directors at an Australian data consultancy named Hermit Tech, though he’s probably most well-known for writing a blog titled Ludicity. Nikhil has a background in psychology and data science.

Links Referenced:
https://ludic.mataroa.blog/
https://www.hermit-tech.com/

Sponsors:
Backblaze: https://www.backblaze.com/
Firefly: https://www.firefly.ai/
Transcript
You know, every week there's some new thing about how all writers are going to lose their jobs,
all artists are going to lose their jobs, you're going to be out in the streets unless you learn how
to program. They tried to terrorize programmers, but it didn't work because we knew what we were
talking about. Welcome to Screaming in the Cloud. I'm Corey Quinn, and I am thrilled to be able to
have the chance to talk today with someone who took what I found to be a very fair and reasoned approach to the ongoing zeitgeist fixation on AI.
Nick Suresh is a director at Hermit Tech.
Nick, thank you for joining.
Very happy to be here. Thanks for having me on.
Backblaze B2 Cloud Storage helps you scale applications and deliver services globally.
Egress is free up to 3x of the amount of data you have stored and completely free between
S3 compatible Backblaze and leading CDN and compute providers like Fastly, Cloudflare,
Vultr, and CoreWeave.
Visit backblaze.com to learn more.
Backblaze: cloud storage built better.
And you are the author of the very even-handedly titled blog post that came out a couple weeks
before this recording titled, I Will F***ing Piledrive You If You Mention AI Again. And that is
just a chef's kiss, beautiful title. Well done. Even if this goes no further, thank you for that title.
It absolutely made my week.
Thank you.
I was so calm while writing it.
I had no pulse.
My heart was not beating.
What I love every time someone writes an incendiary topic like that, where there's profanity in
it, and I put it in the newsletter that I send out to 32,000 people every Monday.
I love the bounces I get where the chiding ones of like,
the mail filter rejected your email
because you used unkind language, yada, yada, yada.
It's like, great, this is a terrific list of companies
that I never would want to work at
because let's treat people like adults.
But ignoring the stylistic aspect of it for a second,
can you describe the basic,
where did this blog post come from?
What inspired you to put digital
quill to ink?
Digital quill to parchment
and pen this amazingly
well-drafted screed?
First, thanks for that.
I only entered the tech market around
2019, coming out of a
data science degree from a big university here.
Back then, GPT didn't exist.
But at that point, you still couldn't really do any AI work, right?
It was just hundreds of managers who had no idea what they were talking about,
barely knew how to operate a computer.
They'd just go on and on about it.
They'd hire people, you'd join the company, and there'd be no work for you to do.
They had no clue what they were talking about.
It's almost like quantum computing, where the hello world tutorial is
go and get a PhD from Berkeley or equivalent
and then come back and we'll go on to step two.
That's exactly right.
They also talk about quantum.
I went to a conference last year in Queensland,
something digital.
Half the talks were on quantum,
just quantum.
They just say quantum, whatever that means.
And there's no way the audience had the credentials
to know what that means,
because I don't,
and I was more qualified than them.
It feels like it's an episode of Star Trek
Technobabble come to life when a lot of these people
give conference talks about this. Like,
okay, great. The only people that can really
say yay or nay basically
fit around a diner table at Denny's
and that's great, but you're giving this
talk to 10,000 people. We're just all going to
smile, nod, and wait for it to get back to something
relevant to our experience. Yeah,
it was fascinating. And, you know, so that was 2019, though, when I mentioned that you couldn't really
do any serious AI work. And then I just left the field. I was like, I'm not going to have a job in
two years. And around the time GPT-3 was coming out, the jobs actually were disappearing here in
Melbourne. And now they've just exploded again. People have no idea how they're going to get
value out of this technology in any way.
They talk about it obsessively.
I got a call from one or two execs here in Australia who somehow read that post and didn't
realize I'm their natural predator.
And they were like, come on my podcast, come on my podcast.
And when I asked them what they do at work, they always described their technology stack
as Gen AI and other stuff.
They can never talk about the part that needs them to understand any math.
And finally, someone sent me that Scale survey, which appears in the post, which says 8% of companies have not seen positive gains from Gen AI.
And something in my brain broke.
I just started writing 10 seconds after getting that.
It's a wild statistic because it also doesn't necessarily measure truth.
It measures who is willing to go on record saying, oh, this thing that everyone is convinced is a
savior. We're going to tell you that we're not seeing business improvement from it. It's similar
to surveys that show that overwhelming percentages of CISOs rank security as their top priority.
But where companies spend puts the lie to that, because no one is going to answer a survey,
even if it's anonymous, with, yeah, we don't actually give a crap about this till a regulator
makes us care about it. It feels like it's the right answer. But there's a definite divergence
between what people say they're getting value from and what they're actually doing.
Yeah. I wrote a blog post that went pretty viral last year, which was about this, right?
Companies say they have values they don't hold, and you actually look at their behavior.
It's a huge cause of workplace burnout.
People come in and they're like, hey, you know, we said we care about this quality,
so I'm going to put that quality in.
You know, and that's like a real human cost to companies just lying to employees and assuming
that the employee knows it's all some sort of fiction.
Like, it is not obvious to everyone.
There's been an attention gold rush towards Gen AI.
People are hurling money at it across the board.
But what I found in my own experiments with it is that it's very good at a topical surface level of bullshit.
And when you start digging into it on any area you know well, it immediately falls apart, because it's not actually reasoning, despite how it appears. But a lot of the world does, in fact, run on bullshit. I found it incredibly useful, for example, to take a very terse email of "please get the thing done" and then turn that into something that people will receive and not be convinced that I'm a raging asshole when they read it. Like, oh, there's a period at the end of the one sentence, he must hate me. No, it's, put this into business context for me. It's useful
for things like that. Don't get me wrong. But the unspoken message in so much of the Gen AI
boosterism is soon you'll be able to fire half those useless bastards hanging out in your
company's customer service team and replace them with a chat bot. Press X to doubt. I appreciate the press X to doubt reference. And yeah,
it's so interesting because when I listened to a podcast from an exec who reached out,
and they seemed nice enough, I was listening to it, and they work with an engineering team. So
they go, oh, of course, you'll never replace the programmers. Of course, you can't replace
programmers with ChatGPT. They were just signaling how human-aligned they were.
And then one sentence later they come out with,
but you're going to replace all your cheap customer support people, right?
You know, those guys, they're expensive.
Get them out of here.
Bring ChatGPT in.
And it was just so interesting to see how quickly this person
was flipping, somewhat ignorantly, between signaling human alignment
and then a complete kind of HR,
all-humans-are-fungible-cogs,
get-them-out-of-here mode.
And the funniest thing is like,
it's not going to work for either of those.
I don't know why they published that episode.
Well,
there's some aspect of it too,
where,
well,
it just gets rid of like the junior level work.
Like you still need senior engineers to do things. That's great.
Where do you think senior engineers come from?
Do they spring fully formed from the forehead of some god?
Generally not.
We learn by doing a lot of those junior tasks.
Now, great.
AI is, in fact, better and faster than we are
at copying and pasting code without context out of Stack Overflow.
It's very good at this.
But as soon as you start digging into,
why are you doing it this way or building an app this way
with a bunch of different snippets
that turn into wild spaghetti code
because nothing has a context window
to understand the entire app,
it's pretty clear that, no, no,
this is just a bunch of stuff being glued together
and maybe it works for an MVP.
But this is not artistry.
This is not a well-crafted solution.
This is brute force mixed with enthusiasm, which are my two favorite programming languages. But it doesn't have a soul to it, right? Like, this chair I'm on was mass-produced by IKEA. There was no handcraft and shit. But there's no development of judgment
in deploying LLMs on these topics,
and that's the thing that makes you a professional.
So if someone's relying on ChatGPT to do stuff,
for one thing, it can't.
I tried to get it to write Hello World in Elixir a while ago,
and it's fine at Python, but it can't.
It doesn't know anything about Elixir.
It's just so bad at writing it.
This thing wouldn't run, which kind of shows you the man behind the curtain. It's not as smart as it
might initially appear. But also, when junior friends I have start trying to use it, they're
like, I'll get into programming. They very quickly find that they're not getting smarter while they're
using it. It might accelerate one or two kind of small bits that didn't matter that much anyway.
And then, you know, it constrains their brain in weird ways
because it's very hard to learn how to write good tests in programming.
It writes your tests for you.
It doesn't write them very well, but then you stop thinking about it.
It's very dangerous to use as a junior, I think.
You have to know the rules before you understand when it's okay to break them,
as I think is part of the approach on this. But I want to be clear on this, that I have been accused in the past, when I have pushed back on AI, that, oh, I'm basically being defensive because I think that it's coming for my job. And I find that more than a little offensive, just from the perspective
of my value is not the sheer number of words I can bang out in a short period of time. It's
the insight and the thought that goes behind it. That's the reason people presumably listen to me,
either that or they don't know how to click the unfollow button on Twitter,
six of one, half a dozen of the other. It's not about getting words out quickly that sound vaguely
good. It's about building a story. It's about understanding who the audience is and what they need to hear.
And I worry that a lot of this slop is going to just flood the zone with basically dangerous stuff,
if you let it. It absolutely is. And it's funny because I've heard that line of, you know,
people go, oh, you're worried about your job. That's why you're saying this. I'm not worried about my job.
I am worried about the jobs of people who come out and say things like,
Oh,
it's so much better at writing than me.
You know,
like,
like, it's such an incredible self-own.
Anyone with a modicum of talent in almost any field looks at it and goes,
like,
this isn't as good as getting a professional,
right?
If I was writing a book,
I would not use AI to make the cover art.
I wouldn't use AI for any serious writing.
I wouldn't use it for programming.
And all these execs coming out, like, it's so good.
I'm just like, come on, you're just, you're embarrassing yourself in public.
It's been memory-holed, but one of the big consultancies came out with a statement that
said they were going to be using generative AI to craft their business strategy
for the coming three years.
And it was, what? You're basically having a sarcastic parrot do this for you. Do you mean stochastic
parrot? No, I absolutely do not. If you've prompted correctly, it's a very sarcastic parrot.
And that's kind of the point of it. Like, it's just empty words that go
well together on a predictive algorithm when you do vector math on it. That's not,
that is not the stuff from which good strategy springs. And if it is, maybe your job
is nonsense. Yeah. And what it writes is so predictable almost all the time. And it does
have some very weird use cases for things humans aren't that good at. Something it's great for is you describe
a problem, and it's very good at telling you the technology you should go Google, because it turns out Googling
tech is kind of weird. Companies have names like Stripe.
How would you know that's what you need to look up
to get to what you're looking for?
But outside of that,
how is it going to help you with strategy?
If someone at...
We have a flat structure.
None of us at any point have been like,
we'll use ChatGPT to do our strategy.
Not because we're opposed to it.
We like winning, and that's a path to losing.
And it didn't even come up in discussion.
I do see value for it for things that are honestly bullshit type jobs.
Whereas great, we need a 400 slide PowerPoint deck that no human will ever go through.
But we need that artifact to sit there and check a box somewhere.
Okay, great.
Use it for stuff like that.
Personally, I love using it in ways that I don't think they quite expect me to use it
in, because it turns out that you can bring creativity to prompting. I just received an
email that I think is kind of inane. Great. Respond to this email with either overwhelming enthusiasm
or withering sarcasm, but make it impossible to determine which it is.
And sometimes it is just spectacularly on with prompts like that, because I'm not going to
bother to write a five paragraph email thanking you for your invitation to some gen AI nonsense.
Let the robot do it.
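(For the curious, a prompt like that against OpenAI's v1 Python SDK looks roughly like the sketch below; the model choice and the email text are illustrative assumptions, not anything from the show.)

```python
# A hedged sketch of the "creative prompting" trick described above,
# using OpenAI's v1 Python SDK. Model and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

inane_email = "We'd love you to keynote our GenAI Synergy Summit..."

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Respond to the following email with either overwhelming "
                "enthusiasm or withering sarcasm, written so it is "
                "impossible to determine which it is."
            ),
        },
        {"role": "user", "content": inane_email},
    ],
)
print(completion.choices[0].message.content)
```

It's difficult. I'm looking for a point here that's maybe less obvious to the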
audience, because I suspect people who listen to this already have the same view here. So maybe the interesting thing to comment on that is not that
it's obviously bullshit. But we need to address the fact that a large number of people running
the industry have not developed personal judgment. They can't make that determination. It does not
look like bullshit to them. It writes those sentences. They've somehow become CEOs.
And that's how they think, right?
So they look at that and they don't think.
What they should be thinking is, wow, I've been spouting bullshit for 20 years.
And that's why this looks good.
But they haven't connected those dots.
They also haven't put two and two together.
I mean, Amazon recently launched its Rufus AI assistant in the iOS app.
So when I encountered that, okay, don't try to outstupid me.
I'll play those games.
And they, of course,
do the best that they can
to defang the thing
so it doesn't make them look bad.
But, you know,
if you have enough creativity
to bang two neurons together
and make a spark, it's not hard.
Write a limerick about this product.
Easy enough to do.
Great.
And it did.
It spat out a limerick
where the last line didn't rhyme
because why would it?
What really is a limerick?
Talking about how much it enjoyed riding a dildo
because, that's right,
Amazon is also the world's largest dildo emporium.
People forget that.
I call them the underpants store,
but that's really out of respect
because I couldn't call them the many, many things
they would have deeper problems with.
But you can't, on the one hand, sell things like that
and then act shocked when your AI robot on your website spits out
commentary about that thing. But companies are rushing to stuff these things directly in line
with customers and having them say things that are never reviewed by a human being before they're out
there representing your company. And I don't understand that. If a human were to say even
half of these things, they'd be fired on the spot. And yet here we are.
Yeah.
And the rush, I have to assume, is not unrelated to Gen AI in particular.
It has some interesting characteristics that are good for grifters, right? A comment I made on Better Offline was that if you look at rolling out a crypto app or
something, I don't like that space, but you actually do need to know how to
code to do something in that space. If you look at serious engineering companies in Melbourne,
crypto companies are overrepresented because you can scam a lot of people, but you need
engineers to do the scam. If you look at the AI space, I think a lot of people don't realize,
especially non-technical executives, they just have this class of person that is rolling out
really basic Django web apps.
And the AI component
is just someone typing in import open AI
and then whatever
string you pass. It's a very thin
shell script wrapped around a call to their API,
but sure, that's enough if you tell a good story to raise
$4 million. Yeah, I'm
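(A minimal sketch of that thin-wrapper pattern, for anyone who hasn't seen one from the inside: a Django view whose entire "AI component" is one call to OpenAI's API. The view, model, and field names here are illustrative assumptions, not any specific product.)

```python
# Hypothetical "GenAI startup" backend: a thin Django view around one API call.
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@csrf_exempt
def chat(request):
    # Take whatever string the user sent and pass it straight to the model.
    user_text = json.loads(request.body).get("message", "")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_text}],
    )
    return JsonResponse({"reply": completion.choices[0].message.content})
```

Yeah, I'm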
not even thinking about VC money. I'm
thinking big institutions here in Melbourne
that are not even making money off
of this. They're not getting good valuations or anything.
But they do this anyway, and it's not even
because of this grand institutional
plan. It's because of this individual
grifter class that just
infiltrates every big organization. Everyone's seen
it. They just convince non-technicians
that they're as good as OpenAI
because it's not
obvious. It looks like you've built the thing that the specialist team in the US built,
but you've just got two lines of code in there. It feels like it hits differently in different
arenas. Whenever you have the chatbot or you just generate a blog post or whatnot,
it always feels like it's making the fundamental attribution error here that you don't care enough to write it,
but somehow magically, through the power of the internet and AI, people will give enough of a
shit to read anything that you shove out from this thing. I think that is, that is a mistake.
I think that people are going to learn to tune it out extraordinarily quickly. And when I'm
gathering news, when I'm gathering articles that I'm considering, do I put this into the newsletter on Monday?
I don't know if it's that I'm good at spotting AI writing or if it's just that I have a very low tolerance for bad writing.
But either way, there's so much stuff that I see that I don't know if a human wrote it or not, but either way, it's crap.
So we're not going to be including it.
And I can pick that up extraordinarily quickly as I read it when, you know, there's three logic errors and two
misspellings in the first sentence. It's fascinating. When you're looking at
spotting AI writing, I try to be sensitive to the fact that, like, I don't know how many have
slipped past me, but I'll just say that at this point, I would have expected someone to have
pointed it out, right? Like generate something by AI, save the proof somewhere that you did it by AI and see if
you can slip it past humans.
And then when it gets past them, just point it out, right?
Like that seems like a pretty easy column to run to get clicks and no one's done it
yet.
And there are valid use cases for this.
For example, I've just written a blog post in English, which is not my first language.
Can you edit and improve this? Great. That's a great use of it. But you're a fool if
you don't read the thing that it spits out before slapping it into medium and hitting the publish
button. Yeah. And that's actually a problem with kind of older, less hyped AI stuff (it was actually still
super hyped at the time), more classical statistics, which was, it was really
easy to hit accuracy levels that felt pretty good, but weren't suitable for the business,
right? It's why I can't just take Gen AI or something older and start automating away
lawyers. I might even get a pretty good hit rate on normal boilerplate stuff, if it's like that
constantly, but you could never, ever put it out without having
a lawyer look at it. So you haven't actually saved very much labor, possibly none, because reviewing
stuff is kind of, you get really paranoid. You're like, did I have enough coffee before I
read it? Do I need to go through it again? Is this going to crash prod? It's just the same
issue as before, which is like a bunch of almost-working demos. It's now easier to get to a working
demo, and then the pathway to revenue, I just don't see what it is for 99% of these applications.
No, it feels like it's hype chasing. You talked earlier about cryptocurrency being a terrific way
to scam people. It feels like some of those exact same people have pivoted to Gen AI and it's,
what is the affinity between these two things? And then it occurred to me, these people are
clearly NVIDIA's street team.
They don't care what you're using GPUs for
just so long as you're buying more of them
so that they can get their commission
or see the stock bump or whatnot.
I'm only half kidding when I say that
because it does feel like there's an awful lot of folks
who have this insane urge to push whatever it is
that demands hard math on giant farms of GPUs.
There's something interesting with them because it's hard to tell which ones are grifters
because there are some grifters.
And there are also some people who have just become so credulous and excitable over their
career that they've been elevated because that brings them a certain amount of energy
in social settings.
And when you look at someone individually, it's so hard to tell
who is, like, actually doubting the tech and just here for the money, and how many
other people have just been swept away. I know a lot of salespeople like this, who when crypto
really started taking off, they used to sell other stuff. They all sell cryptocurrency in my home
country, Malaysia now. And I think they have made themselves true believers because they want it to
be true that there's this
thing they don't need to study for or learn anything in, and they can just print money
and still be a good person.
Obviously, they can't.
That's not how the world works.
Hope clouds observation.
Oh, I'm unaware of that.
No, it's a hard problem.
Oh, sorry.
Sorry.
I thought you were talking about someone called Hope Clouds.
Oh, no, no, no, no, no.
Just the idea of you want it to be true
so that clears your ability to be objective.
And it's a complicated problem.
And I do feel for a lot of these people.
I am curious.
I know that whenever I write a blog post
that has a certain virality level to it,
it breaks containment and goes outside
of the people I generally hear from about these things.
And I start getting some responses from folks I would never have expected to hear from,
which is a polite way in some cases of saying complete wackadoos. Okay, great. This is not the
typical demographic I envisioned writing for, the typical audience I wind up seeing
my writing targeted for. I have to imagine you got some element of that just given the
sheer overwhelming popularity: for about 24 hours, you could not go on the internet without encountering
your post. That did happen. I was very surprised to see it broke out of the IT circle. There's a
lot of programmer-specific jokes in there. You know, very early on, I make a joke about Postgres.
Non-technicians don't know what Postgres is. You know, I did not write it to maximize virality.
But basically what I tapped into, and I was quite upset to find, was we had a lot of non-technicians reading it.
So writers, artists, people who were just, like, near the grifters, who knew it was kind of bullshit.
But they didn't have the credentials or the ability to definitively call it out, because they don't program themselves.
And it really made me aware of this massive human cost.
You know, psychologically, for the past one or two years,
you have had these kind of complicit rogues running companies who have been just terrorizing people.
You know, every week there's some new thing
about how all writers are going to lose their jobs,
all artists are going to lose their jobs,
you're going to be out in the streets
unless you learn how to program.
They tried to terrorize programmers,
but it didn't work
because we knew what we were talking about.
But it's just, you know, it's been horrific.
And there is going to be
not just this psychological cost,
there's going to be a real one
when people who are simply
not particularly talented at business
are going to preemptively lay off their writers and artists
because they think that Gen AI is going to do it. And then they're going to have to hire them back.
But that's going to be like a rough one to two years while people go through this cycle.
Are you running critical operations in the cloud? Of course you are. Do you have a disaster
recovery strategy for your cloud configurations? Probably not, though your binders will say
otherwise. Your DevOps teams invested countless hours on those configs, so don't risk losing them.
Firefly makes it easy. They continuously scan your cloud and then back it up using infrastructure as
code and, most importantly, enable quick restoration. Because no one cares about backups,
they care about restorations and recovery.
Be DR ready with Firefly at firefly.ai.
I think it's going to be interesting to see how it unfolds just because I,
in those circles that I travel in,
I don't see people losing their jobs to Gen AI.
There's a,
there's a sense it'll happen real soon once it gets just a little bit better,
but I don't see it yet.
I see excuses for layoffs coming all the time because we're bad at planning, so we're going to lay off a bunch of people.
That doesn't play as well as, we have optimized their roles with Gen AI.
I feel like there's a lot more of that latter case than there are the former in the circles that I tread.
But I see it myself. In the conference talks I give
lately, instead of doing a lot of purchasing of stock photography, I will just have one of these
things generated, because there is no stock photograph that I could get without commissioning
photographers to specifically do this. But I needed a picture of a data center aisle.
Great. Now put a giraffe in it. There is no zookeeper who is
taking a giraffe and wandering that thing through a Digital Realty Trust data
center or an Equinix somewhere. That just isn't going to happen. So it's bad Photoshop, or just winding up having
the bot spit that out for quick and dirty things like jokes on Twitter or throwing it onto a
conference slide. That seems to be acceptable.
And I think that that's where you're going to start to see some erosion from the bottom up.
And I don't honestly know what to do about that.
Yeah, well, I guess there's two things.
And one is, if you couldn't do that, would you have hired someone to bring a giraffe into the data center?
Or would you have just not made the joke?
Exactly.
I would have done some bad Photoshop and Microsoft Paint, and that would have been the end of it.
Yeah, it'd save you a little bit of time.
And then I don't know what you bill per hour,
but how often would you have to do that
to get to like 600 billion market cap?
That's an awful lot of giraffes
and an awful lot of data centers.
I feel like at some point, that's going to be hard to do
because as we all know, giraffes aren't in fact real.
It's a terrific scam.
But I've seen giraffes.
They're clearly fake.
I mean, there's no way that thing can exist.
They're just long horses.
Exactly. But oh no, remember, unicorns aren't real because. I mean, there's no way that thing can exist. They're just long horses. Exactly.
But oh, no, remember, unicorns aren't real because, you know, a horse with a horn on
his head.
Oh, yeah, that can't possibly exist.
But this thing with a 20 foot long neck.
Yeah, that's real.
How gullible do I look?
It is just it is fascinating to me that so many people are uncritically just talking
about how these LLMs are going to revolutionize everything.
And to some degree, I wonder if it's almost like cult signaling. When you're very deep into a cult
or something, you make displays of faith not necessarily just by believing, but by
saying unbelievable things very, very sincerely. And the more unbelievable it is, the more you're
showing to people around you like you're fully committed.
And I think there's like two management classes, one with people who kind of know what they're doing and they're nice to talk to.
And another who, I always say, just had, like, Forbes magazine flashed onto their brain. And I think that's, like, more than 50 percent.
Like it's most people in the corporate world.
You know, it's a pretty horrifying number to say most people.
And yeah, they just might be doing all of this
to signal to people around them.
Like, I mean, I'm on the grift:
when the Gen AI thing, you know, disappears,
I'm going to say the next crazy thing I need to
about crypto or quantum.
And that's how you know you can bring me on board
and I'll help you trick someone into giving you funding.
It's wild to me that I will go to cloud keynotes and they will have their CEOs
on stage talking about how given reference customers are using Gen AI to completely
revolutionize their product. Okay, great. Well, some of those companies are in fact my clients.
So I talked to them. I'm like, oh, great. I somehow must've missed that on our latest engagement call.
What's going on? Like, oh yeah, we tried it for a bit.
Didn't really work super well.
Didn't see much value.
So we canceled the project.
It's like, huh, that's not the story that was being told on stage.
So you start to wonder on some level, these executives and these managers, do they genuinely
believe the things that they're saying because someone else told them that?
Do they know that they're not telling the truth?
Or is it a game of telephone
where someone's like, yeah, we tried it. Like, yes, this company is using it. Oh, this company
is using a lot of it. Oh, this company is transforming themselves with it. And by the
time it gets to them, it sounds like the greatest thing since sliced cheese.
I've met a couple and I at least try to be charismatic enough that they open up occasionally.
So I've met one who admitted that they don't think any of it works,
but they feel like they have to say these things.
And they've always got some reason.
They're like, we'll use this to get funding and do something good
for the employees or whatever.
But I'm like, it's the line.
You got a job.
You can't just come out there and sling bullshit all day
and be a good person.
You get others who are true believers.
I think the most concerning one is when you meet a non-technician
who has kind of ended up in
management because they did a lot of large-scale
enterprise corporate work.
And that's almost entirely political,
which is promising people things.
Politics is not that hard.
And with those people,
sometimes
they're actually kind of clever.
But you can almost see
that they have been groomed by people
around them for 30 years. Like they've just had their bullshit detectors turned off and they
usually, you know, they'll ask one or two self-critical questions, but the
thought process stops there. They can't sustain that line of reasoning for long enough. And I
kind of understand why, right? If you've had your direct reports lying to you for 30 years,
like you're just like this crazy gaslight echo chamber.
If someone did that to me,
I would think I'm a genius and stop questioning myself.
Yeah, when you remove people who are telling you how it is from your orbit and surround yourself with yes men,
yeah, it gets very hard to identify the truth
from all of the nonsense filtering through.
Yeah, and you know, grifters, these people have good social skills.
They play this complex metagame where they like deliver just enough bad news that it
seems like they're being sincere, but it's carefully calibrated.
So, you know, it's just short of like you need to fire them.
They just occasionally deliver something that sounds pretty bad.
Like you can't stand up to like 200 people doing that for 30 years.
You're just going to come out a changed person.
At the moment, when someone self-describes as a data scientist or an AI expert,
one of the ways that I've filtered for whether they're a grifter or not
is to pull up LinkedIn or equivalent and see what were they doing in 2019?
Because if it involved data science or machine learning, great.
They probably have some idea of what they're talking about.
If they've only been doing this for six months,
they're probably a grifter.
Problem is, I feel terribly for people
who are graduating in the field legitimately now
because they get buried in the nonsense noise.
How do you guard against that?
I've been advising a lot of students
who reached out after that post,
because a lot of them said,
for some reason, a lot in Brazil specifically, were like, we're all graduating from universities
and we're really scared jobs aren't going to exist or we can't stand out.
The thing I told them is actually to stay really clear of the Gen AI space, go find
a small to medium business doing, like, more classical statistics, where you can demonstrate
you've got actual mathematical ability, and work on your operational knowledge. Just become a good software engineer with some experience in this
type of algorithm. And I think that'll be fine because when the bubble collapses, we'll go back
to where we were pre-2019, which is a small number of companies will need people who can actually do
machine learning and statistics. And you just want to be well positioned and networked at that point. I wouldn't
want to get into the field now, not because you can't get a job, but because it's
so hard to find a job where you're actually going to pick up real skills. If you
just join a random company out of university, you are just going to be hanging out with
more of us. It's so hard to find a good place. It takes some time
in the industry to develop a
fine-tuned bullshit filter, to figure out: is this founder high on their own supply,
or do they actually have something that they're doing that is legitimately useful? Because without
a bit of experience under your belt, it's very easy to instead get fooled by whoever sounds the
most convincing, and that's dangerous. Yeah, and as a student, you're just not going to have that
kind of background.
And what's horrifying is they're still more
well-positioned than non-technicians, right?
So I think the healthy
attitude for anyone who's graduating
is find a place doing
something hard that isn't Gen AI.
And if you're not a technician, it's like
crypto, right? The moment someone starts
talking about it, just kind of walk away.
They might have an actual product
in the same way that there must be
a real blockchain application.
I've never found one,
but I haven't looked that hard.
There must be somewhere on the planet.
I've been looking, but 15 years in,
it feels like it's a solution looking for a problem.
And honestly, it feels like it's speculation,
fraud, and fraud adjacency.
I think it would be generous to say
1% of crypto projects actually have something
backing them, but let's be generous and
we'll even say 10%, right?
Even at that 10%, if someone starts talking about it,
unless you're really deeply invested
in the field, just don't listen to them.
And the same with Gen AI. If someone's like, I've got a Gen AI product,
I'm like, just don't talk to them.
They're probably on the balance of things
trying to scam you.
You introduced me to Sturgeon's Law,
which comes from some sci-fi work in the 50s
that states, quite simply, that 90% of everything is crap.
The problem is, in this case,
how do you weed out the wheat from the chaff, so to speak?
Not that you weed wheat, but that's okay.
I'll abuse metaphors to death.
It's an interesting thing because,
so I came from a psychology background,
and psychology is very competitive in Australia. Competitive psychology
is a tournament I would watch, but please continue.
That'd be pretty good.
Just a bunch of people trying to psychoanalyze each other
until someone cries. That's called Scrum.
Scrum's a good example
of Sturgeon's Law, right? But
psychology is meant to be super competitive.
And I very, very easily
just got into
whatever the highest-end program is
in the country. Again, not because I was a genius
or anything. I was just like, people
weren't even trying. I was sitting
at a university with supposedly the best
students in the country. And they
would fail at assignments and go, how did you
do it? And it would turn out they didn't open
a book. They didn't open the textbook.
Like, well, it's pretty easy: I read the book.
And then I left psychology
because I'm like, no one's serious here.
Gone to data science. Same with that,
right? I'm at the end of a two-year master's
and students are coming up to me going,
what is machine learning?
I'm going, well, we've been studying it for two years, brother.
I don't know what to tell you if you haven't
figured it out now. Did you pay attention
to anything?
Yeah, it's like, where do you even start when someone hits you with that?
There is a counterpoint that I've been working in cloud for a decade and change now.
Like every time someone asks me, what's the cloud?
It's like, increasingly, I have no idea.
It feels like it's become such a wishy-washy term.
And I think on some level, that's what machine learning is turning into.
It's become a hot button phrase that people want to pile into and it's diluting the actual meaning of the term.
They all are just starting to boil down
to like a computer was involved somewhere.
How many services that you use
that require no Gen AI
are now referred to as a Gen AI service,
particularly from cloud providers?
It's, well, why are they calling it that
if it's not really using anything?
Ah, this is where we learn how politics work
and why project managers who are ambitious would like
to get promoted and build larger orgs.
It's this modern-day feudal lords
of Amazon that we wind up seeing from time
to time. Yeah, it's the, you know,
you mentioned Agile is how you torture
people. Agile is actually
very useful as a judgment
signifier because when someone talks about
it too much at a job,
before you get in, if they even bother saying something like, we're an agile team, you know that they either
have no idea what they're talking about, or, and this happens sometimes, they actually do something
that approximates working agile, but they're not socially savvy enough to realize what they sound
like. In which case, you still don't want to work there, right? A smart person doesn't talk about
agile. Or within the last three months, they had a massive
reset and took all the engineers for a week and
threw them in a room with an Agile coach at ruinous
expense and people's time. And they
have to say the right things for the next quarter.
I know of at least one company that said everyone is
Agile trained. And it turned out that meant
they all did a 20-minute LinkedIn
course and then saved the PDFs to a
network drive. So, you know, digital
transformation. Very powerful. Very good. That's a huge organization.
Yep. It's amazing how people love to play these games. Oh yeah, it'll work. It has to,
because we need it to. Otherwise we're going to have some awkward questions on the earnings call
or to our regulators or to our investors. It just feels like it's an overheated hype cycle.
I'm someone who sees
value in Gen AI in a bunch of different ways. I love tricking different models into ranking the
U.S. presidents by absorbency, which is something that's very hard to get a human to do and
surprisingly simple to get a robot to do if you give it the right prompt. But whether I necessarily
want to have it do my taxes, I don't think that's going to end well for me.
No, no, it certainly isn't.
And, you know, I realize that you asked about Sturgeon's Law earlier.
It's something that comes up in my writing.
And people get really, really upset.
They go, how can you call 90% of things crap?
And it's like, how do you judge? Look at them.
Exactly.
You know, it's just how they are.
And it doesn't look that way if
you're not a specialist in that field. For instance, most people can't actually tell that 90% of writing
is not particularly good. Every writer or journalist I've spoken to since the blog post came out just
endorses that immediately. They go, of course 90% of writing is not terribly good. It's not a judgment
about the person, but a judgment about the writing. I'm not a very good piano player, and everyone
who plays the piano knows that. My random
friends don't. But the reason I raise
this is because I think
that is kind of the mechanism people defend
themselves from this Gen AI,
absurd hype cycle, which is
just keep developing
your professional judgment, and you can
work yourself into spheres where
your output matters
more than ChatGPT just spamming crap out. Just embrace Sturgeon's Law. It is not a bad thing.
People think it's pessimistic. It's a profound source of optimism. Because if things weren't
90% crap, I would be homeless. I'm not smart enough. But because people aren't trying,
it's like, yeah, this is tremendous. Back in my SRE days,
people were somewhat surprised
with my job hunting approach.
Like, don't you want to work
in the best environment you can?
It's like, no, all I can do there is fuck it up.
I'd much rather work someplace
that's an active burning fire
because I know how to fix
at least some of those things.
So, how do you want this to evolve?
I mean, so many of these companies
are telling on themselves.
Oh, we're using RAG
to wind up bringing in our documentation to answer things through a chatbot. It's like,
that's a lot of words to say your documentation is dog shit. Maybe you should fix that. And
suddenly you don't have that problem anymore. It's the technology that people are using to
paper over other cracks, but you're just building the house of cards a deck higher.
The RAG thing is so interesting to me because you have companies that
they have hundreds of Confluence pages
that no one can navigate.
No one's updated.
You go on there and it's just like
this graveyard of employees
who all left one year after they wrote the page.
Almost to the day.
Yeah.
It's just so, so common.
And they go, well, this is an unnavigable mess, right?
It's all out of date.
They'll acknowledge that and then go,
and then if we hook all that documentation up to an LLM,
you know, people will be able to work with it.
We're going to customize an LLM on our code base
or our internal documentation.
Like that's a polite way of saying
we're going to take a really smart robot
and then give it a traumatic brain injury
to see what happens.
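(For readers who haven't met the acronym: RAG, retrieval-augmented generation, is roughly the sketch below — embed your documentation, fetch the chunks nearest a question, and paste them into the prompt. The docs, model names, and top-3 cutoff here are illustrative assumptions; note the pattern's catch, which is the point being made above: retrieval over a stale wiki faithfully retrieves stale answers.)

```python
# A minimal RAG sketch: nearest-neighbor search over embedded doc chunks,
# with the winners pasted into the prompt as context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = ["Confluence page 1 text...", "Confluence page 2 text..."]  # stale or not

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question):
    q = embed([question])[0]
    # Cosine similarity between the question and every chunk; keep the top 3.
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n\n".join(docs[i] for i in np.argsort(sims)[-3:])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this documentation:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content
```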
Yeah, just like delete all the pages.
Why are you wasting your time on this?
And again, it's so hard to tell
which of the people doing this
think that's going to work
and which are grifting.
I'm no longer sure there's a coherent
enough world model in their head
that you can really categorize them.
I think they just flip
between the two modes randomly.
But yeah, everyone I hit up,
I go, hey, you know, they hallucinate
and then they come back with,
you know, oh, we'll do RAG.
Well, it works in our internal stuff,
say Microsoft and Google and Amazon.
It's, yeah, you understand
that your companies internally
may as well be alien organisms
compared to 99% of the companies
on this planet.
So it works for your use case
and you trained it for your use case.
That's great if we all suddenly
start acting like you.
But the last time one of you tried to get us to do things the way that you do, you inflicted Kubernetes on us.
And here we are.
Yeah, it's also fascinating to me that, you know, when these companies talk about their stuff working, that might be what they're saying.
But their actual senior and staff engineers email me to tell me it doesn't work that way internally.
So even though, like, if it can work, it would work there, but it's still mostly not working there, right? Like, they're also trying to trick us. During conference talks, I love the back
channels with the speakers' colleagues who are basically calling all the bullshit in the talk
that they're seeing right now. It's like, I sure wish I could work at a place that did that. Yeah,
me too, because I do work there and we don't do anything like this.
Maybe they do in fantasy land, which we call conferenceware, but for the rest of it, not so much.
Yeah, it's just a whole bit of the industry that's based off of just lying to people about how cool you are.
I'm pretty happy to admit, like a lot of my early projects were dumpster fires.
Most of mine still are.
The goal is to get far enough along where you can bring responsible grown-ups in
to push you out of it
and take it over to do it correctly.
Yeah, a lot of consulting
is just selling fire extinguishers.
And you rarely end up in some beautiful...
I think I've only had two projects ever
with an existing code base
where there was a genuine chance
of making it something pristine and beautiful. And none of that was because I came in as, like, this brilliant consultant. It was
because the team internally, I don't know what happened, they got, like, real drunk one day or
something, just had, like, a soul-searching moment, and they approached us and were like, we're
actually all ready to clean all of this up, we just want some advice on how to do the thing.
And we rarely end up doing it for them, because
it's hard to air-drop consultants in
to do that. What we can do is stuff like you do,
because I believe you do cloud cost optimization.
Oh yeah. And I know a lot of companies
that have died through
not having product market fit or not
being able to succeed as a business, but I know
very few who died because their code was so bad
it killed them.
You can generally work your way around engineering
if the business has found success.
The inverse is never true.
Our code is pristine.
Yes.
And you're out of money.
So turn it off.
Yeah,
exactly.
Because that's the thing.
Even,
even if a code base is really,
really bad, until you're, like, late-stage enterprise,
you know,
whale-on-the-beach dying,
you can basically get away with just dropping more and more bad engineers
onto the product. Revenue tends to be
non-linear, right? So you tend to
end up with either nothing or way more money
than you need. Very rarely are you on the
exact edge, like, barely surviving,
once you're past the early stage of the business.
So yeah, I see companies all the time.
They just keep saying they're retraining data
analysts as data engineers, which,
you can't do that in two months.
Takes more than two months to learn like the basics of programming.
Imagine that.
But they just keep doing that.
They keep moving the analysts over
until you have like 40 people ingesting five gigabytes of data a day.
Like they're alive.
Those companies are kind of doing fine.
I really want to thank you for taking the time to speak with me about all this.
If people want to learn more, where's the best place for them to find you?
So there's the blog, which is the main reason people are talking to me.
So that's at ludic.mataroa.blog, which we'll have to put a link to, I suppose, because that's hard to spell.
And then there's hermit-tech.com, which is where I work.
And we will include links to both of those in the show notes.
Thank you so much for taking the time to speak with me. I really appreciate it. Thanks very much. It was good to be on.
Nick Suresh, Director at Hermit Tech. I'm cloud economist Corey Quinn, and this is Screaming in
the Cloud. If you enjoyed this podcast, please leave a five-star review on your podcast platform
of choice. Whereas if you hated this podcast, please write a five-star review on your podcast
platform of choice, along with an obnoxious comment that a Gen AI thing wrote badly for you, and that one of your executives will not shut up about on stage.