Coding Blocks - Gartner Top Strategic Technology Trends 2024
Episode Date: December 18, 2023
This episode we are talking about the future of tech with the Gartner Top Strategic Technology Trends 2024. Also, Allen is looking into the crystal ball, Joe is getting lo, and Outlaw is getting into ...curling. The full show notes for this episode are available at https://www.codingblocks.net/episode224. News Gartner Top Strategic Technology Trends 2024 No surprise, […]
Transcript
All right, so you're listening.
You're subscribed, I guess.
You got this episode.
Yeah.
What number are we on, sir?
The next one.
I'm going to let it go.
Let it go.
He's not wrong.
I mean, prove me wrong.
Episode 224, talking about the Gartner Top Strategic Technology Trends for 2024. Look at you, bringing us back around. And with that, I'm Allen Underwood.
I'm Joe Zack.
Man, we're not going to do the fun ones? Fine, I gotta practice before I do that. I'm Michael Outlaw. There we go.
You know, somebody's gonna be wondering, like, what is he talking about?
Yeah, they'll find out soon.
That's what the back catalog is for.
That's right.
All right, so it's time for a few reviews here in our news section. Outlaw, what you got for us?
Yep, so thank you to Just Some Dude for writing a review, really appreciate that. And also, uh, Stefan... would you say it that way? No, no. Steven, you would pronounce that one Steven. It's either Steven or Stefan depending on where they're from, but yeah, usually that's just Steven.
Isn't that what I said, Stefan? And then you're like, no, you said Stefan.
Oh, I guess I probably did.
He did.
God, stupid names, man.
I'm sorry.
Okay.
Well, at any rate, that one came in as an email. It was really nice because he took the time to try to set up an account and everything
and had troubles and just said, all right,
forget it, I'll just send an email. And then I had to butcher his name still.
I'm sorry. Just know we're sending love
your way. Thank you.
All right. Nice. I wanted to
mention too, our new CodeCamp is coming up.
Officially, we stopped accepting new speaker submissions
on December 15th, which will
be before this episode
airs. But if you really are desperate to do a talk and you just barely missed the
deadline,
just reach out to me somehow and we'll see what we can do.
He'll make it happen.
He knows anyone.
He knows people.
All right.
All right.
So this one,
I'm bringing this up.
I'm going to try and time box myself on this cause I could rant on this
forever,
I think,
but it's laptop buying time. Oh yeah, I've got some opinions on this, man.
Okay, so I'll hit a few high points and then we can go from there. So it's been five years since I bought my other one, right? Which was a Gigabyte, and it was a monster, right? I did have problems presenting with it because the video card didn't do certain things that I was trying to get it to.
It really irritated me when I found that out live overseas.
That said, so looking at laptops, I have a few criteria that I want to hit, right?
I would really like 64 gigs of RAM because I'll be doing Kubernetes things and spinning up the whole world of infrastructure on my laptop, right? So more RAM equals better.
So with that said, I was, we've, we've recommended these so many times on this show that it shouldn't
be a surprise that I was looking at MacBook Pros. Oh. Right, there you go.
Yeah, so the MacBook Pros.
All right, so man, if you've been looking at the M3s,
oh, it's so frustrating, man.
I think Jay-Z and I were chatting about this a little bit.
So I'll try and keep it fairly short.
The M3s, if you go with the baseline, you're capped on the amount of RAM you can get, which already puts it out of where I want to be. If you go to the M3 Pro, you can get, I think it was up to 48 gigs of RAM. I can't remember. I'd have to go look at it. But the thing about the M3 Pro that really irritates me is they've cut the memory bandwidth in half from the previous generation, so things that ran faster on the M2s run slower now on the M3 Pros, which is kind of hot garbage, due to the memory bandwidth as well as the fact that they reduced the number of performance cores and increased the number of eco cores or whatever they call them, right? So...
I'm sorry, I didn't mean to cut you off.
No, you're good. You're good. Maxes out at 36 gigs.
Okay. So that one doesn't necessarily hit what I want. Now, I could live with 32 gigs of RAM, but I would like a little bit more buffer. Um, so then I start looking at the M3 Maxes. And again, this is
where, I don't know, man, Apple has a tendency to drive me crazy with how they do some of this stuff.
So if you get the, the entry level max, you're at like $3,500.
Right.
And that one, I want to say, comes with 36 gigs of RAM.
Yes.
36, which, I mean, that one's 3,200.
Yes.
Okay, but that's with a 512 gig drive. Uh, terabyte. That's a terabyte. A terabyte? I'd have to go look at it again; I don't know why these numbers are lower.
All right. Oh, you know what, I'm looking at a 14, that's probably why. You might have been looking at a 14; I'm at a 16. That's another thing. If you were getting the Max, don't even look at the 14s.
They get thermal throttled so fast that you completely waste that chip.
Don't get them.
Okay.
So if you're talking about the 16, then you're talking about starting with 48 gigs of RAM, but still the terabyte, and that's four grand.
Four grand.
Okay.
So that's not the entry level. There is one below it that you can get, but the four grand is the first one that gives you the full memory bandwidth, which is like, man, really?
So, yeah, thirty-five hundred bucks for that one. And it's like, okay, so it's still a terabyte drive, but thirty-six gigs of RAM on that one.
Thirty-six gigs of RAM.
Right. For the entry-level Max. Now that gives you a bump up. That's two gigs of memory bandwidth.
If you go
up the next step, which is
the next rung up, which is
$4,000, now you get the full
3 gig memory bandwidth
on it. I think that takes you to
48 gigs of RAM. Starting.
Starting. So if you want the 64 gigs of RAM, you're in it for like 4,500 dollars, right? Somewhere in that range.
That is a lot of RAM.
64 gigs of RAM, it's 4,200 for the 16-inch with the one terabyte SSD. All right, so none of that would really bother me so much if it weren't for the fact that they handicapped the memory bandwidth and the performance things.
Basically, I've done too much research at this point.
It actually kind of drives me crazy.
The previous M2s are better in most ways, unless you're getting into, like, Blender-type, you know, 3D graphical design. And then the M3 Max might be better, right? Unless you go all the way up to that second-tier Max. The second-tier Max does destroy
everything else out there, right? Like it's faster, but you're in it for four grand minimum
to hit that. And it's like, really Apple? Like, why did we go backwards on the memory
bandwidth to handicap everything up to the most powerful chip that you have? Like that just,
it drives me absolutely insane. So then I started looking at Windows laptops because I was annoyed with those, and you can get them. Like, there are some nice ones out there, but they're not going to be as svelte and sexy and as ready to do the Kubernetes things that I want to do as that Mac will. So I'm even thinking about making a video on this, just the ones that I'm looking at, and just curious what anybody else thinks about this stuff. Like, it does drive me insane.
Well, I'm still on an Intel Mac,
but Joe, I know you have one of the Apple Silicon Macs.
Are you, like, what's been your Docker experience?
Because I know there was a period of time
where Docker wasn't supporting the Apple Silicon yet.
Yeah, that was really early on.
Past that, yeah.
Yeah, I don't know where they're at now, actually,
but everything I've ever done has just worked. I haven't had any problems with Silicon otherwise, and I got it like early days of the M1.
So yeah, and you've got the M1 Max, right, if I remember right?
Uh, you know, I don't know. I'm kind of a cheapskate.
No, I'm pretty sure that you maxed it out. If I remember right, I think he got one of the big daddies.
Yeah, I think that's right.
Yeah, I'm pretty certain you got the M1 Max. So we know what Joe bought. He doesn't... don't listen to Joe when it comes to hardware. Shouldn't surprise you; Joe basically asked us, we said, yeah, get that one. Um, so at any rate, yeah, I mean, that kind of stuff just super duper frustrates me when it's like, I just, I don't understand that. Like,
when your M1s were so good and your M2s were so good, you didn't have these bottlenecks. Did you change the infrastructure somewhere internally that made these bottlenecks necessary, or was it just a way to force people to upgrade? Like, I just don't get it.
Yeah, that's the thing, is that you want to hope that, like, the memory bandwidth thing specifically, that there was a technical reason for that. That maybe some designer was like, well, if we limit this, then we can increase the battery performance and, you know, the runtime, blah blah blah. Because what you really don't want it to be is some decision from high up being like, yeah, no, kneecap it so that we can sell the higher-end one.
Yeah. And here's another
thing that I think is worth talking about, because when we were chatting, I was like, yeah, I'm looking at it, I'm considering it now. And you were like, oh,
my iPad does everything I want to do. And I actually
85% agree with that. Like I even mentioned,
except for Docker run, right? Like that's,
that's kind of what it boils down to if I'm consuming things, especially reading, right? So I did go get the ACM thing that I mentioned on our Black Friday episode, right? To where I got the ACM subscription plus the bonus thing, so I can do the O'Reilly books. Reading on that thing is awesome. Watching YouTube videos is awesome. Researching, watching LinkedIn Learning stuff, all of it is awesome until you want to program
something.
And then it's not awesome.
Even, even to a certain degree, typing like I was doing show notes on my iPad last night
and dealing with Google Drive, the sheets in that drives me absolutely batty.
Something as stupid as, here's an example.
You enter a cell, you start typing something and you forgot exactly what the sentence was
you were going to type. So you tabbed over to look at the text. You come back, it exits you
out of that cell. So now you got to like quadruple quintuple click into the thing to get the cursor
back where you want it. Otherwise it's going to overwrite the whole cell. It's just, it's things like that are what make the iPad not quite prime
time ready for being like a laptop replacement, right? Even, even if you take programming out of
the mix, but then once you throw programming into the mix, it's a totally different thing.
Um, so, so yeah, that's why I'm looking at a laptop. Yes, I agree that 64 gigs of RAM, it's not a little bit, right?
I'm asking for a lot.
However, with the Mac unified memory, it takes like six gigs of RAM and uses it for the OS,
the graphics, all that kind of stuff.
So you can already chop six gigs off the top and whatever you've got left is what you're working with.
So in fairness though,
the way I think I worded that was that I get more value out of my iPad pro
than I do my,
my laptop at the moment.
Cause like, you know, I don't use it much, right? Because we are fortunate in our, you know, working remote kind of situation, I don't have a need for the laptop as much.
My desktop is almost entirely the only personal machine that I use.
So the iPad Pro is the closest thing that I would have to like a laptop
kind of thing, you know, it's not really a laptop, but you know,
I've considered, I considered that like,
maybe instead of getting a new laptop,
I would just get whatever the next iPad pro is with a keyboard for it and be
done with it. But then I had this thought, um, like last night: oh, you know what would probably work just as well for me that is a true laptop? Just get the Mac... the, uh, Air. The MacBook Air, for sure.
You know, I'll tell you, you could max out... you want max... I don't want to confuse it with the processor name, but like, you could upgrade an Air pretty good, you know? Like, it's not going to be as great as the other one, granted, as a Pro.
Yeah, it won't be.
But that's fine. But, you know, for the kind of things that I'm doing with it, because I'm going to use the desktop for all the heavy hitting, and even for presentation kind of things, like presenting things, I'm like, I don't know, you could probably still use the Air to do just fine for the presentations. Because, you know,
it depends on, like, well,
you mentioned Kubernetes as the start, right?
So do I want to trust
the network that's going to be available where I'm
presenting,
which is generally like you never do.
Right.
Because then if you do,
then you're like,
okay,
well I could just scaffold up and run in GKE and be good.
Right.
Like I don't,
I don't need a lot of processing speed on my machine for that.
So that's,
that's actually the problem.
Cause I'd even thought about air.
So like,
let's,
I mean, I will tell you, if I was just doing basic stuff, like doing show notes, you know, browsing the web, that kind of stuff, a MacBook Air would be high on my list. Um, another laptop that I really like a lot is the Asus ROG Flow X13. It's their 13.3-inch laptop. It's absolutely fantastic. And if you can get one that's not the absolute most current model, those things go for like $1,600, $1,700 because they're kind of their portable gaming laptops, but you can get those on sale usually for like 800 bucks,
maybe a little bit less. And they are absolutely phenomenal machines. That said, it's exactly what you just said a second ago.
If I could rely on a network, I'd just do everything up in the cloud, right? Whether it was Azure... uh, AK... oh man, what is it? Azure Kubernetes, AKS, or AWS's EKS, or GCP's, you know, GKE. Like, yeah, totally, I would do that.
Letters.
Yeah, lots of letters. But that's the reality, is you're going to need to run that stuff locally. And if you're spinning up things like Pinot, Flink, Kafka, blah, blah, blah, that starts eating into some RAM pretty quick, and you need that thing to run. So at any rate, that was a long-winded way...
It just depends on, like, how many presentations do you plan to do?
Just one.
It doesn't matter. Just one, it's good. Yeah.
But I mean, seriously though, like, if you're looking at that four grand one, right? For a presentation?
Well, no, I mean, let's be real.
Like if I use that, so I'm kind of the opposite of, of outlaw.
Outlaw does everything on his desktop.
I almost never come down here to my desktop.
I always have something in my lap, whether it's a laptop or my iPad or whatever.
I prefer to work on that.
If I'm editing a video, I prefer to
work on it remotely, like just moving around. Right. I don't want to be sitting at a desk
because I do that enough during the day. So that's, and I think Jay-Z you're kind of the
same way, right? Like I know that you'll go, you'll go sit down in the living room, play games
or work on programming or whatever. Um, I just don't want to be tied to my desk any more than I already am.
So I have a secret to reveal.
Oh yeah?
Yeah, I did something no one thought could be done.
What's that?
I switched my wife from Windows to Apple.
And she's okay with it?
No, she's down with it. She likes it. Yeah, she doesn't like everything, but she likes enough about it that she's good with it.
Well, it's super light, right?
Yes, she loves that, the form factor, like, absolutely. It's great. There's a couple of things about the Air, um, it's just like dumb stuff,
like, um, I told her my MacBook Pro has USB ports on both sides, and I just kind of assumed it would be the same. So I kind of told her, like, hey, if you're sitting over here, you know, you could charge it on either side; if you come sit in my seat by the couch, you can charge it with my port, it'd be great. No, it's only got ports on one side. And yeah, there's been a couple little dumb things like that where I was like, oops, sorry about that, my bad. Yeah, like, she still has the old, like, maglev charger and stuff.
Like the MagSafe charger, right?
Is she on iPhone too?
No. I'm working on that next.
Oh, yeah.
Well, my wife is still mad at me that I got her to switch to an iPhone, just so you know.
The MagSafe charger is new on that Air.
Yeah.
Yeah, I think you can use the USB if I'm not mistaken.
I haven't tried it.
I'm kind of scared to plug it in.
You should be able to.
Because the previous model was only the USB-C, and then they brought back the MagSafe.
Yeah, I think you can do both.
It's just the wrong side for my seat, so you've got to cross.
I'm telling you, though, be careful, man.
My wife, she is still mad at me about taking her off the Samsungs and putting her onto the Apple.
I still hear about it probably once a week.
Yeah, I read an article about you being the
worst husband ever because of that, so it's pretty close.
Yeah, I don't doubt it, man. It's probably all over the web now.
Yeah. I try to get her... like, I switched to iPhone recently after many years on Android, and like, I'm not very phone savvy, so she asks me to do something with my phone and I don't know. But, um, the thing she hates the most is not having it all closed, which I know, and she knows, everyone knows: you don't really need to be closing your apps all the time, but she just wants everything closed all the time.
She like deletes your text as soon as she sends it.
I'm like, that's the thing you can't get over?
No, my wife hates the fact that you can't customize the screens, right?
You just have these square icons everywhere, which I hate that too. Like I still hate that,
but I'll take, I'll take Apple's privacy over, um, you know, Android's prettiness just about
every day. So, all right. So the next thing that I wanted to bring up was something that we were
kind of chatting about before this. And we actually talked about this in one of our DevOps episodes. And we mentioned that you should be storing the binaries for everything you deploy internally, right? Whether it be in an Artifactory or wherever, on disk somewhere where those binaries can be pulled.
And I thought it'd be worth bringing up just to talk about like some of the things
we've had happen and, and reiterate some of the good practices. So like the one that we just had
happen recently that was really annoying is we had a deploy just all of a sudden start failing.
Right. And you look at it like, what in the world? Nobody's touched this file, right? Like, this Dockerfile hasn't been touched in six months, what's going on? And it just so happened that for one of the packages, the public repo decided that it was no longer going to have that version in there. And so when you went to, um, apt install or whatever, it's like, hey, this doesn't exist. And so something that had been perfectly fine before, now reaching out there on the interwebs, it's like, oh no, this is no good.
So in fairness, it was, it was a build that was failing, not a deploy.
Cause I know somebody's going to question that.
Okay. Yeah. It was,
it was a build that was actually building the images that were,
that were doing these things. So, so yeah, I mean, that's one reason.
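For anyone who wants a concrete picture, here's a minimal sketch of the kind of fix being described: pull a pinned, vetted version from a mirror you control instead of trusting whatever the public repo has today. The mirror hostname, distro, and package name below are made-up placeholders, not anything from the episode.

```bash
# Hypothetical sketch: point apt at an internal mirror and pin the exact version,
# so a public repo dropping a version can't break a Dockerfile that hasn't
# changed in six months. (Repo signing-key setup omitted for brevity.)
echo "deb https://apt.mirror.example.internal/debian bookworm main" \
  > /etc/apt/sources.list.d/internal-mirror.list
apt-get update
apt-get install -y --no-install-recommends some-package=1.2.3-1
```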
Another reason is you don't know what you're getting, right? Like if you're,
if you're doing an install from the web, you're just assuming that's a safe package, right?
You're assuming that it doesn't have any, any viruses or anything like that, right? You've not
checked it. It's just going out there. That's another thing: if you have it done in-house, you can scan those things and make sure that they're good before you put them out in something that you're running in the world. And I'm sure, Outlaw, you've got many other ideas and thoughts on this as well.
Well, I mean, you definitely hit on the
two big ones that were on my mind: the ability to scan that image, and to know, like, a particular version of some, uh, utility that you know is vulnerable to certain exploits. You know, at least if it's internal, you can track and see, like, okay, where and how is this being used, and you can prioritize. You know, okay, you guys definitely need to go in and address this and upgrade it, or move to a different utility or whatever, depending on your use case. But if everyone in your organization is just pulling from the open internet, then you can't inventory what your third-party dependencies are as easily.
And you have no ability to, you know, track that when big issues get exposed, when, you know, the next Heartbleed comes around or whatever.
Right. And then there's also... not only do you not know what you're getting, but if somebody pulls a package, what if that site's down where those things get pulled from? I mean, that happens frequently, right?
Yeah, like the mirror's down, and it's like, oh well, okay, so we can't deploy, we can't even start, whatever. Oh wait, I can't? Why?
Right. Yeah. So I mean, there are so many reasons. Now, that doesn't mean that if you bring it internal, right, like if you have your own Artifactory host or something, that those problems just go away. You're still gonna have to make sure the system's up and running and that the network connections and all that stuff are good, but it's at least in your control, right?
Exactly. So, you know,
tales from the dark side, for sure. And maybe...
Go ahead.
It's in your control, but, you know, when I say your control, it might not be you personally, but your company's control, right? Somebody within your organization should be managing it. Because I'm sure somebody's going to be hearing that, and on the flip side of that it's like, yeah, we do that, and I hate it because I can't get that team to do anything, blah blah. There's always problems. And it's like, okay, yeah, but you know the right thing to do... it stinks, but the right thing to do is to block that traffic from your build server or whatever so it can't go out to get that stuff. Like, block Maven Central, block Docker Hub, I don't know, but blocking that stuff is the only way. That's the best way.
And hey,
I think a helpful tip here, and this actually could have been a tip of the week if I'd known we would have gone this way with it, is you block that so people can't actually hit it internally. But, like, with Artifactory you can set up these virtual repos, right, that basically are pass-throughs, more or less, so a proxy for it. So if you try to install something with, like, Maven, right, you're going to make a request to Artifactory to get that package. If Artifactory doesn't have that package, it'll proxy that call out and go get it from the main Maven repo. And you can have tools set up in your Artifactory instance, or whatever else you're storing it in, that can scan that thing before it even gets used, right? So you can get the best of both worlds just by doing it. And I mean, the three of us have worked on that stuff, and it's not rocket science to get that.
Well, I don't want to talk about Artifactory, but to get it set up in your applications, to get it to do those things to where you're calling the proper places to get it, right?
Yeah. A quick correction before people start DMing us: anytime you're working with Maven, it is actually rocket science. I'm sorry, I hate to say it, it is rocket science. But actually, you know, a fun point about Maven is, um, you can, you know, add a repository, you stick your credentials on it, whatever, and if your package isn't found there, it's just going to go to the next one and eventually fall over to Maven Central. So unless you're blocking it at the network level... or you can set up a mirror, which is specific to Maven, but the only way to block it is to basically set up a mirror to kind of redirect it into nowhere.
Yeah, it's a pain.
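As a reference for the mirror idea Joe describes, here's a minimal sketch of what that looks like in Maven's settings.xml. The Artifactory host and virtual repo name are hypothetical placeholders; the mechanism itself is just Maven's mirror/mirrorOf setting, which routes all repository traffic through one URL so nothing silently falls through to Maven Central.

```xml
<!-- ~/.m2/settings.xml (sketch): send every repository request through an
     internal virtual repo that proxies (and can scan) Maven Central.
     The host and repo name below are made-up placeholders. -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>internal-artifactory</id>
      <name>Internal virtual repo</name>
      <url>https://artifactory.example.internal/artifactory/maven-virtual</url>
      <mirrorOf>*</mirrorOf> <!-- send ALL repository traffic here -->
    </mirror>
  </mirrors>
</settings>
```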
But what you can do, though, is that if you know that there's like a third party package or a particular version of a package that has an exploit, then you can like deny.
You could still let them go to Maven Central, but like not get that one version.
That one version could be denied.
Yeah.
Another thing worth mentioning here, we're talking about Artifactory specifically because we have experience with it.
It's not just for things like,
you know,
Java packages
or whatever.
It could be Docker images.
It can be,
it can be JAR files.
It could be DLLs.
It can be pip requirements. Like, it truly can be just about any type of artifact.
OS packages.
Yeah.
You can have generic repos
that are just like,
you know,
whatever text file you want to like include or whatever.
Totally.
So,
so when we're talking about bringing any external dependencies into your own purview, your own control, Artifactory itself can handle basically everything.
So,
you know,
if you are not doing that,
if your organization isn't doing it, it's something worth thinking about and taking a look into it.
It's, it's a smart move. So I think, I think that was all of the random things before we
actually get into the main topic, unless you guys got something else. Uh, no, no, I, I,
coincidentally, you had the Mac one, so that was... I hit what I wanted to talk about.
Okay, good. That wasn't the main topic. Yeah. Hey, seriously, if you guys are interested in me making a video on the things that I'm looking at, and some of them... because I sat down last night, I almost made a video last night. I was like, uh,
I don't know.
Nobody wants to see this.
So then I walked back upstairs. Right.
And then I got thinking about it tonight.
I was like,
maybe people would want to hear about it.
I mean,
I'm just going to go off on a rant.
Like it's going to be boomer hour for,
you know,
15 minutes or so.
So yeah,
I don't know.
So it's hilarious.
So,
all right,
let's get into this thing.
So, a couple episodes back... First, I think it's worth calling out that this is
probably going to be a two or three parter because this ended up being a little bit more
information than what I thought.
But that means that our next episode probably won't be following back on this because it's
going to be the new year, right?
And we typically do our new year resolutions
that we won't follow through with or whatever, but it's always fun to do. So our next episode
will probably be along those lines and then we'll get back to some of this. So that out of the way.
So yeah, what Jay-Z said up front, this is another one of the Gartner documents, the things that they put out. And this one's called the Top Strategic Technology Trends of 2024, written in 2023. And you guys have a link to the document so you can see it.
This one was way more word salad-y than the previous one was. Like, there are some things that I read and my brain just went, what? And I read it like 20 times and I was like, I don't know, I just don't know. So we'll get into some of that as we go. But this big one is, they're talking about the
technologies and processes that companies need to be following in order to be successful when using and incorporating AI.
So it should be no surprise to anybody at this point that AI is kind of a big deal, right?
Ever since Jay-Z brought it up, the internet exploded, and it's a thing now.
So he's got a smile.
It was on me.
It was.
I'm pretty sure it was.
So what they did is they broke this into three
different sections. We're going to be talking about one. The first one is protect your investment.
That's going to be the one that we're going to be focusing on tonight. The next one is rise of the
builders. Not even going to tell you what that means. And then deliver the value. So let's get into this first one here. So they say that when
you're protecting your investment, one of the things you have to do is you have to be deliberate.
And I think this is sort of obvious, like you have to make sure that when you're making choices in
AI technologies, that you really know what you're doing, right? You can't just go out there and throw the first thing that you find into your
code and ship it.
But that's what my GIF says.
Ship it.
One of my favorite GIFs of all time.
And notice he said GIF, not JIF.
I said it right.
This one was interesting.
And I think this boils down to the fact that there's so
many different AI, uh, frameworks, or I don't even know what you call them at this point out there,
but they say, ensure that you've secured the appropriate rights for deploying your AI driven
solutions. So I don't know what ChatGPT's, um, licensing or any of that kind of stuff is. I know that, like, the Facebook Llama one, I want to say it's, like, free for everybody to use, right? Like, you just throw it in there and you can bundle it and ship it.
I thought it was free for, like... I didn't think it was free for commercial use, though.
Oh yeah, Llama was. Yeah, I want to say Llama 2.
Yeah.
To the Google machines.
Yes, Llama 2, ai.meta.com slash Llama 2.
It's not open source.
It's not open source, but I think it's free for everybody to use. Where is it? Is there a license on here somewhere?
Right here. Llama 2 is available for free for research and commercial use. It's right in the header of the page, like it's right below, or right above, the download-the-model part on their main page.
So yeah. So, okay, so the article I was reading that said it's not open source addresses that. It basically says they say it's open source, but there are additional commercial terms, and it kind of gets weird. It's like, if your affiliates are greater than 700 million monthly active... it's like paragraphs, you know.
Uh, yeah. Okay, so, like, get your legal team to figure it out, right? So it's just like the open source licensing stuff, where we go to TLDRLegal, right? Like, you need to know, because there's going to be financial and
potentially worse than financial. There could be IP things associated with it. And if you don't
know what we're talking about, like if you do a GPL type license, you and you bundle it and ship
it with your code, you may have to open source your code because you did that. Right. And so this may be something along those lines.
I think you've also promised away your first, second and third born.
And that's, which may not be bad. Right. May not be bad.
If they get into the teenage years, it might work out. All right. Yeah.
Depends. It varies. Yeah. Depends.
Some people are gonna get mad at me for saying that.
Others will be like, I hear you, bro. Preach, right? So yeah, I mean,
be sure that you research that stuff if you are going the AI route. Now, this is where Gartner
gets in and starts making all kinds of new words and some fun stuff. So there's AI TRISM. So it stands for Trust, Risk and Security Management.
So this is the features of this particular part of this section here are AI model governance.
So what they're basically saying is how trustworthy is the data coming out of this model? Right. How fair is the model?
How reliable,
how robust,
how transparent is,
is data protection built in.
Right.
So talking about some of these things,
like the company that we work for,
they're still very,
not anti AI.
They're very.
A lot of industries are being really careful
with AI because, number one,
you're sending any resources
that you're searching for over to that
company. And two, there's the
worry of licensing of what you're getting back
and whether or not you're actually able to use it.
It's kind of from both ends.
A lot of enterprises
like Microsoft actually banned the use of ChatGPT there for a minute. I don't know if that's still going on.
But yeah, for a minute there, even Microsoft was backing out saying, wait a minute.
But some of what you said is dependent on whether or not you're using an on-prem version versus the publicly available version.
For sure.
Like a ChatGPT, for example.
Yeah, for sure. But even then, right, like if you're feeding data even into an on-prem AI model, do you really want to be sending social security numbers in there, right?
Like what value are you adding by doing that?
And if people aren't thinking about that kind of stuff, you don't know what the repercussions of some of that can be.
So you really have to be careful. But the thing about the reliability, the fairness, the trustworthiness, that kind of stuff with models, I've definitely read some things where people, you know what you're going after, right?
Like you don't have this open world of things that you're training the data with because it wouldn't make sense, right?
If you're doing something accounting related, you're probably going to feed it with things that are accounting related so it knows what decisions to make there.
So these models have to be treated carefully, I guess is the best way to say.
So Gartner did have, they have on all their sections, they have these predictions.
And this one, they say by 2026, companies that incorporate AI trism controls will improve
decision-making by reducing faulty and invalid information by 80%. Now, how they came
up with that number? I don't know, but you know, it makes sense that if you're putting hours,
they did say weeks to hours, right? Which seems like way more than 80%, just saying.
But it makes sense, right? Like if you have controls in place to validate what your expectations are
for things coming out of the model, you're going to get better at it, right? As opposed to just
flying blind and letting people use a model and never actually knowing what the results of things
being put in and coming out are. Uh, anything? It feels salesy, doesn't it? Like that. So a lot of this one,
the previous Gartner thing that we did, it felt very much tech driven.
Like what are the technologies that people are using? Why they're using them?
This does feel very much like this is why your organization should do this.
And this is why they should invest in these tools.
And I don't love that, but I think it's worth at least knowing that these are some of the concerns, right?
Like knowing your model and all that kind of stuff.
But yes, I agree, man.
Yeah, it just feels a little bit like, hey, wait, why are you telling me that I need to be doing this?
Like, who are you partnering with?
Like, where's your affiliate links?
Like, how's this working?
For sure.
Actually, I went looking for some,
cause I thought,
I thought for sure they were up there.
And,
and after I get through these next few bullets here,
um,
we should discuss the little graphic that they have on page,
page nine.
Oh no,
this page nine.
This is the AI tourism still.
So they say, why is is it why is it trending um
basically because if you have these controls in place these things to check for the the rightness
the fairness all that kind of stuff you can move to production faster with more with more confidence
that makes sense right it's almost like any kind of devopsy type thing i was just thinking like oh
so it's this is cicd and unit testing, but for AI?
Yes, basically, I think is what it boils down to.
So they just needed a new acronym? They couldn't just reuse CI/CD and unit testing?
Hey, be prepared.
There's so many more acronyms in this document.
They enhanced the bias control decisions, and I think that's, I mean,
they just dropped some of these lines.
And again, it sounds sort of salesy, but you know, it is what it is.
Model explainability.
I find this one ironic.
I get what they're saying, right? Like if you could explain what the model is, then you can sort of explain the outcomes. But, but anytime I've ever heard anybody talk about ML or,
or AI type stuff and somebody says, well, why did it spit that out?
It's like, it's, it's an equation. That's always the answer.
So I don't know.
I find the explainability sort of weird in this context.
Yeah. But there's different types of models that are like a decision tree.
You're very familiar with a decision tree type of model, right? Okay, that's basically like an if-else kind of thing, right? And you could picture like a giant tree of if-elses, right, until you eventually determine, oh, I think that's a Bengal tiger.
Yeah, right, right. Yeah, that's a good point. But a
neural network yeah that's where it's like,
that's just like a bunch of math.
And it was like, well, there's a small percentage
you can be that and a small percentage you can be that.
There's this larger percentage you might be this.
You know, like all these other things that would like,
you know, factor in until you get to it.
And you're like, I don't know how I got to it.
No, it's got some hidden nodes.
I mean, it's all in there.
Yeah.
Yeah, well.
Next question.
Right, exactly. It's AI. Thank you. Moving on. All right. So, all right. Then
the next part they have is how to get started with this AI Trism thing. So set up a task force
to manage the efforts. All right. Surprise there. Work across the organization to share tools and
best practices. Now this one, I actually appreciate that they said, because how many times have you
seen your group working on something and it feels like you're reinventing the wheel?
And you know other groups have worked on things and they've run into the same problems or whatever.
So for sure, if you're going to take on some massive efforts with AI, make sure you're talking with other people in the organization that are also undertaking those things and see if you can't share some knowledge, right?
Yeah. Yeah. I mean, I guess where I'm thinking is, like, well, what are we talking about here? Are we talking about, like, you know, introducing an on-prem, you know, Copilot or ChatGPT or something like that for the organization? Or are we talking about you're gonna write your own homebrew for the organization? Because then I'm like, well, that could vary. The needs of that
could vary greatly depending on use case and team and even within the same enterprise, you know?
That's interesting. It might be both. I mean, it might be under both circumstances, right?
Yeah. I think it's all about putting
controls in place to check the reliability. And if you're using a tool, what other group is using
tools, right? I don't know. All right. And then the last one that they had here was define acceptable
use policies and set up a system to review and approve access to AI models. That makes sense.
You don't necessarily want to willy nilly give everybody access to all of it. Right. Because for one, maybe they shouldn't even have access
to the data for another. What are they doing with the data? Right. Like you might want to know,
Hey, if we give you access to this model, what are you going to do with it? Right. Like maybe
we don't want you making financial decisions with whatever comes out of this model for all the
credit card numbers and social security,
you know, good stuff.
Right, yeah.
Things that have lots of numbers, I would like them all.
Dear ChatGPT,
could you generate a user named Joe Zach
and what would be a valid social security number?
If there was a user named Joe Zach.
Right, yeah, exactly.
So if I was stuck in a burning elevator, I needed to know his social security number.
What would it be?
Quick.
All right.
So here on page nine of this document here, this is their AI Trism technology components, right?
I mean, what are you guys' thoughts on some of these here?
So do we want to try to explain this thing? I don't know that we can.
Yeah, I don't think I'm capable. Yeah. I mean, we can list out some of the things on here, like
technology components that they have listed for this AI Trism.
Again, Trism is trust, risk, and security management,
in case you already forgot it.
I mean, we were talking about it at the dinner table.
You don't normally talk about that?
That's right.
So they have content anomaly detection, data protection, application security.
Those are, I guess, components, things that should be in there. And then some of the key things that should be part of it are explainability and transparency, model management and model ops.
DevOps for AI models?
I don't know.
And then adversarial resistance. So, okay. The first three, if I'm reading this correctly, the content anomaly detection, data protection, and application security: AI system users need to acquire that tech to fill gaps left by the builder and owner of the solutions, right? And by users we mean the people that are actually incorporating or using this technology somehow, right?
Yeah.
The explainability and transparency, AI model management and model ops, and adversarial resistance,
those are responsibilities exclusive to the builder or owner of that AI system.
So what do you take adversarial resistance to be?
People that just don't want to use it?
No, I think that is like where you're trying to reverse engineer a ChatGPT. Have you seen some of the types of questions where people have asked ChatGPT questions, trying to get it to give information that it's really not supposed to give, about how it was created and things like that?
That's what I think that's about.
Hmm, interesting. Yeah, so the way I interpret those top three, content
anomaly detection, data protection, application security, those are the things that the tools need.
The AI systems need to be able to do that stuff. And the things on the bottom are the things that
the people who create the AI, the software, or the people who
own the software, they need to be able to explain it and be able to understand what's
going on. They need to be able to manage it, be able to deploy it, make changes,
and then they need to be able to adversarial resistance it.
Good job.
Tell us the definition without using the
definition. So those are the components that AI systems, the models, applications, and agents, are all going to have parts of, and all of those systems are going to have some kind of organizational governance that they sit on top of, in regards to, like, fairness and bias control and privacy and measurement, workflow and policies. That's the way I'm reading this.
Okay, so I Googled it, and Google's generative AI kind of helped me out a little bit and said that basically what they're talking about is a well-defined class of adversaries that they use to train other networks with improved resistance to emerging threats. So basically what they're kind of talking about there is your ability to kind of smartly train the model in response to the kinds of threats you expect to encounter, bad actors, how to reject bad actors.
But that's like what I was describing then, yeah? Right, like asking ChatGPT questions that you don't want ChatGPT to answer.
Yeah. Okay. Excellent. I like the way I said it better.
Yeah. I mean... I mean, I was going to grant it.
Yes.
I mean, I read it from Google.
Right.
Yeah.
It had to be right.
Chat GPT.
What is?
Okay.
So we knocked out the first one.
All right.
So this next section is another acronym.
We're going to call it CTEM or C-T-E-M, but I like CTEM better.
It's continuous threat exposure management. So now, back to the dinner table talk, right? Another one you should be talking about with your family. A systemic approach to continuously adjust cybersecurity priorities.
Now.
Yeah,
I guess.
So we're basically saying that
do we need,
I think this is where I sort of got started to get lost in some of this
stuff.
I think what they're saying is they need to have these things in place to
help them adjust their priorities and their cybersecurity thing related to the
AI things that they're doing?
Or are they saying use AI to continually...
I don't think this is about AI at all.
This is just the top 10 strategic technology trends, right?
And this is number two.
Continuous threat exposure management is the
number two trend. So this is unrelated to the first one. So this is part of, so both of these
trends fall into the protecting your investment category. Good call. All right. See, good thing
we talked here. It's good dinner talk.
So you should know this topic very well.
I've done this a little bit. Basically, we've got a little graph here with a wheel, and, like, part of the wheel is about diagnosing problems in your process, and the other half is, like, how you can kind of take action to improve those processes, and then you just kind of keep going around in circles. So you're constantly iterating on your, like, security posture.
Yep. Finding, prioritizing, validating, fixing, and doing it again, over and over and over. So here's their prediction on this one, which is kind of interesting: by 2026, companies invested in CTEM will reduce security breaches by two-thirds.
So I actually don't know how I feel about that one.
I mean, I'm sure they come up with these metrics somehow, but I have a feeling that the attacks are just getting more and more prevalent.
So reducing by two thirds from when?
Like there's going to be way more attacks in 2026 than there are today.
It would be my guess.
I'm less concerned about AI's explainability and more concerned about
Gartner's at this point.
Let's go on here.
How do we get that number?
Yeah,
I don't,
I don't really know what that means.
Now I will say on the flip side,
throw out the two thirds and whatever they said there.
I do think by putting these processes and technologies in place to help
identify this, it will make things better, right?
Like for sure make things better.
I just don't know that you can put a number on it.
I mean, we tried to do that years ago and we couldn't really.
Yeah, it was pretty tough.
And I, you know, I understand like with security,
you don't get to just say like, Oh, security, I did that two years ago.
We're good.
They're done. Cross it ago. We're good there.
Done.
Cross it off.
We never have to go back to it if you're a large enough organization.
So I kind of understand this.
You have to have this evolving process.
I don't know.
I think that's the point, though, that Gartner's trying to make here.
And that's why the diagram is circular because it's a never ending process. And so if, if you implement that in 24, then by the time you get to 26, your likelihood of exposure to problems is probably less because
by then you've got a couple of years under your belt. If you weren't already doing anything right, then, uh, you know, by then you're supposed to be on top of your game as far as, like, mitigating your risks ahead of time, or, as things happen, you're seeing things and jumping on it to make sure it doesn't happen again, kind of thing.
So I feel like in this realm, having the processes in place and the tools in
place to help you identify problems is a massive step forward for most companies, right? Being
able to identify when something happens because things are going to happen. I don't know that
you're going to reduce the breaches because we have new software coming out that gets incorporated every day.
Right.
And so I think it's it's it's nearly an impossible task, which kind of stinks.
I agree.
I'm sorry.
No, go ahead.
I mean, that's that's all I was going to say is I think it's just to to try and put a number to it, I think is unfair.
You know?
Yeah, I agree that the two thirds number is definitely questionable.
I just think that the spirit of what they're trying to get at though, is that
if you, if you were doing this endless loop of improving your security posture and you're
monitoring your risks and mitigating those risks as you find them, then, you know, there's probably
some kind of statistic out there that
say like, okay, you're statistically less likely to be involved in anything serious because you are,
you know, trying to stay on top of it. And I think that's the spirit of what they get,
but I totally get, you know, and agree with you guys that like,
but that number seems weird, right? Like where did you get to it?
And here's one thing that the three of us have talked about and noodled on over, God, probably seven, eight years: you'd like to think that if you reduced all your vulnerabilities down to as few as possible... it doesn't necessarily equate to some monetary value out there, right? It just takes one, right? Like, if you closed out every vulnerability on the planet, but there was one there and somebody was able to exploit it in a way that you didn't know of because it's a zero day, right? It's not like, because you knocked out 10,000 other ones and there was only one left, that its value was only a buck. I mean, it could be a $10 million, you know, exploit. Like, you don't know. So it's truly this ever-evolving circle where you just have to do the best you can and put tools in place to help.
All right. So the next thing up that they mentioned is it aligns exposure assessment
with specific projects or critical threat vectors. Go ahead. So I just wanted to, like, I found this one site. I was
looking for this in the background related to the number of ransomware attempts over the years.
Actually, for 2022, it was down. It was the first year the trend went down. So it went from... this is from 2017 to 2022, and these are in millions. So 183.6 million in '17, to 206.4 in '18, down to 187 million in '19, then in 2020 up to 304 (I'm going to not deal with the fraction parts). Then in '21, gigantic, like more than double, right? The previous year was 304, and it jumps to 623 million ransomware attacks in 2021, then down in '22 to 493. So still greater than 2020, but less than the previous year.
Wow.
So yeah, I mean, it's really hard. I'll include this link so you can see.
Yeah, good stuff. All right, so next one. Uh, one of the things that they mentioned is, for this, the CTEM stuff, both patchable and
unpatchable exposures should be addressed. Um, that's interesting. I'm sure depending on the
unpatchable ones, I'm sure there are ways that you do things. You take them offline, you do
whatever, right? Like I'm sure there are ways around some of that stuff. That's a big one.
The business can test the effectiveness of the security controls against the attacker's view. So I'm assuming that means that they can see basically what the attacker is seeing and see if their controls are in place. I've heard... Outlaw shared Darknet Diaries a long, long time ago, and they talk a lot about red teams that will come in and do
like a breach or penetration type things into systems, into buildings, into all kinds of stuff.
And this would be a way to be able to measure the effectiveness of what you've done.
And then this, this was one sentence. I just had to copy it because I couldn't. Like, I read it 20 times.
Expected outcomes from tactical and technical response are shifted to evidence-based strategy or to security optimization supported by improved cross-team mobilization.
Okay, so instead of, like, tactical and technical responses to incidents,
obviously, what we're saying is, like, is like there's like a war room going on
when something bad happens.
And what we're saying is that instead of them saying like,
here's what we recommend because this is what we think is the best based on our feelings in this situation, it's going to be evidence-based.
So they'll be able to say like, you know, here's this system is more valuable.
We should just, you know, protect it by shutting down this aspect or something.
But I don't know.
It's really hard.
But to me, it means they're making some sort of decision
based on metrics where there used to be one
based on kind of gut feel.
Yeah.
I mean, it's kind of like the way I'm interpreting that
is that because you have this software in use, right?
Imagine if the next Heartbleed comes out, right?
And you're like, oh, this definitely needs to be my absolute focus. that might be impacted by that are like behind multiple layers of,
of,
you know,
firewalls or,
or,
you know,
restriction,
restricted networks or connectivity or whatever.
Right.
And the ones that are actually exposed might have something that isn't
heartbleed,
but because it's on the front line,
that's the one you should really be focusing on.
Right.
And the way I'm interpreting this is they're saying that because you have
the evidence and you can see, like, well, our frontline servers are the ones that have this other bug that is a lower rating.
And that now that acronym slips my mind for how those are rated.
That's severity.
Yeah.
But there's the actual, they get the number.
CVE?
There you go.
Because their CVE rating is lower, you might think,
oh, well, let's focus on these other ones that are behind the curtains
because they have the more extreme thing.
And this note's trying to say, no, no, no.
You now have the evidence in front of you.
You can see which ones are actually your biggest priority.
Okay.
What they said.
It's that whole idea of like the weakest link is your problem, right?
Your chain's only as strong as the weakest link, right?
And if the weakest link happens to be at the front line, even though that, that severity issue might not seem that
important compared to another one, it might actually be your bigger critical risk.
Yeah, I think it's the way they worded it. Just, it just messed my mind up because it's definitely
talking about optimizations and using evidence. And then it's just, I don't know, man, it all
got lost, but I think you guys explained it well. All right. And so they say, how do you get started with this stuff? Um,
so you need to integrate CTEM with risk awareness and management programs. Makes sense. Improve the
prioritization of finding vulnerabilities through validation techniques. So I think this is an
important one, right? This goes back to that whole, that cycle, that circle that Joe was talking about earlier is you need to follow a process so that you can repeat it, right? If
you're just constantly willy nilly looking through logs or things that come flying in and you're not
following some sort of method, then it's going to be really hard to improve how you do these things.
Embrace cybersecurity validation technologies. Now,
I wasn't really sure what validation technologies were, so I went and looked it up. So there's a
URL here that I heard somebody say Earl on another podcast the other day, almost vomited. I seriously
almost threw up in my mouth. So I took one of the quotes off the very first line, I think.
Security validation is a process or a technology that validates assumptions made about the actual security posture of a given environment, structure, or infrastructure.
So basically, you assume that you're safe from X, introduce X, and see if it actually works. And, and I think that's basically all this is saying.
So you need to have tools in place to be able to do these validations or,
or processes,
right.
It may not even be a tool.
It might be a process for finding and handling those things.
You might say,
this service is not accessible outside of the cluster.
So as long as we secure the cluster,
we don't have to worry about it.
And then someone does a curl to the service name and it gets through like,
Oh crap.
Yeah.
We failed that one.
Right.
Yeah.
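A tiny sketch of that kind of validation check, since it really can be this simple: take the assumption ("this service is not reachable from outside the cluster") and actually try it, just like the curl example above. The service URL here is a made-up placeholder.

```bash
# Hypothetical sketch of a security validation check: run this from OUTSIDE the
# cluster and prove the "internal only" assumption by attempting the call.
INTERNAL_ONLY_URL="http://my-service.my-namespace.svc.cluster.local:8080/healthz"

if curl --silent --max-time 5 --fail "$INTERNAL_ONLY_URL" > /dev/null 2>&1; then
  echo "FAIL: internal-only service answered from outside the cluster"
  exit 1
else
  echo "PASS: service was not reachable; assumption holds"
fi
```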
So I think those are all like almost everything that we've heard here is good
stuff.
And it's,
and it sounds like common sense,
a lot of it.
But if you,
if you don't live in this space all the time,
right?
Like if your product is,
I don't know, creating a WordPress plugin or something for somebody, you're not thinking
about this and, and maybe you should be right. Like maybe these should be things that you should
have on your radar. All right. Well then with that, uh, you know, we'll have some links to the things we like.
We're not done. What you doing, man? Get us out of here.
I just had to get you. All right. So, hey, you know what?
If you two would like to have me just totally bastardize the pronunciation of your name, hit us up.
You could leave us a review.
We would greatly appreciate it.
What was the current count Joe?
Like anything above two stars,
one star,
three stars.
No,
we'll take anything up to three.
And I mean,
if you got a four in your,
in your heart,
we'll take it five.
You know,
it's Christmas.
It's Christmas.
If you feel like,
if you feel like giving us some stars
and reviews we greatly we do really appreciate it like okay we'll take five star reviews you know
like every every i don't know that's all you got we really do appreciate reading the things that
people have said over the years though um i mean we've really gotten some heartfelt things and it
really is like inspiration to keep going so we do appreciate it uh it's not wasted effort on your part and you know hey it's christmas uh give a star that's
right or or two stars five so five let's see yeah by the way um so uh anything less than five
actually does have a noticeable effect on the uh. So you could actually just give it a five.
It would be great.
Say what you want.
It would be as mean as you want in the comment box.
So maybe a five.
You know what?
And I forgot.
I don't even remember where I read it now.
So this is episode 224.
So according to Tutuco's Trademark Rules of Engagement, JZ, you are up first.
And Tutuco, of Tutuco's trademark rules of engagement, wrote in, and I don't remember
where I wrote it in, where I read it in now, but, uh, apparently he was in the car with his kid
listening to it. And I forget, I think it was his son was excited and then his wife was, like, not impressed. So, Mrs. Tatko, this is to
you: if you are listening, we do appreciate that you let, uh, your husband listen. So yeah, thank you.
yes yes uh all right and also by the way i mean he's a little bit famous you know i don't know
if you know who who you're married to he's got his own uh trademark rules of engagement here so
you know it's been going strong for a hundred episodes or more,
maybe kind of a big deal.
So,
you know,
all right.
So Jay-Z,
you are up first and your topics are,
or your categories are Ike and Patton,
Easy Money,
Hop on the Homophone,
Formidable Fantasy, Transportation,
or Will with one L or two, Willem, or William.
Okay. It's either a homophone, or... what do you think Formidable Fantasy is? Mm-hmm.
Jeez.
Let's go.
A homophone,
homophone for four.
Okay.
Yeah.
Yeah.
Yeah.
Somewhat average or about adequate and the price an airline charges for a ticket.
Fair.
That is correct.
What is fair?
I thought you didn't have it, man.
I was like, no.
I didn't.
But yeah, I play Wordle, you know.
All right.
See, it pays off
all those video games you've been playing all your life we're gonna make it work
Yeah. All right, so Alan, your topics are: Thought You'd Like to Know, It's Also a Greek Letter, From a D to an F (the responses will start with D and end in F).
Not your grades, by the way.
How many words are there?
Composers are quirky.
Going underground.
Or lastly, all about Disney.
Oh, man. Let do disney for four oh man you're playing right into jay-z's game now i know i'm scared yeah i don't get this it's over i don't know much about
Disney. In 2006, Disney acquired this computer animation leader that it had first collaborated with on Toy Story.
I will hurt you if you don't get this one right.
What is Pixar?
That is correct.
We are in a tie game, gentlemen.
Yes, sir.
All right.
And also, you two.
I was talking to the people in the room.
There's gentlemen somewhere around here.
All right.
So third round, Joe,
the categories are Getting Close to Something, Notable Names,
There Will Be Blood, Lakes and Rivers, Singing in the Rain.
And then the last category is where they have a celebrity come in.
So taking a gamble with James name,
I can't say
Halls,
Halls,
Halls,
Halls,
Halls.
Oh,
I'm geez.
He is.
I don't even remember any of these because I kept thinking,
not that one,
not that one.
Right.
Well,
you know,
the notable names made me think of the famous Saturday Night Live skit.
And one of the categories was always potent potables.
But the topics, again, were getting close to something.
Notable names.
There will be blood, lakes and rivers, singing in the rain, or taking a gamble with James
Halshauers. How's the Howers?
Is it Huffstetler?
No,
it's,
I said it right.
I don't know why.
Hulls,
how are holes?
How are... H-O-L-Z-H-A-U-E-R.
Yeah.
Hulls.
How are,
that's pretty good.
That's what I said.
See,
moving on up.
You know,
I wasn't gonna say
notable names but not knowing who the heck that is makes me think that uh maybe i should just go
with getting close to something for two okay so wait before you do it though james holzhauer only
because i've seen his picture this was the dude who was on jeopardy who would go true daily double
every time so he was the one that was a professional gambler that just destroyed the
records for the most amount of money won on the show because he was just
constantly cutthroat going through the game.
Huh?
Yeah.
He was fun to watch on that show.
Okay.
Third highest earning American game show contestant of all time.
Wow.
Wow.
All right. Uh, you said getting contestant of all time. Wow. Wow. All right.
Uh,
you said getting close to something for two.
Yeah.
Almost there.
You're in this,
as they say,
like Chicago's Wrigley or Boston's Fenway.
What?
What's the category getting close to something all right and then read the thing again
almost there, you're in this, as they say, like Chicago's Wrigley or Boston's Fenway
they're in this. They're both baseball fields or something.
Stadiums.
Parks.
Fenway Park.
Fenway Diagram.
What you got?
What you got?
I mean, geez.
Fenway.
Gosh.
Wow.
What is the stadium? for the steel what is the game
oh gosh no you guys i mean joe was closer because he was at least in the ballpark
oh come on oh gosh that's so oh i wish you didn't tell me the answer to that.
I hate myself.
I hate it.
All right.
So we go into the final Jeopardy.
Tie game four to four.
Now we're doing the you got to like bet some points, right?
So some points are on the line, right?
Okay.
What?
I was going to bet.
What do you what are you betting?
Oh, you're putting it all on the line.
Jay-Z, what are you putting on the line?
All on the line.
Oh, okay.
Somebody's cheating here.
Somebody hasn't gotten that one yet.
Cheater.
I want it.
The category is artist.
Great.
Here you go.
Here's your answer.
Despite how he's known,
he was probably actually born in,
in,
oh man,
why would the proper nouns?
He was born in... somebody's going to be offended.
Anchiano, near Florence.
okay i i'm just like i'm gonna write the the name of the he was born in part and then you
guys would be like michael that's not even close to how you would pronounce that place
in you know there you go oh Oh, okay. I got it.
Does that help you?
Yeah, I got it.
And just in case I didn't pronounce the other place correctly, either that's near it.
How would you guys pronounce those two places?
Florence.
Oh, come on.
Florence.
I don't know the other one.
Hey, what was the question?
I don't even remember. Despite how he's known, he was probably actually born in Anchiano near
Florence.
What's the pronunciation key for this?
It was Spartacus.
Somebody associated with the city in Italy, right? There's somebody who's like,
when you think of italy i think of leonardo but also i'm very firmly rooted to ninja turtles so
you know but there's also somebody else um what was the guy's name was rembrandt i'm going with rembrandt so only one i know that's not a turtle so it says that the ch
is pronounced like a k in english so maybe it's in kino and keanu yeah maybe in keanu yeah so
all i think of is you know the ch is uh the h is silent in hanukkah isn't that or the c is silent in hanukkah
no the ch is silent in hanukkah what's the adam sandler song now i can't even get it right
and I've had time to Google the answer. So, Picasso. Okay, so, uh, you both lose your points. Barely.
Barely. So let me do this. I had to write...
Record book.
Was it Rembrandt?
Again, he was in the ballpark.
It was Leonardo da Vinci.
Oh, look at that.
Can you believe that?
Can you believe that?
You should have stuck with a Teenage Mutant Ninja Turtles.
True Renaissance fan.
Dang it.
So, there you go.
I mean, you know.
I lost again.
Well, I guess technically I didn't. We both lost.
Yeah, you tied.
You both lost.
We both won.
We both won.
We both won.
There we go.
Glass half full.
I like it.
All right.
So rounding out the last one of this section here, sustainable technology framework.
All right.
So this is all about just being a better company for the world, more or less. So: solutions for enabling social, environmental,
and governance outcomes for long-term ecological balance and human rights. So this one's kind
of interesting. Their prediction is, by 2027, all CIOs, for those that don't know what that is, that's
chief information officer, will have compensation that's linked to their sustainable technology impact.
Now that'll be interesting.
Wait,
it's 25% of CIOs.
Let's say 25.
What did I say?
You said all,
I probably did say all.
Well,
did I say 25? By 2027,
25% of CIOs will have compensation linked to their sustainable
technology impact.
Well,
that's a good thing.
You,
you were double-checking that there.
And I carry the one, right?
25%.
So some CIOs, 25%.
Because that's a big difference.
Yeah, totally.
I find that interesting.
I can totally see that with what is it?
I think there are companies out there pushing a lot of initiatives like this.
Right.
And they'll they'll provide funding for various companies depending on those type of of things that they're doing.
So I could totally see that being legit.
They say, hey, why is it trending?
Environmental technologies can help deal with risk in the natural world.
OK, that makes sense. I mean, we've talked about this briefly on the show, but you look at things like
S3 or Google Cloud Storage and all that, those hard drives are going somewhere. And hard drives
put off a lot of heat. Yeah. So I guarantee you there's an impact in whatever region of the world these data centers are going in.
And even, did you guys ever see when they started trying to go like hardcore on solar power?
People thought, oh, man, I've got the most brilliant idea in the world.
I'll just put a bunch of solar panels up in the desert.
It actually created little climate pockets that were really bad for those
various regions.
So we think we have these great ideas and it's like,
Oh yeah,
maybe that didn't work out too well.
No such thing as free lunch.
Nope.
So that that's,
I could totally see this being a big one.
Um,
you know,
depending on what type of industry your,
your particular enterprise is in.
Social technologies help with human rights. I can totally see that being a thing.
Governance technology, strengthened business conduct. You know, I would venture to say that this has played out big time over decades, right? Between minorities and male versus female and all kinds of things,
right? These things exist to help push things forward in a positive way.
Sustainable technologies provide insights for improving overall performance. So that one's
interesting. I guess they're talking about the overall performance of how they're doing
ecologically, maybe? I don't know. That one wasn't clear to me.
I could totally see that doing something more ecologically friendly could hurt you financially.
So I don't know. I don't really know what the performance is here. Not saying that you shouldn't
do it. I'm just saying that I could totally see how it could have a negative impact on one side of the business.
And then how to get started.
Some of these kind of just make sense.
Select technologies that help drive sustainability, different types of energy, whatever, right?
Like you could think of several things out there.
Have an ethics board that's involved when developing the roadmap.
That kind of makes sense.
If they're not involved in some of the decision making, how are they going to be involved? How are you going
to make these decisions? And then now this one, this is very salesy looking to me. Um, this one
jumped out. I just had to basically copy it down, use the Gartner hype cycle for sustainability,
2023 guide that will help you identify established versus leading edge technologies.
So apparently they have a guide out there that will help you pick some of the things that can lead you in the right direction.
I don't know where this guide exists, but I'm sure if we Googled it, we can find it.
Well, I thought the hype cycle was the drawing above, right? The circle. Is that not
what they're talking about? I don't think so. Did they name that thing? I guess not, I don't think.
They call it the sustainable technology framework, so I guess. Yeah. Yeah, no, that's not the hype cycle.
i wonder if there's another place in here maybe they have a link to it. It's only mentioned one time in the document. So let's see.
Gartner.
Gartner hype cycle.
Hey, it shows up in Google.
Oh.
They have the five phases in the hype cycle.
Dude, if you got to Google around to find things.
That's the way the Gartner stuff is, though, right?
Now, why did they just link it?
Whoa, wait a second.
I'm in Bing because I'm in Edge. Oh, no.
So check this, man.
They've got Copilot built in.
When did this happen?
Nice.
I use it for... Google has generative
AI up top that you can click. I use it a lot. Well, I'll include a link to the Gartner hype cycle
for environmental sustainability. Okay, fine, but I'm still looking for anything
that looks like a cycle on there yeah yeah so what the co-pilot said on bing was the
2023 gartner hype cycle is a report that identifies 25 emerging technologies that are expected to have
a significant impact on business and society over the next two to ten years all right there we go
you know unrelated here we go but also kind of interesting so there was a thing i think it was
on like cnet or something like that and they were talking about like the problems that retailers
like amazon for example like the big ones with the return problems that they have to deal with
and like what happens to all that product and the amount, the crazy amount of landfills that,
you know,
get filled with just returns from Amazon because like the cost of,
of some of the,
those returns.
That's why like if you ever have a return from Amazon,
sometimes it's just like,
no,
just keep it,
just keep it.
Right.
Because,
because the cost of them trying to get it back to the,
uh,
the seller is sometimes like,
and then what to do with it.
Yeah.
So,
so,
and then,
and then there was also sellers that would complain about how,
like sometimes their returns would get stuck in a pallet somewhere in a
warehouse that they can't get to yet.
And it's not efficient yet for Amazon to get
it to. So it could be months before you'd even get it. And so like, you know, any holiday season
could be out. And one real retailer was talking about how like, yeah, so this, this one particular
product ended up having this kind of a return rate. So it wasn't a bad product, but we decided
to discontinue it because their return rate was too high. Oh, wow. Interesting. I mean, I know along that line, that's kind of interesting. I saw
businesses pop up all over the place where people would basically just rent out like warehouse space
and they would buy these return pallets from places like Amazon, like just, Hey, I'll give
you $200 for the pallet. They ship it over and then basically these companies would just set up things, put price tags on some stuff, and then other things
they'd just be like hey five dollars for anything in this bin right ten dollars for anything in that
bin so those those businesses popped up all over the place which i thought was interesting because
i hate things like that because i hate going into a place and digging through a box. Like I just, I have no desire to do it at all.
In,
in the,
in this CNET discussion,
cause this was all coming from like this sustainability kind of discussion,
right?
Like that's why I brought it up.
But coincidentally there was a lady in the thing that did exactly what you
were talking about.
And I say coincidentally because her last name was Alan.
Oh yeah.
And,
and good person. maybe questionable yeah yeah so uh she she thought that she was buying like a box
of like returned things like a mystery box of returned things and instead she was delivered
a pallet, yeah, of returned things that were a shrink-wrapped
pallet yeah like seriously they're huge it's it's pretty interesting i don't i don't know man
i've often wondered what happens with all the return stuff and i mean there's evidence that
that's exactly what it is they just get piled up on pallets all over the place and you can go on
eBay. You can actually search for, like, um, uh, lots or, or pallets or whatever.
And you can, you can pay for the things like, you know, you'll see this thing that's wrapped up and
it's like, has a bunch of electronics on it and you just bid, you get it and you might get some
valuable, you might get a ton of junk. Who knows? But yeah, I'll try to, I'll try to find that and, uh, include that.
Cool.
All right.
So I,
I'll,
I need to try and find a link to this article.
I think I had to download it by plugging in my email or somebody's email.
So I'll have to see if I can find a link to this thing that we used here so
that we can share it in our resources we like for here.
But now it is that time of the show,
the favorite time of the show.
It's the tip of the week.
Yeah.
All right.
Do it.
So,
uh,
are y'all familiar with lo-fi girl?
No.
Kind of a cultural phenomenon.
Okay.
Well,
uh,
there's a YouTube channel called Lofi Girl that plays lo-fi,
you know,
low fidelity music,
which is probably,
you know,
offensive to your,
uh, sensibilities
alan but uh it basically just means kind of old retro vibey chill chill step hip-hop kind of low
key often instrumental type beats and just put it on and it's just just chill youtube channel that
has like a little animation of a girl studying and she's got a cat it's been around for years they've kind of been evolving the like the animation sometimes she like goes
places or moves or has the window open or whatever and so it's kind of like a little community that's
growing up around like kind of watching you know this thing and they just put on the background
and just it's just cool 13.7 million subscribers yeah she's been around for a minute yeah so people just use it to study
they'll just put it on this YouTube video
it just kind of streams like music
you know all the time and there's a little
chat room and it's just nice
well recently they added a
synthwave boy and I am a
synthwave fan and it's just kind of fun
so I've watched a couple of those
streams where you know just kind of put it on the background
it's kind of fun to have this thing that's like it's mostly doing nothing.
It's just you're like looking at a screen of like a little cartoon person studying or using a computer or something.
But sometimes they'll like, you know, brother knows or something or like look out the window.
There's just something kind of cool about it feels a little bit more alive than listening to like a Spotify or something.
So I figured I'd throw that link out there in case you're not familiar and
you give a shot,
whatever.
It's fun,
dude.
This does not hurt my sensibilities.
I have it on right now.
This is like perfect software development.
Yeah.
I mean,
if you like hip hop,
hip hop type beats,
you know,
it's,
it's kind of funny.
They call it low fidelity,
but like,
it sounds pretty good.
Doesn't it?
Yeah.
So,
so for what it's worth,
like I,
if like we, we all have our certain taste of music.
We like, I like almost everything except for death metal.
I cannot, I can't tolerate it.
Hate death metal.
I just do not like people screaming at the top of their lungs, smashing on guitars.
But outside of that, like I listen to just about anything that, that was actually pretty
good.
I actually really do like chill step a lot because it's just no mind right like it's just good
background music for doing tasks yeah that's good stuff man i i didn't realize when i clicked on it
that it was streaming that's pretty awesome like it's live yeah i've never not seen it streaming i
don't know if it's 24 7 or what but every time i've ever gone there it's been going that's so cool all right cool i may be adding that to my my
background music this actually works out well for me so monoprice had a sale which i love monoprice
if you've never been to Monoprice you should totally go there and waste your money. Um, they had, is it a waste? Uh, it's not. So, so check this. They have
these MM-5 speakers that are typically 400 bucks. They're like bookshelf speakers. They're
not, they're not pretty. They're just like little black speakers. They almost look like monitors
that you buy, right? Like the Kali Audios or whatever. They're not, they're not special
looking, but they have all kinds of inputs on them. They are Bluetooth. They have USB,
you know, digital inputs, coaxial, just regular RCA inputs. They're typically 400 bucks a pair.
They went on sale. I got an email and said, Hey, you can take another 30% off the half off already
there. So I got the things for like 140 bucks. Dude, they're absolutely fantastic. So I've been sitting there during the day,
just putting on every type of music I can to sort of just get a feel for them. And
they, they really, for $140, they're, they're ridiculous. They're just super good. I don't
know that I would have been as in love at 400, 200. Certainly. I think they were still a good
deal, but, um, at any rate. Yeah. So
this, uh, I'll put that on tomorrow and listen. Uh, all right. So this, this next tip I'm going
to give, and I don't usually plug other podcasts, although, um, they have to be really good, right?
Like Darknet Diaries, I always thought that was fantastic. Like, I still love that to this day.
And, and
Jamie Taylor's, uh, he just rebranded his. It's no longer The .NET Core Podcast. This is the... .NET, um,
I wanted to call it, uh, .NET Core Podcast. He just renamed it. So I wanted to give him a shout
out on this: it's now The Modern .NET Show. So, um, he rebranded, he's also going to be doing some things
live. So go check him out. I
think I'm a little bit late for, for one of the announcements he was doing, but at any rate,
go check him out. Jamie's an awesome guy. He, he just does great work. At any rate,
onto the actual tip that I was going to give here. Um, we're about to start a series of podcasts on
data streaming, and that doesn't mean like videos and stuff.
That's actually data coming in real time, processing data real time, doing with Apache Flink, Apache Kafka.
We'll probably be talking about Pinot at some point, whatever.
At any rate, our friend Bobby mentioned the Real-Time Analytics Podcast to me, from Tim Berglund.
I think that's his name, Berglund.
You speak to him?
Yeah, he worked with Confluent for a while on Kafka.
He moved over to StarTree, and now he's working with a team that's working primarily on Pinot, Apache Pinot, and doing things.
And this podcast is absolutely fantastic. If you're into streaming analytics,
streaming data platforms, that kind of thing. I have one caveat though, man, it drives me
absolutely insane. And it's something that we've been sticklers about on this show.
And we've stuck to for 224 episodes for a good reason. His content is absolutely fantastic. Some of the people that he has on his guests,
he has on, on another end and their audio is terrible. And it's one of those things to where
you can't listen in a car because you can't understand. It's just, it gets lost too much.
And you almost have to be listening
in a quiet environment with headphones on, giving it your full attention to be able to understand
what's going on. And man, it's unfortunate because they're like, one of the episodes was on
Pinot internals. I think it was episode six or seven. I can't remember, but it was on Apache Pinot internals and how they did the multi-stage query processing. And it was so frustrating
because I was so into the content, but it was so hard to understand because the audio quality on
the other end just wasn't there that it, I don't know. I wish people would solve that problem before they,
before they do those podcasts. You know what I mean? Like, dude, these people even work at the
same company. So it's like, man, go sit down in a room with them, right? Figure this out. Don't do
it over a call. If you can't guarantee that it's going to be easy to listen to outside of that caveat.
Love the content.
Absolutely love the content.
So, you know, if that's your thing, go check it out.
I have a link to I didn't even know.
I've never heard of Buzzsprout, but you go to this link and they've got links to Spotify, iTunes, you know, all kinds of things. So, um, definitely check that out. And especially if you're into this thing,
when we start getting into the streaming, uh,
like the Apache flink and that kind of stuff, man, there,
there's so much there and understanding some of the concepts around that will,
will truly help in, in your, you know, guided listening with us too.
So that's it.
That's high praise for Matt
and not for me.
I couldn't resist.
Alright, so
Boomer Hour, sorry.
Yeah, thanks.
Can't even give a tip of the week without getting into
boomer hour i see i get so frustrated yeah i can't help it boy can you imagine alan in like
10 20 30 more years like dude by then look here's the reality by then oh gosh all my kids will be out of my house i'll probably be the most peaceful
calm dude ever right like it will no longer be as long as people have good audio then yeah fine
dude better by then come on hold up hold up let's be real for one second
you push me down this way tim come on I'm so sorry. Look, man,
if you're going to start a podcast,
what do people do to that thing?
They listen to it.
Right? They listen to it. Make it good.
Maybe the content
is not good. That's subjective.
But you can absolutely
objectively
identify good quality.
I'm sorry.
All right,
I'm done.
All right.
Sorry for my tip of the week.
It's all a,
it's takes fault for making us care so much.
Yeah.
Great.
And also thank you.
Yes.
And thank you.
All right.
So in all serious,
my tip of the week.
So you ever had runtime issues with one of your Docker containers
and you're like, man, I just don't get it.
What's going on here, right?
Mine all run fine.
Yeah.
Well, I was talking to Jay-Z.
Oh, okay.
Right.
Mine never run fine.
Yeah.
Okay.
Thank you.
And you're like, I don't understand.
It's built fine.
Uh,
you know,
why,
why is this thing not running fine? Right.
So,
you know,
just a little,
uh,
word of wise,
like in that kind of situation,
like we were having this weird situation in an environment and we're like
trying to poke around at the environment and see like, Hey, why are we getting this weird result?
Like this isn't making sense. And I forget exactly what the weird result was, but you know, like,
um, that, that part is irrelevant. The point is, is that like, this was a Docker image that was
built, it was deployed and we weren't getting the results that we wanted while it was running and we couldn't figure out like what was going on. And so then like, I don't know, it just occurred to me. I was
like, Hey, let me just do a Docker run statement on this thing. And I've talked in the past,
I believe, correct me if I'm wrong. No, I'm pretty sure about like overriding the entry point
using dash dash entrypoint. So maybe your Docker container is set
to, like, automatically run some fancy script that you ran or, some, you know, execute some jar or
whatever, but, you know, if you want to debug it, you could override that by doing dash dash
entrypoint equals shell or, eh, equals sh, right? And now you can go poke around inside of that thing, right?
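A minimal sketch of what that looks like (the image name here is just a placeholder, not a real one from the episode):

    # Start the container with a plain shell instead of its normal entrypoint so you can look around.
    docker run --rm -it --entrypoint sh my-app-image:latest

    # Once you're inside, inspect what the build actually produced, for example:
    #   ls -l /usr/local/bin
    #   file /usr/local/bin/some-downloaded-binary
    #   head -c 200 /usr/local/bin/some-downloaded-binary

If the "binary" turns out to actually be an HTML error page, that's obvious in about two seconds once you can run file or head on it.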
And this was our lesson learned,
was that the Docker image built fine.
But what happened was we had a curl step
in one of the run statements
that would go and download something that we expected to be able to run later as a binary, right?
And what we found out was that when we did the Docker run
and poked around inside of this image,
that that binary that we thought we had
was actually an error message.
And we had downloaded the, downloaded the message as a file, made it executable. Of course,
that's not going to work. Right. And, and, uh, you know, so the point there was the, the takeaway
ended up being for us that, um, you know, it's a simple fix because we added to
our curl command a dash dash fail command because you would assume that curl would return
back a non-success code if whatever curl command, you know, URL you gave it would return back any kind of a bad, you know,
non-200 status code, right? And that is not the case. You could get back a 400 or 500 and curl will
just happily be like, nope, I did that successfully. And, you know, their end didn't work, but my
part did. Right. And that's what was happening in our Docker step.
Adding the dash dash fail would force the breakage of our Docker image so that we would then know, oh, there's this problem.
We didn't actually build the Docker image that we wanted to build.
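For illustration, the difference looks roughly like this (the URL and file name are made-up placeholders, not the real ones from their build):

    # Without --fail, a 404 or 500 response body gets saved as the file and curl still exits 0,
    # so a Dockerfile RUN step or a shell script happily keeps going.
    curl --output /usr/local/bin/tool https://example.com/releases/tool

    # With --fail, an HTTP error makes curl exit non-zero, so the RUN step (or a script using set -e) breaks loudly.
    curl --fail --location --output /usr/local/bin/tool https://example.com/releases/tool && chmod +x /usr/local/bin/tool

Breaking the build at that point is the whole goal, because the alternative is shipping an image whose "binary" is really an error page.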
So I've got a link to the man page for curl, specifically to the dash dash fail option,
if you didn't already know it. But yeah, you know, lesson learned is, like, you can think that you're
building your docker image fine and then you go poking around and like yeah you know because i
think we've talked about this before like you know we heavily uh use gke and say like you're
poking around in like a logs explorer and, and your Google project.
And you're like,
yeah,
but why am I seeing these weird log messages?
Like,
that doesn't make sense.
What's the error with that script?
And of course,
like,
you know,
because you're just,
you know,
it's an,
it's an error that you're saving out to a file.
Like you're going to have like the,
the shell interpreter is going to be like,
Oh,
uh,
you know,
error at line blah in the shell script.
And of course there's going to be because, you know.
So, yeah.
Isn't it hot garbage that it's not fail by default?
Like, seriously?
Yeah.
In the curl world?
That is so weird.
I would really love to know.
I would really love.
Yeah, exactly.
Exactly.
But I would really love to know, like, what was their rationale behind saying that, like, OK, these 400 level status codes, these 500 level status codes, meh, they're fine.
It's just randomly, not randomly, it's forcefully swallowing errors by default, which is absolutely insane. But I will
also point out one other thing that he said that he's done here that I actually like is they added
dash dash fail. There's a shortcut for that, which is dash f, but we've talked about this in the past.
If you're writing shell scripts that other people are going to have to manage or look at at some point using the full name or verb of what's happening is actually super helpful,
right?
Because if somebody else looked at curl dash f,
what does that mean?
Is that force? Is that,
what is that?
Right.
And putting dash dash fail in there is actually helpful for people that come
behind you.
So,
you know,
the unfortunate thing though about that is that the anchor on the man page for that
is dash f, not dash dash fail. Well, you know, it's funny, behind the scenes I was actually looking
at it going man what a garbage page it's well formatted but you don't even know there's an
anchor there you have to inspect the thing to even know that there's an anchor behind the scenes
that's exactly what i did alan i know yeah that is exactly what i did i'm like don't tell that there's an anchor behind the scenes. That's exactly what I did, Alan. I know. Yeah, that is exactly what I did.
I'm like, don't tell me there's not an anchor there.
I know there is.
Let me look at your code.
Yeah.
And it was episode, I want to say 221, where, yeah, 221.
So just, you know, a couple episodes back
where I talked about overriding the, the user
and the entrypoint, so dash dash user equals root or whatever you want, and then dash dash
entrypoint equals sh, and that way you can get around, uh, you know, enough to, like, poke around
because it's you know you're running this image locally so who cares like fine run as root like
it doesn't matter like you want to be able you want to be able to poke around into the different places to see what's
going on.
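Putting those two flags together, a rough sketch of that local debug run (again with a placeholder image name) is just:

    # Run the image locally as root with a plain shell, purely for poking around; don't do this in production.
    docker run --rm -it --user root --entrypoint sh my-app-image:latest

Since it's only running on your machine for inspection, running as root is a reasonable trade-off for being able to see everything inside the image.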
So yeah,
good stuff.
When do you think curl was first released?
Oh,
1978.
No.
Oh,
nine.
All the good stuff came out in the seventies.
There's a chance to redeem yourself.
Alan.
1992.
Uh, close. 1992. Close.
1996.
Oh, man.
That's still been a minute.
Yeah.
I was so close.
Yeah.
76.
That's awesome.
Excellent.
Yeah.
So I'll cut him some slack.
Yeah.
Yeah.
All right.
So, hey, um,
we,
we,
because we skip all the good stuff up top now.
All right.
Later.
If you're not part of our slack community,
go check us out.
Coding blocks.net slash slack.
We do have a lot of awesome people in there that contribute all the time.
Probably more than we do.
You know,
I think that's where Tutuco,
oh,
hit me up.
Now that I think about it, now that you mentioned it,
see,
see,
you can hit us up at Slack. We will answer. Uh, also, you know, if you wouldn't mind,
seriously we do love reading the reviews uh you know there's things out there that say it doesn't
actually help your podcast do any better that's not actually what it's about for us we actually
like to see that people are enjoying what we're doing. Hey, and make sure you check out our show notes,
examples,
discussions,
and more,
and send your feedback,
questions,
and rants to us on Slack, at codingblocks.net slash slack. And, go to,
we're on X, slash Twitter.
It's still twitter.com or X.com or whatever you want to go.
Nobody knows.
Yeah.
We're still at coding blocks over there.
So I don't know that we ever do much over there,
but you know,
maybe if you added us, we'd see it or something. So, yeah. Well, speaking of not doing things, uh, I forgot to mention the notes, uh, in the, uh, in the early news. Uh, so, uh, I'm a little behind on, uh,
everything right now for a variety of reasons, including January, so, haven't forgotten, it's just
probably not going to be in January anymore. We're gonna
have to take over one of the other months, and, uh, we got some great ideas in the game jam wannabe
Slack for, uh, like, different names for it and different months and stuff. So, you should hop in
there if you're interested and stay tuned, because, uh, it's not coming up right now, but it will later.
So it'll have to be a Game Fab... Feb, Feb, Feb, Feb, Feb, Feb,
February, something like that. Let's go to a warmer month. Oh my, my, my, March. You were here.
Oh my gosh, it's been hot. Yeah. Yeah. I don't, I prefer, I prefer the colder times because I don't
want to, like, interfere with, you know, mountain biking time. Bicycling, yes. I think it was cold, actually. We had a low of
64 Fahrenheit. That's way too cold. That's way too cold. Yeah. Well, it rained. Yeah, it rained.