Screaming in the Cloud - Uptycs and Security Awareness with Jack Roehrig
Episode Date: April 11, 2023
Jack Roehrig, Technology Evangelist at Uptycs, joins Corey on Screaming in the Cloud for a conversation about security awareness, ChatGPT, and more. Jack describes some of the recent developments at Uptycs, which leads to fascinating insights about the paradox of scaling engineering teams large and small. Jack also shares how his prior experience working with AskJeeves.com has informed his perspective on ChatGPT and its potential threat to Google. Jack and Corey also discuss the evolution of Reddit, and the nuances of developing security awareness trainings that are approachable and effective.
About Jack
Jack has been passionate about (obsessed with) information security and privacy since he was a child. Attending 2600 meetings before reaching his teenage years, and DEF CON conferences shortly after, he quickly turned an obsession into a career. He began his first professional, full-time information-security role at the world's first internet privacy company, focusing on direct-to-consumer privacy. After working the startup scene in the 90s, Jack realized that true growth required a renaissance education. He enrolled in college, completing almost six years of coursework in a two-year period, studying a variety of disciplines before focusing on obtaining his two computer science degrees. University taught humility and empathy. These were key to pursuing and achieving a career as a CSO lasting over ten years. Jack primarily focuses his efforts on mentoring his peers (as well as them mentoring him), advising young companies (especially in the information security and privacy space), and investing in businesses that he believes are both innovative and ethical.
Links Referenced:
Uptycs: https://www.uptycs.com/
jack@jackroehrig.com: mailto:jack@jackroehrig.com
jroehrig@uptycs.com: mailto:jroehrig@uptycs.com
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
LANs of the late 90s and early 2000s were a magical place to learn about computers,
hang out with your friends and do cool stuff like share files, run websites and game servers,
and occasionally bring the whole thing down with some ill-conceived software or network configuration.
That's not how things are done anymore.
But what if we could have a 90s-style LAN experience along with the best parts of the 21st century internet,
most of which are very hard to find these days?
Tailscale thinks we can, and I'm inclined to agree.
With Tailscale, I could use trusted identity providers like Google or Okta or GitHub
to authenticate users and automatically
generate and rotate keys to authenticate devices I've added to my network. I can then share access
to those devices with friends and teammates or tag devices to give my team broader access.
And that's the magic of it. Your data is protected by the simple yet powerful social dynamics of
small groups that you trust. Try it now. It's free forever for personal use. I've been using it for almost two years personally, and am moderately annoyed that they haven't attempted to charge me for what has become an absolutely essential-to-my-workflow service.
Kentik provides cloud and NetOps teams with complete visibility into hybrid and multi-cloud networks, helping them ensure an amazing customer experience, reduce cloud and network costs, and optimize performance at scale, from internet to data center to container to cloud.
Learn how you can get control of complex cloud networks
at www.kentik.com
and see why companies like Zoom, Twitch, New Relic, Box, eBay, Viasat, GoDaddy,
Booking.com, and many, many more choose Kentik as their network observability platform.
Welcome to Screaming in the Cloud, I'm Corey Quinn. This promoted episode is brought to us
by our friends at Uptycs, and they have once again subjected Jack Roehrig,
technology evangelist, to the slings, arrows,
and other various implements of misfortune
that I like to hurl at people.
Jack, thanks for coming back.
Brave of you.
I am brave.
Thanks for having me.
Honestly, it was a blast last time,
and I'm looking forward to having fun this time, too.
It's been a month or two-ish.
Basically, the passing of time is one of those things
that is challenging for me to wrap my head around in this era.
What have you folks been up to?
What's changed since the last time we've spoken?
What's coming out of Uptycs?
What's new? What's exciting?
Or what's old with a new and exciting description?
Well, we've GA'd our agentless architecture scanning system.
So this is one of the reasons why I joined Uptycs.
What was so fascinating to me is they had kind of nailed XDR.
And I love the acronyms.
XDR and CNAPP is what we're going with right now.
And we have to use these acronyms so that people can understand what we do
without me speaking for hours about it.
But in short, our agentless system
looks at the current resting risk state of a production environment without the need to
deploy agents, as we talked about last time. And then the XDR piece is that's the thing that you
get to justify the extra money on once you go to your CTO or whomever your boss is and show them
all that risk that you've uncovered with our agentless piece. It's something I've done in the past with technologies that were similar, but Uptycs is continuously improving.
Our anomaly detection is getting better. Our threat intel team is getting better. I looked
at our engineering team the other day. I think we have over 300 engineers or over 250 at least.
That's a lot. It's always wild for folks who work in small
shops to imagine what that number of engineers could possibly be working on. Then you go and
look at some of the bigger shops and you talk to them and you hear about all the different ways
their stuff is built and how they all integrate together. And you come away on some level
surprised that they're able to work with that few engineers. So it feels like there's a different
perspective on scale and
no one has it right, but it is easy, I think, in the layperson's mindset to hear that a company
like Twitter, for example, before it got destroyed, had 5,000 engineers and what are they all doing?
And well, I can see where that question comes from. And the answer is complicated and nuanced,
which means that no one is going to want to hear it if it doesn't fit into a tweet itself. But once you get into the space, you start realizing that
everything is way more complicated than it looks.
It is. You know, it's interesting that you mentioned that about Twitter.
I used to work for a company called Interactive Corporation.
And Interactive Corporation is an internet conglomerate that owns a lot
of those things that are at the corners of the internet that not many people know about.
And also like the entire online dating space.
So, I mean, it was a blast working there.
But at one point in my career, I got heavily involved in M&A.
And I was given the nickname Jack the Riffer.
RIF standing for reduction in force.
Oof.
So Jack the Riffer was, yeah, you know, right?
It's like Buzzsaw Ted.
Like when you bring the CEO with the nickname of Buzzsaw in there,
it's like, hmm, I wonder if he's going to hire a lot of extra people.
Not so much.
Right?
It's like, hey, they're sending Jack out to hang out with us,
you know, in whatever country we're based out of.
And I go out there and I would drink them under the table.
And I'd find out the dirty secrets, you know.
We would be buying these companies because they would need to be optimized.
But it would be amazing to me to see some of these companies that were massive.
And they produced what I thought was so little.
And then to go and analyze everybody's job and see that they were all so intimately necessary.
Yeah.
And the question then becomes, if you were to redesign what that company did from scratch,
which again is sort of an architectural canard, the easiest thing in the world to do is to
design an architecture from scratch on a whiteboard with almost an arbitrary number of constraints.
The problem is that most companies grow organically.
And in order to get to that idealized architecture, you've got to turn everything off
and rebuild it from scratch.
The problem is getting to something that's better
without taking 18 months of downtime
while you rebuild everything.
Most companies cannot and will not sustain that.
Right, and there's another way of looking at it too,
which is something that's been
kind of a thought experiment for me for a long time.
One of the companies that I worked with back at IC was AskJeeves.
Remember AskJeeves?
Oh, yes.
That was sort of the closest thing we had at the time to natural language search.
Right.
That was the whole selling point.
But I don't believe we actually did any natural language processing back then.
So back in those days, it was just a search index.
And if you wanted to redefine search right now, and you wanted to define something that was truly a great search engine, what would you do differently?
And if you look at the space right now with ChatGPT and with Google, there's all this talk about, oh, ChatGPT is the next Google killer. And then people are like, well, Google has LaMDA. What are they worried about ChatGPT for?
And then you've got the folks at Google who are saying,
ChatGPT is going to destroy us,
and the folks at Google who are saying,
ChatGPT has got nothing on us.
So if I had to go and do it all over from scratch for search,
it wouldn't have anything to do with ChatGPT.
I would go back, I'd make a directed cyclical graph,
and I would use node weight assignments
based on outbound links,
which is exactly what Google was
with the original PageRank algorithm, right?
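[A minimal sketch of the idea Jack is describing here: assigning node weights from outbound links, in the spirit of the original PageRank paper. The toy link graph and the damping factor below are illustrative assumptions, not anything from Uptycs or Google.]

# Toy PageRank by power iteration: a page's weight comes from the pages linking to it.
# The link graph is made up purely for illustration.
links = {
    "A": ["B", "C"],  # page A links out to B and C
    "B": ["C"],
    "C": ["A"],       # C links back to A, so this graph contains a cycle
}

damping = 0.85  # damping factor from the original paper
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the weights settle
    new_rank = {}
    for page in links:
        # Rank flowing in from every page that links here, split evenly
        # across that page's outbound links.
        incoming = sum(
            rank[src] / len(outs)
            for src, outs in links.items()
            if page in outs
        )
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

print(rank)  # pages with more, and better-ranked, inbound links end up weighted higher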
I've heard this described as almost a vector database in various terms, depending upon how it is you're structuring this and what it looks like.
It's beyond my ken, personally.
But I do see that there's an awful lot of hype
around ChatGPT these days.
And I am finding myself getting professionally,
how do I put it, annoyed by most of it.
I think that's probably the best way to frame it.
Isn't it annoying?
It is, because people ask,
oh, are you worried that it's going to take over what you do?
And my answer is no, I'm worried it's going to make my job harder more than anything else.
Because back when I was a terrible student, it was, great, write an essay on this thing or write a paper on this.
It needs to be five pages long.
And I would write what I thought was a decent coverage of it, and it turned out to be a page and a half.
And oh, great, what I need now is a whole bunch of filler fluff that winds up
taking up space and word count, but doesn't actually get us to anywhere that is meaningful
or useful. And it feels like that is what GPT excels at. If I worked in corporate PR for a lot
of these companies, I would worry because it takes an announcement that fits in a tweet, again,
another reference to that ailing social network, and then it turns it
into an arbitrary number of pages. And it's frustrating for me just because that's a lot more
nonsense I have to sift through in order to get the actual viable answer to whatever it is I'm
going for here. Well, look at that viable answer. That's a really interesting point you're making.
That fluff, right? When you're writing that essay, you have that one and a half pages out. That's gold. That one and a half pages,
that's the shit. That's the stuff you want, right? That's the good shit. Excuse my French.
But ChatGPT is what's going to give you that filler, right? The GPT-3 dataset, I believe, included a lot of Reddit questions and answers that were used to train it.
And it was trained, I believe the data that it was trained with ceased to be recent in
2021, right?
It's already over a year old.
So if your teacher asks you to write a very contemporary essay, ChatGPT might not be able
to help you out much, but I don't think that
that kind of gets the whole thing because you just said filler, right? You know, you can get it to
write that extra three and a half pages from that five pages of required writing. Well, hey,
teachers shouldn't be demanding that you write five pages anyways. I once heard a friend of mine
arguing about one presidential candidate saying, this presidential candidate speaks at a third grade level.
And the other person said, well, your presidential candidate speaks at a fourth grade level.
And I said, I wish I could convey presidential ideas at a level that a third or a fourth grader could understand.
You know, right?
On some level, it's actually not a terrible thing, because if you can only
convey a concept at an extremely advanced reading level, then how well do you understand it? It felt
for a long time like that was the problem with AI itself and machine learning and the rest.
The only value I saw was when certain large companies would trot out someone who was themselves deep into the space,
and their first language was obviously math.
And they spoke with a heavy math accent through everything that they had to say.
And at the end of it, I didn't feel like I understood what they were talking about any better than I had at the start.
And in time, it took things like ChatGPT to say, oh, this is awesome. People made fun of the "hot dog, not a hot dog" app, but that made it understandable and accessible to people.
And I really think that that step is not given nearly enough credit.
Yeah, that's a good point.
It's funny you mentioned that because I started off talking about search and redefining search.
And I think I used the word digraph for, you know,
directed graph. It's like a stupid math concept. Nobody understands what that is. I learned that
in discrete mathematics a million years ago in college, right? I mean, I'm one of the few people
that remembers it because I worked in search for so long. Is that the same thing as a directed
acyclic graph or am I thinking of something else?
Ah, you're close. A directed acyclic graph has no cycles, so that means you'll never go around in a loop. But of course, if you're just mapping links from one website to another, website A can link to B, which can then link back to A. So that creates a cycle, right? So an acyclic graph is something that doesn't have that cycle capability in it.
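[To make the acyclic distinction concrete, here is a small, hypothetical check in the same spirit: the two-page link graph Jack mentions, where A links to B and B links back to A, fails a cycle test, so it is a directed graph but not a DAG. The helper name below is my own.]

# Depth-first search that reports whether a directed graph contains a cycle.
def has_cycle(graph):
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)                   # node is on the current DFS path
        for nxt in graph.get(node, []):
            if nxt in visiting:              # we looped back onto the current path
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)                       # fully explored, no cycle reachable from here
        return False

    return any(dfs(n) for n in graph if n not in done)

print(has_cycle({"A": ["B"], "B": ["A"]}))   # True: A -> B -> A, so not a DAG
print(has_cycle({"A": ["B"], "B": ["C"]}))   # False: no way back, this one is a DAG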
Got it. Yeah, obviously my higher math is somewhat limited.
It turns out that cloud economics
doesn't generally tend to go too far past basic arithmetic.
But don't tell anyone.
That's the big secret of cloud economics.
I think that's most everything.
I mean, even in search nowadays,
people aren't familiar with graph theory.
I'll tell you what people are familiar with.
They're familiar with Google.
And they're familiar with going on Google
and Googling for something.
And when you Google for something, you typically want results that are recent.
And if you're going to write an essay, you typically don't care.
Because only the best teachers out there, who might not be tricked by ChatGPT, though they probably would be, are the ones that are going to be writing the syllabi that require recency. Almost nobody's going to be writing syllabi that require essay recency. They're
going to reuse the same syllabus they've been using for 10 years. And even that is an interesting
question there, because if we talk about the results people want from search, you're right,
I have to imagine the majority of cases absolutely care about recency, but I can think of a tremendous
number of counterexamples where I have been looking for
things explicitly and I do not want recent results, sometimes explicitly, other times because, no,
I'm looking for something that was talked about heavily in the 1960s and not a lot since. I don't
want to basically turn up a bunch of SEO garbage that trawled it from who knows where. I want to
turn up some of the stuff that was digitized and then put forward.
And that can be a deceptively challenging problem in its own right.
Well, if you're looking for stuff that's been digitized, you could use archive.org or one of the web archive projects.
But if you look into the web archive community, you'll notice that they're very secretive about their data set.
I think one of the best archive internet search indices that I know of is in Portugal.
It's a Portuguese project. I can't recall the name of it, but yeah, it was a Portuguese project
that is probably like the axiomatic standard or like the ultimate prototype of how internet
archiving should be done. Search nowadays, though, when you say things like, I want explicitly to get this result, search does not want to show you explicitly what you want.
Search wants to show you whatever is going to generate them the most advertising revenue.
And I remember back in the early search engine marketing days, back in the algorithmic trading days of search engine marketing keywords, you could spend $4 on an ad for flowers. And if you type the word flowers into Google,
you just, I mean, it was just ad city. You type the word rehabilitation clinic into Google,
advertisements everywhere, right? And then you could type certain other things into Google and
you would receive a curated list. These things are obvious things that are identified as flaws in the secrecy of the PageRank algorithm. But I always thought it was interesting because ChatGPT takes care of a lot of
the stuff you don't want to be recent, right? It provides this whole other end to this idea that
we've been trained not to use search for, right? So I was reviewing a contract the other day. I
had this virtual assistant, and English is not her first language, and she and I are redlining
this contract for four hours. It was brutal because I kept on having to Google, for lack of
a better word. I had to Google all these different terms to try and make sense of it. Two days later,
I'm playing around with ChatGPT
and I start typing some very abstract comments to it. And I swear to you, it generated that same
contract I was redlining. Very verbatim. I was able to generate multiple clauses in the contract.
And by changing the wording in ChatGPT to say, create a, you know, more plaintiff friendly,
that contract all of a sudden was redlined in a way that I
wanted it to be. This is a fascinating example of this because I'm married to a corporate attorney
who does this for a living and talking to her and other folks in her orbit. The problem they
have with it is that it works to a point on a limited basis, but it then veers very quickly into terms that are nonsensical,
terms that would absolutely not pass muster, but sound like something a lawyer would write.
And realistically, it feels like what we've built is basically the distillation of a loud,
overconfident white guy in tech, because they may not know exactly what they're talking about,
but by God, is it confident when it says it. Yes, you hit the nail on the head. Ah, thank you. Thank you.
And there's an easy way to prove this is pick any topic in the world in which you are either an
expert or damn close to it or know more than the average bear about and ask ChatGPT to explain
that to you and then notice all the things it glosses over or what it gets subtly
wrong or is outright wrong about, but it doesn't ever call that out. It just says it with the same
confident air of a failing interview candidate who gets nine out of 10 questions absolutely right,
but the one they don't know they bluff on. And at that point you realize you can't trust them
because you never know if they're bluffing or they genuinely know the answer.
Wow, that is a great analogy.
I love that.
You know, I mentioned earlier that I believe a big part of the GPT-3 training data was based on Reddit questions and answers.
And now you can't categorize Reddit into a single community.
Of course, that would be just as bad as the way Reddit categorizes other communities.
But Reddit did have a problem. I remember there was the Ellen Pao debacle for Reddit,
and I don't know if it was so much of a debacle, if it was more of a scapegoat situation.
I'm very much left with a sense that it's a scapegoat, but still, continue.
Yeah, we're adults. We know what happened here, right? Ellen Pao, somebody who was going through some very difficult times in her career. She's hired to be
a martyr. They had a community called Fat People Hate, right? I mean, like, Reddit had become a
bizarre place. I used Reddit when I was younger, and it didn't have subreddits. It was mostly about
programming. It was more like Hacker News. And then I remember all these people went to Hacker News and a bunch stayed at Reddit, and there was this weird limbo of, like, the super pretentious people over at Hacker News, and then Reddit started to just get weirder and weirder. And then you just described ChatGPT in a way that just struck me as so Reddit, you know? It's like some guy mansplaining some answer. It starts off good, and then it overconfidently continues to state nonsensical
things. Oh yeah. I was a moderator of the legal advice and personal finance subreddits for years.
And no way. Were you really? Oh, absolutely. Those corners were relatively reasonable.
And like, well, wait a minute. You're not, you're not a lawyer. You're correct. And I'm also not a financial advisor. However,
in both of those scenarios, what people were really asking for was how do I be a functional
adult in society? In high school curricula in the United States, we insist that people go through
four years of English literature class, but we don't ever sit down and tell them how to
file their taxes or how to navigate large transactions that are going to be the sort
of thing that you encounter in adulthood. Buying a car, signing a lease, and it's more or less,
yeah, at some point you wind up seeing someone with a circumstance, yeah, talk to a lawyer,
don't take advice on the internet for this. But other times it's, no, you cannot sue a dog. You have to learn to interact with people as a grown-up. Here's how to approach that. And that manifests as legal questions or finance questions, but it all comes down to, I have been left unprepared for the world I live in by the school system. How do I wind up addressing these things? And that is what I really enjoyed.
That's just prolifically, prolifically sound. I'm almost speechless. You're 100% correct.
I remember those two subreddits. It always amazes me when I talk to my friends about finances.
I'm not a financial person. I mean, I'm an investor, right? I'm a private equities investor.
And I was on a call with a young CEO that I've been advising for a while. He runs a security awareness training company. And he's like, you know, you've made 39% off of your investment,
in three months. And I said, I haven't made anything off of my investment. I bought a SAFE, and, you know, it's like, this is conversion equity. And I'm sitting here thinking, I don't know any of this stuff. And I talk to my buddies that are financial planners, and I ask them about finances, and that's also interesting to me, because financial planning is really just about: when are you going to buy a car, when are you going to buy a house, when are you going to retire, and what are the things, the securities, the companies, what should you do with your money rather than store it under your mattress? I didn't really
think about money being stored under a mattress until the first time I went to Eastern Europe,
where I am now. I'm in Hungary right now. And first I went to Eastern Europe, I think I was
in Belgrade, in Serbia. And my uncle at the time, he was talking about how he kept all of his money in cash in a
bank account in Serbian dinars.
The Serbian dinar had already gone through hyperinflation like 10 years prior.
Or no, it went through hyperinflation in 1996.
So it hadn't been that long.
And he was asking me for financial advice.
And here I am.
I'm like, you know, in my early 20s.
And I'm like, I don't know what you should do with your money, but don't put it under your mattress.
And that's the kind of data that Reddit, or that ChatGPT, seems to have been trained on, this GPT-3 data.
It seems like it's a lot of Redditors, specifically Redditors pre-2021.
I haven't used Reddit very much in the last half a decade or so.
Yeah, I mean, I still use it in a variety of different ways, but I got out of both of
those cases, primarily due to both time constraints as well as my circumstances changed to a point
where the things I spent my time thinking about in a personal finance sense no longer
applied to an awful lot of folk, because the common wisdom is aimed at folks who are generally on something that
resembles a recurring salary, where they can calculate in certain percentage raises, in most
cases, for the rest of their life, plan for other things. But when I started a company, a lot of the
financial best practices changed significantly. And what
makes sense for me to do becomes actively harmful for folks who are not in similar situations.
And I just became further and further attenuated from the way that you generally want to give
common case advice. So it wasn't particularly useful at that point anymore.
Very, yeah, that's very well put. I went through a similar thing. I watched Reddit quite a bit through the Ellen Pao thing
because I thought it was a very interesting lesson in business
and in social engineering in general, right?
And we saw this huge community, this huge community of people.
And some of these people were ridiculously toxic.
And you saw a lot of groupthink.
You saw a lot of manipulation.
There was a lot of heavy-handed moderation. There was a lot of groupthink, you saw a lot of manipulation. There was a lot of heavy-handed moderation.
There was a lot of too light moderation.
And then Ellen Pao comes in and I'm like, who the heck is Ellen Pao?
Oh, Ellen Pao is this person who has some corporate scandal going on.
Oh, Ellen Pao is a scapegoat.
And here we are watching a community being socially engineered, right,
into hating the CEO who's just going to be let go
or step down anyways.
And now their conversations have been used
to train intelligence,
which is being used to socially engineer people
into falling for phishing schemes.
I mean, you just listed something else
that's been top of mind for me lately,
where it is time once again here at the Duckbill Group
for us to go through our annual security awareness training. And our previous vendor has not been terrific,
so I start looking to see what else is available in that space. And I see that the world basically
divides into two factions when it comes to this. The first is something that is designed to check
the compliance boxes at big companies. And some of the advice that those things give is actively harmful.
As in, when I've used things like that in the past, I would have an addendum that I would send out to the team.
Yeah, ignore this part, this part, and this part because it does not work for us.
And there are other things that start trying to surface it all the time as it becomes a constant awareness thing, which makes sense.
But it also
doesn't necessarily check any contractual boxes. So it's, isn't there something in between that
makes sense? Found one company that offered a Slack bot that did this, which sounded interesting.
The problem is, it was the most condescendingly rude and infuriatingly slow experience that I've had. It demanded a whole bunch of permissions to the Slack workspace just to try it out. So I had to spin up a false Slack workspace for testing just to see what
happens. And it was start to finish the sort of thing that I would not inflict upon my team.
So the hell with it. And I moved over to other stuff now and I'm still looking,
but it's the sort of thing where I almost feel like this is something ChatGPT could have built
and cool. Give me something that sounds confident but is often wrong.
Go.
Yeah, Uptycs actually does. We have something called Otto M8, spelled O-T-T-O, space, M, and then the number eight.
And I think, I personally think that's the cutest name ever for a Slack bot.
I don't have a picture of him to show you,
but I would personally give him a bit of a makeover.
He's a little nerdy for my likes.
But it's one of those Slack bots.
I'm a huge compliance geek.
I was a CISO for over a decade.
And I know exactly what you mean with that security awareness training and ticking those boxes.
Because I was the guy who wrote the boxes that needed to be ticked because I wrote those control frameworks. And I'm not a CISO anymore because I've already subjected myself to an absolute living hell for long enough, at least for now.
So I quit the CISO world.
And so much of it also assumes certain things.
I've had people reach out to me trying to shill whatever it is they've built in this space.
And, okay, great.
The problem is that they've built something this space. And okay, great. The problem is,
is that they've built something that is aligned at engineers and developers. Go, here you go.
That's awesome. But we aren't really an engineering first company. Yes, most people here have an engineering background and we build some internal tooling, but we don't need an entire curriculum
on how to secure tools that we're building as web interfaces and public-facing
SaaS, because that's not what we do. Not to mention, what am I supposed to do with the
accountants and the sales folks and the marketing staff that wind up working on a lot of these
things that need to also go through training? Do I want to sit here and teach them about SQL
injection attacks? No, Jack, I do not want to teach them about that. I want them to not plug random USB things
into the work laptop and to use a password manager.
I'm not here trying to turn them into security engineers.
I used to give a presentation
and I onboarded every single employee personally for security.
And in the presentation,
I would talk about password security.
And I would have all these complex passwords up.
And I'd be like, you know what? Let me just show you
what a hacker does.
And I'd go and load up dhash, and I'd type in my old
email address, and oh, there's my password.
And then I would
copy the cryptographic hash
from dhash, and I'd paste that into
Google, and I'd be like, and that's how you crack passwords,
is you Google the cryptographic hash,
the insecure cryptographic hash,
and hope somebody else has already cracked it.
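[A hedged illustration of the trick Jack describes: an unsalted, fast hash of a common password does not need to be brute-forced, it can simply be looked up, which is effectively what Googling the hash does. The two-entry "already cracked" table below is a stand-in assumption, not real breach data.]

import hashlib

# Stand-in for the public lookup tables a search engine would surface for a known hash.
already_cracked = {
    "5f4dcc3b5aa765d61d8327deb882cf99": "password",  # md5("password")
    "e10adc3949ba59abbe56e057f20f883e": "123456",    # md5("123456")
}

leaked_hash = hashlib.md5(b"password").hexdigest()   # what a breach dump might contain

# "Cracking" an unsalted, common password is just a dictionary lookup.
print(already_cracked.get(leaked_hash, "not found in any public table"))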
But yeah, it's interesting.
The security awareness training is absolutely something that's supposed to be geared toward the very fundamental, everyman employee.
It should not be something that's highly technical.
I worked at a company where, and I love this, by the way,
this was one of the best things I've ever read on Slack,
and it was not a message that I was privy to.
I had to have the IT team pull the Slack logs so that I could read these direct communications. But it was from
one, I think it was from the controller to the vice president of accounting. And the controller says to the VP of accounting, how could I have done this after all of those phishing emails that Jack
sent? Oh God, the phishing emails drive me up a wall too. It's, you're basically training your staff not to trust you, wasting their time, and playing gotcha.
It really creates an adversarial culture. I refuse to do that stuff too.
My phishing emails were fun. All right. I did one where I pretended that I installed a camera
in the break room refrigerator. And I said, we've had a problem with food theft out of the Oakland refrigerator.
And so we installed this webcam.
Log into this sketchy website with your username and password.
And I got like a 14% fish rate.
I've used this campaign at multinational companies.
I used to travel around the world and I'd grab the mic at the offices that wanted me
to speak there.
And I'd put the mic real close to my head and I'd say,
why did you guys click on the link to the Oakland refrigerator?
I said, you're in Stockholm, for God's sake.
It works.
Phishing campaigns work.
They just don't work if they're dumb, honestly.
There's a lot of things that do work in the security awareness space.
One of the biggest problems with security awareness is that people seem to think
that there's some minimum amount of time
an employee should have to spend
on security awareness training,
which is just...
Right.
Like, for example, here in California,
we're required to spend two hours
on harassment training every so often.
I think it's every two years.
Every two years, yeah.
At least for managerial staff.
And it's great,
but that leads to things such as,
oh, we're not going to give you a transcript, even if you could read it more effectively than watching the video.
You have to listen to it and make sure it takes enough time.
And it's maddening to me just because that is how the law is written.
And yes, it's important to obey the law.
Don't get me wrong.
But at the same time, it just feels like it's an intentional time suck.
It is.
It is an intentional time suck.
I think what happens is a lot of people find ways to game the system. Look, when I did security awareness training, my controls, the way I worded them, didn't require people to take any training whatsoever. The phishing emails themselves satisfied it completely. I worded that into my control framework. I still held the trainings. They still made people take them seriously. And then if we had, you know, if somebody got phished horrifically and let's say
wired $2 million to Hong Kong, you know who I'm talking about. All right. The person who, mind you, is probably not listening to this, thankfully. But she did. And I know she didn't complete my awareness
training. I know she never took any of it. She also wired $2 million to Hong Kong. Well, we never
got that money back, but we sure did spend a lot of executive time trying to. I spent a lot of time on the phone
getting passed around from department to department at the FBI. Obviously, the FBI couldn't help us. It was wired
from Mexico to Hong Kong. The FBI had nothing to do with it. Bless them for taking their time to
humor me because I needed to humor my CEO. But,
you know, I used those awareness training things as a way to enforce the code of conduct, with the code of conduct requiring disciplinary action for people who didn't follow the security awareness training. If you had taken the, you know, 15 minutes of awareness training that I had
asked people to do, I mean, I told them to do it. It was the code of conduct they had to do.
Then there would be no disciplinary action for accidentally wiring that money.
But people are pretty darn diligent on not doing things like that.
It's just a select few that seem to be the ones that get hit repeatedly.
And then you have the group conversations.
One person screws something up, and then you wind up with the emails to everyone.
And then you have the people who are basically doing the right thing, thinking they're being singled out.
Management is hard.
People is hard.
But it feels like a lot of these things could be a lot less hard.
You know, I don't think management is hard.
I think management is about empathy.
And management is really about just positive reinforcement.
You know, management, and this is going to sound real pretentious, management is kind of like raising a kid, you know?
You want to have a really well-adjusted kid.
Every time that kid says,
hey, dad, answer him.
Yeah, that's a good approach.
I mean, just be there.
Be clear, consistent.
Let them know what to expect.
People love my security program
at the places that I've implemented it
because it was very clear.
It was concise.
It was easy to understand.
And I was very approachable.
If anybody had a security concern and they came to me about it, they would not get any shame.
They certainly wouldn't get ignored.
I don't care if they were reporting the same email that had been reported to me 50 times that day.
I would personally thank them.
And you know what I learned?
I learned that from raising a kid,
you know. It was interesting because, with the kid I was raising, when he would ask me a question, I would give him the same answer every time in the same tone.
He'd be like, Hey Jack, can I have a piece of candy? Like, no, your mom says you can't have
any candy today. And they'd be like, oh, okay. Can I have a piece of candy? And I'd be like,
no, your mom says you can't have any candy today. Can I have a piece of candy, Jack? I said, nope. Your mom
says you can't have any candy. And I'd just be like a broken record. And he immediately wouldn't
ask me for a piece of candy six different times. And I realized the reason why he was asking me
for a piece of candy six different times is because he would get a different response the
sixth time or the third time or the second time. It was the inconsistency. Providing consistency and predictability
in the workforce is key to management and it's key to keeping things safe and secure.
I think there's a lot of truth to that. I really want to thank you for taking so much time out of
your day to talk to me about topics ranging from GPT and ethics to parenting. If people want to learn more,
where's the best place to find you? I'm jack@jackroehrig.com. And I'm also jroehrig@uptycs.com. My last name is spelled... no, I'm kidding. It's j-a-c-k-r-o-e-h-r-i-g dot com. So yeah, hit me up. You will get a response from me.
Excellent. And I will, of course, include links to that in the show notes.
Thank you so much for your time. I appreciate it.
Likewise.
This promoted guest episode has been brought to us by our friends at Uptycs,
featuring Jack Roehrig, technology evangelist at same. I'm cloud economist Corey Quinn, and this
is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your
podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on
your podcast platform of choice, along with an angry comment ghostwritten for you by ChatGPT,
so it has absolutely no content worth reading.
If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS. We tailor recommendations
to your business and we get to the point. Visit duckbillgroup.com to get started.