Screaming in the Cloud - The Intersection of AI, Security, and Cloud with Alyssa Miller
Episode Date: April 4, 2024

Corey sits down with Alyssa Miller, the CISO at Epiq Global, for a discussion that cuts through the noise of the technology world in this episode of Screaming in the Cloud. Alyssa celebrates her personal journey to becoming a licensed pilot and shares invaluable insights into the current state and future of AI, cloud computing, and security. This episode ventures beyond the typical tech hype, offering a critical look at the realities of AI, the strategic considerations behind cloud computing at Epiq Global, and the importance of explainability in AI within regulated industries. Additionally, Alyssa and Corey highlight the cyclical nature of tech hype, the misconceptions surrounding AI's capabilities, and the impact of startup culture on genuine innovation.

Show Highlights:

(00:00) Introduction
(01:33) Corey celebrates Alyssa Miller getting her general aviation license.
(04:10) Considerations of cloud computing at Epiq Global.
(06:45) The hype and reality of AI in today's tech landscape.
(11:49) Alyssa on the importance of explainability in AI within regulated industries.
(14:21) Debunking myths about AI surpassing human intelligence.
(19:30) The cyclical nature of tech hype, exemplified by blockchain and AI.
(24:58) Critique of startup culture and its influence on technology adoption.
(29:01) Alyssa and Corey discuss how tech trends often fail to meet their initial hype.
(31:57) Where to find Alyssa Miller online for more insights.

About Alyssa:

Alyssa directs the security strategy for S&P Global Ratings as Business Information Security Officer (BISO), connecting corporate security objectives to business initiatives. Additionally, she shares her message about evolving the way people think about and approach security, privacy, and trust through speaking engagements at various conferences and other events. When not engaged in security research and advocacy, she is also an accomplished soccer referee, guitarist, and photographer.

Links referenced:

Alyssa Miller's LinkedIn Profile: https://www.linkedin.com/in/alyssam-infosec/
Epiq Global's Website: https://www.epiqglobal.com/en-us
Alyssa's Aviation Journey: https://www.linkedin.com/posts/alyssam-infosec_i-landed-at-ohare-kord-in-my-cherokee-activity-7079088781575811074-ZsSx?utm_source=share&utm_medium=member_desktop
Transcript
Yeah, and this is, I mean, I hate to minimize it in a way down to this, but it's the same thing we go through with every damn new technology.
We get all excited, we scream and proclaim it's going to fix the whole world's problems, it's going to be revolutionary, and then we start to figure out, okay, this is what it can really do. Welcome to Screaming in the Cloud. I'm Corey Quinn. It's been a couple of years since I
caught up with today's guest. Alyssa Miller is now the CISO, or CISO, or however you want to mispronounce it, at Epiq Global. How are you?
I'm doing well. And as long as you don't say CISO, I think we're okay. I've heard that one before, and that one gets a little weird.
...with AWS, or just figuring out why it increasingly resembles a phone number, but nobody seems to quite know why that is. To learn more, visit duckbillgroup.com. Remember, you can't duck the Duckbill Bill. And my CEO informs me that is absolutely not our slogan.
Now, you were a BISO previously at S&P Global. Now you're a CISO, which tells me your grades are worse.
I mean, how did that wind up?
What does the transition there look like?
I went from a B to a C.
I must be worse at my job.
Some might argue I am.
No.
No, really exciting stuff, actually, because the BISO role, when I was in that role, I
kind of knew that that was sort of like that last step before I'd probably land in a CISO role somewhere.
Happened a little faster than I thought it was going to, but, you know, when the right
opportunity comes knocking at the door, you kind of got to just jump in with it and go.
So that's kind of the story of my career.
We'll get into that in a bit, but I want to talk about something that you've been fairly
active about on the socials for a while now, specifically getting your general aviation pilot's license.
Oh, yeah, all this stuff up over here.
Yeah, that was a lifelong dream, honestly.
And I guess it was one of those dreams I never really even thought of as a dream
because I just didn't think it would ever happen.
When I was a kid, we lived on the northwest side of Milwaukee, same house my entire childhood, right? But we were maybe a mile from this local municipal airport, and so those training planes were flying over all day, every day, right?
I spent summers out in Watertown. I know exactly what you're talking about. They're all there.
See, there you go. All right. Yeah, so, like, you'd be driving down the road and they're, like, landing 35 feet over the top of your car.
And you're like, wow, this is pretty cool.
But never thought it was going to happen.
Didn't have money back then.
Then when I had money, I was married with kids.
Now I'm divorced.
My kids are out of the house.
It was kind of like, well, kind of now or never.
So yeah, back at the end of 2022, I finally, you know, completed all the necessary training, passed my check ride, and I've been flying like crazy ever since.
I looked into doing it myself. There's always been a back burner item for me and seeing you
doing it. Like it's not the, well, if she can do it, how hard could it be? None of that. But it's
like, okay, if someone's actually doing it, let me look into it.
I mean, it's one of those things, I think, for a lot of people, it kind of sits there partially because I don't think people really realize how attainable it is.
Like that's kind of what shocked me too is like, you know, oh, like literally anybody
can go to a local airport that has a flight school and take a discovery flight and find
out if it's something you actually want to do.
And then if you do, then you just start taking lessons. And the only thing that keeps you from
getting there is the FAA and their medical standards sometimes can be problematic for
some folks. Funny that you mentioned that, because as I was looking into this, I realized that my
ADHD acts as a disqualifier for this. And I was talking to someone who started coming up with this:
Well, there are ways around that.
It's like, no, no, no, no, no.
You misunderstand.
Because as soon as I really thought about it, about what actually goes into being a pilot and the fact you have to follow a rote, boring checklist every time without deviation, et cetera.
Oh, I should absolutely not be flying a plane.
That's a really good point. But yeah,
so in my case, yeah. Yeah, because I mean, it is that very systematic execution of a checklist
every time that helps keep you safe. So yeah, I definitely get that perspective too.
So let's get back to the cloudy stuff. It's not the literal clouds, but more the metaphorical ones.
So what are you doing mostly these days? Are you seeing your workloads moving to the cloud as everyone tends to?
Are you repatriating left and right like everyone in the tech industry tells me is happening,
but I see no evidence of?
Where do you fall on that particular spectrum?
So that's been kind of interesting, right?
Because at S&P Global, our entire environment within my division was 100% cloud.
We were invested in functions and containers.
In fact, we had very few EC2 instances anymore, right?
It was all super ephemeral wonderfulness.
I get here at Epiq, and we are the most super hybrid, right? So we've got on-prem, we've got multiple clouds, you name it. So, you know, there is kind of that motion of, slowly, we're seeing more and more in our cloud environments. But what I actually really appreciate is there's not a let's-do-a-cloud-transformation-for-the-sake-of-doing-cloud-transformation, right? It's one of those things, as new products are being launched and it makes sense to launch it in the cloud, great.
But, you know, we do e-discovery work.
We've got a ton, and I mean a ton, of data.
And, you know, trying to put all that storage in the cloud,
we'd go broke, you know?
I mean, seriously, it's just like there's so much data that we hold in on-prem storage infrastructure that, you know, if I started moving that into any type of cloud storage, I can't even imagine what those cloud costs look like.
Oh, I can. I've seen companies do it. And that's, I think you're right, there's this idea that you need to be in cloud that's almost like a mind virus that's taken over.
And I don't think it holds water. Conversely, I also don't think that, oh, wait, we're going to
migrate out of the cloud now because we'll save a bunch of money on that. I don't see that happening
either. I see workloads where people could do basic arithmetic and realize, oh, moving to cloud is probably going to be financially ruinous for us; we're not going to do it. Or, if it was going to be expensive, they did it because of the value it was going to unlock.
But there's some stuff that was never looked at seriously
when it came to cloud.
I am seeing proof of concepts not pan out.
No, for whatever reason,
it's not going to be a cloud workload for us.
I am seeing people shifting workloads
as they have different environments
and they think that it might be a better fit somewhere else.
And I'm seeing more cross-cloud stuff as people start using, you know,
a cloud provider that understands how infrastructure works,
but also want to do some AI nonsense with someone who knows how AI nonsense works.
And in AWS's case, those are not the same thing.
So, yeah, you're going to start seeing more cross-cloud.
That word, AI.
Oh, my God, have we heard enough about AI yet?
Speaking of technologies that people are adopting for the sole purpose of adopting the technology and not because it fits any use case, AI, top of the list, right?
There are so many, I've seen it in every single industry.
Everybody wants to say they're using LLMs and generative AI and all these magical words that, you know, before the end of 2022 people had never heard of and suddenly knew all about, right?
I got a pitch this morning about this podcast from some rando
saying that, oh, well, you should go ahead and feed the transcript of the podcast into GPT-4
through our service, and it'll write a blog post.
Okay, one, what do I need you for if I'm going to do that? Two, why should someone bother to
read something that I couldn't even be bothered to take the time to write? It's a complete lack
of respect for the audience's time when you start down that path. Totally. I mean, I think about it even, I've seen use cases for HR teams
to use ChatGPT or LLM of your choice
to send rejection emails.
Like, wow.
I mean, because those, we all know,
those rejection emails aren't cold enough as it is.
Now we're going to have some AI chatbot write it because
we can't even be bothered to do that. I have seen people who have done something I think is kind of
neat where they'll wind up taking like an email that's overly terse or they want to make sure it
comes across the right way. And then they'll launder it through ChatGPT. Then I feel like
at some level, some of the other end is going to get this wordy thing. Like, can you just distill
this down?
It's like a protocol, encapsulation, decapsulation on both ends, where it now acts as an API between humans communicating.
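That encapsulation/decapsulation bit can be sketched as a toy round trip, with simple string transforms standing in for the two LLM calls. Everything here is hypothetical, purely to illustrate the joke:

```python
# Toy "protocol" from the conversation: a terse message is encapsulated
# in polite boilerplate on the sender's side, then decapsulated back to
# its payload on the receiver's side. The two functions are stand-ins
# for LLM calls ("make this nicer" / "summarize this for me").

GREETING = "Hi there,\n\nI hope this email finds you well.\n\n"
CLOSING = "\n\nBest regards,\nA. Sender"

def encapsulate(terse: str) -> str:
    """Sender side: wrap the payload in boilerplate."""
    return f"{GREETING}{terse}{CLOSING}"

def decapsulate(wordy: str) -> str:
    """Receiver side: strip the boilerplate back off."""
    body = wordy.removeprefix(GREETING).removesuffix(CLOSING)
    return body.strip()

msg = "Ship the report by Friday."
print(decapsulate(encapsulate(msg)))  # → Ship the report by Friday.
```

The round trip recovers the original payload, which is exactly the "API between humans" gag: all the added politeness is framing, not information.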
I love it because that's probably already happening.
Because, yeah, we've done it.
I'm literally in the middle of a whole, you know, everybody has to do these quarterly access reviews.
You know, we're in the middle of that and I'm
escalating to people who aren't completing their reviews and blah, blah, blah. But so I had my
BISO create an escalation email and she fed it through, I think, Copilot. And she did two
versions. One was like the nice version and one was the very in-your-face authoritarian,
you got to get this done now
kind of thing. And it was, it was fascinating to look at that. Of course I said, use the nice one,
not the mean one. I'll send the mean one later. But, you know, it is funny because, yeah, somebody will also take that now and pop it through: hey, summarize this for me. It's right there. Just hit it, you know, Copilot, summarize this for me. So it's kind of... I never even thought about it that way, but that is almost sadly comical.
It is.
And part of the funny aspect of it, too, is that I find it's great for breaking through
walls of creativity, where I'm staring at a blank screen.
I need something up there.
Great.
Give me a badly written blog post you think is something I would write.
Then I can mansplain, angrily correct the robot. Fine. That's useful. But I think when
people get into trouble is when they start looking at using it in a scaled out way where there's not
going to be any human review of what goes out, which explains this proliferation of nonsense
chatbots on everyone's website that just tell lies. Now, what I love is there was a recent case
with Air Canada, where they
were held to the terms that their idiot chatbot made up on the fly to answer someone's question
about a bereavement fare. Awesome. I think that when you start being held responsible for the
things your lying robot tells people, you're going to be a lot more strict about what the
lying robot is allowed to tell people. And that's kind of the point.
I have to admit, I am really glad that I kind of was ahead of
the curve on a lot of this when I was at S&P Global.
So we did, I was at S&P Global ratings.
So we did all the financial ratings.
And this was before ChatGPT exploded on the scene and everybody suddenly understood, supposedly, what LLMs were and what generative AI meant. And, you know, we were already looking at how can we use artificial intelligence in ratings, and in crafting ratings, and the core concept that kept coming up was the idea of explainability, right? Because you're talking now about a heavily regulated industry. Financial ratings, you know, the SEC's got a lot to say, as they should. And, you know, if you make a credit rating determination
based on AI, if you can't go back and explain how it got there, how did it make that decision?
That's, you know, two things. One, you mentioned like the hallucinations and that whole concept, but there's also even just that we all understand, hopefully by now, the inherent biases that we're unable to eliminate from our artificial intelligence system. So it's like, if we're going to leverage this, we need to go back and have the explainability of how decisions are being made so we can ensure there isn't bias.
Regulators have caught on to using AI as bias laundering.
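The explainability requirement Alyssa describes can be made concrete with a deliberately simple sketch: a linear scoring model whose per-feature contributions can be read back out, so any decision comes with a trace that can be defended. The feature names, weights, and threshold below are invented for illustration; a real ratings model would be far more involved:

```python
# Minimal sketch of "explainability" in a decision system: a linear
# scoring model where every decision can be decomposed into per-feature
# contributions. All names and numbers here are hypothetical.

WEIGHTS = {"debt_ratio": -40.0, "years_profitable": 5.0, "cash_reserves": 20.0}
THRESHOLD = 50.0

def rate(features: dict) -> tuple:
    """Return a decision plus the per-feature contributions behind it."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "investment-grade" if score >= THRESHOLD else "speculative"
    return decision, contributions

decision, why = rate({"debt_ratio": 0.3, "years_profitable": 8, "cash_reserves": 1.5})
# The decision can be defended line by line: each feature's share of the score.
for feature, share in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {share:+.1f}")
print(decision)  # → investment-grade (score 58.0 >= 50.0)
```

The point is auditability, not the model itself: an opaque system gives you only the final label, while this shape lets you answer "how did it get there?" for a regulator or a court.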
Yeah.
So now, you know, I'm at Epiq, and suddenly, I remember very specifically, November of 2022, suddenly all this is going nuts and everybody's talking about it.
And a few months later, everybody's asking, you know, where does this make sense in our, you know, product set? It was great to have already been down that road and say, look, if you're going to make decisions about e-discovery information, you've got to be able to say why, right? Someone's got to be able to explain it at the end of the day if it comes up in court.
So, you know, it was one of those things where it was great to be on the forefront of that
because I've already seen, you know, organizations go headlong into various implementations of AI without understanding that particular concept.
And I'm sure people in my organization
are tired of hearing me use that word, explainability, defensibility, whatever you
want to call it. But it's so critical if we're going to leverage it, kind of to your point
before about breaking through that creative barrier, I want to create a blog, but I'm going
to go back and read it and then figure out how to make it actually valuable and sound like it came from me and everything.
It's kind of that same concept. Use it to do some work, but you got to be able to go back
and understand what it did and fix it where it did it wrong. Because if you miss that,
you're in trouble. There's this idea of we're going to make the robot automatically say something,
and we're going to assume that it's going to get everything right, because how hard could it really
be? I don't know. That strikes me as a disaster just waiting to happen. So you know where that
comes from? And this is where I'm going to really tick off a lot of the AI people. I saw a video
this past week of some guy who was trying to give a basic definition of artificial intelligence.
And he talked about creating this, you know, consciousness or whatever that's better than a human brain.
And I'm like, that's not what AI is.
AI is never, never.
Well, I mean, OK, I won't say never. That's way too
absolutist. Depends on the human in question.
It's a long way off
before AI is going to be better than
a human brain, because for one thing,
who's creating it? Humans!
With the training data, yeah. It's great
at creativity, it's great at doing things rapidly,
but it's basically just a parrot on some
level, where it's...
It's still a computer at the end of the day.
It is still ones and zeros at the end of the day. A lot of them, and they're very expensive, right? Or maybe we go into quantum computing, and now it's not ones and zeros, it's all sorts of... that's a topic for a whole other episode. But no, seriously, you know, I think that's the core of
the issue is people are expecting that artificial intelligence is somehow going to be better than humans when humans are the ones creating it.
Well, how does that work?
How do we even know what the better is?
How do we even know?
What is better?
What is better than the human brain?
We can't say because it's outside of our brain.
I mean, seriously, come on.
So we can focus on key aspects, like what are
frustrations we have with the human element of things we do, and we can try to address those
with AI. But it's never going to be, in my opinion, and I will use never this time,
I don't see it being like this superior decision-making entity over the top of a human brain because every bit of
intelligence that's built into it is the result of things that originated in a human brain.
Here at the Duckbill Group, one of the things we do with, you know, my day job is we help negotiate
AWS contracts. We just recently crossed $5 billion of contract value negotiated.
It solves for fun problems, such as how do you know that your contract that you have with AWS is the best deal you can get?
How do you know you're not leaving money on the table?
How do you know that you're not doing what I do on this podcast and on Twitter constantly and sticking your foot in your mouth?
To learn more, come chat at
duckbillgroup.com. Optionally, I will also do podcast voice when we talk about it. Again,
that's duckbillgroup.com. Part of the problem too is that if you start really digging into
a topic you know an awful lot about with Gen AI, you start noticing that it gets
things remarkably wrong, remarkably quickly. And there's a phrase, Gell-Mann amnesia, where
you understand that when you read a mass media article about something you know a lot about,
you find all the mistakes, but you forget that. And you take everything they say about things
you don't know about at something approaching face value. And we're seeing that on some level with Gen AI in the same approach, where there's this overall overarching idea that it somehow knows all these things super well, but we can prove objectively that it doesn't know these things in depth.
What it does expose, if you want to look at it from a certain point of view, is the sheer amount of bullshit that exists in the world today.
Oh, totally. Absolutely. And what's really interesting,
I don't know if you've seen these articles yet. There's at least one study I was reading.
They've already proven that AI forgets for the same reason as the human brain does.
It is in there somewhere, but only so many penguins are going to fit on that
iceberg. And so as you keep adding penguins to the iceberg, it's knocking other ones off.
Now it can go back and retrieve that, but ultimately you look at the way AI is designed
to work. It's, you know, the concentration of information around a specific concept, and it's the recency of that information.
So the same way we forget things because we keep knocking those penguins off the iceberg as we're
adding new ones, AI is starting to do the exact same things. It's looking what's the most recent
information, what have I seen most often in my model, what have I been exposed to the most,
which is, of course, where bias comes from as well. And then that's what it's leveraging, and it may be forgetting, or in its case not accessing, that other piece of data that was actually the correct data about that particular topic. And that's where it's, like, fascinating to me that this is actually occurring. So, you know, again, back to: are we creating anything that's better than the human brain?
Well, if it's forgetting, just like the human brain forgets.
Well, OK.
Yeah, we've pretty much failed in that pursuit already.
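The penguins-on-an-iceberg analogy maps loosely onto a classic fixed-capacity, recency-based eviction policy, the kind used in an LRU cache. This is only a sketch of the analogy, not a claim about how model weights actually store or lose information:

```python
from collections import OrderedDict

# "Penguins on an iceberg" as a cache-eviction policy: a fixed-capacity,
# recency-driven store where adding a new item past capacity knocks off
# the least recently used one. Purely illustrative.

class Iceberg:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.penguins = OrderedDict()

    def recall(self, key: str):
        """Accessing a fact refreshes it (moves it to 'most recent')."""
        if key not in self.penguins:
            return None  # "forgotten": evicted earlier
        self.penguins.move_to_end(key)
        return self.penguins[key]

    def learn(self, key: str, value: str) -> None:
        """Adding a fact past capacity knocks the stalest one off."""
        self.penguins[key] = value
        self.penguins.move_to_end(key)
        if len(self.penguins) > self.capacity:
            self.penguins.popitem(last=False)

berg = Iceberg(capacity=2)
berg.learn("a", "first fact")
berg.learn("b", "second fact")
berg.recall("a")               # refresh "a" so it survives
berg.learn("c", "third fact")  # capacity exceeded: "b" falls off
print(berg.recall("b"))        # → None
print(berg.recall("a"))        # → first fact
```

The behavior mirrors the conversation's point: what sticks around is what was seen most recently or touched most often, and the "correct" datum can fall off simply because it wasn't refreshed.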
It leads to weird places and unfortunate timing.
It's... I don't know what the right answer is on most of these things. Truly, I don't.
I am curious to figure out where we wind up going from here. And I think that there's going to be a
lot of, I think, things that we learn mostly by watching others get it wrong. The thing that
blows my mind is it's such an untested area where, okay, you now have a magic parrot that can solve
for a number of rote, obnoxious tasks super
quickly. I've had it do things like write an email canceling my service with this company, etc.
And then it just winds up spitting it out. I can copy and paste and call it good.
As a CISO, I imagine you'll appreciate the fact that I don't ever put things like account numbers
into these things. I can fill that in later. I'll use placeholders for names. I don't put in specific company data. And I don't have to worry about who's training on what at that point. But apparently, that's not a common approach. But then, okay, it speeds those things up. And it's great.
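That placeholder habit can be mechanized: scrub sensitive specifics out of a prompt before it ever leaves your machine, keep a local mapping, and restore the real values in the model's reply. The regex patterns here are illustrative stand-ins, not a complete PII detector:

```python
import re

# Sketch of prompt hygiene: swap sensitive specifics for placeholders
# before sending text to an LLM, and keep a local map so the real values
# can be restored in the response. Patterns are illustrative only.

PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,12}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str):
    """Replace matches with numbered placeholders; return text + mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def swap(m, label=label):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = m.group(0)
            return token
        text = pattern.sub(swap, text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Put the real values back into the model's reply."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt, secrets = redact("Cancel account 123456789 for jane@example.com.")
print(prompt)  # → Cancel account <ACCOUNT_NUMBER_0> for <EMAIL_1>.
# ...send `prompt` to the model, then restore() on its reply:
print(restore(prompt, secrets))  # → Cancel account 123456789 for jane@example.com.
```

The sensitive values never leave the local process, so there is nothing for anyone to train on; the model only ever sees the placeholders.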
But all these companies right now are talking about how it's going to revolutionize everything.
There's hype, just like the blockchain at its worst. But at least here, there's some
demonstrable value that can be found. It's just a question of how much of it is hype.
Yeah. And this is, I mean, I hate to minimize it in a way down to this, but it's the same thing we go through with every damn new technology. We get all excited, we scream and proclaim it's going to fix the whole world's problems, it's going to be revolutionary, and then we start to figure out, okay, this is what it can really do. You know, blockchain is a great example. Blockchain failed. Well, I shouldn't say it failed, right? It didn't fail,
and it's in use, but it didn't become this big overarching revolutionary changing the world
thing that everybody said it was going to. And the reason why was it was super complex.
And where we thought it was going to impact us the most were tasks that didn't need that level of complexity.
Sure, we created cryptocurrency off of it, which, okay.
Yay.
We've still yet to see where that's going to end up,
but it's not going where people who started it thought it was going to go.
Real soon now. It's been almost, what, 15 years?
And we're still waiting for it to, okay, demonstrate the value, please?
Well, I mean, just the volatility of it.
We've not seen that relax yet.
And you look at currency markets, and if there's one thing investors and anyone else values
in currency markets, it's the relative stability of those markets. Right. So it becomes a speculative asset. And
yeah. Oh, yeah. We can rant about this and wind up with the most odious people in the world in
the comments if we're not careful. And we've also seen how manipulatable it is. Right. I mean,
granted, we've seen people, you know, countries, nation-states, manipulate monetary values and things like that to some degree. But it's always open, and it's always clear what's happening, and there are countermeasures for that.
The problem, too, is, have you noticed the same people that were hyping cryptocurrency are now pivoting to hyping AI? And it made no sense to me until I realized
what's going on. They're NVIDIA's street team. They don't care what it is you're doing with them.
They just want you to buy an awful lot of NVIDIA GPUs.
I can see that. You know, it would actually make a lot of sense because who is benefiting from all
this? Yes, the GPU makers, NVIDIA being probably the biggest.
The only one of relevance. They're now a $2 trillion company, last I checked.
They're larger than Amazon.
So all those Bitcoin mining rigs just turned into AI rigs. And then, of course, you throw in there a little measure of deepfakes, which, you know, we heard was the opposite, right? That was going to revolutionize the world because it was going to make everything evil and horrible and terrible. Well, that we started talking about in 2018. Six years later, I'm still waiting. You know, we hear these little anecdotes: oh, somebody got breached by a deepfake audio.
I was reading an article recently about someone who, well, they start off by telling us how intelligent and sophisticated they are. Okay, great. And then the story continues on, and they wind up, like, getting the CIA involved and Amazon involved, supposedly, to the point where they're now taking $50,000 in cash out of a bank, putting it in a shoebox, and handing it to a stranger in a parking lot.
It's like, OK, there are several points here at which something should have flagged for you that this might be something weird going on here.
And that's exactly my point.
It's like, you know, the deepfake was the issue.
Deepfake videos, okay, we all know they exist. If deepfake videos did anything that was a threat to society, I think it's that they enabled the tinfoil-hat crew to come up with all new levels of conspiracy theory. Like, any video I see, I can just claim that it's a deepfake, and I don't have to believe that it's true. And there are people working on that problem, and solutions for it, but the fact of the matter is, it's just not that widespread of a problem that anyone needs a
technology solution to solve it. They don't. And instead it becomes something that people are
looking to slap it into everything. How many startups have been founded where, quite simply, it's basically an API call to OpenAI, and then it does a little bit of text manipulation and it comes back? Then they're surprised and terrified when an OpenAI feature release puts them out of business.
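The thin-wrapper product being described is small enough to sketch in full. The model call is injected as a plain callable so the example runs offline; in a real product it would be a single HTTPS request to a provider's API, and everything here (the prompt wording, the fake model) is invented for illustration:

```python
from typing import Callable

# The entire "startup": wrap the user's text in a prompt, call someone
# else's model, lightly massage the response. The model is passed in as
# a callable so this sketch is self-contained; in the wild it is one
# API request to a provider.

def thin_wrapper(user_text: str, model: Callable[[str], str]) -> str:
    prompt = f"Summarize for an executive:\n\n{user_text}"  # "scrubbing my prompts"
    reply = model(prompt)                                   # the actual product
    return "Summary: " + reply.strip().capitalize()         # "a little text manipulation"

# Stand-in for the upstream model; the whole moat lives behind this call.
def fake_model(prompt: str) -> str:
    return "  the report says revenue is up.  "

print(thin_wrapper("Q3 numbers...", fake_model))
# → Summary: The report says revenue is up.
```

Everything the wrapper adds is a prompt template and some string cleanup, which is why a single upstream feature release can erase the business overnight.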
How many companies were tanked when the only thing they did is they taught
Chat-Gippity how to talk to PDF files? Yeah, sure. It's never going to be something that
they figure out on their own. Oh my gosh, Corey, you just touched on something now.
You ready? I'm sorry, this is me getting up on my soapbox now, because what you're talking about is startup world in general. And now I'm really going to piss people off. So I'm hearing all the VCs right now saying, if she ever tries to
start a company, screw her. But here's the reality of startup culture right now. Startup culture is nothing more than find some minimally useful, just incremental improvement on existing stuff, declare it a revolutionary new functionality that nobody's doing and it's the greatest thing, and then produce a product on it, get enough suckers to buy it in three years and then sell, right? I mean, it is a,
it is a cookie-cutter approach, and it's why we have just a metric ton of God-awful, you know, different acronyms and things that we have to have all these products for.
And the reality is, these are just incremental features that your existing tool set could just build in, or maybe you could actually, I don't know, innovate a little and create yourself to work with your tool sets. And it happens across the board. So, yeah, we're seeing it with AI right now. And what happens? Oh yeah, hey, we got this cool new whiz-bang AI, blah blah blah. Well, yeah, how's it work? Oh, well, it's based on ChatGPT. So literally all you're doing is, like, scrubbing my prompts and then feeding them to ChatGPT and, you know, parroting back ChatGPT's response to me. What, again, to your point before, what do I need you for?
I've had a bunch of coworkers. Like, there's some people that
I've met over the years where it's, yeah, their job is going to be imperiled by what is effectively an automated bullshit generator.
And an awful lot of what they did was absolutely bullshit. It's their job function that this potentially winds up getting rid of, but not well.
Right. Not well. So, okay, what is it in your job function that was so easily replaced by this that then you couldn't expand your expertise, right?
And this is the other part.
Yes, I do have empathy, God, for anyone who feels that their job is legitimately under threat from this.
But it's also like, okay, we've seen that throughout the course of time, right?
That's the nature of everything in how our capitalist society works.
You know, technologies evolve and change.
Robots were going to replace autoworkers.
Yeah, they did in a lot of cases, right?
We reduced the number of autoworkers.
And yet we still have autoworkers.
What are they doing now? They're engineering the robots. I mean, they're programming them, they're maintaining them, they're doing all these other things.
So it's just the skill sets have to keep evolving because as we create new skill sets, we create new
tools to automate those skill sets. This is, I mean, we can go back, you know, centuries and see this occurring.
It is nothing new, that concept. So that's why I try to encourage people who do feel threatened.
It's like, yeah, it might change your job. It might mean you need to retool your brain a little
bit to do something else. But this is just the natural progression. It doesn't make AI any more evil than, you know, heavy machinery did when it took over mining.
Remarkably few jobs look the same as they did 100 years ago.
Everything is being upgraded, replaced, et cetera, as far as tooling goes.
The only exceptions are specific artisans who are doing things from the perspective of, yeah, no, we want
to do this the same way that we've done this for centuries. Great. But agriculture has been upended.
You see it with masons, with all kinds of folks doing different things with different tooling.
Why would office workers necessarily be any different? I also think that some of the
enthusiasm comes from the idea that it's very good at making code suggestions because computer programming languages are a lot more syntactically rigid than human
languages. So it becomes easier to wind up bounding the problem space there. It doesn't work quite as
easily when we're talking about interpersonal communication. There's more ambiguity that creeps
in. Yeah. Or even some level of creativity, right? When we think about code, think about some of those more elegant solutions you see to
a problem when you're coding, right?
We're not seeing a whole lot of that from AI yet.
We're seeing very well-written code.
As you said, it follows some of the strictest conventions we
wish our programmers would follow.
But there's also some of those really just elegant maneuvers that you see developers make that I've not seen AI start doing yet.
And I've looked at a lot of code generation in AI,
and I've seen a lot of really impressive stuff.
But there are just some really elegant things.
Do I think AI could go back to the day that someone first invented the idea of
a bubble sort? Do I think AI is going to create something like that from scratch? No, because
again, it's basing its knowledge off of everything that we've already created, the syntax, the
naming conventions and other rules and things. I don't see that level of innovation coming out of AI
at this point yet. Now, maybe in the future that changes. Maybe we get better at how we're
creating these systems and maybe I'm totally wrong and they will be better than the human brain.
But I think we got a long way to go. I think there's an awful lot of boosterism in the space.
I think that there are people who have a definite financial incentive to overhype the crap out of this stuff.
And maybe it works.
Maybe it doesn't.
But I think that you don't make headlines by being rational or conservative around these things.
You definitely do by saying it's going to change the nature of human existence.
I don't know.
I've seen a lot of things make that promise.
Very few have delivered. Yeah. Oh, the history of technology. Good God. And yeah, for me in
cybersecurity, like, God, every new product that comes out is going to change the world.
Where are we at with zero trust? How's that going? My God, I am so tired of the constant
hyping on this. You think this year at RSA, the expo floor is going to be full of AI-powered stuff?
Which is, of course, the same stuff it's always been, but they're going to say AI.
Oh, there's going to be...
Oh, God.
I'm so glad I'm not going to RSA this year, just for that reason.
I live here.
I'm sort of forced to.
Oh, yeah.
That's fair.
But that's also part of why I'm not coming, because y'all's city is way too expensive.
I've noticed that.
Yeah, I'm sure you have.
I would love to go, but yeah, that's a big chunk of it.
But no, I mean, yeah, we know that.
Last year, Palo Alto had Zero Trust plastered
on every billboard in the entire city, I think.
And with everyone you talked to, AI and zero trust
were the two things you heard about.
You know, the year before that,
it was blockchain and AI, right?
I mean, it just, man,
it's always just one of the buzzwords
and they just shift.
And, you know, everybody's looking
for that cool new marketing term they can use.
EDR, MDR, XDR, SASE,
I mean, just keep throwing things out there, right?
I mean, and it's all meaningless at the end of the day when it all still ties back to simple concepts that we were talking about 27 years ago when I entered the industry and
probably well before that.
And yet here we are.
It seems like so much has changed, but also so little has.
Right? Well, what's changed is the technology, quite honestly. You know, I get board members who ask me,
"So when are we secure enough?" My answer, every time: when technology stops changing and you stop
creating new shit. Because as long as we're making new products and using new technologies, cybersecurity is always going to be evolving. And technology is always going to keep evolving,
because as we create new tech, and then we create new tech on top of new tech... again,
it's a tale as old as time. Oh, now I sound like Disney.
You really do. Can you put that to music, please? I think I just did. And I'm going to get you sued, because, you know, Disney's coming in here with their copyright. I hear they're kind of, uh, very diligent about defending their...
Oh, yes. One does not play copyright games with the mouse.
No, no. I really appreciate your taking the time to talk about how you see things evolving. It's great to catch up with you. If people
want to learn more, where's the best place for them to find you? God, I don't know. I have to admit, and I say this with a certain level of pain in my
stomach, it's probably still Twitter. Twitter, the thing with the bird. I don't care what he says.
Twitter. That said, I'm also on Blue Sky. My handle's pretty much the same everywhere. It's
AlyssaM underscore InfoSec. And we'll put a link to that in the show notes, because that's what we do here.
That's awesome. And I mean, LinkedIn, I know you've got that info too. It's just a dash instead of an underscore,
because, you know, LinkedIn puts it in the URL, and you can't use underscores in URLs. So, the joys of
computers. Thank you so much for taking the time to speak with me. I really appreciate it. Yeah, thank you for having me.
It's always a blast.
And wow, that went by fast.
It really did.
Alyssa Miller, CISO at Epic Global.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform
of choice.
Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you didn't
read, because you just had some chatbot create it for you.