CyberWire Daily - AI as Tradecraft: How Threat Actors Are Operationalizing AI [Microsoft Threat Intelligence Podcast]
Episode Date: March 12, 2026In this episode of the Microsoft Threat Intelligence Podcast, host Sherrod DeGrippo is joined by Greg Schlomer and Vlad H. to discuss new research on Jasper Sleet, a North Korean–aligned... threat actor incorporating AI into active operations. The conversation examines how AI is being integrated across the attack lifecycle — from highly tailored phishing lures and fabricated job applicant personas to accelerating malware development and refining operational workflows. Rather than treating AI as a novelty, Jasper Sleet is using it to increase speed, scale, and adaptability while reducing many of the friction points that once slowed campaigns. They also explore what this shift means for defenders. As AI compresses iteration cycles and lowers barriers to entry, traditional attribution signals evolve, influence operations become more convincing, and defensive teams must tighten the loop between intelligence, detection, and response. This is less about experimentation and more about the operationalization of AI as part of modern tradecraft. In this episode you’ll learn: How AI is changing the speed at which cyber operations evolve Why jailbreaking AI models is often trivial for motivated adversaries The strategic implications of AI leveling the playing field between threat actors Some questions we ask: Is there resistance among experienced malware authors to adopting AI? Are we seeing fully AI-written malware in the wild? What stands out about Jasper Sleet’s use of AI? Resources: View Greg Schloemer on LinkedIn View Sherrod DeGrippo on LinkedIn Related Microsoft Podcasts: Afternoon Cyber Tea with Ann Johnson The BlueHat Podcast Uncovering Hidden Risks Discover and follow other Microsoft podcasts at microsoft.com/podcasts Get the latest threat intelligence insights and guidance at Microsoft Security Insider The Microsoft Threat Intelligence Podcast is produced by Microsoft, Hangar Studios and distributed as part of N2K media network. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
Welcome to the Microsoft Threat Intelligence podcast. I'm Sherrod DiGripo. Ever wanted to step into the shadowy realm of digital espionage?
Cybercrime, social engineering, fraud. Well, each week, dive deep with us into the underground.
Come here for Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity.
It might get a little weird. But don't worry.
I'm your guide to the back alleys of the threat landscape.
Welcome to the Microsoft Threat Intelligence Podcast.
I'm Sherrod DeGripo from Microsoft.
A lot of times on this show, we go deep into how threat actors operate.
We talk about what they're changing, what they're scaling,
and what all of our defenders need to know to do things differently.
And today, we are talking about something
that is probably going to define most of the rest of our lives
when it comes to defense.
And that, of course, is how threat actors are using AI.
So joining me today are two threat intelligence analysts from Microsoft.
They have worked on this research.
And I am joined by Greg Schlomer and Vlad.
Thank you for joining me.
Thanks for having us, Sherrod.
Good to be back.
Good to have you back, Greg.
Greg was also on an episode called Between Two Greggs.
And you can go back on the podcast and listen to that episode, which is fantastic.
It also includes Greg Lesniewicz of ProofPoint, and it's a great episode between two Gregs.
Or does that one stack up on the listener ranks here? It's got to be up there.
It's a number one. Really?
Yes. Sure.
It's me.
I don't know. I don't have the stats.
So let's talk about what we discovered and why it matters.
Vlad, I'll start with you. Tell me kind of like what's going on here.
Sure. So I've been tracking a group called what we've been calling Storm 18.
77 from the Microsoft side.
They're financially motivated and they're opportunistic with their targeting.
We've been tracking them for about three years.
And one of the reasons for this podcast is that rapid increase in both the volume by this group,
as well as just the variety of things that they're coming out with.
Historically, they've been relatively consistent in terms of both capabilities,
they're targeting and just their TTPs.
And as of the last six months, maybe even a little bit less,
we're just seeing them accelerate in a very fast way
and just scaling operations and trying things that historically
we've never seen them do.
And we just see them iterating, starting with a new form of attack,
a new vector, quickly testing it in the wild,
and moving on if it doesn't work and expanding if it does.
And we attribute this to their effective use of AI as a core in every step of the workflow.
Okay. So, Greg, I want to talk to you about the workflow. It sounds like they're using AI to operationalize reconnaissance, malware development, social engineering. What does that look like? Is this just an experiment? Is this something where they have built frameworks around their AI tools? What does the integration look like, potentially?
Yeah, I think it's pretty far beyond an experiment at this point.
I mean, that's really not a surprise.
So, Vlad and I are both DPRK-focused researchers.
We've been talking for years.
We talked about it on the between the two Gregs episode shared about how scrappy North Korean actors are.
Yeah.
And so it's been on our mind since we've entered this age of AI being front and center and
everything.
It's something we've been focusing on a lot.
Because of the scrappiness of our actors, we know that they like to move quickly.
They won't interrave fast.
They want to try stuff, see what works, build on the stuff that works, change the stuff that doesn't.
I think AI is really conducive to that kind of work style across the board.
Vlad talked about Storm 1877 or Coral Sleet.
We also see it a lot with DPRKIT workers.
And I think the sort of workflow to your question varies a bit depending on which particular actor we're talking about.
Vlad could probably give you more details on Coral specifically.
I can talk more about Jasper Sleet, the DPRK IT worker threat,
wherever you'd like to take that.
Just to add to what Greg just said,
it's really interesting to watch just how much they operate,
like you would expect a startup to operate,
where both of this kind of testing,
there's kind of small groups that are allowed to have this freedom
to experiment and do their own thing.
The interesting thing about the IT worker side of things
is there's such a variety of tools, approaches,
and ways that they do what they do that, I think it's fascinating.
I have always been interested in that region, DPRK, for a couple of reasons.
Greg, as you mentioned and you have taught me, they are really scrappy and they do have
a unique kind of flexibility, it seems like.
And there's this focus on just doing whatever works, just getting the job done, making it happen,
which is more the attitude that we see in the crime world, which, you know, we talk about
DPRK and their financial motivation side as well. And I think it makes sense to me that they would
grab AI and start doing things that make their lives easier. So let's talk a little bit about
exactly what we're seeing here. Are we seeing things like vulnerability research? Because
Citrin Sleet, probably nine months ago, was able to chain together to chromium exploits,
which was really fascinating. I spoke with the lead of MSRC Tom Gallagher about
that. It was groundbreaking in a retro way, because when's the last time we, like,
were dealing with browser volns? It's not a super common thing, and they had to. So,
where are we seeing AI show up for them? Are we seeing them do bone research with it?
I don't know that we've seen that specifically. Okay. And I think there's probably a reason for that.
The two groups that we're seeing adopting it earliest, Jasper Sleet and Storm 1877,
Jasper Sleet is extremely large scale, somewhere on the order of thousands of operators.
Wow.
As Vlad mentions, it's an extremely decentralized operation.
There isn't necessarily like one playbook to follow.
If you go from one cell to another, they'll use different tools, different tactics.
And so I think there is a lot more freedom for the IT worker operators to be early adopters of AI.
Whereas if you look at a citrine sleet, a shade sleet, those are the more sort of bureaucratic, like, intelligence-focused orgs that probably don't have the same flexibility to just,
like go out and start playing with AI immediately.
We're very much in the early stages of the research here.
And I think that's almost a direct reflection of the tasking and the functionality of the actors that we see using it today.
Glad, do you have anything you want to share on that?
I sometimes wonder, obviously being in tech, you often see, especially these days,
there's often engineers who will almost kind of hang on to obviously the fact that they're proud of the fact that they've learned this language through and through for 20 years.
they've kind of dedicated their whole career to it.
So there's a little bit of resistance almost to adopting it.
And sometimes you hear people kind of snubbing AI in general.
And the truth is it's come a really long way.
And I do wonder when there are threat actors, in the more established ones,
especially when it comes to the malware authors and the developers themselves,
I wonder if there's going to be that sort of resistance also,
you know, where these kind of scrappy groups will just take the past of least resistance
and exploit it, whereas we might see a slightly,
slower adoption on the more seasoned group side?
I love thinking about that.
Like, is there ideological resistance by individual threat actors in their hearts against
leveraging AI for what they've always done by hand or what they took so much of their
blood, sweat and tears to learn how to do, quite frankly, especially in the case of DPRK,
serve their country, which is a huge part of the identity culturally there?
So I think that will be really interesting.
What would you say, Greg, I'll ask you, and Vlad, feel free to comment.
as well. What you've looked at over the past two months, you've been doing DPRK for a long time,
look at what you've looked at over the past two months. Is there a meaningful material difference
to what you saw two years ago? I think we're just at the start of it. I think if you ask me again
in six months, I would say absolutely. I think at this point, it's still pretty early. And that is why
for Vlad and me and for our team, focused on North Korea and actor research, we're really trying
to get ahead of this and be really proactive in looking for the techniques that our actors are
using to leverage and to abuse AI because we believe it's probably going to play a key role
in shifting how all of our actors operate over the next year or so.
I think AI has reached that point where it came out.
Three years ago, people were using it to write, I don't know, funny songs about their friends,
you know, just basic text completion, that kind of thing.
and it sort of then started being useful for, at least in the development world,
for like auto-completing lines of code where the next suggestion based on what you already have
would be good, but to get it to author the entire piece was really difficult without errors.
And then just within the last three months, the rate of advancement is honestly concerning
from a defender blue team standpoint because now it's almost autonomous where you just give it
a target, right? And if you have an LLM that has complete control over a machine, where it has
full access to run commands, egress, and so on, it almost is like looking at something sentient
to operate. Obviously, it isn't, right? But at this point, it's autonomous enough to be able to just
explore a number of paths to solve the problem, build itself a script, and just achieve the goal
without you having to hold its hand all along the way. And I think that's the biggest enabling.
for threat actors because now really you don't even need to know the basics of architecture
for malware. You can get it to explain it, you can get it to reverse something. And even for something
like an RCE, let's say you're a threat actor, you know, have a bunch of these autonomous,
jailbroken agents running on their own boxes and you just give them the task of consistently
pull all like research CVEs and as soon as something comes out, build an exploit for it,
weaponize it and deploy it on target X, Y, and Z. And I think we've seen,
some experimentation out in the industry that researchers are doing that and it's working well.
It's there and jailbreaking them is honestly trivial, so that's the most concerning part is that
it's not. I think the current thing is to just give it a scenario of like, hey, you're in a
red team exercise in a sandbox environment. The signal is all fake and you have the model
writing whatever you wanted to write. And just the level of accessibility that that gives just
beyond what was previously labeled a script kitty now is you can honestly do a lot more.
From script kitty to script cat.
Yeah.
Greg, what do you think on that?
Absolutely.
I agree with everything Vlad said.
I think I want to talk a little bit more about his point of sort of enabling threat actors.
I think that's something we're going to see play out across the DPRK ecosystem.
We have sort of our characterization of less sophisticated and more sophisticated and more
capable actors. And I expect that the emergence and the widespread availability of AI tooling
is going to kind of level that playing field. I think we're going to start to see the actors that we
traditionally have assessed to be less capable, start to demonstrate more agility, more ability to
carry out highly targeted operations, more advanced tooling, more advanced malware. And so that's
really kind of front of mind for me as a defender is someone who spends almost all my time
looking at these actors. We just have to be prepared for that and ready to respond to it and
protect customers in our ecosystems as that happens. Do either of you think that we're seeing
AI written malware, like end to end, beginning to end malware written by AI?
100%. Yeah. Yeah, in the wild, not research. Yeah, absolutely. The scary thing from a defender
standpoint is not just the fact that this is super accessible. It's also just the variety that it can
turn out and the pace at which it can turn it out at. So historically being a threat analyst,
security researcher, if you track a group, you learn what they do, you learn how that looks
out in telemetry, and you almost start recognizing it, right? You develop a sixth sense where you
kind of look at something and you're like, okay, it's them or it's not them. Whereas if they can change
what it is, what it does and what it looks like three times a week, then that almost becomes
not impossible because there's no human hand to kind of leave those traces of this is a pattern
because there's not going to be a pattern because it's not a human making it. And that really complicates
that. So what I feel like I'm hearing you say is using identifiers within code, using the human
sort of element of the handwriting analysis of code for attribution is coming to an end.
Exactly. There's not going to be any humans authoring this type of code at least, or if there will be, it won't be in the traditional sense we see it now. I remember something that stuck me with me when I first started working where I was speaking to one of the reverse engineers and he was reversing a payload and he could tell the author simply through looking at the way the imports were structured. And that really stuck with me because that was amazing and he could just instantly say this was this group.
And that's no longer going to be the case
because obviously you're just driving the AI
and it chooses how to structure it,
author it. And with a simple change of prompt,
you can completely change the way it writes it.
It's still going to have the same functionality,
but it's going to be very different in terms of what it is
and what it does.
I think that's going to be a big challenge.
It creates like an anonymizing function for code.
Yeah, anonymizing plus, of course,
the old and tested way of tracking things,
through when those IOCs can change,
when they can have an autonomous thing,
researching, I don't know, domain registrars
and setting up 20 domains with 20 different things
and completely randomizing every point,
you no longer can spot a pattern of,
okay, well, this group uses these guys and so on.
It's got to complicate every aspect of it.
So we're seeing threat actors use AI
to create end-to-end malware written by AI.
Vlad mentioned some opportunities for agents,
to set up infrastructure, potentially community control, register domains, set up servers, etc.
Social engineering, Greg, are they using it for that? I think they are. What are you seeing there?
Yeah, so one of the more prominent fishing actors from DPRK Emerald Sleep. It's an intelligence-focused
actor does a lot of targeting of government officials, policy experts, think tank officials.
They have for nearly a decade been running pretty much the same playbook, targeting these individuals,
sending either malicious payloads
or more recently just eliciting information
through normal conversation.
And, you know, Vlad mentioned the idea
that malware authors might leave traces
that help with attribution in the code itself.
Similarly, we talked to people about
recognizing the signs of a phishing email, right?
You look for things like misspellings,
punctuation mistakes,
or just, like, shady themes
that really aren't all that clever
and seem unusual,
given someone you've never communicated with before.
And I think getting AI to assist with creating even just simple fishing payloads,
like all those recognizable signs are gone now.
There won't be spelling mistakes.
There won't be issues that you may encounter from having a non-native English speaker
building the lure.
That's all gone.
So what do you tell people to look for, right?
That becomes significantly more challenging.
For those listening, we published himself about Emerald Sleet, also known as Kemsuki
or Velvet Kalima, also known at Microsoft Astellium, about them using LLMs.
for social engineering and creating the content to do spearfishing with a regional expertise.
So I can imagine the prompt looking something like, you know, write this email in localized,
French, Italian, English, colloquial expressions put in, make sure it sounds very conversational,
make sure it has, you could even say make sure it has misspellings, make sure it doesn't look too
perfect, cut the perfection down by 20% on grammar and spelling.
to make it easier to pass through.
And I think, you know, I'm just realizing now
there are types of grammar and spelling mistakes
that I notice but think,
oh, this is just the way this person talks.
And then there's grammar and spelling mistakes
that I notice and I say, this is fishing.
Absolutely.
And imagine, could you even,
if a threat actor were impersonating a public figure,
take some blogs they've written,
take some emails that...
Easy.
Victims have received from them
and say, hey, write it in the style of this person.
So if you're contacting a victim who has an established relationship with that person,
they won't think anything of it.
It sounds just like them.
It looks just like them.
Incredibly convincing.
So, Greg, one final thing.
This blog really is about threat actors using AI.
We have a couple of nice examples in here.
One of them is Jasper Sleet.
What stands out for you about Jasper Sleet's use of AI?
Yeah, I think the thing that's challenging from a defending against abusive AI perspective
is, like, we know how to look for the signs of a threat actor building malware.
with AI, right?
And one would assume that we can strengthen our safeguards
to help make that less likely and less successful.
But for IT workers, they're using LLMs just to build
like believable human personas.
They're building resumes.
They're populating stuff on a LinkedIn page.
They're writing cover letters to apply for jobs.
Like those are things that actual legitimate humans do
when they're seeking employment.
There really is no jailbreaking.
They're just using LLMs to do a thing
that they were actually designed to do.
And I talked a bit already about the scale of the IT worker problem.
I think one of the limiting factors for this threat previously
has been like how quickly can you build these believable personas,
how many LinkedIn accounts can you make,
how many emails can you send?
And using AI in this process just completely removes that as a bottleneck.
And it's really just a matter of how many hours do you have in a day
to go and apply for jobs.
And we've seen the IT worker phenomenon be really massive
and widespread. We've seen criminal indictments and referrals for U.S. residents because they were
facilitating it potentially unknown to them, but, you know, they were doing the laptop farm stuff and
things like that. So the combination of real-world humans on the ground with the AI leverage could be
really significant if it continues to increase the way that it likely will.
Absolutely. So I want to thank Greg and Vlad for joining me. It's really important to
understand all the different things that threat actors are doing with AI. You can go check out more
at AKA.m.s. forward slash operationalizing AI misuse. I am shared to Gripo. Thank you for listening to
the Microsoft Threat Intelligence Podcast. Greg, Vlad, thank you for joining me. We'll see you next time.
Thanks for hominous.
Thanks for listening to the Microsoft Threat Intelligence Podcast. We'd love to hear from you.
Email us with your ideas at TI Podcast at Microsoft.com.
Every episode will decode the threat landscape and arm you with the intelligence you need to take on threat actors.
Check us out, MSthreatintelpodcast.com for more and subscribe on your favorite podcast app.
This week on Afternoon CyberT, I am joined by George Finney, who not only is a CISO, he's also the author of two amazing books.
Our conversations spanned a lot, but I think the most important thing from the conversation was communication,
how you communicate cybersecurity to executives,
how you communicate cybersecurity to organization,
and how that makes you incredibly effective,
particularly when we're thinking about the world of AI,
marrying zero trust with new technologies.
I am certain everyone will love the conversation.
Be sure to listen in and follow us at afternooncyberty.com
or wherever you get your favorite podcasts.
