CyberWire Daily - False flags and real flags. ISIS claims the Strasbourg killer as one of its soldiers. A bogus bomb threat circulates by email.
Episode Date: December 14, 2018. In today’s podcast, we hear about false flag cyberattacks that mimic state actors, especially Chinese state actors. Chinese intelligence services are prospecting US Navy contractors. Russia’s Fancy Bear continues its worldwide phishing campaign. ISIS claims the career criminal responsible for the Strasbourg Christmas market killings as one of its soldiers. And a bogus bomb threat is being circulated by email. Call the technique “boomstortion.” Malek Ben Salem from Accenture Labs on smart speaker vulnerabilities. Guest is Laura Noren from Obsidian Security on data science ethics. For links to all of today's stories check out our CyberWire daily news brief: https://thecyberwire.com/issues/issues2018/November/CyberWire_2018_12_14.html Support our show. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash n2k, code n2k.
False flag cyber attacks mimic state actors, especially Chinese state actors.
Chinese intelligence services are prospecting U.S. Navy contractors.
Russia's Fancy Bear continues its worldwide phishing campaign.
ISIS claims the career criminal responsible for the Strasbourg Christmas market killings
as one of its soldiers.
And a bogus bomb threat is being circulated by email.
Call the technique boomstortion.
From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, December 14th, 2018.
Happy Friday, everybody. Thanks for joining us.
China has come in for considerable criticism in recent weeks for its cyber operations, particularly those devoted to industrial espionage.
It's displaced, at least for now,
Russia as the prime adversary in American policymakers' public statements,
as we've heard this week in testimony and comment before the U.S. Senate Judiciary Committee.
That China is an assertive, indeed aggressive, cyber power
isn't really open to serious question,
but criminals are increasingly flying Chinese false flags in attacks that have little or nothing to do with Beijing.
Fifth Domain notes that this is an attractive ploy for criminals interested in deflecting attention from themselves.
It's particularly easy to sail under false Chinese colors,
not only because a lot of people are disposed now to believe that if it's hacking, it's probably China,
but because Chinese intelligence services commonly make use of widely available tools
that many criminal hackers can get their hands on.
Attacks in Russia also suggest that criminals are trying to pass themselves off as intelligence services,
the better to deflect official suspicion.
Researchers at security firm Cylance say that the recent attack on state-owned oil company Rosneft
was framed to look like a nation-state attack.
In reality, the hackers in that case were just criminals.
That said, there are surely nation-state campaigns afoot.
China is probing U.S. Navy contractors, the Wall Street Journal reports,
looking for all manner of detail about naval technology.
And Russia's Fancy Bear is still phishing widely in foreign governments' ponds.
Non-state actors are reappearing during this holiday season, too.
ISIS has for some time been relatively quiet in cyberspace,
but its propaganda arm this week
hailed the Strasbourg Christmas Market murderer
as one of its soldiers.
The terrorist, killed by police,
was apparently radicalized in prison.
Whether ISIS played a role in inspiring him
or is simply retrospectively and opportunistically
claiming responsibility is unclear.
But the terror group, as always, is attentive to the seasons and its propaganda.
A fake bomb threat is being used to extort Bitcoin from businesses, mostly in the US and Canada.
Several businesses closed and evacuated their offices, but no bombs were found.
The threats are being distributed with a demand for $20,000 in Bitcoin, payable by close of business. The subject line of the shakedown
email is Hollywood-esque. Think twice, things like that. The text goes on in the broken English
that's become customary in spam land. We quote, there is the bomb in the building where your
business is located. My recruited person constructed an explosive device under my direction.
It has small dimensions and it is very hidden well.
It is impossible to damage the supporting building structure by my bomb,
but there will be many wounded people if it detonates.
My man is controlling the situation around the building.
If any unnatural behavior, panic, or emergency is noticed, he will power the device.
I want to suggest you a deal.
You send me 20,000 in Bitcoin and the bomb will not detonate, but do not try to fool me.
I warrant you that I have to call off my man solely after three confirmations in blockchain network.
The poorly worded email threats bear the common usage and grammatical markers of spam,
but it's just badly done.
Connoisseurs of spam will notice that the missive lacks the appealing shimshara-bim of the way the Shadow Brokers used to talk,
and when we read stuff like this we miss the brokers,
and we hope they got a better job somewhere,
maybe with wealthy elite on some personal service contract.
Whoever they are, they seem to be explosive buffs.
Apart from their mention of TNT,
the scammers in some of their communications specify the explosive as hexogen.
Our CyberWire Energetic Materials desk tells us hexogen is a plasticized form of RDX,
which, pound for pound, packs even more punch than TNT.
Ars Technica points out reasonably that not even someone who writes
like this can seriously expect to make money this way. It would take regular Joe Lunchbucket
and Janie Sixpack, and those are people like you and me, my friend, well past close of business
to figure out how to get a hold of some Bitcoin. Even a Bitcoin baron would likely think twice and
call the police. Wired said this morning that the total sum that
appeared to have been deposited in the five or so Bitcoin wallets amounted to less than two bucks.
So if you follow Ars in their speculation, it would seem that either the goons behind the
keyboard haven't thought this one through, always a possibility in the underworld,
or they're doing it for the lulz, or they're actually just interested in disruption.
But unlike sextortion, which this threat is clearly modeled on,
a bomb threat, even an implausible one, is harder to laugh off than a promise to show pictures of you looking at adult content,
which of course none of you would do, but maybe your friends would.
In all seriousness, most people have to take bomb threats seriously, and many of them have.
The San Francisco Chronicle says the local municipal railway's bus lines,
the Jewish Community Center, and the San Francisco Fire Credit Union were disrupted.
ABC 7 Chicago says that multiple hospitals and businesses in that city closed.
And the Tampa Bay Times says there have been building closures and
school lockdowns in Tampa.
Do what you need to do to keep your people safe, but take comfort from the fact that
major police departments across North America are calling this one a hoax.
The U.S. Department of Homeland Security's National Cybersecurity and Communications
Integration Center, the NCCIC, part of the Cybersecurity
and Infrastructure Security Agency, says this is a worldwide campaign. They recommend you do three
things if you get this email. First, don't respond or try to contact the sender. Second, don't pay
the ransom. And third, report the email to the FBI's Internet Crime Complaint Center or your local FBI office.
A writer posting over at the SANS Institute suggests Boomstortion or Bombstortion as a name for this kind of caper.
We're going to go with Boomstortion.
Calling all sellers.
Salesforce is hiring account executives to join us on the cutting edge of technology.
Here, innovation isn't a buzzword.
It's a way of life.
You'll be solving customer challenges faster with agents, winning with purpose, and showing the world what AI was meant to be.
Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this.
More than 8,000 companies like Atlassian and Quora
have continuous visibility into their controls with Vanta.
Here's the gist.
Vanta brings automation to evidence collection across 30 frameworks,
like SOC 2 and ISO 27001.
They also centralize key workflows
like policies, access reviews, and reporting,
and helps you get security questionnaires done
five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta
when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals
to bypass your company's defenses
is by targeting your executives and their families at home? Black Cloak's award-winning
digital executive protection platform secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact, over
one-third of new members discover they've already
been breached. Protect your executives and their families 24-7, 365 with Black Cloak.
Learn more at blackcloak.io.
And joining me once again is Malek Ben-Salem. She's the Senior R&D Manager for Security at
Accenture Labs.
Malek, it's great to have you back.
We wanted to touch today on some vulnerabilities with smart speakers,
specifically ways that they can misinterpret commands.
What do we need to know today?
I think a lot of people by now have heard about adversarial examples
against computer vision systems, particularly
those that are being used by self-driving cars, where you can have the vision system
misinterpret signage. If they see a stop sign, sometimes that could be misinterpreted as a
speed limit sign by adding some perturbation to the image that they see.
Well, a similar thing happens also with smart speakers that are listening to voice commands.
So you can issue a voice command to, you know, your Alexa or your Google Assistant or your Apple Siri. And there is a possibility for
the attacker to add noise that can be misinterpreted by that system as a real command.
Now, we've seen this before with something called the dolphin attack, where, you know, that noise is added, it gets misinterpreted,
there is some illegitimate action that happens that is taken by Alexa or Siri, etc.
But in that case, the noise is heard by the user.
So they may be aware that something wrong is happening.
What we're talking about here is that that noise can be designed or engineered in a way that it looks or it sounds very normal.
You can embed it, let's say, within a song. So you'd be thinking that you're listening to some
song, but, you know, that noise that was added, that bad noise that was added,
that perturbation that was added to the sound of the song,
the sound bites of the song,
can be misinterpreted by your digital assistant as some command.
This attack has been tested,
and you can embed that sound in a YouTube song, for instance.
You can publish that song.
And everybody who would be listening to that song would be vulnerable to, would be a victim of this type of attack.
And what's the specific vulnerability here?
What sort of information could they harvest by triggering the device?
So they can issue any command that the normal user would issue.
So they can, you know, read email, have Google read email, have Google restart a phone,
have Echo open a front door, for instance.
And they can, you know, do some,
say, Capital One credit card payment.
These have been successful attacks that have been tested by the
researchers conducting this research
at success rates that, you know, reach ninety percent.
And is there any effective way to prevent this? I suspect, you know, if you want to be able to use the functionality of these devices, they need to be listening all the time.
Yeah. So what can be done is, again, looking back at these machine learning models that we develop to interpret sound,
to interpret these acoustic models that are listening, that are interpreting that sound and transforming it into text.
Those have to be hardened and made robust against these types of adversarial examples.
So it's basically securing the machine learning models that we're creating. They will never be 100%
secure, but what we can do is, again, make them more robust. There are techniques to do that
by training them through the adversarial examples up front, but that effort has to happen, again,
similar to what we're doing with vision systems, I think we need to be thinking broadly across all machine learning models.
We need to be thinking that AI and machine learning is creating a new attack surface.
And we need to be aware of that attack surface and start thinking about ways to reduce it by rethinking about the way we train and develop our machine learning models.
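The adversarial-perturbation idea Malek describes can be sketched in a few lines. The following toy example is a hypothetical stand-in for a real acoustic model, not the researchers' actual audio attack: a linear classifier plays the role of the command recognizer, and a fast-gradient-sign-style nudge to the input flips its decision. All weights, inputs, and the epsilon budget are invented for illustration.

```python
# A toy linear "wake word" classifier standing in for a real acoustic model.
# Everything here (weights, inputs, epsilon) is invented for illustration.

W = [1.0, -2.0, 0.5]   # model weights over three audio features
B = 0.1                # bias

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def hears_command(x):
    return score(x) > 0

def sign(v):
    return (v > 0) - (v < 0)

# A benign audio frame the model correctly ignores.
clean = [-1.0, 0.5, 0.2]

# FGSM-style perturbation: for a linear model, the gradient of the score
# with respect to the input is just W, so nudging each feature by
# eps * sign(W) raises the score as fast as possible for a given budget.
eps = 0.9
adversarial = [xi + eps * sign(w) for xi, w in zip(clean, W)]

print(hears_command(clean), hears_command(adversarial))  # False True
```

Adversarial training, the mitigation Malek mentions, amounts to generating inputs like `adversarial` during training and teaching the model to keep rejecting them.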
Malek Ben-Salem, thanks for joining us.
Thank you, Dave.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity.
That's why we're
thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping
unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
My guest today is Laura Noren.
She's Director of Research at Obsidian Security,
a company building machine
learning-based technologies to support enterprise security, where she focuses on data science,
ethics, and human-centered design. Data science, as probably anyone who's ever done it knows,
it's a very, very important part of a product build, but it has to follow engineering build.
So a lot of what we work on is getting
the engineering right. And then once we've kind of built our infrastructure and built our pipelines,
which are designed for data science purposes, then we get to start ingesting data and building
models around that data. And so at what point does the importance of ethics come into play?
So in my opinion, ethics comes in pretty much throughout. And it actually really is helpful if the data science team has been involved in building some of the engineering infrastructure, because what we want to aim to do is to be able to ask questions about the broader impacts of the technology that we're building. And this would apply to any technology firm.
Essentially, technologists are kind of world makers, world builders.
They're shaping the way that people are able to inhabit the world.
And of course, they're companies.
So they're aimed at a particular corporate purpose or set of purposes.
But they typically aren't asked to think about broader social impacts
because it's not in the day-to-day
operations of how companies work. But we are starting to ask those questions very early.
And cybersecurity is a particularly interesting area in which to do this because we're pitting
something that's very important, security, which is usually afforded at the collective level.
You do security for an entire company or an entire country.
That's what we're in the business of doing.
And that is often perceived as being at odds with individual privacy.
That's not always the case, but in data science,
that's kind of a crux that you run into a lot of times.
And it's not just cybersecurity that runs into this problem.
Marketing runs into this problem of, you know,
how do you make predictions about who's likely to buy your product? That sometimes feels
like it might be challenging ideas about privacy. You're looking at signals in a large corpus of
behaviors. And in order to do that, you need to have, or it's useful with data science to have
individual insight, insight into what individuals are doing.
And then that's where you run into questions about privacy, which is one of the ethical concerns that we have, although it's not the only one.
Yeah, it's interesting to me because I think it would be correct to say that not all companies have people on board who are specializing in the ethical side of things.
And I suppose at Obsidian, that's something that
the powers that be have decided is a worthwhile investment.
Yeah, it is actually kind of a truism if you look across, you know, which companies are the
most likely to have a chief ethics officer. I mean, now that's, you know, anyone operating
in Europe because they have to following GDPR.
But if you look before that, companies like Microsoft had a chief ethics officer and really put that person right next to the CEO's office.
It's older companies that have made a few mistakes and have run into some significant regulatory hurdles.
Companies that are older have usually been the ones that have
these ethics roles in them. And it's usually because their technology has run out ahead of
themselves or the business decisions they're making have kind of gotten ahead of where
regulations are and then the regulation catches up and it's costly. Those are usually the companies
where we see this. So it is particularly unusual to have a startup that's trying to build in ethics
from the very beginning.
Within the organization itself, is there a natural, I suppose, almost healthy tension within of,
I can imagine the marketing folks want to achieve certain things, the technology folks want to achieve certain things. And so I could see there being push-pull between those,
even the legal department, between them and what you're tasked with doing.
Yes. I mean, legal tends to be very interested in compliance, which is great. Compliance,
you know, any law is always reactive to a situation, so it tends to lag a little bit
behind what an ethicist might want to do. So legal isn't necessarily antagonistic to ethics.
It's really, they're not the same. Legal is usually fairly supportive of
what we're trying to do, though it may take some education to get on the same page about what each
goal is. But it is important to point out that legal compliance and ethical principles are not
the same. Ethics is always, or the beauty or the strength of ethics is that it's a set of principles
that can be forward-looking, not just reactive. Now, what is your advice for companies that are either just starting up and want to get a handle
on this or perhaps just want to, you know, it's something that they feel as though they've
neglected. What's a way to approach this when someone's coming at it for the first time?
I would recommend approaching it both from the top down and from the bottom up.
So you want to have leadership really taking this seriously
and able to hear from the data scientists,
from the engineers,
when things might be getting a little creepy.
So we have kind of created a reporting structure
where if anyone on any of those teams
sees something that's like, you know,
it turns out you can actually learn a lot
about what's going on in
a company by reading a file name. We had never thought of hashing file names because it seems
sort of innocuous at the outset, but someone on our team said, hey, you can actually learn a lot
from file names. Is there a way to still maintain some insight into what's being sent around without
reading entire file names? How can we handle that? And they have someone to
take that concern to. If you don't appoint a person for that, then chances are that idea that
crosses an engineer or data science mind is just going to fade. They'll think about it and then
they'll get onto some other problem and it won't go anywhere. But if you have a feedback mechanism
where there's a place to say, hey, there's a potential privacy issue here that nobody had really thought about.
Can we think about it?
Is there an easy fix for this?
And for something like that, there might be a relatively easy fix.
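The file-name fix Laura describes has a simple shape. The sketch below is a hypothetical illustration of the general idea, not Obsidian's implementation: replace the name with a salted digest but keep the extension, so the same file remains trackable across events without exposing what it's called. The salt value is an assumption.

```python
import hashlib

SALT = b"per-deployment-secret"  # hypothetical: a secret salt per customer

def pseudonymize(filename: str) -> str:
    """Hash the file name but keep the extension, so analysts can still
    correlate the same file across events without reading its name."""
    stem, dot, ext = filename.rpartition(".")
    name = stem if dot else filename
    digest = hashlib.sha256(SALT + name.encode()).hexdigest()[:16]
    return f"{digest}.{ext}" if dot else digest

a = pseudonymize("layoffs_q3_draft.xlsx")
b = pseudonymize("layoffs_q3_draft.xlsx")
print(a.endswith(".xlsx"), a == b, "layoffs" in a)  # True True False
```

Because the digest is deterministic under a fixed salt, the same file produces the same token everywhere it appears, which preserves the "is this file moving around?" insight while dropping the sensitive name.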
So not everything is about saying no.
It's about saying, well, how can we do this in a way that's more privacy protecting? It helps if the person to whom you're reporting this stuff has some
strength in the social sciences and kind of understands, you know, their history and how
these things have played out in the past and also has some technical chops so that they can suggest
a fix rather than just suggesting we don't do X, Y or Z. A stop sign isn't all that useful.
You know, a redirection is much more useful. So that's the top-down part.
Have a very intelligent person who's trained across domains to understand what should happen
next to whom people can report without being punished or singled out in any way.
It does help to have some lightweight programming, like corporate programming,
that touches people. So when you hire junior people, assign them to mentors, someone who's within their kind of
managerial organizational stack reporting structure, and someone who's not in that
structure who can help them not only professionalize, guide their careers, but also
learn to articulate things they're seeing that we might want to question. If you don't teach someone how to
articulate that, it's unlikely that everyone's going to learn how to do that on their own.
And it is usually the best in one-on-one situations. So that's the bottom up.
Anyone who's aware of these things can start framing conversations about what's the broader
social impact of what we're building? What's the broader social impact of what a company like
Google? Everyone uses Google, so it's a nice example.
What's the broader social impact of some of the things that they do?
Those are the conversations that we can have,
our mentors have with some of the younger staffers
to teach them that it's completely within their wheelhouse
to ask those bigger questions,
that they don't just need to stay in a track
where they just build stuff and they never get to ask the big questions.
That's Laura Noren from Obsidian Security.
And that's the Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for Cyber Wire Pro.
It'll save you time and keep you informed.
Listen for us on your Alexa smart speaker, too.
The Cyber Wire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation
of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliot Peltzman,
Puru Prakash, Stefan Vaziri, Kelsey Bond,
Tim Nodar, Joe Kerrigan, Carol Terrio, Ben Yellen,
Nick Volecki, Gina Johnson, Bennett Moe, Chris Russell,
John Petrick, Jennifer Iben, Rick Howard, Peter Kilpie,
and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.
Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts, and act with ease
through guided apps tailored to your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.