CyberWire Daily - In the shredder or off the truck? Battlespace prep for a supply chain campaign? NG-Spectre found in Intel chips. No domain fronting for you. Kitty mines monero. NSA, US Cyber Command under new management.
Episode Date: May 4, 2018
In today's podcast we hear that they're hoping in Australia that backup tapes made it to the shredder, and didn't fall off the truck. Equifax's board of directors gets reelected. Are China's espionage services preparing the battlespace for a supply chain attack? New Spectre-like vulnerabilities are found in Intel chips. Google and Amazon clamp down on domain fronting, and anti-censorship advocates are unhappy. Here Kitty…we have Monero for you. And a change of command at NSA and US Cyber Command. Johannes Ullrich from SANS and the ISC StormCast podcast, reviewing the history of hardware flaws. Guest is Philip Tully from ZeroFox with a recap of a talk he gave at RSA on AI. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners, today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
They're hoping in Australia that those tapes made it to the shredder
and didn't fall off the truck.
Equifax's board of directors gets re-elected.
Are China's espionage services preparing the battle space for a supply chain attack?
New Spectre-like vulnerabilities are found in Intel chips. Google and Amazon clamp down on
domain fronting, and anti-censorship advocates are unhappy. Here, Kitty, we have Monero for you.
And a change of command at NSA and U.S. Cyber Command.
From the Cyber Wire studios at Data Tribe, I'm Dave Bittner with your Cyber Wire summary
for Friday, May 4, 2018.
May the 4th be with you.
Australia's Commonwealth Bank gets a black eye from its loss of about 20 million customers'
records. In 2016, the bank engaged Fuji Xerox to decommission one of the Commonwealth Bank's data
centers, and that entailed secure destruction of 15 years' worth of customer statements on the
center's backup tapes. After the decommissioning, however, the bank became aware that it didn't have
the certificate that would have vouched for the tapes' destruction, nor could the tapes themselves
be found. After looking around and considering various possibilities, including but not limited
to the off chance that the records fell off the truck on their way to destruction, the bank decided
that the records had in fact been destroyed and that there was no
need to notify the customers. The incident appears to have been an accident and not a hack, and
probably customer accounts weren't compromised, but the bank's failure to notify customers when
it realized what had happened doesn't look good. Give them credit for retracing the delivery truck's route and scouring the roadside for fallen tapes.
But still.
The Australian Prudential Regulation Authority Tuesday said that trust in Australia's banks had been badly eroded
and that Commonwealth Bank in particular had fallen from grace.
The bank will be required to carry an additional billion dollars in regulatory capital as the result of that fall.
Commonwealth Bank has been commendably contrite and promises to do better in the future.
Its leaders might take heart from this week's elections for Equifax board members.
Despite the horrific data breach the credit bureau endured on their watch,
every member of the board who stood for re-election
was returned to office by the shareholders,
who are either unusually discerning, forgiving, or inattentive.
We're guessing door number three.
Still, congratulations and best wishes to Equifax.
May your house cleaning and restoration continue apace.
Researchers at ProtectWise think they discern a shift in Chinese cyber espionage,
a focus on IT staff in targeted enterprises,
and collection of code-signing certificates.
These are taken as signs of preparation for supply chain attacks.
Intel has confirmed that Spectre-like chip vulnerabilities reported by an industry site in Germany are real.
There are eight of them, according to c't, the German computer magazine, and Intel is working on fixes.
c't calls the flaws Spectre-NG.
A number of researchers appear to have contributed to the discovery,
Google's Project Zero among them.
One of the newly discovered issues is arguably more serious
than the original Spectre problem.
It could be exploited, some think,
to bypass virtual machine isolation from cloud hosts
and then infiltrate sensitive data, including passwords and keys.
For all that, researchers are cautiously optimistic
that the flaws are relatively unlikely to see widespread exploitation.
Intel plans to roll fixes out in two tranches,
one this month and a second in August.
Researchers at security firm Imperva warn of Kitty,
a crypto miner that specializes in Monero.
Kitty exploits the so-called Drupalgeddon 2.0 remote code execution flaw, which has been patched.
Kitty is particularly problematic, SC Magazine reports, in that it compromises web application servers,
from whence it goes on to compromise future users of apps running on those servers.
Amazon and Google have, as expected, put an end to domain fronting,
a feature widely used by services like OpenWhisper's Signal to evade Internet censorship.
Google began the process some weeks ago,
pointing out that domain fronting had been an accidental and not supported feature of their content delivery system.
Amazon shut the option down this week, telling OpenWhisper that their use of Amazon's CloudFront would be suspended immediately if OpenWhisper's Signal continued using third-party domains without their permission.
In domain fronting, an app like Signal is able to obscure a connection's destination.
Thus, as far as a Russian or Chinese or Qatari or other state censor is concerned,
they're simply seeing a connection to Google or Amazon, not to a prohibited service like Signal.
The censors could either block nothing, or they could shut down everything provided by the big content delivery networks,
which would be as close to shutting down the Internet as makes little difference.
The upshot, as the Electronic Frontier Foundation and others put it,
is that Amazon and Google have elected, in their business models,
to foreclose certain ways of evading censorship.
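The two layers that make domain fronting work can be sketched in a few lines of Python. This is an illustration only, with hypothetical hostnames; a real fronted connection also puts the front domain in the TLS SNI field, which this plain-HTTP sketch omits. The point is the mismatch: the censor sees only the outer destination, while the Host header the CDN reads names the hidden service.

```python
def build_fronted_request(front_host: str, hidden_host: str, path: str = "/"):
    """Sketch of domain fronting (hypothetical hostnames).

    Outer layer: the client connects (and, over TLS, sends SNI) to front_host,
    which is all an on-path censor can observe.
    Inner layer: the Host header names hidden_host, and the CDN routes the
    request there once the encrypted tunnel is open.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_host}\r\n"   # the CDN-routed, censored destination
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")
    return front_host, request

connect_to, req = build_fronted_request("cdn.example.com", "blocked-service.example.org")
print(connect_to)                    # what the censor sees: cdn.example.com
print(req.decode().splitlines()[1])  # where the CDN routes: Host: blocked-service.example.org
```

Blocking this technique therefore forces the censor's hand: either let the connection through, or block the CDN's front domain and everything else behind it.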
U.S. Cyber Command today was officially elevated to combatant command status, putting it on a par with major military organizations like U.S. Strategic Command.
General Paul Nakasone got his fourth star as he assumed command of Cyber Command and duties as director, National Security Agency.
Nakasone replaces Admiral Michael Rogers, who now enters retirement.
So hail and farewell, respectively, General Nakasone and Admiral Rogers.
Hackers who don't like the U.S. state of Georgia's proposed anti-hacking law have protested by, wait for it, hacking sites in the Peach State.
So this is arguably better thought out than dim-witted homages to war criminals on an Arizona highway sign, but still, really. The hacktivists aren't alone in
thinking the law is a bad one. The Man himself, in the person of big tech companies, is inclined
to agree, but there are surely better ways of making a point. All of you young techno-libertarians
out there, you say you want a revolution,
but if you go hackin' some sites in the county of Barrow,
you ain't gonna make it with anyone anyhow.
That's what some old guys told us, anyway.
Calling all sellers.
Salesforce is hiring account executives
to join us on the cutting edge of technology.
Here, innovation isn't a buzzword. It's a way of life.
You'll be solving customer challenges faster with agents, winning with purpose, and showing the world what AI was meant to be.
Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs,
we rely on point-in-time checks.
But get this.
More than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist.
Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cyber criminals
to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been
breached. Protect your executives and their families 24-7, 365 with Black Cloak. Learn more
at blackcloak.io. And joining me once again is Johannes Ullrich. He's from the SANS Technology Institute, and he's also
the host of the ISC StormCast podcast. Johannes, welcome back. You know, we had the recent
news about hardware flaws like Rowhammer and Spectre, but you wanted to make the point that
maybe we need to look into the past to be reminded that some of these things might not be so new.
Yes, and the reason I'm saying that is, being a developer myself, you always assume that
hardware is flawless, which is kind of odd because I know my code is not flawless. So
why should the developers that develop hardware be any better in writing code? That's essentially
what they do, even if it doesn't look like code.
They design systems, which of course have flaws.
And so I looked a little bit in the history here.
How old are these flaws?
Now, Spectre Meltdown was the big hit recently.
Turns out, actually, I think it was around 2006, 2008,
papers were already being published
that essentially just talked about this
particular flaw. If you have these speculative execution threads, code that may
not be supposed to be executed based on privilege settings will get executed. And then if you don't
clean up right, well, you end up with sort of an escalation vulnerability, exactly what Spectre was about.
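The code pattern Ullrich is describing can be sketched in ordinary Python, purely as an illustration of its shape. Python cannot trigger the real bug; an actual Spectre v1 attack depends on hardware branch prediction and cache-timing measurement. But the vulnerable gadget looks like this:

```python
# Illustration only: the bounds-check-bypass pattern behind Spectre v1.
# Nothing here leaks in Python; on real hardware, the CPU may speculate
# past the bounds check, read out of bounds, and leave a secret-dependent
# cache footprint that an attacker can recover by timing memory accesses.
array1 = [7, 11, 13, 17]        # in-bounds data; "secrets" live past its end
array2 = bytearray(256 * 512)   # probe array the attacker would later time

def victim(x: int) -> None:
    if x < len(array1):            # the check the CPU may speculate past
        secret = array1[x]         # on hardware: speculative out-of-bounds read
        _ = array2[secret * 512]   # on hardware: touches a secret-dependent cache line
```

The "don't clean up right" part is exactly the cache footprint: the architecturally discarded speculative work still leaves microarchitectural state behind.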
Now, then I looked at Rowhammer.
Now, Rowhammer is this vulnerability a little bit older than Spectre Meltdown,
where essentially what you do is you flip certain bits in memory really fast,
and that affects the neighboring bits that you may not have access to.
And with this, you sort of can manipulate memory
that you're not supposed to be able to manipulate.
This was even sort of a little bit more amazing here
when it comes to sort of old vulnerabilities.
Turned out good old magnetic core memory,
which was like used back in the 60s and such,
had exactly this vulnerability.
And this was a well-described phenomenon.
PDP-11s, old Digital computers that were used quite a bit,
actually had a very specific feature built into the system
where you could calculate or measure what's called the worst-case noise,
which exactly means, well, if you write certain patterns to memory, you may flip additional bits.
So people maybe should look a little bit at these old research papers before they design new systems.
And this is not just really a factor of these new systems being sort of too fast or overly tuned.
Yeah, I remember in the past couple decades, I want to say probably around the Pentium time,
when there was a lot of publicity about some processors having issues with floating point calculations,
where you could ask a PowerPC processor one math question and ask a Pentium processor the same math question, and you wouldn't necessarily get the same answer.
Correct. Back then, Intel actually did a big recall, and I remember doing it myself.
I sort of received this new processor I had to swap for the old one.
It was, again, one of these real weird bugs where if you used one particular number,
there was a bug in the processor that would essentially interpret that number differently.
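The bug Ullrich is recalling is the 1994 Pentium FDIV flaw, whose classic published trigger pair is well documented: certain divisor/dividend combinations hit missing entries in the FPU's division lookup table. A quick sanity check, which any correct FPU passes, looks like this:

```python
# The classic Pentium FDIV check. On correct hardware the residual below is
# essentially zero (a few floating-point ulps at most); the flawed Pentium
# famously returned 256 for this expression because the division came back
# wrong in the fifth significant digit.
x, y = 4195835.0, 3145727.0
residual = x - (x / y) * y
print(abs(residual) < 1e-6)  # True on non-flawed hardware
```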
Now, back then, it was a little bit easier to swap CPUs.
You had desktops with sockets and such.
Today, even if Intel were to attempt a recall, it would be quite difficult to exchange CPUs.
They're mostly soldered in these days. So that wouldn't really
work that well. I remember my old Commodore 64 had a special command. When you send it,
it would actually physically destroy the computer. So yes, these systems always,
these problems always existed. People just sort of seem to forget about it, that really hardware
isn't perfect. And your software should not assume that hardware is perfect.
No, it's a great point.
It's definitely worth remembering.
Johannes Ulrich, as always, thanks for joining us.
Cyber threats are evolving every second
and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control,
stopping unauthorized applications, securing sensitive data,
and ensuring your organization runs smoothly and
securely. Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe
and compliant.
My guest today is Philip Tully.
He's a principal data scientist at ZeroFox. At the RSA conference this year, he presented on the topic of artificial intelligence
and how we may see more adversaries making use of it soon.
It's been about a decade now that enterprises and security professionals and defenders
have been using artificial intelligence in general,
or machine learning-based data-driven methods,
to detect, prevent, and remediate attacks on perimeters.
So more and more we're seeing the advent of these techniques,
and they're applied to more and more things. Classically, it was applied to problems like
spam detection in emails. There was a new wave of approaches involving detecting malware,
whether it be binary malware or URLs. URLs also in the phishing domain, just finding malicious links, detecting botnets,
detecting network intrusion attempts. And what I do for ZeroFox more recently in detecting
threats on social media, for example. And so these type of things have been evolving.
And more recently, you're starting to see, at least in the academic world and in the research realm, several examples
of AI or data-driven techniques being leveraged for offensive purposes and for attack automation.
At the moment, I want to be clear, and there's a lot of hype around this type of thing.
From my point of view and where I stand, I haven't seen any credible evidence of an AI or a data-driven
technique being waged for an attack in the wild yet. I'm curious because what I hear people say
often is that the attackers are using the most efficient and also least expensive ways to attack
people. You know, they phish people because phishing works. They use ransomware because it works.
Is it a matter that using AI and machine learning,
is there a cost associated with that that makes it unattractive to the adversaries?
Absolutely. And this is a fair point. And this is, I think, one of the primary reasons you don't see
these attacks waged often, if at all, currently. But there are certain trends, both in the hardware realm,
where you're starting to see increased parallelization and cloud-based computing
and easier access to GPUs and kind of this continuation of Moore's law and even technologies
that are positioned post-Moore's law, like quantum computing and neuromorphic computing,
that are becoming more and more available.
I mean, nowadays I can log into AWS and spin up a box
and start to play with machine learning tools within an hour as a non-expert.
And this was never possible five, ten years ago.
On the software side, you have trends that kind of match this, right?
You have deep learning, the rise of deep learning itself,
which kind of the previous generation of machine learning models,
I would say 10, 15 years ago and even before then,
all relied on hand-tuned features.
So you'd have to define in advance what the models that you were building should care about.
Deep learning kind of automates that process away.
You don't need to hand-tune features and do feature engineering anymore.
On top of this, you have different trends.
You have educational resources like Coursera and code sharing via GitHub and Stack Overflow.
And these type of things kind of lower the bar for entry.
You have lots of open source data sets and pre-trained models and professionally quality open source libraries like TensorFlow
that are being released by these big companies.
And these are extremely powerful tools.
There's a general trend to try to lower the bar.
So what we're seeing more is that beforehand, you would probably have to be an expert or
get a PhD or get a master's or have some type of specialized training in this field to kind of practice these
techniques. But I expect if it's not happening already, I expect it more and more for these
skills to be taught earlier on in education cycles in college and in high school. And I think it's
going to be par for the course in not even five years away that people will start to use techniques
like this on a more regular basis.
So the trends are all pointing towards lowering the bar for entry. And when you think about that
in terms of the attacker, lowering the bar for entry and, you know, eliminating these technological
hurdles is going to kind of speed up their processes and make them more appealing.
So where do you suppose we would see the
adversarials first turning to this technology? Is there an area that you think it's most likely?
I've worked on a project before with a colleague, John Seymour, that was concerned about automating
spear phishing. And so we built a tool that didn't take us very long. And so this is kind of what got this idea in motion about the ease of applying machine learning on offense.
It took us a few months to build this tool, which went out and procured information from people's Twitter timelines.
We had a model that was able to generate tweets at a high level.
And we would be able to take information from each individual user's timeline and seed the model with that information.
So if you're posting a lot on your Twitter or your social media about cybersecurity, or the recent vacation you went on, or the recent movie that you just saw that you loved, the model would be seeded with this kind of interest, this hobby or general interest that you have and that you're posting about. And the hypothesis was that if the post that we targeted you with was concerned with that
interest and it aligned with kind of the content of your timeline, that you'd be much more
likely to click on a link that was served up to you via a tweet. And this was borne out in
the data. We ran a simulation where attacks like this were a lot more successful than your run-of-the-mill question-answer attacks or randomly targeting people with stuff that didn't necessarily match their timeline.
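The timeline-seeding idea Tully describes can be sketched with a toy bigram (Markov-chain) generator. This is an invented illustration, not the ZeroFox research tool itself, and the sample "timeline" is made up; the point is just the loop he outlines: train on the target's own posts, then generate on-topic bait.

```python
import random
from collections import defaultdict

# Hypothetical target timeline; a real attacker would scrape this.
timeline = [
    "loved the new spectre research from the conference",
    "heading to the rsa conference next week",
    "the conference keynote on machine learning was great",
]

def train_bigrams(posts):
    """Build a bigram model: each word maps to the words that followed it."""
    model = defaultdict(list)
    for post in posts:
        words = post.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, seed_word, length=8):
    """Random-walk the chain from seed_word, producing on-topic text."""
    word, out = seed_word, [seed_word]
    for _ in range(length - 1):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate(train_bigrams(timeline), "the"))
```

Because the model is trained only on the victim's own posts, every generated word is drawn from their vocabulary, which is exactly what makes the lure look on-topic.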
And you can do this all using a technique that relies on unsupervised learning.
And this is a sub-method of machine learning that does not need labeled data in order to work, so to speak.
You can basically tell a model, hey, we have this distribution of data here. We want you to generate
a piece of data or a piece of content that appears similar and has the same or similar statistics
as this piece of data and this mountain of data we already have. And because you don't need to
label that data or associate each piece of data
with a label like malicious or benign,
you can just go out and scrape or grab a bunch of data
and that's very easy to scale up,
train a model up and start to use it
in a much shorter amount of time
than it takes a defender to spin up a similar model
that might be used for defense
because the defender actually
has to label each piece of data malicious or benign in order to better predict an attack or
a non-attack that's incoming. How should we be preparing then? It's interesting to me that
we're still kind of dealing with the human factor here. You know, we're using,
the bad guys could be using AI to better fool the people. In this arms race of machine versus machine, AI versus AI,
is the weak link still the meat in the middle, the humans?
Yeah, I would say the human is always going to be a weak link in this sense. In that example,
it's very clear that, especially on a social media-based venue like Twitter, it's hard
for a human to decipher whether or not a post was generated by a bot or a human. Attackers have
always had an advantage simply because of what's at stake. They only need to win a few times in
order to win that battle overall. Whereas blue teams, or defenders, really
need to have detection that
approaches 100% success. So I think what's different this time around is that in the
cybersecurity domain, you have politics or you have a little bit more nuance than you do in,
I guess, generic machine learning or generic image recognition and other natural language
processing, other high-level applications to
which machine learning is applied. In those realms, and in kind of the core machine learning
research field, you have people sharing data often with each other, researchers sharing data.
And this kind of accelerates the field and makes these models and these methods
advanced in a shorter amount of time. The position of the cybersecurity field is a little bit different
because sharing data can be either illegal
because of contractual obligations you have with your clients.
The data can be too sensitive to share
because it contains personal information or whatnot.
Data is secret sauce. It's intellectual property.
So if you have two companies that are developing a similar approach,
they're competing with each other. They don't necessarily want or they're not incentivized
to share their data. They want to build a more accurate model than their competitor.
And so they view it as something as data, this fundamental thing that gives them a leg up in
this fiercely competitive market. That's Philip Tully from ZeroFox.
And that's the Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed. Listen for us on your Alexa smart speaker too.
The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe,
where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliot Peltzman, Puru Prakash, Stefan Vaziri, Kelsey Bond,
Tim Nodar, Joe Kerrigan, Carol Terrio, Ben Yellen,
Nick Vilecki, Gina Johnson, Bennett Moe, Chris Russell,
John Petrick, Jennifer Iben, Rick Howard, Peter Kilpie,
and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.