CyberWire Daily - Voting machine security. Airliner firmware. Attribution and deterrence in cyberwar. Monitoring social media. Broadcom buys Symantec’s enterprise security business. Policing, privacy, and an IoT OS.
Episode Date: August 9, 2019
Are voting machines too connected for comfort? Airliner firmware security is in dispute. Attribution, deterrence, and the problem of an adversary who doesn't have much to lose. Monitoring social media for signs of violent extremism. Broadcom will buy Symantec's enterprise business for $10.7 billion. Amazon's Ring and the police. A CISA update on VxWorks vulnerabilities. And human second-guessing of AI presents some surprising privacy issues. Justin Harvey from Accenture with his insights from the Black Hat show floor. Guest is Tim Tully from Splunk on the AI race between the US and China. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners, today get 20% off your Delete Me plan when you go to joindeleteme.com slash N2K and use promo code N2K at checkout. The only way to get 20% off is to go to joindeleteme.com slash N2K and enter code N2K at checkout. That's joindeleteme.com slash N2K, code N2K.
Are voting machines too connected for comfort?
Airliner firmware security is in dispute.
Attribution, deterrence, and the problem of an adversary who doesn't have much to lose.
Monitoring social media for signs of violent extremism.
Broadcom will buy Symantec's enterprise business for $10.7 billion.
Amazon's Ring and the police.
A CISA update on VxWorks vulnerabilities.
And human second-guessing of AI presents some surprising privacy issues.
From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday,
August 9th, 2019. Vice reports that contrary to various government assurances, voting machines in the U.S. made by Election Systems & Software have in fact sometimes been connected to the Internet.
County election officials who desire faster tabulation and reporting of votes establish wireless connections to SFTP servers behind a Cisco firewall.
These connect with back-end systems that actually count the votes. Typically, votes are recorded on a memory card and physically delivered to a tallying location, but in some areas and under some circumstances, the machines are configured to report remotely. Vice says that such connections are intended to be brief, matters of a few minutes, but Vice's investigation concluded that in some cases the systems remained connected for months. Thus, voting may be less air-gapped than many officials had imagined. The possibility of
direct manipulation of votes, of course, is a more serious matter than the influence operations
Russian intelligence services have conducted during recent elections.
Both Boeing and the U.S. Federal Aviation Administration dispute claims made
this week by IOActive that the 787 Dreamliner's firmware is vulnerable to cyberattacks on flight
systems. The aircraft manufacturer told PCMag that IOActive did not have full access to the 787
systems and that Boeing's extensive testing confirmed that existing defenses in the broader
787 network prevent the scenarios claimed. The FAA says it's satisfied with the assessment of
the issue. IOActive, which presented its research at Black Hat this week, did not claim to have a
proof of concept, still less that they had found any actual exploitation in the wild.
But they do think there's a possibility that an attacker could pivot
from in-flight entertainment systems to flight control avionics.
It's important to note that this is not the same vulnerability CISA warned against last week.
That warning concerned the CAN bus in small general aviation aircraft.
The Dreamliner is a different kettle of fish.
Elsewhere at Black Hat, Mikko Hyppönen, chief research officer of F-Secure, shared some thoughts on the distinctive features
of cyberwar. His observations, as reported by Fifth Domain, are worth some reflection.
What distinguishes cyberwar from kinetic war is, he thinks, the fundamentally difficult nature of
attribution in cyberspace.
Hyppönen said, quote, cyber weapons are cheap, effective, and they are deniable.
False flag operations are common, and attribution is usually hedged about with reservations.
There may even be doubt as to whether a cyber attack has taken place at all.
Consider, a missile launch is an unambiguous event, and the ones our fire support desk has
witnessed cannot be mistaken for anything other than what they are, nor is it that difficult to
tell where the missile came from. But with a cyber attack, it can be unclear whether an attack has
even taken place. And even after you've determined that there has been an attack, attribution can be
difficult. In most cases, the best companies in the threat intelligence business can do
is present convincing circumstantial evidence.
That's fine for cyber threat intelligence,
but it's problematic when a responsible government is considering going to war.
This problem is closely linked to another,
the difficulty of establishing deterrence in cyberspace.
For deterrence to work, the
adversaries must have some relatively realistic appreciation of what the opposition can do,
what its capabilities are. That's one reason for the Cold War traditions of military parades in
Red Square or news footage of tests on the Pacific Missile Range. Cyber capabilities are inherently
more difficult to assess. You may not even know that a particular kind of attack is possible,
let alone that the opposition is capable of delivering it.
We have no idea what offensive capabilities other nations have, Hyppönen said.
So what kind of deterrence do these tools build?
Nothing.
We note, as Dr. Strangelove put it back in the heyday of nuclear deterrence,
deterrence is the art of
producing fear in the mind of the enemy, but the whole point of the doomsday machine is lost if you
keep it a secret. Turning to specific nation-states, Hyppönen singled out North Korea for particular
mention in dispatches, making all due allowance for the difficulties of attribution mentioned above.
Pyongyang does things no other government attempts, like engaging in hacking for financial gain. Part of what explains North
Korea's high level of activity and relative recklessness, Hyppönen argues, is that the
country has very little to lose, and that makes it a different kind of threat actor.
With calls for increased attention to evidence of threats in social media, the FBI has issued a request for proposals that asks contractors to propose tools that could effectively monitor Facebook and other social media for signs of impending criminal or terrorist violence.
Facebook, the Wall Street Journal says, isn't entirely happy with the idea.
It has been under fire for the way it handled personal data, and Menlo Park has been
on the defensive over privacy for a long time. The last thing Facebook needs is this sort of help
from the feds. But with the White House convening a social media summit to come up with ways of
controlling violent extremism online, the Bureau is likely to continue leaning forward in the foxhole.
Some significant industry news has broken.
Broadcom will acquire Symantec's enterprise security unit,
including, CRN says, the Symantec brand, for $10.7 billion in cash.
Seeking Alpha calls this Broadcom's next move in its bid
to become a major infrastructure technology provider.
Symantec will retain its consumer-facing Norton LifeLock business.
NBC News has a story out on the ways in which Amazon's products
are being used by police departments in the U.S.
Most of the discussion surrounds the company's smart doorbell, Ring,
which in addition to the ringing keeps an eye out for the ringer.
Data captured by Ring has been fed to police departments
and has arguably helped them solve burglaries. Most would regard this as a good thing,
but the implications of creating an American panopticon from the bottom up trouble some
observers. This is especially so at the points where familiar technologies intersect with
unproven innovations. Ring's networked video security cameras are increasingly used in conjunction
with controversial and possibly error-prone facial recognition software.
One critic, University of the District of Columbia law professor Andrew Ferguson,
put the objections this way to NBC News,
quote,
I am not sure Amazon has quite grappled with how their innovative technologies
intersect with issues of privacy, liberty, and government police power.
The pushback they are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information.
The former enhances an individual's freedom of choice. The latter limits an individual's freedom and choice.
The U.S. Department of Homeland Security's CISA has issued an updated warning about vulnerabilities in Wind River's VxWorks, the widely used industrial IoT software.
CISA says that 11 vulnerabilities could be exploited to allow remote code execution,
and that the level of skill such exploitation would require is relatively low.
Wind River is addressing the problems in VxWorks,
and users are encouraged to apply the patches and mitigations the company is offering.
And finally, Microsoft's use of humans to perform quality control on some of its services
has received the same sort of scrutiny Google, Apple, and Amazon have attracted.
Microsoft's Skype service and Cortana Digital Assistant are listened in on from time to time,
but Microsoft says its contractors listen to Skype calls and user interactions with Cortana
only after receiving user permission. I can hear you. Excellent.
Calling all sellers.
Salesforce is hiring account executives to join us on the cutting edge of technology.
Here, innovation isn't a buzzword.
It's a way of life.
You'll be solving customer challenges faster with agents,
winning with purpose, and showing the world what AI was meant to be.
Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Do you know the status of your compliance controls right now?
Like, right now?
We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks. But get this, more than 8,000 companies
like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and help you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses
is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are
compromised at home, your company is at risk. In fact, over one-third of new members discover
they've already been breached. Protect your executives and their families 24-7, 365,
with Black Cloak. Learn more at blackcloak.io.
And continuing our coverage of Black Hat,
joining us is Justin Harvey.
He is the Global Incident Response Leader at Accenture.
Justin is joining us from the show floor at Black Hat,
where he has a very spotty phone connection.
So we apologize in advance for the audio quality of our connection.
But, Justin, what are you seeing there?
What is the overall tone that you're sensing on the show floor itself? Well, the tone is all about visibility, detection, and response.
And in years previous, we've seen different point solutions being deployed.
But this year, the theme is all about shining light under the rock in order
to find the adversaries. And so how does that express itself? What sorts of things are people
out there talking about and offering? The various solutions that we're seeing out here that are focusing on visibility are applying it to endpoints, applying it to your network, and applying it to your identities.
And it seems like all of these vendors are using the P word, Dave. They're all talking about platforms.
What can they do to increase visibility and integrate with other solutions?
I think that those of us in the industry have been saying for many years that there is going
to be an investment bubble, that we walk in the door and we expect Black Hat to be smaller than it was last year. But 19,000 people are attending Black Hat
this year from over 110 countries. And this is their 23rd year. And I have to tell you
that there is no slowing down in the market. It is very hot. People are very excited.
And in fact, one of the net new things that I'm seeing is
a focus on training, a focus on career management, particularly being inclusive and diverse in the
workforce. And how is that expressing itself? I mean, are you seeing more diversity out there
on the show floor? Is there more representation of different types of folks out there?
Definitely a representation of a larger swath of diverse attendees, but we're also seeing it in some of the booths here.
There is a big focus on women, and there is a big focus on diversity.
So what can these companies do to attract talent and enhance and shepherd their career, if you will, get them the right
training, get them the right support in order to succeed in cybersecurity today.
What sorts of things as you walk around on the show floor do you have your eye on? Is there
anything you're hoping to find out, anything you want to learn or get insights on?
Well, I have two objectives attending Black Hat this year. First is I wanted to see
what sort of OT or operational
technology solutions are out in the market today. And there's not a lot of that out here, Dave.
We are seeing companies like Nozomi and Forescout that have these asset inventory solutions,
passive network solutions that are mapping OT networks out. But I'm not seeing a lot of
the vendors here talk about the convergence of information technology
and operation technology,
or in essence, the ability to marry the digital
with the kinetic world.
For the last decade or 15 years,
they've been very segmented.
If you are in IT, you're dealing with business systems.
If you're in OT, you're an engineer,
not a technologist.
And I think the
industry is just now waking up to OT and critical infrastructure and figuring out how to bond those
two together. At RSA this last year, we saw OT was one of the big things. Now we're here and
we're not seeing that. I'm also not seeing a lot of emphasis on the small and medium businesses.
It seems like if you bring in less than $50 or $100 million in revenue, there's not a lot of solutions out there in the market for you.
And I think that's really worrisome to both of us in the industry.
All right. Well, Justin Harvey, thanks for joining us and safe travels home again.
Thank you, Dave. We miss you and I look forward to seeing you at one of these events again.
All right. We'll see you soon. Take care.
Cyber threats are evolving every second, and staying ahead is more than just a challenge. It's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control,
stopping unauthorized applications, securing sensitive data, and ensuring your organization
runs smoothly and securely. Visit ThreatLocker.com today to see how a default deny approach can keep
your company safe and compliant.
My guest today is Tim Tully. He's chief technology officer at Splunk. His team recently published a
report titled The State of Dark Data, which sets out to reveal the gap between AI's potential and today's data reality.
I asked Tim Tully to outline their findings when it comes to how the U.S. is approaching AI versus China.
As part of the dark data report, you may have seen that there's sort of a higher level of general acceptance
in terms of the role of AI in society.
And I would probably tend to agree that they're a little bit ahead of the U.S. in that regard.
I think a lot of that, you know, largely has to do with sort of, you know, where they focus
their time and where they spend their time.
And I think, you know, it shows up both as being perhaps more societally more acceptable,
but also, I think, emphasized a bit more in school, particularly earlier.
Yeah, I think that's an interesting point. I mean, there's that whole situation where I suppose if
you're a citizen in China, you may not have the same options that we have here in terms of opting
out of data collection. Yeah, that's certainly true. And, you know, all that sort of is societal.
And then part of it is also sort of just government law, if you will.
And then also sort of what is acceptable, I think, which goes back to the societal piece.
What you see as being everyday norm, perhaps in China as a citizen, is slightly different than sort of your level of expectations as a citizen of the U.S. or Europe.
When someone is gathering up a bunch of data or using a data set for a project involving AI, how do they go about establishing what the standards are for that data? What's involved
in there? Yeah, I mean, there's sort of like the technical piece, which is, you know, if you're
doing supervised learning, you definitely want to make sure it's labeled. I think what you're trying to allude to is more of what
goes beyond level of acceptability,
of violating perhaps even basic human rights,
not to sound too hyperbolic.
The way I would perceive it is you
try to think about privacy and PII, first of all.
I see data privacy as being super, super important.
My background is I've been doing big data
going all the way back to 2003 and then
obviously over the last decade increasingly
more in AI space. But to the extent
that you can, ideally you stay
away from PII as much as possible.
Whether it's full names or birthdays
or social security numbers or what have you.
You try to either mask out the
data or one way hash it or what
have you and try to anonymize it to the extent that you possibly can and focus more on the models and the training of the models rather than sort of like the actual depths of what the data represents per se.
And then perhaps come back and map in the models later.
But you definitely want to try to stay clear of exposing or even having access to that kind of private data as much as you can.
It's a slippery slope in my mind.
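The masking and one-way hashing Tully describes can be sketched in a few lines of Python. This is a minimal illustration, not anything from Splunk's products; the field names and salt are hypothetical.

```python
import hashlib

# Hypothetical set of fields we treat as PII in this record schema.
PII_FIELDS = {"full_name", "birthday", "ssn"}

def anonymize(record: dict, salt: str = "per-dataset-salt") -> dict:
    """Return a copy of the record with PII fields one-way hashed.

    A salted SHA-256 digest keeps records joinable (the same person
    always hashes to the same token) without exposing the raw value.
    """
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # truncated token stands in for the value
        else:
            out[key] = value  # non-PII fields pass through for model training
    return out

record = {"full_name": "Jane Doe", "ssn": "123-45-6789", "plan": "pro"}
clean = anonymize(record)
assert clean["plan"] == "pro" and clean["ssn"] != record["ssn"]
```

Because the hash is deterministic per salt, models can still group events by the token while the pipeline, as Tully puts it, stays away from the actual depths of what the data represents.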
What about even having biases in the data itself, of knowing ahead of time that this
data may be leaning in one direction or may be oversampled with one type of data than
the other?
I think that's a skill that a lot of AI practitioners have to learn over time.
And I think increasingly the literature is doing a better job of sort of introducing a notion of biases,
biases up front. I think it's one of those things that you sort of just, you start to figure out
over time, it's not necessarily an inherent skill that people are born with. And it's a tough
problem. Otherwise, you probably wouldn't be asking. Yeah, it's certainly hard.
There's no tried and true way really to stay away from it. It's just sort of an experiential kind of thing.
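One concrete first step toward the oversampling problem discussed above is simply to measure class balance and compute inverse-frequency weights before training. A minimal sketch, with invented labels:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights to counteract an oversampled class."""
    counts = Counter(labels)
    total = len(labels)
    # Each class weight is total / (num_classes * class_count), so the
    # rare class contributes proportionally more to the training loss.
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# A deliberately skewed toy label set: "benign" is heavily oversampled.
labels = ["benign"] * 90 + ["malicious"] * 10
weights = class_weights(labels)
assert weights["malicious"] > weights["benign"]
```

This doesn't remove bias from the data itself, which is the harder, experiential problem Tully points to, but it at least surfaces the skew before a model quietly learns it.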
Now, in terms of staying competitive and trying to get an advantage over other nations around
the world, what sort of directions do you think we have to take?
Schools and colleges in particular need to do a much better job of focusing more on sort of the
realities of machine learning being every day. I think we've historically bolted on ML to what
we've done. And I think ML needs to be thought of being pervasive in everything that people do in
their computer science education and in the background that they establish. I mean, that's
sort of the approach I'm taking with our products right now within Splunk. Historically, a lot of companies in Splunk included have sort of thought of machine learning
as being more of an afterthought or something that we sort of have bolted onto the product.
And the approach that I've been taking over the last couple of years is to think of machine
learning and AI as being sort of ingrained in everything that we do and almost automatic,
right? It's not sort of a feature that, you know, we just want to put out there and market
as saying, oh, it's AI powered. That should, it should just be implicit. People should just
understand it should be completely augmentative to your experience as a, as a user. And so in the
same way that has to sort of show up in the curriculum, right? Like there has to be ML and
sort of all the courses that, that people take in one way or another, whether it's a security course
or a networking course or what have you, it has to be completely ubiquitous in the same way that, say, UI development is done within teams.
One way I try to explain to people in my teams is, you know, there shouldn't necessarily be just
one set of people thinking about UI in the same way that there shouldn't just be one set of people
thinking about ML, right? It has to be completely pervasive. Can you give me an example of a
situation where there would be ML applied to something
where perhaps folks wouldn't have thought it would be there before?
One of the things we're working on as an example is what we call sort of automatic source type
inference.
And what that is, is using ML to train models to sort of recognize data up front as you put it into Splunk.
And so instead of as a user saying, hey, this is a JSON data format that represents some firewall log,
for example, splunk should just say, hey, I see you're putting in JSON data that's firewall log
data, right? And that would be automatically done without you telling it to use ML or to
have previously trained a model. It's automatic, and it happens in a second, and you move on with your day.
In the same way that Netflix, which
is the canonical example of machine learning, where
they recommend movies, it's analogous to that
in the data space, where it's just part of the experience
that you just assume is there and works well out of the box.
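A toy version of that source-type inference might look like the following, with simple format heuristics standing in for the trained models Tully describes; the checks and names are illustrative, not Splunk's implementation.

```python
import csv
import io
import json

def infer_source_type(sample: str) -> str:
    """Guess the format of an incoming data sample.

    In the real feature a trained classifier would do this; the point
    is that the ingest path labels data without asking the user.
    """
    text = sample.strip()
    # Valid JSON? Then label it as such.
    try:
        json.loads(text)
        return "json"
    except ValueError:
        pass
    # Comma-delimited rows with a consistent column count look like CSV.
    if "," in text:
        rows = list(csv.reader(io.StringIO(text)))
        if len(rows) > 1 and len({len(r) for r in rows}) == 1:
            return "csv"
    return "unknown"

assert infer_source_type('{"src_ip": "10.0.0.1", "action": "deny"}') == "json"
assert infer_source_type("ts,src,dst\n1,10.0.0.1,10.0.0.2") == "csv"
```

The feedback loop mentioned next is what would let a learned version of this improve: each user correction becomes a new labeled training example.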
And then I suppose part of the notion
here is that over time, it gets better and better at making those guesses or assumptions for you.
So the error rate goes down.
Yeah, sure.
Because you have a feedback loop.
What are your tips for businesses that are trying to get a handle on this, who realize that this is something they want to start integrating?
What's a good place for them to get started?
Yeah, you know what?
That's an interesting question.
I think a lot of times businesses talk about AI because they have to, right? It's sort of like, it's a
marketing box that they sort of check and they talk it up and then they hype it up and then
they spend all their time sort of planning and thinking about what they're going to do. And then
they're sort of stuck and they're not really doing anything. I think the best, in my experience,
what I've seen across my career is the best thing to do is just dive in headfirst.
You look at a small problem and you look at how to build models and train it and which techniques you're going to use.
And then you start to roll them out in an applied way.
And then you look at the data and you look at the feedback loop that you create, as you just asked about.
And then you expand from there.
Instead of coming up with this boil-the-ocean, expansive AI slash ML strategy,
you know, start with a small problem with a small set of folk and just go for that and try to
optimize the hell out of that problem and then expand from there. Yeah, sort of crawl before
you walk, I guess. Yeah, I think oftentimes you see a lot of companies just trying to like come
up with this like grand unification theorem around ML or AI and like that just that never works.
That's Tim Tully from Splunk. The report is titled The State of Dark Data.
And that's the CyberWire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity
leaders who want to stay abreast of this rapidly
evolving field, sign up for CyberWire
Pro. It'll save you time and
keep you informed. Listen for us on
your Alexa smart speaker, too.
The CyberWire podcast is proudly
produced in Maryland out of the startup studios of
DataTribe, where they're co-building the next
generation of cybersecurity teams and
technologies. Our amazing CyberWire
team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsey Vaughn, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.

that are not only ambitious, but also practical and adaptable. That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses
that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role. Data is hard.
Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.