CyberWire Daily - Data-centric security. [Special Editions]
Episode Date: August 1, 2018
In this CyberWire special edition, we take a look at data-centric security, focusing on the security of the data itself, rather than the surrounding networks, applications, or servers. To help us on our journey of understanding, we've lined up a number of industry experts. Ellison Anne Williams is CEO of Enveil, a company that's developed cutting-edge encryption techniques. Adam Nichols is Principal of Software Security at Grimm, a cybersecurity engineering and consulting firm. Mark Forrest is CEO of Cryptshare, maker of secure electronic communication technologies for the exchange of business-sensitive information. And John Prisco is CEO at Quantum Xchange, a provider of what they claim is unbreakable quantum-safe encryption.
Transcript
You're listening to the CyberWire Network, powered by N2K.
Calling all sellers.
Salesforce is hiring account executives to join us on the cutting edge of technology.
Here, innovation isn't a buzzword.
It's a way of life.
You'll be solving customer challenges faster with agents, winning with purpose,
and showing the world what AI was meant to be. Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Well, because data is often the largest asset of an organization, all types of people have their eyes on that data for exfiltration purposes, for stealing it, etc.
So you really have attack vectors coming at you from every possible angle.
In this CyberWire special edition, we take a look at data-centric security, focusing on the security of the data itself, rather than the surrounding
networks, applications, or servers. To help us on our journey of understanding, we've lined up a
number of industry experts. Ellison Anne Williams is CEO of Enveil, a company that's developed
cutting-edge encryption techniques. Adam Nichols is Principal of Software Security at Grimm,
a cybersecurity engineering and consulting firm.
Mark Forrest is CEO of Cryptshare, maker of secure electronic communications technologies for the exchange of business-sensitive information.
And John Prisco is CEO at Quantum Xchange, a provider of what they claim is unbreakable quantum-safe encryption.
Stay with us.
Something I've seen for many years, and I've been in the industry now 25 years or more, is that one of our greatest vulnerabilities is individual data users' lack of understanding of the level of vulnerability they face with their data.
That's Mark Forrest from Cryptshare. Whether that vulnerability is through direct security breaches that are intended for malicious purposes or indeed for use by agencies which they don't realize are using their data for purposes such as advertising or
promoting products. And I think there's a very limited understanding amongst the general
population and among corporate users about the way in which their data can be manipulated, whether for criminal reasons or for commercial ones. So typically, data security comes in three parts.
That's Ellison Anne Williams from Enveil. So the first part is securing your data at rest on the file system.
This is going to be your standard file-based encryption techniques.
The second is encrypting your data in transit.
So securing your data when it's moving through the network.
That's the second piece of the data security triad.
And then finally, securing it when it's
being used or processed. That's typically done by things like searches or analytics, because data
really only stays in three states within an organization. It stays resident on the file
system or in some other storage technology. It's moving through the network or it's actually being
used or processed. And so what are the vulnerabilities that are inherent with each of
those states? Of course, if you don't lock down your data on the file system or encrypt it on the
file system or in your storage technology, then you're leaving that wide open for an attacker
just to come and take your files. If you don't lock it down in transit when it's moving through
the network, then the same applies. So you leave it open for an attacker to come and take it off of the network as it's moving through your environment, or as it's
transiting from one location that you own to another. And then finally, of course, in use.
So if you don't protect it as it's being processed, then you're leaving that whole
processing layer open for all types of attacks.
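To make the first two legs of that triad concrete, here is a minimal Python sketch, assuming the third-party cryptography package and an invented sample record and file name: the data is encrypted before it lands on disk, and the network hop would normally be covered by TLS. What it deliberately leaves exposed is the third state, data in use.

from cryptography.fernet import Fernet

# Data at rest: encrypt the bytes before they ever touch the storage layer.
key = Fernet.generate_key()          # in practice this key would live in a KMS or HSM
fernet = Fernet(key)

plaintext = b"name,ssn\nAlice Example,123-45-6789\n"   # invented sample record
with open("customer_records.csv.enc", "wb") as f:      # hypothetical file name
    f.write(fernet.encrypt(plaintext))

# Data in transit: when the encrypted blob moves between systems it would normally
# ride over TLS (HTTPS, SFTP, and so on), so the network hop is covered as well.

# The gap the speakers point at next: the moment the data is decrypted so it can be
# searched or analyzed, it is back in the clear. That is the "in use" state.
with open("customer_records.csv.enc", "rb") as f:
    exposed_again = fernet.decrypt(f.read())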
There are many points of vulnerability. We can't solve all of them all of the time, but we can take simple steps to solve some of them and
do that quickly. And that actually minimizes the exposure that we have, both for significant data,
by the way, and for stuff that we may consider more trivial. Traditionally, only two areas of
the data security triad have been focused on by organizations, and that's
encrypting or securing the data at rest when it's on the file system and securing it in transit when
it's moving through the network. Why? Because those types of encryption that you use in those
cases have been well understood for a very long time. So because of that, you have a lot of
solutioning going on in the commercial market around those two areas and a lot of choice for
organizations. That third area of the data security triad has been far less solutioned over the years
because the area of encryption that really deals with that has been very computationally intensive
for a very long time. Does that mean that it just hasn't been available for you? It hasn't
been practical to be used? Correct, correct. So traditionally, the area of encryption that would apply to the
usage of data and keeping that encrypted at all times during processing is a type of encryption
called homomorphic encryption. It's a special type of encryption. It's not new. It's been around 40
or so years at this point, and it allows you to perform processing or operations on encrypted data as if it were unencrypted data.
Historically, like I mentioned before, it's been very computationally intensive.
A lot of work has gone into that field in academia and some commercial organizations and research labs.
And only recently have we really seen breakthroughs in that space.
Now, help me understand, because I think it's a difficult thing for many people to wrap their
heads around, myself included, which is, you know, this notion of being able to do things to encrypted
data without decrypting the data and getting an answer, I suppose, from that encrypted data
while not revealing the
original data.
How do you go about explaining that to people?
It's certainly a mind bender.
You are correct.
And it's really all math.
So what I tell people is it sounds magical.
It sounds impossible.
But really, it's just mathematics.
So there are some very strong, well-understood mathematical principles that underlie that ability and allow
all of that to move into the realm of not only possible, but now practical.
So how do you prevent the original data being revealed by knowing the answer to the calculation
that you're performing? So you make sure that as it's being processed, what's coming out of that
processing, say if you're
performing a search, the results of that search are encrypted as well. So there's never a point
in time where your results for that search are, for example, in an unencrypted state,
and then they become encrypted, the way that you traditionally see things like data-at-rest encryption operating. With this special type of encryption, homomorphic encryption, as you process over the data, what's coming out of that processing are encrypted bits, and you can't tell what's being selected and what's not being selected. That means that organizations can now make sure that their business processing of data is completely secure at all
points during the processing life cycle.
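As a rough illustration of that property, the toy Python sketch below hand-rolls a Paillier-style additively homomorphic scheme with tiny primes. It is not Enveil's technology and it is not production cryptography; it only shows the core trick: two ciphertexts are combined without ever being decrypted, and only the key holder can read the result.

import secrets
from math import gcd

# Tiny primes purely for illustration; real deployments use keys thousands of bits long.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                          # valid because g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1      # random blinding factor, coprime to n
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(42), encrypt(58)
c_sum = (c1 * c2) % n2         # multiplying ciphertexts adds the plaintexts underneath
assert decrypt(c_sum) == 100   # the processing side never saw 42, 58, or 100 in the clear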
So it opens up whole new worlds for things like secure cloud processing. So allowing people to migrate their most sensitive workloads and data to public cloud environments that are fundamentally
untrusted locations and have them process there in a way that's completely trusted to that
organization. Because of course, nothing is ever decrypted as it's being processed
in that cloud environment. Opens up whole new worlds around interacting with third-party data
services and providers outside of their walls in ways that stay completely secure and private to
the organization. And, of course, there are large implications for compliance and regulation, particularly under
things like GDPR. Now, when you look towards the future, towards the horizon, what do you see the role of this sort of encryption playing?
Our goal is to see ubiquity in this, to make sure that that last gap of the triad truly is closed
at all times when processing data, which, like I said before, is the most
valuable asset often in the organization.
Well, you know, we're using encryption protocols that are based on solving difficult math problems.
That's John Prisco from Quantum Xchange. And if we look historically at our secret keys, RSA-type keys, we understand why they've gone from smaller numbers to larger
numbers. And that's because the smaller numbers have been cracked. We're now up to RSA 2048.
It would be very difficult to factor that large number into two prime numbers.
But it won't be difficult to do that when quantum computers are available.
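A textbook-RSA toy in Python makes the point: with deliberately tiny primes, anyone can factor the public modulus and rebuild the private key in an instant. Real 2048-bit moduli push that factoring job out of reach for classical machines, which is exactly the assumption a large quantum computer running Shor's algorithm would break. The numbers here are purely illustrative.

# Toy primes; real keys use primes around 1024 bits each.
p, q = 61, 53
n, e = p * q, 17                   # (n, e) is the public key; n = 3233
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent, known only to the key holder

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key

def factor(modulus: int):
    """Brute-force factoring, instant at this size, infeasible at 2048 bits classically."""
    f = 2
    while modulus % f:
        f += 1
    return f, modulus // f

p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d2, n) == message   # plaintext recovered without the private key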
The problem is we can't simply say, oh, well, that problem isn't here and it's five to ten years away. I wouldn't even argue about how long quantum computers will take. I'd argue, can you guarantee that your data won't be stolen? Because if it's scraped today, it can be decrypted tomorrow.
There's no question that computers are getting faster, and the reality with any kind of encryption is that brute force attacks on encryption services enable the bad guys or the governments to break down even the strongest encryption.
There are two solutions to it, of course.
One is you use more complex encryption algorithms.
But obviously, that's a never-ending story.
You know, we've increased the strength of the algorithms considerably in recent years.
And as computing power gets greater, and clearly with quantum computing,
we're seeing a kind of a step leap in capability. That is something which we can deal with in the changing of the algorithm to some extent. The other way is to consider how we
do the encryption, the classic mode of PKI using static key pairs, where you form a prior
relationship and then you trust that the key pairing remains
uncompromised. I think quantum computing challenges that statement. And so another
way of dealing with this is to make sure you use a unique encryption algorithm for every transaction
that you make, so that if quantum computing power is applied to your transactions, at least you know
that every message and every package of data needs to be cracked independently. And by having one
cracked, you don't have all of them, all of your communications broken into. And that approach to
symmetric encryption, I think, has a fundamentally greater strength than the old asymmetric methods
as used in PKI.
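One common way to realize the per-transaction idea Forrest describes is a fresh random symmetric key for every message, so that breaking one key exposes only that one message. The Python sketch below assumes the third-party cryptography package and invented payloads; it is an interpretation of the approach, not Cryptshare's implementation.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

messages = [b"invoice #1", b"invoice #2", b"invoice #3"]   # hypothetical payloads
sealed = []

for m in messages:
    key = AESGCM.generate_key(bit_length=256)   # unique key for this one message
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, m, None)
    # In a real system each key would be delivered out of band or wrapped separately;
    # they are kept together here only to show the structure.
    sealed.append((key, nonce, ct))

# Compromising one key only ever reveals one message, not the whole conversation:
key, nonce, ct = sealed[0]
assert AESGCM(key).decrypt(nonce, ct, None) == b"invoice #1"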
So for us, from a quantum perspective, our encryption is modular. So we're able to adjust it, swap it out, be very open and transparent
about it, which means that not only can we take advantage of the latest mathematical breakthroughs
in these special types of encryption that we use, but also we can make sure that the algorithms that
we leverage are quantum resistant. There are two approaches going on in parallel.
I don't think they're competitive.
I think they're complementary.
One is the solving difficult math approach, but it's not simply making the number bigger.
It's using what's called post-quantum cryptography, other mathematical approaches
that are not proven to be unbreakable but are more difficult than simply making the number bigger.
The other approach is to use a property of physics, which is the quantum key approach.
And that's where instead of using large numbers and difficult math,
we're using keys that are composed of photons,
and photons that have a related quantum state that won't exist if someone eavesdrops on them. So this is leveraging a law
of physics that says if you try to observe or eavesdrop on a quantum key, you'll change that
key in a profound way and the key will become useless and will not be able to be used to decrypt a file. And that is proven to be unbreakable.
So a combination of both of those schemes will probably get us to where we want to be.
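The eavesdropping-detection property can be illustrated with a purely classical toy simulation of the BB84 protocol, the scheme most quantum key distribution systems build on. This is a sketch, not Quantum Xchange's product: when a simulated eavesdropper measures and re-sends the photons, roughly a quarter of the sifted bits come out wrong, which the two ends can spot by comparing a sample.

import random

def run(n_photons: int, eavesdrop: bool) -> float:
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.randint(0, 1) for _ in range(n_photons)]
    bob_bases   = [random.randint(0, 1) for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            # Eve's measurement disturbs the state; the wrong basis randomizes the bit.
            bit = bit if eve_basis == a_basis else random.randint(0, 1)
            a_basis = eve_basis          # she re-sends in her own basis
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Keep only the positions where Alice's and Bob's bases matched ("sifting"),
    # then report the error rate they would observe when comparing a sample.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

print("error rate, quiet channel:  ", run(20_000, eavesdrop=False))  # close to 0.0
print("error rate, with eavesdrop: ", run(20_000, eavesdrop=True))   # close to 0.25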
And I've heard stories of, particularly some folks have been talking about nation states
that have been gathering up data, sort of vacuuming it up and storing it,
on the hope that in the future, even though they may not be able to get at that data now,
maybe 5, 10, 20 years down the line, they'll be able to get at it.
I think that's the fear. And I really don't argue in favor of 5, 10, or 20 years for the quantum computer,
but if you just look historically at what's happened in the past year with Google and its 72 qubit
machine, and IBM with its 50 qubit
quantum computer, and Microsoft,
and the Chinese, and so many people that are investing
billions of dollars into the development
of quantum computers. It's not an if, but it's a when. People like Google are saying it's three
years away. But whatever it is, we can definitely lose our data today, and it definitely can be
stored cheaply and decrypted later. The practical issue we face
with this is that explaining that methodology to people who are selecting technology is often a
very difficult thing because they need to invest time in understanding how you approach things
differently and it is something in which buyers have to invest time. I think the days in
which you would simply snatch a solution from the marketplace based on the claims of a vendor,
whether it's Microsoft or anybody else, are long gone. I think there needs to be a rather
more forensic analysis of exactly what these tools do and how they work and whether they
match the use case of an enterprise. And I think that's a fundamental requirement for anybody
buying an encryption technology today. The usage space is a new one, and we're certainly talking to a lot of people, raising awareness around that issue.
I think the Intel disclosures have shown that that type of attack surface, which is very common from a nation-state perspective, is now becoming very real in the commercial world. So our idea is to make sure that people are aware that this
is actually occurring and that there is a solution to stop it and to close off those attack surfaces.
Traditionally, because addressing that usage gap was so difficult from an encryption perspective,
people have really worked around the problem. And that comes in a lot of different forms.
At one extreme, we see people simply calculating that they're willing to take that risk of
processing sensitive data in the open, as it were.
Moving along the spectrum a little bit, we see people that have built various types of
fences around the processing of data, whether that fence is extremely large for an organization,
something like a firewall, down to extremely small
kinds of container security or even enclave types of solutions. The issue with those types of
technologies from a fencing perspective is that when you break in through the fence,
which all attackers can do at some point, the data and the operations inside as they're being processed are completely
exposed. And then finally, we see people now trying to apply the encryption as it's become
possible to keep everything encrypted as it's being processed. I think a lot of these techniques
are terrific. They're better than anything we've seen before, but standardization will come slowly.
I know NIST has been requesting submissions of algorithms, back in November, I believe, but they still haven't stood the test of time where many people try to break them. So combining
an algorithm that NIST certifies
with a quantum key I think would be an ultra
safe approach. I think quantum keys by
themselves today are better than anything we have. And in fact, I believe they are unbreakable
because they're relying on a property of physics that's as immutable as gravity.
We see a lot of different recommendations from people, security experts, and they're generally based on anecdotal evidence.
That's Adam Nichols from Grimm.
So there are things that they've seen in the past in some certain environment, and they worked well, so they're advocating them, you know, usually just in general.
And we've seen some cases where people have been making these recommendations for
years, like, for example, mandatory password changes. Years later, when someone actually did a study where they split their users into two groups and enforced this policy in one and didn't in the
other, they found that people were more likely to choose poor passwords. And they also found that
there was like a 17% chance that an attacker who knew
one password would be able to guess the next one in five tries or less.
So in terms of coming at your security decisions in a data-centric approach,
how do you establish a culture where that is the standard?
It's an uphill battle right now, but I think that's probably going to change going forward. There's been a lot of people who are pushing metrics, and that way you can at least
see within an organization how you're doing. People will bring this to the board, and when
the board's saying, you know, we're spending a lot of money on cybersecurity, can you show me that
it's working? So these are the kinds of demands that we're seeing from board members of the CISOs or CSOs.
And now basically it's up to the boots on the ground to make this happen and figure out how can we measure this?
What makes sense in our environment? Is it the number of days that a computer is down?
Is it the number of incidents that we've responded to? How do we measure this? And what can we do better?
And how do you ensure that you're looking at the right types of data that will lead to the best results? The things that you need to pay attention to are the things that
have the most impact to the business. If you look at the cost of an incident, like if a computer's
down for one day, it costs this much money and just that employee couldn't do their job that day.
Things like that.
You want to pay attention to what's important and measure that.
Because if you're just measuring the same generic, oh, how many password attempts were there on this account?
Like that doesn't actually tell you anything useful.
You want to make sure that whatever metrics you're gathering are going to be useful.
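As a deliberately simple sketch of that kind of business-anchored metric, the Python snippet below costs incidents in lost person-days rather than in raw counts like failed password attempts. Every number and incident name in it is an invented placeholder.

incidents = [
    {"name": "ransomware on HR laptop", "days_down": 2, "people_affected": 1},
    {"name": "file server outage",      "days_down": 1, "people_affected": 40},
]
COST_PER_PERSON_DAY = 400  # hypothetical loaded daily cost of one employee

for incident in incidents:
    # A metric the board can act on: what did this actually cost the business?
    incident["cost"] = incident["days_down"] * incident["people_affected"] * COST_PER_PERSON_DAY

total = sum(incident["cost"] for incident in incidents)
print(f"Incident cost this quarter: ${total:,}")   # compare against the security budget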
And the goal really is to kind of publish these things.
Unfortunately, it's difficult to get corporations to publish these things because they don't have
any real incentive. Like they've collected the data, they know the answer and they know how good
their science was in determining all of this, but they don't have a whole lot of incentive to share
it with anyone else. And the industry suffers as a result.
Sometimes people will trust outsiders.
A salesperson will come in or a consultant, and they'll say, well, this is what you should be doing.
And it sort of shifts the responsibility to that person without necessarily checking to make sure that they're giving you the right information. It doesn't actually shift any of the liability to that person. So while you might have a scapegoat, at the end of the day, if there's an incident and money's lost or the customers don't
trust your company anymore, that's still your problem. And you can point to, oh, well, you know,
this expert told me that I should be doing, you know, whatever it is that they recommended. But
you're the one holding the bag at the end of the day if you're the CISO. Quite often when I talk to analysts, they'll say that sometimes they'll be prompted to chase
something down because something just doesn't feel right. You know, they might not have the
numbers in front of them, but they just have a feeling. Is there anything to that when it comes
to setting a security posture, or is that a blind alley that we go down? I think most of the time
it is not a blind alley. I think it's, well, I mean, you don't know what's going to be at the
end of it. It might turn out to be a dead end, but at the same time, it might turn out to be
a huge issue. And that's kind of where the industry is at right now is that we don't really
have the data that we need to figure these things out from first principles.
So you kind of go with your gut.
And that's what people have been doing for a long time.
It works.
It's, you know, better than just randomly guessing, like, oh, I guess I'll just make up some policy that says they need to change their password every seven days. Or that we won't install patches, you know, until they've been tested in a testing environment.
Or maybe we're just going to install them in production live and hope that the patches were good.
Having an expert to guide you is better than
just kind of blindly choosing something.
At the same time, it's not the same as an actual proper scientific study.
And so what are your recommendations for people who want to adopt this approach?
How can they get started, and how can they convince the higher-ups
that this is a good way to invest their resources?
From what I've seen,
the higher-ups are pretty much already demanding
you give them some kind of evidence
that what you're doing is working.
So from what I've seen,
it seems like they've already pretty much bought in
and saying, like, look, we're spending this much money.
Like, why don't we cut the budget 20%
and see what the impact is?
Or raise the budget 20%.
What's going to happen?
So I don't think it's going to be so much of a problem with them getting on board.
It's more of a problem with people not knowing what to measure.
And part of the reason is, like I said before, since people aren't really sharing this data, like when they collect data and figure out what's the best policy in their particular circumstance or what's the best methodology, then they're not sharing this.
So some other company is like, well, where do I even start?
So that's one of the problems that we've been seeing.
And academic papers address this to some degree, but things are moving pretty fast, and academic papers tend to make very small incremental improvements, and we need to move a lot faster than that.
And so where can people go to get good information?
There's not a whole lot out there.
You have like your standard sources of information like NIST and things like that
based on anecdotal evidence, which is something.
It's not just random.
There's not really a whole lot of good sources.
One of the things that me and my team are going to be doing
is basically putting some of these recommendations
to the test and trying to disprove them.
Set up a test environment, and maybe if the recommendation is that you're safer if you have application whitelisting, then we'll basically run tests and try and figure out, well, is that true?
Is there some way that we can break out of that?
How many of the actual, just normal, off-the-shelf malware samples does that prevent?
And then do the same thing with other things like running as an unprivileged user,
all the different kind of recommendations that we hear all the time.
We're going to try and put them to the test and then publish our results
so that people know, like, okay, this is effective,
and here's the data to back it up, and it needs to be repeatable
so that other people can verify that this is in fact accurate. And if there's a problem with the testing methodology, they can come out and people will say, well, no, that's wrong because of this, and then we can say, yeah, you're right, we need to rerun the test, and we'll have different data and we can actually iterate. Whereas if people keep all these results secret, then it's going to be much harder to iterate and kind of improve as we go as an industry as a whole.
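A skeleton of that kind of repeatable experiment might look like the Python sketch below: the same corpus of samples is run against a baseline and a hardened configuration, and the raw results are written out so anyone can rerun or challenge them. The run_sample function is a hypothetical stand-in for whatever sandbox or VM harness a real lab would use; it is not a real API.

import csv
from typing import List

def run_sample(sample_path: str, config: str) -> bool:
    """Detonate one sample under the named configuration and report whether it
    was blocked. Placeholder: a real harness would drive a disposable VM."""
    raise NotImplementedError

def run_experiment(samples: List[str], configs: List[str], out_path: str) -> None:
    # Record every sample/configuration pair so the experiment is repeatable.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["sample", "config", "blocked"])
        for sample in samples:
            for config in configs:
                writer.writerow([sample, config, run_sample(sample, config)])

# e.g. run_experiment(samples, ["baseline", "app-whitelisting", "non-admin user"],
#                     "results.csv")  # the published CSV is what makes it repeatable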
Cyber security is still a young industry by all accounts.
And other industries like medicine or engineering, like structural engineering, they at one point, they were kind of doing it the same way where everyone just kind of does their own thing.
And people might share information once in a while, and sometimes they don't.
But now, pretty much, the standard is that we've got these things figured out.
And when something we've been doing for a long time is wrong, people will do rigorous studies to try and prove it.
And even then, it takes a while for it to be accepted as fact.
Like someone else needs to verify it in another study and so on. So yeah, the inertia factor is definitely going to be a difficulty. People should know what they're getting. So when they actually implement some
policy or install some product, they should know something about it and how effective it is in practice. Those are the metrics that we generally don't have right now.
There are ways of attempting to verify these,
like you have your network security assessments and things like that
to try and say, oh, well, this device was actually ineffective
at stopping any of the attacks.
We have that fairly well for products to some degree.
For policies, it's still kind of the Wild West
where you pick a policy and you're
like, well, it seems plausible. It makes sense. Yeah, let's do it. And then that's pretty much
the end of it. Until that bridge collapses. Exactly. Yeah.
And that's our CyberWire special edition. Our thanks to Ellison Anne Williams, Adam Nichols, Mark Forrest, and John Prisco for joining us.
The CyberWire is proudly produced in Maryland at the startup studios of DataTribe,
where they're building the next generation of cybersecurity startups and technologies.
Our coordinating producer is Jennifer Iben, editor is John Petrick,
technical editor is Chris Russell, executive editor is Peter Kilby, and I'm Dave Bittner.
Thanks for listening.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications,
securing sensitive data,
and ensuring your organization
runs smoothly and securely.
Visit ThreatLocker.com today
to see how a default deny approach
can keep your company safe and compliant.