Programming Throwdown - Episode 110: Security with Dotan Nahum
Episode Date: April 26, 2021

Programming Throwdown talks cybersecurity with Dotan Nahum, CEO and Co-founder of Spectral. Dotan provides us with a high-level overview of the role of cybersecurity, its definition, evolution, and current challenges. He also shares tips for small- and medium-sized ventures on how to develop best practices.

The episode touches on the following key topics and ideas:

00:01:12 Evolution of modern cybersecurity
00:06:06 When to integrate security in a design
00:11:54 Shadow IT
00:13:50 Hacker motives and motivations; SQL Injection explained
00:16:48 Firewalls and WAFs
00:20:29 Cybersecurity for small- and medium-sized companies
00:23:52 “The last mile of developers”
00:26:47 dotfiles
00:32:23 Simple tools and good practices
00:40:42 Attack vectors, attack factors
00:44:16 Ransomware and phishing
00:48:19 Unsafe languages
00:50:02 Fuzzing
00:54:11 Rust programming language
00:55:54 Example security scenario with IntelliJ
00:59:42 More about Spectral, Dotan’s company
01:03:40 Staying virtual using Discord

Transcript: Episode 110, Computer Security with Dotan Nahum

Jason Gauci: Programming Throwdown Episode 110, Security with Dotan Nahum. Take it away, Patrick.

[00:00:21] Patrick Wheeler: Hey everybody. We're here with the hundred and tenth episode, which is pretty exciting. And we have our guest to-- oh, yeah, go ahead. You want to...

[00:00:30] Jason Gauci: I'm just saying, yeah! (laugh)

[00:00:32] Patrick Wheeler: So we're here with our guest today, Dotan, and you are CEO of Spectral. Why don't you go ahead and introduce yourself briefly, and then we'll get started.

[00:00:42] Dotan Nahum: Yep. So hi, guys. So I am Dotan, and by the way, 110 is binary, right?

[00:00:48] Patrick Wheeler: Oh, there we go. That's right. (laugh)

[00:00:52] Dotan Nahum: So yeah, so I'm Dotan, CEO of Spectral. It's a cybersecurity company, geared towards developers. I mean, we like to say that we create tools for developers with security as a side effect.
So yeah, so that's, you know, that's what our focus is.

[00:01:12] Patrick Wheeler: Awesome. Well, I mean, I guess that's a lot to unpack. So I think everybody would agree security is very important, but maybe everyone doesn't understand what security is. So we were talking about this a little when we were doing warmups. So if we talk about security, does that mean that you are developing antivirus for computers, for developers, or does it mean something more?

[00:01:35] Dotan Nahum: Yeah, I mean, it kind of all goes back to, I guess, the evolution of our domain, our world, which is kind of the high-tech or software world. Time really gets compact with all these revolutions. We have evolution and revolution.

[00:01:57] So, I mean, if you go back to 2007, that was just before Facebook and just before the iPhone, I guess. And if you go back to 2005, that was before the rise of Microsoft, I guess the major rise of Microsoft as a .NET shop, which really made all the enterprise software come along. And then kind of '98, 2000, the first bubble.

[00:02:27] So all these stages, it's kind of a sprint to create technology. And the focus is on creating technology that is supposed to give developers productivity, and supposed to make companies very productive and create a very nice portfolio of products.

[00:02:48] And almost always, I mean, maybe not intentionally, but almost always the security side of things was kind of left behind. You know, I'm sure no one intended for it to be, but there's a lot more velocity in creating a great product at the time, at each and every step of this, like in the first bubble, and then in 2005, and then into 2007 and so on, rather than, okay, so let's create the technology and the product, and let's also make it kind of dependent on making great security be there for us.
[00:03:35] So almost every time, security came after the revolution, after the evolution. So we went from simple firewalls, to intrusion detection, which is, you know, the large kind of systems that tried their best to find anomalies, in the era of 2000, to the smarter firewalls. And even today, these mini kinds of firewalls, WAFs, that you integrate as an SDK into your app. So yeah, so technology comes in waves, and then security comes in waves as well.

[00:04:17] And yeah. So the latest we're seeing right now in terms of the evolution of software is that, yeah, we know that software eats the world, but we are kind of feeling that it already ate the world. So, you know, you can do so much today that you couldn't have done, I mean, as little as three or four years ago, actually. You know, you can take a Lambda and you can pick up a bunch of SaaS services and you're done. I mean, you build a product that, maybe three, four, five years ago, used to take much more energy to build.

[00:04:58] So in that sense, as a developer, you have so much more power and so many more paths to get to the same end goal that... I'm not sure, I mean, I feel it for myself, I'm not sure the security world can even begin to realize. Because, I mean, if we think about them as "they", then they need to understand how to develop as well as developers, in order to create great solutions for that developer that glues stuff together and, you know, invents stuff from existing parts.

[00:05:37] Jason Gauci: Yeah, that makes a bunch of sense.
[00:05:39] Patrick Wheeler: I'd say, yeah, that covered, I mean, you went through the whole history of modern, or the last couple of decades of, computer software there. But I was going to say, so one of the interesting things, I think, before we get into the kind of specifics about what needs to be secured, is this thing you mentioned where people build a product first and then try to figure out security later.

[00:06:02] I guess that's an interesting balance where, if you're building something, until it's built, maybe it doesn't really need security, right? If it's just a thought in my head, I don't need security. If people are going to start using it though, immediately you need to start having some amount of security. Do you have an opinion on what is the balance there?

[00:06:19] So if you don't know yet what you're doing and what may be your risks, when is the right time to start considering security, and what are some of the good, you know, first things to start considering?

[00:06:30] Dotan Nahum: Yeah, so that's a great, great question. I mean, I think the balance is shifting towards really taking the time, in development time, in design time, to think about security and the security model.

[00:06:46] So, you know, this was kind of theoretical: yeah, everyone should do threat modeling and everyone should do secure by design and so on. And frankly, you know, you'll find these people who are extremely into security that are actually doing these things.
But the thing is, it wasn't being done properly or, you know, by everyone as kind of a development
workflow.
You know, when you come to develop a feature, then you have the design and you have maybe
a POC and you're supposed to have this small or maybe large threat modeling box, but no one actually does it.
Or most people kind of focus on the other areas of developing a new feature: building the product and pushing it into your traditional server farm or your really secure and isolated
cloud operation, whatever.
And you're pretty sure that within this closed garden, even if you didn't do the proper threat
modeling as a developer, then things will be okay. However, this kind of
understanding is changing because it's no longer pushing to a server or to a kind of a closed
garden environment. It's, you know, taking your function and placing it somewhere. And now someone
can ask a question, which is, I don't know, I don't have the answer.
If I push a function to whatever, you know, I don't want to name any service, but, you know,
it's kind of any of the new hip, cool services out there that really make you productive. If you
push that function, did the other side do everything they need to do in terms of
the traditional threat modeling to keep you safe?
Are they obligated to do it?
Do they have, let's say, a WAF to identify SQL injections for you, maybe, or maybe to
drop someone who's attacking your service and so on?
I'm not sure, actually.
So it kind of shifts the responsibility to the developer because you're building a function,
you're dropping it on whatever cloud provider, and your function is now live.
It's up to you, right? Yeah. So I guess you were talking about deploying these functions and applications to public
facing cloud.
Or do you think that the same applies to internally deployed app like enterprise software that
would just be used sort of within your corporate firewalls?
I think you were sort of referring to this when you mentioned walled garden approaches.
Right, right.
So I believe like eventually the enterprises, the closed enterprises really adopt whatever is happening on the open, let's say, on the open wild world.
So, you know, maybe we need to give one realistic example.
So let's say I'm working at kind of an Acme Corp,
some kind of corporation, doesn't matter.
And I'm a developer and basically I have, you know,
this service, small service to build.
And I decide to build it on, I don't know, Heroku,
or, I don't know, Vercel.
I do that.
No one is stopping me.
I can do it.
And then I can plug it into my existing infrastructure inside the corporation.
And I don't know if that would be something that is okay.
I mean, as a developer, I'm just, you know, shipping software.
But here's something that, you know, an ability or a possibility
that wasn't there five years ago.
You know, Heroku was there, but the culture of shipping things fast
and being able to take things to the, you know,
to the extreme end-to-end wasn't there.
So here's one path that is now open, and now people can actually wake up tomorrow and figure
out, scan their code and look for external services that exist in the code base and try
to figure out how many are there that they know of
and how many are there that they don't know of.
And that's just SaaS services.
Now you can take the same analogy
and try to think about what kind of libraries do you use?
And everyone remembers LeftPad, right?
When it was just suddenly pulled out of NPM,
breaking half the internet.
That's kind of the new world that's happening in the last few years that I'm not sure everyone is ready for.
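The "scan your code and look for external services" exercise described above can be approximated with a crude URL grep. This is only an illustrative sketch of the idea; a real inventory would also have to look at SDK imports, lockfiles, and configuration, and the extension list here is an assumption:

```python
import os
import re

# Crude approximation of the exercise: inventory the external hosts
# referenced in a code base by grepping source files for URLs.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def external_hosts(root, exts=(".py", ".js", ".ts", ".json", ".yml", ".yaml")):
    """Return a sorted list of unique hostnames found in files under root."""
    hosts = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="replace") as f:
                    hosts.update(URL_RE.findall(f.read()))
            except OSError:
                continue  # unreadable file, skip it
    return sorted(hosts)
```

Running this over a repository gives a first, very rough list of SaaS endpoints to compare against what the team thinks it uses.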
So what would be an example of, like you mentioned, building a Heroku-based application,
deploying it and scanning for what services that you may not have realized?
Do you have examples there of what would be something
that you may not intend to have exposed that got exposed?
Oh, yeah.
I mean, well, first there's what the cyber world calls shadow IT,
where what people basically want to do
is be more productive inside the organization.
So it's kind of two sides of the coin.
One is positive, one is negative.
And the positive side is you have a team that thinks it can move quickly
and adopts, you know, unvetted software, so to speak,
and then ships it to production, and that creates a bunch of, you know, IT assets, services, whatever you can think of, that actually no one knows exist in production.
On the other side, from a cyber perspective, that is unauthorized use of software,
which is why it gets a kind of warfare name:
shadow IT, like shadow ops.
So this example, you know,
if you guys even check your stuff,
then maybe you can find many examples of that.
But, you know, it's kind of a productivity thing.
Yeah, I think, so maybe just stepping back a bit, it'd be really good to explain to folks
what are the different components of computer security?
What actually a firewall is and how to protect against, like what is a SQL injection?
What are the kind of threats that you encounter
and how do those things work?
Oh yeah, so I think, first of all,
let's try to get the motives out there, right?
So there's hackers and there's the good people
and bad people, right?
So to speak.
So I guess developers build software
and they're trying their best to actually add
value, and the hackers try to, I don't know, remove value, or try to game the system and gain some
profit really quickly. So basically when I build, for example, when I build a function, I don't know, that takes a parameter from a URL from a website and, you know, maybe it's a page number, you know, traditional paging feature.
Then I take this parameter and I, you know, inject it into an SQL query that I have on my backend.
And my goal is to just give you page number two. So that's,
you know, that's my perspective as a developer. I see nothing, you know, no harm done. I mean,
I'm taking a value and dropping it inside a string, which contains an SQL query, and I'm done.
Like I push this feature, I go home and that's it. But the other side of it is that when a hacker looks at it,
then first of all, there's what is it for me?
Like what's there to gain?
But first of all, the company needs to be really attractive
in terms of hacking anything.
And there has to be some kind of trophy on the other side.
So if I'm looking at something, at some company as a hacker,
and I realize they might have sensitive data because they're, I don't know, healthcare,
whatever, then at least I have now the motive or incentive to actually try to figure out
where can I hack into. So looking at this naive SQL thing the developer just built, I'm looking at
the parameter and what I'm trying to do is take, instead of giving the parameter what it expects,
which is a number, I'll try as a hacker to, you know, try to inject some malicious SQL code.
Maybe if I'm in for doing some damage, maybe I'll try some drop tables instead of a
number. If I have reason to believe that the backend will actually give me the results
as I wanted, then I'll try to inject an actual query into the number instead of a number.
And what I'll hope for is for the developer not to actually be defensive,
which means the developer forgot
or didn't bother to actually sanitize the parameters
and make sure that if the developer expects a number,
there only should be a number there.
So that is kind of the gist of SQL injection.
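The naive paging endpoint walked through above, and the fix hinted at (validate the type, and let the driver treat the value as data), can be sketched in a few lines. This is an illustrative Python/sqlite3 sketch; the table, column, and function names are made up for the example:

```python
import sqlite3

# Toy database standing in for the "backend" in the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

def get_page_unsafe(page):
    # VULNERABLE: the URL parameter is interpolated into the SQL string,
    # so attacker-controlled text becomes part of the query itself.
    return conn.execute(
        f"SELECT name FROM items LIMIT 1 OFFSET {page}").fetchall()

def get_page_safe(page):
    # Parameterized query: the driver treats the value as data, never as SQL.
    # int(page) is the "if you expect a number, only accept a number" check;
    # it raises ValueError on anything like "2; DROP TABLE items".
    return conn.execute(
        "SELECT name FROM items LIMIT 1 OFFSET ?", (int(page),)).fetchall()

print(get_page_safe("1"))  # [('b',)]
```

The two functions return the same rows for honest input; only the second refuses the malicious one.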
So this is one kind of attack.
So just a little bit about firewalls.
So basically a firewall is something that sits between a machine and the outside internet
or maybe internal internet, doesn't matter, the outside world.
And what it tries to do is to monitor traffic and figure out which traffic is strange
and which traffic is normal.
So it used to be, you know, it used to be very simple.
It used to be basically looking at open ports
and trying to block irregular ports on machines.
That is like 20 years ago.
And today it's a lot smarter. So today a firewall
is maybe not the correct name anymore, but it's a system that looks at anomalies in your traffic.
And that is, the acronym is WAF, which is Web Application Firewall. So many cloud providers have that,
and you can actually flip a switch and have it as a feature.
And basically, it looks at your traffic,
and it can recognize what is normal and what is not normal.
And usually, that is backed by some kind of machine learning.
So yeah, so these are two categories, I guess, of attacks.
And basically, the reality is that the amount of attacks always grows.
There are always new attacks because there's always new code and there's always new features
and new products being launched.
That make sense?
Yeah.
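Real WAFs combine large rule sets with learned traffic baselines, as described above; the flavor of the rule-based half can be sketched in a few lines. This toy filter only checks request parameters against a handful of SQL-injection markers, and is nowhere near a production WAF:

```python
import re

# Toy, rule-based stand-in for one thing a WAF does: flag request
# parameters containing common SQL-injection markers. Real systems use
# far richer rules plus learned baselines of normal traffic.
SQLI_MARKERS = re.compile(
    r"('|--|;|\b(union|select|drop|insert|delete)\b)", re.IGNORECASE)

def looks_malicious(params):
    """Return True if any parameter value matches an injection marker."""
    return any(SQLI_MARKERS.search(str(v)) for v in params.values())

print(looks_malicious({"page": "2"}))                    # False
print(looks_malicious({"page": "2; DROP TABLE users"}))  # True
```

A filter like this is exactly the kind of thing that produces false positives at first, which is why the learned, traffic-based side matters.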
So you mentioned this WAF, the web application firewall,
and trying to understand what is questionable traffic.
So if you're deploying some new website, some new API,
and it doesn't kind of know what to expect,
how does it understand what is questionable
and what is considered pretty normal traffic?
Right.
So there's, I guess, like in every machine learning operation, there's the cold boot
problem.
So if you have something new, then obviously it hasn't yet learned enough traffic to tell
you what's normal and what's not.
But luckily, I guess if someone would look at most of the internet traffic, they'll realize there are clusters of normal traffic and there are clusters of irregular traffic.
So, and again, probably this is, you know, the secret recipe of every different vendor
of such firewalls.
Fair enough.
But yeah, you know, the generic case is that there's normal behavior and you have so much traffic
these days, public traffic and so on that you can analyze and build on.
And there's like irregular traffic that is very specific to a certain service.
And I can tell you from experience that, yeah, definitely it takes time for these technologies
to actually learn what is normal.
And you get a small amount of false positives at the beginning.
But the good news is that if you have traffic, then it learns.
And if you don't have traffic, then maybe this service is not that popular or risky
because you don't have traffic.
So yeah, so in this specific case, it kind of creates a nice closure on it.
So there are no gaps.
So I'm going to take a step back and come back to this in a second, and correct me, I could be completely wrong here.
So if I'm thinking from the shoes of an individual developer like myself or Jason or anybody who's just developing software.
And I'm going to assume, which you might have to help correct me. So if you're at a really big
company, chances are you're not able to deploy straight to the cloud, or at least that's been
my experience, is they typically have it pretty locked down. There's like procedures and reviews
to go through and there's a whole organization kind of devoted to that.
So if I flip to the other side, if you're a super, super small, like only a single developer
or a couple of developers, then you probably are the whole entire stack.
And then my guess is there's a sort of gradient in the middle where like you kind of mentioned
before this, you know, shadow IT where maybe there's people who are trying to do IT or
monitor it, but they're not
everywhere. You could get around them. And it's not that you intentionally or unintentionally
meant to, it just sort of kind of happens. Where along the spectrum do you find that
many of the developers kind of live? I'm not clear from my side. I've spent most of my life
working at relatively large companies where these kinds of things you're talking about have always been of interest, but they've always
kind of been handled by someone else or handled by the platform. So if you're building something
yourself or in a company that's, let's say sort of medium or small size, how do you sort of figure
out like who the right people to contact are? How do you kind of figure out for yourself what are
the best development approaches to make sure that you aren't accidentally going to expose all your data to the world?
Right, right. So the thing is that, and that's why I kind of connected to kind of an evolution.
So there used to be good news for this. There used to be a good answer for your question.
The good answer would be, oh yeah, once you have a few controls in your organization, basically 90%
of the problem, of the risk, is gone. But that's no longer the answer. You know, for the big,
big corporations, the, I don't know, the Amazons, I believe it's so insanely sophisticated there
that there's very little chances of a developer adding risk
in terms of security because the investment is probably huge.
But for the medium-sized and the emerging companies
that have to deliver and ship fast,
which is probably the majority of the companies these days, there isn't this kind of resources.
And also, the times of one solution to rule them all is gone.
So you can't really buy a system, put it in your network, and now all of your security issues are gone.
Because ways to do things are just growing.
So just for example, if you look at your machine right now, I mean, ignoring where you currently work at, if you look at your machine and have a small thought, what's in your
bash history, assuming you use your terminal? Do you have dotfiles? If you use Vim, what's in
your .vimrc? Do you have any secrets there, any tokens, anything that is maybe issued by a
company or organization, but you have it right there.
And guess what's the first thing hackers want to steal?
Right?
Yes, logins and passwords.
Exactly.
And you know, don't tell me, but
by all means, have a look
after this.
Just, you know,
take a look at your shell history
and figure out what's in there.
What's the material? How many times
do you export
a token, maybe even an ephemeral
token, something temporary
but still it gives
clues, it gives clues to how
systems work and you know
how many of the tokens
are temporary and how many of them
are permanent, and what one looks like, and so on and so forth.
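The self-audit suggested above, looking through your own shell history for exported secrets, can be scripted. A rough sketch; the default history path and the keyword pattern are assumptions, so adjust both for your shell:

```python
import os
import re

# Self-audit: grep your own shell history for exported secrets.
# The keyword list is an illustrative guess, not a complete one.
SECRET_RE = re.compile(
    r"export\s+\w*(TOKEN|SECRET|KEY|PASSWORD)\w*\s*=", re.IGNORECASE)

def scan_history(path=os.path.expanduser("~/.bash_history")):
    """Return (line number, line) pairs that look like exported secrets."""
    hits = []
    try:
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if SECRET_RE.search(line):
                    hits.append((lineno, line.strip()))
    except FileNotFoundError:
        pass  # no history file on this machine
    return hits

for lineno, line in scan_history():
    print(f"{lineno}: {line}")
```

Even ephemeral tokens that turn up here give the clues the conversation mentions, so anything flagged is worth rotating.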
So all these things are kind of happening on the last mile, which is us, the developers.
So you can look at an organization and say, wow, that's a fully secure organization.
But the last mile is probably us.
We use all the, you know, in terms of R&D,
we use the assets, we connect them,
we take on the risk of, yeah, I'm going to use this token
and this password and this secret,
and I'm going to connect to this external service
and to this external machine,
and I'm going to store some stuff on my machines.
And all these things combined are just, you know, ways that I've adopted to be super productive.
And as the world of software grows, the ways to be super productive also get more sophisticated. And that creates an impossible
problem for one cyber or security solution to solve. It's basically you need to solve
all of the habits of developers at once? Yeah, I guess I don't,
security is not my background or my forte,
but I always hear security and depth
and layers of security.
And I guess this is what you're sort of mentioning,
like the last mile of developers.
And we were talking at my company
a little bit about social engineering
and just my takeaway was basically,
if someone targets you to social engineer, you're kind of hosed. The amount of sophistication in some of
the attacks that have been uncovered is just insane, that no matter what you do,
someone could probably figure out your username and password without you kind of knowing that
you've turned it over. And I guess, as you're mentioning, if you store stuff on your computer and your dotfiles, or even just in your history,
and they get access to your computer, they're going to learn a lot. And I guess that makes
sense. That kind of brings to light to me why you have to have so many layers of security. It's not
just the firewall at the outside or the intrusion detection on the inside, but also like even on
individual computers, like having people have good habits and stuff. Right, right. And you can't not have
your dot files, right? Because first of all, dot files indicate a kind of progression, right?
Because you're using something that supports 12-factor apps and is considered as a best practice, right?
But there's like a huge warning here that I think we all miss, which is that somewhere we have a dotfile that is not meant to be in a repo, but it's lying on our computers.
And what's in these dotfiles?
I don't know, but I'm quite sure that a hacker learning these best practices, they also want
to learn how to abuse these best practices.
So I would, I don't know, I would build a script that searches for .env.production.
Yeah, I think...
And grabbing it from your computer, right?
Oh, yeah.
I think I took a...
I also, like Patrick, don't have a strong background in cybersecurity, but I did take
a course on it in university.
And I remember at one point someone was...
The lecturer was talking about the magnetic hard drives and how when you just erase something,
it's not actually gone. It's just, you know, some reference to the data is gone because to recover
it. And someone started kind of challenging and saying, well, if I do all of these things in this
course, then, you know, I'm totally untraceable. I'm totally secure. And the professor had a really
insightful answer that always stuck with me. He said, yes, but if you're a criminal, and this is for like cyber forensics type stuff, if you're a criminal, you have to make sure that in the world of atoms you're totally untraceable, and in the world of bits you're untraceable, and you didn't leave any footprints. And so when you start accumulating all of this, it becomes harder and harder and harder to get away with something.
And so this is also true where there's so many different systems we work with.
We use 10 different languages and, you know, we're on three different
public clouds and there's four different machine types.
When you start adding it all up, if you have a problem even once, that could be enough to get access to everything else.
Right.
I think this is completely what's happening right now.
And unfortunately, a lot of it is happening in our domain, like in the developer space, because the general sense is that if we get more power
and, you know, I can look at Docker
just as something that happened
that gives us developers more power,
because now I can, you know,
I know previously I could use VMs,
but now I can, you know, kind of juggle machines
and plug and play them and, you know, build so many things in a better way.
So I have so much more power using Docker.
And all these things kind of shift the power towards the developer and shift the responsibility also to the developer. And the big question is,
did you or did we know that this, that happened,
that we have more responsibility now?
Because as far as we care,
you know, we have more stuff to play with.
I mean, it's, you know, I'm super productive.
I'm much more productive than 10 years ago.
But did I know that I now have much more responsibilities in terms of security?
So, yeah.
So I think the answer for that is that we're seeing that not everyone realizes that, you know, with that extra oomph that we got with all these technologies, we are taking more risk as developers.
So that is the, you know, that is the friction that we see.
And the answer is to try to use the same tools
that we use to build features and products and to figure out what do we,
you know, what do we miss? So I just said, yeah, if I'm a hacker, let me just build a scanner that, you know, tries to find your .env.production, right? So how about we use that and build a developer tool that actually can tell me that before the hacker knows it?
So here's an idea.
Build a tool that scans for all these files that you know that you have in the back of your head on your computer.
And that is a tool that you build for yourself and put it in your own toolbox.
And now you feel much more secure and now you can work in a safer way.
It's like a carpenter working with security goggles, right?
So this is how you can actually build the tools for yourself so that you can actually take on more responsibility.
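As a rough illustration of the self-serve scanner idea described here, a minimal sketch in Python. The file list and secret patterns are illustrative assumptions, far smaller than what a real scanner such as Spectral's would use:

```python
import os
import re

# Illustrative files that commonly hold credentials (an assumption,
# not an exhaustive list).
RISKY_FILES = [".env", ".env.production", ".npmrc", ".netrc",
               os.path.join(".aws", "credentials")]

# Illustrative secret patterns: AWS access key ID shapes plus generic
# secret/token/password assignments.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(secret|token|password)\s*[:=]\s*\S+"),
]

def scan(root):
    """Walk `root`, and for each risky file report the lines that look
    like they contain a secret, as (relative path, line number) pairs."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            if not any(rel.endswith(risky) for risky in RISKY_FILES):
                continue
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if any(p.search(line) for p in SECRET_PATTERNS):
                        findings.append((rel, lineno))
    return findings
```

Running something like this over your home directory before pushing a project is exactly the "goggles" idea: cheap, personal, and in your own toolbox.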
So that was basically my thinking, I guess, throughout all of my career. So that every
step you need to actually make your toolbox bigger so that you can actually take on more
responsibility. Yeah. So when you talk about, I guess,
throughout your career, you're saying, thinking about this toolbox, I guess it's probably bad,
but I'll admit it. I don't really spend much of my day thinking about what tools could I add to
my toolbox to help make sure that I don't leak secrets. I'm a bad, bad engineer, I guess. But
how does someone go from the mindset of like,
my job is to sit here and as you were mentioning, faster and faster to ship a product to see what
sticks. What's the phrase? Move fast and break things, right? I didn't invent that. And
like, how do you go from that mindset to sort of culturing a sense of like, hey, wait a second,
I'm taking these risks, and they're really easy to not take. You know, these are the common dotfiles that people would scan for. I mean, it makes sense when you tell it to me, it seems obvious, but I'll admit that I probably never thought about it before. How does someone go from me sitting here
and just developing code to I also need to have tools in my toolbox for not just
developing my code, but for making sure my code is safe and secure? Yeah. So actually, it can be very simple. I mean, to me it's very clear, and I'll try to give the same clarity in how I think about it. So I remember the day when we used to ship software
and zero tests.
That was around 2005, 2006 maybe.
Yeah, no such thing as unit tests. No one even knew about that.
And basically you would ship your software,
you wouldn't even test it,
you know, properly and you'll figure out, yeah, we have a QA somewhere down the line.
They'll do the work, tell me what's wrong. And I'll just, you know, wake up and fix the stuff
they found. That was life. And that was not long ago. In normal professions, 15 years in, I don't know, car making, that's nothing, right? So that's 15 years in software.
So that, that was the reality.
But since then, in terms of quality and QA, I mean, we almost obliterated that kind of workflow, and we have unit tests and end-to-end tests, and so on and so forth.
And everyone knows that you need to have coverage.
So that's one thing to think about.
Another thing to think about is distributed systems, right?
So there was a time where we built a server, a service, doesn't matter,
and we deployed it to one
single server. And that again was around the same era. And maybe we used two. That was like amazing. And the load balancer was kind of a hardware thing that you had to take out of a box and install somewhere.
And, you know, we never thought about redundancy in the way that we do today.
And today it's insanely more involved and so much, you know, so much better. And again, today, what we do is we plan a service.
And I mean, almost before the first line of code, we think about how is it going to be deployed in
how many instances and how would it fail and so on and so forth. And I think security will go
through the same evolution. I mean, what changed?
The only thing that changed is the responsibility.
So on the QA story, there was a group of people
who were responsible to test your code,
but not you as a developer.
And on the redundancy side,
there was the group of people called IT, not even ops,
and they were supposed to make sure your service is always live, which is absurd these days, because how would they even know
back then? And I mean, today, you guys just said it. I mean, in some organizations,
you build your code and there's a bunch of people who are
responsible to make sure you don't do a mistake or don't put the company in risk. So, I mean,
just, you know, just from a history point of view, the story should repeat itself, right?
Yeah. I mean, I think, as you mentioned, like a single developer becoming more and more,
I mean, in the beginning, there was only the developer and then there was the organization.
And then now we're going back to only the developer.
So, yeah, I would say it's a fair bet to say that it's going to go back to a combination of people.
And we'll see. I sometimes think that the roles can be embedded within the team.
So, you know, we talk about
like deploying on Docker or whatever, like, it's not that you have an ops team that is responsible
for deploying everything, but maybe you have, you know, a person or a consultant on your team who
helps you do it. And it's still the team's responsibility, but there's someone there to
help it. So I guess if I hear what you're saying about security, thinking similarly, like having someone who isn't there to just, you know,
send you emails when you've accidentally leaked your password. Um, but actually, you know,
help you guys develop, um, good practices and sort of look over what you're doing and make
good suggestions and is a shared goal with your team. Yeah, I think that can make a lot of sense.
Yeah, and I mean, you asked how teams can actually improve and, you know, take more of that ownership over security.
So that's the first step, realization.
You know, history repeats itself.
And I'm a huge fan of philosophy.
And what I see here is a pattern that repeats.
And I'm quite convinced it will repeat itself because we are all people and humans.
And we collaborate and work in fairly the same way.
You know, what's the difference between quality and quality of service, which is actually kind of what
distributed systems come to solve, and then quality of your security.
So it's all the same.
So that's one part.
And the other part that I believe in personally is trying to instill the mentality of retrospectives
and learning from mistakes.
So if you have these kinds of processes in your company, in your culture,
then you can always use these processes to actually have, you know, a plugin, a space for,
hey, we're doing a pre-mortem or a post-mortem, let's talk about how security fits into this whole picture. So we talked about redundancy and deployment and, you know, capacity planning; how about we talk a little bit about security? Which means sometimes you can simulate attacks, sometimes you can prove that a certain service is secure, or sometimes you can just list what your risks are. And that is a great conversation opener. So if you have a billing service, um, you know, then you can actually state your fears. And you can just say in that same forum,
your fear is that someone can go in
and, I don't know, steal what?
Credit cards, transaction numbers, and so on.
So just by stating what's valuable for you
in that same forum of pre-mortem, retrospective, whatever,
you know, you can actually start a great conversation
and you can discover that people didn't even realize
that the service you're building
is actually holding a lot of value
for a potential attacker.
And that is 50% of the job.
Yeah, we've kind of covered a lot of ground already, but you're describing now, you were mentioning, sort of thinking about the attack vectors and, you know, where could people attack? What would they think is of value? Sort of thinking, coming from someone on the outside, not just, hey, I'm playing with your service, but I'm looking at it, I'm analyzing what I could get from it.
And then I'm deciding how much time to spend, whether it's just to crash your site or to drop your data, or if there's something here, I could extract. And I guess that even in itself is a
useful way to think about it is I'm deploying this service, which is valuable to end users.
But in some ways, like the more personal data or credit card, whatever,
those things that might be most valuable to delivering really awesome experiences to end
users would also be very juicy targets for attackers. And I guess the juicier the target,
the more concerned you have to be with making sure your stuff is very secure.
Right. And always, I try to keep it very, very simple. I mean, think about what you're doing day to day, you know: is your computer filled with sensitive material?
Did you encrypt your drive?
Because, you know, once, you know,
in the times when we used to go to conferences,
then I literally saw, I won't name the company,
but it's a big cloud company,
two solution architects just leaving their laptop on the chair
and going for the restroom.
And I was kind of looking at two laptops
with obviously the company sticker on them, just telling me, steal me, right?
I mean, so once that happens, if you didn't encrypt your hard drive, basically it's just five minutes and I have all of your data. So these are the small, simple terms that I like to think in. It's very, very simple: connect to the stuff that you're doing on a day-to-day basis that you're feeling uncomfortable with, and try to figure out what that is. And on the other side, in terms of organizations and teams, it's also very simple: connect to the things that you're fearing, like your fears.
So basically, when you're about to deploy a service, I'm sure as hell, like as a developer, your fear is that this service is going down, right? So you'll try to figure out how to keep it alive and, you know,
avoid waking up at night because, you know, you didn't, I don't know,
throttle something properly or you didn't think of an edge case.
So just add another layer of fear, which is,
I think this service is storing sensitive data,
and I want to just state that. And that is just a simple way to start a conversation
with other intelligence people and brainstorm and think what you can do. And obviously,
when you do that a few times,
you'll probably get to more advanced levels
of thinking of attack vectors
and kind of instilling this culture
and it will just happen from itself.
Yeah, I think you're right.
I think realizing it's an issue,
thinking about it,
these things are important.
I feel like, as you said, that lesson from philosophy applies over and over again: not just that things repeat themselves, but that the path to learning about something is first realizing there is something to learn. And I think the topics we're discussing here are interesting approaches. So we've talked about, on a developer computer, having something that scans for maybe secrets that you wouldn't want pushed out or that you wouldn't want someone to get off your computer. We talked about application firewalls and traditional firewalls. We talked a little bit about SQL injection. So we talked a little bit about someone getting on your computer or stealing your computer hard drive. We talked a little bit about the end deployed app.
What are other areas?
I mean, there's a whole pipeline there, I guess, of, you know, not just developing the code, but then pushing the code, serving the code, you know, like the distributed systems themselves.
Like what are other areas that are important to think about security in?
Yeah.
So there's two niche areas, I would say.
I would be careful with calling these niche,
but there's a reason I didn't mention these first.
So one is, I believe, maybe the scariest of all,
which is getting an email, clicking on something, and then bam, your computer is now locked with all the data and you need to pay someone millions of dollars.
Right.
So this is like the whole ransomware thing.
And yeah, as you mentioned, the best way to combat this is to actually, you know, be aware.
So have awareness training and make sure you know what phishing is.
And in terms of emails specifically, there are some email providers that are very advanced, that can tell you, listen, this email came from outside of your organization. And I'm a fan of Gmail in that sense. You know, they can color the out-of-your-contacts thing. I don't know if that exists in other clients. I hope it does.
But this is a good part of awareness
that I thought should be there.
And it wasn't there for a long while.
I mean, if you're an email client
and you're giving a service to an enterprise and obviously you can identify who's in the organization,
who's outside, and then just give me a call or tell me, listen, someone from outside the
organization is now sending you an email. That's a great, great thing to know.
And flagging hyperlinks that don't match the URL they point to is another
one that shocks me that email clients don't do today. Right, right, right. And obviously all the
DNS typos, which is fun. You have a company name, and then someone changes a letter, maybe turns a lowercase L into a capital I, which looks just like it. And then you have something which looks familiar, but isn't.
So that is also something, I don't know whether to call it funny, but when you realize that you can do it, then you go and buy the domain for like four bucks. And then you have something in your hand that you know someone can use for attacking. So basically, by the way, this is something I do in almost every company I join: I look at the domain, and then I go and buy all the similar-looking domains and hand them over to the ops and IT people. Just have these domains, you know, just buy them so no one else can buy them. And it's very cheap, so I really encourage everyone to do that for their companies.
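The letter-swapping trick can be automated. Here is a toy lookalike-domain generator in Python; the substitution table is a small illustrative assumption, and real typosquatting tools cover far more confusable pairs:

```python
# Illustrative homoglyph/typo substitutions (a tiny sample; real tools
# use much larger confusable tables).
SUBS = {"l": ["1", "i"], "i": ["l", "1"], "o": ["0"], "rn": ["m"], "m": ["rn"]}

def lookalikes(domain):
    """Generate single-substitution lookalike variants of `domain`,
    swapping characters only in the name part and keeping the TLD."""
    name, _, tld = domain.partition(".")
    variants = set()
    for src, repls in SUBS.items():
        start = 0
        while True:
            idx = name.find(src, start)
            if idx == -1:
                break
            for repl in repls:
                variants.add(name[:idx] + repl + name[idx + len(src):] + "." + tld)
            start = idx + 1
    variants.discard(domain)  # don't report the original itself
    return sorted(variants)
```

Feeding your own company's domain through something like this tells you which variants to register before anyone else does.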
So yeah, so this is one category
which I would call niche of kind of attacks
because they're very specific.
And the other one is actually one that will feel familiar to people who work with embedded systems, maybe more, because there, there's a category of languages which are considered unsafe. I guess that's a categorization that was kind of born recently. But the idea...
You're about to attack me, aren't you? Oh, no, no, no, no.
And the idea is that, I mean, to me, it's strange because I remember programming in C and C++
like it was yesterday and also assembly.
I'm from this generation.
It's today for me.
And these days, these languages are kind of portrayed as unsafe, basically because they are not providing the right checks for your memory usage out of the box, by default. So for this class of languages, there's a whole domain of proper usage, so you can actually build secure software.
But it is secure in the sense that when you,
I don't know, when you put the software on a pacemaker,
then no hacker can actually hack that, I guess,
or a spaceship or whatever.
So yeah, so there's this whole domain of security
geared towards making sure these things are as safe as possible. And just to look at this from
a developer point of view, then when you build this kind of software, so one of the most accessible tools to use is a fuzzer,
which basically means it's a kind of a tool or a library that you can use.
I'll try to explain it in simple terms.
If you have a function that is supposed to receive a string and an integer, for example,
then this tool would look at this function, analyze it, and try to inject
kind of brute force all of the strings in the world, like huge strings, strings with binary
characters in them, strings with unicode characters in them, and will do it by brute forcing.
It won't try to be smart about it. If it's an integer, obviously it will try to overflow it or give it some bad numbers. And basically it takes some time, and then it will come up with a combination which is bad for your software. So basically you'll be fuzzing this for weeks and weeks and weeks, and then you'll be able to see if, you know, there's any result.
And this result would be gold for you, because you just exchanged time for finding something which is very sensitive. If this was a spaceship, for example, right?
So that is a whole category of security, very geared towards the developer. And also, you know, in the realm of embedded, or things that, in that sense, can become very, very costly.
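A rough Python sketch of the brute-force idea described here. Real fuzzers such as AFL or libFuzzer are coverage-guided and far smarter; `parse_record` is a made-up target with a deliberate unchecked index:

```python
import random
import string

def fuzz(target, rounds=1000, seed=0):
    """Throw randomized (string, int) inputs at `target` and collect the
    inputs that make it raise an exception."""
    rng = random.Random(seed)
    crashes = []
    # Huge strings, binary-ish bytes, and unicode characters, as described.
    alphabet = string.printable + "\x00\xff\u2603"
    for _ in range(rounds):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(0, 200)))
        # Boundary integers plus random ones, to provoke overflow-style bugs.
        n = rng.choice([0, -1, 1, 2**31 - 1, -2**31,
                        rng.randrange(-10**9, 10**9)])
        try:
            target(s, n)
        except Exception as exc:
            crashes.append((s, n, exc))
    return crashes

# A toy target with a hidden edge case: it indexes the string without
# checking the integer against the string's length.
def parse_record(s, n):
    return s[n]
```

Every `(s, n, exc)` tuple that comes back is the "gold": a concrete input that breaks the function.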
Just to riff on that for a second, for people who may not realize, so if you run a fuzzer
and your program crashes, there's a number of different reasons why it could be a problem. So the first reason
is that obviously, like if you were running a spacecraft or a service, if it crashes, it's down, and then you have, you know, an outage, right? And so someone could keep sending malformed data
and crashing your system. And if they do that enough, then you have a sort of denial of service
problem or, you know, even worse.
And so in that way, you know, that's kind of like the first class of problems. The second class of problem though, is that sometimes it crashes because you're not checking the data
properly. And the crash can actually lead to a buffer overflow that allows someone to exploit
your code and then actually read contents of your computer or your memory and
actually steal results, which is much worse than just crashing it.
And so running a fuzzer and finding those things is super useful because many times
it's unobvious, especially if you depend on another library for doing something like JSON parsing, or any kind of data consumption that someone can feed data into.
And if there's a problem in that library, which you used because you liked the fact
that it had a lot of convenient functions in it, you could be opening yourself to a
whole host of problems that you didn't realize.
Right.
You said it much better than me. But yeah, you know, you can claim, I don't know, maybe, as we have SpaceX and all these, maybe there are many more developers being pulled in and shipping code into a spacecraft these days than there were when there was only NASA. So I don't know. Maybe fuzzers will come into fashion.
Yeah, so we talked about, okay, a couple other niche areas.
And then one of the other things I know people kind of mention
is you see every so often is someone will publish a report
where they scanned GitHub and they found,
you know, 10,392 places where someone had their database password stored.
So obviously, that's a huge problem. Is that something that you see people doing, pushing code up to a repository where they've put something in there unintentionally? Yeah. I mean, first of all, quick disclaimer,
this is part of what we do at Spectral.
Oh, okay.
Well, I guess I teed one up for you there.
Yeah, and by the way, just to close a small circle,
which, you know, is so much fun when I can recommend something.
So I've been through, I don't know, since assembly,
probably through almost every programming language,
including Haskell and Erlang and some wacky languages.
And at Spectral, we use Rust,
which on the fuzzing, unsafe languages part of things
is supposed to be safe.
And as far as I can tell, it is.
And there hasn't been a morning where I
didn't wake up and look at the code and say, this is an amazing programming language. It gives me
the Holy Grail. So just to encourage people to just try out Rust. It's an amazing language that is, you know, for me, it's as performant as C++ and as expressive
with none of the disadvantages. It's amazing. And back to your question. So yeah, so people do
tend to do these mistakes. And I mean, it's actually a reflection of the situation right now, which is there's so many tools, so many technologies, so much is being asked from a developer these days that there's so much opportunity to make a mistake. And making a mistake is, yeah, let me give you an example.
It's using an IDE that you trust. Let's use a concrete example. So you have IntelliJ. And IntelliJ likes to save some settings in a folder called .idea.
And as far as I can tell and anyone else can tell,
this folder is clean.
It's safe.
So if you like, you can actually push it to your GitHub repo,
which is public and everything is great because you get a guarantee.
The common-sense thing to do from IntelliJ's point of view is, yeah, to make sure it's clean and let you publish your settings, so that if someone else takes the project, then they will get a nicely configured environment in their own IntelliJ editor. So that's
the understanding that almost every developer has.
But here's the twist.
There's also plugins.
And some developers would choose to install various plugins, unofficial ones.
And I've seen in my own eyes a plugin that is actually doing something more than a search,
like some hyper-search something.
And basically, I've seen a project
where a developer pushed their own private settings,
and in there, they had this plugin installed.
And what this plugin did is break the invariant
that I guess IntelliJ has, that this folder has to be clean.
So basically the plugin saved and cached the search terms of the developers inside this folder.
Oh no. And it's a little bit funny because you could see how the developer cleaned up the project before publishing it.
So you would see the searches right there in a public repo.
You would see Amazon, and then AWS, and then EC2.
And then you'll see the actual Amazon key and secret.
So basically, what I've watched is the entire history of the developer trying to clean up the project before publishing it.
And they searched for the actual secrets in order to remove them from the code.
But the plugin...
Oh, they were trying.
Exactly.
The plugin recorded everything and dropped it inside this .idea folder, which is supposed to be clean.
That is the understanding that every developer has.
So they pushed the project, published it.
And basically, because it's a dotfile, and I guess they were also all on Windows, and by default on Windows you don't see hidden files or whatever.
Yeah, they completely didn't know about it. It was sitting there for at least six months, until we reported it to the company that we saw this exposed in. And yeah, the story that we got after that was basically, you know, they had miners and people breaking into the cloud and whatever, so it was a mess for them. So this is just one small example: there's so much freedom, so much technology, so much to use and to experiment with, but you don't always get, you know, the best offering. Basically, this one was down to adopting some plugin which looks cool. But I guess the developer of the plugin didn't have security first, or didn't really understand or realize that there's this kind of agreement that this folder needs to be clean.
Yeah.
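One cheap guardrail against this class of accident, sketched in Python: before publishing a repo, check that IDE state and environment files are listed in `.gitignore` at all. The default `required` entries are assumptions for illustration:

```python
def missing_ignores(gitignore_text, required=(".idea/", ".env", "*.pem")):
    """Return the entries from `required` that the .gitignore text does
    not list, so a pre-push hook can warn about them."""
    present = set()
    for line in gitignore_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            present.add(line)
    return [entry for entry in required if entry not in present]
```

This wouldn't have caught the plugin's broken invariant by itself, but ignoring the whole `.idea` folder removes it from the blast radius entirely.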
Well, I guess that serves as a pretty good transition
as we sort of wrap up here for you to talk a little bit
about what Spectral is and sort of what it does.
Yeah, so Spectral is a tool for developers.
It's a security scanner.
And it tries to do the kind of things that we talked about today, which is follow the habits of a developer
and be the best buddy and make sure that everyone
can use it for themselves and make sure they scan their work and scan their computer and make sure
they don't make mistakes. And we say mistakes because that is what it is. You can say vulnerability, that's a different term, but basically at Spectral, we're 50% coming from cyber and 50% pure engineers.
So we like to keep this balance of productivity and security.
So we just call it mistakes.
I think that is what it is.
And a mistake is something that was made on good intention.
And basically a whole class of mistakes is, you know, using and misplacing sensitive data: secrets, passwords, credentials, and so on. And, you know, placing different files somewhere you don't want them to be. We as a company are trying to understand the habits and the workflows of a developer and make sure there's always a great solution for that.
So I think this week or next week, we're going to release an open source product.
We're going to open source it.
And what it does,
I don't want to jump the gun,
but what it does is acknowledge that the whole usage of vaults or dotfiles is not perfect. So it acknowledges that and gives you instead a nice tool, which is open source, and you can build it and, you know, do whatever you want with it, which gives you a clean way to actually handle secrets: grab secrets from a vault, or grab them from your disk or from any other service, and leave zero footprint on your hard drive.
So basically everything is in memory.
So this is something that we identify
that developers are struggling with.
So yeah, so this is just another example
of something we do
and we will launch next week.
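The "zero footprint" idea can be sketched like this in Python: fetch secrets at runtime and pass them to a child process only through its environment, so nothing lands on disk. This is an illustration of the concept, not the actual tool's API; `fetch_secrets` stands in for whatever vault or service call you would use:

```python
import os
import subprocess

def run_with_secrets(cmd, fetch_secrets):
    """Fetch secrets and hand them to a child process via its environment.
    The secrets live only in process memory, never on the hard drive."""
    env = dict(os.environ)
    env.update(fetch_secrets())  # e.g. a call to a vault API (assumption)
    return subprocess.run(cmd, env=env)
```

The parent process's own environment is left untouched, so the secret exists only for the lifetime of the child.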
Oh, nice.
I'll have to keep an eye out for that.
And then what about Spectral as a company?
So you said you're a 50% sort of cyber,
50% engineers.
Are you guys looking for interns?
Are you hiring?
What is it like to work at Spectral?
Oh, yeah.
So first of all, we're hiring.
Basically, we have kind of positions for Rust engineers and Go engineers and Node.js.
And I guess you could say we are always hiring.
I mean, we're always willing to meet great people.
And in terms of the company,
it's, you know, the COVID thing
kind of caused us to be fully remote.
But luckily enough, in terms of the experience in the company, some of us come from distributed companies by definition.
So we were able to create a really great infrastructure for being a nice remote first or distributed
company.
And yeah, so we're being super productive, making sure we have all of the advantages of distributed
and zero of the disadvantages.
And yeah.
Nice.
Nice.
So do you guys think you'll,
I mean, you don't have to answer,
but do you think you will stay virtual
or you guys think you'll go back to a somewhat normal thing
when this is all over?
So yeah.
So we basically try to always think about the balance
and think about the people first.
So, you know, first of all, it's finding the right tools. So is Slack the right tool? Is email the right tool, right?
So what we found is we use Discord.
And that was on a simple thesis: if gamers really like that platform, then coders probably would too. And we found that it actually proved itself,
because when you share your screen
and share multiple screens
and people can watch everyone else's screens
and what I mean is editors, right?
Writing code and not games. So you have zero
latency and you have really great audio and everything just works. So you realize that this
whole thing was optimized for sharing games and it's more than enough for sharing code and doing,
you know, pair programming, all these things. And you also have rooms, which we actually made a few rooms, just a few, not too many,
just like in a real office.
So we have rooms that everyone wants, but are taken, right?
The same frustrations.
So basically, our Discord resembles a real office.
You can just drop in a room, drop into a conversation,
and it's really, really fun.
So that is one thing we realized and experimented with
and really works well for us.
But yeah, we're trying to create a balance.
And also, you know, when possible, meet physically.
We're always thinking about this as a problem.
How do we solve it?
Kind of like engineers, like an engineering problem.
Nice.
This is a great heuristic that if gamers like it, engineers might too.
Yeah, I think that's a great takeaway.
Yeah, it's great.
And then what about you personally?
Are you on social media?
Do you have any, I think you might be doing some other stuff.
Anything you want to share with people?
Oh yeah.
So I'm on GitHub, Medium and Twitter.
I'm jondot, J-O-N-D-O-T.
And yeah, I kind of try to be on everything at the same time; it doesn't work.
So I'm dividing my energy.
So you'll see me active, you know,
periods of times on GitHub
and periods of times on Twitter.
So I just load balance.
I can't do everything in parallel.
So yeah.
All right. Very cool.
Well, thank you for your time, Dotan.
We really appreciate it.
That was a really awesome
high-level overview of security.
I had a good time. Thank you for coming on to the show.
Perfect. Me too.
Music by Eric Barndollar.
Programming Throwdown is distributed under a Creative Commons Attribution-ShareAlike 2.0 license. You're free to share, copy, distribute, and transmit the work, to remix and adapt the work, but you must provide an attribution to Patrick and I, and share alike in kind.