CyberWire Daily - Black Hat 2017 - Research and Investment [Special Edition]

Episode Date: August 1, 2017

Black Hat 2017 has wrapped up, and by all accounts it was another successful conference, with an active trade show floor, exciting keynotes, and engaging, informative educational sessions on a variety of topics. There was business being done, with hopeful entrepreneurs and investors alike looking to identify the next big thing in cybersecurity. In this CyberWire special edition, we’ve rounded up a handful of presenters and one investor for a taste of Black Hat, to help give you a sense of the event.

Patrick Wardle is Chief Security Researcher at Synack and creator of Objective-See, an online site where he publishes the personal tools he’s created to help protect macOS computers. He’ll be telling us about his research on the FruitFly malware recently discovered on macOS. https://objective-see.com/

Hyrum Anderson is technical director of data science at Endgame. He will discuss research he released on stage at Black Hat showing the pros and cons of using machine learning from both a defender and attacker perspective. https://www.endgame.com/our-experts/hyrum-anderson

Zack Allen, Manager of Threat Operations, and Chaim Sanders, Security Lead, of ZeroFOX will be speaking about their Black Hat presentation on finding regressions in web application firewall (WAF) deployments. https://www.linkedin.com/in/zack-allen-12749a76 https://www.linkedin.com/in/chaim-sanders-a7a23713/

And we’ll wrap it up with some insights from Alberto Yepez, founder and managing director of Trident Cybersecurity, on the investment environment and the changes he’s seen in the market in the last year. https://www.linkedin.com/in/albertoyepez/

Transcript
You're listening to the CyberWire Network, powered by N2K.
Black Hat 2017 is wrapped up. And by all accounts, it was another successful conference with an active trade show floor, exciting keynotes and engaging, informative educational sessions on a variety of topics. There was business being done with hopeful entrepreneurs and investors alike looking to identify the next big thing in cybersecurity. Patrick Wardle is chief security researcher at Synack, and he's also the creator of Objective-See, an online site where he publishes the personal tools he's created to help protect macOS computers. He'll be telling us about his research on the FruitFly malware recently discovered on
Starting point is 00:01:53 macOS. Hiram Anderson is the technical director of data science at Endgame. He'll discuss research he released on stage at Black Hat, showing the pros and cons of using machine learning from both a defender and attacker perspective. Zach Allen is manager of threat operations, and Chaim Sanders is a security lead at ZeroFox. They'll tell us about their Black Hat presentation on finding regressions in web application firewall deployments. And we'll wrap it up with some insights from Alberto Yepes, founder and managing director of Trident Cybersecurity, on the investment environment and the changes he's seen in the market in the last year. Stay with us.
Starting point is 00:02:44 So, Fruitfly was discovered originally in February of this year, actually the first Mac malware of 2017. That's Patrick Wardle. Discovered by Malwarebytes. A few weeks after that initial discovery, a friend of mine gave me a hash of a variant, a new variant, variant B, that I took a closer look at that looked like it came out around the same time frame or was discovered again in January or February of this year. So give us an overview. How does it work?
Starting point is 00:03:12 Yeah, so FruitFly targets Mac users. So it's a Mac backdoor, essentially. It's a fairly feature-complete backdoor, providing a remote attacker the ability to fully control an infected computer. So standard things like file upload, process, execute, running shell commands, numerating processes, but also has some interesting capabilities. For example, it can interact with the mouse and keyboard. And the initial variant also had the ability to turn on the webcam. So it looks like the main goal of the malware was unfortunately to spy on infected victims. And is there any notion of who's being targeted? So that was interesting. So one of the
Starting point is 00:03:51 cooler aspects, I think, of my analysis was I was able to decrypt some of the backup command and control server addresses, and these were available for registration. So as part of my analysis, I had built a custom command and control server so that I could task the malware in the lab and basically have it show me what it was able to do. So the end result of that analysis was I had this custom command and control server that could fully interact and talk to the malware. So anyways, I registered these backup domains and put up my custom command and control server and immediately hundreds of infected victims connected. Now, I didn't task any of those victims, but when the malware connects, it sends a host and username.
Starting point is 00:04:30 And also, obviously, I have its IP. So with those three pieces of information, you can readily identify victims' full names and where they're roughly geographically located. And then you can hop on LinkedIn or Google and get a pretty good sense of who these people are. So in this piece of malware, this scenario, we actually were able to pretty readily identify the victims. And unfortunately, it looks like it's just everyday kind of people, families, you know, individuals, most in the U.S. and with certain interesting geographic clustering. It looked like, for example, Ohio had about 20% of the victims. So that's kind of interesting in a way. Any idea what the infection vector is? No, and that's a great question. I'm pretty sure certain people know. I handed over my
Starting point is 00:05:16 research and information to law enforcement, and I know Apple has been looking at this as well. If we look at how traditionally Macs get infected, it's usually through some sort of user interaction. So an email with a malicious attachment, perhaps a trojanized or pirated application, or maybe even an infected website that has a fake security pop-up. But in this case, we actually didn't see an installer. So maybe there's a different infection vector.
Starting point is 00:05:42 That having been said, the malware, while it's feature complete, isn't incredibly sophisticated. So, you know, I would be surprised if it's using some really advanced exploit or some infection technique that perhaps doesn't use user interaction. But hopefully we'll have an answer to that in the not too distant future. So would a standard antivirus detect it? So this is one of the issues. So looking at this malware, there's some forensic clues and also some other interesting information, which unfortunately is in public at this time, that seems to indicate that this malware has been around perhaps five or even longer years, which is a rather long time. So it's possible that this malware hasn't been
Starting point is 00:06:23 detected for almost half a decade or more. And when it was originally discovered by Malwarebytes, and when I started looking at the variant B, neither of those samples were detected by any of the antivirus engines on VirusTotal. So my guess was this piece of malware kind of flew under the radar, and it being a custom code, a custom piece of malware, it didn't have any detections for perhaps an incredibly long time. So that's a little worrisome and I think kind of illustrates the fact that perhaps, let's say, antivirus on Mac has a rather long way still to go.
Starting point is 00:06:56 You have some security tools on your own website. Would they have had any chance of detecting this? Yeah, that's a great question. And the reason I designed security tools is exactly for a scenario like this. So most of my tools, the way I design, they look for malicious activities versus signatures of known malware. So it's likely that my tools would have detected this. For example, when I ran them against the sample, a lot of the activities of the malicious code would trigger. So, for example, one tool,
Starting point is 00:07:26 Knock Knock, will show you installed items. This piece of malware installs itself as a launch agent, so you would see an unsigned launch agent that probably you wouldn't recognize. Oversight, which monitors the webcam and the microphone, would have likely popped up when the malware turned on on the webcam. And it was interesting because the malware had the ability to alert the attacker when the user was not active. And this was probably by design as a way to spy on users without them noticing because the webcam was turned on by the malware, the LED indicator light would go on. And if you're sitting at your computer and all of a sudden the LED indicator light goes on,
Starting point is 00:08:02 throw that computer out the window, right? you're sitting at your computer and all of a sudden the LED indicator light goes on, like throw that computer out the window, right? So the attacker probably realized this and therefore, you know, built some capabilities into his malware so he could perhaps only turn on the webcam when the user was not there with the hope from the attacker's point of view that maybe he would capture, you know, the victim, you know, walking around their bedroom in their underwear or, you know, worse, less. So, you know, a tool like Oversight underwear or, you know, worse, less. So, you know, a tool like Oversight, which can alert you of this webcam activity, I think is incredibly
Starting point is 00:08:30 powerful. So, you know, I think it's wise for users to look perhaps into third party security tools, especially free ones that are able to detect malicious activities versus, you know, looking for just static signatures, because I'm sure there's other similar threats out there and traditional antivirus products may not be detecting them. Yeah. I'm curious about, you know, how you registered the command and control server domains and started getting information. A couple of things come to mind about that. Did the malware, beyond sort of checking in with you, did it start trying to send you information, sending you pictures, sending you keylogged files, that sort of thing? That's a good question.
Starting point is 00:09:10 So first, the reason I was able to grab the backup domains is because they were available for registration. And the reason the malware then connected to them was because the primary command and control servers were offline. Probably when Malwarebytes discovered the initial infection. I'm not sure if they worked with an ISP to shut it down. Anyways, end result, the primary command and control server was offline. So all the malware was trying to speak to the backup ones. So when I registered it, what the malware does, it just checks in and then asks for tasking.
Starting point is 00:09:41 So the only thing it sends is the version number of the malware, username, and then the full name of the computer, which is often the user's full name. You know, I'm not going to lie, I was very tempted to task the malware, but, you know, that's a very gray area. And it's actually funny when I handed over my information to law enforcement, that was the first question that he asked me. So I didn't interact with the victims, and the malware won't send out any sensitive information until it has been tasked. So FruitFly is interesting because it kind of has the capabilities that match what a nation state piece of malware would have. But the victims are the ones that are normally targeted by cybercrime malware. But it had none of the features that cybercrime malware traditionally has. malware, but it had none of the features that cybercrime malware traditionally has. So that's why I'm fairly confident that its goal was just to spy on everyday users.
Starting point is 00:10:30 And there's probably just maybe an individual behind this insidious malware who seems to be rather perverse. But we are also starting to see Mac ransomware. We've had a few samples this year. And that's something that, unfortunately unfortunately is probably going to continue a trend because it's such a financially incredible opportunity, essentially for hackers. And if Mac users are falling prey
Starting point is 00:10:54 to these kinds of social engineering attacks to install malware, and hackers are going to continue to target them. So I think there'll be more information about this coming out in the next few months. And I'm optimistic that hopefully we'll have some closure about who did it and perhaps their motives and answer some of the questions that remain open at this time. So I come from a machine learning background. I have
Starting point is 00:11:17 a PhD in machine learning and I love it. So what I'm about to say should not at all diminish, I think, the role of machine learning. That's Hiram Anderson from Endgame. Machine learning has blind spots, has weaknesses. If an adversary has access to your machine learning model, in some cases, those weaknesses can be very convenient to exploit. to exploit. So we came to this research kind of with an aim to help harden and improve our machine learning models before motivated and sophisticated adversaries do it for us, right? So that's kind of the framework within this. Machine learning has blind spots. We like to find them first and use knowledge about, you know, we're red teaming our own machine learning models
Starting point is 00:12:02 in order to patch them and provide superior protection for our customers. So take me through some of the details of that. When you say machine learning has some blind spots, is that inherent to all machine learning? Is that just the way it works or is it the way specific systems may be set up? No, in fact, only the most trivial problems don't have blind spots. I think it'd be fair to say that in all applications, all machine learning models have blind spots. A famous example of this is actually not in security, but in images where a classifier is shown a picture of a bus. It knows that it's a bus with 90% confidence. But then I can actually ask the model, what small pixel intensity modifications
Starting point is 00:12:48 can I make to most confuse you? I can ask directly the model that question, and it will respond and tell me what pixels to change. This new image that looks to your eye exactly like the previous image, now the machine learning model thinks it's an ostrich with 90% confidence. And that is just something that machine learning models have in common. They are imperfect representations of the world, but they're useful for things like detecting images and also detecting malware. And so to sort of extend what you were talking about there with the imaging, can you ask the systems that are being used for malware, where are your blind spots? Indirectly. The research that I'm presenting at Black Hat is tackling a very ambitious problem
Starting point is 00:13:31 and frankly is not nearly as successful in finding the blind spots as, you know, if I have sort of a source code to your machine learning model. The framework for information security is hard for a number of reasons. Number one is that if I change a pixel in an image, that image is still an image. But if I change a byte in a Windows executable file, there's a chance that that breaks both the format of the file or it breaks the functionality of the malware. So those modifications are not so simply done in, you know, especially machine learning for malware detection. The second point is that often an adversary doesn't have the source code.
Starting point is 00:14:15 You know, it doesn't know specifically your model or it might not be one of those models you can ask directly. So in our setup, we are taking the most general approach. There's a black box. You can throw arbitrarily a sample at it and get an answer, malicious or benign. That's it. Then we pit an artificial intelligent agent, reinforcement learning agent, against that black box to play a competitive game where the agent tries to learn to discover what small modifications it can make that preserve the PE file format and preserve the malware's functionality, but still bypass the machine learning model.
Starting point is 00:14:56 And so which of those two AIs has the harder job? Oh, by far, in our setup, by far, it is the job of the attacker that's hard. The attacker has almost no knowledge about what it's attacking, and its success rates are very, very slim. So in the image case, those kind of attacks where the attacker knows everything and its images, sort of easy manipulations, I can bypass those models 90, 95% of the time or more. In information security, in this most general setting for malware, where I know nothing about the model, the bypass rate is more like 5% to 10%. And it's a very hard problem for the agent to learn. But in security, 5% is kind of a big deal. Yeah, yeah, it absolutely is. And so as the defensive AI is having, is sort of being hammered, being pounded against by the attacking AI, how is the defensive AI
Starting point is 00:15:55 doing? Is it adapting as well? So during the attack, that AI is not adapting. But what we do to harden our models is that we play out this series of games where the attacker becomes, you know, relatively good at this job, where relatively is, you know, trying to get five, 10%. When the game has played out, this artificially intelligent agent,
Starting point is 00:16:18 this reinforcement learning agent can actually take a malware sample and know what modifications to make to it to bypass the defense, right? Let's freeze now. The game's over. I'm going to use this agent to generate a bunch of malware samples that are going to bypass it. And then I'm going to fold that experience, those new malware samples, back into the defense. And he's going to learn how to patch his own holes. And then when you play the game again, the defense becomes much stronger. And so then at that point, is it just sort of an iterative process where you just go round and round and round until you've
Starting point is 00:16:57 got a really strong defensive system there? That is the hope and that's the approach. From a practical point of view, are you seeing many attackers actually using artificial intelligence and machine learning? You know, I'm not a threat until guy. I don't think that from what I have seen that attackers are using this sophistication level, but certainly they are going to know about it. Those that are especially sophisticated and motivated are going to know that this is possible. And part of the point of our research is to get ahead of the game. We're going to be releasing code that will allow friends and competitors alike to leverage this game play to strengthen and harden their
Starting point is 00:17:37 own machine learning models based on these attacks. So take me through what are some of the key takeaways from this research? I guess the number one is that machine learning is a useful tool for generalizing to never before seen malware. In our games, predominantly the defense wins here. But as I said before, 5% is a big deal and we like to patch those. So number two is that machine learning has those weaknesses. It has blind spots. It can hallucinate.
Starting point is 00:18:06 And we would like to provide a consistent and realistic method for finding those blind spots. And number three, we'd like to open this up to the security community to help us improve and all boats rise with the tide. So we'd like to release that and have researchers and collaborators help us to strengthen our machine learning defenses. We originally did not work with each other. That's Zach Allen. He and Chaim Sanders are both from Xerofox. I was at a company before this, and this company was deploying and building a web application firewall.
Starting point is 00:18:43 And Chaim, being the kind of engineer that he is, he reached out to us and asked us how we were doing because he works on web application firewalls a lot. And he asked us, you know, how are you verifying, how secure it is, how are you testing it, and things like that. And I pretty much said, that's a really, really good question.
Starting point is 00:19:02 So because Hayim works on mod security as one of the core developers, we started talking about a way to quantitatively measure how effective a web application firewall could be. So after a couple weeks of discussion, we just kind of met up and put our heads together and said, you know, I think this is something that everybody needs, especially the plight of security engineers
Starting point is 00:19:24 when they go to places like Black Hat. They're presented with sales material. They're presented with, you know, Forrester waves and Gartner magic quadrants. And they look great for managers. But when a security engineer then goes and gets the handbook on how this thing runs, they realize it kind of sucks. So what we wanted to do is level the playing field and give people a chance to measure the effectiveness in terms of logging, in terms of stopping attacks, in terms of configuration against any type of web application firewall. And that's where Framework for Testing Labs or FTW came to be. And so Chaim, what's your side of this story? How did this collaboration come to be from your point of view? Yeah, it was pretty interesting.
Starting point is 00:20:16 I was toying around with this idea of how I can effectively test mod security in conjunction with the core rule set, which are both two open source projects. And I was talking with Zach, and he essentially said, oh, well, we have the same problem, but maybe we can do it in such a way that it can scale more effectively. It can be more helpful for everyone involved. And I said, well, golly, Zach, that would be swell. And from there, we kind of just ran with it. We talked about design and architecture, how we would need to make it in order to scale up from just a single test platform to one that can test many different platforms effectively. Now, at the time, you all were working for separate companies. Was there any pushback from the higher-ups on this sort of collaboration? Surprisingly, there was very limited pushback on my side. I was working primarily on open source projects at the time,
Starting point is 00:21:03 and I think, Zach, you had an environment where they kind of fostered open source a little bit at the time. Fastly was really good when it came to open source work, and they still are. And when you have a chance to contribute to the greater security community, they also fostered that. So it was a win for Fastly because we got to get regression testing from a security standpoint on our WEF, but it was also good for the community. And it's just one of those weird timing and luck things where everyone benefited and no one really got upset, which is kind of weird in this day and age. So help us understand here, what are we talking about when we're talking about regression testing? Yeah, that's a great question.
Starting point is 00:21:42 when we're talking about regression testing? Yeah, that's a great question. Essentially the idea is that we need to take some sort of baseline at minimum of attacks that are well known and well understood and determine whether or not an application or a web application firewall will ever be subject to allowing one of those attacks to bypass its protections that it offers.
Starting point is 00:22:03 So to put this another way, we want to make sure that once we install a protection in a web application firewall, that it's always working. Now, when you deploy this in an environment that's really helpful, but it's also really helpful when you're the developer of the web application firewall. So it kind of has a two-pronged benefit. One way you can make sure that new rules
Starting point is 00:22:24 and new additions to your firewall don't break things. And one way you can make sure that new rules and new additions to your firewall don't break things. And one way you can make sure that it's actually working exactly how you expected it to. Yeah. A good analogy for this is, let's just say you're adding something new to your car. You want to make sure it still drives after you put on new tires or you replace the spark plug after that. And especially when it comes to software, which is definitely more complicated than most cars, even bolting on something as small as a new spark plug can just make this piece of software fall apart, so to speak. So regression testing, just make sure that anything when it comes to a feature or a new attack or a new rule is added, that the car is still moving forward. It turns left when I turn the steering wheel left or anything along those lines. I see. So, sort of like doctors say, first do no harm. It gives you a way to make sure that you haven't inadvertently messed up the firewall's baseline functionality.
Starting point is 00:23:17 Exactly. And one of the benefits of doing that is once you've established that there's some baseline functionality present, you can then start comparing that to other web application firewalls and determining whether or not that baseline exists still. So it may not be able to test every single feature or it may not cover that currently, but it covers the core base web application attacks that we see day in and day out, attacks on HTTP,
Starting point is 00:23:41 SQL injection, cross-site scripting. And we're always adding new tests to kind of make sure that these are up to date and thorough. So currently we have thousands of tests that we run. And so, you know, we're in a constant arms race with the bad guys. Can they get a hold of this test and then, you know, just figure out ways to get around it? Absolutely. In fact, that's encouraged because in our industry, there's kind of this notion that unless this is publicly disclosed and people are publicly kind of brought to the stake about a situation, that they're not going to fix it. And one of the main goals here is to make sure that everyone at minimum has this baseline for web application firewalls.
Starting point is 00:24:21 You can sure add to it. In fact, we encourage you to. for web application firewalls. You can sure add to it. In fact, we encourage you to, but we assume that developers of web application firewalls will want to know when there's a bypass. They won't want to hide that. They'll want to fix it as quickly as possible. Raising the cost of the bottom line
Starting point is 00:24:35 does surprisingly well when it comes to cyber defense. So if we can raise that cost and get any web application firewall to at least adhere to this baseline, then I think everyone benefits. It's surprising to me that no one has done this before. Was this a novel effort, or had there been other attempts at this that maybe hadn't gained traction? Well, so we don't necessarily know of attempts that are generalized.
Starting point is 00:24:58 There are certainly attempts from each individual producer, and most of them are behind the scenes, I would assume, to test and provide some sort of functionality, regression capability on a given web application firewall. But as far as a large-scale effort that can compare multiple different vendors, this is pretty difficult to do, and we were actually in an interesting spot to do it.
Starting point is 00:25:23 Vendors inherently have some sort of bias towards themselves, and kind of actually in an interesting spot to do it, vendors inherently have some sort of bias towards themselves. And kind of as an open source project, we don't necessarily have that initial bias that might exist in that respect. And what's the feedback been from the web application firewall vendors? So at least from Fastly's sake, it's used within their pipeline every day to test for aggressions. The tool's actually being presented at OWASPly's sake, it's used within their pipeline every day to test for aggressions. The tool's actually being presented at OWASP AppSec US this year. And one portion of the talk is someone from Fastly who's still there talking about how it's in use. So that's been a pretty positive experience on their part. And I know we've actually, we're working on this last night.
Starting point is 00:26:04 Part of our presentation for Black Hat Arsenal is how this is now being integrated fully into the mod security development pipeline. And how we can strategize about ways people can make a change to mod security or the core rule set. But they would also have to submit a corresponding FTW test before the merge button is pressed. In terms of other vendors, we're still working to get some traction. There are some OWASP-type groups that have been kind of key in doing qualitative assessments in the past, and they're still working on kind of pushing quantitative aspects of their evaluation methodologies. And as a result, we're going to try and piggyback
Starting point is 00:26:43 on their work as they have close ties with many more vendors than we do. But in our initial tests, it's looking pretty good. So if someone wants to find out more, if they want to perhaps contribute, what's the best way for them to find out more about the project? Sure. So if you just go on GitHub and type in framework for testing WAFs, the repo should be up there. The organization is CRS-support. So github.com slash CRS-support slash FTW.
Starting point is 00:27:22 They can also find it through Python's PyPy repository, pip install FTW, and they can go and get documentation on that. In addition to that, we'll be having a couple blog posts come out on the Core Ruleset blog discussing how to write tests and how to implement them within the OWASP Core Ruleset project. And I think that should kind of lead to a little bit more understanding and traction beyond just the docs. Well, it's exciting times in cybersecurity, as you know. We continue to see all these breaches and compromises of business data around the world and governments. The investment area is very, very active right now. That's Alberto Yepez from Trident Cybersecurity. Our executive editor, Peter Kilby, caught up with him right off the show floor. We are tracking about 2,500 companies of all sorts of stages, about 450 from Israel alone.
Starting point is 00:28:25 But the biggest trends that we see is, you know, we all talk about the shortage of security professionals and analysts. We talk about the cost of integration as being something that the customers have to bear, and there's way too many point solutions. So we're focusing and looking for solutions that are really pre-integrated, easy to consume, easy to deploy, preferably delivered via the cloud or through MSSP, through a managed security service provider, because the middle market becomes a tremendous opportunity for innovation because while they need the same defenses and the same type of sophisticated tools
Starting point is 00:28:59 the large banks and critical infrastructure companies have, they have the same needs and they actually, because of regulations, have the same reporting requirements. So in the trends, we see a move to cloud-based, native-based security companies and entrepreneurs, but also thinking more of not just the tip of the iceberg of that large customer with thousands of cybersecurity professionals, but I'm a retail customer with maybe 10, 15 people in IT and one of them has a part-time responsibility for for security so I think that's so how do we automate we bring automation in simplification so that the shortage of professionals can be not only achieved by training but more, by bringing automated tools
Starting point is 00:29:46 and tools that can help corporations understand the threat and eventually the defense. Do you see a lot of startups and other companies aiming toward that bigger market versus the middle market? You know, traditionally, if you see the new innovations in the market, it's always focused on the large companies. Why? Because they're early adopters, they're willing to work with you, help you with the feature set, but eventually when it gets to a point where there's a mature solution, you want to bring it down market. And so, yes, we see that the majority of early companies try to focus on that market. And as you bring maturity, talk about SIEM, bring analytics and all that,
Starting point is 00:30:27 I think some of the maturity goes to address middle market solutions. A good example is AlienVault, we were just talking a little while ago, where it's a company that is really seeking to provide visibility for the average person that is probably the only person responsible for security in the small to medium business, but it gives them all the tools that once they identify something, what do I do? There's a prescription, there's a way to share intelligence straight to the open trade exchange and things like that.
Starting point is 00:30:59 So I think those type of companies are beginning to innovate more on how do I bring in automation for the middle market, this small business. But most companies, like in Deception, in IoT security, really begin addressing the large companies, GE, all the large banks, first, and then eventually bring in that market. We hear a lot about artificial intelligence and machine learning in various contexts. How do you see that playing into your investment strategies or the, you know, kinds of trends that you're seeing? I would say all our companies have a component of machine learning and AI. You know, this is something that's been around for 20, 30 years, right? Now I guess it's more possible because compute power,
Starting point is 00:31:39 storage, cloud, you know, network speed. Now you're able to aggregate and really process a lot of data and try to bring algorithms that are either learned by a machine or really using some predefined, predetermined mutations of viruses that you could actually predict. So I would say without exception, every single one of our portfolio companies are using a component of big data analysis infusion and applying different degrees of automation with the help of machine learning or artificial intelligence.
Starting point is 00:32:17 Last time we spoke, it was this time last year really, you talked about the investors becoming a little bit more finicky in terms of the companies that they go after the opportunities that they want to fund is that still the case how is the investing changing it's become harder because you know maybe last year we're probably maybe 1,500 companies were tracking out there 2,500 so every company sounds the same and so he's trying to tease the signal from the noise, trying to understand who has the core IP intellectual property that could actually be differentiated becomes harder and harder. So I would say that this venture investment has become a specialized sport. What do I mean by that? In the past, you have very large funds that were generalist funds
Starting point is 00:33:06 that would invest in app tech, clean tech, cybersecurity, enterprise technology. So we are probably one of the first funds that has become an exclusively investors in cybersecurity. In January, we closed a $300 million fund, one of the largest funds in the industry, that exclusively invests in cybersecurity companies. By that, so now we're, I guess, we continue to be looking for a high barrier of entry where whatever problem is being solved is something that is grounded by key requirements. Innovation in cybersecurity comes primarily by trying to solve people's problems. It has to be grounded. So very little innovation has happened in the lab in cybersecurity comes primarily by trying to solve people's problems. It has to be grounded. So very little innovation has happened in the lab in cybersecurity,
Starting point is 00:33:48 arguably PKI or some things that happened in the early days. But having grounded requirements, working with the chief information security officers, they can tell you what problems they're trying to solve, and there's no commercial solution. So when we become investors, we try to get very keenly acute of the problems that companies are trying to resolve. We have a large group of advisors, and many of them chief information security officers and serial entrepreneurs.
Starting point is 00:34:16 They help us through defining whether this company has the right intellectual property in addressing a real problem. In the next issue, if you have the large market opportunity, the intellectual property, the thing is how do you scale? How to bring the growth market? That's when we come in. So once we see the large market opportunity and the high IP, we invest, hopefully we reference customers, bring you partners, bring you executives, bring your board members,
Starting point is 00:34:39 channel partners, and eventually help you scale to you know to address the problems of many and so yes i think the problem has become more acute because the noise is higher there's more companies but the fundamentals haven't changed by and large what we're seeing is an industry that is maturing in certain aspects and bringing automation but all the new innovation is coming about and bringing automation, but all the new innovation is coming about the threat vector. The criminals, the adversaries, are not staying still. They're looking for vulnerabilities and really trying to exploit them. New platforms are emerging, as we know, and now we have the connected car. Everybody talks about containers, cloud-native applications,
Starting point is 00:35:24 so the surface continues to increase. And oftentimes, we get attacked with legacy. And so we cannot forget about legacy. But nonetheless, also, we need to figure out how we cover the new thread vectors. So I think as an industry, we'll continue to thrive. I don't think we will see a solution in our lifetimes. It's something that will continue to evolve. But it's very exciting, thriving. You look at Black Hat this week, I think it's one of the largest crowds that we've seen
Starting point is 00:35:51 and continues to amaze me the level of international participation in corporations as well as very capable individuals that can show you how the trend is actually real and the companies need to pay attention to. Is there a lot of competition for the deals themselves from a company like yours? There is actually. It's interesting because the entrepreneurs have choice. So I've been a serial entrepreneur. I don't know when last time we talked I ran three different cyber companies and so money
Starting point is 00:36:23 is equal. When you go look at an investor, what value are they going to bring in, in addition to a fair offer to fund your company? I think what most entrepreneurs look for and what they're telling us is, I want somebody I can partner in the long term. The real investors and real partners show their true colors when the going goes tough because nothing goes on a straight line. And if you're an adept entrepreneur, you move with the market and you need to move fast. That's why you're a private company. And so the competition is pretty fierce. Most of the new funds come in and compete in price.
Starting point is 00:37:03 But the serial entrepreneurs understand the value that is brought in by the different groups that are, I would say, the leading investors in cybersecurity. There's a lot of competition, but I think you will stand out by just seeing how you can shorten time to value and de-risk the execution. Do you have any advice for those companies that are seeking investment? Obviously, you just gave a big piece of advice, you know, right there, look for somebody who can bring value, but what do they need to do to stand out to an investment company? How do they know they're ready for investment like yours? You know, we can, you know, imagine you're pitching me a deal,
Starting point is 00:37:43 we can get really excited, but if there's not a customer out there that tells you that valid is whatever we think it is, don't even come to talk to any investor. It has to be grounded on requirements. And you have to have at least a reference or a couple of reference customers that really have spent the time and help you refine the problem you're trying to solve, define the features you need to have, and hopefully test at least the prototype or the initial version that eventually when and if you have a generally available product, they can become your biggest champion, your biggest reference accounts.
Starting point is 00:38:20 When somebody else calls and say, this company, do what they say to do, it's absolutely so. The best time to come in and look for a man is when you have external validation, other than the entrepreneur team and people that can help you refine and define what segment of the market you're going to play in, why are you different than many of the others. That's what we really focus on is differentiation but validation. Is there anything that I haven't asked you that you'd like to share with our audience? Well, you know, I'm really happy with the success of CyberWire and I would love to encourage
Starting point is 00:38:58 all my portfolio companies to continue to work with you because you do a great service to the community by keeping people informed in very short snippets of time. I pick up Cyber Wire every morning and find out what's going on in the industry and encourage people to use it. That's great. Well, thank you very much. We appreciate you listening in and reading us every day. Thank you.
Starting point is 00:39:17 Thanks for talking to us. Okay, so that last bit may have been a bit self-serving, but who are we to disagree? Our thanks to Patrick Wardle, Hiram Anderson, Zach Allen, Chaim Sanders, and Alberto Yepez for taking the time to speak with us. The Cyber Wire podcast is produced by Pratt Street Media. Our editor is John Petrick. Social media editor is Jennifer Iben. Technical editor is Chris Russell. Executive editor is Peter Kilpie. And I'm Dave Bittner. Thanks for listening. Thank you. ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely.
Starting point is 00:40:33 Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
