CyberWire Daily - The power behind artificial intelligence. [Research Saturday]

Episode Date: July 1, 2023

Daniel dos Santos, Forescout's Head of Security Research, shares insights from a recent exercise his team conducted on AI-assisted attacks against OT and unmanaged devices. Using ChatGPT, Forescout's research team converted an existing OT exploit, originally developed in Python to run on Windows, into another programming language, demonstrating how easy it is to create an AI-assisted attack this way. The research states: "our goal was to convert an existing OT exploit developed in Python to run on Windows to the Go language using ChatGPT." This would allow it to run faster on Windows and run easily on a variety of embedded devices. The research can be found here: AI-Assisted Attacks Are Coming to OT and Unmanaged Devices – the Time to Prepare Is Now

Transcript
Starting point is 00:00:00 You're listening to the Cyber Wire Network, powered by N2K. Hello, everyone, and welcome to the CyberWire's Research Saturday. I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down the threats and vulnerabilities, solving some of the hard problems, and protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
Starting point is 00:01:53 So we were thinking about using generative AI, not just for something that can be used to kind of dupe users into executing malicious code, but how the attackers could use it to actually generate or convert some malicious code and some actual exploits to cause some real damage, more specifically in OT and unmanaged devices. That's Daniel Dos Santos, Head of Security Research at Forescout. The research we're discussing today is titled AI-Assisted Attacks Are Coming to OT and Unmanaged Devices. The time to prepare is now.
Starting point is 00:02:34 Well, let's walk through this together. I mean, you have this tool, and I think probably for most of our audience, it's a tool that at the very least they've played with. How do you approach this? Where do you begin? tool that at the very least they've played with. How do you approach this? Where do you begin? Yeah, so our idea is we took an exploit which we already had that we developed for a kind of a proof of concept that we had in the past for an exploit for operational technology, right, for a programming logic controller. And we wanted to convert that into a different language, right? So the original exploit
Starting point is 00:03:05 was written in Python as a short script. We wanted to use that in Go because we see a lot of malware being written in Go these days and because, you know, it allows for cross-compilation and to run in different architectures and things like that. So we just wanted to explore Go. The issue is that none of us kind of knew Go. So we wanted to explore from the eyes of a complete beginner, let's say, how you would use a, taking a part of a code that actually exists and translating that, converting that, embedding that into other malware, other malicious code to make it more useful, let's say, right? So what we did is we just kind of had a chat with ChatGPT, the specific generative AI tool that we used,
Starting point is 00:04:01 where we were asking the tool to generate parts of the new code that we wanted it to generate. And in the process, we kind of guided the tool where we needed some corrections or things like that. But the whole process took something like 15 minutes, and in the end, we had the working exploit as we intended. Well, tell me about the exploit that you were using in your experiment here. What sort of functionality did it have?
Starting point is 00:04:28 Yeah, so originally it was part of a larger attack which we developed where we wanted to show how adversaries can infiltrate networks from the IoT part of the network, so from IoT devices that are exposed like IP cameras and network-attached storage devices and things like that, then move laterally to IT, computers, servers, workstations. And then from there, move to OT networks and find vulnerable devices like PLCs and then exploit them.
Starting point is 00:04:57 And the exploit in this case is the denial of service, right? It's crashing the device so that you need to manually reboot it. Manually in the sense of like power cycling it, like a hard reboot. So what we did is we took this last component part, the part that crashes the PLC, and we wanted to have a very simple version of it, which was avoiding all the network scanning, avoiding all the other parts before and just really focus on the core exploit, which is crashing the PLC by sending a malicious packet. So we had the format of the malicious packet
Starting point is 00:05:30 and we told the tool, chat2pt, to generate the code around it that would create the connection, send it to the specific target, check if the target is alive and things like that. So help me understand here. The way you're describing it, this was not a case of, for example, loading in the Python code and saying, hey, chat GPT, convert this for me.
Starting point is 00:05:54 No, no, it was not exactly like that because we wanted to change it a little bit. And also because there are some safeguards in tools like chat GPT. If it identifies that the code is potentially malicious or if you're asking, convert this malware for me or something like that, the tool will actually say, I'm not supposed to generate malicious code
Starting point is 00:06:14 or things like that. So we basically just told the tool to take parts of what we wanted, like write a scanner in Go for this part of the network and things like that. And we were driving the parts of the code that we wanted. Of course, we had a part that was actually, some parts of it were like the equivalent of the Python code that we wanted it to translate, but some other parts were just saying, you know, fix this thing here or change this part of the code and things like that. So it was not just like, put the whole
Starting point is 00:06:52 code in and we get the whole code out, right? It was a little bit more of a conversation. You mentioned that ChatGPT has some safeguards built in. How did you approach this to avoid tripping some of those safeguards? Yeah, so in this case, it was really by not mentioning that things are malicious, right? Because in this case, it was difficult. It would be difficult for the tool to identify that that's an actual exploit. I mean, you're sending a packet over the network and the packet could be benign, could be malicious,
Starting point is 00:07:23 could be just checking if a device is alive. So the payload itself wasn't known to be malicious by the tool. So in this case, you can use it without tripping any of the safeguards, right? Because if you mention specific things like write a malware or create an exploit or let me test this vulnerability or things like that, that would become obvious to the tool that you're trying to create malicious code. But when you're just asking for code, right, code that can have a dual purpose, let's say, the tool won't notice that you're doing anything malicious. Thank you. rise by an 18% year-over-year increase in ransomware attacks and a $75 million record payout in 2024, these traditional security tools expand your attack surface with public-facing
Starting point is 00:08:32 IPs that are exploited by bad actors more easily than ever with AI tools. It's time to rethink your security. Zscaler Zero Trust plus AI stops attackers by hiding your attack surface, making apps and IPs invisible, eliminating lateral movement, connecting users only to specific apps, not the entire network, continuously verifying every request based on identity and context, simplifying security management with AI-powered automation, and detecting threats using AI to analyze over 500 billion daily transactions. Hackers can't attack what they can't see. Protect your organization with Zscaler Zero Trust and AI. Learn more at zscaler.com slash security. Now, you say that you and your colleagues were, by your own admission, not experts when it came to Go code. How did you evaluate the code that it spat back at you? Yeah, so that's a good question.
Starting point is 00:09:46 Because there were two, I mean, the test was actually running it on the network, right? But there are two parts to it. The first is making sure that the code compiles, like runs, so the code makes sense. And the second is that the code does what it's supposed to do, right? So the first part is you take the code that the tool spits out and you try to run it. And in some cases, there were issues like missing packages and things like that. And then we just asked the tool itself, like, you know, ChatupG, I'm getting this error line. What can you tell me?
Starting point is 00:10:14 How can I fix this? And then the tool would realize, oh yeah, I forgot to add one line, or maybe you forgot to install this specific package on our computer. So please go ahead and do this, this, this, and the following. It was really very kind of step-by-step instruction on how to run the actual code.
Starting point is 00:10:33 And the second part to test that it works, it was really when the code was running, compiling, let's say, we could actually just run it on the lab, the same lab where we tested the original Python exploit and we saw that things were working. Is there any sense when you look at the code that it generated of how sophisticated or elegant or efficient it is?
Starting point is 00:10:56 Yeah, so in this case, the code was somewhat simple, right? Like there was nothing too complex around it. There was another experiment we did, which is not published in this research per se, but it was looking at other use cases and applications that were more in the medical domain, like trying to write parsers for some specific protocols to get some data out and things like that. In that case, it was interesting to see,
Starting point is 00:11:25 because the code was a little bit more complex, right? It's a parser for some fields in a protocol. It was interesting to see that the tool struggled sometimes with things like regular expressions, but in other cases, it had some very good code in the sense that it follows standard... In that case, we were using Python, not Go. It follows the standard Python guidelines for formatting of the code or for the libraries that you would use and things like that. So it's kind of a mixed bag there in the sense that you can get very good code, you can get code that works pretty well, or you can get code that is somewhat in between and work some of the times, doesn't work all the times, right? So you do need,
Starting point is 00:12:12 it's not kind of get it and for sure it will be running. Sometimes you need to massage the code a little bit, but you can use oftentimes the tool itself to help you do that. It strikes me that it's potentially a huge time saver and also takes away that problem of staring at the blank page. Yeah, I think the time saving component is the most important one. It's like, again, we could have learned probably enough or looked at examples to write it ourselves. But if the tool is there to help you, and it's not just Go, right? Imagine we would want to do the same thing in Rust or in any other programming language
Starting point is 00:12:53 or, you know, this thing I mentioned before for the parses, writing regular expressions can be pretty annoying. If the tool can help you with that, it's a great tool, right? And that's what I mentioned right at the beginning of our conversation, that both the good guys and the bad guys are looking at the tool with the same kind of different motivations, let's say, or different purposes, but the same motivation, right? The motivation is to save time, to gain efficiency, to have more, let's say, return on investment on whatever they're doing. Of course, the purpose in some cases is to detect attacks
Starting point is 00:13:26 or to increase business efficiency or whatever good motivation or purpose that can be. But in other cases, it's to launch more attacks, to have attacks launched faster, to have attacks that might go deeper into a network because you have less time crafting specific exploits for specific environments or trying to understand the environment you're in because the tool can help you with that.
Starting point is 00:13:49 So based on what you all have gathered here and learned, what are your recommendations for folks out there who are tasked with defending their organizations? How does this inform that process? As of now, the attacks themselves, let's say the tactics, techniques, and procedures that the attackers are using are probably not changing super fast, right? The fact that they are using or they can be used in generative AI means that the attacks will come faster. They will come probably at a higher rate, a higher volume. So you can expect an increase in attacks. But that's something that we were already expecting, right? We are already expect an increase in attacks, but that's something that we were already expecting, right? We are already experiencing an increase in attacks. This is just, again,
Starting point is 00:14:30 probably exploding the number of attacks that we will foresee. So as of now, the nature of attacks is not changing too much. We do expect that maybe in the future, there can be new types of attacks, you know, mixing things like disinformation, mixing things like, again, the phishing campaigns that can become more sophisticated. years ago, where they trained a generative adversarial network, again, to create images of patients that had gone through CT scans, and they inserted fake tumors in those images, right? Which is very scary, and it's a terrible use of the technology, but it allows you to imagine what kind of potential outcome attackers can get with that.
Starting point is 00:15:27 So I would say for each specific domain, there's probably a different type of attack that in the future can happen that we're not even imagining now. So I would urge the research community to look at that, people who are domain experts to think about how this technology could be misused in their environment to look at that. But from a cybersecurity point of view, a general, you know, defending against attacks that we are seeing these days, I would also urge the defensive community, the security operations center analysts, the cyber defenders out there to look at how to use AI in their day-to-day job to make it more efficient, right? look at how to use AI in their day-to-day job to make it more efficient, right? We all know that people are flooded with intrusion detection alerts these days. Maybe generative AI can help you to go through those alerts and see what is actually a threat. We all know that reverse engineering
Starting point is 00:16:18 code and understanding malware code is very difficult. There are people out there looking at how to use generative AI to explain reverse engineered code and make that kind of human understandable, human readable, and so on and so on, right? Even generating code for threat hunting and things like that. So there's lots of opportunities for the defenders as well to use this type of technology for their own purposes. Our thanks to Daniel Dos Santos from Forescout for joining us. The research is titled AI-assisted attacks are coming to OT and unmanaged devices. The time to prepare is now. We'll have a link in the show notes. Cyber threats are evolving every second, and staying ahead is more than just a challenge. It's a necessity.
Starting point is 00:17:26 That's why we're thrilled to partner with ThreatLocker, the cybersecurity solution trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant. The CyberWire Research Saturday podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies.
Starting point is 00:18:13 This episode was produced by Liz Ervin and senior producer Jennifer Iben. Our mixer is Elliot Peltzman. Our executive editor is Peter Kilpie. And I'm Dave Bittner. Thanks for listening.
