Risky Business #749 -- Google's answer to Microsoft's insecurity? Buy Google stuff!

Episode Date: May 23, 2024

This week’s episode was recorded in front of a live audience at AusCERT’s 2024 conference. Pat and Adam talked through:

Google starts using security as a marketing tool against Microsoft, along with steep discounts
Microsoft announces a creepy desktop recording AI
UK govt proposes ransom payment controls
Arizona woman runs a laptop farm for North Korea
Julian Assange just keeps on with his malarky
And much, much more

This week’s episode is sponsored by Tines. Its CEO Eoin Hinchy joins the show to talk about how AI can be genuinely useful in automation.

Show notes

Dina Bass on X: "Google is offering deep discounts to government and corporate customers to entice them to switch from Microsoft Office as it attacks Microsoft's cybersecurity over recent breaches, citing US gov't cybersecurity review board report https://t.co/43sIJmBWi5" / X
Microsoft president set to testify before Congress on ‘security shortcomings’ | Cybersecurity Dive
Chairman Green, Ranking Member Thompson Announce Microsoft President Will Testify on Company’s Security Shortcomings Following Hack of Government Accounts – Committee on Homeland Security
Google leverages Microsoft’s cyber gaps to woo Workspace customers | Cybersecurity Dive
CSRB report highlights the need for a new approach to security
vx-underground on X: "tl;dr Microsoft introduces 24/7 surveillance functionality for the NSA and/or CIA but markets it as a feature that you'll like" / X
Everything You Need to Know About Windows 11's Recall Feature
Australian government warns of 'large-scale ransomware data breach'
National Cyber Security Coordinator on X: "The Australian Government continues to assist MediSecure, an electronic prescriptions provider, respond to a cyber incident. We are still working to build a picture of the size and nature of the data that has been impacted by this data breach impacting MediSecure. This https://t.co/oyNeRonurZ" / X
HHS offering $50 million for proposals to improve hospital cybersecurity
Remote-access tools the intrusion point to blame for most ransomware attacks | Cybersecurity Dive
UK insurance industry begins to acknowledge role in tackling ransomware
Exclusive: UK to propose mandatory reporting for ransomware attacks and licensing regime for all payments
Hacktivists turn to ransomware in attacks on Philippines government
Arizona woman accused of helping North Koreans get remote IT jobs at 300 companies | Ars Technica
US offers $5 million for info on North Korean IT workers involved in job fraud
FCC might require telecoms to report on securing internet's BGP technology
FCC to probe ‘grave’ weaknesses in phone network infrastructure
EPA says it will step up enforcement to address ‘critical’ vulnerabilities within water sector
EPA takes steps to address cybersecurity weaknesses at water utilities
British signals agency to protect election candidates’ phones from cyberattacks
Feds seize BreachForums platform, Telegram page
Dark web narcotics market’s alleged leader arrested and charged in New York
WikiLeaks’ Julian Assange Can Appeal His Extradition to the US, British Court Says | WIRED

Transcript
Hello everyone and welcome to this live edition of the Risky Business Podcast. My name is Patrick Gray and we are at AusCERT's 2024 conference here on the Gold Coast. And as you can see, because he's sitting next to me, we'll be chatting about the week's security news with Adam Boileau in just a moment. Please give him a warm hand. Thank you very much, Patrick. He loves a warm hand on his opening. This week's show is brought to you by Tines, and they make an absolutely killer automation and workflow product. And this week's sponsor interview is with Tines' chief executive and co-founder, Eoin Hinchy. And we spoke to him early on during the AI hype, and he was not impressed. But his opinions and his plans have changed and honestly, the interview we did made my brain hurt for several days afterwards.
Starting point is 00:00:49 You edited that one and you had the same reaction. Yeah, I mean, AI is weird enough and then you're dealing with a guy that's actually using it for useful stuff. It's very confusing. Yeah, it is. I mean, we want to believe, like we wanted to believe for a long time
Starting point is 00:01:03 that it was all hype and it was all bullshit and we could just ignore it. And then... Make my life a lot easier if it was. Yeah. All right. So we're going to talk about some stuff now. We're going to get into the news. And I guess the big theme of this week is Google v. Microsoft fight, right?
Starting point is 00:01:20 Because Google's come out swinging. They've come out swinging. They're going to offer like heavily discounted Google Workspace accounts for US government. Take that, Microsoft. I think the only flaw in this plan is that Google Workspace doesn't have anything even approaching feature parity with 0365. And I can't imagine this is going to go anywhere at all. Unfortunately, you're right. I mean, Google has done so much great work on security since they got hacked by China, many, many years ahead of Microsoft
Starting point is 00:01:50 getting obviously hacked by China. It's like a rite of passage, isn't it? Getting owned sideways by the Chinese. But then Google kind of did the right thing afterwards and spent a lot of money and effort and engineering time building things like what we now call Zero Trust came out of their response to getting hacked back in
Starting point is 00:02:09 what was that, 2000 and... Age? Yeah, it was a long time ago. But as you say, yeah, like feature parity Excel versus Google Sheets. I mean... I mean, it's my joke. I often say that people don't realise but 50% of the world's GDP is run through Excel, right?
Starting point is 00:02:27 And it's a joke, but it's true. It's a joke that's not funny because it's true. Yeah, I mean, it's a developer environment for people who aren't developers, basically. I think that's the best description I've heard of it. But, I mean, you know, this is a big discussion in the United States at the moment, which is, okay, Microsoft has got just completely dominates
Starting point is 00:02:46 in federal government. How do you turn that around? And there's no easy answer there because you can't go for workspace. I mean, Sheets is great if you want to do such advanced mathematical operations such as multiplying, you know, addition, subtraction. It can do that.
Starting point is 00:03:01 And averages, I've heard. Averages, yeah, wow. They can do the mean. They can do the mean. They can do the mean. They can do a mean mean over at Google. But yeah, so I just don't know how this is going to go for them. And what's been interesting, though, is seeing the sort of ambulance chasey marketing
Starting point is 00:03:17 that Google's doing around this, because they published this blog post saying, the CSRB report highlights the need for new approaches to securing the public sector. Oh, wow, this is some great thought leadership out of Google, but the whole thing's a marketing blog post saying that US FedGov should use their stuff instead. I mean, from a security perspective, they're kind of not wrong. It's just everything else that's the problem, right? Actually shifting the entire, well, can you imagine the Department of Defense running on Google Sheets?
I mean, I can imagine parts of the US Department, and I think that's where Google can win, right, is for certain categories of user accounts, they can go there. But, you know, I think Microsoft's big thing, right, it's the integration. You've got Active Directory, you've got Entra ID, and you can synchronize them and do it horribly and get owned a lot. But you can synchronize them and you can do the whole thing. And then you've got integrated Office applications. You know, the whole thing just sort of sticks together in this sort of weird turducken of Microsoft legacy and new.
Starting point is 00:04:15 Which is how you end up with Microsoft Teams conversations being SharePoint posts behind the scenes. Which is why it's so goddamn slow using Microsoft Teams for group chat. But Microsoft is still in trouble. For those who aren't aware, the CSRB, that stands for the Cyber Safety Review Board, and that is the review board set up by the Department of Homeland Security in the United
Starting point is 00:04:37 States to do investigations on serious cyber incidents. And they gave Microsoft a solid paddling recently over one of their incidents. It was the Chinese one, right? Not the Russian one. Yeah, it was the Chinese one. Sometimes they all sort of blur into one. So Microsoft's Brad Smith, the president of Microsoft,
Starting point is 00:04:57 he is going to testify before the House Committee on Homeland Security on June 13. So the paddling will continue until morale improves. Microsoft's going to need a lot of paddling, which fortunately the US government has a lot of paddles. They do. I mean, I wouldn't want to be Microsoft's... Can you imagine being Brad Smith heading into that?
It's going to be a rough day at the office. Now, Microsoft has just tried to change the conversation here, I think, in the worst way possible, in the absolute worst way possible. Who here has seen the news that Windows is going to have a feature that records everything you do on your computer? And it's got some AI in it as well, right? Because if that didn't sound bad enough, it's now with AI. So they've
Starting point is 00:05:45 introduced, what's the feature called? It's called Recall. Yeah, Recall. Basically, it takes constant screenshots of your box and then uses Gen AI to summarize what's going on so that it can index search it. And then you can ask your local machine about stuff that you did last week and it will know about it. And this is fine. I mean, it's just like having someone stand over your shoulder and watch and record literally everything you ever do. Yeah.
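As an aside for readers: the mechanism being described here is simple enough to sketch. The snippet below is a minimal, hypothetical illustration of a capture, summarise and index loop, not how Recall is actually implemented; the screenshot and summariser calls are placeholders, and only the SQLite full-text index is real.

```python
import sqlite3
import time
from datetime import datetime

# Placeholders: a real implementation would use a platform screen-capture API
# and a local vision/LLM model. Neither is specified here.
def capture_screenshot() -> bytes:
    raise NotImplementedError("platform screen-capture call goes here")

def summarise_image(png: bytes) -> str:
    raise NotImplementedError("local vision/LLM summariser goes here")

# A full-text index over the summaries is what makes "what was I doing last
# Tuesday?" answerable later, and also what makes the privacy crowd nervous.
db = sqlite3.connect("recall_index.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(taken_at, summary)")

def capture_loop(interval_s: int = 30) -> None:
    while True:
        summary = summarise_image(capture_screenshot())
        db.execute("INSERT INTO snaps VALUES (?, ?)",
                   (datetime.now().isoformat(), summary))
        db.commit()
        time.sleep(interval_s)

def search(query: str) -> list:
    # Full-text match over everything that has ever been on screen.
    return db.execute("SELECT taken_at, summary FROM snaps WHERE snaps MATCH ?",
                      (query,)).fetchall()
```

The point of the sketch is only that "screenshots plus an index" is, by construction, a searchable record of everything that has ever been on screen, which is exactly what the next exchange is reacting to.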
I mean, that sounds great. Everyone's going to feel great about that feature. That's really on the zeitgeist. Like everyone's really in for exactly this kind of creepy surveillance. Good job, Microsoft. Really, you know, nailed it. Nailed the vibes.
Starting point is 00:06:25 I mean, the thing is, they could probably do this in a way that is sane, but I just can't believe they chose now to announce that when they're in the middle of a sort of security image crisis, right? Like, it just seems pretty boneheaded to me. It does seem a little boneheaded. And then there's the challenge that, like, modern Wintel computers don't have enough hoof
Starting point is 00:06:43 to do the AI stuff they want, right? You're going to have to run Windows on ARM with mobile phone chips that do have APUs for doing neural net processing on board, whereas Intel, we're going to have to wait three, four years or however long for HP to catch up, Dell to catch up. And I can think of stuff like this that has been useful.
And I can think of stuff like this that has been useful. Does anyone in here know Vortimo? Vortimo is a creation of Roelof Temmingh, who created Maltego. And essentially, it's a browser plugin that allows you to, like if you're researching something, you turn it on, and it records everything that comes through your browser. It does automatic entity detection, you know, names, various selectors and whatever. And it, you know, it chews a lot of storage, right? But it's a terrific bit of software and that's done right. I want to research something that I might need to recall later, flick it on, off you go. And Satya's idea is like, no, no, we'll do the whole desktop, not just the browser and leave it on all the time. And you just think, what are you, really? You think people want this? You know, just seems a bit out of touch. Hands up if anyone wants that. Wow. There you go. Not one.
Starting point is 00:07:48 Free research from Microsoft there. Market research. Exactly. Market validation services by Risky Business Media. Awesome. So let's talk about some stuff that's happening in Australia at the moment. And we've seen the government in a flap about a data breach for which there's very little detail publicly.
Starting point is 00:08:07 And I'm talking about MediSecure. I'm sure most of you in this room would have seen the news headlines about the big breach at MediSecure, who apparently do electronic prescriptions, which aren't even really that common in Australia. I know when I go to a doctor, I get a little printed script. I think during COVID, we had a couple of telehealth appointments and you'd wind up with a text message, QR code or whatever script. But we don't really know what's been taken. I just love too that it's MediSecure, right? It's got secure in the title. That's how you know it's going to get wrecked. Such a red flag, right? So yeah, like we still don't know. I think I checked yesterday, and they still hadn't really put much of a meaningful update
Starting point is 00:08:46 on their web page. But it looks like some sort of data breach may involve some prescription information. But again, the Australian government's doing the thing where it's, like, activating a whole-of-government response, and it seems like, you know, the federal government has kind of decided that health information is something
that deserves a big reaction and a big response, right? And I think that's a positive thing. Yeah, it is. It's nice to see that there is a process to follow, even if, like in this case, we don't have as much information as we did during the previous Medibank... Medibank was the previous one? Yes. Where that was, you know, a really big deal. And so it's nice to see that that process is being followed, but it's not really clear what the impact here is. And they'll be talking about like it's a third party provider and there's just not a lot of information, but better to overreact for something that doesn't need it than underreact when we really do.
Starting point is 00:09:43 I think that's right. And it's interesting seeing politicians play in this space, right? Because it is being led at a political level where we've got like the Home Affairs Minister saying, you know, this is serious stuff and we've activated ASD and all this sort of stuff. So, yeah, I mean, I think it's good to have that sort of signalling at a political level. And things have been relatively calm in
Starting point is 00:10:05 Australia for the last little while. So I don't know. But it's hard to talk about this when we just don't really know what happened. It's very hard to have sensible conversations in public when we have so little data. And as you say, throwing politicians into the mix, like that's what the cybersecurity industry needs to go with lawyers, is we need politicians in there as well. But on the other hand, us nerds have not exactly done a good job of this whole computer business. I think it's good to establish this as a norm too. So whether there's a change in government or not
Starting point is 00:10:36 at the next election, I think there would sort of be an expectation that if there were a similar breach, that there would be a response from the government. So that's one of the things that I like about this, is it sort of establishes it as something that the government should do. Yeah, and I think Australia has been pretty forward-leaning in dealing with...
Starting point is 00:10:54 The Medibank hacker, for example, seems to have had a pretty rough time of hacking Australia, so I'm sure there are some people who will think twice about coming after your koalas or whatever. Yeah, named, sanctioned, imprisoned. Yeah. Damn. Ain't many cyber criminals in Russia that end up in prison for doing crime in the West.
Starting point is 00:11:14 So good job, ASD and friends. And of course, you know, the healthcare sector globally is just such a target at the moment. It seems like the education sector and the health sector are just two areas where ransomware actors are really applying focus. The controls are not really as good as other sectors, other verticals. But don't worry, Adam, because the US government, HHS, has a solution. So, I mean, okay, step one with the headline, $50 million. Do you feel like that buys enough Fortinet to secure the US hospital system? Probably not. Okay. That would be if they were going to spend $50 million on regular cybersecurity solutions
Starting point is 00:11:54 like FortiSecures. But no, they are going to spend it on getting people to submit ideas for automated patching. So like AI-driven, automatic network securing this that you are going to run in a hospital environment and it's going to make it better. I mean, we've been telling human beings to patch things for 20 years and that hasn't worked. So maybe we're going to try the robots, but I don't think that's going to work either.
I mean, I guess like ChatGPT sometimes listens when you ask it to do something, so that's probably better than a sysadmin. That's a good point. They're less ornery. You're making good points. The GPT being the less ornery one, that is, not the sysadmin; sysadmins are very ornery. Clearly healthcare needs something, but it's such a hard problem space.
And, like, if we humans can't get, you know, medical, hospital, healthcare, IT system security right, I don't feel like AI is going to do a better job. No, and, like, just the scale of the problem. This is why for years I've been advocating for offensive action from signals intelligence agencies and whatnot against ransomware actors: because of the sheer amount of security uplift that's required to make a dent on ransomware. Politicians early on thought, this is a security uplift problem. We'll just set some standards.
Starting point is 00:13:20 And it hasn't worked. It was never going to work. And the idea that, I mean, look, good on them, 50 million bucks for some robot patching, I don't know, fine, right? But yeah, I don't think it's going to work. I don't think it's going to do anything, really. Yeah, and I think the Australian approach
of releasing a few hounds to go after the people who are actually doing it, like it's not the right solution, but it's the one that's actually going to make a difference in a way that $50 million worth of AI patching probably isn't. Yeah, I think sanctioning Bitcoin exchanges that are laundering money, taking down things like Tornado Cash, doxing, sanctions, travel bans, charges. But my blockchain, my freedoms, on my blockchain you can't put me in jail for writing a money laundering Ethereum bot. Yeah, think about that. They can. They can, as it turns out.
Starting point is 00:14:10 One person's blockchain privacy tool is another person's money laundering business. North Korean's money laundering business. But, you know, like just on patching, we got some research that's come out. This came out of an insurance firm, I think. Is that right? I don't know.
Starting point is 00:14:23 I should have prepped better for this. Prize for obvious headline. Yes, remote access tools, the intrusion point to blame for most ransomware attacks. So they're utilizing the cutting edge hacking technique of logging in. So change healthcare, right? When change healthcare went down, I'm sure most of you in the room heard about the Change Healthcare fiasco in the US. The mind boggles when you realise how the threat actor was able to do that, which is they bought some creds off the dark web and then logged in with no MFA into Change Healthcare's Citrix. Womp womp.
Do we have sad trombone loaded on the pad? I don't have sad trombone. I've got internet weapon. Okay, that's good. Internet weapon. Internet weapon. That's what a password you buy on the dark web is.
Starting point is 00:15:10 Yeah. That's what the Citrix is these days. So I don't know. It's sort of like times like this when we all want to just go live in a cabin somewhere remote and forget about this whole security thing. That's pretty much the only option. But yeah, I think we've all seen
Starting point is 00:15:22 quite how many breaches have been in network edge remote access tools lately, right? I mean, the Fortinets and the Pulse Secures and the Citrixes and the Cisco VPNs. And if you put it on the edge of your network and you made it in the 90s and then just rebranded it for 20 years. Yeah, private equity buys some product
Starting point is 00:15:42 with a decent install base and then just never touches it and just keeps renewing the licenses because that's how our wonderful business works. Yeah, good job, good job. Enjoy your Broadcom everything. Yeah. Oh, VMware, so sad.
Starting point is 00:15:56 Paul went out for VMware. What else have we got here? Paul went out for all of the VMware admins. I'm terribly sorry. Now, we've got a happy turn in a big theme, a big arc that we've had on the show over the last, I want to say decade, which is when insurance firms started getting involved in trying to urge their customers to make better security decisions. This beardy guy right here, he was so excited by that. He's like, this is going to make an impact.
Starting point is 00:16:23 This is going to make an impact, right? You know, we've got some financial levers, some financial incentives. Insurers are going to make people make better decisions. It didn't really happen. It was very disappointing. It seems like that's actually turning around now. You know, I speak to some vendors where they say, look, you know, a big way we're selling, like Zero Networks is a great example. They do micro-segmentation stuff. And a big way that they sell is someone has to submit a pen test report to their insurer to negotiate as part of negotiating their premiums. And if there's a finding that says, this network is flat, wide open, and pretty bad, they have to do something about that.
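For readers who haven't run into micro-segmentation, the control being sold here is conceptually just a move from an implicit allow-all on a flat network to default-deny with a short explicit allow list. A toy policy check, purely illustrative (the networks and rules below are made up for the example):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    src: str      # source network, e.g. workstations
    dst: str      # destination network, e.g. a specific server
    port: int     # destination port

# A flat network is effectively one big implicit allow-all. Micro-segmentation
# replaces that with a short explicit allow list and denies everything else.
ALLOW = [
    Rule("10.1.0.0/24", "10.2.0.10/32", 443),   # workstations -> intranet web
    Rule("10.3.0.5/32", "10.2.0.20/32", 5432),  # app server   -> database
]

def is_allowed(src_ip: str, dst_ip: str, port: int) -> bool:
    for r in ALLOW:
        if (ip_address(src_ip) in ip_network(r.src)
                and ip_address(dst_ip) in ip_network(r.dst)
                and port == r.port):
            return True
    return False  # default deny: lateral movement paths are closed off

# A workstation reaching straight for the database now gets blocked:
print(is_allowed("10.1.0.15", "10.2.0.20", 5432))  # False
```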
So now they're rolling out micro-segmentation controls, which, you know, people haven't really done en masse, right? So this is actually driving some change. And we've got a report here from Alexander Martin over at The Record which says the UK government is now realizing that working with insurers might be a decent way to start moving the needle on security uplift. That thing I said a minute ago wouldn't work. Yeah, I mean, I keynoted B-Sides Canberra, the first one. So that was 2016, I think. And I said that insurance was going to save us, because they have metrics and data and we, the security industry, you know, we have vibes and occasionally Qualys reports. They're evidence-based vibes. Evidence-based vibes, but still vibes nevertheless. And I thought insurance, they'll bring their rigorous statistical methods and actuarial tables and they will save us all, and then everyone laughed at me, because that is not what happened. But maybe I was just ahead of the times. Maybe I'm an insurance hipster and I was into insurance before, but yeah, seeing the UK government start to work with their insurers and start to go, you know, like there is a place for insurance to drive the right behaviors, you know, feels like a direction that... You know, in the end we can't solve this with nerd stuff, we have to solve this terribly with business stuff, and insurance is about as business as it gets, right? Yeah.

And look, staying with the UK, there's some interesting stuff happening over there right now. They've got a proposal apparently coming along which would require people to obtain permission from the government before they can pay a ransom in a ransomware attack. They're also proposing banning payments for critical infrastructure providers. Now, I don't think that's a good idea for a very simple reason, which is it's well and good to have a ban on a piece of critical infrastructure until you really, really need to get those systems back up and running and paying is the only alternative. Now, I understand governments want to reduce the likelihood that people pay, because quite often the reason that they're paying is they've done the numbers on how long it will take them to recover without paying versus, you know, how long it'll take them to recover with paying, and they've just realized that it's cheaper to pay, and I don't think that's a good reason to pay. You know, you should take your medicine. Honestly, if you've been ransomwared, it's just a fact of life. And the licensing regime, I know this is a proposal being considered in a lot of countries, right? The UK is the first one to talk about it publicly. The idea that a government department with some bureaucrats in it, right, because they are the people who work in government departments, that they're going to be the ones telling you whether or not you can pay a ransom, I don't know whether they're always going to make the right decision. And so I just feel a bit funny about this one. What do you think? I mean, one of the reasons you pay ransom is because it's faster, right? I mean, you can build your water infrastructure or whatever else back from scratch. If you have to, it's going to take a while. When we were talking last week with Lena, she said speed of recovery is one
Starting point is 00:20:08 of the main reasons that people will consider paying a ransom, even though they know overall it's the wrong thing to do. And so that's depending on critical infrastructure, by definition critical, kind of makes sense that that's a thing that speed is going to be important. But it's just such a hard set of trade-offs to make. And, you know, an absolute answer, you know, an absolute rule is always going to be difficult. And then putting bureaucrats in the mix is going to take the speed advantage of paying a ransom out of the equation. Yes. Government's known for speedy responses.
Starting point is 00:20:41 Yeah, exactly. But we had that a couple of weeks ago on the show. We were talking about in the US where some state-level laws tended to encourage paying ransoms because you were required to do everything practicable to prevent the harm. Minimise the impacts of a breach, yeah. So that's an issue.
Starting point is 00:20:58 I think New York State is one where the lawyers have interpreted their laws and regulations to say you should pay in the case to get data extortionists to delete data. Yeah. So I think in the case of the UK, one of the things is mandatory reporting, which that's a useful thing, right? To at least have some data.
Yes, yes. I mean, either way, it's a thing governments have been struggling to figure out exactly what to do about, and there aren't any easy answers, because if there were, we would already be doing them. But, you know, it's kind of nice in a way to watch other governments and other places do it, so that we can learn from it instead of having to make the mistakes ourselves. Yeah. So thanks, Britain.

Now let's talk about North Koreans getting jobs in the West, because this is a wild story. So we got Dan Goodin's report on this from Ars Technica in this week's show notes. This woman in Arizona has been accused of basically running an operation where she would help North Koreans get like legit IT remote work jobs. She set up like a laptop farm where these people could proxy in and she helped funnel the money to North Korea. And the agency in North Korea that was running this program is actually the nukes agency, right? So, you know, people who listen to the show might have heard me say, like, I've always found it's a bit of a tired line when people say, oh, you know, the crypto theft that North Korea is doing is paying for their nukes, because as a percentage of North Korean GDP it's actually very, very small, the crypto theft. But now we've got an example of the actual nuke agency, and they're not doing, like, from what we can tell they're not really doing cyber operations once they have these jobs. They're just trying to earn a salary and give it to their government, which is, you know, what a world. What a world indeed. And yeah, I guess, you know, a US tech worker salary, you know, probably goes a long way in North Korea. That's one gram of plutonium. Yeah. But yeah, it was just kind of funny, the actual mechanics of this process, because we've seen stories about North Korean operatives applying for jobs and getting them remotely, going through the interviews, etc. But the mechanics of how do you actually Okta-auth to an American company with geo-restrictions, someone has to have the plumbing to do that. Someone needs a basement full of laptops, basically. Someone has to get the paychecks, etc., and remit them back to North Korea. So yeah, it's kind of funny seeing that that's someone's actual job. I mean, who would have thought that in this year of our Lord we would be at the point where you could be a laptop fixer for the North Korean nuclear weapons agency, in Arizona. I think she's facing like 100 years in prison as well. Man, talk about playing stupid games and winning stupid prizes. That's a hella stupid prize.
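A side note on the mechanics being described: the laptop farm exists to defeat the kind of location and device checks an employer might run at login. A rough sketch of such a check, with placeholder lookups and invented values rather than any real product's API:

```python
from dataclasses import dataclass

# Placeholder data sources: in practice these would be a GeoIP database,
# a residential-proxy/VPS feed, and the HR system. All values here are invented.
GEO = {"203.0.113.10": "US", "198.51.100.7": "KP"}
PROXY_IPS = {"203.0.113.10"}                  # e.g. a known proxy exit node
DECLARED_COUNTRY = {"new.hire": "US"}

@dataclass
class LoginEvent:
    username: str
    source_ip: str

def review_login(evt: LoginEvent) -> list[str]:
    """Return reasons this login deserves a second look by a human."""
    flags = []
    if GEO.get(evt.source_ip) != DECLARED_COUNTRY.get(evt.username):
        flags.append("geolocation does not match declared work location")
    if evt.source_ip in PROXY_IPS:
        flags.append("egress IP is on a proxy/VPS list")
    return flags

print(review_login(LoginEvent("new.hire", "203.0.113.10")))
# -> ['egress IP is on a proxy/VPS list']
```

A farm of company laptops sitting on US residential connections sails past the geolocation check entirely, which is part of why, as the story that follows illustrates, the identity-verification steps at onboarding end up carrying so much weight.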
Starting point is 00:23:57 Funnily enough, I actually know a company that this happened to. And it was a really interesting story. They haven't talked about it publicly, so I can't name them. They figured it out in about three days and they actually had really good processes. And they used a third party to do some of the identity verification stuff and this third party didn't do what they were paying them to do. They skipped a couple of really important steps
Starting point is 00:24:21 and this person was onboarded. And as I say, they figured it out in a few days they did go poke around a couple of places that was like they're not sure that there was intent there to do anything shady but yeah as I say they figured it out but if you think this can't happen to you I don't know that it's happening to Australian companies yet but it's certainly like something North Korea is doing big time in the US and mostly just providing competent contracting services, which is the part of this that's so weird. It is very funny that we are so desperate for SharePoint admins that we're going to
hire North Koreans. Hire them straight out of North Korea. Now, in some news that's come, I don't know, 15 years too late, but you know, better late than never. Let's put it in the better late than never category. The FCC in the United States is now going to require telecoms to report on what they're doing about BGP. And, you know, I still remember one of the best presentations I ever saw. It was in the Netherlands and it was, at the time, the chief scientist for Arbor Networks, so this is a long time ago, and he did a whole presentation on BGP, and I think my favorite line from that presentation was, frankly, I'm amazed the internet works at all. Yeah, that's a miracle. Does anyone remember when half of YouTube wound up getting routed to a null interface on a box in an ISP in Pakistan? I've seen like a couple of hands go up. So the way that happened was really funny, because Pakistan decided to ban YouTube and some admin at one of the ISPs just went, well, I'll do a BGP announce for YouTube's IP space to this null router, you know, to this null interface. But he accidentally announced a more specific route that propagated through the entire planet and wound up sending 50% of all YouTube traffic to a dead interface in Pakistan. I feel sorry for that poor Cisco. I mean, that would have been, what, 2009 or something. But don't worry, Adam, the FCC is onto it now. Let me check. It's 2024. When did L0pht testify
Starting point is 00:26:25 to Congress about BGP? That was like 95, right? Yeah, something like that. As I say, better late than never though, right? 29 years? I know regulatory agencies can be a little sluggish, but at least they've fixed it now. That's the important thing. We're in a better place now
Starting point is 00:26:41 though, right? Because back then, BGP hijacks could actually get you something useful, and now we're in an everything's encrypted world. But you would think it's still going to be good to maybe not let your BGPs get redirected everywhere all the time. At the very least, from an availability point of view, BGP being more robust and RPKI and things like that would be useful, but TLS has made it much harder
Starting point is 00:27:06 to just get in the middle and help yourself to everything. Yeah, yeah. And I've dropped a link too in the show notes this week to a report from April because the FCC is doing something similar around SS7. And you know, it's just crazy that we've been doing this so long and then we start to see like smart regulatory decisions come down. As I say, better late than never, but are you telling me it took until 2024
Starting point is 00:27:30 for the Federal Communications Commission in the US to go to the telcos and say, hey, maybe you want to have a look at... Have you thought about SS7? Have you thought about SS7, the signaling protocol that runs all of your networks? There's so many complicated moving parts in that ecosystem, and it's weird in a way that it's taken this long. But yeah, it makes BGP look like the new kid on the block, seeing them out there regulating SS7.
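For anyone who wants the YouTube incident in one screen: BGP best-path selection starts with longest prefix match, so a more specific announcement wins everywhere it propagates. The prefixes below are documentation ranges, not the real ones from the incident; this is only meant to show the selection logic.

```python
from ipaddress import ip_address, ip_network

# Two competing announcements covering the same destination: a legitimate /24
# and a rogue, more specific /25 pointed at a null interface.
ROUTES = {
    ip_network("203.0.113.0/24"): "legitimate origin",
    ip_network("203.0.113.128/25"): "hijacker's null interface",
}

def next_hop(dst: str) -> str:
    # Longest prefix match: the most specific covering prefix wins,
    # regardless of who announced it or when.
    covering = [p for p in ROUTES if ip_address(dst) in p]
    return ROUTES[max(covering, key=lambda p: p.prefixlen)]

print(next_hop("203.0.113.200"))  # -> "hijacker's null interface"
print(next_hop("203.0.113.5"))    # -> "legitimate origin"
```

Route origin validation with RPKI, which is the sort of deployment the FCC's reporting push is meant to encourage, is aimed at getting routers to drop exactly that kind of rogue announcement.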
Starting point is 00:27:57 Now let's talk about hacktivists, also in the US. Hacktivists, well, you know, Russian hacktivists, whether or not they're working at the behest of the state or not, we don't really know, but they've been attacking water infrastructure. Funnily enough, the EPA now is setting up enforcement to address critical vulnerabilities within the water sector. This is not the first time the EPA has tried to address this. They've said, OK, well, we have to... You know, we enforce safety standards and whatnot, and people need to adhere to certain guidelines around disaster response and whatnot, non-cyber.
They tried to demand that they would be able to do cyber security audits on water treatment plants. And keep in mind, in the United States, often they're run... And here, they're run by like local governments. They're not some, you know, there's not some big, you know, federal Coca-Cola water company. Yes. Yeah, so it's... the dystopian future, but yet, you know, a bunch of Republican politicians cobbled together a lawsuit to make them stop, and we surprised ourselves by actually being on the side of the people doing the lawsuit, because it did seem like overreach. They've come back for another stab at this, and they've issued an enforcement alert, and they're trying to issue guidance and standards. I'm guessing there won't be an audit component there. I really feel for them, because what they're trying to do is really good, and it's just really a question of whether or not they've got the authority to make it happen. Yeah, it's one of those things that ought to be straightforward. Like, modern society, water supply, ought to be a thing that we should manage. I mean, it's been a while since the Roman Empire, you know, we should be able to do water reticulation. But the environment around that, all of the small authorities, all of the local political dynamics, all those sorts of things... You know, computers were already hard when we managed to centrally control them and centrally manage them, and now we've got, you know, distributed, super important critical infrastructure being run, yeah, in a distributed way, by people who aren't really qualified to secure it. We never should have moved away from aqueducts.

Now, one more from the UK. The GCHQ is, they're standing up, or making available, a DNS, a secure DNS service, essentially, for opposition politicians and various people in civil society. This is a great first step. And it's an opportunity to talk about something that kind of alarmed me when I learned about it here, which is: if you're in opposition in the Australian government, you're an elected MP, you don't really get all that much support. Political parties too: here in Australia we saw the Chinese had had a go at the Australian Labor Party and the Liberal Party, like party organizations.
Starting point is 00:30:56 because if you ask the, you know, the average Australian journalist, hey, would you like the Australian Signals Directorate to install some endpoint protection software on your computer? They're going to say, absolutely bloody not. And I can understand that. But when it comes to opposition politicians and political candidates and political parties, it's amazing how little support they get. This seems like the UK recognising that.
Starting point is 00:31:23 They're trying to do at least something small. Hopefully it expands. What did you make of this? Yeah, so I mean from a technical point of view this is pretty straightforward, like they've got a DNS server that will filter domains that are in their list of bad stuff and presumably the GCHQ has a reasonable list of bad stuff and they've been providing this service to government agencies and other things. So they're sort of expanding an existing service, and that seems like a low-impact way to extend some of that protection out, and especially when, you know, we're not in a centralised network anymore, you know, mobile devices, etc., etc., you know, you need to have something that can work regardless of how you're getting on the network. DNS block list, very sensible way to do that. But the bigger picture question of, you know, democracy and how that deals with,
Starting point is 00:32:06 you know, the crazy information warfare future that we find ourselves living in, you know, that's much more difficult. And I think the thing you highlighted about journalists and journalists and their reluctance to, you know, get protection from a government when holding government to account is what they're meant to be doing, you know, get protection from a government when holding government to account is what they're meant to be doing. You know, it makes it complicated, but, you know, we're in a time and a place where we need this kind of cooperation and, you know, opposition. Yes and no. Yes and no.
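To ground the "DNS server that filters domains on a list of bad stuff" description: a protective DNS resolver is, at its simplest, a lookup against a deny list before normal resolution. A minimal sketch with made-up blocklist entries and the upstream hand-off left as a stub:

```python
BLOCKLIST = {
    "evil-phish.example",          # placeholder entries; the real value of the
    "credential-harvest.example",  # service is the quality of this list
}

def blocked(qname: str) -> bool:
    # Match the query name and any parent domain against the deny list.
    labels = qname.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def forward_to_upstream(qname: str) -> str:
    raise NotImplementedError("hand off to a real recursive resolver here")

def resolve(qname: str) -> str:
    if blocked(qname):
        return "NXDOMAIN"          # or a sinkhole address, depending on policy
    return forward_to_upstream(qname)

print(blocked("login.evil-phish.example"))  # True
```

As Adam notes, the mechanism is the easy part; the value is in curating the list and in getting devices to actually use the resolver wherever they happen to be.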
I mean, look at what happened in Poland. True. Right? With the whole spyware scandal in Poland, where the new government there is still unpacking all of the abuses that the previous government engaged in with its use of tools like NSO Group's spyware and whatnot. You know, if you have, you know, a government agency providing this sort of support, I mean, it's easy for us in Australia and New Zealand to say, oh, you know, that'll be fine, but I understand the problem there, you know. When we
Starting point is 00:33:00 were talking a couple of weeks ago about uh the amount of spyware being bought by the Indonesian government, that's a democracy that's very close to us, but the fact that they are using spyware presumably for political ends there, a little bit more. And for those who didn't see that news, that was some report. It was Amnesty came out with that one. And what was interesting is you start reading about it and you're like, oh, they might have been using it for legitimate counterterrorism purposes. And then you see that a whole bunch of the domains
Starting point is 00:33:28 tied to their use of that spyware were like lookalike domains for like opposition parties and stuff. You know, probably not. So I don't know. There's no easy answers, right? And, you know, I guess DNS blocking is something. Whether we'll see them... The UK government has talked about
Starting point is 00:33:45 extending that to internet service providers as well, so they'll be providing it on a broader national level which limits its effectiveness. Here we are still talking about better late than never, you know, like secure DNS services 2024, yeah, crazy. Breach forums,
Starting point is 00:34:03 Adam, seized again seized a second time yes so breach forums was a hacker forum they got taken down I want to say last year
Starting point is 00:34:15 or was it beginning of this year time seems to dilate I can't even tell I think it was last year Pom Pom Purin got arrested
Starting point is 00:34:19 Pom Pom Purin was the admin got arrested someone stood up a breach forums to but this was like Clearwe arrested. Someone stood up a breach forums too. But this was like ClearWeb. Yes.
Starting point is 00:34:28 Like who runs a breach forum on the ClearWeb and expects not to get like taken down and arrested? Well, it turns out got taken down and arrested. Yeah, there's been a dark web arrest as well. A 23-year-old Taiwanese man. Yeah, incognito guy. Has been arrested for running incognito market. But yeah, a bad time to for running incognito market.
Starting point is 00:34:48 But yeah, a bad time to be running a dark web marketplace. The incognito guy, though, he's the one that scammed his own users. Like he exit scammed his own dark web market and took all their stuff. And then he tried to extort his own users. And he said, like, give me $20,000 or I'm going to publish details of your use of my dark web market. And furthermore, you're an idiot for trusting me, the operator of a dark net market on the internet. And then he went to New York.
Starting point is 00:35:14 Yeah, good move. Always a good move. I mean, I figure, like, Rikers Island prison, probably a safer place than when he's just tried to extort half the world's drug dealers on his own marketplace. Geniuses in the criminal underworld. Geniuses.
Starting point is 00:35:29 Absolute geniuses. Unfortunately, I regret to inform you that we briefly need to talk about Julian Assange because he has won the right to appeal his extradition to the United States. By the time this guy exhausts all of his appeals, he's going to be due for a nursing home. Like, this is just dragging out.
Starting point is 00:35:47 It's unbelievable. How long has he been in, like... He's been in Belmarsh five years. And before that, he was in the broom cupboard at the Ecuadorian embassy. And you just think, mate, you're probably going to get time served. And I think he's got a better shot at, like, clemency or,
you know, say he does get a long sentence if he were extradited, he's got a better shot at clemency from Biden than he does from Trump. So what's he doing kicking this can down the road? I mean, I understand that it would be a pretty... you know, you would not be happy if you were him, but sometimes I question his strategy. Yeah, I mean, I feel like at this point he just needs to go, look, you know what? Yeah, I did it all. Fine. Sentence me. Time served. Walk away. Get it done, because it's just been dragging on for so long. And to be honest, like, how long has it been since WikiLeaks was relevant? Yeah, a long, long time. But, Adam, we are going to actually wrap it up there. That is it for this week's news recording. We're going to stick around for a couple of questions if anyone's got some, but thank you very much for being a terrific audience and hanging out while we flap our gums about the week's news. It's great to be here. Thank you so much. Thank you very much.

A big thanks to AusCERT's 2024 conference for hosting Adam and myself on the Gold Coast. It was a lot of fun. It is time for this week's sponsor interview now with Eoin Hinchy, the co-founder and chief executive of Tines. Tines does security automation and they do it extremely well. Just talk to any of their customers and you typically get a stream of praise. But yeah, they're sort of doing more than just security now and they have some big plans. As a company that's already automating stuff, Tines is extremely well placed to make use of decision engines like large language models. So Eoin joined me for this absolutely terrific interview, all about how they're thinking about AI. Enjoy.

The way we think about Tines is that we provide software to help companies build, run, and monitor their most important workflows. And for the longest time, we spent probably like 80, 90% of our engineering and product resources on the build section. And that was the workflow builder, allowing our customers to build these like incredibly intricate, powerful, flexible workflows that integrated with like any tool in their stack. As we've kind of like experimented with AI, and when we last spoke, we were probably after spending about six months of experimentation around AI. And we were... Well, at that point I think you said all it was good for was crapping out broken... Yeah, workflows, right. It just wasn't there. We were quickly descending into like a trough of
Starting point is 00:38:36 disillusionment around like ai and we were seeing other companies kind of like quickly release these features that felt like bolted on that were mostly kind of like demo wear and we were like geez are we gonna have to like descend to that and do something like that but what we what we realized was with these technologies a there's a huge difference between building something that's demoable and building something that's actually deployable and the difference with something that's deployable and building something that's actually deployable. And the difference with something that's deployable is, you know, it solves real customer problems, it runs at scale, it's cost effective. And we've seen huge companies get this wrong, right? Like Microsoft, their security co-pilot, I don't know if you like were reading recently, it costs like 100k minimum
Starting point is 00:39:21 and Microsoft themselves are saying, hey, don't trust this thing. It's just like, it's, it's staggering. And so what we realize is like, okay, we can build something that like demos really well, but can we build something that's like deployable? And eventually what we realized was, yes, we absolutely can, but we were trying to solve two separate problems. And the first problem we were trying to solve is your point, which is like, okay, well, how do we make these workflows a little bit easier to build? So both in terms of like configuration, but also in terms of like using natural language to describe the kind of workflow that I want, and also using natural language to iterate on it once it's built. And that's kind of like the the easy obvious application of ai and llms to this problem and i think there is very little technology moats honestly to like providing
Starting point is 00:40:15 that type of capability um what what's really interesting is that like over time we solve this kind of like build problem with the help of ai but now we're spending more and more of our time focus on the running of the workflows and the monitoring of the workflows and so we suspect that over time building your workflows will become commoditized either through ai or by writing like python and you don't need to be an engineer anymore to describe the kind of scripts that you want etc and what will become more and more important is like okay well how do I run these things at scale how do I make sure they can run both in the cloud and on-prem and in hybrid and how do I make sure they're like massively scalable and elastic and all this kind
Starting point is 00:41:00 of stuff and also okay well how do I monitor these things to make sure they're working the way I expect them to work? And how do I get notified if a service I'm relying upon falls over? Or how do I be notified if something breaks midway through a workflow? And I think that will become more and more of a important differentiator for us as we kind of like continue to grow as a company I mean it's almost like when you think about the way that you're gonna deploy these models you know you almost think about them like people doing a job right because they're a natural language interface so you kind of want it to ask
Starting point is 00:41:38 you an important question but not bother you with not important question like it's just, you know, and you're going to have models being the bosses of other models. And, you know, I wonder if they're going to start firing their sub models or something or complaining up the chain and saying, you got to do something about this underling. It's, it's no good. Anyway, I'm getting a little bit. No, no, I think you're, I think you're right. You touch on a really important point in that, like, you know, up to this and not that's not quite true but like up to this we've really considered like a model like everybody isn't kind of using like chat gpt as like the the model um but now like they're kind of being fast followed by a bunch
of like models, and now we see open source models like Llama and some of the Anthropic stuff being like as capable for the vast majority of the use cases. And so I think what's also going to become really interesting is like, how does a platform like Tines give our customers a choice of model for an appropriate task? So it's like, hey, you know, you've got some really, really gnarly, complex decision that needs to be made related to security. What model is best suitable for that? Because it's going to be a little bit more expensive to run. And then for this basic stuff like, hey, is this email an iTunes gift card fraud? What model can we provide that's very cost effective and very simple to understand, yeah, this is bad or no, this isn't bad.
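A sketch of the model-per-task routing Eoin is describing. The model names, costs and task labels below are placeholders, not Tines' implementation; the point is just that cheap models handle the mundane, high-volume checks and an expensive one is reserved for the gnarly decisions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_call: float          # illustrative numbers, not real pricing
    ask: Callable[[str], str]

# Stand-in model clients; real ones would call a local or hosted LLM.
CHEAP = Model("small-local-model", 0.0001, lambda prompt: "placeholder answer")
SMART = Model("frontier-model",    0.02,   lambda prompt: "placeholder answer")

# Route each task type to the cheapest model that is good enough for it.
ROUTES = {
    "gift_card_scam_check": CHEAP,       # "is this email an iTunes gift card fraud?"
    "phishing_triage":      CHEAP,
    "complex_incident_decision": SMART,
}

def run_task(task: str, prompt: str) -> str:
    model = ROUTES.get(task, CHEAP)      # default to the cheap model
    return model.ask(prompt)

print(run_task("gift_card_scam_check", "Subject: urgent, buy 10 iTunes cards..."))
```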
Starting point is 00:43:02 this isn't bad. And so when we're thinking about providing this technology to our customers, we're also thinking about, okay, well, how can we give them access to a secure and private model, as well as giving them access to all these cutting-edge models like ChatGPT 4 and 5 eventually? But you raise a really interesting point there, and that was something that came up in conversations I had with various experts on this at RSA, where I was talking to them about just the costs involved in all of the compute that you use to do this stuff. And they were saying, well, we're at the point now where we're using chopped down, much more specific models, exactly like you were saying.
Starting point is 00:43:40 And they're not that expensive anymore. You know, you don't need to throw every single thing at like a cutting edge, you know, whole of planet chat GPT style model. You just don't. Absolutely. And I think that's the really interesting thing to me is that like the adoption of these technologies, it's not black and white.
Starting point is 00:44:00 Like it's not a case of we're using AI in our workflows now or we're not using AI in our workflows. There can be this kind of like spectrum of we're using the really cheap models to do the very basic mundane work like phishing email analysis. And then, you know, we're going to slowly and responsibly increase our usage as we build trust in the system until eventually we reach this utopia, which is like all the repetitive work is being like handled by AI and LLMs. And it's just a varying degree of sophistication between the models for the level of sophistication required for the task. Yeah. Yeah. I mean, I think it would be helpful if you'd explain to people like from a times perspective, the scale of the opportunity
Starting point is 00:44:44 for you, because, you know, you've got such a head start in understanding what needs to be automated, right. And doing it the old way without the AI. So now to sort of paste in the AI goodness, you know, you're not just slapping a now with AI sticker on something that doesn't really need it. Like the scale of the opportunity for you must be just extraordinary, right? I mean, you know, you're talking about going from automating certain tasks to automating everything. Yeah, yeah. I mean, I'm guessing that's how you would be thinking about this, right? Oh, 100%.
Starting point is 00:45:15 And honestly, like sometimes, Patrick, I have to like temper how much I think about this because it can be a little bit overwhelming, like sometimes when you think about the size of the opportunity but you know when we think about software space in general and the um the kind of like the the dawn of AI and I should also like say that I'm I'm not an AI fan by by like nature I was like hugely skeptical about AI for like the longest time and it's only really in the last kind of like 12, 18 months, as we've actually seen these experiments we've run ourselves and the results of them that I've been like, okay, yeah, there's something actually real to this technology. And so when
It's not just CEO hubris. And no, but I mean, you know, someone listening to this might not know who you are, might not know Tines, but I mean, you know, it's true, you know. You can make that claim. Thank you. Yeah, I think you're right. And I think the interesting thing about this becomes, like: Tines, now we've got this super, super powerful workflow engine. We now power something like 40,000 of the world's most important workflows across like all manner and sizes of companies. We do something like 40 million automated actions on behalf of our customers every single day. And we've got six years' worth of data that have resulted from all those workflows, and as a result we've got a little bit of a head start, both in terms of the technology we've built for the workflows but also the data that has come from all those workflows. And we're now in a position where, yes, we can kind of act as the plumbing
Starting point is 00:47:23 between all these individual workflow products, but we can also now kind of begin to behave like, okay, we have all this data around workflows and we can provide recommendations to customers to do things like, here's the type of workflow that we've seen be really successful when they have a technology stack that looks like XYZ. And we can do it in a secure and private way because one of the things that's unique about our technology and how we've applied LLMs and an effort to kind of sidestep some of the security and privacy concerns
Starting point is 00:47:55 is when you use AI within Tynes, you're using an LLM that's running in our environment, in your environment. So we're not using like Microsoft or Google or OpenAI, we're running an AI in your environment. So we're not using like Microsoft or Google or OpenAI. We're running an AI in your environment. So it's secure, it's private, there's no logging, it's in region, there's no tuning or anything like that. So customers can kind of like immediately embrace
Starting point is 00:48:17 this technology without having to worry about some of the downstream security concerns. I mean, you know, already I'm guessing that, you know, you can ask the Tynes AI, you know, I have this tech, this is my technology stack. I have this problem. Can you recommend anything that I can do about that? And it'll tell you, and it'll do it for you. Correct. Absolutely. And, you know, again, I don't, I think that's a really interesting thing that probably we have the best answer for. But there's going to be loads of companies who will be able to do like that to like 60%, right? We're off to the races, though.
Starting point is 00:48:50 We are absolutely off to the races with this stuff. And it's going to just change business quite a lot. I know. It's fascinating. And I think, you know, we'll continue to kind of lead in that build space. But again, I think what's going to become increasingly important is the overall picture. Like, okay, well, you've got your workflow. It's designed correctly and it's built correctly.
Starting point is 00:49:11 Now, how do you run it and monitor it in a fashion that's like representative and aligned to how important those workflows are? And that's where we'll be investing a bunch of our time in the future. So what's ready to ship now uh so what we have in the product today that's that's already released is one kind of kind of think about this in two separate ways so one is ai that makes the product easier to use so this is
kind of what we've been talking about, so like, hey, help me configure this action. Yeah. Or build me a workflow that does XYZ. I mean, this has been an immensely successful use of AI across, you know, all different types of vendors. It's just great having that little AI person sitting there. It's like Clippy, but that doesn't suck, right? I think as well, what we've seen as like hugely impactful in this kind of category of make the product easier to use is data transformation. So if you've ever done any automation, both in terms of like no-code platforms like Tines but even in terms of like scripting, honestly, one of the hardest parts is manipulating the data from format A that came from a tool to format B that needs to go to another tool. Like, that's tricky unless you know what you're doing and understand things like regular expressions and so on. But like being able to apply an LLM for that problem is just magic. It's like... Yeah. And you can do it... like, I know what you mean, you're trying to transform something and you mess it up a little bit and it, you know, drops a comma in the wrong place and it's all wrong, you know, like I've seen it. Yeah.
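The exchange that follows makes an important cost point: the model is consulted once, at build time, to produce an ordinary transformation function, and that static code is what runs on every event afterwards. A sketch of that shape, with the LLM call left as a placeholder:

```python
import json

def ask_llm_for_transform(example_in: dict, example_out: dict) -> str:
    # Placeholder: at build time, send the examples to whatever model is wired
    # in and get back plain Python source defining transform(record).
    raise NotImplementedError("one-off, build-time LLM call goes here")

# --- build time: one model call, output reviewed by a human -----------------
example_in = {"src_ip": "192.0.2.7", "sev": "3"}
example_out = {"source": {"ip": "192.0.2.7"}, "severity": 3}
generated_source = ask_llm_for_transform(example_in, example_out)

namespace: dict = {}
exec(generated_source, namespace)      # load the reviewed, now-static transform
transform = namespace["transform"]

# --- run time: no model involved, however many times this runs --------------
def handle_event(raw: str) -> dict:
    return transform(json.loads(raw))
```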
Starting point is 00:50:55 So you're only running that AI when you're asking it, hey, help me transform this data to that data. And then the AI is giving you back a script and that script is static. So, you know, the workflow may execute a million times, but you've only used one execution against AI for during the build period. To get the script that turns the thing into the way that you need it. Correct. So that is outrageously cost effective. So that's kind of like the category of making the product easier to use. The second category then is, okay, how do we give security teams and people who are performing workflow automation
access in a secure and private way to this outrageously impactful power of like LLMs, without them having to go and like open a third-party risk review with like their procurement team, et cetera. And so what we have done is we have created an action type. So without going too far into the weeds, the way Tines works is we provide this set of basic building blocks for automation.
Starting point is 00:51:51 We had seven for like the longest time. We added an eighth most recently, which is essentially primitive access to an LLM. And so now our customers have raw access to this secure and private LLM that runs inside their tenant that they can use in any workflow at any point and get it to do anything that they want. And so you have the flex- So you've got like, let me guess, like a prompt and then you paste in something, you know, you get to build the prompt and then shove the data in and then it gives you, you know,
Starting point is 00:52:20 you can ask it to give you a yes or a no or a true or a false. So I'm guessing that's how it works, right? Correct. Hey, recommend some next steps based on this security alert. Or that, yep. Like rank this security alert from zero to 100. Let me know if this code has any security vulnerabilities and recommend fixes. Summarize this document. So all these prompts are available.
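To make the "raw LLM building block" concrete, here is roughly what a prompt-in, structured-verdict-out step looks like in spirit. This is a generic sketch, not Tines' actual action or API; the llm() call and the prompt format are assumptions.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("tenant-local model call goes here")

def triage_alert(alert: dict) -> dict:
    prompt = (
        "You are a SOC analyst. For the alert below, reply with a severity "
        "from 0 to 100 on the first line and one sentence of recommended "
        f"next steps on the second line.\n\nAlert: {alert}"
    )
    first_line, _, rest = llm(prompt).partition("\n")
    try:
        severity = max(0, min(100, int(first_line.strip())))
    except ValueError:
        severity = -1        # model ignored the format; hand this one to a human
    return {"severity": severity,
            "next_steps": rest.strip(),
            "needs_human": severity < 0}
```

The prompts themselves, as the next answer stresses, are the customer's to write; the vendor's contribution is the plumbing plus some suggested starting points.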
And we're not telling our customers, hey, here's how you should use LLM for your program. You know better than us. But what we do provide, as always, is like a big bunch of kind of best practices. So like, hey, look, here's a great prompt if you want to act like a security analyst, like performing incident response. Or here's a really good prompt that's going to like act as a vulnerability manager if you want to like analyze a Qualys report or something like that. And so, I mean, if we can get the machines to do that for us, you know, it's a service to humanity. You do realize this is what will inspire the machines to rise against us, Eoin, is making them read Qualys reports until they've had enough of humanity. We're out of time, we're out of time. I could talk to you about this all day. Fascinating discussion, Eoin Hinchy. Great to have you on the show as always. And I can't wait to see you dropping some of these, you know, some of these advanced AI features in the product in the future. It's going to be great. Beautiful. Thanks, man.

That was Eoin Hinchy there with a bit of a mind-bending interview. I do hope you enjoyed it, and you can find them at tines.io. And that is it for this week's show. I'll be back soon with more risky
Starting point is 00:53:50 biz for you all. But until then, I've been Patrick Gray. Thanks for listening.
