CyberWire Daily - Cyberattack in the fast lane.

Episode Date: January 7, 2026

Jaguar Land Rover reveals the fiscal results of last year's cyberattack. A Texas gas station chain suffers a data spill. Taiwan tracks China's energy-sector attacks. Google and Veeam push patches. Threat actors target obsolete D-Link routers. Sedgwick Government Solutions confirms a data breach. The U.S. Cyber Trust Mark faces an uncertain future. Google looks to hire humans to improve AI search responses. Our guest is Deepen Desai, Chief Security Officer of Zscaler, discussing what's powering enterprise AI in 2026. AI brings creative cartography to the weather forecast.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest

On today's Industry Voices, we are joined by Deepen Desai, Chief Security Officer of Zscaler, discussing what's powering enterprise AI in 2026. To learn more on this topic, be sure to check out Zscaler's report here. Listen to the full conversation here.

Selected Reading

Jaguar Land Rover wholesale volumes plummet 43% in cyberattack aftermath (The Register)
Major Data Breach Hits Company Operating 150 Gas Stations in the US (Hackread)
Taiwan says China's attacks on its energy sector increased tenfold (Bleeping Computer)
Google Patches High-Severity Chrome WebView Flaw CVE-2026-0628 in the Tag Component (Tech Nadu)
Several Code Execution Flaws Patched in Veeam Backup & Replication (SecurityWeek)
New D-Link flaw in legacy DSL routers actively exploited in attacks (Bleeping Computer)
Sedgwick confirms breach at government contractor subsidiary (Bleeping Computer)
FCC Loses Lead Support for Biden-Era IoT Security Labeling (GovInfoSecurity)
Google Search AI hallucinations push Google to hire "AI Answers Quality" engineers (Bleeping Computer)
'Whata Bod': An AI-generated NWS map invented fake towns in Idaho (The Washington Post)

Share your feedback.
What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry’s most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the CyberWire Network, powered by N2K. Ever wished you could rebuild your network from scratch to make it more secure, scalable, and simple? Meet Meter, the company reimagining enterprise networking from the ground up. Meter builds full-stack, zero-trust networks, including hardware, firmware, and software, all designed to work seamlessly together. The result: fast, reliable, and secure connectivity without the constant patching, vendor juggling, or hidden costs. From wired and wireless to routing, switching, firewalls, DNS security, and VPN,
Starting point is 00:00:46 every layer is integrated and continuously protected in one unified platform. And since it's delivered as one predictable monthly service, you skip the heavy capital costs and endless upgrade cycles. Meter even buys back your old infrastructure to make switching effortless. Transform complexity into simplicity and give your team time to focus on what really matters, helping your business and customers thrive. Learn more and book your demo at meter.com slash cyberwire. That's M-E-T-E-R dot com slash cyberwire.
Starting point is 00:01:29 Jaguar Land Rover reveals the fiscal results of last year's cyberattack. A Texas gas station chain suffers a data spill. Taiwan tracks China's energy sector attacks. Google and Veeam push patches. Threat actors target obsolete D-Link routers. Sedgwick Government Solutions confirms a data breach. The U.S. Cyber Trust Mark faces an uncertain future. Google looks to hire humans to improve AI search responses.
Starting point is 00:02:05 Our guest is Deepen Desai, Chief Security Officer at Zscaler, discussing what's powering enterprise AI in 2026. And AI brings creative cartography to the weather forecast. It's Wednesday, January 7th, 2026. I'm Dave Bittner, and this is your CyberWire Intel briefing. Thanks for joining us here today. It's great to have you with us. Jaguar Land Rover has reported sharply weaker
Starting point is 00:02:57 preliminary results for its fiscal third quarter ending December 31st, underscoring the far-reaching impact of a major cyberattack. Wholesale volumes fell 43.3% year-on-year to 59,200 vehicles, while retail sales declined 25.1% to 79,600. The company said a September cyber incident forced weeks-long production stoppages and delayed global distribution, with manufacturing only returning to normal levels by mid-November. The disruption compounded other pressures, including the planned wind-down of legacy Jaguar models and newly imposed U.S. tariffs.
Starting point is 00:03:43 The impact was global, with wholesale volumes down more than 60% in North America and steep declines across Europe and China. Even the UK saw a modest drop. The attack, claimed by Scattered Lapsus$ Hunters, prompted 1.5 billion pounds in UK government support and contributed to slower UK economic growth, according to the Bank of England. Tata Motors estimates the incident cost around 1.8 billion pounds, while the Cyber Monitoring Centre warned of wider economic damage. Goulshan Management Services, a Texas-based operator of roughly 150 gas stations and convenience stores,
Starting point is 00:04:28 has disclosed a major data breach affecting more than 377,000 people. The incident was revealed through filings with the Maine Attorney General and the Texas Attorney General. Attackers accessed an external system between September 17th and September 27th of last year, with the breach discovered on the final day. Initial disclosures cited exposure of names and personal identifiers, while later filings indicated the compromised data may also include Social Security numbers, driver's license or government ID details, and financial information. Affected individuals were not notified until January 5th of this year,
Starting point is 00:05:10 more than three months after the breach period. The company now faces class action lawsuits, and investigations alleging inadequate security controls and delayed notification, highlighting ongoing risks in highly interconnected retail fuel operations. Taiwan's National Security Bureau reports that cyber attacks linked to China against Taiwan's energy sector surged tenfold in 2025 compared to the previous year. Overall, incidents attributed to China rose 6 percent, targeting nine critical sectors. Energy infrastructure saw the most dramatic increase with attacks up 1,000% while emergency
Starting point is 00:05:54 services and hospitals rose 54% and communications increased 6.7%. Other sectors, including finance and water resources, declined. The NSB says many attacks coincided with military activity and sensitive political events. The most common techniques exploited hardware and software vulnerability alongside distributed denial of service attacks, social engineering, and supply chain compromises. Energy sector attacks focused on industrial control systems and malware insertion during software upgrades. The activity was attributed to Chinese-linked groups, including Black Tech, APT-41, and others. Google has released an urgent security update for its Chrome browser to fix a high-severity flaw, The issue affects Chrome's WebView component, which lets apps display web content inside native interfaces.
Starting point is 00:06:54 Insufficient policy enforcement could allow attackers to bypass security controls. Google has pushed patched versions to all desktop platforms and Android through the stable channel. Users are urged to update promptly, as Google is limiting technical details until most systems are patched. elsewhere veem has released an update for its backup and replication software to fix multiple vulnerabilities that could enable remote code execution the issues require highly privileged roles such as backup or tape operator which led veem to rate them high severity rather than critical the company says the bugs were found internally and have not been exploited still organizations are urged to patch promptly as veem products are frequent targets in ransomware attacks, and past vulnerabilities have appeared in SISA's known exploited vulnerabilities catalog.
Starting point is 00:07:53 Threat actors are actively exploiting a newly disclosed command injection flaw affecting several end-of-life D-Link DSL routers. The vulnerability stems from improper input sanitization, allowing unauthenticated attackers to execute remote commands via DNS configuration parameters. The issue was reported by Volncheck after exploitation attempts were observed by the Shadow Server Foundation. D-Link confirmed that multiple DSL router models are affected, all of which have been unsupported since 2020 and will not receive patches. While exploitation details remain unclear, D-Link and Volncheck warn that identifying all impacted devices is complex due to firmware variations.
Starting point is 00:08:41 Users are strongly advised to retire and replace affected routers as end-of-life devices no longer receive security updates and pose ongoing risk. Sedgwick has confirmed a security breach affecting its federal contracting subsidiary Sedgwick Government Solutions, which provides services to more than 20 government agencies. The parent company, Sedgwick, says the incident was limited to an isolated file transfer system and did not impact its broader corporate network or claims management servers. Sedgwick has notified law enforcement and engaged external cybersecurity experts to investigate. Clients of the subsidiary include major U.S. agencies such as SISA and the Department of Homeland
Starting point is 00:09:30 Security. While Sedgwick did not publicly attribute the attack, the Trident Locker Ransomware Group has claimed responsibility, alleging the theft of 3.39. gigabytes of data and publishing samples online. The investigation is ongoing and Sedgwick says services remain operational. The U.S. CyberTrustmark is a voluntary consumer labeling program designed to help Americans identify smart devices that meet baseline cybersecurity standards. Launched by the Federal Communications Commission during the Biden administration, the initiative was created to address long-standing concerns that consumer Internet of Things products often ship with weak security and limited accountability after vulnerabilities emerge.
Starting point is 00:10:20 That program now faces uncertainty after UL's solutions formally withdrew as its lead administrator. UL notified the FCC in late December that it was stepping down effective immediately, saying it had completed foundational work such as convene, stakeholders and helping develop technical and governance recommendations. The departure leaves no clear entity overseeing day-to-day operations of the program. While UL described the move as a natural transition, the timing follows an internal national security review ordered last summer by FCC Chairman Brendan Carr, which examined potential foreign influence in program management.
Starting point is 00:11:05 It remains unclear whether the FCC plans. to appoint a replacement administrator. Google is signaling a renewed push to improve the reliability of its AI-generated search responses as it expands AI overviews across Google Search. A new job listing shows the company is hiring engineers for an AI answers quality role, focusing on verifying and improving the accuracy of AI overviews and AI mode responses. In the listing, Google acknowledges the need to solve complex challenges while reimagining how users search for information.
Starting point is 00:11:47 The move is notable as Google continues to push AI-generated answers more aggressively, including into its Discover feed, sometimes rewriting news headlines. Despite recent improvements, AI overviews still produce contradictory or fabricated answers, even when citing sources that do not support the claims. Media scrutiny has intensified with The Guardian reporting misleading health advice generated by AI overviews. The hiring effort appears to be Google's first indirect admission that answer quality remains a serious issue. Coming up after the break, my conversation with Deepened Desai from Z-Scaler, We're discussing what's empowering enterprise AI in 2026.
Starting point is 00:12:42 And AI brings creative cartography to the weather forecast. Stick around. What's your 2 a.m. security worry? Is it, do I have the right controls in place? Maybe are my vendors secure, or the one that really keeps you up at night, how do I get out from under these old tools and manual processes? That's where Vanta comes in. Vanta automates the manual work, so you can stop sweating over spreadsheets,
Starting point is 00:13:23 chasing audit evidence, and filling out endless questionnaires. Their trust management platform continuously monitors your systems, centralizes your data, and simplifies your security at scale. And it fits right into your workflows, using AI, to streamline evidence collection, flag risks, and keep your program audit ready, all the time. With Vanta, you get everything you need to move faster, scale confidently, and finally, get back to sleep. Get started at Vanta.com slash cyber. That's V-A-N-T-A-com slash cyber. Most environments trust far more than they should.
Starting point is 00:14:07 and attackers know it. Threat Locker solves that by enforcing default deny at the point of execution. With Threat Locker Allow listing, you stop unknown executables cold. With ring fencing, you control how trusted applications behave. And with Threat Locker, DAC, defense against configurations, you get real assurance that your environment is free of misconfigurations and clear visibility into whether you meet compliance standards. Threat Locker is the simplest way to enforce zero-trust principles without the operational pain.
Starting point is 00:14:40 It's powerful protection that gives CISO's real visibility, real control, and real peace of mind. Threat Locker makes zero-trust attainable, even for small security teams. See why thousands of organizations choose Threat Locker to minimize alert fatigue, stop ransomware at the source, and regain control over their environments. Schedule your demo at Threatlocker.com slash N2F2. U.K. Today. Deep and Desai is chief security officer at Z-scaler, and in today's sponsored industry voices conversation, we discuss what's powering enterprise AI in 2026.
Starting point is 00:15:29 Well, Deepen welcome back, and you and your colleagues there, Z-scalers, Threat Labs, have put out some interesting research when it comes to enterprises and AI. I would love to start off with some high-level stuff here. I mean, can we start with the obvious that it's safe to say that AI is here to stay when it comes to these enterprise security operations? Yes. Thank you. Thank you, Dave, for having me on the call. So, yes, the recent report that the team has been working on is our annual AI security report where we look at the
Starting point is 00:16:08 enterprise AI traffic that goes through Z-Scalers, cloud infrastructure, how we secure them, type of trends that we're seeing. We also slice and dice it by threats, the usage stats, and there are a lot of other interesting insights,
Starting point is 00:16:25 but the main goal of that is to showcase that AI usage in enterprises is no longer a speculation. It's a reality. It's grown significantly. And then the cyber risks that comes with it are also real because we're starting to see more and more reports of things that are happening in the wild. One of the things that caught my eye in this research was that this idea that because there are a limited number of vendors for these sorts of things, that that's creating a concentration risk where so many businesses relying on just a handful of vendors. Can you unpack that for us?
Starting point is 00:17:07 Yeah. So if you think about it, the base LLM models, there are a few major vendors, whether it's Open AI, Google, Anthropic, XAI, and then meta. And then there are, of course, more area-specific ones as well that have tweaked and tuned this base model to support their use cases. The vendor concentration risk is a real problem because if you're relying only on one vendor for all your needs, you will have issues if that vendor has a bad day, both from security risk perspective as well as from business continuity perspective. The other thing that I've seen more and more over the past six months is
Starting point is 00:18:00 where many of the organizations won't even know that they're leveraging one of these vendors because it's the supply chain that these organizations rely on. They are using AI as part of one of the application. The example I can give you is maybe, and this is just an example, not trying to quote any app name, but say you're using SAP, and SAP has deployed Open AI to do some analytics on the usage, stats, and data, and of course, with your permission. So that's where there is an embedded AI usage happening across your supply chain as well.
Starting point is 00:18:40 So the point I'm trying to make is not only will your first party application have a bad day, but also those embedded AI apps will also suffer if one of these vendors were to have a bad day. Can we go through a typical organization and talk about how the different departments are making use of this technology? I guess can we start with the engineering teams? Absolutely. So look, I mean, we've been hearing from last couple years how every organization, it's a mandate from board. It's a mandate from CEOs.
Starting point is 00:19:16 You need to start leveraging AI, explore where we can bring in efficiencies. where we can make things more optimized as well. And what are some of the use cases where you are able to actually use AI to solve the problems that most enterprises face today? So while we're looking at the last year's data, we also sliced and diced it by the application categories and which departments in the modern enterprises are likely using those applications. Now, this is purely from the lens of the web traffic that we're seeing.
Starting point is 00:19:58 So what we saw, and this is the data set was in billions of records. So it's not a small dataset. But what we saw was engineering was about 47% of the overall traffic that was hitting the AI applications. 47% out of all the top departments. Number two was IT, followed by marketing and customer support. And then there were long tail of other areas which are also exploring different AI applications. Now, engineering makes a lot of sense because one of the area where we're seeing the number of transactions will be high is because you're using things for coding.
Starting point is 00:20:46 So various AI coding applications, co-pilots that are assisting developers in writing code, writing those test cases, vetting, automating, so which is where you will see high volume of transactions originating from engineering department. For years and years, we've talked about the notion of shadow IT, and now we talk about shadow AI, you know, folks using perhaps unapproved systems.
Starting point is 00:21:16 What's the reality that you all are seeing when it comes to that? It is actually one of the top priorities for all global CSOs to have a program in place, have tooling in place that allows them to flag shadow AI application. Shadow AI, just like you have shadow IT apps, these are not sanctioned AI applications. applications that are being leveraged within the environment and potentially interacting with your sensitive data set as well. So it is absolutely prevalent. And whenever there are issues that are seen in the wild
Starting point is 00:21:57 around this AI application or the environment suddenly becomes a fire drill incident for a lot of the CXOs because they may not even be aware of an AI app or Shadow AI implementation in the environment that is vulnerable to the issue that got reported. A recent example that comes to my mind, this is probably a few months back, but there are a lot of these NPM open source packages getting targeted by threat actors. And you had a lot of these coding agents which are updating, downloading the libraries, and these are latest libraries that were impacted.
Starting point is 00:22:40 So many of the assets where these agents or these apps were running had this NPM libraries installed were actually compromised and impacted. So again, there will be many such cases. The biggest investment that a lot of the CXOs are doing right now is to have a proper observability tooling and then the policy enforcement tooling. This is where Z-Skiller also helps out. That's where my visibility comes from. We do provide a solution that kind of lights up your screen on,
Starting point is 00:23:16 okay, here is all the AI usage in your environment. And then the customers go and say, okay, these are sanctioned apps versus these are not sanctioned, and then they will then take policy enforcement decisions on those applications. Can we talk about the velocity issue? I hear folks saying that the speed at which end, AI functions that we really need to be addressing these issues with AI. It's the only thing fast enough to be able to parry that velocity.
Starting point is 00:23:49 To what degree do you think that's the reality here? It's absolutely true. I mean, look, if I were to see it from the lens of the threats, right, where an AI agent is your adversary, just like the example that we saw when Anthropic reported that cyber espionage campaign. If I were to do a comparison between, say, a human adversary and an AI agent adversary,
Starting point is 00:24:15 I mean, three or four things that comes to my mind is like when a human adversary is targeting your environment, you will see, you know, a periodic, bursty activity. When they're awake, when they're active, they're doing hands-on keyboard activity in your environment versus when an AI agent is involved in the attack
Starting point is 00:24:36 and acting in an almost autonomous fashion, it will be constant, tireless, it will be adaptive, and honestly breaking the timing-related rules that we'll see when human adversaries are attacking. The second point is where, you know, there is limited parallel workloads that you will see if it's a purely human-driven attack. But in case of AI agent, it's just massive.
Starting point is 00:25:04 it could spin up number of instances at scale without any problem as long as the compute is available. Even the time to think and react for human adversaries, it will be minutes to hours, but with the AI agent, you're talking about millisecond feedback loops. This is where the AI agent will have the ability to try all options simultaneously. And this is, again, a threat when you're trying to defend against this in any kind of reaction. active fashion. And the last piece I'll mention over here is human adversaries. I mean, we all humans are expert at something, but not everything.
Starting point is 00:25:47 It will be hard to find an adversary that knows it all when it comes to tech stack. So when they're in the environment, they discover certain apps, certain technology being used, they will spend some time. Either they're expert at it and they will go about doing what they're trying to do or they will spend some time to pivot. In case of AI, there is no cognitive limits and the ability to creatively engage in various permutation, path traversal, op-sac, all of that is well beyond human capacity.
Starting point is 00:26:21 Well, given these realities, given the information that you and your colleagues have gathered for this report, what are some practical steps that people can take to best protect themselves? Yeah, the number one thing, and we've been working with many large organizations around there. So number one thing is you need to know, so I'm going to call it out from both securing your AI infrastructure and then also securing against AI driven attack. So when it comes to securing your AI infrastructure, you really need to have that observability, policy enforcement, just like a zero trust exchange for all your AI needs.
Starting point is 00:27:03 You need a good handle on having a governance in place, and that starts with the observability piece, and then you're able to enforce policies. So that's number one, very, very important. You need to have a program that prioritizes this if you don't have one already. It should be number one priority. Number two is you also need to be ready to, you know, defend your organization, your infrastructure against AI-driven attacks because it's no longer theoretical.
Starting point is 00:27:33 We're already seeing practical instances. It's no longer limited to just a phishing template or phishing page being spun up using AI. You saw the full attack chain, you know, being orchestrated using AI. So this is where prioritizing a zero-trust strategy everywhere is critical because you essentially cannot fight. AI-driven attacks in a reactive manner because of how fast and how scaled these attacks will be, the best thing you can do is to shut down the vectors of attacks, whether it's a human adversary or an AI adversary can take when they're targeting your environment and that this can
Starting point is 00:28:16 be achieved using a true zero-trust architecture. So having that zero-trust strategy everywhere will set you up in the best possible posture to defend against. these highly scaled, highly trative, and very high-speed attacks. That's Deepen Desai, Chief Security Officer at Z-Scaler. And finally, at first. First, the wind forecast for Camas Prairie, Idaho looked routine.
Starting point is 00:29:02 Hold on to your hats, the graphics suggested, especially if you lived in places like orange-tilled or wadabod. Minor complication, these towns don't exist. The National Weather Service later confirmed the maps misspelled and imaginary locations were the result of an experiment with generative AI. The agency said the image was quickly correct. and the post removed, stressing that AI is not commonly used for public-facing forecasts, though it's not prohibited either.
Starting point is 00:29:36 The episode comes as the Weather Service, part of the National Oceanic and Atmospheric Administration, explores AI for everything from forecasting to graphics, while also dealing with staffing losses that have stretched resources thin. Experts warned that even small errors can chip away at public trust, especially when they come from an authoritative source. As one observer noted, AI can help fill gaps, but inventing towns is probably not the kind of innovation anyone had in mind. And that's the CyberWire.
Starting point is 00:30:28 For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review
Starting point is 00:30:45 in your favorite podcast app. Please also fill out the survey in the show notes or send an email to Cyberwire at N2K.com. N2K's senior producer is Alice Carruth. Our Cyberwire producer is Liz Stokes. We're mixed by Trey Hester with original music by Elliot Peltzman. Our executive producer is Jennifer Ibin. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.
