CyberWire Daily - The leak was only a matter of time.
Episode Date: April 22, 2026

Mythos leaks. The DOD preps a more aggressive cyber strategy. A former FBI cyber official urges homicide charges for hospital ransomware deaths. Lotus Wiper targeted the Venezuelan energy and utilities sector. Over 1,300 SharePoint servers remain unpatched against a spoofing vulnerability. The Harvester APT group deploys a new Linux version of its GoGra backdoor. A new LOTUSLITE backdoor targets India's banking sector. The Mirai botnet exploits discontinued routers. Our guest is Brian Vecci, Field CTO at Varonis, discussing how organizations can safely adopt AI and autonomous agents. A satirical startup sells clean-room clones. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest On today's Industry Voices, Brian Vecci, Field CTO at Varonis, discusses how organizations can safely adopt AI and autonomous agents by securing data, managing risk, and focusing on measurable outcomes. If you enjoyed this conversation, tune into the full interview here.
Selected Reading Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users (Bloomberg) Claude Mythos Finds 271 Firefox Vulnerabilities (SecurityWeek) New Defense Department cyber strategy imminent, official says (The Record) Pentagon Cyber Leaders Back $1.5T Budget Request (GovInfo Security) Ex-FBI lead urges homicide charges against ransomware scum (The Register) New Wiper Malware Targeted Venezuelan Energy Sector Prior to US Intervention (SecurityWeek) Over 1,300 Microsoft SharePoint servers vulnerable to spoofing attacks (Bleeping Computer) Harvester: APT Group Expands Toolset With New GoGra Linux Backdoor (SecurityWeek) Same packet, different magic: Mustang Panda hits India's banking sector and Korea geopolitics (Acronis) Mirai Botnet Targets Flaw in Discontinued D-Link Routers (SecurityWeek) This AI Tool Rips Off Open Source Software Without Violating Copyright (404 Media) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry’s most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
No, it's not your imagination.
Risk and regulation really are ramping up,
and these days customers expect proof of security before they'll even do business.
That's where Vanta comes in.
Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform.
So whether you're getting ready for a SOC 2 or managing an entire
enterprise governance, risk, and compliance program, Vanta helps keep you secure and keeps your deals
moving. Companies like Ramp and Writer spend 82% less time on audits with Vanta. That means less
time chasing paperwork and more time focused on growth. For me, it comes down to this. Over 10,000
companies from startups to large enterprises trust Vanta to help prove their security. Get started at vanta.com
slash cyber.
Mythos leaks.
The DOD preps a more aggressive cyber strategy.
A former FBI cyber official urges homicide charges for hospital ransomware deaths.
Lotus Wiper targeted the Venezuelan energy and utilities sector.
Over 1,300 SharePoint servers remain unpatched against a spoofing vulnerability.
The Harvester APT Group deploys a new Linux version of its Gogra backdoor.
A new LotusLite backdoor targets
India's banking sector. The Mirai botnet exploits discontinued routers. Our guest is Brian
Vecci, Field CTO at Varonis, discussing how organizations can safely adopt AI and autonomous agents.
And a satirical startup sells clean room clones. It's Wednesday, April 22nd,
2026. I'm Dave Bittner, and this is your Cyberwire Intel Briefing. Thanks for joining us. It is great as
always to have you with us.
In news that should shock almost no one,
a small group of unauthorized users gained access to Anthropic's unreleased Mythos AI model,
despite the company's efforts to restrict it to vetted partners because of its potential cybersecurity risks.
According to a person familiar with the situation and materials reviewed by Bloomberg News,
the users accessed the model through a third-party contractor environment and online investigative techniques,
including scanning unsecured resources like GitHub.
Anthropic says Mythos can identify and exploit vulnerabilities
across major operating systems and web browsers,
which is why it's being distributed only through its limited Project
Glasswing testing program.
The company stated it's investigating the reported access
and has no evidence that its core systems were affected.
While the group reportedly used Mythos for benign experiments
rather than cyberattacks,
the incident highlights how difficult it can be
to contain powerful AI tools
and raises concerns about whether other unauthorized parties
may also have access.
Mozilla says Anthropic's Claude Mythos preview
identified 271 vulnerabilities in Firefox,
though only three received CVE designations
in Firefox 150,
suggesting most were lower severity issues.
Mozilla noted the bugs were within the reach of elite human researchers, not entirely novel flaw classes.
Palo Alto Networks reported the model performed roughly a year's pen testing work in under three weeks,
highlighting growing enterprise risk from advanced AI-driven security tooling.
The Defense Department is preparing a new cyber strategy aimed at aligning military cyber operations
with the Trump administration's more aggressive approach to digital adversaries.
Officials say the plan will integrate cyber capabilities across all warfighting domains,
strengthen operations below the threshold of armed conflict,
and advance the Cyber Command 2.0 effort to modernize cyber forces.
The strategy builds on the White House blueprint calling for expanded offensive and defensive cyber actions
to impose costs on adversaries and improve coordination with the industry.
Senior Defense Department officials told lawmakers,
a roughly $1.5 trillion budget request
prioritizes expanded cyber forces and digital warfare capabilities
that counter increasingly disruptive nation-state threats.
The proposal includes $20.5 billion for cyberspace operations,
supports the Cyber Command 2.0 restructuring effort,
and funds zero-trust architecture and infrastructure protection.
Officials said cyber is now central to military modernization and deterrence,
alongside $58.5 billion for AI and command and control initiatives,
while workforce shortages and organizational coordination remain ongoing challenges.
A former FBI Cyber Division official urged the Justice Department
to consider felony homicide charges when ransomware attacks on hospitals contribute to patient deaths,
arguing penalties should match the severity of harm.
Cynthia Kaiser also called for possible terrorism designations for groups that repeatedly target health care providers
and urged Congress to restore funding for state and local cybersecurity programs facing cuts.
Lawmakers and experts warned that reduced support for the Cybersecurity and Infrastructure
Security Agency could weaken ransomware defenses, citing workforce losses and the suspension
of its pre-ransomware notification program, which previously warned thousands of organizations
of imminent attacks and helped prevent billions in damages. Witnesses said continued funding,
information-sharing authorities, and defensive investments remain critical, despite some progress
against ransomware threats in recent years.
Researchers at Kaspersky warn that a previously undocumented wiper malware called Lotus Wiper
has targeted the energy and utility sector in Venezuela
in a destructive campaign likely intended to permanently disable systems.
The attack used two batch scripts to weaken defenses,
coordinate execution across networks,
and retrieve the final payload, which deletes restore points,
overwrites physical drives,
and systematically erases files. The absence of ransom demands suggests a targeted, non-financial motive.
Kaspersky reported no attribution but noted the activity coincided with regional geopolitical tensions
in late 2025 and early this year. The execution chain relied on legacy Windows features
and network-based triggers, indicating prior access and familiarity with the victim environment
before deployment.
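As a defender-side illustration of the behavior described here, a minimal sketch of flagging command lines associated with restore-point deletion and drive wiping. The patterns below are generic examples of destructive Windows commands, not published Lotus Wiper indicators.

```python
# Illustrative heuristic: match command lines against generic destructive
# patterns (deleting restore points, disabling recovery, wiping drives).
# These are example signatures only, not actual Lotus Wiper IOCs.
import re

DESTRUCTIVE_PATTERNS = [
    r"vssadmin\s+delete\s+shadows",       # delete Volume Shadow Copies (restore points)
    r"wmic\s+shadowcopy\s+delete",        # same goal via WMI
    r"bcdedit\s+.*recoveryenabled\s+no",  # disable Windows recovery
    r"cipher\s+/w",                       # overwrite free disk space
]

def flag_destructive(cmdline: str) -> list[str]:
    """Return the patterns a command line matches, if any."""
    low = cmdline.lower()
    return [p for p in DESTRUCTIVE_PATTERNS if re.search(p, low)]
```

In practice a rule like this would run over endpoint telemetry, where a batch script chaining several of these commands is a much stronger signal than any one of them alone.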
More than 1,300 internet-exposed Microsoft SharePoint servers remain unpatched against a spoofing
vulnerability, previously exploited as a zero-day and still used in active attacks.
The flaw affects SharePoint Enterprise Server 2016, SharePoint Server 2019, and SharePoint
Server subscription edition, and allows unauthenticated attackers to conduct network spoofing
through improper input validation without user interaction. Successful exploitation could expose
sensitive information and enable data modification, though not disrupt availability. Microsoft
released patches this month, but Shadowserver reported limited remediation progress. CISA added
the vulnerability to its Known Exploited Vulnerabilities catalog and ordered federal
civilian agencies to apply fixes within two weeks, warning the issue poses a significant
risk to government networks and is a common attack vector.
Researchers report that the Harvester Advanced Persistent Threat Group has deployed a new Linux version
of its GoGra backdoor that uses Microsoft Graph API and Outlook mailboxes as covert command
and control infrastructure to evade detection. Symantec linked the malware to earlier Windows campaigns
based on shared code and identical errors,
indicating expanding cross-platform tooling.
The backdoor uses social engineering
with disguised document files for delivery,
persistence via systemd auto-start entries,
and encrypted email-based tasking and data exfiltration.
Initial samples were submitted from India and Afghanistan,
consistent with Harvester's historical focus on South Asia.
Analysts observed no confirmed
victims but assessed the campaign as targeted espionage activity, leveraging legitimate cloud
services to bypass perimeter defenses and maintain stealth.
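The mailbox-as-C2 technique described here can be sketched in outline. Everything below (the subject marker, the XOR key, the message format) is invented for illustration; it is not GoGra's actual protocol, only the general shape of Graph API tasking that lets such traffic blend into legitimate cloud activity.

```python
# Sketch of mailbox-based tasking: an implant polls an Outlook mailbox via
# the Microsoft Graph API and decodes commands hidden in message bodies.
# Marker, key, and encoding are hypothetical examples, not GoGra's format.
import base64
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"
MARKER = "Weekly Report"  # hypothetical subject line flagging tasking mail

def poll_url(marker: str = MARKER) -> str:
    """Graph request an implant would issue to fetch pending tasking."""
    flt = f"subject eq '{marker}' and isRead eq false"
    return f"{GRAPH}/me/mailFolders/inbox/messages?$filter={quote(flt)}"

def encode_task(cmd: str, key: bytes) -> str:
    """Operator side: obfuscate a command for delivery via email body."""
    raw = bytes(b ^ key[i % len(key)] for i, b in enumerate(cmd.encode()))
    return base64.b64encode(raw).decode()

def decode_task(body_b64: str, key: bytes) -> str:
    """Implant side: recover the command from a base64 + XOR body."""
    raw = base64.b64decode(body_b64)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw)).decode()
```

The defensive takeaway is that every request goes to graph.microsoft.com over TLS, so perimeter tools see only ordinary Microsoft 365 traffic; detection has to come from mailbox auditing and endpoint behavior instead.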
Researchers at Acronis identified a new LotusLite backdoor variant targeting India's banking
sector, delivered through DLL side-loading using a legitimate Microsoft signed executable.
The malware communicates with a dynamic DNS command and control server over HTTPS and supports remote shell access, file operations, and session control, indicating espionage activity rather than financial crime.
Code similarities confirm continuity with earlier Lotus Light builds.
Analysts assess with moderate confidence links to Mustang Panda, noting a shift from earlier delivery methods and a geographic pivot
from U.S. government targets to India's financial sector.
The Mirai botnet is actively exploiting a command injection flaw
in discontinued D-Link routers, according to Akamai.
The vulnerability allows attackers to execute malicious commands
through crafted POST requests,
enabling payload delivery via shell scripts
with typical Mirai features such as XOR encoding and hard-coded infrastructure.
The affected devices no longer receive updates,
and D-Link has advised retiring them.
Researchers also observed targeting of TP-Link and ZTE routers,
highlighting continued widespread reuse of Mirai source code
in opportunistic botnet campaigns.
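The XOR encoding mentioned above follows the pattern in the leaked Mirai source, where configuration strings are toggled against a 32-bit key (0xDEADBEEF in the public code); because the four key bytes are folded together with XOR, the effective key is a single byte. A minimal sketch:

```python
# Mirai-style string obfuscation: the leaked source XORs config strings
# against a 32-bit seed, but folds the four seed bytes into one byte,
# so encode and decode are the same single-byte XOR pass.
def mirai_key_byte(seed: int = 0xDEADBEEF) -> int:
    """Fold a 32-bit seed into the single XOR byte actually applied."""
    b = seed.to_bytes(4, "big")
    return b[0] ^ b[1] ^ b[2] ^ b[3]

def xor_toggle(data: bytes, seed: int = 0xDEADBEEF) -> bytes:
    """Encode or decode a buffer; XOR is its own inverse."""
    k = mirai_key_byte(seed)
    return bytes(c ^ k for c in data)
```

This weak obfuscation is exactly why analysts can recover hard-coded C2 addresses from new variants so quickly, and why derivative botnets remain easy to attribute to the Mirai codebase.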
Coming up after the break, my conversation with Brian Vecci from Varonis.
We're discussing how organizations can safely adopt AI and autonomous agents,
and a satirical startup sells clean room clones.
Stick around.
Quick question.
Have you watched Project Hail Mary yet?
Humanity is facing an existential threat and racing to solve it with the clock ticking.
For security teams, that probably hits close to home with AI use rapidly spreading.
Everyone's using AI: marketing, sales, engineering,
Chris the intern, without security even knowing about it.
That's where Nudge security comes in.
Nudge finds shadow AI apps, integrations, and
agents on day one and helps you enforce policy without blocking productivity.
Try it free at nudgesecurity.com slash cyberwire.
Maybe that's an urgent message from your CEO, or maybe it's a deepfake trying to target your
business.
Dopple is the AI-Native social engineering defense platform fighting back against impersonation
and manipulation.
As attackers use AI to make their tactics more sophisticated,
Dopple uses it to fight back, from automatically dismantling cross-channel attacks to building team
resilience and more. Doppel, outpacing what's next in social engineering. Learn more at doppel.com.
That's D-O-P-P-E-L.com. Brian Vecci is Field CTO at Varonis. I caught up with him at RSAC
2026 for this sponsored industry voices discussion about how organizations can safely adopt
AI and autonomous agents.
Before we dig into our topics here, how's the week been for you so far?
An absolute blur.
I don't know how many hundreds.
I think it's in the hundreds of customer meetings we have this week.
Wow.
This is my 14th or 15th or 16th,
I don't know, something like that, RSAC.
They're always a blur.
This one seems to be even more of a blur than the ones previously.
That's both exciting and draining.
I can concur.
Yeah.
Well, AI is the hot topic.
What's top of mind for you when it comes to AI's intersection with cybersecurity?
What's really interesting is we're kind of in a new phase of AI.
Like we started talking about AI and data security and how data security and AI security are so closely intertwined, like three or four years ago.
And what's interesting is the pace of change has accelerated.
Now things change not in
six decades or six years, but in six days. Things happen extremely quickly. The number of
conversations that focus, you know, I talk to security people, you know, the chief information
security officers and the architects and the AI security architects. And they're all in a position
where their organizations, I don't want to use the word businesses because they're not all
businesses, there are nonprofits, there are government organizations. Everybody has a mission, though,
something they want to accomplish.
And their leaders, not the security leaders, the business leaders, the organizational leaders,
they want to move more quickly than the security and governance capabilities can possibly keep up with.
Three years ago, it was conversations around how do we enable our workforce to use AI tools like Microsoft Copilot or ChatGPT.
Those questions and those issues are still relevant,
but those are problems from two or three years ago.
The problems today are,
I'm now deploying thousands of agents.
These agents are autonomous and non-deterministic,
which is a fancy way of saying,
I don't know what they're going to do.
I'm going to define an agent to try to automate a business process,
but these things, and I try to be careful about anthropomorphizing them
too much, don't make decisions, but they are non-deterministic in that you give them an input
like a prompt or an interaction with another agent or a system or a workflow. You can't predict
what the output would be, which makes it fundamentally different from the user and application
security that we've seen in the past. It's kind of an innovation gap where the need for governance
and security controls, and governance is a problematic word, governance is intent. Governance is not a
control. It's not a capability. It's intent. I would like things to be governed. But the need for
governance and security in order to enable this space of change just hasn't kept up. The other thing
that's really interesting is that AI tools like ChatGPT and Microsoft Copilot and Claude,
which have become part of our daily lives at this point, they're extremely smart. They're extraordinarily
capable because they've had the internet to learn from. Organizations are looking at that
and saying, why can't we do that?
Why can't I use AI?
Not to replace my knowledge workers,
but to make them dramatically more productive.
And there's a really interesting answer to why not, why they can't.
And it's because the public models,
the AI tools that we have come to rely and depend on
have access to the internet.
Organizations can't build their own models or leverage agents
because the AI tools that they're using
are only leveraging 3% of their data.
So they're not as smart and they're not as capable.
Let me ask the question, why not?
Why can't they use all of their data?
There's three reasons.
One, organizations struggle to secure their data.
If I unleash an agent
without any kind of governance or access to my data,
I'm introducing risk.
Like, I don't know what's going to happen.
Agents can get phished like human beings.
They can be co-opted,
so I need data security.
That's a huge problem.
Organizations struggle with data security.
I know that because I've been working
in data security for 16 years.
The second thing is AI security.
How are these agents and these models,
these code libraries and the tools
that people are using, are they properly secured?
Because if the answer is no,
guess what, somebody's gonna put a big hand up and stop
your AI usage.
And the third thing is your adversaries
have access to all these same tools.
Anthropic released a report in November talking about an almost completely automated AI hacking campaign.
80 to 90% of the work of reconnaissance, identifying targets, and then penetration, lateral movement,
privilege escalation, all of the things that an attacker would normally do was handled by AI agents,
and it was broken up into discrete parts.
You put that together, the adversaries have these powerful tools as well,
and they're innovating faster than security teams and organizations can possibly keep up.
We put all of that together.
There's an innovation gap from a security and a governance perspective, and organizations are
unable to deploy these tools to keep up.
It's stopping them from outpacing their competitors.
It's stopping them from leveraging the benefits of what could be extraordinarily powerful tools.
I may have just said two dozen different things, and I hope all of that makes a little bit of sense.
Those are the conversations that we're having this week.
I know.
It's a lot to take in, and there's no wonder why folks feel like their heads are spinning.
I want to loop back to something you talked about, this kind of tension between the powers that be at the organization versus the security team and the leadership, the business leaders saying we must go, go, go with full speed ahead with AI.
To what degree do you think that is fear that their competitors are throwing caution to the wind with this and they're full speed ahead?
So even if there are risks that we don't know about, we can't be left behind.
I think that's a big part of it.
But the problem is as soon as you hit a major roadblock, you suddenly have to stop.
It's like one step forward, two steps back.
I've been telling an AI security story for a few years now, but I tell it over and over again
because every time I tell it, people are like, wow, that sounds crazy.
One of the big financial services companies
was piloting an AI tool.
In this case, it was Microsoft Copilot,
which is incredibly powerful.
I use it every day.
I'm building agents using Copilot Studio now.
But they were piloting Copilot.
And one of the things that big banks do
if they want to measure the ROI of a new tool,
especially one that's supposed to make people productive,
You know who they give it to?
The traders.
They're on the trading floor.
I used to work at UBS.
And UBS had the biggest trading floor in the world.
in Stamford, Connecticut.
And I would go there because I was in architecture
and we were like level three or level four support.
There's two things that were really interesting
about the users that are traders at a bank.
One, if they have a problem,
it was like 90 seconds.
We had a human being, like a body at the desk.
Because if they can't work, the bank doesn't make money.
Right.
The flip side of it is,
if you can make them more productive,
the bank makes more money.
So if you ever go to one of these trading floors,
you see these people have nine monitors.
Like they have the latest devices.
They get all the best support for good reasons.
So you want to test a productivity tool,
you give it to one of your traders
or a group of them.
And you say, well, does it help?
Does it make them more productive?
Because if it does, the bank will make more money.
And they gave, in this case, Microsoft Copilot,
which is an incredibly powerful tool.
It's a large language model,
but it is also, it has access to,
if you prompt Microsoft Copilot,
it has access to all your emails,
all of your files,
everything that you collaborate with
with your team and other people in the company,
it's incredibly powerful.
You can ask,
it's a search engine,
but it is also a content generation tool.
Like it's what we think of as a very powerful AI tool.
And one of these traders asked what I think is a really interesting question,
what stocks do our employees invest in?
Listen, the bank's got a couple hundred thousand employees.
These are smart people.
We hire smart people.
What do they invest in?
Maybe that'll tell me something.
Maybe when they get paid or their bonus schedules might inform,
I don't know, something about the market.
And this trader had been using ChatGPT.
So what he expected was, like we would all expect, you know, a few paragraphs or a few sentences, a summary of some data or some analysis.
Somebody's done this report somewhere, right?
Or maybe Copilot's smart enough to figure it out and do this for me, because I've seen some pretty impressive output from some of these AI tools.
So when he asked what stocks our employees invest in, he got, instead of a couple of paragraphs of text, a big table,
and in this table were names and social security numbers
and account numbers and positions of employee 401(k)s.
That is the reaction every single time I tell this story.
I was really afraid you were going there.
Go on.
And what's interesting is it could have been a hallucination.
It could have been, you know, sometimes because these tools
are designed to give you an output that looks real or looks useful.
But it didn't matter because they immediately had to shut it off.
I learned of this story from their vice president of modern workforce,
whose job it was, it was her job, to deploy these tools
and measure their value.
And she said, this is a privacy nightmare.
Like, we could get sued out of business.
I told the story to architects that really understand how these large language models
and these AI assistants work.
And they kind of push back.
They challenged me a little bit.
They said, copilot doesn't punch holes through access controls or systems.
It doesn't give you access to information that you don't have access to.
It's not going to punch a hole through your employee retirement plan system
and tell you. That's not how it works.
And I said, I know.
Because in this case, somebody on their compensation team had created a spreadsheet of employee 401(k) information, and she had saved it to a team site, which is exactly what she was supposed to do.
And she clicked the share button, which is exactly what you're supposed to do.
Frictionless collaboration, we work from anywhere, from any device, we share with each other.
And she didn't share it with everybody.
She just shared it with people on her team.
It was a distribution list or a group or something.
The problem is inside that group was an entity in Microsoft 365 called Everyone Except External Users.
That's a fancy way of saying
when she clicked that link to share it
with what she thought was a small number of people
she opened it up to everybody.
Not her fault.
The group that she was sharing with
had just been misconfigured.
It could have happened years ago.
Who knows?
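The exposure in this story reduces to a reachability question: after expanding nested group membership, does a file's audience include a tenant-wide principal like "Everyone except external users"? A minimal sketch of that audit logic, with hypothetical group names and data; a real audit would pull membership from the Microsoft Graph API rather than a local dictionary.

```python
# Sketch of an oversharing audit: flatten nested group membership and check
# whether a shared item's effective audience includes a tenant-wide claim.
# Group names and structure here are invented for illustration.
TENANT_WIDE = {"Everyone except external users"}

def expand(principal: str, groups: dict[str, list[str]]) -> set[str]:
    """Flatten a principal into every identity it ultimately contains."""
    seen, stack = set(), [principal]
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        stack.extend(groups.get(p, []))  # non-groups have no members
    return seen

def overshared(shared_with: list[str], groups: dict[str, list[str]]) -> bool:
    """True if the expanded audience reaches a tenant-wide principal."""
    audience = set().union(*(expand(p, groups) for p in shared_with))
    return bool(audience & TENANT_WIDE)
```

The point of the sketch is the one in the story: the person clicking Share sees only the small group name, while the tenant-wide claim hides one or more nesting levels down.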
What was interesting,
when we talk about risk,
because we're at RSAC.
Cybersecurity is about risk.
It's about risk management.
We, I work for a software company.
We sell software to help others
manage and mitigate the risk
to get to security outcomes.
Risk, when you break it down,
is a factor of two things:
what's the impact of something happening,
the impact of loss,
and also, what's the likelihood of it happening?
That spreadsheet existed.
It was in a team site somewhere.
It was open to everybody.
But up until you gave someone Copilot,
in order to find it and see it,
somebody would have had to go looking for it,
kind of digging through this
Byzantine morass of SharePoint sites and files.
It never happened,
until you gave someone the greatest information retrieval tool
in the history of mankind.
Right. That's what Copilot does best.
That's what Copilot does best.
AI leverages data in ways
that are both faster and more scalable
than we've ever seen.
But it means, this is what I say,
that what stops companies,
one of the things that stops companies
from leveraging these tools
is the fact that they struggle to secure their data.
And now they struggle to secure the tools
and the models and the agents
and the code libraries specifically.
and then the bad guys are using these tools as well.
Imagine that one of the users at this bank got phished.
It has nothing to do with AI,
until the identity that they have now taken over
has access to Copilot or has access to agents.
Or what happens when I phish an agent?
These tools directly access data.
The era of applications,
even web applications,
having interfaces with specific controls
is going away. Five years from now,
you're not going to log into a web browser
to access this application,
then another page for that application.
You're just going to ask an agent to do something,
and it's going to directly access all this data,
which means the underlying data security
and the security of the data itself,
the infrastructure and the agents,
is really all that matters.
Everything else is just noise.
So companies want to move quickly.
They want to outpace the competition,
or they don't want to be outpaced.
But, I mean, we're at a security conference.
The security people realize there are new risks or it's old risks at a completely new order of magnitude.
And the companies that succeed are the ones that, from a security perspective, I don't even like using the word governance because governance is just an intent.
They actually secure things and they get to outcomes.
Outcomes are what matter.
Can you measure risk and reduce it and prove that you did it?
Can you minimize how long it takes to detect and respond to a threat, even if that threat is completely agentic and completely automated?
Not everybody can. The ones that can are going to win.
So let's bring it home together then.
What is your guidance for the folks who are anxious about this?
What sort of advice are you giving the folks you interact with ways to head forward safely?
I help people articulate to themselves and to their people.
appears what successful outcomes are. And what I mean by that is, in WordArsac, you go down to
the Expo floor, there are dozens and dozens and dozens and dozens of tools. Most of them do one
thing. If they do it well, they do one thing. And that's provide visibility and discovery.
There are a lot of tools out there that are very good at showing you things that maybe you didn't
know, which sounds great. Because if you ask security leaders what keeps
them up at night or what their primary goal is,
They'll say things like it's the unknown unknowns.
It's the, you know, it's not the problems that I know about.
I can fix those.
It's the problems that I don't know about that's going to kill me.
But where we try to, where I try to help leaders think is, okay, let's look
at the chessboard.
You do discovery.
That's move number one.
What are moves number two, three, and four?
And moves two, three, and four are once you've done discovery, you need to address findings.
And if you can't go from observability to remediation, or from finding to fixing, you're going to be back in the same place you were last year.
You know, or you're not going to move.
You're just going to have a bunch of findings that you don't have the people to address, even if you want to.
And your adversaries are going to move more quickly than you.
So that's step two.
And then step three is you can't just discover problems.
You need to monitor all of the behavior and you need to do it usefully.
Everybody's drowning in noise.
Alert fatigue is a real thing.
Findings fatigue is a real thing.
But in security, context is everything, because I don't want to just log everything.
I want to log things with context, like what data is sensitive and how is it being used, and by who and what.
Because it's all the non-human identities,
it's NHIs, that we're worried about these days.
The agents, what are they actually doing?
And agents don't just interact with data.
They interact with other agents.
and they interact with code libraries and MCP servers.
There's all these things that you need to monitor effectively.
But if you've got all of the right context,
you've got context of identity, you've got context of behavior,
you've got context of access,
you've got context of the underlying data,
all that context means you can minimize how long it takes
to detect and respond to even an issue.
It doesn't necessarily need to be a threat.
That 401K example, that wasn't an insider threat.
That wasn't an outside attack.
It was somebody just trying to do their job.
That's what organizations really want to be able to address.
And then step forward,
like move four on the chessboard:
can you prove you did it?
Everybody's got a boss
can you say here's what we were trying to do
here's what we accomplished
here's how we measured success and here's what we're going to do next
If you meet a CISO,
and you probably meet as many as I do.
A few, yeah. You've met a few.
And you tell them, listen,
I'm going to help you measure risk and reduce it
detect and respond to threats quickly
prove that you did it,
that CISO is going to,
they're going to be a hero.
They're going to do an amazing, amazing job.
So, listen, it's self-serving.
I work for a security vendor.
We make software.
We help other people solve these problems.
But however they're doing it,
if they're not thinking in terms of outcomes,
they're always going to be behind.
All right, Brian, thanks so much
for taking the time for us.
I appreciate it.
I really appreciate the time.
This has been great.
Thank you.
All right.
That's Brian Vecci, Field CTO at Varonis.
Local news is in decline across Canada, and this is bad news for all of us.
With less local news, noise, rumors, and misinformation fill the void, and it gets harder to separate truth from fiction.
That's why CBC News is putting more journalists in more places across Canada,
reporting on the ground from where you live, telling the stories that matter to all of us,
because local news is big news. Choose news, not noise.
CBC News
When a country's productivity cycle is broken,
people feel it in their paychecks, their communities, their futures.
What does this mean for individuals, communities, and businesses across the country?
Join business leaders, policymakers, and influencers
for CGs' national series on the Canadian Standard of Living,
productivity and innovation.
Learn what's driving Canada's productivity decline
and discover actionable solutions to reverse it.
And finally, a new website, malice.sh, offers, for a modest fee and with a straight face, to liberate software from its licenses by using AI to recreate functionally identical versions without the legal baggage.
It is both satire and, inconveniently, a real business that actually delivers clean room style rewrites,
inspired by the classic IBM BIOS cloning playbook, now automated at machine speed.
Its creators say the point was to make the threat tangible, not theoretical,
and the joke lands because it works.
The project highlights a growing tension in open source.
AI can now reproduce software faster than communities can maintain it,
raising awkward questions about attribution, ethics, and sustainability.
Critics warn these rewrites strip away the invisible infrastructure of open source: maintenance, security fixes, and shared stewardship.
In that sense, malice is less a prank than a proof of concept and possibly a preview.
And that's the Cyberwire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the
rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app.
Please also fill out the survey in the show notes or send an email to Cyberwire at n2K.com.
N2K's lead producer is Liz Stokes.
We're mixed by Trey Hester with original music and sound design by Elliot Peltzman.
Our contributing host is Maria Varmazis.
Our executive producer is Jennifer Eiben.
Peter Kilpe is our publisher, and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.
