CyberWire Daily - Brazil nixes Twitter’s successor.
Episode Date: September 3, 2024Brazil blocks access to X/Twitter. Transport for London has been hit with a cyberattack. Threat actors have poisoned GlobalProtect VPN software to deliver WikiLoader. “Voldemort” is a significant ...international cyber-espionage campaign. Researchers uncover an SQL injection flaw with implications for airport security. Three men plead guilty to running an MFA bypass service. The FTC has filed a complaint against security camera firm Verkada. CBIZ Benefits & Insurance Services disclosed a data breach affecting nearly 36,000. The cybersecurity implications of a second Trump term. On our Industry Insights segment, guest Caroline Wong, Chief Strategy Officer at Cobalt, discusses application security and artificial intelligence. A Washington startup claims to revolutionize political lobbying with AI. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest On our Industry Insights segment, guest Caroline Wong, Chief Strategy Officer at Cobalt, discusses application security and artificial intelligence. You can find out more from Cobalt’s The State of Pentesting Report 2024 here. Selected Reading Brazil Suspends Access to Elon Musk's X, Including via VPNs (GovInfo Security) Cyberattack hits agency responsible for London’s transport network (The Record) Hacking Poisoning GlobalProtect VPN To Deliver WikiLoader Malware On Windows (Cyber Security News) Scores of Organizations Hit By Novel Voldemort Malware (Infosecurity Magazine) Researchers find SQL injection to bypass airport TSA security checks (Bleeping Computer) Three Plead Guilty to Running MFA Bypass Site (Infosecurity Magazine) Verkada to Pay $2.95 Million Over FTC Probe Into Security Camera Hacking (SecurityWeek) Business services giant CBIZ discloses customer data breach (Bleeping Computer) Who would be the cyber pros in a second Trump term? (CyberScoop) Convicted fraudsters launch AI lobbying firm using fake names (Politico) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners.
private by signing up for Delete Me. Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
Brazil blocks access to ex-Twitter.
Transport for London has been hit with a cyber attack.
Threat actors have poisoned Global Protect VPN software to deliver Wikiloader.
Voldemort is a significant international cyber espionage campaign.
Researchers uncover an SQL injection flaw with implications for airport security.
Three men plead guilty to running an MFA bypass service.
The FTC has filed a complaint
against security camera firm
Verkata.
CBiz benefits
and insurance services
disclosed a data breach
affecting nearly 36,000.
The cybersecurity implications
of a second Trump term.
On our industry insight segment,
our guest is Caroline Wong,
chief strategy officer at Cobalt,
discussing application security and artificial intelligence. And a Washington startup claims to revolutionize
political lobbying with AI.
It's Tuesday, September 3rd, 2024.
I'm Dave Bittner, and this is your CyberWire Intel Briefing. Thank you all for joining us. It is great to have you here today.
Brazil has blocked access to social platform X-Twitter after the company repeatedly failed to comply with court orders aimed at curbing disinformation campaigns.
The government demands
that XTwitter appoint a legal representative in Brazil and pay a fine of about $3.2 million
before lifting the ban. Brazil's Supreme Court unanimously upheld the suspension,
emphasizing that freedom of expression comes with responsibilities. The court ordered Internet service providers
to block access to ex-Twitter within five days
and warned that using VPNs to bypass the ban
could result in fines and legal consequences.
The crackdown is part of a broader investigation
into disinformation efforts linked to supporters
of former President Jair Bolsonaro.
Ex-Twitter, led by Elon Musk Musk has resisted complying with the orders, drawing criticism from Brazilian officials who argue
that all businesses must adhere to the country's laws, regardless of their global stature.
Transport for London, TFL, the agency overseeing London's transport network, has been hit by a cyber attack affecting its back-office systems.
While TFL stated there's no evidence of customer data being compromised or service disruptions, staff have been advised to work from home.
Immediate actions have been taken to secure systems.
The National Cyber Security Centre is collaborating with TFL and law enforcement to assess the impact.
Hackers have been targeting VPNs like Global Protect to inject malware and steal sensitive data, compromising private networks without detection.
Cybersecurity researchers at Palo Alto Networks discovered that threat actors have poisoned Global Protect VPN software to deliver Wikiloader, a sophisticated malware loader.
Active since late 2022, Wikiloader primarily spreads via phishing but recently shifted to SEO poisoning, leading users to fake installer pages. The malware uses complex evasion techniques,
including DLL sideloading and shellcode decryption,
making detection difficult.
Wikiloaders operators utilize compromised WordPress sites and MQTT brokers for command and control.
The malware creates persistence through scheduled tasks
and hides in over 400 files within a malicious archive.
Mitigations include enhancing SEO poisoning detection, robust endpoint protection, and application whitelisting.
Security researchers at Proofpoint have uncovered a significant international cyber espionage campaign
affecting over 70 organizations across 18 sectors. Insurance
companies are among the most targeted, along with aerospace, transportation, and universities.
Beginning on August 5th of this year, the campaign has sent at least 20,000 phishing emails,
masquerading as local tax authorities in various languages. Victims are tricked into clicking malicious links,
leading to the installation of the Voldemort backdoor via DLL sideloading
with the legitimate Cisco Webex executable.
Voldemort, a custom backdoor, gathers information
and can deploy additional payloads, with Cobalt Strike likely among them.
Uniquely, this malware uses Google Sheets for command and control, data exfiltration,
and command execution.
Proofpoint has not attributed the campaign to any specific group, noting its mix of sophisticated
and basic techniques, suggesting a complex and unusual threat actor.
a complex and unusual threat actor.
Security researchers Ian Carroll and Sam Curry uncovered a significant vulnerability in FlyCAS,
a third-party service managing the known crewmember and cockpit access security system programs,
which are used to bypass security screenings for airline employees.
The vulnerability and SQL injection flaw allowed unauthorized individuals to log in as administrators, manipulate employee data, and potentially bypass airport
security. The researchers successfully exploited this flaw to create a fictitious account with
full access privileges. After discovering the issue, they notified the Department of Homeland Security,
leading to the system being disconnected and the vulnerability fixed. However, the TSA downplayed
the vulnerability's impact and quietly removed conflicting information from their website.
Additionally, FlyCAS suffered a ransomware attack in February of 2024,
raising additional security concerns
about the system's integrity. Three men have pleaded guilty in the UK to running a website
otp.agency that enabled criminals to bypass banking anti-fraud measures, leading to significant
financial losses. The site charged criminals subscription fees to access services
that bypassed multi-factor authentication on major banking platforms.
An elite package allowed access to Visa and MasterCard verification sites,
facilitating extensive fraud.
The National Crime Agency shut down the site in 2021 after uncovering the scheme,
which may have earned up to £7.9 million. Sentencing is set for November. The FTC has filed a complaint against security
camera firm Verkada for inadequate security practices, which allowed a hacker to access
customers' cameras, including inensitive locations like psychiatric hospitals.
According to the complaint, Verkada failed to implement proper data protection and encryption, leading to breaches,
including a 2021 incident where up to 150,000 cameras were compromised.
Verkada has agreed to a $2.95 million settlement with the FTC, which includes implementing better security measures and addressing email marketing violations under the CanSpam Act.
CBiz Benefits and Insurance Services disclosed a data breach affecting nearly 36,000 individuals after a hacker exploited a vulnerability in one of its web pages.
The breach occurred between June 2nd and June 21st of this year, compromising client information
including names, contact details, social security numbers, and health data.
CBiz, a major U.S. professional services firm, discovered the breach on June 24th and has
since notified impacted clients.
Although there's no evidence of misuse, CBiz offers two-year credit monitoring and identity
theft protection to mitigate risks. In a featured article for CyberScoop,
senior reporter Tim Starks looks at the cybersecurity possibilities that could come
with a presidential win for former President Trump.
Despite previous turmoil during Donald Trump's presidency,
a number of cybersecurity officials are reportedly prepared to rejoin or newly enlist if he wins a second term.
Trump has begun assembling his transition team with potential cyber officials,
including former Trump administration members like
Pedro Allende, Nick Anderson, and Karen Evans. Although specifics are uncertain, some former
officials believe that a second Trump administration would bring a more disciplined approach
with potential changes to key agencies like CISA. Project 2025, a policy blueprint for Trump's second term, suggests scaling back CISA's
role, moving it to the Transportation Department, and focusing more on political appointees.
Despite this, cybersecurity remains a priority for both Trump and his potential administration,
with an emphasis on reducing regulations and addressing threats from China, AI, and quantum computing.
The exact future of agencies like CISA under Trump remains uncertain,
with possible changes but likely continuity in core functions.
Coming up after the break on our Industry Insights segment,
Caroline Wong, Chief Strategy Officer for Cobalt,
discusses application security and artificial intelligence.
Stay with us.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks.
But get this.
Thank you. ISO 27001. They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta
when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses
is by targeting your
executives and their families at home. Black Cloak's award-winning digital executive protection
platform secures their personal devices, home networks, and connected lives. Because when
executives are compromised at home, your company is at risk. In fact, over one-third of new members
discover they've already been breached. Protect
your executives and their families 24-7, 365, with Black Cloak. Learn more at blackcloak.io.
Caroline Wong is Chief Strategy Officer at Cobalt.
In this sponsored Industry Insights segment,
I sit down with her to discuss application security and artificial intelligence.
We're more than halfway through 2024,
and I actually feel like every single month of this year,
AI has looked like something different.
I actually think we're at
a stage right now where people are using AI. When it comes to software development,
software developers are using AI. Now, I also think that we're at a stage where
people are figuring out what can AI really do and what can't it do. And I think we're finding
that, you know, general numbers that I'm hearing is if you have a series of tasks and a human
can be expected to perform 70 or 80 percent of those tasks, you know, you give those same tasks over to an AI, and you're going to get something in the maybe 10 to 25% range.
So I think we're seeing people using AI to get incremental productivity increases.
But I don't know that we're going to see dramatic shifts in productivity for another maybe 18 to 24 months.
That's my perspective. Let's talk about software development itself. I mean, for the folks who are
deep into that process, how much is AI impacting the work that they do?
I think it's impacting developer work. And I think it's, at this point, almost more on sort of the entertainment side. You know, I think it's fun for developers to play around with AI tools and test out if it's going to give them the results they want to. When developers are using autocomplete, when developers are writing
comments for code and something like GitHub Copilot is suggesting what code to use, I think
it's kind of fun and developers like to play with these features to see what happens. And maybe
something will come up that they didn't think about before.
Certainly, the AI will suggest things that are incorrect.
But I still consider it generally in that area of kind of fun and kind of experimental.
The way that I've heard folks describe it when it comes to software development is, if you've got a fairly junior software engineer,
that person, in combination with AI, can maybe begin to look like a mid-level software engineer.
But neither of those are yet approaching a senior software engineer. And certainly,
from a cybersecurity perspective, I think that a
really interesting question is, which code is more secure? The code that's written by an AI
or the code that's written by a human? Well, let's dig into that. I mean, what's the reality there?
I think about stuff generated by AI and then comparing that to, let's say, something else that came from an outside source.
And I'm putting that in air quotes, you know, something that's open source or, you know, something that wasn't custom coded by a human inside your organization.
Is it fair to compare and contrast those two things? I think the tricky bit here is actually that the state of things in 2024 is that the vast
majority of human written code is actually extremely vulnerable to security problems.
And that's kind of, you know, almost the joke or the trick in this question. I think that if you were to somehow be able to ensure that
the large language model, the data set that an AI that's helping to produce code is filled with
secure coding patterns only, then you might actually get something better. But at this
point in time, I really believe that while folks are
leveraging AI and should continue to do so, it does require that manual check at some point in time
by a human. I don't think we're at a point where we can give over any of our critical business
processes. Just like you said, there's an awful lot that an intern can help out with,
processes. Just like you said, there's an awful lot that an intern can help out with,
but you're not going to give them over the entire task of running the whole business.
Similarly, when it comes to software development, there are little things here and there that are going to help out. But I think we're really talking at the stages of, you know, kind of typo and grammar corrections rather than making something entirely out of nothing.
What about security controls themselves?
For folks who are heading down this pathway, they're exploring AI and they can see some of the benefits it'll bring their business.
benefits. It'll bring their business. What sort of things should the cybersecurity practitioners be thinking about when it comes to provisioning those types of security controls? So I think the
number one thing is security teams really need to impress upon their folks that when employees
are putting data into public LLMs, asking questions in public AI systems, that's sensitive
data exposure. None of us need our development teams to be going and putting big chunks of our
code base into any sort of public-facing LLM. So first things first, folks need to
help their developers, help their employees understand that public LLMs
are not in any sense private. There's no data protection that's guaranteed. And as soon as
you take any piece of information, whether that's sensitive or confidential or code-based or secrets or otherwise, you put that into chat GPT, and that information is therefore out in the public.
That is sensitive data exposure.
You know, that's really the number one thing.
And so at Cobalt, for example, what we've done is we have a policy that says,
hey, we really want folks to be using AI in their daily work, and here's our
private instance that you should be using to ensure that you're not, you know, accidentally
releasing any sensitive or confidential data out publicly. Beyond that, you know, I mentioned
AI system inventory and ownership. I mentioned human in the loop, manual review. I also think
that something that's totally critical is logging and monitoring of AI activities so that there is
a record, you know, stuff happens. And when stuff happens that we don't expect or we don't like, it is very helpful to go back and say,
okay, when was the output that caused this problem generated? What was the prompt? Who was the
individual? What did they say? What was the output? And just being able to track that information
is going to be really helpful. I think what's challenging for some folks about this AI world that we live in is that
the systems are, in some cases, very much a black box. You put information in, you get information
out. Who knows exactly how that's in there? Organizations are having different levels of
transparency about helping that out. But I do think now is the time for organizations to begin
thinking about proper monitoring and logging. Can we dig into some of the specific sorts of
problems that you've seen along the way? So the number one here that I think is fascinating
is this concept of hallucinations when it comes to generative AI.
Because the key concept between generative AI and, I guess you could call it non-generative AI,
is it's based on data and probabilities.
The AI system is proposing and guessing and imagining, if you will, the next response. And while I don't exactly categorize
that as a security concern per se, I do think that it's important for folks to really understand that
what you're getting out of a generative AI system is not fact. It is not the truth. It is not
correct. It is simply a response determined by data and probabilities.
Now, when we get into some of these security problems, at Cobalt, we've done manual penetration
tests on AI and LLM systems. And the most common vulnerability types that we've found include
prompt injection, model denial of service, and prompt leaking or sensitive information
disclosure. Now, these three are in the top 10 for LLMs, as described by OWASP. And it is
fascinating, actually, to see that these problems are being found in the wild. You know, I'd say
that folks who are using AI, they really do need to conduct
security testing on these systems. And the skills required to appropriately test these systems
is actually quite specialized. Folks need to have an understanding of the architecture of
these systems and how they work in order to properly test them for security problems.
how they work in order to properly test them for security problems.
That's Caroline Wong from Cobalt.
To learn more, check out the link in our show notes. Cyber threats are evolving every second, Thank you. of solutions designed to give you total control, stopping unauthorized applications, securing
sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com
today to see how a default-deny approach can keep your company safe and compliant. And finally, our unbridled AI enthusiasm desk pointed us to an expose from Politico.
In it, they describe Lobbymatic, a Washington startup claiming to revolutionize political lobbying with, wait for it, AI.
Politico reveals the firm is actually run by Jacob Wohl and Jack Berkman,
infamous far-right conspiracy theorists and convicted felons.
The duo, operating under the aliases Jay Klein and Bill Sanders,
have used the AI buzzword to lure big-name clients like Toyota, all while hiding their true identities.
Former employees discovered the truth after noticing suspicious behavior, including fake personas and questionable business practices. The company touts AI as a game-changer, but it seems more like a smokescreen
for a dubious operation, with Wall and Berkman using it to exploit public enthusiasm for AI
and potentially mislead clients. Their history of spreading misinformation and staging fake events
raises concerns that Lobbymatic could be yet another vehicle for deceit,
using AI as a cover. One former employee summed it up, stating,
if I knew who they were, I wouldn't have touched it with a 10-foot pole.
Who knows? Maybe in this case, AI really stands for artificial identities.
for artificial identities.
And that's The Cyber Wire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
Don't forget to check out the Grumpy Old Geeks podcast,
where I contribute to a regular segment on Jason and Brian's show every week.
You can find Grumpy Old Geeks
where all the fine podcasts are listed.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights
that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like our show,
please share a rating and review
in your favorite podcast app.
Please also fill out the survey in the show notes
or send an email to cyberwire at n2k.com.
We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders
and operators in the public and private sector, from the Fortune 500 to many of the world's
preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to
optimize your biggest investment, your people.
We make you smarter about your teams while making your team smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes.
Our mixer is Trey Hester, with original music and sound design by Elliot Peltzman.
Our executive producer is Jennifer Iben.
Our executive editor is Brandon Karp.
Simone Petrella is our president.
Peter Kilty is our publisher.
And I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.
Thank you. can channel AI and data into innovative uses that deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts, and act
with ease through guided apps tailored to your role. Data is hard. Domo is easy. Learn more at
ai.domo.com. That's ai.domo.com.