CyberWire Daily - Weathering the phishing front.
Episode Date: April 16, 2024

Cisco Duo warns of a third-party MFA-related breach. MGM Resorts sues to stop an FTC breach investigation. Meanwhile the FTC dings another mental telehealth service provider. Open source foundations call for caution after social engineering attempts. The NSA shares guidance for securing AI systems. IntelBroker claims to have hit a US geospatial intelligence firm. The UK clamps down on deepfakes. Hard-coded passwords provide the key to smart-lock vulnerabilities. On our Industry Voices segment, Ryan Lougheed, Director of Product Management at Onspring, discusses the benefits of artificial intelligence in governance, risk and compliance (GRC). A Law Firm's Misclick Ends 21 Years of Matrimony.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
On our Industry Voices segment, Ryan Lougheed, Director of Product Management at Onspring, discusses the benefits of artificial intelligence in governance, risk and compliance (GRC).

Selected Reading
Cisco Duo MFA logs exposed in third-party data breach (ITPro)
Casino operator MGM sues FTC to block probe into 2023 hack (Reuters)
Open Source Leaders Warn of XZ Utils-Like Takeover Attempts (Infosecurity Magazine)
FTC Bans Online Mental Health Firm From Sharing Certain Data (GovInfo Security)
New NSA guidance identifies need to update AI systems to address changing risks, bolster security (Industrial Cyber)
IntelBroker Claims Space-Eyes Breach, Targeting US National Security Data (HackRead)
Creating sexually explicit deepfakes to become a criminal offence (BBC)
CISA warns of critical vulnerability in Chirp smart locks (The Register)
Wrong couple divorced after computer error by law firm Vardags (BBC)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © 2023 N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network, powered by N2K. Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life private. Go to JoinDeleteMe.com slash N2K and use promo code N2K at checkout.
The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout.
That's JoinDeleteMe.com slash N2K, code N2K. Cisco Duo warns of a third-party MFA-related breach. MGM Resorts sues to stop an FTC breach investigation. Meanwhile, the FTC dings another mental telehealth service provider. Open source foundations call for caution after social engineering attempts.
The NSA shares guidance for securing AI systems.
IntelBroker claims to have hit a U.S. geospatial intelligence firm.
The U.K. clamps down on deepfakes.
Hard-coded passwords provide the keys to smart lock vulnerabilities.
On our Industry Voices segment, Ryan Lougheed,
Director of Product Management at Onspring, discusses the benefits of artificial intelligence
in governance, risk, and compliance. And a law firm's misclick ends 21 years of matrimony.
It's Tuesday, April 16th, 2024.
I'm Dave Bittner, and this is your CyberWire Intel Briefing. Thanks for joining us here today. It's great to have you with us.
Cisco Duo alerted customers that an unnamed telephony provider they used for multi-factor authentication services was breached by threat actors.
This breach allowed attackers to access SMS logs but not the content of messages.
The breach occurred due to a phishing attack on April 1st of 2024,
targeting the provider's employee credentials.
The accessed logs contained users' phone numbers, carrier information,
and general location data from March 1st through the 31st,
potentially affecting thousands of Duo's over 100,000 customers.
Cisco warned this data could facilitate broader social engineering campaigns.
The provider has since invalidated compromised credentials and is bolstering security measures.
Cisco's response includes advising businesses to inform impacted customers and providing the obtained message logs upon request,
highlighting the risk of further social engineering attacks.
MGM Resorts International is suing the U.S. Federal Trade Commission
to stop an investigation into data security breaches following a major
hack. Filed in Washington federal court, MGM argues it's not subject to FTC consumer financial
data rules as it's not a financial institution. Additionally, MGM contends FTC Commissioner
Lina Khan should recuse herself due to personal involvement as she was staying at an
MGM hotel during the hack. The FTC has not commented on the lawsuit. The September hack
caused MGM significant financial damage, leading to tens of millions of dollars in losses and 15
consumer class action lawsuits against the company.
Speaking of the FTC, they have proposed a settlement with mental telehealth service Cerebral Incorporated and its former CEO, mandating a $7 million penalty for unlawfully sharing sensitive health data
with third-party advertisers without patient consent.
This action addresses violations of the FTC Act and the Opioid Addiction Recovery Fraud Prevention Act,
focusing on deceptive practices and unfulfilled cancellation promises.
The settlement includes a $5.1 million partial refund for consumers
misled by Cerebral's cancellation policies.
The order, pending U.S. District Court approval,
aims to limit Cerebral's handling of consumer data
and enhance privacy protections,
requiring consent for data sharing
and the implementation of a comprehensive privacy program.
This follows a reported data breach
affecting 3.2 million individuals
and Cerebral's improper use of tracking tools sharing patient
information with platforms like Facebook, Google, and TikTok. The FTC's move is part of a broader
enforcement against health data privacy violations, reflecting increased scrutiny on telehealth and
data brokerage practices regarding consumer information security.
The Open Source Security and OpenJS Foundations have issued a warning to open source maintainers about social engineering attacks aiming for project takeovers following suspicious activities
mirroring the XZ Utils hack. The OpenJS Foundation observed dubious emails seeking urgent updates for a JavaScript
project under the guise of fixing vulnerabilities, with the senders pushing to be made new maintainers.
These attempts resemble the tactics of Jia Tan, the persona linked to the XZ Utils backdoor incident.
OpenJS detected similar schemes targeting two other projects,
which were reported to the U.S. Cybersecurity and Infrastructure Security Agency.
The foundations advocate for increased awareness and caution, outlining signs of suspicious
behavior such as aggressive requests for maintainer status by unknown individuals
and attempts to introduce unreadable or obfuscated code.
The situation underscores the significant risk social engineering poses to the open-source
ecosystem, highlighting the vulnerability of underfunded projects and the potential
difficulty in distinguishing genuine contributions from malicious intents.
The NSA, in collaboration with multiple national
and international cybersecurity organizations, has released a cybersecurity information sheet
to guide organizations in securing AI systems. This inaugural guidance from the NSA's Artificial
Intelligence Security Center emphasizes best practices for deploying secure and resilient AI
systems, particularly for national security system owners and defense industrial-based companies.
The document highlights the importance of adapting security measures to specific use cases and threat
profiles, aligning with the cybersecurity performance goals by CISA and NIST. It covers comprehensive strategies for AI system deployment,
including robust governance, secure configurations,
privacy considerations, and zero-trust frameworks.
Additionally, it stresses the ongoing necessity of identifying risks,
implementing mitigations, and monitoring for security issues
to protect intellectual property, models, and data from theft or misuse.
The notorious hacker IntelBroker claims to have penetrated the cyber infrastructure of Space-Eyes,
a Miami-based firm providing geospatial intelligence to U.S. government agencies.
This breach potentially exposes highly confidential documents
related to U.S. national security
about individuals and vessels under U.S. sanctions.
The exposed data, detailed by HackRead.com,
includes full names, phone numbers, company details,
job descriptions, over 26,000 email addresses,
some password hashes, and complete location data.
The leak also includes public data from the U.S. Treasury website,
listing sanctioned cybercrime groups and individuals.
This incident follows a similar breach by IntelBroker targeting Acuity, a U.S. federal contractor,
which was initially dismissed by Acuity and the U.S. government
until further data implicating the Five Eyes was released.
CISA has been notified, but Space-Eyes has yet to comment on the authenticity of the breach.
Creating sexually explicit deepfake images without consent could become a criminal offense in England and Wales, punishable
by an unlimited fine and potential jail time, even if the creator did not intend to share the image.
This new law, to be introduced as an amendment to the Criminal Justice Bill,
aims to tackle the invasive use of AI to alter images or videos, particularly of celebrities or public figures, into pornographic
content. The legislation, however, has been critiqued for potentially having loopholes,
as it requires proving the creator's intent to cause distress. The move, bolstered by the Online
Safety Act, which already made sharing deepfakes illegal, has been welcomed by victims and advocates
as a significant step toward enhancing protections, especially for women, against this form of digital
abuse and exploitation. A severe security vulnerability has been identified in the
software that controls certain smart locks, specifically those managed by Chirp Systems. This critical flaw allows for the possibility that individuals without authorization can remotely unlock these smart
locks, posing a significant risk to the safety and security of over 50,000 households reported
to be using Chirp's system. The root of this vulnerability lies in Chirp's Android application, where passwords and
private keys are hard-coded. Through an API, attackers are capable of not just identifying,
but also controlling the smart locks that are affected. The flaw was discovered three years ago
by Matt Brown, a senior engineer at Amazon Web Services, who took an interest in the security of Chirp's Android app
when his apartment building adopted these smart locks.
Compounding the issue, Chirp Systems, based in Texas,
was acquired by real estate technology firm RealPage in 2020,
which in turn was bought by private equity firm Thoma Bravo.
This change in ownership raises questions about the company's accountability
and commitment to resolving these sorts of critical security issues.
The disclosure of this vulnerability serves as a stark reminder
of the potential risks associated with smart home technologies.
For users of Chirp-enabled smart locks,
additional precautions such as the use of traditional mechanical locks are advisable until a definitive fix is confirmed.
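To make the vulnerability class concrete, here is a deliberately hypothetical sketch, not Chirp's actual app or API, of what hard-coded credentials in a mobile client amount to. Anyone who unpacks the shipped binary can read these values and replay them:

```python
# Hypothetical illustration of the flaw class only -- NOT Chirp's actual
# code or API. Secrets compiled into an app ship to every user; standard
# tools (e.g., jadx or apktool) can recover them from a decompiled APK.
import requests

API_BASE = "https://api.example-locks.invalid/v1"  # hypothetical endpoint
HARDCODED_PASSWORD = "s3cret-shipped-in-the-apk"   # hypothetical value

def unlock(lock_id: str) -> bool:
    # A server that authenticates only with a shared, baked-in secret
    # cannot distinguish the legitimate app from an attacker's script.
    resp = requests.post(
        f"{API_BASE}/locks/{lock_id}/unlock",
        headers={"X-Api-Password": HARDCODED_PASSWORD},
        timeout=10,
    )
    return resp.ok
```

The well-understood fix is per-user, revocable credentials issued by a server-side login flow, so no secret lives in the shipped binary.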
Coming up after the break, my conversation with Ryan Lougheed, Director of Product Management at Onspring, about the benefits of artificial intelligence in governance, risk, and compliance.
Stay with us.
Transat presents a couple trying to beat the winter blues.
We could try hot yoga.
Too sweaty.
We could go skating.
Too icy.
We could book a vacation.
Like somewhere hot.
Yeah, with pools.
And a spa.
And endless snacks.
Yes!
Yes!
Yes!
With savings of up to 40% on Transat South packages,
it's easy to say, so long to winter. Visit Transat.com or contact your
Marlin travel professional for details. Conditions apply. Air Transat. Travel moves us.
Do you know the status of your compliance controls right now? Like, right now? We know
that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks.
But get this, more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and help you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber. That's vanta.com
slash cyber for $1,000 off. And now a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company
is at risk. In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365 with Black Cloak. Learn more at blackcloak.io.
Ryan Lougheed is Director of Product Management at Onspring.
And in this sponsored Industry Voices segment, we discuss the benefits of artificial intelligence
in governance, risk, and compliance.
Governance, risk, and compliance, you know,
really kind of covers your regulatory concerns.
If you're a business, there's a lot of regulatory concerns.
The compliance in which you hold yourself as a company.
And then the risk processes from your company.
So that will entail a risk register, risk assessment of your business processes inside of your environment.
And so traditionally, how have organizations come at this particular challenge?
As a kind of an overview in the GRC space, I mean, where it all first started, of course, was really in that kind of Microsoft Excel and SharePoint land, where you're able to see your documents
in the SharePoint space
and collaborate in Excel a little bit
for evidence gathering and tasks to complete.
And then it's really evolved.
I mean, today, it's generally done inside of a GRC platform
or a specific tool for governance, risk, and compliance or business process automation.
Well, and of course, I think so many people's imagination has been captured by this kind of revolution that we've had in AI,
certainly in people's awareness of it.
How has AI been applied to the GRC space?
Yeah, I mean, as far as AI in the GRC space,
it's really young right now, right?
Of course, the consumer market moves much quicker
than the enterprise market.
And right now, when you look at what GRC and AI do,
it's really on that surface level of asking these LLMs
about crafting policies
or what happened with this regulatory change.
What's the difference between this clause prior
and then this clause now?
Yeah, I know you and your colleagues
sometimes refer to kind of a crawl, walk, or run approach to AI.
Can we kind of go through that together
and explain to us
how an organization would approach each of those?
Yeah, I mean, crawl, walk, run is a good analogy
that it's steps of maturity,
however you want to call it, right?
But that first step or really the crawling there
is really kind of establishing AI
in a general
purpose manner. So if we envision these steps of maturity that you're using a GRC tool that
doesn't have any native AI capabilities, right? So looking at it from a crawl or a first step,
really, that's that general purpose that we're at right now. So that's embedding
maybe ChatGPT, Gemini,
Copilot, Claude,
all throughout, potentially
in an app frame
or just
right next to you
in that sense. And then you're just
using some of those
LLM benefits.
Create me a policy.
Can you analyze this,
analyze that type situation?
And really, that's a low-risk,
low-reward piece to your GRC program as a whole.
You will gain some efficiencies there,
but the ceiling is really low.
So you're going to bump against that productivity ceiling pretty quickly if you're just looking
to use an LLM to create some policies or analyze controls or regulatory changes.
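As a rough sketch of that crawl stage, assuming a hypothetical `call_llm` helper standing in for whichever chat-completion API you actually use, it is essentially one bare prompt with no organizational context attached:

```python
# Minimal sketch of the "crawl" stage: a single free-form prompt to a
# general-purpose LLM, with no retrieval and no fine-tuning. `call_llm`
# is a hypothetical stand-in for a real provider client.
def call_llm(prompt: str) -> str:
    # Replace with a real provider call (OpenAI, Anthropic, Gemini, etc.).
    return f"[model response to: {prompt[:60]}...]"

def draft_policy(topic: str) -> str:
    prompt = (
        f"Draft a concise corporate {topic} policy. "
        "Include purpose, scope, roles, and review cadence."
    )
    return call_llm(prompt)

print(draft_policy("acceptable use"))
```

Low risk, low reward, as described above: useful boilerplate, but the model knows nothing about your organization.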
And I guess the structure of that, sort of running these things side by side, means that
it, by necessity, flows through a human, which I suppose could be beneficial for, you know, keeping an eye on things as well, right?
Yeah, yep. It is, you know, part of those safe practices now is having a human in the loop, and GRC is really a program where a human is always in the loop, right? So, yep.
Well, so once we're past that notion of crawling, what does walking entail?
Yeah, walking entails really kind of combining data retrieval with an LLM, right? So data retrieval meaning either live data from websites, like stock markets or weather or, you know, things like that you can go out and access, but also combining that
data retrieval in with your internal tools. So when you think about going out to CRMs or your
HR sources or any of those tools that you have internally, and when you are interacting with the
LLM, you're really also prompting it to go out and grab those sources and analyze
that together with your statement. So adding more context to what you're asking. Examples there,
of course, is the trending predictive analytics, things like that, alongside of GRC.
So when you're looking at risk assessments and you want to trend that out or
you're just able to go ahead and ask, you know, for financials or anything like that,
that you could potentially use in your evaluation and then have a better context response back.
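A minimal sketch of that walk stage, under the same assumptions: retrieve the relevant internal records first (a naive keyword match over an in-memory list stands in here for your CRM or HR system), then fold them into the prompt so the model answers in your context:

```python
# Sketch of the "walk" stage: retrieval-augmented prompting. The document
# list and `call_llm` are hypothetical stand-ins; a real deployment would
# query a CRM/HR system or a vector index instead of keyword matching.
INTERNAL_DOCS = [
    "Risk R-12: vendor X's SOC 2 report expired on 2024-01-31.",
    "Risk R-07: quarterly access review overdue for finance apps.",
]

def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:60]}...]"  # stand-in

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; stands in for real enterprise search."""
    terms = query.lower().split()
    return [d for d in INTERNAL_DOCS if any(t in d.lower() for t in terms)]

def ask_with_context(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )
    return call_llm(prompt)

print(ask_with_context("Which vendor risk items need attention?"))
```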
Well, I mean, let's continue down the path here. Then ultimately, what does running look like?
Running is really, you know, now you're in this mode of fine-tuning, right? So you have the ability
to fine-tune your LLM inside with, you know, baking in knowledge, which is potentially better
than, you know, augmenting with data retrieval. You probably want to do both, right?
So having that data retrieval element plus fine-tuning is going to give you
a lot of powerful results.
And that's really kind of getting you to a point
where you are asking your models
with high-level information
and getting back incredibly detailed information
based on you, right?
In your context versus a general everybody
context. And I guess, again, as you alluded to, I mean,
throughout all of this, there needs to still be reality
checks along the way. Yeah, of course. Of course, right?
And that's using, of course, those safety AI practices,
right, where we're using that human-centric kind of approach.
You're making sure, especially with fine-tuning and learning,
that you're examining that raw data to make sure that it's applicable, right?
You're understanding the limitations of your data sets and your modeling.
And you're just testing, testing, testing, testing, right?
And then, of course, continuing to monitor.
And I think everybody in this space, right,
we don't just roll things out. It always has to be tested from data security to GRC tools to really anything that you do cyber
related. You always have to continue to monitor and update and test.
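One way to make that "test, test, test" concrete, again just a sketch with hypothetical cases and the same `call_llm` stand-in: keep a small regression suite of prompts with expected properties, re-run it on every model or prompt change, and route failures to a human reviewer:

```python
# Sketch of a human-in-the-loop regression check for an LLM-backed GRC
# workflow. Cases and `call_llm` are hypothetical; the point is that
# every model or prompt change re-runs the same known cases.
def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:60]}...]"  # stand-in

REGRESSION_CASES = [
    # (prompt, predicate the answer must satisfy)
    ("Summarize control AC-2 in one sentence.", lambda a: len(a) < 400),
    ("List our top three open risk register items.", lambda a: bool(a.strip())),
]

def run_suite() -> list[str]:
    """Return the prompts whose answers failed their checks."""
    failures = []
    for prompt, check in REGRESSION_CASES:
        answer = call_llm(prompt)
        if not check(answer):
            failures.append(prompt)  # route to a human for review
    return failures

print(run_suite() or "all checks passed")
```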
Are there any common elements that you see
for the people who are having success here?
What do they have in common with each other?
Yeah, I mean, I think that really the commonality of success here is that context, right?
So being able to augment, you know, the larger models with your specific data, and then you know that
you're cutting down on little hallucinations and you're cutting down on some of the false positives
or the things that the LLM is giving you back.
That's really where that success is coming from.
You know, I think a lot of folks are hesitant here with AI
because they see the usefulness of it and they see the potential for
great time savings and efficiency and all of those good things. But every now and then we'll
see some sort of story where when AI gets something wrong, it can get it catastrophically wrong.
And so how are organizations approaching that to hit some kind of balance between those potential benefits, but also avoiding that catastrophe?
Yeah, I mean, I'm not going to sit here and say that this is easy, right?
It's very, very, very challenging to look at the data in your models and make sure that you're avoiding things like that.
So as a recommendation: not necessarily using those general models, but having an enterprise version, or open-sourcing it yourself and having AI engineers, and practicing those safe AI practices, those responsible AI practices, is really going to cut down on that.
You're never likely
going to have zero incidents because of
artificial intelligence.
It's more mitigating
everywhere along the way and continuing to
monitor and update your system
and making sure that, hey,
if there is a catastrophe,
right, that it's, you know, only to us and not necessarily a large catastrophe. It's just kind
of maybe a small burning fire that we have to put out. Right, right. Where do you suppose we're
headed here as we look, you know, towards the future? What do we aspire for these tools to be able to do? Yeah. I mean, the future is
wild. When you think about all
the capabilities and everything that AI can
do, we're really looking for, especially in the governance
risk and compliance space, and this branches out as well, but having
multiple LLMs in your
environment that are vertical specific, right? So training a model for compliance, training a model
for risk, training a model for vendor management, or when you think about outside, it's health and
finance and all those things. But having multiple models that are specifically vertical focused, that are sharing
information between each other and sharing that information and you getting the best context back
you can from the model you ask based on your specific business process. What are your
recommendations for folks who are curious about this, who are looking to get started? Any words of wisdom?
ChatGPT or OpenAI, Anthropic, they all have a lot of good documentation on why AI, first of all.
Your responsibilities, fairness, privacy, security.
They go through a lot of those things on each of their sites, on each of their models. So definitely a good place to start.
But when you start cracking that stuff open and getting in a little bit deeper,
it's a lot of hands-on experience and just learning the ins and outs of those things.
That's Ryan Lougheed, Director of Product Management at Onspring. We're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control,
stopping unauthorized applications, securing sensitive data, and ensuring your organization
runs smoothly and securely. Visit ThreatLocker.com today to see how a default deny approach can keep
your company safe and compliant.
And finally, in a digital age where a misclick can lead to purchasing a lifetime supply of toilet paper or inadvertently liking
your ex's vacation photos from 2014, a Mr. and Mrs. Williams from the UK found themselves
unwittingly at the forefront of an even more monumental digital faux pas. Imagine their
surprise when, due to a clerical blunder at their law firm, they were divorced without their consent.
The scene unfolds at Vardags, a law firm known for serving the needs of the rich and famous.
One fateful day, a staffer ventured into the online divorce portal, and with the wrong file
open, a click intended to sever the marital ties of one couple inadvertently sliced through the
bonds of Mr. and Mrs. Williams, a pair blissfully unaware of their participation in this electronic
lottery of love. Three days later, Vardags realized the error and sought to undo this
unwanted ununion, appealing to the wisdom of Judge Sir Andrew McFarlane.
But the judge decided the digital deed was done.
What's done in the cloud stays in the cloud, and the divorce stood.
The tale serves as a cautionary fable for the digital era,
a world where a single click can alter lives, where matrimonial ties are as vulnerable to termination as an unsaved
document in a power outage. For Mr. and Mrs. Williams, their unintended digital divorce
becomes a story for the ages, a reminder to double-check before you double-click,
lest you find your I do turned into I did by the impersonal stroke of a key.
And that's the Cyber Wire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
You can email us at cyberwire at n2k.com. N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make
you smarter about your team while making your team smarter. Learn more at n2k.com. This episode
was produced by Liz Stokes. Our mixer is Trey Hester with original music by Elliot Peltzman.
Our executive producers are
Jennifer Iben and Brandon Karp.
Our executive editor is Peter Kilpie
and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.
Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role.
Data is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.