CyberWire Daily - Terms of service and GDPR. LastPass breach update. GhostWriter resurfaces in action against Poland and its neighbors. Cellphones, opsec, and rocket strikes.
Episode Date: January 4, 2023

Ad practices draw a large EU fine (and may set precedents for online advertising). Updates on the LastPass breach, and on Russian cyber activity against Poland. Malek Ben Salem from Accenture explains smart deepfakes. Our guest is Leslie Wiggins, Program Director for Data Security at IBM Security, on the role of the security specialist. And cellphones, opsec, and the Makiivka strike. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/2

Selected reading.
Meta's Ad Practices Ruled Illegal Under E.U. Law (New York Times)
Meta Fined More Than $400 Million in EU for Serving Ads Based on Online Activity (Wall Street Journal)
Meta's New Year kicks off with $410M+ in fresh EU privacy fines (TechCrunch)
LastPass data breach: notes and actions to take. (CyberWire)
Poland warns of attacks by Russia-linked Ghostwriter hacking group (BleepingComputer)
Russia says phone use allowed Ukraine to target its troops (AP News)
Russian soldier gave away his position with geotagged social media posts (Task & Purpose)
Russian commanders blamed for heavy losses in New Year's Day strike (Washington Post)

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
Ad practices draw a large EU fine and may set precedents for online advertising.
Updates on the LastPass breach and on Russian cyber activity against Poland.
Malek Ben Salem from Accenture explains smart deepfakes.
Our guest is Leslie Wiggins, Program Director for Data Security at IBM Security,
on the role of the security specialist.
And cell phones, OPSEC, and the Makiivka strike.
From the CyberWire studios at DataTribe, I'm Dave Bittner
with your CyberWire summary for Wednesday, January 4th, 2023.
Meta's advertising practices have drawn fines of more than $400 million from European authorities.
Meta is the corporate parent of Facebook, Instagram, and WhatsApp.
The Wall Street Journal reports that what's at issue was Meta's behavioral ads,
which pitched specific ads to users based upon Meta's tracking of the user's online activity.
Ireland's Data Protection Commission, which oversees activities of U.S. companies on behalf of the larger European Union,
announced the conclusion of its two investigations and the fines.
In summary, data protection commissioners in Ireland found
that Meta Ireland violated transparency obligations under the General Data Protection Regulation
by not clearly outlining the legal basis for processing personal data to users.
The DPC also found that Meta Ireland did not rely on consent as a lawful basis for processing personal data
and instead relied on contract as the legal basis for processing personal data in connection with the delivery of personalized services.
The DPC proposed substantial fines for Meta Ireland and directed the company to bring its processing operations into compliance within a short period of time.
The New York Times reports that Meta disputes the finding and intends to appeal the fines.
It maintains its targeted advertising is properly respectful of GDPR, the EU's General Data Protection Regulation,
and that the terms of service it asks its users to accept constitute proper consent to tracking.
Litigation obviously isn't over, but online platforms should look to their terms of service.
The large print giveth and the small print taketh away, as Mr. Tom Waits has taught us, and as most
fair-minded people understand. But still, there are going to be limits on what that long document is
going to cover. You know that document, the one you impatiently clicked through and said you
read it when you actually hadn't. That long document may not be enough to constitute a
contract anymore. You've likely heard that password manager LastPass had been victimized
in a data breach that included customer data, including password vaults. Security Week reports
that the breach occurred in August of last year
when hackers got into the LastPass network
and returned later to hijack customer information.
The threat actor is said to have copied a backup of customer vault data,
which is said to contain both unencrypted data, such as website URLs,
as well as fully encrypted sensitive fields,
such as website usernames and passwords, secure notes, and form-filled data.
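Those encrypted fields are only as strong as the key protecting them, and in LastPass's design that key is derived client-side from the master password, never stored on the server. A minimal sketch of that kind of derivation, assuming PBKDF2-SHA256 with the account email as salt (this mirrors LastPass's publicly documented scheme, though the iteration count varies by account age):

```python
import hashlib

def derive_vault_key(master_password: str, email: str, iterations: int = 100_100) -> bytes:
    # The server never sees the master password, only values derived from it.
    # Iteration count and salt choice here follow LastPass's published defaults;
    # older accounts were provisioned with far fewer iterations.
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        email.lower().encode("utf-8"),  # account email serves as the salt
        iterations,
    )

key = derive_vault_key("correct horse battery staple", "user@example.com")
assert len(key) == 32  # 256-bit key suitable for AES-256 vault encryption
```

Under a scheme like this, a stolen vault backup can only be opened by brute-forcing the master password offline, which is why a long, unique master password matters so much after a breach.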
Hack Read reports that the threat actor also stole technical data and source code from the development environment.
Almost Secure discusses the LastPass breach and disclosure,
speculating that the near-holiday time of disclosure
was not coincidental.
Rather, they think, it may have been intentional
to keep news surrounding the incident low.
The disclosure, Almost Secure says,
seems like LastPass's attempt
to minimize potential litigation risk
while also preventing drawing attention to themselves
and causing a
public outcry. The British news site Which? says that LastPass customers should ensure that their
master password isn't used elsewhere and is more complex than the passwords they customarily use
as LastPass doesn't store master passwords and asserts that only brute force will allow threat actors access
to users' master passwords. The news site Which? states, LastPass does not know users' master
passwords and they are not stored or maintained by LastPass. If you're a LastPass user, only you
know your master password. The company describes this as its zero-knowledge architecture. The
company also recommends changing passwords on websites that had stored passwords through the
manager. The threat group Ghostwriter has resurfaced in phishing campaigns against Polish
targets, according to authorities in Warsaw. Bleeping Computer reports that the Russian hackers set up websites that impersonate
the gov.pl government domain, promoting fake financial compensation for Polish residents
allegedly backed by European funds. The goals of the campaign are believed to be intelligence
collection and disinformation. The EU has linked Ghostwriter to Russia's GRU military intelligence service.
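The gov.pl impersonation works because people, and naive filters, match on the string rather than on the actual host. A hedged sketch of the safer check using proper URL parsing (the lookalike domains below are invented for illustration):

```python
from urllib.parse import urlparse

def is_polish_gov_domain(url: str) -> bool:
    """Return True only if the URL's host is gov.pl or a true subdomain of it.

    A substring test like `"gov.pl" in url` is fooled by lookalikes
    such as https://gov.pl.kompensacja.example.com/.
    """
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return host == "gov.pl" or host.endswith(".gov.pl")

assert is_polish_gov_domain("https://www.gov.pl/web/finanse")
assert not is_polish_gov_domain("https://gov.pl.kompensacja.example.com/")  # spoofed prefix
assert not is_polish_gov_domain("https://example.com/gov.pl")  # domain in the path, not the host
```

The same parse-then-compare pattern applies to any impersonated domain, which is why mail and web filters key on the registered host rather than the raw URL text.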
Mandiant has also discerned a connection to Belarusian services.
Ghostwriter has long specialized in impersonation,
especially impersonation of NATO members located along the Atlantic Alliance's eastern front,
an area in which Russia takes a proprietary interest.
The countries there are
either former Soviet republics like the Baltic states, former members of the Warsaw Pact like
Poland, or former provinces of the old Russian Empire like Finland. A very long historical
memory is informing the Russian outlook on the special military operation.
And finally, the Wall Street Journal reviews the mistakes that led to the Russian disaster in Makiivka. Among them, concentrated administrative troop billeting, storage of ammunition adjacent
to the billets, and generally poor operations security manifested in undisciplined use of cell phones and failure to
camouflage. The journal quotes retired U.S. Army Lieutenant General Ben Hodges, a former commander
of U.S. Army forces in Europe, as saying, the Russian military is not a learning organization.
To learn, first you have to acknowledge that you were wrong, and that's not the culture.
Phones collect a lot, whether the users think about it or not.
Put down that smartphone, troop.
Or not.
When officers don't care about their troops, the troops cease to care about the rules. And that's been the story of the Russian army in its war against Ukraine.
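The geotagged-post problem is mechanical: photos carry GPS EXIF tags as degree/minute/second triples that anyone can convert straight into a map pin. A sketch of that conversion (the coordinate values below are illustrative, roughly in the vicinity of Makiivka, not taken from any actual post):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF GPS degrees/minutes/seconds triple to decimal degrees.

    `ref` is the EXIF hemisphere tag: 'N'/'S' for latitude, 'E'/'W' for
    longitude; southern and western hemispheres come out negative.
    """
    sign = -1.0 if ref in ("S", "W") else 1.0
    return sign * (degrees + minutes / 60.0 + seconds / 3600.0)

# Illustrative values near Makiivka, eastern Ukraine
lat = dms_to_decimal(48, 2, 51, "N")
lon = dms_to_decimal(37, 57, 50, "E")
assert abs(lat - 48.0475) < 0.001
assert abs(lon - 37.9639) < 0.001
```

That two-line arithmetic is the entire distance between a social media post and a targeting-grade coordinate, which is the opsec lesson in miniature.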
Coming up after the break,
Malek Ben Salem from Accenture explains smart deep fakes.
Our guest is Leslie Wiggins,
Program Director for Data Security
at IBM Security,
on the role of the security specialist.
Stay with us.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks. But get this,
more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls
with Vanta. Here's the gist. Vanta brings automation to evidence collection across 30
frameworks, like SOC 2 and ISO 27001. They also centralize key workflows like policies,
access reviews, and reporting, and help you get security questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to
vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives. Because when
executives are compromised at home, your company is at risk. In fact, over one-third of new members
discover they've already been breached. Protect your executives and their families 24-7, 365
with Black Cloak. Learn more at blackcloak.io.
Leslie Wiggins is Program Director for the Data Security Product Management Team as part of IBM Security.
We spoke about the evolving role of the data security specialist and how people in that position collaborate with others in their organization. So what led us to this point is the
maniacal focus on, and value of, sensitive data, whether that's regulated data or sensitive data
like intellectual property, and the fact that breaches, whether they are from external actors who are up to no good or whether they come from malicious insiders, are after that data.
And because that data is so valuable, organizations need to be able to see how privileged users who should have access to that data
or other people who shouldn't have access to that data are trying to access that data,
and to be able to take action to protect that data so that it isn't accidentally viewed
by somebody who shouldn't see it, copied and removed from the organization, or breached
in any other kind of way.
And how does their role work within an organization? Are they generally collaborative or
do they end up being adversarial with certain groups? Where do they stand within the organization?
So I wouldn't say they're adversarial with certain groups, but historically,
data security teams have been quite siloed and sort of focused
within on their own activities and being able to produce that compliance report for an audit
or being able to make sure that they have that real-time view of what's happening to sensitive
data and automatically take action to protect it. But the piece that's been missing for a long time has been the connection to the SOC. There is almost always a SOC, and historically either nothing was shared with it, or everything was shared in a language that
those security analysts in the SOC did not understand. So stuff might have been shared,
and at that point, it was all being shared, too much sharing, and causing a bit of chaos in the
SOC. So it would either tend to be one scenario or the other scenario. So what things
have evolved to today is a much smarter and much more integrated sharing of data. So that only
things from the data security program that are of the highest risk, that are the most useful
for that SOC and for that security analyst to have now can get
shared over with that environment. And it's shared in a simple way these days. They should, you know,
be talking about the who, the what, the where, the when. It's like reporting, right? Or running the
podcast. So that the data is now, or the insight is now shared with the SOC in a language that the security analysts can understand as well.
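That who/what/where/when framing can be pictured as a small data structure forwarded from the data security program to the SOC. A hypothetical Python sketch — the field names, risk scores, and threshold are illustrative assumptions, not any IBM product's schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class DataSecurityAlert:
    """One high-risk finding, phrased in the terms a SOC analyst expects."""
    who: str          # the accessing identity
    what: str         # the action and the data classification involved
    where: str        # the data source
    when: str         # ISO-8601 timestamp
    risk_score: float # 0.0 (benign) to 1.0 (critical), assigned upstream

def forward_to_soc(alerts: list[DataSecurityAlert], threshold: float = 0.8) -> list[dict]:
    """Share only the highest-risk findings, rather than flooding the SOC."""
    return [asdict(a) for a in alerts if a.risk_score >= threshold]

alerts = [
    DataSecurityAlert("svc_backup", "bulk read of classified PII table",
                      "prod-db-04", "2023-01-04T09:12:00Z", 0.93),
    DataSecurityAlert("jdoe", "routine SELECT on test data",
                      "dev-db-01", "2023-01-04T09:15:00Z", 0.12),
]
assert len(forward_to_soc(alerts)) == 1  # only the risky finding crosses over
```

The design point is the threshold: filtering at the source is what avoids both failure modes the interview describes, sharing nothing and sharing everything.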
What makes an ideal data security specialist in terms of their background, their knowledge, their mindset, their disposition?
Patient.
A patient disposition is something that they would need. But they have tended in the past to be very focused on compliance, because that has been the thing that will cause an organization
potentially to be fined if they fail an audit, for example, or struggle to meet a deadline for an audit. And so they've been very focused in the past on the
bits and the connections and the environment and making sure they can get access to that data
that's been stored for a year or two and put it together in an auditor-friendly way. And that
takes a lot of patience. And historically, it's taken a lot of time, because data security tools in the past weren't built to retain data over long periods in a way that keeps it hot, elastic, and available for longer.
Now that that is a reality for a data security team, they don't have to spend their time so much
on those back end, hooking things up, finding the data, getting it together. They're able to now
focus more on the data security side of the house,
understanding where is my data exposed? How exposed is it? Where should I be investigating
an anomaly? Because there was something that happened within a data source that had,
I can see it had a lot of classified data in it. It had a lot of database vulnerabilities.
That was a really significant anomaly that occurred.
It was, you know, maybe a SQL injection that's showing up.
I should prioritize investigating that thing and making sure that data is protected and hasn't been breached or leaked somehow rather than trying to cobble things together. So that role that we were talking about a minute ago of the data security specialist
is changing to one where they can add more value
and demonstrate even more value to the business.
When you see organizations doing this right,
doing it well,
who's taking the lead here
for the collaboration typically?
Where that scenario works best is where
you have savvy leadership that is bringing
together the data security side and the
SOC side to make sure that they are
cross-pollinating and cross-sharing and being as efficient
as possible across both of those pieces
to better enable the whole and to better protect the whole.
That's Leslie Wiggins from IBM Security.
And joining me once again is Malek Ben Salem.
She's the Managing Director for Security and Emerging Technology at Accenture.
Malek, it is always great to welcome you back to the show. I want to touch base with you today on deepfakes and particularly this, I guess, is it fair to say a subset called smart deepfakes?
Yeah, absolutely. We've seen some new developments in AI in general and the new advent of large language models and the advances in computer vision models that are driving basically a new category of deepfakes that we can call smart deepfakes.
Given how interactive these models have become, and how plausible and authentic the conversations you can have with them are, you can think of ways of creating deepfakes
that look much more real, that you interact with over time. So it's not just, you know, a video that you watch passively,
but you can think of a deep fake that you interact with, you know,
say an avatar on the metaverse or whatever. Right. But this is,
this is a, a persona that looks real, right.
With the right face, et cetera,
that talks to you and that you interact with,
but it's all and completely fake.
So what's driving that is, number one, these chatbots or models
that are built on large language models that are becoming very, very good.
And the other advance is the ability to create videos, fake videos obviously, just through a prompt. There are models out there today where you can type in what you want to see in the video
and it will create the video for you completely.
So you can say, I want a persona or I want to see Dave doing this and that, right?
And the video will be created for me.
And you can make that video as long as you want it, but just by feeding it some more
information about what you want to see in the video.
So those two advances or those two trends, I think, will generate a sort of deepfakes that are very believable, that look very real, and that are interactive in nature.
Yeah, so really the convergence of those two things make this possible.
Yeah, I could see this being used for some sort of advanced chatbot,
customer support, those sorts of things.
But also, I suppose this could take phishing to the next level. You could get a FaceTime call or a video call from your boss
and it might not be your boss. Exactly. And that's the big threat.
These deepfakes will be very believable. I mean, it will be very hard for people to
recognize them as deepfakes. One of the things we've been training people on
as deepfakes started is to look at the face
and the features of the face, et cetera.
Then we started training people on looking at the context
where the deepfake is located.
But now if these deepfakes are interactive,
if you can talk to them,
that basically creates and takes things to the next level.
It's hard to tell whether this is a deepfake video or not. So it may not be just one interaction with a conversation, but it could be multiple interactions over time to build that trust with the victim.
Then that becomes really, really hard to detect.
So it's one more challenge we have to deal with.
Now, I've seen some demos of some output detectors for some of these AI models,
the chatbot models, where folks have spun up a way you can feed in the output and it'll tell you,
is this likely to be, it kind of gives you a little sliding scale of whether it thinks it's real or fake. Do we expect that sort of thing being applied to this as well? Is that a possibility?
I think we'd have to build that, but that's not going to be enough. I think relying on technical tools to detect these deepfakes or on detective ways is not going to be enough.
I think we need to focus more on building, watermarking, let's say, these videos up front,
watermarking the real and authentic content that we are creating.
We have not done much of that.
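Watermarking authentic content up front can, in its simplest form, mean publishing a verifiable tag alongside each piece of media. A toy Python sketch using an HMAC as the tag — real provenance schemes such as C2PA are far richer, and the signing key here is hypothetical:

```python
import hmac
import hashlib

# Hypothetical key; a real publisher would use managed, rotated keys
# (or public-key signatures so anyone can verify without the secret).
PUBLISHER_KEY = b"publisher-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce a provenance tag to distribute alongside the media."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Check that the media still matches the publisher's tag."""
    return hmac.compare_digest(sign_content(media_bytes), tag)

original = b"\x00video-frames..."
tag = sign_content(original)
assert verify_content(original, tag)
assert not verify_content(original + b"tampered", tag)  # any edit breaks the tag
```

The point of the sketch is the asymmetry: authenticating real content at creation time is tractable, while detecting fakes after the fact, as the interview notes, keeps getting harder.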
I think we need to increase media literacy amongst the population. We need to focus more on authentic journalism and reporting to have to counterbalance this
amount of misinformation, disinformation that we may be served in the future.
Wow.
All right.
Well, it's a lot to think about, a lot to ponder.
I'm glad we have folks like you out there working on it.
Malek Ben Salem, thanks so much for joining us.
My pleasure, Dave.
Cyber threats are evolving every second, and staying ahead is more than just a challenge. That's where ThreatLocker comes in, with a platform designed to give you total control, stopping unauthorized applications, securing sensitive data, and
ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company
safe and compliant. The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe,
where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliott Peltzman, Tré Hester, Brandon Karpf, Eliana White,
Puru Prakash, Liz Irvin, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Maria Varmazis,
Ben Yelin, Nick Veliky, Milly Lardy, Gina Johnson, Bennett Moe, Catherine Murphy, and Janene Daly. Thanks for listening.
We'll see you back here tomorrow. Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role.