CyberWire Daily - Social engineering shenanigans, by both crooks and spies. Suing social media over alleged mental health damages. And how to earn an “F.”
Episode Date: January 9, 2023

Telegram impersonation affects a cryptocurrency firm. Phishing with Facebook termination notices. Russian phishing continues to target Moldova. The IEEE on the impact of technology in 2023. Glass ceilings in tech leadership. Seattle Schools sue social media platforms. Malek Ben Salem from Accenture explains coding models. Our guest is Julie Smith, identity security leader and executive director at IDSA, with insights on identity and security strategies. And dealing with the implications of ChatGPT. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/5

Selected reading.
Impact of Technology in 2023 and Beyond (IEEE)
Telegram insider server access offered to Dark Web customers (SafetyDetectives)
Moldova's government hit by flood of phishing attacks (The Record from Recorded Future News)
OPWNAI: Cybercriminals Starting to Use ChatGPT (Check Point Research)
Hackers exploiting ChatGPT to write malicious codes to steal your data (Business Standard)
Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots (Forbes)
Hackers Exploiting OpenAI's ChatGPT to Deploy Malware (HackRead)
Cybercriminals are already using ChatGPT to own you (SC Media)
Threat Report: Impersonation Detected in Telegram Chats to Deliver Malware (SafeGuard Cyber)
Seattle schools sue tech giants over social media harm (ABC News)
Seattle Public Schools sues TikTok, YouTube, Instagram and others, seeking compensation for youth mental health crisis (GeekWire)
Ghost Writer: Microsoft Looks to Add OpenAI's Chatbot Technology to Word, Email (The Information)
Microsoft plans to use ChatGPT in Bing. Here's why it could be a threat to Google. (Freethink)
ChatGPT Hits Ethical Roadblock; Blocked (Analytics India Magazine)
A College Kid Built an App That Sniffs Out Text Penned by AI (The Daily Beast)
A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-based plagiarism (Business Insider)

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
A Telegram impersonation affects a cryptocurrency firm.
Phishing with Facebook termination notices.
Russian phishing continues to target Moldova.
The IEEE on the impact of technology in 2023.
Glass ceilings in tech leadership.
Seattle schools sue social media platforms.
Malek Ben-Salem from Accenture explains coding models.
Identity Security Leader and Executive Director at IDSA, with insights on identity and security strategies, and dealing with the implications of ChatGPT.
From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, January 9th, everyone.
Great to have you all join us here today.
Safeguard Cyber this morning released a report detailing an observed instance of impersonation
of a cryptocurrency firm in Telegram
that may have been the activity of threat actor DEV-0139. In December 2022,
Microsoft released research around a threat actor they've tracked as DEV-0139. The malicious actor
is said to have joined Telegram groups used to facilitate communication between VIP clients and
cryptocurrency exchange platforms and identified their target from among the members.
The threat actor posed as representatives of another cryptocurrency investment company
and in October 2022 invited the target to a different chat group and pretended to ask for
feedback on the fee structure used by cryptocurrency exchange platforms. An Excel file sent by the actor contained malicious macros.
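An aside for the technically curious: the macro-laden Excel lure is detectable before anyone opens it. A modern macro-enabled workbook (.xlsm) is a ZIP archive whose VBA project lives in a well-known member, so a triage script can flag it. A minimal sketch, a heuristic only, not a substitute for proper sandbox analysis:

```python
import zipfile

# In the Office Open XML format, a macro-enabled workbook (.xlsm) is a ZIP
# archive whose VBA project is stored in the member "xl/vbaProject.bin".
# The presence of that member is a strong signal the file carries macros.
VBA_MEMBER = "xl/vbaProject.bin"

def has_vba_macros(path: str) -> bool:
    """Return True if the Office file at `path` contains a VBA project."""
    try:
        with zipfile.ZipFile(path) as zf:
            return VBA_MEMBER in zf.namelist()
    except zipfile.BadZipFile:
        # Legacy binary formats (.xls) are not ZIPs; this heuristic cannot
        # classify them, so we conservatively flag the file for review.
        return True
```

This is only triage: legacy formats need dedicated tooling, and a macro's mere presence says nothing about its intent.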
Avanan released a report this morning detailing a phishing campaign
impersonating Facebook for credential harvesting.
The attack begins with an email appearing to be from Facebook
saying that the victim's account had been suspended for violations of community standards.
They're told they have the ability to appeal the decision within 24 hours or face permanent account deletion.
The threat actor provides a link, which in actuality leads to a credential harvesting page,
even though it appears to be from Meta.
The threat actor made the credential harvesting link believable,
and the name of the victim's actual page was included in the email contents.
Playing on urgency, the attacker hopes the victim will see a quick appeal to prevent the impending
loss of their account as reasonable. The sender's email address, however, did not
appear to come from Facebook, but rather from a Gmail account.
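That brand-versus-sender mismatch is exactly what a simple mail rule can catch. A hedged sketch, assuming you have the raw From: header; the domain allow-list below is illustrative, not Meta's actual sending infrastructure:

```python
from email.utils import parseaddr

# Illustrative allow-list; a real deployment would use the brand's published
# sending domains plus SPF/DKIM/DMARC results, not a hard-coded set.
FACEBOOK_DOMAINS = {"facebook.com", "facebookmail.com"}

def sender_matches_brand(from_header: str, allowed_domains: set) -> bool:
    """True if the From: header's address ends in one of the allowed domains."""
    _display, addr = parseaddr(from_header)
    domain = addr.rpartition("@")[2].lower()
    return any(domain == d or domain.endswith("." + d) for d in allowed_domains)
```

A message claiming to be Facebook but sent from a Gmail address fails this check regardless of how convincing the display name looks.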
Alas, the wicked fleeth where no man pursueth.
If someone wants you to do something now, now, now,
well, maybe it's better to do it never, never, never,
and bong that bozo to the spam folder.
Since Russia's invasion of Ukraine,
Moldova has felt more uneasy than any other country in the near abroad except Ukraine itself.
There are too many parallels to Ukraine's situation for comfort.
Like Ukraine, Moldova has received hostile Russian attention in cyberspace.
Ukraine has seen factitious liberation movements seeking to detach Donetsk and Luhansk.
Moldova has an even longer history of Russian-sponsored secession in Transnistria.
The record reports that Moldova's government has, over the past week,
seen a surge in phishing attempts seeking to compromise official and corporate networks.
These efforts have been accompanied by impersonation campaigns that misrepresent themselves as communications originating with senior Moldovan officials.
A couple of items of selected reading for your consideration today.
Connie Stack is CEO at security firm NextDLP, and she recently shared her thoughts in our monthly Women in Cybersecurity newsletter, Creating Connections.
You can find a link to the newsletter and her article, Breaking the Glass Ceiling, My Journey to Close the Leadership Gap, in today's selected reading section of the show notes.
Also in the show notes, we have a link to the IEEE Impact of Technology in 2023 and Beyond study.
We hope you'll check them out.
Seattle Public Schools has filed a lawsuit against the parent companies of TikTok,
Instagram, Facebook, YouTube, and Snapchat,
claiming that the social media platforms have driven a rise in mental and emotional health issues among youth.
The Seattle School District said in a statement that excessive social media
use is harmful to young people, and social media companies have intentionally crafted their
products to be addictive. Quoting the statement, most youth primarily use five platforms, YouTube,
TikTok, Snapchat, Instagram, and Facebook, on which they spend many hours a day. Research tells us that excessive and problematic use of social media is harmful to the mental,
behavioral, and emotional health of youth and is associated with increased rates of
depression, anxiety, low self-esteem, eating disorders, and suicide.
The evidence is equally clear that social media companies have designed their platforms to maximize the time youth spend using them and addict youth to their platforms, as alleged in the complaint.
These companies have been wildly successful in attracting young users.
As of last year, almost 50% of teenagers in the state spent between one and three hours a day on social media, and 30% averaged more than three hours a day. The statement added that school districts lack the resources to keep up with the demand for mental health care, stating,
School districts like Seattle Public Schools have been significantly impacted by the resulting crisis.
Like school districts across the country, Seattle Public Schools' schools and school-based clinics are one of the main providers of mental health services for school-age children in the community,
but the school counselors, social workers, psychologists, and nurses
need greater resources to meet the high demand for services.
Naturally, social media outfits don't think they're the villains here,
and in truth, it is a tough problem.
According to the AP, Snapchat's parent company, Snap, responded in a statement outlining the measures it's taken to provide mental health resources to users,
stating, "...is safe and to give Snapchatters dealing with mental health issues resources to help them deal with the challenges facing young people today." Jose Castaneda, a spokesman for Google, YouTube's
parent company, pointed to various parental controls available on YouTube, stating,
We have invested heavily in creating safe experiences for children across our platforms
and have introduced strong protections and dedicated
features to prioritize their well-being. And finally, there have been all sorts of reports
of the misuse, both actual and potential, of ChatGPT by various miscreants. Social engineering at
scale, more alluring catfishing, even the automation of malware coding are all being reported.
But we'll concentrate on what ChatGPT seems to mean in the ongoing range war between academic integrity and technological advance.
The New York City Department of Education has banned ChatGPT on school devices due to concerns about plagiarism.
Vox notes that the chatbot is able to write decent essays
that can pass popular anti-plagiarism tools.
The Daily Beast reports that students are already using the AI
to complete writing assignments.
Even if the service is technically banned by schools,
it's difficult to see how such a ban could be enforced.
Princeton student Edward Tian attempted to offer a solution to this dilemma
by creating an app called GPTZero,
designed to detect if an essay was written by a human or an AI.
The Daily Beast explains that GPTZero uses perplexity and burstiness as metrics.
Perplexity is a measurement of randomness in a sentence,
and burstiness is the quality of overall randomness for all the sentences in a text.
Human written sentences generally vary in complexity,
while bots usually create sentences that are consistently low complexity.
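In rough terms, both metrics can be computed from per-token log-probabilities, which a real detector would obtain from a language model. This sketch uses stand-in numbers and a simple standard-deviation proxy for burstiness; GPTZero's exact formulas are its own:

```python
import math
from statistics import pstdev

def perplexity(token_logprobs):
    """Perplexity of one sentence: exp of the mean negative log-probability a
    model assigned to its tokens. Lower means the text was more predictable."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def burstiness(sentence_perplexities):
    """A simple burstiness proxy: how much perplexity swings from sentence to
    sentence. Human prose tends to vary; model output tends to stay flat."""
    return pstdev(sentence_perplexities)
```

Feeding an essay's per-sentence perplexities into burstiness and thresholding both values is the general shape of such a detector.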
Edward Tian has already been approached by major venture capital firms interested in his product, and he acknowledges the usefulness of artificial intelligence in the right situations,
but he notes that there are beautiful qualities of human written prose that computers can and should never co-opt.
Go Tigers, we say, and go Gödel, who proved that any deductive system, at least as complex as the arithmetic of the natural numbers, was either
incomplete or inconsistent. That is, there are true theorems that can't be derived from any finite
set of premises. If you can derive them all, then your deductive system is either trivial
or just freaking wrong. Or so our logician's desk tells us. Why don't you ask ChatGPT?
Extra credit if you ask for an answer in the style of Yogi Berra or Don King or
the dude from The Big Lebowski. And in the meantime, it occurs to us that Kurt Gödel was also at Princeton, so go Tigers, again.
It's not for nothing you've got those cannonballs stuck in the walls of Nassau Hall.
After the break, Malek Ben-Salem from Accenture explains coding models.
Our guest is Julie Smith, identity security leader and executive director at IDSA
with insights on identity and security strategies.
Stay with us.

Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this, more than 8,000 companies
like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings automation to evidence collection across 30 frameworks,
like SOC 2 and ISO 27001. They also centralize key workflows
like policies, access reviews, and reporting,
and help you get security questionnaires done
five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta
when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your
company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365 with Black Cloak. Learn more at blackcloak.io.
Julie Smith is Identity Security Leader and Executive Director at IDSA, the Identity Defined Security Alliance, which is a nonprofit founded by a group of identity
and security vendors, solution providers, and practitioners that, in their words,
acts as an independent source of thought leadership, expertise, and practical guidance
on identity-centric approaches to security for technology professionals.
They recently published a report tracking trends in securing identity,
and that's where my conversation with Julie Smith began.
According to our research, and this is the second year that we've published a trends report,
84% of organizations have experienced an identity
related breach. And that's a sort of an astounding number really. And in most cases, it has resulted
in disruption or loss of revenue or costs associated with remediation. And at the same
time, we're finding that 96% of organizations look back and say, well, yes, there have been some
high-profile breaches lately that have exploited that, but it does put a barrier up in front of
the bad guys. Another key area that organizations haven't focused on but need to is just the
deprovisioning of accounts. So when an employee leaves the organization, what we found is that
about half the organizations out there are deprovisioning those accounts on the day
that employee leaves, but only 26% of them are doing it regularly. So just these accounts that
may have extended privileges are floating around. And if someone gets a hold of that account,
they've got valid credentials,
they can log in and they can do bad things. When folks are coming up short with this,
what are the typical explanations for it? Is there a lack of funding or attention or
why are we not hitting where we should be here? I think it is a bit of a lack of attention.
Organizations are now prioritizing it. Again, back to the research, 64% have identified identity within their top three security priorities.
But I think that's relatively new. In the past, identity management has been more about granting
access and getting employees or even potentially partners accessing resources so that they can be productive.
And it's been considered an operational function in the past.
And just now, within the last couple of years, I would say that there is the cybersecurity focus on it.
And even to the point where it's becoming a board-level topic.
And individuals have so many different logins,
passwords, and usernames and passwords that they deal with,
not just on a personal side, but on a professional side as well.
And we found that people are not taking care of and not protecting those credentials.
And whether it's sharing usernames and passwords,
whether it's reusing passwords across both their personal and professional accounts, there's just some
basic things that I think we as individuals can do not just to protect our personal identity,
but also our employer identity and employer infrastructure as well. So we kind of think
of it as identity as everyone's responsibility. What are some of those things that folks can do that are easy to implement?
Yeah, I think from an organization perspective, I mentioned MFA. That's top of mind.
And MFA for all user types, so not just your employees, but also we're seeing certainly a
lot of organizations are starting to implement it for their customer-facing applications.
Certainly need to do that for third-party access as well.
And staying on top of privileged access.
As individuals move around the organization, privileges can creep, if you will.
And thinking about it from a least privileged perspective.
So only give people the level of access that they absolutely need to do
their job. And then staying on top of those changes to access as individuals move around
the organization. If you experienced an anomaly or believe that there's something going on that's
suspicious, revoke that access immediately. There's always challenges if it's the CEO, for example, but in some cases, it's better to remove access for an identity that maybe
is not behaving the way you would expect it to. A lot of organizations are now looking at the
characteristics of a device as well, to determine whether that device has been compromised or not.
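The pattern Smith describes, least privilege plus revocation on anomaly, can be sketched as a periodic access review. The role names and entitlements below are hypothetical; a real deployment would pull them from an identity governance tool:

```python
from dataclasses import dataclass, field

# Hypothetical role-to-entitlement map; a real system would pull this from
# an identity governance (IGA) platform, not a hard-coded dict.
ROLE_ENTITLEMENTS = {
    "engineer": {"repo:read", "repo:write"},
    "analyst": {"repo:read", "dashboards:read"},
}

@dataclass
class Account:
    user: str
    role: str
    entitlements: set = field(default_factory=set)
    suspicious: bool = False  # set by whatever anomaly detection you trust

def review(account: Account) -> set:
    """Return the entitlements to revoke: everything if the account looks
    compromised, otherwise anything beyond its role (privilege creep)."""
    if account.suspicious:
        return set(account.entitlements)
    allowed = ROLE_ENTITLEMENTS.get(account.role, set())
    return account.entitlements - allowed
```

Run on a schedule and on every role change, this is the "staying on top of changes to access" step in miniature.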
When you look towards the future, toward the horizon,
where do you suppose we're headed here?
Are we someday going to shed usernames and passwords
and move on to something more secure?
Where do you suppose we're heading?
I think we are definitely headed in the right direction
from a passwordless perspective.
There's standards being put forth by the FIDO Alliance, for example,
that helps with passwordless strategies. I think there is a tremendous amount of infrastructure,
however, that organizations have built up over time and there's an awful lot of technical debt
and things that they need to be able to provide access to. Unfortunately,
not everything is in the cloud at this point in time. So I think we're definitely headed in the
right direction with passwordless. And hopefully we can get rid of usernames and passwords sometime
in the near future. That's Julie Smith. She's Executive Director at IDSA, the Identity Defined
Security Alliance.
And I am pleased to be joined once again by Malek Ben-Salem. She is a Managing Director for Security and Emerging Technology at Accenture.
Malek, it is always great to welcome you back to the show.
I want to touch base with you today on some of the things that's going on with coding
and some of the coding models that folks are taking advantage of.
What can you share with us today?
Yeah, so Dave, with the advent of GPT-3 two years ago,
that was one model that was used,
a large language model used to generate language,
English at first,
and then it was expanded to other languages, et cetera.
We've seen similar models being trained to write code,
the same approach.
It could be Java code, JavaScript code, et cetera.
A number of coding models have been created
for various programming languages.
And so these models are being used
to help developers write code.
They can be deployed and
made available to developers
to help them basically predict the next word in their code
and help them complete the function or the code line.
Or in some cases, they have been at least tested
to write code completely autonomously.
And I think the reason I want to talk about this topic is for clients who are considering using these coding models,
I think they have proven that they can bring efficiencies when they are being used as programming pairs, if you will, with the developers,
but they are not as effective if they work autonomously.
And we've seen several deployments or several studies that have demonstrated
that they can bring these efficiencies if we let the human in the loop, stay in the loop,
and if we let the human review the code before it gets deployed.
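That human-in-the-loop constraint amounts to a gate in the pipeline: model output is only a suggestion until a reviewer signs off. A minimal sketch, with a made-up Suggestion shape for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Suggestion:
    code: str                          # what the coding model proposed
    reviewed_by: Optional[str] = None  # set only when a human approves it

def deployable(suggestions: List[Suggestion]) -> List[str]:
    """Only suggestions a human has reviewed make it into the deploy set."""
    return [s.code for s in suggestions if s.reviewed_by is not None]
```

The point of the design is that unreviewed model output simply has no path to production.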
Yeah, it makes me think about organizations
that are required to include SBOMs,
software bills of materials,
and how does this play into that reality?
Yeah, very good question.
I think for the companies who are thinking about building their own coding models,
and some clients or some organizations are thinking about that,
it's very important to think about the quality of the code that these models are being trained on.
If that code is not written in a secure manner, if that code is being used to train the model
and we don't know the quality of the code and we don't know that it's safe, then you end up with a model that is compromised,
that may write and spit out code that has security vulnerabilities. It's not software in an SBOM,
but now the data basically is your input. That coding data that is being used to
train the model carries the vulnerability inherently within it. And another way is if these models,
if people start using these coding models
that are available to them
without knowing enough about how they have been trained,
then that presents another type of risk.
And it's not just a security risk.
That exists, obviously.
But also there are potential legal risks
about how to use this model.
Many of these models have been trained
with open source code
or code in open source repositories.
And you can think about the question: is that a fair use of code
that is open in the public? Is it a fair use, or is there not enough
in the coding model to justify labeling the model as fair use?
I think there are legal risks that have not been clarified or that do exist that organizations have to be aware of before they start adopting these types of coding models.
Yeah, this is fascinating to me because I wonder to what degree can the AI copy someone else's code
or to what degree can it be inspired by someone's code?
Is it capable of a creative act?
Exactly. Exactly.
I don't think we know enough about how these types of models have been trained to know or to make an assessment
whether it's a creative act or whether it's a copying act,
if you will.
Yeah. And we did not have any cases in the courts that we can use as a reference
to even guide us in that assessment. So what's your advice for folks who are
thinking of wading into these waters? Any words of wisdom here?
Generally, it's probably more cautious to train your own model using your own code.
So you know that you own the code and you know the derivative model out of that code.
But also make sure that it is trained using code that is secure, that obviously does not have any bugs, but also does not have any security vulnerabilities. Otherwise, you'll keep
recreating those security vulnerabilities in the code generated by these models.
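Her advice to screen the training corpus can be sketched as a filter pass that drops snippets tripping known insecure-pattern checks before they reach the model. The pattern list here is a tiny illustrative sample, nowhere near a full static-analysis ruleset:

```python
import re

# A tiny illustrative sample of risky Python patterns; a real pipeline would
# run a proper static analyzer over candidate training code, not three regexes.
INSECURE_PATTERNS = [
    re.compile(r"\beval\s*\("),         # arbitrary code execution
    re.compile(r"\bos\.system\s*\("),   # shell injection risk
    re.compile(r"verify\s*=\s*False"),  # disabled TLS certificate checks
]

def clean_corpus(snippets):
    """Keep only snippets that match none of the insecure patterns."""
    return [s for s in snippets
            if not any(p.search(s) for p in INSECURE_PATTERNS)]
```

The underlying idea is the one Ben Salem states: whatever vulnerabilities you train on, the model will happily reproduce.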
Yeah. Every time we push something out to production, it's full of back doors. I can't understand it.
Oh, yeah. It's the same issue we have to deal with.
So the more you invest on security up front, the better you are.
Yeah. All right. Well, Malek Ben Salem, thanks so much for joining us.
Thank you.

...and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Don't forget to check out the Grumpy Old Geeks podcast, where I contribute to a regular segment called Security, Ha!
I join Jason and Brian on their show for a lively discussion of the latest security news every week.
You can find Grumpy Old Geeks where all the fine podcasts are listed. The Cyber Wire podcast is a production of N2K Networks,
proudly produced in Maryland out of the startup studios of Data Tribe, where they're co-building
the next generation of cybersecurity teams and technologies. This episode was produced by Liz
Irvin and senior producer Jennifer Eiben. Our mixer is Tré Hester, with original music by Elliott Peltzman.
The show was written by John Petrik.
Our executive editor is Peter Kilpe, and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.

...through guided apps tailored to your role. Data is hard. Domo is easy.
Learn more at ai.domo.com.
That's ai.domo.com.