CyberWire Daily - The Chameleon attacks Online Social Networks. [Research Saturday]
Episode Date: February 8, 2020

The Chameleon attack technique is a new type of OSN-based trickery where malicious posts and profiles change the way they are displayed to OSN users to conceal themselves before the attack or avoid detection. Joining us to discuss their findings in a new report entitled "The Chameleon Attack: Manipulating Content Display in Online Social Media" is Ben-Gurion University's Rami Puzis. The research can be found here: The Chameleon Attack: Manipulating Content Display in Online Social Media. Demonstration video of a Chameleon Attack. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.

That's where Domo's AI and data products platform comes in. With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and
analysts tracking down threats and vulnerabilities and solving some of the hard problems of
protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
And now, a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a record $75 million payout in 2024.
These traditional security tools expand your attack surface
with public-facing IPs that are exploited by bad actors
more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust Plus AI stops attackers
by hiding your attack surface,
making apps and IPs invisible,
eliminating lateral movement, connecting users only to specific apps, not the entire network, continuously verifying
every request based on identity and context, simplifying security management with AI-powered
automation, and detecting threats using AI to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI.
Learn more at zscaler.com slash security.
So we discovered a kind of weakness in online social networks. That's Rami Puzis. He's an assistant professor at Ben-Gurion University.
The research we're discussing today is titled The Chameleon Attack, Manipulating Content Display in Online Social Media.
This is a feature that can be misused by an adversary to perform a few different kinds of scams through the social networks.
So a user could be fooled to interact with some content on social media
that can be switched later on to a different display,
a different visual representation, which actually appears to be completely different content.
So, as we say in some of our publications, you can press like on a
cute kitty, and a day after it can be switched to a movie of some terrorist organization.
I see.
And as the user, you would have no idea that this change had happened behind the scenes?
Currently, as it is implemented in the social networks, no, you would not.
Because social networks do track changes to the posts
and they do display notification if the post is edited. But through this feature,
which can be misused by an adversary, the actual post is not changed,
only the way it is displayed to the user.
Well, let's walk through it together. Describe to us what exactly is going on here. What
are these people doing to make this work? So these people post a link to a website. It can be a
website they own. It can be a redirection link, some kind of link shortener service, anything that allows them to change the target of the post, the eventual target of the
link being posted. On Facebook, they follow the redirection until the final destination.
And from that final destination, Facebook extracts the title, the preview image, and the short description of the website.
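The fields the guest describes, a title, preview image, and short description, are typically Open Graph meta tags on the destination page. As a hedged sketch of what a preview scraper reads (the sample HTML and tag values are illustrative, not from the research), this standard-library parser collects `og:*` properties:

```python
# A sketch of how a link-preview scraper might read a page's Open Graph
# tags (og:title, og:image, og:description) from the redirect's final
# destination. Standard library only; sample HTML is hypothetical.
from html.parser import HTMLParser

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}  # collected og:* property -> content

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")

sample = """
<html><head>
<meta property="og:title" content="Cute Kitten Compilation">
<meta property="og:image" content="https://example.com/kitten.jpg">
<meta property="og:description" content="Ten minutes of kittens.">
</head><body></body></html>
"""

parser = OGParser()
parser.feed(sample)
preview = parser.og  # what the network would cache and display
```

The key point for the attack is that this metadata is scraped from whatever page the link resolves to at scrape time, not pinned to the content users originally saw.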
Assume it's some YouTube movie.
The link is posted on Facebook.
Users can comment, like, interact with this post any way they like.
Later on, the user who posted the link may change the destination of this link
to point to a different web resource
or change his own website to display something different
and ask Facebook through the application interface,
through their services,
to refresh the link preview.
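The flow Puzis describes, post a redirecting link, collect engagement, retarget the redirect, then ask the platform to re-scrape, can be modeled in a few lines. This is a minimal toy sketch with hypothetical names (`redirect_targets`, `preview_cache`), not any platform's actual API:

```python
# Toy model of the chameleon mechanism: the post itself never changes --
# only the redirect target behind its short link, and the cached preview
# the network re-fetches on request. All names are hypothetical.

redirect_targets = {"short/abc": "https://example.com/cute-kitten"}

# Stand-in for the Open Graph metadata each destination would serve.
site_previews = {
    "https://example.com/cute-kitten": ("Cute kitten video", "kitten.jpg"),
    "https://example.com/bad-content": ("Something else entirely", "bad.jpg"),
}

preview_cache = {}

def scrape_preview(short_link):
    """Follow the redirect and cache the final destination's preview,
    as the network does when a link is first posted."""
    final_destination = redirect_targets[short_link]
    preview_cache[short_link] = site_previews[final_destination]
    return preview_cache[short_link]

def refresh_preview(short_link):
    """The refresh the attacker requests after silently retargeting."""
    return scrape_preview(short_link)

# The link is posted; the preview shows the kitten; users engage.
post = {"link": "short/abc", "likes": 1200}
scrape_preview("short/abc")

# Later, the attacker retargets the redirect and requests a refresh.
redirect_targets["short/abc"] = "https://example.com/bad-content"
refresh_preview("short/abc")
# The post and its likes are untouched, but it now displays new content.
```

The design point the model makes explicit: because the post record (and its engagement) is keyed to the unchanged short link, no "edited" notification fires, even though what users see has completely changed.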
Now, when you describe the attack in your research paper here,
you align the different phases of a chameleon attack
to a standard cyber kill chain.
Can you walk us through those phases?
To walk through the phases of the standard cyber kill chain,
we need to assume first some target of the attacker.
Since we have a few different kinds of attacks, let's start with the basic one. Let's say
shaming. If an adversary would like to discredit some political figure or anyone else on the web,
they should first collect some intelligence about this figure
to learn what kinds of posts this figure interacts with, likes, or comments on,
and then put a post with a link to a resource that looks appealing to the person that will
be discredited later on.
Of course, they need to attract attention of that specific person.
But this is done using the usual techniques,
either social engineering or just targeted marketing.
Once they have the attention of this person and some interaction with him in the form of comments, for example,
then the chameleon post can change the way it is displayed
and reveal its true self by pointing now to a different web resource
and then also refreshing the link preview.
So it will look like it has always pointed to that illegal
or otherwise bad web resource. Then you can attract public attention, make screenshots of
that person liking something he should have never liked.
Now, what are some of the ways that you're seeing this deployed?
What are some of the uses for it?
You just talked about shaming someone.
What are some of the other things that it's being used for?
So you can use it for more trivial things like promotion
or some kind of commercial misuse cases.
For example, one could post a link to a well-known, famous web resource,
collect likes, collect comments, collect social capital,
and then switch his already promoted post to point to a different web resource, including different preview and
different display, which will inherit all the social capital collected by the old post.
Well, which social networks are susceptible to this? And to what degree do each of them
allow this sort of thing to take place?
So Facebook is the first one. On Facebook, only the owner of the post
can modify the way it is displayed
and refresh the link preview cache.
If the post is being shared by some user,
then the shared posts are no longer affected by this manipulation.
Only the original one.
And no other user can manipulate the way the post is displayed.
On LinkedIn, anyone can change the way a link is previewed, can refresh this cache.
Of course, in order to change the display to what the adversary would like it to be,
the adversary needs to control the link.
So if I'm an adversary
and I can get you to post my links on LinkedIn,
then later on I can change the web resource
to which these links lead and ask LinkedIn to refresh the preview of these links.
So all posts that you have posted on LinkedIn with my link will now show something different.
The last one is Twitter.
Twitter generally does not allow editing tweets. Once you tweet, you tweet. You cannot modify the content of your tweet.
But it is still possible to request a refresh of the link preview.
If I'm an adversary and I can change the final destination of my links, then I can ask Twitter to refresh the display of these links,
the way they are previewed.
And anyone who tweeted this link,
his tweets will now look different.
Now, one of the things you outline in your research is an experiment that you all did.
You set up some things on Facebook looking to evade censorship in some Facebook groups.
Can you walk us through what did you do here?
Yes.
We identified several moderated groups, in this case, sport fans.
The groups were all split into fan groups of rival teams.
For example, Arsenal versus Chelsea.
Then we created several Facebook pages, some of which were such chameleon pages. We did not use profiles for this experiment in order to comply with Facebook's user license agreement and their regulations. Using these
pages, we first tried to enter the groups, with each page displaying posts with videos of the rival team.
For example, a page with a video of a Chelsea player trying to enter a group of Arsenal fans.
Of course, in most cases, it was denied.
Then a week later, the same page changed the way it looks.
It's a chameleon page. So it adapts to the new fan group and all the movies are now supporting
the right team. And we tried to apply to the same group again. And of course,
the pages were accepted this time.
I could see this going the other way, where you could post things that were attractive to the
members of the group and then after the fact, change it to something that was controversial.
And that's one of the things you describe here. That's not what you did in your test.
Yes, of course, we didn't do that. We did not interact with any members of the group.
We did not post at them. We did not comment or in any way interact with human accounts.
Again, in order to comply with Facebook's rules and also with the university's ethics committee requirements. In very few cases, we did interact
with the group moderators since we had to answer their questions. And by the end of the experiment,
we notified all the group owners that the experiment had taken place and explained its consequences.
Now, what are your recommendations for folks to mitigate this?
Well, the mitigation is first by the social networks themselves.
For Facebook and Twitter, this is a very easy tweak to do
because both networks already maintain a link inspection service. They have
URL blacklists and they do mark
websites as suspicious and so on.
So it is very easy for them to
display some notification that
the link preview was changed
and also maintain a history of these changes
the same way that Facebook maintains a history of changes to the post.
For LinkedIn, it will be a little bit harder
because currently they do not use their own link shortener service,
but they can also track any changes performed
to the link previews using their service.
And in this case, they will be able to display a notification.
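The mitigation described here, keep a history of link-preview versions and notify users when a refresh changes the display, can be sketched as a small tracker. This is a hypothetical illustration (names like `record_preview` are invented, not any platform's service):

```python
# Minimal sketch of the proposed mitigation: hash each scraped preview,
# keep a per-link history, and flag any refresh whose preview differs
# from the last recorded version. All names are hypothetical.
import hashlib
import time

preview_history = {}  # link -> list of (timestamp, preview digest)

def record_preview(link, title, image_url, description):
    """Record the current preview; return True if it changed since the
    last recorded version (i.e., a notification should be shown)."""
    digest = hashlib.sha256(
        "\x1f".join((title, image_url, description)).encode()
    ).hexdigest()
    history = preview_history.setdefault(link, [])
    changed = bool(history) and history[-1][1] != digest
    history.append((time.time(), digest))
    return changed

# First scrape: nothing to compare against, no notification.
first = record_preview("short/abc", "Cute kitten", "kitten.jpg", "Kittens.")
# Attacker-triggered refresh with different content: flag it.
second = record_preview("short/abc", "Other thing", "bad.jpg", "Not kittens.")
```

Storing only digests keeps the history cheap, while still supporting the "preview was changed" notification and an auditable change log, the same pattern Facebook already uses for edited post text.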
For users, just watch your likes and use them with caution.
Like and comment only on links and posts that you trust.
But, you know, that's the usual recommendation to anyone
to be afraid of phishing attempts or any social engineering scam.
Is there anything in particular for group moderators,
some things that they can look out for?
That's a tough question.
A user who would like to investigate and inspect a profile or a post
can use the social network APIs to see
the history of changes to the link previews.
Now, if the chameleon post has never been activated so far, they will not see such a change.
They will only see its initial disguise, and it will be hard to anticipate whether it will ever change.
On the other hand, if you see that the link that was posted
leads to some IP address rather than a well-known domain,
that's a suspicious indication in the first place.
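That raw-IP heuristic is easy to check programmatically. A minimal standard-library sketch (the function name and sample URLs are illustrative):

```python
# The "link points to a raw IP rather than a domain" red flag the
# researcher mentions, as a small standard-library check.
import ipaddress
from urllib.parse import urlparse

def host_is_raw_ip(url):
    """Return True if the URL's host is a literal IPv4/IPv6 address."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

suspicious = host_is_raw_ip("http://203.0.113.7/promo")   # raw IP host
benign = host_is_raw_ip("https://example.com/promo")      # normal domain
```

On its own this is only an indicator, not proof of a chameleon post, but combined with a preview-change history it gives moderators something concrete to screen incoming links against.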
Our thanks to Rami Puzis for joining us.
The research is titled The Chameleon Attack,
Manipulating Content Display in Online Social Media.
We'll have a link in the show notes.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data,
and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.

The Cyber Wire Research Saturday is proudly produced in Maryland
out of the startup studios of DataTribe,
where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing Cyber Wire team is Elliot Peltzman,
Puru Prakash, Stefan Vaziri, Kelsey Bond,
Tim Nodar, Joe Kerrigan, Carol Terrio, Ben Yellen,
Nick Valecki, Gina Johnson, Bennett Moe, Chris Russell,
John Petrick, Jennifer Iben, Rick Howard, Peter Kilpie, and I'm Dave Bittner. Thanks for listening.