Risky Business - Risky Biz Soap Box: Cool compliance tricks with the Island enterprise browser
Episode Date: December 20, 2024In this sponsored Soap Box edition of the show Patrick Gray talks to Island CEO Michael Fey about some of the cool tricks in the Island enterprise browser. You can use it to tick off so many compliance boxes, and not just cybersecurity boxes. This is largely a conversation about compliance, but it’s actually interesting and fun. These are words we never thought we’d type! You can find Island at https://island.io/ This episode is also available on YouTube. Show notes
Transcript
Hey everyone and welcome to another Soapbox edition of the Risky Business Podcast. My name's
Patrick Gray. These Soapbox editions of the show are wholly sponsored and that means everyone you
hear in one of them paid to be here and today we're speaking with the Chief Executive of
Island which is a company that makes an enterprise browser. Now they've appeared on the show a bunch
of times before, and we've spoken about the, you know, the general shape and gist of the product,
but there's a bunch of benefits, right? So this is an enterprise browser with a full enterprise
feature set, unlike the consumer browsers or the consumer based browsers that most enterprises
are using. But, you know, you use Island, you get some benefits like you can do endpoint health checking,
you can do secure app delivery, pegged to that browser, you can do DLP stuff, you can do cut
and paste restrictions that are quite granular. So you can cut and paste between these apps,
but you can't cut from this or copy from this app and paste it outside. And there's file system
restrictions and just really, really cool stuff and granular controls. But something that's happening with them lately
which I find really interesting is increasingly people are buying them to tick off various
compliance objectives and not just cyber security objectives. In this interview you're going to hear
Michael talk about how people are able to comply with like labor
regulations by using an enterprise browser which I know technically isn't a cyber security thing
but I just found it so interesting that I kind of zeroed in on this part of the conversation in the edit. So here is Michael Fey to kick things off, talking about how companies are ticking off compliance requirements with the Island enterprise browser.
Enjoy. Why don't we start with what we can do, and then I think it'll be easy to see
where you get used. We can literally show you every click, every type, at any given moment,
in any given policy structure. So what does that really mean? You are an IT worker. We don't care about 90% of
what you do from a compliance perspective, but you just logged into Amazon and you're setting
up new users. I can literally track everything you do in that process. And I can send that to
the SIEM or your S3 bucket or wherever else you need that. So I can grab data at a very intricate level, as opposed to having to do it in every application. So you could do things like PAM; you can literally put, you know, privileged account management on an app that a CyberArk doesn't even know exists. You literally could put that control in place. I can make it so an end user at a call center, if they want to access something, they've got to put in a reason. They have to, you know: I'm about to go look at your last purchases.
Wait a second.
We shouldn't have to do that.
Well, in this case I do.
Here's what I have to do from a compliance perspective to be allowed to do that.
I put in the reason I attach the ticket.
It's now recording everything I do in there.
And then I step off and the end user gets a warning.
This is all being recorded.
So it keeps them inside the bounds of that.
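To make the shape of that flow concrete, here is a minimal, purely illustrative sketch in Python of a just-in-time access check like the one Michael describes: a reason and a ticket are required, the session is flagged as recorded, and an audit event is produced. The names and fields are invented for illustration; they are not Island's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user: str
    app: str                # e.g. "aws-console"
    reason: str             # free-text justification the user types in
    ticket_id: str | None   # change/approval ticket attached to the session

def evaluate_privileged_access(req: AccessRequest) -> dict:
    """Hypothetical policy check for a privileged app behind the browser.

    Mirrors the flow described in the interview: the user supplies a reason
    and a ticket, the session is marked as recorded, and the decision is
    emitted as an audit event that could be shipped to a SIEM or S3 bucket.
    """
    if not req.reason.strip():
        return {"allow": False, "error": "justification required"}
    if req.ticket_id is None:
        return {"allow": False, "error": "approval ticket required"}

    audit_event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": req.user,
        "app": req.app,
        "reason": req.reason,
        "ticket": req.ticket_id,
        "session_recording": True,   # every click and keystroke captured
        "user_warned": True,         # banner: "this session is recorded"
    }
    # In a real deployment this event would be forwarded to the customer's
    # SIEM or object storage; here we just return it.
    return {"allow": True, "audit": audit_event}

if __name__ == "__main__":
    req = AccessRequest(user="alice", app="aws-console",
                        reason="Provisioning new IAM users", ticket_id="CHG-1042")
    print(evaluate_privileged_access(req))
```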
Also, we do a lot of compliance around data flow. You know, how do we ensure that this data can't be seen, this data can't be taken, this data can't be accessed? We can govern all of that. Who can get to what from where and what
geography they could get there? And, you know, are my people in Germany getting to the German Salesforce, or are they being redirected to the U.S. Salesforce? When they fly to the U.S., which Salesforce do they connect to? We can manage all of those elements, and it's just extreme visibility as needed.
And how do you manage that, though? Sorry, because that's an interesting one... you know, that's a use case for, like, GDPR purposes that I wouldn't have thought was immediately obvious. Which is, I am a German person working for an insurance company or whatever. You know, I go to the United States for a conference, and then just via their CDN or whatever, the way that they distribute resources, I'm winding up kind of logging into the wrong, you know, the wrong portal. How do you actually enforce that in the browser when a lot of this is done via the sort of content distribution networks for those apps? Because that's going to be tricky.
So we have tenant control, to start with. The trickiness you speak of, which I've dealt with for a bunch of my career, was based in the networking aspect of it. If we want to force a connection back to a given location, we have the ability to do that with policy very easily.
Where it really shows up is: which G Suite am I going to? Say I'm an organization and I want to bring in G Suite. Am I going to your personal G Suite or my corporate G Suite? Those are very different tenants to us. The German Salesforce is different than the US Salesforce. They are literally, fundamentally, different links. We can direct that and control that, and the end user never has to understand that. And we can also then control what data they see.
So hang on, that's by domain, is it?
It can be. It can be by domain. We can do direct IP addresses. But we also have network control. We literally have the ability to do a ZTNA connection and take them back to their data center if we want, or take them back to, you know, source IP addresses.
So is that via like some sort of what we would call like an identity aware proxy sort of thing,
and you peg certain URLs to those proxies? That makes sense.
Yeah. But I will tell you, it's shocking how little you actually have to use the proxy
chaining when you have actual control of the real URL.
Yeah. I just wondered, because I would have thought, though, when I was talking about CDNs, you know, it's obviously... you're going to get different IPs back for google.com in Australia than you're going to get in Germany or whatever. So I just wondered if you were sort of pegging DNS results to the country of origin, or how that all worked.
Yeah. So most of the companies that really care about this, they actually have a set of privileged IPs they use that you call back to.
Yeah, okay.
So, like, they don't really go directly to Salesforce, they go to a set of IPs. So we can do that, but we also have our own thing that, you know, sits inside the DMZ, and we can call that to get access back to data centers.
And I'm guessing what you're saying is that if a user just tries to enter
like salesforce.com or whatever,
it just redirects them or whatever.
Yeah.
Yeah. Okay.
So that's pretty simple.
Like it's just redirecting them
to the customer's domain.
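As a rough illustration of that tenant-steering idea (not Island's actual mechanism), a policy could map public SaaS hostnames to the tenant the organisation has approved and rewrite navigation before it happens. All URLs below are made up.

```python
from urllib.parse import urlparse

# Hypothetical mapping (invented URLs): public SaaS entry points -> the
# tenant this organisation has actually approved for its users.
APPROVED_TENANTS = {
    "salesforce.com": "https://acme-de.my.salesforce.com",       # German Salesforce org
    "login.salesforce.com": "https://acme-de.my.salesforce.com",
}

def rewrite_navigation(url: str) -> str:
    """Return the URL the browser should actually load.

    If the user types a bare SaaS domain (or a CDN hands back the "wrong"
    regional front end), send them to the approved corporate tenant instead.
    """
    host = urlparse(url if "://" in url else f"https://{url}").hostname or ""
    for known, tenant in APPROVED_TENANTS.items():
        if host == known or host.endswith("." + known):
            return tenant
    return url  # not a governed app: leave it alone

print(rewrite_navigation("salesforce.com"))            # -> the German tenant
print(rewrite_navigation("https://example.org/page"))  # -> unchanged
```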
But let's get a little trickier.
So now you're dealing with a French organization
and yes, you have the geo,
but they can't work past 4:30. How do I govern that? And any interactions... we can't record what they do, we can only record their violations. But when you go over to the UK, we can record everything they do, and they can run all day long, and they're in the same, you know, Amazon bucket. How do you set policies for these different flavors? We give you really strong dexterity on that, where even inside of your own company, your own geography, you can pick and choose how all this stuff works, as opposed to a one-size-fits-all model. Identity-based policy is really hard to do at the network layer.
Really?
Yeah, it is. We can do it at the endpoint and really change the way that works.
So you can.
Well, it's hard to do in the IDP layer as well, right?
Because you sort of get issued with tokens that grant you access.
But my mind is boggling a little because you just said French people can't work past 4:30.
What's that?
So we actually have a French customer whose edict is that access to their applications has to shut down at that time, because there's governance on the hours of work that they allow their employees to do, and they literally stop the ability to access their applications at a certain time of day.
Why do they do that?
According to them, it's a government regulation for them of how many hours they can have in a work week.
Are they in a sort of specialized, ultra-regulated industry, or is this just... I mean, you know, are you telling me the French can't work past 4:30? Because that'd be pretty cool, you know.
Not all... well, hey, listen, there are obviously French people that do, but the French have governed hours of work a week, and you don't take a salaried employee and have them work 80 hours like we do in the U.S. and just say, you're salaried, it is what it is. And they often have to prove that that can occur. Now, I'll tell you, we have this in the U.S. too, in a different way.
We have a wonderful customer that is trying to do really right by their employees.
They're a fast food restaurant or maybe they think of themselves as not fast food,
so fine food or whatever it is.
And their workers can go out on their portal and look at the other stores and take hours at those other stores if they want, if they want more hours, right? They don't have to just get the hours at their local store. And they did this to try to make sure their workers could get to full time as fast as possible, going everywhere they want. But if they spend more than 10 minutes doing that, you have to pay them for the full hour in California. Yeah. So here they were trying to be nice, and then they hit this roadblock that says, if you spend, you know, a while clicking around here, we have this problem. We govern that relationship.
Yeah. So you kick them out at nine minutes and 45 seconds or whatever.
Yeah, exactly. And, you know, that's something that company needed. And what they need next year will change as policies change. Maybe that goes across the board, maybe it goes away, and they want that dexterity. The number of bizarre rules that end up in this is amazing.
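Both of those rules, a hard stop on working hours in one geography and a duration cap on one portal in another, are just time conditions attached to an access decision. A hedged sketch of how such conditions might be expressed, with invented policy fields rather than Island's policy syntax:

```python
from datetime import datetime, time, timedelta

# Invented policy records, one per geography / application.
POLICIES = [
    {"geo": "FR",    "app": "*",            "cutoff": time(16, 30), "max_session": None},
    {"geo": "US-CA", "app": "shift-portal", "cutoff": None,
     "max_session": timedelta(minutes=9, seconds=45)},
]

def is_allowed(geo: str, app: str, now: datetime, session_length: timedelta) -> bool:
    """Evaluate the time-based conditions described in the interview."""
    for p in POLICIES:
        if p["geo"] != geo or p["app"] not in ("*", app):
            continue
        if p["cutoff"] is not None and now.time() >= p["cutoff"]:
            return False   # e.g. French working-hours regulation
        if p["max_session"] is not None and session_length >= p["max_session"]:
            return False   # e.g. kick out before the 10-minute wage threshold
    return True

print(is_allowed("FR", "salesforce", datetime(2024, 12, 20, 17, 0), timedelta()))               # False
print(is_allowed("US-CA", "shift-portal", datetime(2024, 12, 20, 12, 0), timedelta(minutes=8)))  # True
```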
I'll give you another one.
We got traders and investment teams that can't view LinkedIn because LinkedIn has messaging
and it's not audited and it's not tracked. So they can't.
So imagine you're investing people's money and you can't go out on LinkedIn and see who quit
from that company that day, which would be a telltale sign of what's going on, where executives
move and where they go. So what are they doing? Well, we all know they're going to see it.
They're taking the risk of having that interaction on their own device outside of the jurisdiction
of the SEC and everything else.
They have to be. They're not staying dumb to that information.
We make it so they can go to LinkedIn, but we record that.
If they do happen to have a chat, we can record that messaging or we can block the messaging
in the first place.
Something you can never do at the network layer, figuring out which stream that is.
That actually has been used a lot.
We've had others that want to do Zooms and they want to make sure no one can do screen captures
on the Zoom. Nobody can upload a document or download a document or put something in the chat.
And that last-mile control shows up in so many ways that aren't cyber, it's amazing what can be
done with that. Yeah. I mean, you're actually really selling it on the compliance stuff because, you know,
just those use cases that you're talking about in finance,
I mean, eventually, you know,
that can pop up in investigations, right?
Like you're being investigated by the government.
There's a few screen recordings lying around,
you know, that's not good.
They shouldn't have been taken in the first place, and they can incriminate people. And even if they're not incriminating, they shouldn't have been taken in the first place and, like, incorrectly stored and all of that, right?
So, you know, Patrick, one of the things we added on, because a customer built it and we made it part of our product after we saw it: they put a QR code in the watermark.
And the QR code would say what machine, what user, you know, what was going on when that QR code showed up.
So if you pull out your phone and you try to take a picture of your screen and they find it on the dark web or whatever, they know everything about the leakage.
Yeah, it's like a printer marking, right?
Yeah. And then there's the other watermark that just has your name and your phone number and all that stuff on it. So if you take a picture of it, you know, you'd better be really good with Photoshop, because, uh, you know, you're going to out yourself when you share that.
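The QR watermark idea is easy to picture: the overlay encodes enough session metadata to identify the source of a leaked photo. A toy sketch, with assumed payload fields; a real implementation would render this as an on-screen overlay rather than print it.

```python
import json
from datetime import datetime, timezone

def watermark_payload(user: str, device: str, app_url: str) -> str:
    """Build the string a QR watermark might encode: who, where, when.

    If a photo of the screen later turns up somewhere it shouldn't,
    decoding the QR code identifies the leaking session.
    """
    return json.dumps({
        "user": user,
        "device": device,
        "url": app_url,
        "shown_at": datetime.now(timezone.utc).isoformat(),
    })

payload = watermark_payload("alice", "LAPTOP-4411", "https://crm.example.com/accounts")
print(payload)
# A library such as `qrcode` could then render the payload as the overlay image:
#   import qrcode; qrcode.make(payload).save("watermark.png")
```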
Yeah, that's interesting. So, I mean, in terms of, like, the industries where this sort of compliance stuff lights people up, right, because I can imagine that's how the conversation goes... like, you know, the case you just laid out for, you know, trading houses and whatever, I can imagine that's a pretty easy sell. Are there other industries where it's as much of a slam dunk?
Yeah, so healthcare, as they're struggling with Teladoc and everything around that. And, you know, I didn't know this when I first started working with them: almost every doctor is a contractor.
I mean, it's the same here, right? They all run like a company, basically, in Australia, or a trust.
And if you require too much stuff of them, they're in such demand, they just won't work with you. They'll work with another set of providers and this and that. So you want to make it easy, but you have to obviously defend your patients. So there's that, the healthcare world. State and local's got a lot of governance to it. Federal's got governance to it. But I will tell you what's really shocking: many industries that you don't think of as compliance-centric, we get purchased for because
they have their one thing they've got to do.
They've got this one area that's a burden for them and they don't want to adopt a massive
effort to solve that.
So maybe you're a hotel chain and you've got customer data that GDPR cares about, but it sits in one place.
If you just govern that one place, you took that entire risk off your back. So they don't have to,
if you will, become a bank or become a PCI juggernaut or anything else. They can solve
that one problem. We see that come up a lot. So
it's not just these highly regulated industries that care. And then of course they get their security upgrade
and then we get automation and it just, it becomes one of the many tipping points for kind of
crossing that chasm. Yeah, it makes a lot of sense. Now, look, another thing that's popped up over the
last couple of years as an issue, and you're not the only company that's looking to address this, right, which I find interesting, is the idea of company staff just plugging all sorts of sensitive information into GenAI chatbots. And I know people who do this, right? Like, I know people who use ChatGPT as, like, a coding assistant. Like even my colleague Adam Boileau, he just re-engineered and rewrote the backend to our content management system. He's not a developer, and he used GenAI for suggestions in various places. And he said it was actually quite useful. He said you don't just take its output and use it, because it's always, like, you know, a bit hinky in some ways, but nonetheless, he found it quite useful. Now that's fine because we're a small business writing a, you know, content management system or whatever, but you do have to worry, if you're running, you know, a large, tightly regulated organization, about what your staff are slapping into those models, which are then going to use those inputs and those queries to train their data sets.
And you don't know how that's going to fall out again when someone else asks it a question.
So, I mean, this is the thing.
Like at this point, the risk I think is largely theoretical.
It's just that we don't know how that's going to go.
So you have had a lot of customers who are quite concerned with controlling the way people are using those sort of services, right?
Yeah, we do. We get a lot of-
And how do you do that? Is it granular or do you just block them from being able to use those
services? Because I'm aware of other companies where you can just block it and just say, no,
you can't use that. Or is it a case where you're trying to do inspection of these prompts?
Yeah. So the blocking is a terrible approach, because the blocking comes with this idea that you understand where AI is going to show up in your world. And that's just not true. Is Salesforce an AI company or not? I don't know. If I watch the commercials, they are. So what do I do there? Do I block?
Everyone's an AI company. Everyone who's publicly listed now is an AI company, because they want to make the little line go up.
Exactly. And they're going to have real AI products. And when do you run into them? And is it obvious, right? So the idea that I'll just block AI and, you know, we'll allow these two AIs... okay, good luck. So step one that we provide is that shadow IT: what AI are you actually running into? Which is usually shocking. When we ran it in our own company, we ran into about 200 AI engines, for a 400-person company.
Okay, great, so a new shadow IT problem.
So what are we even running into? Then we can govern, of course, which ones you can go to. But we also can steer, which I think is very important. If you want to go to one AI engine, let's say some legal engine, but you have another one the company has licensed, prefers, has governance over, we can redirect them at that moment.
So not just, like, a standalone model, that's not going to refeed the training set for everybody.
Exactly. So we can redirect, and we see you're trying to go here, here's where we prefer you to be, over here. That's the first step.
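Both steps Michael describes, discovering which AI tools people already touch and then steering them to the sanctioned one, fall naturally out of seeing every navigation in the browser. A minimal sketch, with an invented domain list and a made-up sanctioned endpoint:

```python
from collections import Counter
from urllib.parse import urlparse

# Invented examples of AI-ish domains; a real catalogue would be far larger.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "copilot.microsoft.com"}
SANCTIONED_AI = "https://ai.internal.example.com"   # the engine the company licenses (made up)

def observe(navigation_log: list[str]) -> Counter:
    """Step one: tally which AI services users actually hit (shadow AI discovery)."""
    hits = Counter()
    for url in navigation_log:
        host = urlparse(url).hostname or ""
        if host in KNOWN_AI_DOMAINS:
            hits[host] += 1
    return hits

def steer(url: str) -> str:
    """Step two: redirect unsanctioned AI engines to the one the company governs."""
    host = urlparse(url).hostname or ""
    return SANCTIONED_AI if host in KNOWN_AI_DOMAINS else url

log = ["https://chat.openai.com/c/abc", "https://claude.ai/new", "https://news.example.com"]
print(observe(log))                       # which AI engines people are actually using
print(steer("https://chat.openai.com/"))  # -> https://ai.internal.example.com
```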
Then we can apply DLP policies against whatever you do on any of those items. So what are you
saying into it? What items are in there?
Long-term, though, and this is pretty funny, we've got to build an AI engine.
So I was about to say, if you want to be able to do detection on content types that are being fed into prompts, you're going to need to do, like, a detection that says, is this a block of code? And do we want to allow that? And the way you're going to do that is with an LLM, right? So you're actually using AI to inspect prompts to make sure they're okay to put into the AI.
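That inspect-the-prompt-before-it-leaves idea can start much simpler than a full LLM classifier. Here is a hedged first-pass sketch using a couple of crude heuristics for "looks like source code" or "mentions a sensitive term"; the terms are invented examples, and a production system would layer a model on top of this.

```python
import re

# Crude, illustrative heuristics only; real classification would use a model
# tuned to what counts as confidential for this particular company.
CODE_HINTS = re.compile(r"def \w+\(|class \w+|import \w+|#include|SELECT .+ FROM", re.IGNORECASE)
SENSITIVE_TERMS = ("project aurora", "deal flow", "term sheet")   # invented examples

def inspect_prompt(prompt: str) -> dict:
    """Decide whether a prompt may be sent to an external AI engine."""
    findings = []
    if CODE_HINTS.search(prompt):
        findings.append("contains source code")
    lowered = prompt.lower()
    findings += [f"mentions '{t}'" for t in SENSITIVE_TERMS if t in lowered]
    return {"allow": not findings, "findings": findings}

print(inspect_prompt("Summarise this meeting about Project Aurora"))  # blocked
print(inspect_prompt("What's the weather in Sydney?"))                # allowed
```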
And it's so unique to a company. You can imagine a banker that deals with deals and deal flow and buying and selling companies. What is important to them? And they're the ones, I think, that are kind of leading this charge of: you might learn what we're doing. And if you have a JPMC, for instance, if they were using AI with abandon, and they don't, they're very governed, you might be able to see where JPMC is investing or what customers they're helping or who's being sold, you know, because they're in everything.
Yeah. Very scary, and it can move markets, and there are tons of SEC violations hidden in there if it ever happened.
So in that scenario, mentioning, say, my company could be perfectly fine for American Airlines, you know: hey, we're talking with Island about, you know, a standard travel agreement. Great. But if JPMC, who's one of our bankers, starts talking about what we're doing, that could be, literally, you know, a massive issue. So understanding that risk is going to be company by company, and I think, you know, you're going to need that AI engine to start to understand what is confidential for our company. Every company's different.
Yeah. And I just think that that is an interesting thing, like,
because a lot of people are using things like ChatGPT as a replacement for search engines.
So, you know, if you're sitting upstream and you somehow had visibility into the queries
coming out of a, you know, out of a large bank, which all of a sudden a lot of people
are asking AI chatbots, hey, tell me about Island.
You know, what do we know about their market share and whatever?
Like that's going to be valuable and sensitive information.
Valuable.
And I tell you, I mean, I use it to create PowerPoints.
Yeah.
And the wrong PowerPoint getting out could be very damaging.
So you have to be smart about how you use it.
And some governance definitely helps.
But I think pointing people to the right models is helpful.
Guiding them to it is helpful.
Well, how do you handle that?
I mean, I'm guessing you've got a licensed sort of standalone model to do that.
Yeah.
So we literally, we can embed whatever engines you want and literally make it part of the browser itself. We modify the buttons, click here.
I mean, I meant you personally, when you're doing your PowerPoints. I'm guessing, what, you've got a sort of private license off to some, you know, standalone model, like an Anthropic or something?
Yeah. I don't use the help for anything that would be confidential today.
Yeah, okay, right.
So it's self-governed. It's very simple for me.
But I will tell you, if you're a software company and you sign the contracts we sign about data residency and where your data can go and the like, it is very hard to operate anything customer-related with an AI engine and live inside what you've already agreed to contractually.
Now, could you get caught?
Probably theoretical, difficult.
But, you know, Grammarly breaks half of those contracts, right?
Yeah.
You know, much less.
And I think the usage of things like ChatGPT, it is like Grammarly now.
Hey, make this sound better.
You know, give me a better graphic on this.
So the data risk is very, very real long term.
Yeah.
It's funny that you say that.
I mean, one of the things that I find extremely frustrating actually is having spent a large part of my life actually writing professionally.
I find it immensely frustrating that people use these models to sound better because everything that comes out of these models sounds the same and they don't really write well.
So it's a pet peeve of mine.
Yeah.
I... you are starting to... you can feel it when ChatGPT's on the other side, you know. You get an email and you're like, this is weird. You can feel it when somebody ran it through those engines.
Oh, a hundred percent. Like, you know when it's generated text, you know?
Um, so look, one last thing I wanted to talk to you about, and this is something that's come up previously when I've spoken to Island, is you always talk about how, uh, you know, in the case of an M&A, using Island lets people work quickly together and whatever, and it's, uh, you know, less of a disaster to start integrating networks and whatever. I wanted to get some more detail on that, because I'm not exactly clear what the use case is. I mean, I'm guessing, okay, if you need to get into someone's SAP or whatever, all of a sudden this entire new set of external users needs to get into some of those applications. I'm guessing that's where it's useful. Is that about right, or is it, you know, am I missing something?
No, no, you're getting it. I think of it this way. If you trust us for BYOD
or contractors and you just bought this company and you are a
well-run company yourself with good security and good IT enablement, you look at that company
you have to integrate and go, okay, I got to go figure out what they have, what their
risk is, what issues they've got before I bring them into the tent, right?
I'm not going to risk my entire company over this thing we just bought.
So they keep it at arm's length. But that company that just got bought wants to sell your products,
wants to get into, wants to be able to be on the same email, wants to be able to communicate with
you, wants to start to be part of that company. So treating them like a BYOD user, a contractor,
is a simple way to solve this problem immediately. So you can literally say, all right, you can get to our messaging platform day one, and we're going to inspect your devices, check they're what we think they are, that you're configured correctly, and we're going to give some data controls around that. So you can quickly offer up applications to share. So SAP is a great example. You know, you need to be on my SAP. Great. We can give you access to that, but I don't have to bring your entire company over. I don't have to bring your device into my network. I can still be suspicious of your setup while we collaborate. And that's really what it's about. It's not about a long-term posture. It's about how to get an acceptable short-term posture while those IT teams figure out how to become one company.
So people spin up Island to get the initial access going, and then gradually those networks are joined, and then what, they revert back to their other browsers, often?
They don't. That's the beautiful part.
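In other words, the acquired company's users get treated like contractors: a posture-checked device and a short allowlist of shared apps. A rough sketch of that gate, with invented posture fields and app names:

```python
from dataclasses import dataclass

# Apps the acquiring company is willing to share on day one (invented names).
SHARED_APPS = {"email", "chat", "sap"}

@dataclass
class DevicePosture:
    disk_encrypted: bool
    os_patched: bool
    screen_lock: bool

def can_access(app: str, posture: DevicePosture) -> bool:
    """Day-one access rule for users from a newly acquired company.

    They are treated like BYOD/contractor users: only allow-listed apps,
    and only from devices that pass a basic health check.
    """
    healthy = posture.disk_encrypted and posture.os_patched and posture.screen_lock
    return app in SHARED_APPS and healthy

print(can_access("sap", DevicePosture(True, True, True)))        # True
print(can_access("hr-system", DevicePosture(True, True, True)))  # False: not shared yet
print(can_access("email", DevicePosture(False, True, True)))     # False: fails posture check
```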
Hey, speaking of right, we've seen Google actually launch their enterprise browser,
which looks, you know, it's very different to what you're doing. Yes. You know, any thoughts
on what they're up to?
I mean, it seems like they're trying to deliver different objectives with this.
Yeah.
The Google Enterprise browser has been around since before Island, right?
And what it really, I think...
But they've kind of relaunched it and added features, right?
Yeah.
But I think really at the heart of it was there's this massive estate of Chrome browsers at any company.
How do I ensure configuration is optimal?
How do I ensure that our settings are right?
How do I set general policy for this giant estate?
Doing that with, you know, BigFix and all these weird controls was very kludgy and very challenging. And I think Google set out to do that. Then they had their BeyondCorp vision, which has been around for, I don't know, a decade and a half or whatever. I can't remember when it showed up, but it seems like it was always there. Um, how do I engage in that BeyondCorp? How do I make sure our traffic can plug into all these other items and open up that integration? And so it's, it's actively trying to be a little more collaborative with the enterprise estate as opposed to ambivalent to it.
Yeah. These are all positive things. They're minor features of our product, but they are positive for
the environment to have those. So it's, it's not that they're bad ideas or not needed. You could just do a lot more. There's a lot more.
Yeah. Yeah, I mean, that was the impression I got as well when I looked at it, which is, okay, you know, they're sort of dragging these things kicking and screaming into a more modern age when it comes to configuration management, yeah, but they're not offering... it is blanket configuration, like, turn off copy-paste.
Yeah. So we did this little game for a company to understand the value of having dexterity on
so we did this little game for for a company to understand the value of having dexterity on
copy paste. Like for us, we can control copy paste and not only what you can copy, what you
paste, but where it goes. So we could say these seven applications can share data in copy and
paste between them at will. If you have blanket copy and paste, you're going to turn it off.
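A small sketch of that kind of clipboard rule, with made-up application names: copy and paste flows freely inside a trusted group of apps, but data can't be copied out of the group.

```python
# Invented group of applications allowed to exchange clipboard data freely.
TRUSTED_GROUP = {"salesforce", "outlook", "sap", "sharepoint", "teams", "jira", "confluence"}

def clipboard_allowed(source_app: str, destination_app: str) -> bool:
    """Allow copy/paste within the trusted group; block copying data out of it."""
    if source_app in TRUSTED_GROUP:
        return destination_app in TRUSTED_GROUP   # corporate data stays inside the group
    return True  # pasting from an untrusted app into anything is not restricted here

print(clipboard_allowed("salesforce", "outlook"))         # True: both in the group
print(clipboard_allowed("salesforce", "personal-gmail"))  # False: data would leave the group
print(clipboard_allowed("wikipedia", "salesforce"))       # True: inbound paste is fine
```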
That even applies to the app you're in. So imagine working in Salesforce or something where you can't copy and paste; you can't go and get your contact and put it in your email, and all this other stuff. So for a couple of hours we did that. We turned our policy to mimic that; we planned to do it for half the day. We made it about 90 minutes before the team just, literally, you know, destroyed us. It made us turn it off because they couldn't work.
They couldn't.
Yeah.
And that's what happens with these blind policy updates.
The reality is it is nice to get a common configuration across the
environment right up until somebody says,
but what about me?
And it has to be different.
And now you're outside of that control and now you're yet another
variant.
And so it's, it's a step in the right direction, but the dexterity isn't there to really be
used by enterprises at large.
And that's why you don't see it around too much.
All right.
Well, Michael Fey, we're going to wrap it up there.
Thank you so much for joining us for this, the last Soapbox edition of 2024.
Have a great Christmas and a great New Year's.
And we'll chat to you again next year.
Thank you very much.
That was Michael Fey there from Island, big thanks to him
for that and that is it for
this edition of the Soapbox Podcast
I do hope you enjoyed it, you can find Island at
island.io
if you want to go and get some more information and maybe
take the browser out for a bit of a spin
but yeah that's it for this podcast edition
I do hope you enjoyed it. Until next time,
I've been Patrick Gray. Thanks for listening.