Big Technology Podcast - Is Generative AI a Cybersecurity Disaster Waiting to Happen? — With Yinon Costica
Episode Date: September 24, 2025Yinon Costica is the co-founder and VP of product at Wiz, which sold to Google for $32 billion in cash. Costica joins Big Technology Podcast to discuss the extent of the cybersecurity threats that gen...erative AI is creating, from vulnerabilities in AI software to the risks involved in “vibe coding.” Tune in to hear how attackers are using AI, why defenders face new asymmetries, and what guardrails organizations need now. We also cover Google’s $32 billion acquisition of Wiz, the DeepSeek controversy, post-quantum cryptography, and the future risks of autonomous vehicles and humanoid robots. Hit play for a sharp, accessible look at the cutting edge of AI and cybersecurity.---Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016bQuestions? Feedback? Write to: bigtechnologypodcast@gmail.com 00:00 Opening and guest intro01:05 AI as a new software stack04:25 Core AI tools with RCE flaws06:18 Cloud infrastructure risks09:20 How secure is AI-written code13:54 Agents and security reviewers17:38 How attackers use AI today22:09 Asymmetry: attackers vs. defenders32:36 What Wiz actually does40:11 DeepSeek case and media spin
Transcript
Discussion (0)
What type of novel security threats are emerging as AI advances?
Let's find out with WIS co-founder, Enon Kostika, who is here in studio to speak with us about what's happening.
Yon, great to see you. Welcome to the show.
Thank you, Alex.
All right. So AI is doing amazing things. It is producing lines and lines of code for engineers and helping people build things faster than they ever could before.
The other side of it is, it's helping, I imagine, bad guys produce lines and lines.
lines of code and attack faster than they ever could before. Now, you are the co-founder of WIS,
which is in the middle of a sales process or selling to Google for $32 billion. Correct.
Okay. So you're the perfect person to have to discuss this because WIS is a cybersecurity
company. We've never had a cybersecurity expert like you on to talk about what's happening,
especially as generative AI rises. So just give us a little bit of a state of play here in terms of
what this explosion of the ability to code has done to cybersecurity.
Yeah, it's interesting.
I think the ability to code is just one aspect of AI.
When we think about AI as a whole, first I'm thinking about a whole new stack that is created.
We are now at an era that is the big bank of technologies, and you are reinventing a whole
array of capabilities, technologies that they are being brought in play, whether it's
the prompts, the model, the information.
infrastructure, the platforms, and they're all playing together in order to allow customers to leverage
AI. AI can be used, let's say, as an employee-based, like a chat GPT query. It can be as part of a
SaaS, like in cursor in GitHub copilot, and it can be your own developed AI, like as an enterprise,
you're starting to develop applications. All of these are leveraging these new technologies.
Now, as with any new technology, it's based on software. And software by itself,
is obviously
can have vulnerabilities. So when we think
about AI, first we need
to understand that it's code
and code has vulnerabilities like
any other software that we have shipped before
and it's interesting
just a few weeks ago
there was point to on, you know?
No point on. It's an amazing event
where they bring together the best researchers
and this year they had for the first
time the AI category. What does
it mean? AI category.
They are basically doing a context
to find vulnerabilities in certain technologies.
And the more, you know, impactful the vulnerability is,
the bigger, the bounty you get back.
So this time we had for the first time,
and they spawn to one event, the AI category.
And six technologies were presented.
Out of these six technologies, four were actually researched
and found to be vulnerable at what we call
the highest impactful vulnerability.
which is remote code execution, RCE,
which means that you can do anything with that technology.
The learning that we have from here is that AI is very new as a software,
and the fundamentals exist.
It can be vulnerable, and you can actually use it
in order to just run your own code on it,
like any other technology and software we have used to shift.
So that's the first layer.
Before you move to the second layer,
I just want to make sure I'm understanding.
This is a research competition.
was it that there were six AI applications that were developed,
and of those six, they were five that were so vulnerable
that a bad actor could use those vulnerabilities to remote in
and run this software any way they wanted to.
Correct. There are six technologies that are used to build AI technologies,
like Nvidia, like PostgreSQL, these are Redis,
these are common technologies that are used to build an application.
out of the six, four actually had critical vulnerabilities at the highest stake.
And by the way, AI has the most vulnerabilities now that they are being disclosed
with research itself competed in this point to own contest.
It won the first place, but we need to understand that this is an area of active research.
It's a new technology.
Hence, it has a lot to do with the maturity to it in order to get to the level of, let's
say, trust in the software.
It wasn't just an output of an AI-enabled technology that was vulnerable.
It was the actual building tools that are used to build AI itself were vulnerable.
So all these companies, all these engineers who are relying on artificial intelligence tools,
because they're so new, they may not know it.
Whereas what you're claiming, that they may not know it,
but bad actors could be basically hacking into the code that they are writing with these.
tools, and then not just controlling the tools themselves, but the outputs as well.
Exactly.
And this is what we see.
It's scary.
It's a new technology stick.
It's being built.
It's now maturing over time.
We're learning about it.
We're securing it.
We're improving it.
But we also need to remember that as with any technology, this is new software.
And it's now being out there, you know, tested by pentester, but also by threat actors.
So that's one.
It's a new technology stick.
important to understand. And that's the most important thing to understand about AI. It's software,
like any other software. Five minutes in, you've already told me about something that I didn't even
anticipate coming in would be a problem, which is it's not just the code. It's the foundational
tools that are used to output it. You're bringing new software. Okay. And is the second thing that
you're about to go on to, the unreliability of AI-produce code itself? Not even that. First, we're going to
look at the infrastructure, right? Because in the end, you're running this.
these new tools, but they are running on top of infrastructure like any other application.
You have identities, workloads, you're using the basic components.
You are storing your training data sets in buckets that can be now publicly exposed.
You are using identities that can be overly permissive.
You are using VMs or containers that can be also compromised or misconfigured.
So just to translate to a non or less technical audience, all these, the infrastructure
that you're talking about is people are building AI programs
and then they are relying on the cloud,
all the tools that are used to support basically anything
that runs online, a software that runs online,
and there could be vulnerabilities there as well.
Exactly, vulnerabilities, misconfiguration, and in fact...
So you have a program and it's running off of a cloud system,
but you're exposing a huge part of the way that your program runs.
Yeah, without knowing it.
Exactly.
So if we look at incidents around, let's say, AI applications, we had an incident where a very
large, I would say, software provider exposed a bucket through with all of the training
data sets in there, a lot of sensitive data and they didn't intend to expose it.
This is a very basic security issue that we have experienced in the past decade now applied
to AI.
So again, when we think about AI, it's not all new.
It's new software, its existing infrastructure.
And when we think about securing it, the basics apply.
And it's very important to remember it.
The fundamentals apply.
The fundamentals of patching vulnerabilities, securing configurations, managing identities,
and so on.
So this is like the second layer that we think about the infrastructure.
Okay.
And can I ask one more question about that?
So basically what I imagine a lot of organizations are doing right now is when they want
to build personalized AI, personally as AI tools for their company. They're saying,
we want to build something specific to us that does what we do that can maybe replace our
employees or augment our employees. Let's just download everything from the organization and we'll
throw it into the bot and they've been promised by some API provider that the bot will not use
that data for training or won't spew it out elsewhere. But the thing that they miss is as they
download all this important secure information from their company, they might store it somewhere.
Because where else are you going to have the treasure trove or this motherload of data
other than when you're starting to train a bot specified for your company?
Exactly.
So if I'm a threat actor, why wouldn't I use a good old technique I'm using to expose buckets,
databases, misconfigurations in order to exfiltrate this precious data?
Why would I go through complicated AI prompts and so on if I can go directly at the infrastructure?
So that's a second layer that we have to be mindful of.
And these are all best practices that we apply and we follow,
but they apply the same way on our AI applications.
And actually, when we look at AI-related incidents,
the majority of them is within this layer of using the infrastructure
in an insecure manner that allows threat actors
to just do what they are used to do in cloud in the past.
Okay.
And now let's circle back to the question I asked in the beginning
because you know we've been building up to it,
which is how vulnerable is AI written code?
So AI written code is interesting
because one, yes, it may be more vulnerable
because we haven't instructed it to be secured.
We need to, as we use AI to build applications,
we need to also instruct, same way that we instruct it
in what we want the application to do,
we need to instruct it in the way we want it to be secured, right?
Apply list privileges,
remove data if not in use.
So there are all of these best practices in securities
that we apply today if we have a human developing it
and we review it.
But with AI, we need to now specify the same manner.
And as an example, we have released,
the research team have released,
it's a rule set that you can feed into AI generator,
like code generator through AI.
And this rule set will guide AI to build
a secure code more than if you didn't.
And this is just one aspect.
But the more interesting thing around securing code that is generated by AI is what happens
if you actually need to fix it?
And who is the owner of this code?
Now there is a very interesting question that arises because if I'm the developer of the code
and now there is a vulnerability of someone reported a security issue, I wrote the code, I know
it by heart and I have someone reviewed it. But let's say I vibe code my entire application. And
it's funny because on the first blog, like post introducing vibe code, the person who posted
it said, you can forget that code event exists. But no, you cannot. The code is vulnerable.
And now if something happens, you need to get back to it and fix it. But you need to know
the code to do it. So there is a broader question on responsibility around the code.
that is generated through AI
and who is going to go back and fix it
if something happens. By the way, it's not only
security, also reliability,
availability, scale.
How do we assure
that we have the proper capability
to maintain the software that
we have shipped across
security, availability, reliability
in the long term? Okay.
So I think that what you're saying
is basically, AI can build
pretty secure code
if you introduced the
instructions or the protocols to build secure code into the prompt.
Correct.
That seems fairly, that seems good.
But then the other side of it is what we're seeing now is engineers are starting to be
removed from the code that they are writing because oftentimes they are shepherding that
code through a process.
They're vibing it.
And the AI is doing the rest.
Are we already seeing cybersecurity problems within companies who have had developers,
that have just vibe-coded or AI-coded applications?
Yeah, actually there are very known examples of some people that have posted,
they have vibe-code an application, and then it got hacked hours after or days after,
and now they don't know how to recover because they didn't have the skill.
So the way I'm looking at it is that a vibing code is a great way to accelerate,
but it doesn't remove you from the responsibility of actually knowing your code,
being able to address issues within the code and guide AI farther into the maintenance process
as we go and use the application and mature the application.
So this is, I think, a maturity thing that we need to do.
And another aspect is when we think about how would an agent architecture work
because you're talking only on one role, which is the developer.
But if we are fast-forwarding, like, agentic concepts into the future, so why wouldn't you have a security reviewer that is also an agent?
So you commit code, you're developing your code.
And next, you should have, like, a security review to your code, but also performed by an agent that is minded for security with all the security best practices, guidelines that we have provided it.
And someone should maybe look at the architecture.
And someone should look at data privacy.
And you can think about, and again, it doesn't exist today yet,
but as we fast forward, AI doesn't mean that it's only doing the coding.
It can do other things in the development lifecycle that we can rely on.
But this is scaring me even more because I thought when you said, okay,
people are going to vibe code.
By the way, vibe coding is coding just by prompt.
So you say build me this application and it builds an application
as opposed to just doing the code yourself.
I thought, okay, so now we'll have defined roles.
will have, you know, if vibe coding becomes a thing, people will vibe code applications and the role
of the developer will be to audit, monitor, look for vulnerabilities there, and address them.
So that will be their expertise. It's not actually building the things. It is securing them
and maybe improving them in some ways that the bots can't handle. But what you're saying now is
we now have, we're looking towards a future where there are going to be agents like another AI bot
whose role is to do that thing, to secure the code.
Exactly. Why not?
So then aren't, isn't that like a triple risk then or a double risk
because you're now going to have AI bot whose core competency is to build
and AI bot whose core competency is to secure?
So if something goes wrong, now we're definitely not going to have a human with the skills
to be able to diagnose and address.
So it actually doesn't conflict with each other.
We can accelerate or automate more.
But it doesn't actually remove us from the need to have, let's say, a human in the loop
that we will be able to then account when something happens in the details.
So there are two separate questions.
That's why when we started, I said, yeah, vibe coding can be improved.
It can be bad, but it can also be improved in various ways.
But also, it doesn't resolve the actual need to say who is the owner of the code.
What is the operating model?
When something happens, and let's say AI cannot fix an issue, so who does and who owns this over time?
And this is maybe the most, I would say, when I look at the challenges, technology will continue and accelerate and help us to bid the applications.
But still we have to figure out what is the operating model, that when something happens, we know who to turn to and we know that we will be able to.
Think about it.
You are an enterprise.
You are developing an application.
Forget about security.
Let's say basic reliability.
The application goes down for whatever reason.
You cannot just wait and rely on, you know, maybe AI will figure it out.
Maybe it doesn't, right?
We need to...
Seems like a bad strategy.
So we need to have accountability in the end.
Okay, so how do we maintain this?
And if I'm a developer and I shift an application and it goes, it doesn't matter if I built it myself or through AI,
but it goes down tomorrow and I'm not able to fix it and get it up again.
I'm not doing my job, right?
So this is a big question.
How do we continue to own the application while leveraging increasing amount of automation?
Okay, so we've talked about the liabilities and the tools you used to build AI,
liabilities in the infrastructure used to store key information for AI,
and now liabilities in AI code and vibe coding in particular.
But there's another side to this, which is I love the fact that people are able to build things
just by a prompt now, or maybe a little bit more sophisticated, they prompt, they code
a little bit. They build off each other. And now something that might have taken a team of 10
can be done with one person. But, you know, we've called them threat actors. I call them bad actors.
Basically bad guys looking to do damage or looking to hack into computer programs. They have all
these tools at their disposal as well. So are we already seeing them putting those tools to use
in an attempt to hack into software? And has that increased the sophistication and the level of hacking
that we're seeing already.
So there are several ways in which threat actors can use AI.
And the first thing is just hacking into an AI application, right?
And if you think about how do I hack into an AI application and why it's appealing,
there is what's called like the trifecta of, let's say, risk factors in AI.
One, it's exposed.
Second, it has access to private data.
Third, you have untrusted content, like the query.
the chat queries that you send to it, the prompt,
that is also exposed to the threat actors.
So one, you have the layer of directly aiming at the AI application
and trying to extract via the prompt sensitive data.
So that's one.
The second thing that you can do,
you can automate and iterate more on what is already known.
And I think this is one thing that we look at AI.
It's really good in automating repetitive tests.
So instead of trying one type of attack or one type of vulnerability,
I can automate and iterate through an AI like purpose-built application
to try and test many more options, right?
And then you have the third layer that am I able to discover new type of threats using AI?
Am I able to do vulnerability research to find new vulnerabilities because I've trained
AI to do something very specific in that case?
So these are the three levels that we can look at.
And I think that the interesting thing is, and let's try to tackle it one by one.
When I'm targeting directly the AI application, it's like any attack surface.
We need now to understand how to secure AI application, the prompt, the data within the model against threat actors.
And that's almost like application security applied to AI.
whole new stack, whole new type of applications, still a lot to learn how to secure it.
And yes, threat actors can do it today.
I think it's very available for them today.
We should think about it that any prompt that we exposed is going to be tested also by threat actors.
So that's one.
Second layer, the automation, this touches a very, I would say the deep issue with security,
is the attacker to defender asymmetry. And if we think, you know, historically, just for
those who are not familiar, the asymmetry means that a defender has to defend on all fronts
all of the time while the attacker has to find only one thing that actually works in order to get
in. So the asymmetry is insane. It means that the more a state we have, the more we need to
secure as defenders, the harder and harder it becomes because we need to secure everything
out there, but the threat actors can find one vulnerability and they will still get in.
So, historically, we've become better and better in improving the ability to secure the
foundations, remove the risk, proactively, detect and respond.
So this is where security has been throughout history, okay, in improving our ability
to cope with this asymmetry.
And now with AI, the interesting thing, that the threat actors can automate a lot more
but from a defense perspective, it doesn't give me the same order of magnitude of, let's say,
improvement that the threat actor can gain.
So, in essence, there is an aggravation, a significant aggregation in the asymmetry
that we're going to face.
Because I can test the threat actors, I can try more.
As a defender, it doesn't help me to detect more at the same order of magnitude.
So this is a challenge that we are going to see more and more,
more automation on the threat actor's side.
And the reason is that from a detection point of view,
from the defender point of view,
I cannot withstand a high false positive rate.
From the threat actors perspective,
I don't care about the false positive.
I just need one.
From the detection, if I have a high false positive rate,
I'm done.
I cannot find the hay in the state.
Because let's say,
Even if I have 0.1%, 0.001%, you take it as much lower as you want.
Just multiply it by the attempts of threats that you get.
You're going to be bombarded with noise.
And noise is an enemy of security.
They can, like, DDoS your infrastructure pretty much with attacks.
DDoS your security team.
You're actually the security team.
It's like a physical, like we're just going to, DDoS is basically you send a bunch of traffic
to a website and take it down.
So go ahead.
Yeah, I run security.
I can tell you a nice story.
not technology oriented
one time I walked into a building
and I saw a new employee
training for the guards
of the building
and they were just next
to the fire
like fire alarm
and then the one who was giving the tour
was saying if the alarm is sound
if you hear the alarm
this is a fire alarm
this is where you go
you turn it off and then
the alarm stops
and then a lady asked
from the new employees? And what if it's a real fire? And he told her, listen, it's not a
real fire, it's a false alarm. Why do I love this story? Because this is exactly the challenge
the security teams have to face with false positive. If we're not going to be really accurate
in detecting the high fidelity alerts and really making the extra mile to make sure that
if a security team sees something, it's actually a thing they need to take in. Very fast we're
becoming like unable to cope with the real threat.
So this is the risk is that if you automate, if you're a threat actor or someone trying
to hack into a system, you can automate it.
You can overwhelm a security team because every false positive they need to put attention
to.
Exactly.
And eventually you get past their defenses.
Exactly.
And this is why there is a good way to tackle this.
It's actually reducing the noise by investing more in the fundamentals and less on the detection.
Okay.
So if we are always trying to detect, it's going to be hard.
we're going to be bombarded with alerts.
If we're proactively reducing the risk and the chance of being attacked in the first place,
namely patching vulnerabilities, fixing misconfigurations,
we are in a much better position to turn down the noise.
So this is where security has invested a lot of time,
and now it's going to aggravate if AI is being applied by the threat actors.
Who are the threat actors?
Everyone.
AI is very accessible as a technology.
We need to understand that it's accessible, it's simplified, it can be used at scale, it's not very expensive, you don't need to be a superpower to use it.
You don't need a lot of funds to use it, to be honest. It's accessible.
So is it like organized crime, governments?
Everyone. State nations, organized crime. Teenagers in basements? For real?
Yeah, it's easy. It's accessible. Why wouldn't you do it?
Well, you could go to jail if you get caught.
No, I'm saying it's a threat actor.
Why wouldn't you try to use AI?
I mean, same answer.
Why wouldn't use AI if you already decided to do something bad?
If you are already a threat actor, of course you're using AI.
Of course you're going to use AI, right?
So that's what I'm saying.
As a threat actor, it's a new tool in your stick.
And now you can leverage it to accelerate, to automate.
I think there is a thing about cybercrime.
It's a business, right?
And we don't, I don't know if we always, it's not just bad guys.
These are like businesses.
If you think about ransomware, it's a business.
It has rules to it, right?
You know, when you pay the ransom, you get the data, right?
It's a rule.
You're not tricked into it because it's a business and they need to maintain their reputation.
So as a business, same way that any other business is looking into AI and thinking,
how can we use it to accelerate, to automate, cybercrime, state nations, they have the same logic.
Today in the year 2025, these bad guys, teenagers in basement, whatever you want to call them, businesses,
They've had access to pretty sophisticated generative AI for a couple years now.
So talk a little bit about the curve that you've seen since the introduction of ChatsyPT.
Are we seeing about the same threats like we've seen previously, or has it escalated exponentially?
Currently, from what we see, and we are releasing like the state of AI and we're monitoring for threats.
There is a site threats with IO that we monitor all of the cloud AI-related incidents.
and we do a breakdown of basically what was the root cause, the techniques.
And to be honest, we're seeing today more of the same, right?
It's not that we are seeing a significant shift in the threat landscape.
It's really turning back to the infrastructure, the things that we know that are working.
So, no, the same vulnerabilities, but the magnitude of attacks.
Have they gone up?
Always.
They're always trying to do more.
But you need to remember that also on the blue side, on the defender's side,
we are really getting good in protecting our infrastructure.
And I think that we're seeing a trend.
And it's always easy to talk about the increasing rate of attacks and the red side.
And I think it will always be that way.
They will always try more.
It doesn't mean that it will be more successful because on the blue side,
we have major transformations in how we look at security
that is helping us to improve the foundation,
improve the processes, improve the ability to proactively reduce the risk,
the ability to detect and respond.
And I think that we're seeing, like, since we're still at the foundations that we know how to
secure, I think that we're seeing that we are not at the phase that you're saying where
we're seeing crazy stuff happening.
I'm not saying it.
I'm just asking about it.
No, not yet.
We're not seeing the crazy stuff.
Why do you think that is?
Because I think that with every conversation around gendered AI, there's always this, oh, you've got to be
looking at cybersecurity part of the conversation because all these tools are in the hands of
the bad guys now for all these reasons.
But yet, we are, you know, in reality, the actual ability of them to get through is not
higher.
Is it, I mean, is your answer just because companies are doing a better job securing their
infrastructure?
Because if so, then it's really not a big problem.
I think that we are in the process.
When I think about our, like, the last decade, right?
But this process of automation, it's not new to us, right?
Automation has taken place before AI, long before AI, and we'll take place after AI long before.
We are always in a journey to continue and automate what threat actors a decade ago have done on the keyboard manually.
And when we look at cloud attacks, for instance, the ability to automate what happens as soon as I walk into an account as a threat actor, what do I do?
Well, I can automate quite a bit, and we have seen this level of automation.
We have seen this level of automation, for instance, with ransomware.
We have seen automation happening over the course of the last decade, but in response within
security, we have basically developed the capabilities to respond to this automation.
AI is another layer that allows us to automate more right now, okay?
I'm not talking, because we're not seeing the crazy new threats yet, we are right now at the
phase where we're seeing accelerated automation of the known threats, known risks. And this is
a journey security as being in the past decade. And it's only one step up. So it's interesting
because I guess it's what you're talking about sort of follows, I think, a lot of the progression
of generative AI to date, which is that it's a very promising technology, but companies
trying to put, because if bad guys are companies, their businesses, companies trying to put it into
action have seen mixed results. And so actually what I'm getting from you,
same thing is happening with the bad actors.
But then the question is, if these models get much more intelligent,
does that open us up to bigger risks?
I believe there are areas that, yes, if they become increasingly better,
let's take vulnerability research, as an example.
Vulnerability research is one area.
Let's explain what's a vulnerability.
Okay.
Okay.
Vulnerability, by definition, is the ability to move from one trust level
to another trust level in a way that is not permitted.
For instance, if I can run remote code,
then I'm moving from the outside to the inside.
And this is the worst that can happen
because I'm literally running code from remotely, external,
in your internal environment, right?
So that's a vulnerability.
A vulnerability can be like an unauthenticated access.
So authentication bypass.
I'm logging in and I have this trick that I'm giving
a false password and I'm still able to log in.
Okay, that's authentication bypass.
I was able to walk into an higher trust
without the permission to do so.
So that's a vulnerability.
The ability to research and find vulnerabilities,
this is kind of the bottleneck of the security space,
okay, because vulnerabilities are what allow threat actors
to move from, you know, low trust to higher trust environments.
and the ability to automate research by, you know, AI of vulnerabilities can open up maybe a race
where you can find many vulnerabilities and unable to patch them at the same pace.
There are solutions today that are already leveraging AI to detect vulnerabilities.
These are companies, security companies, that are using AI to analyze your code and find vulnerabilities in your code
that up until today, you needed, like, a very deep research to do it, and it's automated.
It's really nice from a security perspective.
Threat actors may try to weaponize it to find faster and faster vulnerabilities
and automate the process to put it into action so that they can always use what we call
in the industry, zero-day vulnerabilities.
Zero-day vulnerabilities means that it's a vulnerability that besides the threat actor
that found it, no one else has seen it, so it's.
It's under the radar.
That's the meaning of a zero-day vulnerability.
So if we look into what could be a risk, yes,
if we're able to create a perpetuumobil
that finds vulnerabilities, operationalize them,
and puts it into action, it's scary.
But I think that today we're, again,
looking at the detection mechanism and so on.
Right.
Keeping the fight.
Okay.
So we're deep into the conversation,
but good time to really introduce what WIS is.
I mentioned in the beginning that WIS has sold to Google for $32 billion.
So what your company does is basically just that.
It looks at everything, and you correct me if I'm wrong,
but looks at everything that an organization has in the cloud
and tries to find those vulnerabilities proactively.
Correct.
Wiz connects very quickly to any cloud environment
and assesses all of the risks to the cloud.
We call them attack path.
So risks that, you know, we talked about the noise.
We remove the noise.
We focus on really what are the critical attack path
that threat actors can use
in order to gain access to an environment.
And we surface them to security and development teams
so they can proactively reduce the risks
before the bad guys find them.
Okay.
So that's what Wiz does.
Interestingly, it doesn't happen only in cloud.
We can do the same thing.
On-prem.
on-prem, private cloud, but also during the coding phase, which is even more interesting.
This means that as you are developing, I can actually predict what would become a critical attack path
and guardrail and let's say help developers make the right choices while developing before
it gets to the cloud.
So this is like even preventive security.
So this is, and the last pillar is like detective, this is where we don't want to be,
but we have to have the controls, is to monitor the cloud environment.
in which we are seeing a suspicious activity,
detect and respond to it.
So this is a bit on what Wiz does.
Do you think increasing threats from generative AI
played into why Google wanted to acquire the company?
Security, again, we're in this battle for decades.
I think we are improving dramatically as an industry.
I think that generative AI increases, one, the use of software.
So cloud security has become even more important today
because we understand that AI will be used at scale by any business.
By the way, historically, we're looking at cloud.
There were, I call it three migration waves into cloud.
The first one were the cloud natives, Spotify.
Yeah, where we're recording in Spotify studios.
So we are cloud native started in cloud, born to cloud.
Then came COVID.
COVID actually brought the second wave where businesses realized that staying on-prem
means they may be disconnected from their business.
and moving to cloud is the way to go, and they then came the second wave, right?
During COVID, the major, the banks, the financial, pharma, they all moved to cloud
because they realized, okay, that's not the playground, that's a strategy.
And now we're watching at the third wave, which is AI.
So AI augments the use of cloud.
It augments the use of technologies, transformation into cloud and AI, and it all piles up in the
same direction.
So just going back to the point, yes, AI increases the use of software.
increases the importance of cloud AI security, what we call the fundamentals.
How do we build secure application?
So absolutely it plays into how do we think about, you know, strategically what's important
for an organization, cloud and AI security, right?
So that's one.
Second thing, I do think that AI requires us to innovate faster than ever before.
And this is an interesting piece.
We talked about the new tech.
We talk about new threats.
We can talk about the pace of adoption of technologies, which is also mind-blowing.
We are at an era that we are innovating more than ever before.
And in order to keep up, you need solutions that can help security to enable the business to move forward.
And not secure a solution that will block the business from using technologies.
And I think that when you look at WIS, WIS is really a technology.
that allows organizations to adopt technologies
in a way that they enable the business
rather than blocking the business.
And this will become a critical attribute
for any company out there.
Otherwise, you know, AI is moving
and businesses will move with AI
and any other business should keep up with AI
because this is going to define businesses
in the upcoming years.
Yeah, I mean, I think that I'll just add that.
Obviously, like Google is trying to be the one
where startups, anyone who's trying to be,
build. I mean, there's seen tremendous growth in cloud. I think like 30% quarter over quarter
in the most recent quarter. But it's been the case for a while as AI has built up. And if there's
a chance that these vulnerabilities actually start to show themselves and become more difficult
than I would imagine that WIS is pretty well suited to help them out there. I think there is an
interesting concept. For those who are not familiar with the history of cybersecurity, there was a very
interesting initiative back in even, I think back in the 2000, okay, year 2000 by Microsoft.
And I want to just, it was called trustworthy computing.
And trustworthy computing was around this concept that if humans won't be able to trust
their compute, they won't use it.
And that started a whole initiative within Microsoft to secure Windows, to secure all of the
software that they ship. And this changed the security industry forever because this created
the concept that security has to be built in in order for people to trust it. And when we think
about AI, when we think about cloud, security has to be baked into these processes so we can
trust anything that we leverage in terms of cloud and AI moving forward. Security is a cornerstone
for our ability to use technologies at scale. And yeah, this is, I think,
we have to solve it, right?
We have to make sure we can trust cloud and AI.
Clearly, Google thinks.
So, I mean, I think this was the biggest acquisition in their history.
Correct.
So it tells you everything you need to know.
All right, we got to take a break.
But after we come back, I want to talk about what might be like the big black swan events,
whether bad actors can hack into humanoid robots when we see them.
And also, you had a very interesting report on deep seek when it came out.
So we'll touch on that as well.
All right, back right after this.
Shape the future of enterprise AI with agency, AGNTCY.
Now an open-source Linux Foundation project,
agency is leading the way in establishing trusted identity and access management for the internet of agents,
a collaboration layer that ensures AI agents can securely discover, connect, and work across any framework.
With agency, your organization gains open, standardized tools, and seamless integration,
including robust identity management to be able to identify, authenticate, and interact across any platform.
Empowering you to deploy multi-agent systems with confidence, join industry leaders like Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and 75-plus supporting companies to set the standard for secure, scalable AI infrastructure.
Is your enterprise ready for the future of Vigentic AI?
Visit agency.org to explore use case.
is now. That's agn tcy.org. And we're back here on big technology podcast with
Enon Kostika, the co-founder of WIS, also its VP of Product. Let's talk about DeepSeek right
away because there was an interesting release that WIS put out when Deep Seek was sort of all
the rage. And it got a lot of headlines. It was here. This is from your blog.
whiz research uncovers exposed deep seek database leaking sensitive information including chat history
now i took issue with this because i had just met you at the amazon uh the reinvent conference
and you had talked to me about how what whiz does is it proactively we were just talking about
before the break it proactively looks through your code and finds things that might be exposed
and i saw this blog post and i was like well that's what whiz does and you could probably write
similar headlines about many different companies.
And I think what happened was the news cycle ran with it.
And people who were afraid of Deepseek pointed to it and said,
ah, look, Deepseek is trying to leak your information to the Chinese Communist Party,
which is actually not what it was.
So what do you think about my read on the situation there?
Okay, so let's go back on the details on the Deepseek.
What happened?
Deepseek was introduced, and
gained a lot of traction and interest from businesses, media, everything, and from our research
team as well. Looking into Dipsic, the research team found, again, we talked about it in
the fundamentals. It's not a very advanced AI, you know, centric capability. It's an exposed
database in the end, if we simplify it. That includes a lot of sensitive data from Dipsic. They
closed the exposure after we disclosed it.
And then we published a blog post about the fact that DeepSik had an open database,
if we simplified.
And I think that DeepSik has a few interesting, that week, there are a few interesting
things that worth noting.
One, DeepSick adoption was something that we need to pause and understand how fast
technologies can
propagate within
our state
within a week
almost 10%
of organizations
were using deep seek
that it sync
within a week
and I think that this is just one
aspect of when we think about
you know from a
security perspective what do we need
to do as we are
facing this rapid adoption
of new technologies.
In the end,
Dipsic raises interesting questions
around, I'm going to use it
to train on sensitive data.
I'm going to,
what are the questions
that we should be asking
when we bring a new technology
into play?
By the way, it's not just Dipsic.
It's any technologies
that we're bringing in.
And I think Dipsic was interesting
because of the rapid adoption
on one hand.
Second, the questions around,
okay, where did it come from?
It came out of nowhere, right?
nobody anticipated it and all of a sudden everybody started downloading the models using it
and third we found what is a critical vulnerability that we felt like we need to let people
know about and I think that these areas this is really important for teams to ask the question
on how do we enable safe use of technologies that doesn't prohibit the business from using them
but does instill the security measures we need in place.
And this is really around what we've learned from this deep seek.
Technology is adopted faster than ever.
We need to know what our teams are doing with the technology.
And third, we need to do the scrutiny around what is a security posture of the technologies we use.
Again, deep seek like any other AI tools, as we talked about earlier.
it's a new space, new software.
It's being developed as we go.
We need to scrutinize the technologies as being adopted,
as we have done in the past decade.
It's nothing new.
But my point is, I feel like you guys find similar vulnerabilities
like the one that you found with Deepseek every day.
And this release led the media run with this.
There was already fear of Deepseek,
and the media kind of ran with it and said,
see, the whole point of Deepseek was like a data,
filtration exercise.
This is what the media takes of it, right?
So, yeah, but I want to hear from you.
What was really happening?
I think that in general, we's focused on the research on the entire AI stack over the
past year and a half.
We launched products to secure AI, AISPM, which is AI security posture management.
We have released the AI state of the union, like what's being used, not used.
And I think we have also researched many, many AI technologies and that we found to be
vulnerable.
You can cover in our blog.
earlier, five of six of these foundational
technologies. Exactly. So was the
vulnerabilities in DeepSeek
any worse than what you typically find?
It was a very, what I would say,
typical exposure that
exposed a lot of sensitive data,
but yet, if we look historically,
there were similar incidents, for instance,
with Microsoft releasing a token
that had access to sensitive data
in, you know, a bucket used for
training AI. And I
think there are vulnerabilities, more complex, but vulnerabilities in Nvidia's toolkits that
they're released for developing AI.
I think what media does with it is one thing, right?
But the research shows, like throughout the past years and the research we have shared
around AI, that AI, really the messages that we started with, AI is a new stack of tools.
There are software tools.
their vulnerabilities.
We need to put the effort to secure it.
By the way, the tools we use and the infrastructure we use.
It doesn't mean that we can obviate the need to do infrastructure security
the same way that we have done.
And that's basically the message I think that we should all take,
including from the deep seek incident.
This is a very common misconfiguration that happens exposing a database.
Having spoken with you, I was thinking that as well.
And I was trying to be like, come on.
But I think that it's good that you guys talked about it.
But I feel like I'm looking back at the story of the context of the media missed is this stuff happens all the time to everybody.
Yeah.
Okay.
Everybody.
All right.
Let's talk about some wild stuff now, some more pie in the sky things.
So all these AI labs are trying to develop superintelligence.
They've gone from like AGI to superintelligence over the course of a couple of months.
one of the things that you can you would imagine you would find is is is uh with super
intelligence is the ability to break into anything uh i was speaking with a researcher
recently who gave this interesting thought experiment that i kind of want to run by you i do
run by you which is that um if an if if an AI lab built an AI system uh that was an
expert at detecting vulnerabilities could get past any system today no matter how good you know
no offense to anyone working in AI security,
but the idea of superintelligence is
you can surpass what exists today.
Would they then have, and imagine they saw
other labs getting close to building the same thing,
which would eventually, like, destroy security on the internet.
They then have a responsibility to go in
and delete the competing AIs
because of what could happen if this technology proliferates.
You're looking at it from the threat,
perspective. I always try to think about it from the defender perspective. Are we
able to generate faster technologies that will defend against these sort of stuff?
But doesn't the idea of super intelligence then worry you a little bit? Because, A,
we're not quite sure, I mean, if it happens, and it's still a big if, we're not quite
sure if it's going to be something that's developed by the good guys or the bad guys. And
even if it's developed by the quote unquote good guys, there could be hidden bad stuff
going in there, going on in there?
It's true to any technology.
Quantum computing is another example.
Like there are many new technologies
and there will always be new technologies
that will be introduced.
And I think that as defenders
and in general, as a good guys,
we need to think about
how do we create the proper guardrails
to secure and build foundations
that help us to continue
and adopt technologies in a way
that is controlled, manner and secured.
Now, of course, you can think about scenarios that are far exceeds, the worst-case scenarios
that we have seen up until now with security.
But I think that historically, that wasn't the case.
And historically, the same things were said about cloud.
They were said about data lakes.
They were said about containers.
They were said about many different innovations that were introduced.
And I think that we were good enough in an industry.
I'll give you like the positive perspective.
most of the technologies that were introduced in the past
were introduced without thinking about security at all.
And if we think about the sad story of, you know,
why we ended up in a, I would say a huge deficit of security in the industry,
it's because security always came later on, it came bolt on.
We invented autonomous cars and then we figured out,
okay, so, you know, they can now be controlled and, you know, being, you know,
crashed into places without control, so we need to secure it.
The concepts behind, I think, today, AI security are actually well thought today rather than
much later.
And it shows a bit on the maturity of not only the security teams, by the way, it shows a lot
on the maturity of the industry, that we are having the discussion on what's safe AI,
how to secure AI architecture.
And unlike before, you know, talking about new technologies and their security was always something that was perceived as an inhibitor to innovation.
Right.
Let's get it right first and then it's secure it.
Let's not get the folks scared about it, right, so they won't block it.
But today it's actually a much more, I would say, integrated discussion of, of course, we're going to use AI.
But, of course, we're going also to put the guardrail, the security in place, the standards.
We're going to ask the questions.
So as we do this, we also develop the muscle of let's secure AI at the get-go.
And I think it actually shows a lot on the maturity.
I'm positive, actually, about this, because I think we are at a position that we've never been in it before.
Like, it's relatively new that we talk about technology and the security of the technology.
you the same time. It's great.
I mean, I know you're in security, and so I guess I'd be optimistic if I was, you know,
co-founding a security company, but I think you're underrating the risks.
I just think that as an industry, there are always risks.
Right.
And the risks are always scary, and they far exceed what, you know, ordinary people can
think about when they just see a technology.
But I do think that we have strong foundations within the security industry,
the development industry to build the proper guardrails.
And I have trust that we have established enough track record
on how to secure applications, environments that we can continue and innovate.
And as I'm going to talk not only about WIS,
like the entire industry, if I look today at the pace of innovation
around AI security, look at the number of startups
that are already focusing on providing solutions to AI security,
AI firewalls, AI security posture management tools, AI security gateways, right?
There is a whole array of security industry solutions that are basically creating AI-aware controls
that are now already in place, already in place.
It's not like futuristic.
It's existing solutions.
Yes, we need to mature.
Yes, we need to research.
Yes, we need to do a lot more.
But we are building the capabilities towards there.
What about quantum?
You mentioned quantum.
Everyone talks about how quantum will break encryption and break security.
You have a big smile already, hearing the word.
Quantum is interesting.
I think it's also, it's an exercise, right?
What if exercise?
I think we're seeing a lot more awareness to post-quantum cryptography, right?
The ability to basically use cryptography that will withstand quantum, you know,
if quantum computing happens, what do you think is going to happen?
I don't know.
not in this case where speculations.
I do think that, you know, it's good practice to improve again.
It's the same thread.
Actually, it's interesting because you will hear me giving you the same speech
about quantum computing, like security as AI.
As AI.
There are foundations.
We need to be really good in running initiatives that will secure our foundations.
In AI, it can be whatever we talked about.
But in quantum computing, it should be about using the cryptographic
tools that are suitable for
this era. And I think
the one thing that has changed around
quantum computing is a thought of
maybe it is closer
than we think and then the
threat actors are, you know, stealing
data now, so they
will be able to decrypt it later once they
you know, figure it out. That's interesting.
So, you know, still now
decrypt later, yes, it's an
experiment, it's, you know,
it's a thought, right? We need to
also give it attention. And I
I think there are more and more companies, especially regulated companies, government,
that want to make sure that they are securing their data today against crypto, like post-quantum
cryptographic risks.
So, all right.
I want to end with this.
I want to talk a little bit about the threat to hacking autonomous vehicles and humanoid robots
if they eventually show up.
I have to ask this in a way that we don't get the same.
I'm confident we'll be able to get ahead of it.
response because I actually want to know like, you know, what's likely to happen if this
actually goes down. I heard you speaking recently about there's been attacks on hospitals
where their systems have broken down and they've been unable to send dispatches out and
give care to people. If this happens, hypothetically, is it a magnitude greater risk if,
let's say, humanoid robots, which might be in people's houses or embedded in
society or autonomous cars, which we know are like all over the roads now, if there's a
vulnerability there, is that like a level up from just the typical software hack?
You know, there are things that happened in the history of cyber that were very surprising.
For instance, what would happen if I break into your, you know, I don't know, oven?
It has an IP.
I'm breaking into your, you know, remote cameras.
What will happen?
Just I'm getting installing something there.
So historically, there was Adidos attack that actually utilized all of these IOTs in order to target one specific, let's say, target, and they brought it down because we're talking about millions and millions of devices that are flooding a certain area.
But who thought that this will be the use of these IOTs, right?
It's a very creative use by the threat actors, right?
And I think that the way we think about the risks, threat actors are very creative in how they utilize these sorts of stuff.
And it's not always in, you know, the direct way, okay, we're going to hack autonomous cars to run into people and robots to go into people's house and do stuff.
But I think that this can cause significant outages, for instance, right?
We can use it.
it can cause a significant dysfunctional, let's say, society is right if we rely on it too much
or we don't have the proper ways to secure against it.
So I do think that as things evolve, we do need to always think about how this can be used
not only in the direct way, but also in part of other campaigns that can take place in cyber.
And with this, creativity is amazing when we innovate.
It's frightening when it's used by the threat actors, right?
We're seeing it, you know, not only in cyber.
We're seeing it in everywhere, right?
And I think we should be always, I would say, careful about the way we apply technologies to our day today
and the guard rules we put into play.
because the less thing we want to do is to build trust on something that breaks at some point
and we don't have the guardrails to protect against it.
Yeah, it's kind of crazy because throughout this conversation we've talked about
how we're relying more on AI to code and now we're going to rely more on technology
to work autonomously in our society, whether that's agents taking action on our behalf
with computer use or whether that's cars or humanoid robots.
And so I think when people look at the trend of why cybersecurity is important, the more cyber we put in our life, so to speak, the more security we're going to need.
Correct. The more state we cover, the more security we need. And one concept, you know, one concept that I do want to say, something that was a cornerstone for, at least from my experience with customers in cloud security, it's a concept of democratizing security.
Because if we think about just security teams doing security for the whole world, it's not going to scale.
But if we think about our responsibility as employees, humans, developers, security teams, IT teams,
and how can we contribute to this resilience of our systems, right, the cyber resilience of the technologies that we use,
this is going to end up in, I think, a much more scalable approach to security.
and one of the most interesting trends in cybersecurity is actually that democratization aspect.
Another very simple example is phishing emails, right?
If we're just a security team, one security team tries to filter all emails, you know,
something will get in and in the end will work.
Yeah, we haven't even spoken about people spoofing other people's voices to try to get in.
That's happening now.
That's happening.
That's happening.
We've seen live cases.
Can you protect against that?
This is another, as I said, all security products across the industry should become AI aware.
And if I'm thinking about anti-fishing tools, they need to be able to cope with emails that are created via AI-generated capabilities that are very, very targeted and seem very, very credible.
So email security tools should be able to find them.
fishing through fake voice messages.
Basically, these are the deep fake stuff.
So these are the things that all of the industry has to mature.
Across all of them, there are solutions that are now in play.
Security is not security today.
And this is like me being optimist.
I think security today is geared towards finding new threats,
responding to them in ways that we can operationalize at scale.
But we have to be aware that with new technology,
technologies come new threats.
Right.
There's a human side to it also, I imagine,
where, like, when somebody is telling you
they need money or something like that,
like, you got to, like,
you can't just believe them anymore.
It's weird, even if it sounds like a parent
or sibling, child, there has to be,
like, you have to be like,
I'm just trying to be sure
that this isn't an AI-generated voice here.
Some things I'm going to do to triple you up.
Correct. And that's our responsibility in the end.
Everybody, as humans, employees, developers,
in how do we make sure that we are part of this resilient system?
Right.
So I'm trying to think about how I'm going to, how I feel after today's conversation.
I mean, on one hand, you sort of help broaden the spectrum of threats that I ever thought about,
whether it's the ability to hack the actual tools themselves.
On the other hand, it seems like the way that generative AI is working today,
it hasn't led to this massive uptick in security vulnerabilities or even attacks,
even though we're always seeing them escalate.
So I'll choose to leave with your perspective, optimistic,
and we'll have to keep talking.
Amazing. Thank you very much for everything.
Thank you, you know.
Thank you, everybody for listening.
We'll be back on Friday to break down the week's news.
Thanks again, and we'll see you next time on Big Technology Podcast.
Thank you.