Screaming in the Cloud - AI Agents, Enterprise Risk, and the Future of Recovery: Rubrik’s Vision with Dev Rishi
Episode Date: December 4, 2025

In this episode of Screaming in the Cloud, Corey Quinn sits down with Rubrik's GM of AI, Dev Rishi, to unpack the real story behind enterprise AI adoption, the rise of agentic systems, and why most organizations are still stuck in read-only mode. Dev breaks down how Rubrik's Agent Rewind brings safety, observability, and resilience to AI-driven actions, solving the "Oh no, the agent deleted production data" problem before it happens. From deep learning's evolution to the massive gap between consumer AI enthusiasm and enterprise risk posture, this conversation is a candid, insightful look at the AI future Global 2000 companies are racing toward… or cautiously tiptoeing into.

Show Highlights
(00:25) Understanding Rubrik and Agent Rewind
(00:50) Challenges in AI and Disaster Recovery
(01:27) Guest Introduction: Dev Rishi from Rubrik
(01:44) The Evolution of AI in Enterprises
(02:33) Starting an AI Company: The Backstory
(05:10) Generative AI and Its Impact
(07:15) Enterprise AI Trends and Challenges
(08:56) The Future of Agentic AI
(18:03) AI in Customer Support
(22:03) Rubrik's Acquisition and AI Strategy
(29:30) Launching Rubrik Agent Cloud
(31:26) Lessons from Starting a Machine Learning Company
(35:25) Conclusion and Contact Information

Sponsor:
Rubrik: https://www.rubrik.com/sitc
Transcript
We started to feel we were on to something, and we felt like we were on to something in two parts.
One of the parts was when we were pitching with Rewind, it felt differentiated, and it gave people the safety blanket, essentially, that they didn't really have with their AI systems.
In fact, pretty consistently, one of the things that I heard was, oh, I didn't even know that this would be possible.
Just to give you, like, the 10 seconds on what Agent Rewind is: Rubrik maintains backups of the most important production systems that an organization or enterprise has. Agent Rewind says if an agent operates on those systems and makes a mistake, deletes something it shouldn't have, edits a field in the wrong way, we can allow you to correlate those agents' actions with the actual change to the production system and allow you to recover in one click from that healthy snapshot. So that's like the Agent Rewind pitch, and it gave people a lot of safety.
That was one thing we learned. But the second thing we learned was this ability to go out and recover was still a little bit of a future problem, right? It gives them comfort that allows them to go ahead and adopt something that's coming.
But in order to build Rewind, we also had to build a deeper understanding of the agentic systems. We call it the agent map and the agent registry. We had to know, well, what agents are running inside of your ecosystem, so we can figure out which ones we might need to rewind. And we need to know what are the types of things that they have access to, what actions can they take, so we can rewind a given action that it actually ended up taking.
Welcome to Screaming in the Cloud, I'm Corey Quinn. This promoted guest episode is brought to us by our friends at
Rubrik. Also brought to us by our friends at Rubrik is Dev Rishi, their GM of AI. Dev, thanks for
joining me. Hey, Corey. Thanks. I'm looking forward to being here. Let's be honest about disaster recovery.
Your DR plan assumes everything fails gracefully and your backups are pristine. But what happens
when ransomware is already in your backups? Or when the credentials your DR runbook depends on are
compromised? Most DR strategies are built for accidents, not adversaries. Rubrik gets this. They isolate
your backups from production credentials, make them immutable, and can scan them for threats so you're
not restoring malware along with your data. When your DR day comes, and it inevitably will,
you need more than a runbook and hope. You need backups that are actually recoverable. Learn more
at rubrik.com slash SITC.
You are a somewhat recent newcomer over to Rubrik. Before this, you figured, you know what I'm going to do? I'm going to start an AI company, but you did it in the hipster style, while it was still underground, before it was cool. What's the backstory there?
Yeah, honestly, I'm not sure. A.I. was never
not cool. I remember even when
we started the company in 2021,
I actually thought at the time most of AI was a little bit overvalued.
I thought it was overhyped.
And one of the reasons I thought that was I had spent a long time as a product manager at Google.
I was the first PM for Kaggle, which is the data science machine learning community.
And I saw this massive influx of, I think, like citizen data scientists, people that were learning a lot about machine learning and AI were very excited about it.
But I was one of the product managers on the team that eventually became called Vertex AI, Google Cloud's machine learning platform.
And candidly, very few enterprises were actually getting value, I think, out of AI.
And very few had actually even figured out how to go into production with those systems.
Yeah, my line for a while back in that era was that machine learning can sort through vast quantities of data and discover anything except a business model.
I think that's about accurate.
You know, when we looked at companies that were actually making money in the space, it was data labeling companies.
Like, Scale AI, I think back in the day, was making a killing.
But their core business was not actually delivering necessarily production-ready AI models for the enterprise.
It was just labeling data.
And so I started the company in 2021 along with a few of my co-founders because we just believed in two things.
The first was the technology is powerful.
It hadn't delivered on that promise yet, but we saw what it was able to do at leading organizations like Google or YouTube or Uber, where my co-founders came from.
And then the second was we thought we had an abstraction that would make it a little bit more accessible.
You know, we'd seen a lot of companies die on this hill of trying to democratize machine learning and democratize deep learning.
We thought we'd take our stab at it as well.
Yeah, I remember back then the use cases that they came up with were either highly specific to the point of uselessness for anyone else or banal.
The two examples that stuck in my mind were if you are a credit card company and you have a massive transaction volume, you can start to identify fraud via the power of machine learning, great, I'm not that.
The other one was, I think, WeWork, which used machine learning to analyze traffic patterns and wound
up discovering that they could reduce them if they had a second barista at certain hours of the day.
In other words, humans like to drink coffee in the morning was their amazing discovery out of this.
And it felt like it was a really interesting space that people struggled to articulate value from.
Then we saw this massive generative AI explosion over the past few years,
and everyone is doing some experiment with it.
Some folks are rapidly rebranding as fast as they can to be AI companies,
but the question of value seems to be one that hangs over the industry as a whole.
Just because it's terrific when it gets it right, it often doesn't.
There are distinct costs associated with it,
and people are still trying to figure out how exactly this factors into the thing that they're doing.
That has been my experience from talking to folks.
Am I missing something key?
How are you seeing the industry evolve?
I want to answer that in just a second, but the first thing you mentioned actually was really
interesting.
You talked about how use cases back when we started the company in 21 were pretty banal or relatively
consistent.
I had the same thought.
I want to talk about technology for a minute and then go into this generative AI section.
One of the things that I remember lamenting about actually back at that time was if you
looked at every one of our competitors' websites at a use case section, every competitor's
use case section looked like the exact same thing.
There was a churn prediction model,
there was an LTV prediction model, there was a fraud detection model.
And there were, like, all these use cases of look at all the amazing things you can build
with the platform.
I think our insight at the time was that there was a newer technology and that the power
of generative AI was going to be unlocked, or sorry, it wasn't called generative AI at the time.
It was called deep learning.
And, you know, our insight was that, number one, we think that it'll work really well
with unstructured data.
So if you think about, like, fraud or churn, those were largely structured data
problems. And we were really excited about unstructured data, raw text and images and video.
And then the second, I think, insight was this rise of what we called pre-trained models.
And so these are models that you didn't have to have a million records to be able to go ahead and
build a data science team and clean the data set on. But they're pre-trained so you can just
sort of adapt what generally understood English towards your task. Those were like two of the
things that we decided to invest in with deep learning. And we set out on a mission to democratize
deep learning. And then I like to say OpenAI and large language models really democratized
deep learning better than any of us.
But then you asked a question of, like, outside of consumer,
which is, I think, where a generative AI has probably, like,
delivered a lot of value, I would argue, for users today,
what is the value that enterprises are seeing, right, in G2K?
And I'd say I partially agree with your observation.
Like, are people getting a, like, ROI or not out of it?
And actually, what's been surprising to me coming into Rubrik
is the reason why, I think, organizations aren't getting nearly as much
value yet as has been forecast, though I believe that they will get it in a few
years. At Predibase, the startup that we worked on for the last four and a half years, we worked
with a lot of digital-native and leading AI engineering organizations. Household brands you would
know were the ones that were deploying production models with us. I would say the biggest
difference between them and coming to Rubrik where the customer base is Global 2000 Enterprise.
Think about the most important, regulated customers in financial services and healthcare. The biggest
difference is actually on risk posture. And that has converted downstream into ROI. The reason I think
a lot of organizations haven't gotten the type of value that they want around AI right yet is they haven't
figured out the framework where they're going to let AI loose in terms of like the actual work
that will produce value. At certain points of scale, companies become less about seizing opportunities
and more about risk mitigation and management. And when you have something that gets it right
80% of the time and the other 20% goes off the rails to greater or lesser degrees,
that becomes almost an unbounded risk vector.
And I feel like that's why people are taking a very, okay, how do we put guardrails around
this thing so that it doesn't destroy the company we have built?
Totally.
Like, let me ask you, what do you think is like the biggest enterprise AI trend of 2025 or
2026?
I think the most hyped one is probably agentic AI, right?
Like agents is the thing everybody wants to talk about.
Even getting a definition of what an agent is
is becoming an exercise in "tell me what you have to sell me."
Everyone is defining it in ways that align with their view of the world.
We saw similar things in the observability space in recent cycles.
Let me offer you a definition without something to sell you because we don't sell an agent builder platform.
We don't like make it, you know, something you build agents with.
I always think about agents as just LLMs or models with access to tools.
And so if you think about that, that really just means a model that can do work or take an action, you know, on your behalf.
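That definition, a model with access to tools, can be sketched in a few lines. This is a toy illustration in which a hardcoded stub stands in for the LLM; the tool name, stub logic, and dispatch shape are all assumptions for the example, not any vendor's API:

```python
# A toy agent: a stub "model" decides to call a tool, and a dispatcher
# executes the call on the user's behalf.

def lookup_order(order_id: str) -> str:
    # Read-only tool: the kind of access most enterprises grant first.
    return f"order {order_id}: shipped"

TOOLS = {"lookup_order": lookup_order}

def stub_model(prompt: str) -> dict:
    # A real LLM would choose a tool based on the prompt; we hardcode one step.
    if "order 42" in prompt:
        return {"tool": "lookup_order", "args": {"order_id": "42"}}
    return {"answer": "I can only look up orders."}

def run_agent(prompt: str) -> str:
    decision = stub_model(prompt)
    if "tool" in decision:
        # This is the moment the model "takes an action on your behalf."
        return TOOLS[decision["tool"]](**decision["args"])
    return decision["answer"]

print(run_agent("what happened to order 42?"))  # → order 42: shipped
```

In a real agentic system the model picks tools dynamically and may loop over several calls; the risk discussion that follows centers on exactly that dispatch step, where the tool call touches a live system.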
And I think this comes back towards this idea of, like, at a certain point, organizations have to be about, like, risk mitigation.
If you think about, like, a lot of different software trends, the shift to cloud as an example, there was a huge amount of, I think, nervousness at the beginning of the shift to cloud, predominantly around security and guardrails and, like, you know, this idea of we're going to go to a multi-tenant architecture in some way or the other, rather than, like, my on-prem data center.
And now with AI, what we're talking about with agentic AI is like, look, it's going to be great, it's going to be able to do work, you're going to be able to take action.
And I know every IT and security person I'm talking to is like, wait, so you want to give a non-deterministic model,
meaning a model that will not necessarily operate within a certain defined framework, something that can come up with random answers at times.
You want to give a non-deterministic model access to my production and enterprise systems, and you want me to be on the hook for it?
No way.
And so while I think it has a lot of different promises, a lot of the organizations that I speak to are kind of still stuck on, like, square one of how
do I get comfortable with the idea of this running around inside of my ecosystem?
Yeah, it feels like every vibe coding project you start begins with, okay, roll for initiative.
And it always goes slightly differently depending upon the fates, for lack of a better term, which is great for some use cases and terrifying for others.
Yeah, but coming to your point about ROI, by the way, I didn't know this three months ago.
You know, before we were acquired into Rubrik about three to four months ago,
when I was working within production AI systems at a lot of leading tech companies,
we thought that there was a series of challenges that typically had to do with latency or throughput
and, you know, the efficiency of models.
But over the last two to three months, I've had a chance to speak with about 180 to 200 different customers
representing, you know, all sorts of swaths of the Global 2K.
And the thing that was really interesting to me is that we speak a lot about the promise of what AI could unlock.
It can go ahead and do work on our behalf.
Imagine that rote task you had to do to be able to prepare
for an interview or to be able to send out an email;
it can actually go ahead and read through the systems
and write that out for you.
But then in practice, if you think about what type of agent
or what type of AI, every single organization
is actually rolling out today, for the ones that actually even are,
first of all, they're all in read-only mode, right?
Like, very few people are giving agents
what we call, like, write or delete access.
Very few people are giving agents the ability
to, like, actually edit a system.
And it's not because they can't think of the use case,
and it's not because the business value argument isn't there.
but it's because it feels like the risk is almost uncapped.
There's an unlimited downside, really, in it for them.
I mean, I do it myself in the test lab, but worst case, something blows up.
It's not that hard to restore from backups.
Turns out data resilience is a thing.
Yeah, but the idea of doing this with production, customer-side data:
because my laptop has theoretical access to customer environments,
I can't let Claude Code run loose on this thing.
I give it a bounded EC2 instance in a dedicated AWS account.
Worst case, it can blow up my budget, but that's the end of it. There's no access to data that
can cause me a nightmare. Yeah, exactly. I spoke with our head of Infosec yesterday, right? And he was
talking about how, look, there are people inside of our company that have quote-unquote super, super
user admin privileges. They can do incredible amounts of damage. But those people, number one,
have all been background checked by the company. Number two, there are, like, all of these guardrails
that are put in place, you know, a SOC that observes them.
And number three, and maybe the most important from his perspective, they operate at
human pace. So, you know, the amount of actions that someone can take at human pace, you can
probably keep track of reasonably easily. Yeah, compensating
controls. The way we handle this in polite society, the bank teller theoretically can enrich
themselves at your expense. The reason this doesn't happen is because there are audit controls
and security flags and alarms that will go off everywhere the second something like that happens.
Yeah. And there's probably, like, 30 to 40 years of software that's been developed
around validating the employee and the human.
And there's a certain pace at which a human can go ahead and do damage.
I would say with the agents, you know, the line that we use internally is, well, they could do 10x or 100x the damage in a tenth of the time.
The pace at which, I think, the operations are changing is something that there isn't really a resilience infrastructure for today.
And I think that's what's stopping a lot of the ROI.
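One compensating control implied here, capping an agent's action rate to something human-scale and keeping an audit trail, can be sketched roughly like this. All names and the window mechanics are illustrative assumptions, not a real product's API:

```python
# A guarded tool wrapper: cap how many actions an agent can take per window,
# and audit each one, so a runaway agent trips a guardrail instead of doing
# "100x the damage in a tenth of the time."

class RateLimitExceeded(Exception):
    pass

class GuardedTool:
    def __init__(self, fn, max_calls_per_window: int):
        self.fn = fn
        self.max_calls = max_calls_per_window
        self.calls_this_window = 0
        self.audit_log = []  # every permitted action is recorded

    def reset_window(self):
        # In practice a timer would reset this each interval.
        self.calls_this_window = 0

    def __call__(self, *args):
        if self.calls_this_window >= self.max_calls:
            raise RateLimitExceeded("agent exceeded human-scale action rate")
        self.calls_this_window += 1
        self.audit_log.append(args)
        return self.fn(*args)

deleted = []
delete_record = GuardedTool(lambda record_id: deleted.append(record_id),
                            max_calls_per_window=3)

for record_id in ["a", "b", "c"]:
    delete_record(record_id)

blocked = False
try:
    delete_record("d")  # the fourth call in the window trips the guardrail
except RateLimitExceeded:
    blocked = True
```

The point is the same as the bank-teller analogy above: the destructive capability still exists, but the rate is bounded and every action leaves an auditable trace.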
It feels like a lot of the value that AI has generated comes at the personal level.
You alluded to some of it recently where we're talking about, oh,
respond to this email for me.
Like, I have what I like to politely call the asshole-in-email problem.
I tend to write relatively tersely.
It's a bad habit I get from Twitter.
And it turns out that short emails look like I'm being imperious.
So make this polite.
But I still have to iterate through it a couple of times.
It starts off with, I hope this email finds you well, which is not my style.
I'm likelier to begin with, I hope this email finds you before I do, because that's at least my brand.
And you have to tweak it and all.
And like there's a bit of a human in the loop story.
And it's helpful, but it's not, you know, I'm not going to pay $5,000 a month
of personal money for these things. It's helpful, but it doesn't necessarily justify a lot
of the investment. On the enterprise side, that story may look radically different. I have a number
of customers, effectively all of them, who are doing some experiments with AI. The ones that are
spending meaningfully on it, those use cases start to look a lot more like the B2C aspects of what
it is that they do. The B2B companies that I work with are using it obviously for a number of
things, but their spend is nowhere within orders of magnitude of what we're seeing when it starts
getting mass deployed. How are you seeing the Global Fortune 2000 largely emerge as far as
trends go when it comes to AI? I'm seeing a little bit of a bifurcation. And so I'm seeing a leading
edge of companies. And I'm talking about Global 2000 enterprise still right now, right? But I'm seeing a leading
edge of companies that are fully leaning in to AI.
And I, you know, I spoke with a Fortune 100 company that walked me through how they
were thinking about things at a board level.
And they were going to brand their company, which is, you know, household name, decades
and decades old, into an AI native company.
And I sat there and I was like, this is incredible.
Like this company, which has more employees than, you know, any other company I can think of
has such a large distribution network
is now thinking about how to be able to go AI-native,
thinking about how to be able to do agents
and other workflows for every single type of use case.
And they're really taking a "let a thousand flowers bloom" approach,
but actually investing behind every single one of those flowers
to be able to see, where am I going to get value?
Simple reason why.
They think that in five to 10 years,
their workforce is going to look massively different, right?
Like they think the entire way that the work happens.
I would say, like, 5 to 10% of companies
that I speak to are in that bucket.
And I think 90% of organizations that I speak to are in the bucket of kind of like a let's wait and see posture.
Let's go ahead and run a few different experiments.
Let's have a center of excellence and, like, let's start to experiment with it.
Across both of them, though, the one thing that I see that's really interesting is that typically your first handful of use cases are the ones that take the longest.
Use cases or agents one through five take a lot of doing.
There's a lot that you have to be able to establish from a framework standpoint.
How do you measure ROI? Whose approvals are in there?
What tasks are they good at?
Which do they struggle with?
The fascinating thing, Corey, is how quickly organizations go from five to hundreds or thousands.
When we were thinking about what would rubrics play with an AI be, we had a huge question of like,
when is this moment going to come when people are going to have, you know, hundreds or thousands
of agents deployed? And our thought process was, well, okay, is this going to be 12, 24, 36 months away?
We think it's going to happen, but what's the timeframe for it?
The thing that's really been interesting is, like, in the set of conversations we had,
maybe, you know, one in four, one in five or so tell us,
oh, that's already happening today.
I'm actually seeing it kind of scale out.
And again, it's only the, I'd say, leading quartile at best that are in that bucket.
But you can kind of see those early adopter motions that we think the rest of the market's going to follow.
So I think, you know, the very brief, like summary for what are we seeing in the enterprise?
Well, we're seeing most people in that one to five experimentation routes.
But what we see is the people that graduate from it start to scale up very, very quickly.
And I think that there's a misunderstanding around a lot of it, too, where if I'm reaching out to support, for example, from a company I do business with, I don't want to talk to a chatbot.
However, it would be convenient if the human that I talked to has AI-assisted context on their side of, oh, here are the previous support tickets I've opened;
oh, it looks like I might actually know how networks work, so "is it plugged in" might not
be the first thing you lead with for me. It starts to tailor the responses there. And
that is a neat transformative customer experience. But so often it shortens to, oh, we can
lay off our entire support staff, which generally is not happening. Every time companies
have tried this, it seems to have gone disastrously. Yeah, I think that, you know, you said
you don't want to chat with the chatbot. How many times have you been on a phone going, speak to an agent,
speak to an agent, speak to an agent?
I'm already pissed off because I don't want, I'm an older millennial.
I don't want to talk to people on the phone.
If I did, I would take out a personals ad.
If I'm calling in, something has already gone off the rails.
Yeah, Corey, I'm a millennial.
I was like, I don't want to speak to somebody when I have a support issue, almost period.
I just want it solved as fast as possible, right?
Like, I don't want to speak to a chatbot.
I also don't really want to spend a long time explaining to a human what exactly is happening.
I will use a chatbot on my side because I want to explain, here's the, here's the two hours of logging data that
I have for this. Can you skim this down to a concise ticket that shows a skeleton reproduction
case? I used to have to do that by hand. It was how I fix things. And invariably, the chatbot
will often fix it for me while doing that behavior. It's incredibly helpful. But the support tickets
of it broke, not the most helpful thing in the universe. Let me connect it to what I think we need
to be able to actually tie it together, which is a lot of chatbots are, you know, they're
interactive, the conversations, they almost feel like they're incentivized to keep you talking.
What I want to be able to do is get to the fastest action possible.
I want that system, which today, the reason I want to speak to a human is today,
the person who can issue me my refund or who can, you know, cancel the order,
take that action is a human.
I fundamentally believe that organizations are going to get to the point where those actions
are going to be done autonomously.
They are going to be done by like an agent or AI system.
In order for that agent to be able to do it, it needs to be connected with that tool.
It needs to be connected to the database that manages order cancellation,
as an example.
And I think where you're going to see that flip and value happen from like, hey,
the AI-driven chatbot is something that's driving me nuts to, oh, this was a way
better experience than others, is when those chatbots start to get access to tools.
There are a lot of organizations that already have, but those tend to be the type of organizations
that have a kind of risk-forward posture still today.
Rubrik is sponsoring this segment because they've figured out something important.
Protecting your AI isn't just about backing up databases.
It's about protecting the entire supply chain, the training data that teaches your models, the source code that defines them, and the live data that they're acting on.
Miss any piece of that, and you're basically rebuilding from scratch when things break.
Rubrik secures all of it in one platform across your multi-cloud circus.
So when your AI inevitably does something creative you didn't expect, you can recover without losing months of work.
That's the difference between resilience and resume updating.
Learn more at rubrik.com slash SITC.
There's a lot of, I'd say, optimism in the space.
There's a lot of hype as well, where companies are suddenly trying to wrap the exact same thing
that they've been doing for 15 years in the AI story, despite the fact that they did
not revamp their best-selling product to completely be AI native and a departure from
what it previously was over an 18-month span.
That would be, in some cases, lunacy.
In some cases, it's, oh, great, we're an AI company.
Like, that's great.
I thought you were a bus company.
But there's this idea of being able to spill this out in ways that are iterative and transformative
that do lead to better outcomes, which I guess brings us to rubric on some level here.
Why did they acquire you?
And what is, what are you doing these days now that you're the GM of AI over there?
Yeah, it's a great question.
Answer them in two parts, right?
I think the first is, well, why make the acquisition?
And Rubrik is a company that, I think, co-founder and CEO Bipul has defined as one the market's been perpetually confused about how to bucket, right?
Like, it started off with a core in data backup, and then data protection and cyber resilience really became a larger market as we saw the rise of ransomware and other attacks.
So it went away from, like, natural disaster, fire, flood for why you need the technology.
I think that the kind of executive and founding team here have a long-term ambition in AI.
The reason that they have a long-term ambition in AI is because the view is that one of the most fundamental kind of, like, substrates for feeding into AI models or others is the actual customer data that is backing it up.
And Rubrik is one of the largest pre-populated data lakes for every single enterprise customer that it backs up and protects.
All of the most important data for our customers' business resilience,
applications, and day-to-day operations is backed up, really, by, like,
our underlying systems for a Rubrik customer.
So the first observation that I think the Rubrik exec team had motivating the acquisition was,
hey, we think data is going to be, like, a fundamental asset in AI.
We'll have a unique right to play here.
And then the second piece was, like, I think, how that fit into the Predibase platform.
And, you know, what we did as a technology.
What we did, just as, you know, I'd say the 30-second overview: our favorite customer
quote is, generalized intelligence might be great,
but I don't need my point-of-sale system to recite French poetry.
We were targeting enterprise applications where you needed something narrow and specific done,
and we wanted to help you be able to build out that application.
So the thesis was what Rubrik had in terms of the data substrate,
and then more recently the identity component as well,
that helped you understand who has access to what,
could be paired with our platform that gave you the ability to think about more tailored applications with models,
and we could build a more resilient enterprise story.
And so that was really, I think, you know, what brought us in at, like, I'd say 80% of the story.
And I think 20% of the story is just a great kind of, I think, cultural fit that also existed across the two teams.
And then fast-forwarding to today.
What is it exactly that, you know, I'm doing here?
I like to kind of say that I don't have any...
I don't have any idea.
I don't know either.
Don't tell anyone.
Kidding.
Kidding.
No, no, no.
Actually, it's very true.
I would say, when I came in: in our market, we had our customer base,
and now I'm coming into a newer market.
The Rubrik customer base, which tends to be IT and security and large parts of Global 2000 enterprises,
I think I walked in with the assumption, I don't know.
Like, I actually don't know what the right way is to take the product that we have and the product that Rubrik has
and be able to start to retrofit it towards what the future of this specific market needs.
So the only way I know to answer that question is to get out and talk to as many people
and customers that are in the field and hear from them, what are they actually struggling with.
So, the last two and a half months, I've had a chance to speak with a little over, like, 180 organizations.
And, you know, many, many different people inside of each of those organizations, typically IT, security, but also everyone from the backup admin all the way to the person who's heading AI inside of those organizations, and understanding what is it that they're actually, you know, struggling with.
And we, you know, it's hard to go ahead and have a super open-ended conversation.
So we came in with an initial pitch that we called Agent Rewind.
Agent Rewind connected what's happening with AI to Rubrik's core around resilience.
It was this idea that if agents are going to do 10x damage in one-tenth of time,
what if we allowed you to revert back a destructive agent action?
We started to kind of, you know, have these conversations that were very genuinely not sales-oriented,
in part because the product was still too early to sell at the time that I was having these conversations.
I love those conversations.
I just wish I could trust it a little bit more.
People say, I'm not trying to sell you anything, and you have those conversations, and it quickly
transitions into a sales pitch. It's, yeah, but if I say it's a sales pitch, you won't take my
call. And you think lying to me is going to lead to a better outcome? I digress.
Please continue. I have the opposite experience because I think people were like, well, when can I go
ahead and try this? And, like, you know, how much is this going to be? And I was like,
we've got to slow you down a little bit there. That's, you know, a sign you're on to something.
Yeah, exactly. But what you said is actually what we all started to feel.
So we started to feel we were on to something, and we felt like we were on to something in two parts.
One of the parts was when we were pitching with Rewind, felt differentiated, and it gave people the safety blanket, essentially, or that they didn't really have with their AI systems.
In fact, pretty consistently, one of the things that I heard was, oh, I didn't even know that this would be possible.
Just to give you, like, the 10 seconds on what Agent Rewind is: Rubrik maintains backups of the most important production systems that an organization or enterprise has. Agent Rewind says, if an agent operates on those systems and makes a mistake, deletes something it shouldn't have, edits a field in the wrong way, we can allow you to correlate that agent's actions with the actual change to the production system and allow you to recover in one click from a healthy snapshot. So that's, like, the Agent Rewind pitch, and it gave people
a lot of safety. That was one thing we learned. But the second thing we learned was this ability
to go out and recover was still a little bit of a future problem, right? It gives them comfort
that allows them to go ahead and do something that's coming out.
But in order to build Rewind,
we also had to build a deeper understanding of the agent's systems.
We call it the agent map and the agent registry.
We had to know, well, what agents are running inside of your ecosystem
so we can figure out which ones we might need to rewind?
And we need to know what are the types of things that they have access to,
what actions can they take so we can rewind a given action that it actually ended up taking.
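A rough sketch of the two pieces Dev describes, an agent registry plus an action log you can correlate against snapshots, might look like the following. All of these names and structures are illustrative; none of this is Rubrik's actual API.

```python
# Illustrative sketch: register agents, log their actions against the
# systems they touch, and query the actions you'd want to rewind.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    systems: set                                  # production systems it may touch
    actions: list = field(default_factory=list)   # (system, action) log

registry = {}

def register(name, systems):
    """Add an agent to the registry with its allowed systems."""
    registry[name] = AgentRecord(name, set(systems))

def record_action(agent, system, action):
    """Log an action, refusing systems the agent isn't registered for."""
    rec = registry[agent]
    assert system in rec.systems, f"{agent} not allowed on {system}"
    rec.actions.append((system, action))

def rewind_candidates(agent, system):
    """Actions by this agent on this system, newest first: the list
    you'd correlate with healthy snapshots to pick a recovery point."""
    return [a for s, a in reversed(registry[agent].actions) if s == system]
```

The point of the sketch is the shape of the data, not the storage: you can't rewind an action you never attributed to an agent in the first place.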
Yeah, it turns out natively there's no undo for an rm -rf or a DROP TABLE.
Natively, there's no undo for a DROP TABLE, especially when it's, like, an MCP tool call made from some agent, you know, that sits upstream of it.
Oh, God.
We've all had the thing where it just goes, oh, this file, I'm just going to overwrite it.
Like, that was important.
Important to the project,
not important in the grand scheme of things.
But yeah, that's why we have guardrails and test this in constrained environments.
You'd hope, right?
Like, that's why, I think, people test it beforehand.
But it's actually hard to even test these, because in software testing, you had
unit tests, and you'd essentially say, like, okay, I wrote the subroutine to be able to do X.
Like, I kind of know that it's not going to come out and start reciting French poetry.
But testing a non-deterministic system is meaningfully more challenging.
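A toy example of the difference Dev is pointing at: with a non-deterministic agent you can't assert an exact output string, but you can assert invariants every reply must satisfy. The JSON schema, allow-list, and deny-list here are made up for illustration.

```python
import json

def check_agent_reply(reply: str) -> bool:
    """Property-based check for a non-deterministic agent reply.

    Instead of comparing to a golden output, assert invariants: the
    reply must parse as JSON, name an allowed action, and never touch
    a denied resource. (Schema and lists are illustrative only.)
    """
    ALLOWED_ACTIONS = {"read", "summarize", "create_ticket"}
    DENIED_TARGETS = {"prod_db", "billing"}
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False
    if data.get("action") not in ALLOWED_ACTIONS:
        return False
    if data.get("target") in DENIED_TARGETS:
        return False
    return True
```

Run the agent many times and every sampled reply must pass the check; the model can still recite French poetry, as long as it does so inside the schema.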
And I think Rewind gave people a safety blanket.
But as we were talking to people, the other thing that really kind of lit up with folks was, you know, a lot of folks told us, I actually don't even know what are all the different agents that are running in my ecosystem.
I don't have that registry right now.
And I don't have the ability to go ahead and map out what types of tools and actions can it do.
And so we took that as well as the first-party challenges that Rubrik was actually dealing with ourselves as we roll out agents as a company.
And we formulated it into what our product vision and thesis ultimately has become, which, to your point, Corey, I think you said earlier, a lot of these companies are a paperclip company and they're now like, hey, we're an AI company.
And have you changed the core product of the paperclip or not?
We kind of took a view that we needed to launch a net new product, really.
So there's the Rubrik Security Cloud, which is the thing that Rubrik has really kind of built
its core business around.
We just announced most recently our new product, which is called the Rubrik Agent Cloud.
And the Rubrik Agent Cloud comes with three key pillars that are based on these conversations
we had.
The first pillar is monitoring and observability.
I often think about monitoring and observability as that base layer you need to have.
You need to know what kind of agents are running.
you need to be able to understand a little bit about what is the blast radius.
The second pillar is one I see a lot of people talk about,
but I haven't seen a lot of really great software solutions for: governance and policy enforcement with guardrails.
So we talk about guardrails, but what does that actually mean to be able to do at a systems level?
That's what we solved.
And one of the reasons we decided to solve it is I saw what it looked like to do AI governance
inside of Rubrik first-party. I sat on our AI governance committee.
And things were done via documents that legal wrote,
Google Sheets, you know, sort of the best of intentions, but the hardest things to enforce in practice.
So we wanted to platformize that, and that's our second pillar.
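A minimal sketch of what "platformizing" governance could mean in practice: rules that used to live in legal documents and spreadsheets expressed as enforceable checks. The policy names, fields, and matching logic here are hypothetical, not how Rubrik actually implements it.

```python
# Hypothetical "governance as code": each policy denies a particular
# action on targets matching a prefix, and evaluation returns both a
# verdict and the policies that were violated (for audit logging).
POLICIES = [
    {"name": "no_prod_writes", "deny_action": "write",  "deny_target_prefix": "prod_"},
    {"name": "pii_readonly",   "deny_action": "export", "deny_target_prefix": "pii_"},
]

def evaluate(action: str, target: str):
    """Return (allowed, violated_policy_names) for a proposed agent action."""
    violations = [
        p["name"] for p in POLICIES
        if action == p["deny_action"] and target.startswith(p["deny_target_prefix"])
    ]
    return (not violations, violations)
```

The design point is that the same rule is now checkable at runtime and reportable to a governance committee, rather than living as best intentions in a document.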
And then the third pillar is remediation.
Rubrik's always had a mentality: assume breach.
I think everyone listening should probably assume something is going to happen, too, if you deploy agents at the level of promise that they're given.
Defense in depth, especially when the attack is coming from inside the house.
Exactly.
Defense in depth across the hyperscaler capabilities, but also a single control plane that gives you access across the different agents.
So that's what I'm up to now these days.
We're building, working on Rubrik Agent Cloud.
And we started with this idea around we should be able to rewind these actions
and then realized there's actually really a need for a broader control system here.
And that's what we've architected.
One last question I have,
because there are few people that can answer this question honestly without sounding, I guess,
like they're just imagining and prognosticating here.
But you've done it.
What lessons did you learn by starting a machine learning company in an era before Gen AI was all the rage?
On some level, I feel it has to be frustrating, because you've done the work and lived in the space,
and now every grifter out there claims to be an AI expert.
I think it doesn't bother me so much because, you know,
my career in machine learning started well over a decade ago.
And since I started, it always felt like AI was, candidly, an overhyped technology.
Over the last two years is the first time I felt like it's underhyped, which is insane
to say, given the hype cycle AI has had over the last two years.
The reason I say that I think it's actually potentially underhyped is because I think
that most organizations haven't gotten to that value discovery point yet, because we haven't
figured out what is hopefully the easier part (easier than figuring out how to make models superintelligent),
which is the risk components around those models.
But in terms of advice or how I think about it, the thing that we've often struggled towards,
I think, in machine learning and AI conventionally, is really two things.
The first is tying ML directly with value.
And so I worked with healthcare systems early on that were looking to build
models that could help expedite radiology processes.
And this seems like a very straightforward use case that would be tied towards value.
But it was sort of a data science and machine learning team doing it in a silo.
There's a big question of, like, okay, can the actual physician or operator,
or can the insurance payer, whoever needs to be able to use this,
can they actually use the outputs of that?
So I think the very first thing, it's like a startup lesson that also extends to everyone building towards AI: get to the dollars or the sense as quickly as possible.
We'll just sell ads later. It'll be fine.
Yeah. Well, like, get to value.
If you're a consumer business, get to engagement, right?
Like, I would say go for user retention.
If you're a consumer, the ultimate source of truth
is, like, are people spending time on your property?
You will figure that piece out later.
But if you're a B2B business,
the ultimate source of truth is:
does somebody trust me enough to legitimately go to their boss and sign a purchase order?
It doesn't have to be for a lot of money.
Just validate, will they transfer a dollar from their bank account to ours?
My hot take is, like, in a consumer business, your job is to make users feel delighted
because your source of truth is, do they refer you to a friend?
My hot take for an enterprise business is:
your job is to make your champion uncomfortable, because you want them to feel the discomfort
of going to their management, to their boss.
No one wants to ask for money.
And honestly, people don't want to go through procurement and security and vendor audits.
You want them to feel convicted enough where they're like, it's worth the discomfort of going through those processes because that's when you know that you're on to something.
And so, yeah, the very first piece of advice, really, the longer chunk of it, is: tie it directly to value and know what truth looks like in your space.
And then the second is just to recognize the importance of small and quick wins.
Conventional machine learning and data science looked like, well, there was a joke, which was that ML was 90% about cleaning data and 10% about complaining about cleaning data.
And, you know, there was a long process you would take before we started to realize value.
And I think the other piece is, like, recognize time to value.
Gen AI has never made it faster to be able to build a prototype or an agent.
I actually think building agents
now is the easy part.
No, now the problem with all the prototype agents I've built is, oh, there's a new Claude Sonnet model out there.
Oh, crap, where do I have to go and update all the model strings?
And I get to play whack a mole in my project directory of where did this all live?
And hopefully nothing I built to scaffolding or observability breaks around it.
And the right answer, frankly, is, oh, you just instantiate Claude Code and have it do it and just, you know, keep hitting at it.
It'll be fine.
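For what it's worth, one low-tech way out of that whack-a-mole is to keep the model string in exactly one place and read it from the environment, so a new model release means one edit (or no code edit at all). The variable names and the fallback model string below are placeholders, not a recommendation of a specific model.

```python
import os

# Single source of truth for the model name. An environment variable
# lets a deploy override it without touching code; the fallback string
# here is a placeholder, not a guaranteed-current model id.
DEFAULT_MODEL = os.environ.get("AGENT_MODEL", "claude-sonnet-4-5")

def make_agent_config(task: str) -> dict:
    """Build a request config; every agent calls this instead of
    hard-coding its own model string somewhere in the project tree."""
    return {"model": DEFAULT_MODEL, "task": task, "max_tokens": 1024}
```

Every prototype agent that imports `make_agent_config` picks up the new model on the next run, and the scaffolding around it never sees the string change.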
And my hope, Corey, is the right answer will become, you just instantiate Rubrik Agent Cloud.
All of the scaffolding.
Great models out there, great agent builders out there.
But if you want to make sure that they all kind of work across your organization, Rubrik Agent Cloud will, like,
monitor, govern, and help you remediate anytime something goes wrong.
That's really what I think I'm excited about.
If people want to learn more, where's the best place to find you?
Yeah, it's a great question.
You can actually go to our website, and there's an agents subpage that'll tell you
about it.
We've now started to post webinars online, and also demo videos.
Just go ahead and reach out to us.
Right now, we're in an early access program.
And so we're starting to selectively onboard customers, and we'd love to be able to chat
further.
And we will, of course, put links to that in the
show notes. Thank you so much for taking the time to speak with me. I appreciate it.
Corey, really enjoyed it. Thanks for having me on.
Dev Rishi, GM of AI at Rubrik. I'm cloud economist Corey Quinn, and this is Screaming
in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast
platform of choice, whereas if you hated this episode, please leave a five-star review on your
podcast platform of choice, along with an angry comment written by an agent that has gone
completely outside the bounds of what guardrails you thought were there.