ACM ByteCast - Anusha Nerella - Episode 77
Episode Date: November 10, 2025

In this episode of ACM ByteCast, Rashmi Mohan hosts Anusha Nerella, a Senior Principal Engineer at State Street. She has more than 13 years of experience working on building scalable systems using AI/ML in the domain of high-frequency trading systems and is passionate about driving adoption of automation in the FinTech industry. Anusha is a member of the ACM Practitioner Board, the Forbes Technology Council, and is an IEEE Senior Member and Chair of the IEEE Women in Engineering Philadelphia chapter. She has served as a judge in hackathons and devotes significant time mentoring students and professionals on the use of AI technologies, building enterprise-grade software, and all things FinTech. Anusha traces her journey from growing up with limited access to technology to teaching herself programming to working at global firms including Barclays and Citibank and leading enterprise-scale AI initiatives. Anusha and Rashmi discuss the challenges of applying AI to a field where money and personal data are at stake, and workflows that prioritize trust, security, and compliance. They touch on the importance of clear data lineage, model interpretability, and auditability. The discussion also covers observability, tooling, and the use of LLMs in finance. Along the way, Anusha shares her personal philosophy when it comes to building systems where speed and reliability can be competing priorities.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned, and their own visions for the future of computing.
I am your host, Rashmi Mohan.
There's probably not a day that goes by without using an AI-powered assistant for most of us.
From editing our emails to answering our queries to making our reservations,
this technology has wormed itself into our daily habits and is here to stay.
And while we may not actively think about trust, risk, and compliance of the underlying data in our common use cases,
when it comes to our money, we expect no less.
Our next guest is embedded in the world of financial tech and spends most of her day finding ways to incorporate AI into these crucial workflows, driving efficiency and safety.
Anusha Nerella leads fintech automation at a global fintech firm as a senior principal engineer.
She has over 13 years of experience building scalable systems using AI/ML in the domain of high-frequency trading systems and is particularly passionate
about driving adoption of automation in the fintech industry.
She serves on the Forbes Technology Council, the ACM Practitioners Board,
is an IEEE Senior Member, and devotes significant time mentoring up-and-coming talent
to lean in more on the use of AI technologies, building enterprise-grade software, and all things fintech.
She also often serves as a judge at hackathons in premier universities in the U.S.
We are thrilled to have her with us.
Anusha, welcome to ACM ByteCast.
Thank you so much for having me, Rashmi.
It's a pleasure to be here.
Great.
Let's begin.
We have so many interesting things that I want to ask you.
But I would love to lead with a simple question
that I ask all my guests, Anusha,
if you could please introduce yourself
and talk about what you currently do
and also give us some insight into what brought you
into this field of work.
Yeah, sure.
So thank you
for the great intro. My name is Anusha Nerella. I'm currently working as a senior principal engineer
at a leading fintech firm, where I actively work on building AI-powered cloud-native financial
systems. Over the past 13 years, I have also contributed to modernization and innovation projects
at various fintech firms like Barclays and Citibank, and prior to that, I worked at the U.S.
Patent and Trademark Office. But my story didn't start in a big city
or with access to mentors.
I grew up in a very rural background
where computers were rare
and there wasn't much guidance for someone
who wanted to get into technology.
But what drew me in was this fascination:
that with just code, I could make something work,
something that solved a problem,
even if nobody around me could tell me how.
I remember the trial and error,
the late nights of figuring things out on my own
and the excitement when something finally ran successfully,
that sense of discovery was addictive.
And it kept me going when resources were limited.
That experience, I would say, is what shaped my career.
It taught me resilience, creativity and the confidence to keep learning.
Today, whether I'm modernizing financial systems
or applying AI to financial compliance,
I carry that same mindset: that technology
is a tool
anyone can use to build,
to solve, and to empower,
and I think that's what continues
to inspire me, turning curiosity
into real world impact, no matter
where you start from.
So I would always say,
at the end of the day, I believe technology
in finance should do two things:
remove friction and build trust.
That's a great way to summarize Anusha
and very inspiring to hear about your journey
also. Was computer science
a class that you had in high school? Was that your first sort of brush with coding?
Actually, it wasn't a smooth entry through formal classes at first. I didn't
have much exposure to computers until much later compared to many of my peers who were
studying with me. But when I finally reached college, yes, I did take programming classes,
but what really shaped me was self-learning and experimentation. Probably many people
can resonate with this: I often didn't have someone to guide me
step by step. So I relied a lot on trying things out, making mistakes, debugging endlessly,
and then feeling that sense of achievement when I finally got something to work. I still
remember the first time I wrote a simple program that ran correctly. It might have looked tiny,
but it gave me this feeling of, wow, I can actually build something from scratch. That's what
pulled me in deeper. Programming became less about syntax and more about empowerment at the time.
If I can instruct a machine to solve a problem, then maybe I can scale that thinking to solve
bigger and bigger challenges. Honestly, this mindset stayed with me even now. Whether I'm working
on any financial system or platform or any AI-driven compliance system today, I still see
coding as the same empowering tool, the key that opens doors to innovation, even if you start
without much guidance. That's incredible, Anusha. I mean, to think that you started in college and a lot
of it is self-driven, and you're right, I think we'll probably find a lot of our audience resonating
with that, the sense of accomplishment that you talk about when a program finally works, you're
able to get it to do something much faster, more efficiently at a larger scale than what you
could have done if you had to do it manually. It's pretty incredible. So thank you for sharing that.
I'm sure we'll tap into that in many, many situations as we talk about your career.
One thing, you know, I did a little bit of research on your work and the various projects
you've worked on. One of the things that came across is that you're always very passionate
about leaving a place in a better state than you found it. Has that been one of the guiding principles
of the kind of work that you pick?
And how did you get into the finance industry?
Just like anybody, it was through opportunity.
So I just utilized the opportunity in a proper way.
See, I believe everyone has to wait until opportunity knocks at the door.
But once opportunity knocks, we should not let it go.
So just never give up.
The one guiding principle throughout my career is this:
every system I touch,
I want to leave it stronger, cleaner, and more future ready than when I first arrived.
That's how I leave my footprint.
So in large financial infrastructures, you inherit decades of complexity.
You never know until you get into it.
So for me, it's not enough to just make it work.
Here, the goal is just to transform it so the next generation of engineers can build on a solid foundation.
I like to say good engineers solve today's problems, great engineers remove tomorrow's obstacles.
So as for how I entered finance, I was drawn to it because of the scale and the stakes.
Finance is not just about numbers, it's about trust and access.
Small improvements in processing speed, reliability, or risk detection ripple out to millions of people.
That blend of high technical challenge
and real world impact is what drew me in.
And it's why I have stayed passionate
about applying AI and engineering to the domain.
I love the quote that you shared.
I think that's very, very powerful.
And I will want to talk about that a little bit more,
especially around when you're building systems
and you're building for, obviously,
we build with the intent of commercial use,
whether we're working for a company
you're working for a client, etc.
It's always very challenging to continue to deliver what a customer needs
while pairing that with building improvements within your own system, knowing that when you
actually invest the time and energy to improve your internal systems, you can be far more
efficient in delivering a solution to a customer.
And yet it's not very often that we're able to take that time to do things the right way
rather than the fastest way.
So how do you kind of battle that decision?
So definitely I would be one of those people who takes it as a challenge,
because in order to become a leader,
you have to have that faith in solving these situations with confidence.
So definitely these kinds of scenarios create a critical tension,
but this is when we have to understand how much speed
versus how much sustainability we are applying here.
In finance especially, because I'm in finance, the pressure to deliver fast is definitely immense.
But if you only optimize for speed, you will definitely create a debt that someone will
eventually have to pay. I have always approached it with a simple principle: build the system
that will outlive you. What that means in practice is finding balance. Yes, we meet the immediate
business needs, but we also carve out space to strengthen the foundation
through automation, I believe, reusable frameworks and robust testing.
It's a mindset shift.
Investing in internal improvements is not a luxury.
It's a multiplier.
Every hour you spend making a system more reliable pays back tenfold when the next customer
asks for a new feature.
So I push for a culture where fast and right are not opposites.
The fastest way in the long run is often doing it the right way up front.
That perspective helps both the business and the
engineers. So this is what I believe in this scenario. That's very, very succinctly put and
really shows the power of what you were talking about, which is investing in the framework is a
multiplier. Especially as a senior principal engineer, how does
that influence work, Anusha? I'm curious because, I mean, we'll obviously have a lot of engineers
working. I mean, there are multiple, like we talk about, there are multiple forces that are
influencing the decisions of what we build. Can you talk about like a few strategies that have worked
for you in terms of being able to sort of guide the team in a certain direction, which may be
at least temporarily in tension with product delivery? See, definitely we have very proficient
engineers and talented engineers on our team, and we hire for that, so that we can
deliver highly scalable products with efficiency and quality.
So that definitely is a great question for anybody who is working in any tech industry.
Because at senior engineering level, you're not just writing code, you're shaping direction.
And you are right.
There are always competing forces like business deadlines, regulatory needs, customer expectations,
and changing needs.
Because these days we are not even following waterfall anymore.
So we have to go with the flow; customer needs can change,
and changes can come right before the release deadlines,
and they ask us to accommodate them.
So a few strategies have worked well for me for sure.
So first, I would say data-driven storytelling.
When you can show how an upfront investment in a framework reduces incidents
or accelerates delivery over time,
it turns an abstract engineering choice into a business decision.
And second, incremental wins.
Instead of proposing a six-month refactor, I break it into milestones that deliver visible value at each stage.
That way, the business sees progress while the foundation gets stronger.
And third, principle-based alignment.
I anchor decisions in shared principles like reliability, security, and compliance.
These are non-negotiables in finance.
So framing engineering choices through that lens makes alignment
easier. In short, I would say, influence is not about saying, trust me, I'm an engineer. It's about
showing the business that doing it right is not in conflict with delivery. It's the fastest path
to sustainable delivery. The short-term tension fades when everyone sees the long-term multiplier.
I love it. I think the way you broke that down in terms of very specifically, like almost like
a framework that anybody could use to say, hey, if I wanted
to go and influence a decision in terms of saying this is something I feel very strongly about
what we should invest in as a team. I think breaking it down into the dimensions that you talked
about, it makes for a very, very powerful case. So thank you for sharing that. And I think it's not
just, you obviously talk about it in the financial space because that's where you are, but I think
it is absolutely relevant in any industry because, you know, that's what happens as systems grow
in volume and you're processing a lot more data, the product has lived on for longer,
you tend to inherit and incur this tech debt, which you have to pay off in order for you
to continue to be relevant and sustainable. So very valid points. But I'd love to talk a little
bit more, Anusha, around financial systems itself, right? When you're building for the financial
industry, I know a couple of times you mentioned trust, you mentioned compliance. Can you maybe
elaborate a little bit more on what do those terms really mean from a software development standpoint?
What are the factors that you keep top of your mind when you're building systems in the
fintech industry? So these terms are definitely domain specific.
I would say for the software industry, mainly in finance, trust and compliance are not just
buzzwords anymore. They are engineering requirements
baked into every decision.
From a software standpoint,
I think of them in three dimensions.
Just to explain it more clearly.
First, I would say like security by design.
Every component is built assuming it will be attacked.
So encryption, access controls, and audit trails are not add-ons.
They are defaults.
And second, resilience and reliability.
If a system fails, it's not just downtime.
It's financial risk.
So we design with redundancy, observability, and graceful degradation.
Trust means the system works when it matters most.
And third, explainability and accountability, especially as we bring AI into the decision making.
A credit decision, a fraud alert, or a risk score must be explainable to an auditor or a regulator,
and ultimately an end user.
In finance, a black box is not acceptable.
So when I talk about trust and compliance, I mean building systems that are secure, resilient, and transparent.
Because in financial technology, every line of code is not just serving a business.
It's carrying someone's money, their data, and their confidence in the system.
So if you think about it, you are also a customer, and I'm also a customer.
So when we are trusting a financial firm, we have to consider all these pointers.
Trust in finance is invisible until it's broken,
and as engineers, our job is to make sure it never breaks.
That is what I believe.
That is very good, very good context.
I think it really helps us understand the emphasis,
and to your point about downtime is not just,
hey, I can't use this platform, it's an inconvenience.
Of course, everything comes with business loss and business risk,
but there is an increased level of sensitivity,
especially around financial risk,
and that's something that you absolutely
need to take very, very seriously. When you talk about explainability, Anusha, are you primarily talking about
sort of the underlying data being transparent and traceable? What is it that you do in order to
enable that? How do you process your data or what kind of guardrails do you put in place to
ensure that you never miss the standard of both traceability and auditability but also security?
Yeah, that's a great question because explainability goes beyond just showing the data.
So in financial systems, we always think about three layers. First, data lineage: being
able to trace exactly where the data is coming from, how it was transformed, and why it was
included in a model's decision.
That transparency is very critical.
When regulators ask, show me the source,
we should be able to clearly show that.
So second, model interpretability.
It is not just a score, but the why behind it,
the question of why behind it.
For example, instead of simply saying a credit card application is declined,
an explainable model will highlight the key factors,
such as high utilization, short credit history, or recent delinquency.
That helps customers understand, regulators validate, and teams improve.
And third is about auditability and reproducibility.
Every decision has to be replayable.
Otherwise, we don't know whether decisions are consistent over time.
If a transaction is flagged as fraud today, six months from now,
I should be able to reproduce the exact conditions, inputs, and model outputs that led to that
decision. So just to your point, explainability is both about the data being transparent
and the decision-making process being traceable, reproducible and defensible. Like I said,
in finance, a decision is not explainable until you can show the data, the model and the
reasoning, all in a way that a human can understand. This is how the agents are also being built
with explainable AI.
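To make the three layers Anusha describes concrete, here is a minimal hypothetical sketch of a decision record that stores the inputs, the human-readable reasons, and the pinned model version together. The field names and threshold rules are invented for illustration, not her firm's actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision record: every automated outcome carries the factors
# that drove it, the pinned model version, and the raw inputs for lineage.
@dataclass
class CreditDecision:
    application_id: str
    outcome: str            # "approved" or "declined"
    reasons: list           # human-readable factors behind the outcome
    model_version: str      # pinned version, for reproducibility
    inputs: dict            # raw features, for data lineage
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide(application_id: str, features: dict,
           model_version: str = "credit-v1") -> CreditDecision:
    """Toy rule-based decision that records *why*, not just a score."""
    reasons = []
    if features.get("utilization", 0) > 0.8:
        reasons.append("high credit utilization")
    if features.get("history_months", 0) < 12:
        reasons.append("short credit history")
    if features.get("recent_delinquency", False):
        reasons.append("recent delinquency")
    outcome = "declined" if reasons else "approved"
    return CreditDecision(application_id, outcome, reasons,
                          model_version, dict(features))

decision = decide("APP-001", {"utilization": 0.92, "history_months": 6})
print(decision.outcome, decision.reasons)
# prints: declined ['high credit utilization', 'short credit history']
```

Because the record pins the model version and keeps the raw inputs, the same decision can be replayed and audited later, which is the reproducibility requirement she mentions.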
Understood.
That is very, very, very insightful, right?
When you talk about data lineage and the need to plan for this data to live also for a
significant period of time, I'm guessing financial institutions have very strong compliance needs
for you to keep the data alive for a longer period of time.
You were talking about like, hey, six months later, I need to be able to replay this entire
transaction and understand how I determined it to be fraudulent.
So that is, I mean, I would say a couple of questions that come up from there in itself, right?
Two different questions.
One is around the data that you use to train these models, etc.
How do you ensure that data is clean or in a format that you can easily use, in terms of the mapping,
the schema that you're trying to use?
How do you clean the data, basically?
That's one of my questions.
And a follow-up question is, especially as an engineer, I'm also curious about how do you test
for these products, right?
I mean, how do you generate synthetic data
that is mimicking what you see
in the production environment?
How accurate can that be?
How difficult is that a problem?
Yeah, sure.
So I would say data quality is everything
because bad data
is more dangerous than having no data at all.
Because if a model is trained on biased,
inconsistent, or noisy data,
the errors get amplified at
scale. First and foremost, schema standardization is very important to make
sure that every source system speaks a common language. In finance, data often comes from
decades-old platforms and new digital channels. So mapping to a consistent schema up front is very
essential. And considering the second one is like more of data validation and enrichment.
so we build automated pipelines that check for missing values, anomalies, and then enrich with
external sources when it is needed.
For example, reconciling transaction data with reference feeds ensures consistency.
Imagine I'm a customer.
I'm residing right now in New York, and I created an account.
Later on, I moved to a different state, and I generated a different credit history or
different transactions in that region.
So when it is time to aggregate the customer data, it always has to consider both regions,
whether it is zip-code based or whatever it is. It has to relate all the historical data
and generate a report irrespective of where the customer has been moving. So that's how
the data validation and enrichment should happen. And governance and feedback loops are the third.
Cleaning is not a one-time task, for sure, I would say.
I would definitely say this because I have faced this so much.
We continuously have to monitor the drift and we have to log anomalies and refine data pipelines.
In fraud detection, for instance, if an alert was a false positive, that feedback goes back into improving both the dataset and the model.
So for me, clean data is not just about pre-processing.
It's about creating a life cycle where data remains trustworthy over time.
So every model is only as reliable as the data it was trained on.
If models are the brain of financial AI, clean data is like oxygen.
Without it, the system suffocates, I would say.
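The validation-and-feedback loop described above (a common schema, automated checks for missing or anomalous values, quarantine of bad records) could be sketched roughly as follows. The field names and rules are illustrative assumptions, not an actual production pipeline.

```python
# Hypothetical validation step: records are standardized against a required
# schema; anything that fails a check is quarantined with its issues listed,
# so the feedback can be used to refine the pipeline over time.

REQUIRED_FIELDS = {"txn_id", "amount", "currency", "timestamp"}

def validate(record: dict) -> list:
    """Return a list of problems found; an empty list means the record is clean."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        problems.append("amount is not numeric")
    if record.get("amount", 0) < 0:
        problems.append("negative amount")
    return problems

def run_pipeline(records: list) -> tuple:
    clean, quarantined = [], []
    for rec in records:
        issues = validate(rec)
        if issues:
            quarantined.append((rec, issues))   # held back for review/feedback
        else:
            clean.append(rec)
    return clean, quarantined

clean, bad = run_pipeline([
    {"txn_id": "T1", "amount": 120.0, "currency": "USD", "timestamp": "2025-01-01T00:00:00Z"},
    {"txn_id": "T2", "amount": -5.0, "currency": "USD", "timestamp": "2025-01-01T00:01:00Z"},
    {"txn_id": "T3", "currency": "USD"},
])
print(len(clean), len(bad))  # prints: 1 2
```

In a real system the quarantined records and false-positive feedback would flow back into refining both the checks and the training data, as Anusha describes.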
ACM ByteCast is available on Apple Podcasts, Google Podcasts,
Podbean, Spotify, Stitcher, and TuneIn.
If you're enjoying this episode, please subscribe and leave us a review on your favorite podcast platform.
I love these analogies that you bring out. They're so accurate in terms of explaining the
situation. So thank you for sharing that. And I think 100% agree with you on data cleanliness being
something that you're constantly working towards. This led me to another line of questioning,
Anusha, which is given the, you know, stringent sort of needs of the systems that you're working,
on, is it typical that you tend to build a lot of these, you know, observability tools,
like for example, if you're monitoring data over a period of time, right, and especially
for customers who have greater retention policies, which are very long, are there commercial
tools available that you can actually adopt for your software development, or do you tend to
have to build a lot of custom tooling? So it's based on how we are proceeding with our
decisioning and what exactly the problem was. Because in finance, observability is not just
a convenience. It's compliance. And you're right. Retention policies can span over years,
sometimes even decades. So monitoring data integrity over time is non-negotiable.
From what I have seen across this industry, it's usually a hybrid. Commercial observability tools
like logging, monitoring, or alerting platforms give us a strong foundation for sure.
They are reliable and scalable in order to help accelerate our adoption.
But financial systems come with unique needs like multi-year replayability, strict audit
trails, and regulatory reporting.
So those needs often require custom extensions on top of the commercial stack.
For example, while an off-the-shelf tool might tell you a system
failed at 3 a.m., a custom layer ensures we can reconstruct the exact transaction flow,
data lineage, and model decision that occurred in that moment, even years later.
So the strategy here is: leverage commercial tools for scale and efficiency,
but build custom where compliance, explainability, and domain-specific needs demand it.
In finance, we believe observability is not just about knowing that something broke.
It's about proving why, how, and what the impact was even long after the fact.
Because I believe commercial tools tell you what happened, but custom tools prove why it mattered.
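The custom audit layer Anusha describes, one that can reconstruct a decision years later, might look conceptually like this hypothetical sketch: each record stores the inputs, the pinned model version, and the output, plus a checksum over the canonical form, so the decision can be replayed against the same model.

```python
import json
import hashlib

# Hypothetical audit record: persist inputs, model version, and output
# together with a checksum, so the exact decision can be replayed later.

def audit_record(inputs: dict, model_version: str, output: dict) -> dict:
    payload = {"inputs": inputs, "model_version": model_version, "output": output}
    blob = json.dumps(payload, sort_keys=True)  # canonical serialization
    payload["checksum"] = hashlib.sha256(blob.encode()).hexdigest()
    return payload

def replay(record: dict, model_fn) -> bool:
    """Re-run the pinned model on the stored inputs; True if the output matches."""
    return model_fn(record["inputs"]) == record["output"]

# Toy "model" pinned by version: flags transfers above a threshold.
def fraud_v1(inputs: dict) -> dict:
    return {"flagged": inputs["amount"] > 10_000}

rec = audit_record({"amount": 25_000}, "fraud-v1", fraud_v1({"amount": 25_000}))
print(replay(rec, fraud_v1))  # prints: True
```

The key design choice is pinning: as long as `fraud-v1` is retained, a flag raised today can be reproduced and defended to a regulator long after the fact.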
Yeah, it makes incredible sense.
I think, again, using the power of a commercially built tool that is built to scale,
but adding your custom implementation layers to kind of suit your use cases makes a lot of sense.
Is that true of AI models as well, Anusha, are you starting to see that both the customers that you work with as well as for your own usage within the organization, that you are able to leverage some of the commercially available LLM models and then build on top of that?
Or do you think that the way to go is to actually sort of build this out from scratch in a custom model?
So like I said earlier,
it is very similar to the observability space,
even in the context of large language models.
They definitely give us a powerful foundation,
but in finance you can't just take them off the shelf
and deploy them directly,
because the capability is raw here.
The context, compliance, and precision have to be layered on.
So the approach I have seen succeed
is hybrid adoption. Start with a commercially available model, then fine-tune or constrain
it with domain-specific data and guardrails. For example, you can use an LLM to analyze
unstructured compliance documents. But before it reaches production, you add layers of prompt
control, explainability, and domain-trained data so that output aligns with financial regulations.
And this applies for customers as well. They're open to leveraging these
models, but only when they come with clear boundaries. Finance doesn't tolerate hallucinations.
If a model generates the wrong legal interpretation or a false fraud signal, the cost is
enormous. That's why custom wrappers, monitoring, and explainability layers are as important
as the model itself. So yes, we are leveraging the scale of commercial LLMs, but the real value is
in how we adapt them.
Think of the base model as the engine
and our custom layers as the
steering, brakes, and safety systems
that make it roadworthy for finance.
I'm trying to give these analogies
so that this can be understandable
for people who are not into finance as well.
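The "steering, brakes, and safety systems" analogy can be illustrated with a guardrail wrapper: constrain the prompt, then validate the model's output before it can reach production. This is a hypothetical sketch; `call_llm` is a stand-in stub for whatever commercial model API is actually used.

```python
# Hypothetical guardrail wrapper around a base model. Anything outside the
# allowed output set is routed to human review rather than passed through.

ALLOWED_LABELS = {"compliant", "non-compliant", "needs-review"}

def call_llm(prompt: str) -> str:
    # Stub standing in for the underlying commercial model API.
    return "needs-review"

def classify_document(text: str) -> str:
    # Prompt control: constrain the task to a closed label set.
    prompt = (
        "Classify the following compliance document strictly as one of: "
        f"{sorted(ALLOWED_LABELS)}.\n\n{text}"
    )
    raw = call_llm(prompt).strip().lower()
    # Guardrail: unrecognized or hallucinated output never reaches
    # production as-is; it falls back to human review.
    return raw if raw in ALLOWED_LABELS else "needs-review"

print(classify_document("Quarterly transaction report ..."))  # prints: needs-review
```

Real deployments would add the other layers Anusha mentions on top of this, such as logging every prompt/response pair for auditability and attaching explainability metadata to each label.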
I greatly appreciate that.
That's what I was saying.
I think I love your analogies.
They really bring the point home.
So to that point,
especially because you're building
all of these custom layers on top of
say, a commercially available LLM, and how critical it is to not have errors,
I'm wondering if your lead time in terms of, you know, experimenting with a model or a
system within your development environment before it goes out to production or goes out to
prime time, are those lead times significantly longer in the fintech industry as opposed to,
say other industries? When we are considering this kind of
metric, I can only speak to fintech here, because
I have been in this space for over a decade. Lead times in fintech are typically longer for
sure. And I believe that's for very good reason. In many industries other than fintech, you can
experiment fast and iterate as well, because you don't have regulatory or compliance concerns in
the picture.
In finance, a failure is not just a bug.
It can be a compliance violation, a financial loss, or a regulatory breach.
So it can be any of these.
So we cannot just treat it as a bug.
So what we do is build layers of safety into experimentation.
Prototypes may move quickly in a sandbox.
But before anything reaches production, it goes through rigorous testing,
validation against historical data sets, stress tests under extreme conditions,
and explainability reviews to ensure we can verify every decision the system makes.
This does extend the timeline compared to, say, consumer tech,
but it's a conscious trade-off.
In finance, speed without trust is meaningless.
A delayed rollout is far less costly than a premature one that erodes confidence.
So yes, experimentation is slower, but it's also deeper.
And I have found that this actually pays off in the long run,
because once a system is validated,
it can scale globally with much more confidence.
That makes a lot of sense.
And like I think as you've emphasized multiple times,
I think the trust of your customer in your system
is absolutely paramount to your business.
And so, as opposed to say maybe a consumer internet product,
customers, I mean, I would say even me as a fintech consumer,
are not looking for the latest and greatest in terms of features,
but 100% are looking for security,
the ability to trust the system, to make sure that it is always accurate and always secure.
So I 100% agree with you that the dimensions by which you're measured, in terms of
consumer confidence as well as satisfaction, probably weigh much more heavily
toward security and trust.
Yes, I agree with that.
Got it.
The other question, I mean, of course, is what everybody is talking about is
within the fintech industry itself, Anusha, what role is agentic AI playing? Where are you seeing the
applications? What do you see as things that will really catapult the user experience, and what are
the things that you're seeing as risks? So it's no longer in an evolving phase, I would say. It's
already in a ready-to-deploy phase. Many applications, many organizations, are in that condition right now.
So it is starting to shift finance.
Now, everybody is proactive.
So traditionally, our systems used to wait for instructions.
And when a customer applies for a card, a trader submits an order, or a compliance
officer runs a report, any of this used to happen using traditional systems.
Now, the system does not have to wait for instructions for these kinds of activities.
With agentic AI, the system can anticipate needs, act on behalf of the user and adapt dynamically.
Right now we are seeing applications in personalized financial guidance, like AI agents that can help users manage spending, optimize investments, or detect unusual account activity without even waiting for the customer to ask.
So there can be specific training for that agent to perform according to that customer's wishes.
On the institutional side, agentic AI is emerging in workflow automation, like agents that can reconcile transactions across multiple systems, flag anomalies,
and even generate compliance documentation automatically.
But considering your later question,
the real catapult for user experience
will come from contextual awareness,
an AI that doesn't just give you the raw insights,
but takes the next logical step.
I can give you a better example here.
Instead of just telling a portfolio manager,
risk is rising in this sector,
the agent could propose rebalancing strategies
and even simulate outcomes, which will give a better output.
The promise comes with responsibility here, because when we are dealing with agentic
AI, the systems must be designed with strict guardrails. Human oversight, auditability,
and transparency are the factors that we need to consider in this case.
Because in finance, autonomy without accountability is unacceptable.
So I see agentic AI as the co-pilot of the future.
Invisible when things are routine, but invaluable when decisions are complex, you know?
So I believe the future of AI is not just AI
answering our questions; it's AI
asking the right ones on our behalf, at the right time.
Yeah, I really enjoyed that answer, Anusha,
especially the point that you made about
how, oftentimes when we think about agentic AI,
we picture a system that comes
to us with recommendations like,
hey, I know what your patterns of behavior are,
here are a few things that I can do for you without you asking me to, so I can be more efficient.
But I love the fact that you also said, simulating transactions, right?
Especially, again, in this sensitive financial industry, you want to be very careful about the
kind of controls you're giving to an agent to execute on your behalf.
And so I think the simulation gives a lot of information and a lot of insight to a customer.
Even for me, for example, before making a financial transaction, I would love to say, hey, play this
out for me, what do you predict will happen if I make this move,
and then have the human make the final decision. Do you see that changing at all, Anusha?
Do you think that's kind of the pattern that we'll be following for the foreseeable future?
Yeah, definitely. I would say simulation is that kind of powerful middle ground,
because it gives the system agency to explore outcomes while we keep the final
decision with a human. In finance, that balance is critical because customers and institutions
both want empowerment without losing control. Right now we are in a phase with agentic AI
where it primarily simulates, recommends, and assesses. It helps you see multiple futures before you commit
to one. But over time, I see that evolving into a tiered trust model. For low-risk actions, like
automating routine reconciliations or generating a compliance draft, the agent might act
autonomously. But for high-stakes actions, like reallocating millions in assets, the agent will
always remain in a co-pilot role, offering scenarios and insights but requiring explicit
human sign-off. What I don't see changing, and what must not change, is the principle of
human accountability. Financial AI should always expand
decision-making capacity, but it should not replace it.
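The tiered trust model described here can be sketched in code. The following is a minimal, hypothetical illustration, not a real trading system: the risk threshold, action names, and `route_action` function are all invented for this sketch. Low-risk actions execute autonomously, while high-risk actions only simulate and wait for explicit human sign-off.

```python
from dataclasses import dataclass

# Illustrative dollar threshold separating risk tiers -- not a real policy.
LOW_RISK_THRESHOLD = 10_000

@dataclass
class Decision:
    executed: bool        # True if the action actually ran
    needs_signoff: bool   # True if a human must approve before execution
    note: str             # audit-trail message for accountability

def route_action(action: str, amount: float, approved_by_human: bool = False) -> Decision:
    """Gate an agent's action by risk tier (sketch, not production logic)."""
    if amount < LOW_RISK_THRESHOLD:
        # Low risk, e.g. a routine reconciliation: act autonomously, but log it.
        return Decision(True, False, f"auto-executed {action} for {amount}")
    if approved_by_human:
        # High risk with explicit sign-off: execute and record who is accountable.
        return Decision(True, False, f"executed {action} after human sign-off")
    # High risk without sign-off: simulate only and surface outcomes to the human.
    return Decision(False, True, f"simulated {action}; awaiting human sign-off")
```

Under this sketch, `route_action("rebalance", 5_000_000)` would only simulate and flag the need for sign-off, while the same call with `approved_by_human=True` would execute, keeping the human accountable for the high-stakes decision.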
Simulation will remain the anchor. It provides transparency, foresight, and
confidence without stripping away human judgment. So to me, the real
power of agentic AI is not just about making decisions for us. It's about making us
better decision makers. Every single time I am
invited for a guest keynote or any related conversation, I always say it's a
collaborative effort. Even though we have agentic AI and agents, and people say AI replaces human capacities,
no, because we are the brain. So treat it as a collaborative effort. Do not even think in
the direction that something will replace the human brain. If we are not there, is
there even a foundation for AI? No, not even a chance. That's a very powerful statement and a message. I love
the fact that you bring up human accountability being the final sort of frontier in any of these
decisions. One thing I did want to talk about is that this is such an evolving space, Anusha. So much
is changing all the time. You certainly sound like the kind of person, you know, who loves to experiment
and be hands on. You're involved in a whole bunch of hackathons. So I think you're always
sort of trying to keep yourself on that cutting edge of, hey, what's new, what's evolving in this
space? What would you say are your key strategies for staying abreast of this ridiculously fast-moving
technology space? So there are various mediums, I would say, one has to explore in order to
scale up our self, whether it is in knowledge-sake or the strategy's sake, the space is moving
at an almost a dizzying space, you know. For me, staying current is not any.
kind of passive mode, it's always intentional, I would believe. I use three principles in my
perspective. I do not leave hands-on experimentation for sure. I don't just read about new tools.
I try to test them. And coming to the hackathons part, they are like side projects, you know.
And even small prototypes at work give me a safe way to explore what's hype and what's real.
Hackathons are a community where students come up with fresh ideas and fresh ways of
implementation, and some are very startup-ready as well, you know. We are often mesmerized
by their thought process, but sometimes they might lack an understanding of the gap between
their ideas and real industry standards. So that's where the professionals come into the picture,
to guide them in a proper way. So I love the interaction during hackathons. And second,
community engagement, I would say. I stay active in professional circles like IEEE, ACM, and
industry conferences. It's not just about listening to talks, but about exchanging ideas with
peers who are wrestling with the same problems. That network is often where I hear the earliest
signals of change. And third, I would say, continuous synthesis. I make a habit of
writing, speaking, and teaching. Whether it's an article, a keynote, or mentoring,
distilling new concepts into something I can explain
forces me to really internalize them. So the one strategy I believe in, and throughout my conversations
and talks I keep emphasizing this with different examples as well, is: always experiment,
connect, and distill. Because in a field evolving this fast, the only way to stay relevant is to
stay curious. And definitely, I would say, I don't chase trends; I chase understanding. That's what
keeps me future-proof. At least, if I understand something, I can relate something new to it.
Yeah, no, thank you. I think you make an incredibly valid point. We do hear a lot about experimentation,
especially as engineers, often we want to kind of build something and try it out. I love the point
that you bring up about, you know, write, speak, teach as well, because I think the ability to explain
a concept to someone else really strengthens your own understanding of it or forces you to, which I
think is an incredibly valuable insight.
For our final bite, Anusha, I want to talk about a couple of things.
One is, I know you're very actively involved in mentoring the next generation of leaders,
of engineers, of thinkers in this space.
What pushes you towards that and what do you think works well?
Why do you seek that out?
See, I came from a similar phase, I would say, where I needed guidance and a
proper direction to go through.
So mentorship is deeply personal for me.
As I explained earlier about my background,
I wouldn't be where I am today
without people who believed in me, challenged me,
and opened doors when I was still figuring things out.
So I see mentorship as both a responsibility and a privilege.
It's how we pay forward the opportunities that we were given once.
What pushes me is the belief that talent is everywhere,
but opportunity is not.
By mentoring, I can help bridge that gap.
Sometimes it's technical, walking someone through how to architect a system, but often it's
about confidence, or the perspective to see a bigger future for themselves than what they had imagined.
But what works best in my case is authenticity and accessibility.
Being honest about your own struggles makes your guidance more relatable.
And creating spaces like hackathons, talks, and office hours,
where younger engineers feel safe to ask simple questions,
fosters growth far faster than formal lectures.
At the end of the day, mentorship is not just about teaching.
It's about building leaders who will, in turn, mentor others.
That's how you create a cycle of innovation that outlasts anyone's career.
I believe the true measure of leadership is not just what you build.
It's who you inspire to build after you.
I came from this cycle.
I want to continue building this cycle.
That's absolutely fantastic, and it's a great note for us to kind of tie it all together, Anusha.
I think paying it forward is absolutely the right way to think about this,
especially because we all are the products of the influences and the mentorship and the guidance
and the help that we've received through our career.
I want to thank you so much for the time that you have spent with us.
It's been absolutely fascinating and insightful to delve into the
world of fintech with you, Anusha. Thank you for taking the time to speak with us at ACM ByteCast.
Thank you so much, Rashmi, for having me here. It's a true pleasure to share my journey and
perspectives. For me, the heart of these conversations comes back to one simple belief,
you know. Technology, in finance or any industry, should do two things. Like I said, remove
friction and build trust. And I have been fortunate to work on systems that carry real impact.
So thank you again for having me. It's been awesome.
Likewise.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioner
Board.
To learn more about ACM and its activities, visit acm.org.
For more information about this and other episodes, please visit our website at learning.acm.org
slash bytecast.
That's learning.acm.org slash B-Y-T-E-C-A-S-T.
