In The Arena by TechArena - Betterworks: Turning AI Hype Into Enterprise SaaS at Scale
Episode Date: April 17, 2026
Maher Hanafi of Betterworks joins TechArena Data Insights to discuss AI in enterprise SaaS, why many AI proof-of-concepts fail, and how engineering leaders can successfully move AI into production....
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allison Klein.
Now, let's step into the arena.
Welcome to the arena. My name's Allison Klein. Today is a Data Insights episode, which means I'm here with my co-host, Janice.
Welcome back to the program, Janice.
Oh, thank you, Allison. It's great to chat.
Janice, we've got a really exciting guest today. Why don't you go ahead and
introduce them and talk about the topic that we're going to be covering.
Yes, we've talked, you know, all things AI on our Data Insights podcast.
And today we actually have Maher Hanafi from Betterworks.
So we're going to dig into a little bit of how AI intersects with HR and kind of helps the working world and the working environment.
Welcome to the program.
Thank you so much.
Thank you for having me.
It's a pleasure to be here and talk to you today.
Maher, why don't we just start with an introduction of Betterworks?
Betterworks has not been on the program before.
And your role at the company, describe how that brings a focus on practitioners working
at the intersection of engineering, AI, and enterprise SaaS.
Yeah, sure, absolutely.
So Betterworks is a talent and performance management platform for global enterprise customers.
We help these customers navigate the complexity of really achieving success,
aligning goals, calibrating their talent, and also growing their talent in general.
It's very important to have a tool that goes
a little bit beyond a static or traditional HR software.
These customers usually have a high level of requirements and needs that are only met
with the right product.
My job at Betterworks is senior VP of engineering.
I lead our engineering, AI products, and technologies.
I've been with the company for three years.
And these have definitely been maybe the most important years of the journey at Betterworks because of
generative AI and all that it brings from solutions and opportunities for the business.
Amazing. And Maher, Betterworks operates, you know, in a performance and talent management world.
But it has a really strong line of business perspective rather than just the traditional HR lens.
Can you tell us a little bit how that shapes the kind of data challenges and opportunities you're focused on today?
Yeah, I was just talking about the traditional HR tools and platforms. Usually they feel like very static, administrative databases.
I mean, if you think about it, this is usually, you know, a set of documents,
spreadsheets, tables, lists like that.
But at Betterworks,
We're looking at the data from a performance lens.
So we're trying to enable anything that helps go beyond just tracking history, like
what happened before, to focus more on the flow forward, as an example.
Because of our continuous approach to talent and performance, it's very important to look
at the challenges and how we can help, you know, highly complex organizations like the ones
I was describing.
If you think about global enterprise customers, they usually have these very complex structures.
So aligning their visions from the individual at any business unit, at any department, in any region to contribute to the success of the company and the vision of the company is very important.
So it all comes back to this kind of strategic alignment and how a platform like Betterworks helps achieve that: moving from reactive health checks on performance, where you look backward at the results and say, okay, we have a performance challenge, to more of a proactive growth system that really helps,
again, individuals, especially managers, but also the whole business to achieve what is most
important to them, which is their goals and objectives. There are definitely different perspectives
on this. And usually, again, enterprise customers and big companies, they usually go with some,
you know, they have, I would say, traditional systems in place. So bringing in a platform like
Betterworks is kind of flipping that upside down and trying to focus more on proactive growth
versus reactive performance checks. So, Maher, one of the things that I wanted to talk to you about,
and we've been doing a series of interviews on this,
is really how AI is reshaping software used by enterprises.
And I wanted to talk to you about the SaaS industry overall,
not just in terms of the kind of features
that platforms like BetterWorks can deliver differently with AI,
but also how you're approaching building platforms,
how you're integrating them,
and what you're seeing from customers in terms of their evaluation.
Yeah, I mean, there are a lot of shifts in the space, right?
both in SaaS and also obviously in AI. Going back maybe to something you said earlier about
like my approach or my lens as a practitioner, yeah, I mean, obviously I'm very deeply involved
in AI, especially in the most recent years. My background, I don't come from a research expertise
in AI from an academic standpoint. So I had to really learn and adapt and really understand
how this is going to impact software engineering, developing products, and especially in
SaaS. So my focus is, I would say, more on applied AI. I would call that the science of consuming
and integrating these models and how you make them work and add another layer to whatever technology,
whatever product you have. So I think that unique background, or that unique perspective, has helped
me a lot to think about how I can bring AI to really reshape an existing SaaS product.
The other thing that when I think about bringing AI to SaaS is that it should not be coming from
just an intent of having a label like powered by AI or like we integrate AI into our platform
product. So there is a completely different, I mean, AI is forcing a shift towards, in my opinion,
more horizontal intelligence. And we say that a lot at Betterworks, because Betterworks is a combination
of multiple modules that work together. But before AI, I would say they were considered
clear domains and verticals. Like you can manage your goals and objectives and OKRs as a business. You can
manage your feedback in conversations with the managers, you can manage your meetings, you can manage
your talent and skills. But at some point, what AI has brought here, and I would expect brings to a
lot of businesses in SaaS, is that kind of possibility or capability of really connecting all of these
domains and verticals in a very intelligent way, and also at a cheaper cost. I mean, a lot of things
were possible before. AI is not new, right? Like, machine learning existed, and all sorts of
statistics and analytics existed before, but with AI today, with the power of LLMs and
agentic AI, it's just way easier to interconnect all of these. I think my understanding of how
at least SaaS products and SaaS platforms will be built is more that interconnected set of
layers that will just break the silos between different components and features in SaaS in general.
So, Maher, you know, many enterprises really struggle with adopting AI beyond isolated use
cases. Have you seen what really drives real, meaningful adoption of AI capabilities across
SaaS platforms? Yeah, absolutely. Again, there is a lot of change in the way businesses build
software using AI, but also how customers are evaluating or looking at this. Going back to the
earlier question, part of it I think that I missed was about how customers are looking at us
at Betterworks in general. And definitely over the last few years, or close to four years now, we went through
like a rollercoaster of relationship change with customers.
Initially, when AI started, there was a lot of concerns.
I mean, first, there was a lot of hype, right?
Like amazing proof of concepts and demos.
Everything was mind-blowing.
So that created a big hype.
But that hype applied directly maybe to B2C and some other domains,
not necessarily B2B enterprise SaaS products,
where customers were a little bit concerned about privacy,
about using their data to train,
and what comes with all of that.
So there was a lot of friction
between the hype and the skepticism.
But then over time,
as these customers started to better understand
AI capabilities, the boundaries,
the privacy, the governance, and all of that,
and they set their own policies and rules,
and they understood that providers like Betterworks
can do that properly, meeting their guidelines and requirements,
well, that shifted completely.
And these same customers,
who were very skeptical,
trying to keep their distance from usage of AI because of all of the privacy concerns,
shifted completely over time to just be the ones that are asking for AI,
the ones asking what capabilities AI has, and even asking for a roadmap.
So I hope that covers the customer relation for SaaS businesses.
Now, when it comes to what these businesses are struggling with,
it's definitely similar to that challenge of the hype of building proof of concepts
very quickly and demoing to your internal stakeholders and leaders and board and saying, hey,
we can have this in our product, versus the reality of taking that to production
and delivering this at scale to global enterprise customers.
So the challenge there is how big the gap is between what you can see early on from an
AI capabilities perspective to what you can really get in production.
And especially in an environment like HR tech or health tech or fintech, these are
heavily compliant environments, with a lot of governance, a lot of privacy and ethical concerns,
even regulations within different regions globally. So that creates a lot of challenges and
problems that will make achieving the proof of concept you created early
on very challenging, and maybe even completely missing on your promise, which leads to failure
on achieving ROI, which leads to what we have seen in a lot of reports recently:
a huge percentage of POCs, when it comes to AI, fail to achieve ROI.
Why? Because of that.
One of the things we have done early on at Betterworks is create a council that involves customers as well.
So really early on, when we start thinking about AI and how we build it, our focus was towards
what users are looking for.
We didn't want to just focus on what we thought was right.
Because at the end of the day, it's the users and your customers that will be using
these AI capabilities.
So you'd rather meet them in terms of their needs.
So understanding this, and tweaking your whole framework towards quick iterations and understanding
the technology gap between what you show internally and what you deliver to your end users,
has been one of the biggest challenges and pitfalls that I have seen a lot of enterprises
going through.
Now, Betterworks sits atop a sea of text-based data, and it accesses this data to deliver value
to your customers.
When you look at that from a lens of generative and even agentic AI integration,
what new challenges does that introduce in terms of the management of data
and the interface within the application?
Yeah, that's true.
We have a lot of very valuable data, especially now in this generative AI, agentic AI era,
because it's heavy on text and written data.
I mean, there are a lot of opportunities, but there are also a lot of challenges.
I would start with the challenges, because I think the first one that comes to mind
is related to how you keep your access controls to the data
safe. Early on when AI started, we understood these concepts of putting a lot of things
in a vector DB and being able to search, you know, a huge corpus of data just by doing semantic
search or similarity search. So it was easy to go and find data, analyze it, summarize it,
and do all of that. But compared with that kind of early technology, we have decades of mature
access controls in SQL relational databases.
So going from that high level of maturity in how you connect to the data and how you access
the data to just the pure Wild West, where it's early on and too early for us to really
grasp and do it properly, that was the biggest challenge in my opinion: how can you
bring this new innovation, these new AI capabilities, without breaking the controls?
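The gap Maher points at, semantic search arriving without the mature access controls of SQL, can be pictured with a minimal sketch. Everything here is hypothetical: toy two-dimensional embeddings and an in-memory list standing in for a vector DB. The point is only that access metadata is filtered before similarity ranking, so a semantic match can never cross a tenant or role boundary:

```python
import math

# Hypothetical in-memory index: each entry carries an embedding plus the
# access metadata a mature SQL system would normally enforce.
DOCS = [
    {"id": 1, "tenant": "acme", "allowed_roles": {"manager"}, "vec": [0.9, 0.1]},
    {"id": 2, "tenant": "acme", "allowed_roles": {"manager", "ic"}, "vec": [0.2, 0.8]},
    {"id": 3, "tenant": "other", "allowed_roles": {"manager"}, "vec": [0.9, 0.2]},
]

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, tenant, role, k=2):
    # Filter on access metadata FIRST, then rank by similarity, so a
    # semantically close document can never leak across tenants or roles.
    visible = [d for d in DOCS if d["tenant"] == tenant and role in d["allowed_roles"]]
    ranked = sorted(visible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:k]]
```

Note that document 3 is highly similar to a query like `[1.0, 0.0]`, but it belongs to another tenant and is filtered out before ranking ever sees it.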
And another thing that comes with all of this is the non-deterministic aspect of AI.
We know about hallucinations and bias and model training and all of that.
So if you bring this on top of a huge corpus of data, that can introduce undesired outcomes,
like, again, wrong answers or answers with bias.
If you use our tool to polish your feedback to someone,
and you don't have the right guardrails and safeguards in the system,
it's likely that the result will be even worse than whatever the initial response
was. Another challenge was to build a system, not just a product or a feature leveraging AI,
but a whole pipeline and a system that will go through all of these checks, ensuring that
everything works from an access control standpoint, with safeguards and guardrails,
and ensuring that anything you do is adding value versus introducing risk. And then one last
thing that is very important is how you build trust into these solutions, right? Because
one of the challenges is like, I don't trust the outcome that AI has given to me,
or how AI was capable of doing that, looking at the data.
So part of this, and it's all known, I would say, by the name of responsible AI:
how do you build responsible AI that can really achieve these great outcomes
and initial goals and objectives without introducing risks?
So two of the pillars for responsible AI, in my opinion, that Betterworks bet on early on,
were transparency and explainability.
Transparency meaning you really explain where you got the data and what sources you used
to generate whatever new content.
And then explainability is like you really explain
why this polished thing is better than the initial version.
If you're given feedback to someone and AI suggested to do changes, why?
So this is important to really understand how AI thinks
and to catch any risks from the non-deterministic aspect.
But also, it's to use AI to learn from AI,
not to put your brain to the side and just delegate everything to AI.
Like, you start typing very generic bullet points,
and AI will turn them into good feedback or a good text that you can share with your manager, as an example.
So we are trying to use AI as a way to really make you a better individual, a better member of your organization,
and contributing to the big picture versus just having AI take control.
So you should be in the driver's seat.
You should be the person who is leading.
AI is just there to help you and be a co-pilot, nothing else.
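The pipeline idea Maher sketches, safety checks plus transparency and explainability wrapped around every generation, can be illustrated with a small sketch. All names here are invented stand-ins (the actual Betterworks system is not public); the "model" and the safety check are placeholders:

```python
# Illustrative stand-ins only, not the actual Betterworks pipeline.
BLOCKLIST = {"ssn", "salary"}  # placeholder for a real PII/safety check

def mock_model(prompt):
    # Placeholder for a call to a self-hosted LLM.
    return f"Polished: {prompt.strip()}"

def generate_with_guardrails(prompt, context_docs):
    draft = mock_model(prompt)
    # Guardrail: never return output that trips the safety check.
    if any(term in draft.lower() for term in BLOCKLIST):
        return {"ok": False, "reason": "blocked by safety check"}
    return {
        "ok": True,
        "text": draft,
        # Transparency: which sources were consulted for the output.
        "sources": [d["id"] for d in context_docs],
        # Explainability: why the suggestion differs from the input.
        "explanation": "Tightened phrasing; no factual content was added.",
    }
```

The envelope returned to the user always carries its sources and a rationale, which is the transparency-and-explainability bet described above, and output that fails a check is blocked rather than shown.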
Awesome.
And as you see capabilities mature, where do you see the biggest
opportunities for improving customers' return on investment in enterprise SaaS products?
I think, you know, when we talk about ROI in this space, I think about two things.
First is efficiency and second is insights. So performance and talent management is a time-consuming
thing. The bigger your organization, the more complex the organization is and the structure
and the org chart, the more time it's going to take you and your teams. You know, historically, companies
used to dedicate teams and contractors to just do analysis of the data when it comes to
performance and talent, lining it up with results, trying to analyze all of that, running it by managers
and groups and business units, and then aligning all of these different regions and organizations together
just to find patterns and get to these insights. So efficiency is first.
And efficiency goes from an individual contributor working in and using the platform, just being
able to be more efficient. Like, we help a lot with writer's block, as an example. Like, when you
are about to do a 360 report, and you have a few of these, you start thinking, what happened
in the last year, what happened in the last quarter, where should I start? So with AI, you can have a lot
of data pulled together for you to just save your time and get you with some sort of like drafts
or at least hints to what you can talk about and what you can cover in this report or performance report.
So efficiency is number one when it comes to AI. Building with AI will save
companies time at different levels: the individual and manager relationship, teams, structure and
performance, the overall organization, and alignment between all of these. And then the other thing is
insights. To get insights from the data you have, it takes a lot of effort. It takes a human
weeks and even months to get these insights, which maybe you can only use for the next year. So with AI,
and because of that kind of gold mine of data we sit on, you can get these insights pretty quickly,
if not in real time.
And on top of all of that, you can be proactive.
So as you are managing your meetings,
or if you're a manager,
you have individuals reporting to you,
you can have insights on a frequent basis.
Before your one-on-one, you can go into the one-on-one
with a set of insights that AI was able to pull,
looking at a lot of data:
looking at what feedback the individual
you're talking to received in the last quarter,
maybe some of their goals
and how much progress they have been making on these goals,
maybe some of the recognition
they received throughout a certain period.
So you go into these meetings with a lot of insights
that without these capabilities,
it would have taken you a lot of effort
and you need to schedule these times
and frequently work on them
to get these insights to the surface.
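The pre-meeting insight pull described here amounts to aggregating several data sources into one brief. A minimal sketch, with invented field names and toy data rather than any real Betterworks schema, might look like:

```python
def build_one_on_one_brief(employee, feedback, goals, recognitions):
    """Aggregate feedback, goal progress, and recognition into one pre-meeting brief."""
    # Surface the goals with the least progress first.
    open_goals = sorted(
        (g for g in goals if g["progress"] < 1.0),
        key=lambda g: g["progress"],
    )
    return {
        "employee": employee,
        "recent_feedback": feedback[-3:],  # only the latest few items
        "goals_needing_attention": [g["name"] for g in open_goals],
        "recognition_count": len(recognitions),
    }
```

In a real system each input would come from a different module (feedback, goals, recognition), which is exactly the horizontal, cross-domain view being described.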
And insights now, going back to that horizontal intelligence,
interconnected layers,
insights can come from different sources,
not just one.
You don't just look at the previous action items
and agenda topics you had in a one-on-one with an individual.
You'll be able to look at all of these different data points together, holistically, and have a big picture
instead of just, oh, you're doing great in your goals, but maybe you received feedback that we need to talk about.
So the picture will be more holistic, covering different aspects and domains.
When I think about, again, ROI and what things will change in the future, I think efficiency, and I think about insights.
And when I talk about insights, I think also about the quality of these insights, the quality of the outcomes in general when it comes to performance.
I think with AI, you can achieve higher quality performance.
You can develop your talent better, with a very high quality of focus and efficiency,
getting it right and not trying too many things before you get to the desired outcomes.
As you were talking, it is very clear that, you know, in a lot of SaaS applications that I use,
there's almost been a siloed approach, with AI features coming on as part of the menu.
But I think that what you're doing is really baking it in, more integrated into the full platform
capability. I'm wondering if you can comment on the approach to that integration, and how does that
change building AI as a platform layer versus a product feature? Yeah, this is definitely one of the
biggest areas I focus on in my work as a senior VP of engineering. So there's a lot of technical
aspects to this, how you turn this into a platform, and maybe why. Let's start with the why. I think initially,
again, as I said, getting these proof of concept,
leveraging all the AI tools out there,
like off-the-shelf models that you can connect to through an API,
all that is great and quick in SaaS,
and you can really impress with the capabilities you can have very quickly.
But then later, as you start taking these into,
okay, we're going to deploy this,
and understand all the constraints that come with SaaS in general,
and then adding to that specific domains like HR Tech,
it becomes clear that it's not doable to just build that as a layer on top of what you have.
So you have to go deep into your platform and really understand how the data is flowing in your system,
catch it either proactively ahead of time or as it streams, you know, as soon as data happens,
get that data, and connect it to AI.
So I would say a few aspects to platforming AI that I have experienced myself.
The first one was because of some of these compliance and challenges,
we had to build AI in-house.
We couldn't just rely on third-party vendors like OpenAI and others
because of the requirements from our customers,
because of the privacy concerns,
because most customers in HR tech consider all the data as sensitive.
No one wants their data to go to a third party.
And I shared earlier how the perspective of our customers has been changing.
It is like a certain spectrum.
At the beginning, it was like a no-go.
No customer was willing to have their data go to a third party.
And then over time that changed, because they started using these systems and trusting them
internally.
So they are more fine with that.
But initially for us, it was very important to bring AI in-house: use open-source models, self-host
them, self-manage them, manage the infrastructure, and build it as a platform.
Now, building it as a platform was something that we kind of started learning over time,
because as we developed feature one and then two and then three, these kinds of verticals,
we got to that horizontal intelligence.
And it was very important for us to dig into all of these AI capabilities
and all of these domains and interconnect them.
So we couldn't do this as just one AI product plus another AI product.
We needed to build the AI platform at the bottom
and then build what makes this feature different from that feature
as a thin layer on top.
And that kind of AI platform is interconnected with the whole system
and our cloud solution on the back end.
So it's pulling, again, as I said
earlier, streaming data, it's listening to events, it's really reacting. We even also
have some pre-processing we do to just generate AI responses before customers go and ask for
them. Why? Because you don't want to go to an interface where you need a summary of something
and then wait like 30 seconds or a minute for it to be ready, knowing that the data didn't change much
over a few days or weeks. So coming there, maybe you'll find a pre-generated AI summary
sitting ready for you. That's the experience we want to build. And the only way to achieve that
is to turn it into a platform. And the last thing is, we don't know where AI is going. We don't know
how AI is changing. Over the last few years, we have been updating and changing models, increasing
the number of our GPUs, watching our cost, optimizing latency and performance. I call this the trade-off
pyramid, where you have cost, latency, throughput, and performance or quality. And you need to manage
all of this. And there is no perfect spot where everything is optimized 100%. So you need to make
decisions, depending on the features you're building, about where you need to put your focus.
And I call that point the AI Nexus, the central point where most of your constraints are met,
and you deliver the best for that feature. So pre-processing was one of these, where we didn't care
too much about latency or how fast it was. We're going to do it overnight, for you to read when you need it.
So latency was not a focus. We just wanted to do it at the cheapest cost,
as an example.
Versus when you want to rephrase something,
or a feedback you're writing,
we need to focus on latency,
so that has to be fast.
Again, all this is only possible
if you build it as a platform,
not just as a point-solution product using AI.
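That pre-processing choice, spending cheap off-peak compute so read-time latency is near zero, can be sketched as a simple cache with a freshness window. The class, the TTL, and the explicit `now` parameter below are assumptions for illustration, not the actual Betterworks design:

```python
class SummaryCache:
    """Pre-generate AI summaries off-peak; serve them instantly at read time."""

    def __init__(self, generate_fn, ttl_seconds=7 * 24 * 3600):
        self.generate_fn = generate_fn  # the slow, costly model call
        self.ttl = ttl_seconds          # how long a summary stays fresh
        self.store = {}

    def precompute(self, key, data, now):
        # Run during the overnight batch, when latency doesn't matter.
        self.store[key] = (self.generate_fn(data), now)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]  # hit: zero model latency for the reader
        return None          # miss or stale: caller falls back to a live call
```

Passing `now` explicitly keeps the sketch testable; a real system would use the clock. The TTL encodes the observation above that the underlying data doesn't change much over a few days or weeks.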
Thank you.
Enterprise SaaS has always been considered
compliance-heavy and globally distributed.
But how do governance, regional requirements,
and transparency factor into AI system
design without slowing innovation? Yeah, I covered some of this in my earlier kind of answers.
First, we started with that council that has people internally and externally from different domains.
It's not just product and engineering. It's not just design. It's also legal. It's security, and it's
professional services. It's customer success. So we really look at the big picture of how you integrate
AI, meeting all of these compliance requirements and customer needs, and also putting a foundation for future
development. So that was very important because, again, we don't own and manage our product in a way
similar to B2C, where you just design what you think works best for your customers
and you go on and take feedback. We needed to work with the regional compliance systems
and regulations. We learned about the EU AI Act and other acts out there that will govern AI. We, you know,
understand our customers' needs when it comes to data governance. We are a multi-tenant solution,
but also distributed geographically to meet these kinds of requirements.
Going back to the earlier question,
we couldn't do that by just an integration point with some systems.
We needed to own and have more control, and trade off a little bit of the convenience
of using some of the off-the-shelf solutions.
So that's one.
I talked about also transparency and explainability and responsible AI in general.
That meets a lot of legal and ethical checks from our customers,
who come with a bunch of questions
and a lot of assessment of our capabilities.
So proactively working towards that, and having internal AI champions for responsible AI
and things that will really achieve success with our customers, was so important
early on.
And then last but not least, I talked about how we self-manage and self-host our AI,
and how our AI platform is configurable so we can achieve and meet these regulations and
compliance requirements, with us managing all of that.
We have been looking at a lot of solutions that could be in between, you know, it's not self-hosting and it's not like completely third party, something in between.
But even these, again, you trade off some of the convenience to get some more control or vice versa.
And we ended up just owning our stuff and owning our AI stack.
We had to develop internal skills and knowledge and capabilities to manage this new stack that was, again, moving and changing a lot.
So upskilling and reskilling internally was also very important.
So I consider that a big part of future enterprise SaaS companies is how can you manage your talent internally to really keep up with the AI innovation?
So all of these decisions, yeah, could slow down our innovation.
But looking back at how we designed AI features, we were focused on high impact, less complexity, right?
There are a lot of areas, when you think about what we have from a data perspective, to leverage the early no-brainer capabilities of AI,
like summarization, rephrasing, text extraction.
This is all before, you know, agentic AI and reasoning and all of that.
There are a lot of easy, low-hanging fruits that you can achieve
that will really get you to that innovation early on
without sacrificing a lot of deep technical changes to your platform.
And that's the path we're following: really iterative.
Focus on a small set of features, but with high impact, versus just going big bang
and trying agentic AI across the whole system and seeing what sticks.
You know, one thing that I think about when Janice asked a question about global distribution and talent systems working across markets: language is going to be a really critical capability within any tool that is being deployed.
And this is a hot-button topic for me, Janice knows this. How do you see rapid progress in AI around multi-language models and support for languages in different regions changing what's possible in this space?
Yeah, I think it's going to break a lot of silos and a lot of boundaries that were hard to break before.
The evolution of language support in LLMs is very impressive.
I think the biggest leader in the space is Google.
They have been building translation systems and AI intelligence in the past.
So now they are putting all of that into LLMs and generative AI capabilities.
But even early on, before Google or alongside Google, most of the models out there have been able to train on
corpora of data coming from different languages, and also make sure that these could be fairly
translated from one language to another. So when I think about this, I really look at how these
capabilities are turning things like performance management into a co-pilot experience. So you'll have
AI capabilities capable of really helping you polish your messages and your contribution in any
language that is supported, and those are becoming many. And we at Betterworks are no exception.
We have a lot of customers globally that cover most of the languages out there.
So being able to offer these capabilities in their native languages was important.
The other thing is how, in these global enterprise businesses, sometimes they have management structures
that could be across countries, across regions, and across languages.
With AI capabilities, what we have found very interesting is you can manage a whole conversation
or like a 360 review or a feedback conversation between an individual and their manager
in two different languages, and they will both be able to contribute and write in their native
tongues while having the best outcome and the best kind of content they can produce. Why? Because AI can take that
and translate it and polish it in your respective language. So looking at the feedback,
someone speaking Spanish and someone speaking English can both write in their native language,
and they can both read the other person's contribution in the language they need. So that for us was a
game changer at some point, because we can really do more with this. And if you are breaking these
kind of frontiers and barriers for language, I think you can achieve better development for talent
and skills. You can achieve better mentoring and coaching and you can achieve better goals for the business.
And that's the whole idea is like how can you take performance businesses to the next level,
especially when they are distributed globally and also from a language perspective.
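The bilingual exchange Maher describes, each person writing and reading in their own language, reduces to routing every message through a translation step. In the sketch below, a toy dictionary stands in for an LLM-backed translation call, and the phrases and language codes are invented for illustration:

```python
# Toy lookup standing in for an LLM-backed translation call.
TRANSLATIONS = {
    ("es", "en"): {"buen trabajo": "good work"},
    ("en", "es"): {"good work": "buen trabajo"},
}

def translate(text, src, dst):
    if src == dst:
        return text
    # Fall back to the original text if no translation is known.
    return TRANSLATIONS.get((src, dst), {}).get(text.lower(), text)

def exchange_feedback(message, author_lang, reader_lang):
    # The author writes natively; the reader sees it in their own language.
    return translate(message, author_lang, reader_lang)
```

The same routing works in both directions, so a Spanish-speaking report and an English-speaking manager each read the other's contribution in their own language.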
I love some of those examples. So thank you for kind of going a little bit deeper.
For other practitioners out there similar to yourself,
how should they lead AI adoption inside of enterprise SaaS?
And what lessons or advice would you specifically give them about
avoiding common technical, ethical, or even financial potential pitfalls?
Yeah, I mean, the first thing is obviously upskilling, reskilling,
whatever development you need to have yourself as a leader.
I think you need to get a certain degree of understanding of this new technology,
how it works, how it's impacting businesses, how it's impacting productivity.
That's one thing: really get yourself to a certain level of knowledge
that goes well with your title, level, and impact on the business.
Second is do not get overwhelmed.
I think AI is moving very fast.
Like, today, if I want to look for an AI development tool
that I want my software engineers to adopt, there are hundreds.
Picking the right one out of the haystack here is very challenging.
So it's very important to not be overwhelmed with the scope and all the movement here,
but stay focused and stay informed from a high-level perspective.
You know, we're talking about AI adoption here:
create an environment where adoption can be controlled from an ethical and legal perspective,
obviously, but that is also a little bit more flexible, to get people to choose
and bring the best kinds of solutions to the problems and the situations.
As an example, that's exactly what I did with the Betterworks engineering team.
I created something called the AI engineering lab, and I opened the space for everyone to go and explore the tools.
It was so hard for me as a technology leader to just come and dictate, okay, we're going to adopt this AI technology because it's the best based on an article.
Trust me, the next day there will be another article showing different benchmarks.
It will be the same benchmarks with a new leader.
So what works best is creating that kind of environment where there is a little bit more flexibility while controls are in
place, and then over time have different experiences, have different people interact, and find out
what works better, and start going from the big spectrum and pool of options to just the
final list of the ones that worked really well based on feedback. I did a lot of surveys internally
to report the AI impact on the work and on performance, and also on the learning and on the
self-rating of your own performance and also your own growth. Do you think AI is
going to take over your job? Or does AI make you not think too much about the problems
and delegate more to AI? So all these are impactful things that leaders should be
aware of and find the right dynamics for. Another thing is follow frameworks. I mean, this is now,
I think, old enough, with the pace it's going, that there are some frameworks that you can follow
to achieve some of these. Like really, some of the things I've been talking about today,
I logged them, or I wrote them down, into what I called the AI maturity framework.
It's inspired by other frameworks that existed before,
and without those, I couldn't have gotten to it.
Another framework I came up with is a flywheel framework that really focuses on planning how we build AI,
building it and running it, and then going and optimizing.
Optimization of AI has been a big journey for me, because it's something I go back to again and again;
there was no end point of building AI.
There was no finish line.
Everything moves really fast.
So the fact that I was not overwhelmed, that I was always trying to be informed and work with my team to achieve what is next for us, and that I kept the door open to better innovation and optimization, has been very successful.
So these are the known ones.
The last one is maybe financial. I think for technology leaders it was kind of a game changer for me, because I managed costs and budgets from a different perspective for AI.
There was a huge risk of AI taking too much money without achieving ROI.
So turning into someone who cares more about the financial aspect,
looking at costs on a frequent basis,
and optimizing the infrastructure and the usage
and the techniques to cut costs to a minimum
was a huge success as well.
I mean, something to be aware of,
I have attended a lot of webinars and meetings in person
that were just talking about how CTOs and VPs of engineering
need to take some time to work as a CFO if needed
just so they can control the cost of their infrastructure
and their AI stack.
Again, you don't want to be in a position where you are asked by your leadership team and
finance team why you're spending this much money on AI and why the ROI is not there.
So being proactively aware of these dynamics and cutting costs as early as possible is something that I
really suggest leaders look into.
I love that answer, Maher.
It really describes leadership through rapid organizational change and environmental change.
And you brought up a lot of good things, especially being comfortable with the ambiguity of a changing
landscape, I'm sure that will intrigue people to want to talk to you further. So for those of us who are
listening and watching online and want to learn more about, first of all, Betterworks and the topics
that you've discussed, or connect with you, where should they go for more? Yeah, for Betterworks, our website,
betterworks.com. We also have a LinkedIn page where we share a lot about these things and more.
For me personally, LinkedIn as well. I have a Medium kind of page where I drop some of these
thoughts and frameworks, and I have a YouTube channel where I also share some of my
public speaking, which I hope people in the audience here will find very useful. Beyond these, I mean,
there are a lot of content and resources that I use personally to develop my own understanding of
AI, like frameworks and obviously medium and YouTube and all of these, but there are also a lot of
sources on the LMS platforms like LinkedIn Learning, Coursera, and others. I think it's time for leaders to just
hop on the learning journey and ride this with their teams.
When I was looking back at how I was able to do this
and how I was able to work with my team on this,
I thought about it, and usually what happens is
I moved on from learning and growth as just setting the vision for the team
and having them go and understand everything and report,
to just being in it together and hopping on this together.
I do the research,
I try things on my own, I bring some ideas, and we all collaborate.
I don't think there is someone best
positioned to lead this. So we're all in it together. So really, the big advice here for leaders is to just
go find the resources that work best for you based on your way of work and your time management.
I do snackable learning content sometimes. I drive my kids to school and all the way back.
It's 15 minutes, so I put on a webinar or podcast and I listen. So yeah, there are a lot of
materials. For the things that are specific to what I do, there are communities out there,
AI communities are kind of booming.
The MLOps community, the AI Accelerator Institute, AI Realized.
All these are things I follow and just get the content I need on a snackable format
that helps me develop my understanding and not be overwhelmed with everything going on.
I love it.
Thank you so much for being on the program today.
And Janice, that wraps another episode of Data Insights.
Thanks so much for the collaboration.
Fantastic interview, both of you.
Thank you, Alison. Thank you for having me.
Yeah, thank you so much. Thank you for having me.
It was a great pleasure, and these are topics I care about a lot, and I really enjoy talking about.
So thank you for kind of opening up the conversation and going deep into these.
I appreciate it.
Thanks for joining Tech Arena.
Subscribe and engage at our website, techarena.ai.
All content is copyright by Tech Arena.
