The Derivative - The State of AI, and AI in Alternative Investments with Mohammad Rasouli of AIx2
Episode Date: January 30, 2025
In this episode, we dive deep into the current state of AI, getting to its transformative impact on the alternative investments landscape. Our guest, Mohammad Rasouli, is a renowned researcher at Stanford University and the founder and CEO of AIx2, a leading AI solutions provider for private equity, venture capital, and hedge funds. Mohammad shares his extensive expertise on how AI is revolutionizing the way alternative investment firms operate, from streamlining due diligence and deal sourcing to enhancing portfolio monitoring and reporting. We delve into the key challenges and best practices in AI adoption, as well as the potential implications for the future of finance jobs and the broader economy. Whether you're a seasoned alternative investment professional or simply curious about the intersection of AI and finance, this episode packs an artificial punch. So get used to "sit back and relax," because the rapidly evolving landscape of intelligent investing will be doing more than ever - SEND IT!
Chapters:
00:00-01:14 = Intro
01:15-11:31 = AI: Helping to reduce market friction & the shelf life of a constantly changing landscape
11:32-23:46 = Predictive vs. Generative AI / Transforming finance
23:47-31:58 = Fear of the ultimate AI, AGI, & overcoming challenges
31:59-46:00 = Job conqueror or tool: Understanding the fundamentals of AI and adopting them
46:01-50:10 = Regulations & Competition - what could go wrong?
50:11-59:28 = Compute and data are the drivers of AI
Follow along with Mohammad on LinkedIn and visit AIx2.ai for more information! Don't forget to subscribe to The Derivative, follow us on Twitter at @rcmAlts and our host Jeff at @AttainCap2, or LinkedIn, and Facebook, and sign up for our blog digest.
Disclaimer: This podcast is provided for informational purposes only and should not be relied upon as legal, business, or tax advice.
All opinions expressed by podcast participants are solely their own opinions and do not necessarily reflect the opinions of RCM Alternatives, their affiliates, or companies featured. Due to industry regulations, participants on this podcast are instructed not to make specific trade recommendations, nor reference past or potential profits. And listeners are reminded that managed futures, commodity trading, and other alternative investments are complex and carry a risk of substantial losses. As such, they are not suitable for all investors. For more information, visit www.rcmalternatives.com/disclaimer
Transcript
The core business of investment is finding those opportunities, right?
The rest of it is basically performance around it.
The second thing is that if you look at AI and you match it to what it can do for investment,
it's perfectly matched to this use case asset because what is AI?
AI is gathering a lot of information, finding a pattern, then running that pattern of success
against another set of data,
and in this way finding the needle in the haystack.
Welcome to The Derivative by RCM Alternatives.
Send it.
Hi, my name is Mohammad Rasouli, and I'm a researcher at Stanford and CEO and founder of AIx2.
I'm here to talk about AI in finance and what that might mean for you coming up here on the Derivative.
All right, Mohamed, how are you?
Thanks, Jeff.
It's two degrees here in Chicago.
We're recording on the 20th.
It looks nice and sunny there, although I'm assuming that's a virtual background.
It is, but we have good weather here in Palo Alto.
Yeah, it's the same.
What am I looking at?
That's Stanford?
It's the Stanford campus, yes.
So undergrad at Michigan, postgrad at Stanford?
I was doing my PhD at Michigan. My PhD was a dual degree between electrical and computer engineering and economics.
I worked with a thesis committee of Stanford, Harvard, MIT, and University of Michigan,
including Nobel Prize winners.
And my topic of research was how AI can help reduce friction in the marketplaces,
including the investment market, where there's information frictions
and operational frictions, especially when it gets to the private market,
where these frictions are more significant.
And the question was how AI and algorithms can reduce these frictions, resulting in better allocation of resources,
more prosperity, and increased social welfare.
And then I moved to Stanford and continued that research in management science and engineering.
And I stayed as a visiting scholar and researcher at Stanford, and I worked at McKinsey,
as I said, in the New York office, where I was managing a transformation for financial institutes,
particularly private equity, and did due diligence projects with them. And I started AIx2 here in the Bay Area, which is AI for alternative investment.
So hence AI multiplied by two.
And the idea was to use the real new power of AI to address the needs in the industry
for investment, including finding good opportunities to invest in, as well as having a complete
understanding of the deal, from due diligence all the way to the
final reports, and reducing that friction in the market.
That's what we have been doing at AIx2 with our set of clients.
And there is so much happening with AI and so much happening with AI in this space.
Love it.
I'll go backwards for a minute.
So you were working for McKinsey in New York.
What is that like? I don't know how many former McKinsey people we've had on the pod.
I ask all the Goldman Sachs alums if it really is the giant squid sucking the life out of
the financial industry; Matt Taibbi coined that term. So McKinsey's kind of got, what
would you say, sorry if I'm bashing your
former employer, but it's got a bit of a negative connotation out there. What was it like from
the inside? Did you like your time?
I really enjoyed my time at McKinsey. I love that firm.
Sometimes people ask me why I did it, because I came from a data science background, from an engineering perspective. And I worked at Microsoft and I was set for being in the AI research labs.
And I consciously decided to go to McKinsey because I wanted to have that exposure to that world and that experience,
which was super interesting for me for the short time that I wanted to invest there.
And I got to work on topics that I wanted.
I worked on these topics across the US, serving from the headquarters in New York, but also I went
and served European clients and Middle Eastern clients as well. So for me, it was a perfect ride.
It's a huge firm with a diverse set of groups and topics, and the experience
really depends on what you work on, what you do, and what you like. It's definitely a lot of fast-paced work, a lot of work that needs to be
done if you want to be successful there. And I appreciated that. But what I would say is that
the set of people there are really interesting. And that's what really matters;
they come from very different backgrounds. I don't know any other firm like it. I have been in academia, I have been in the tech industry, I have been in the startup
scene, I have been at McKinsey, and I have been in finance. I think what McKinsey uniquely has
is not only global presence but also a very diverse set of people coming from engineering, MBAs,
law, military, art, and everything, and they work together to solve interesting problems.
And how does that, not just McKinsey, but any consulting firm, especially in AI, right?
Hey, pay us a couple million dollars and we'll tell you what the AI landscape looks like.
Six months later, it could be a totally different landscape, right?
How did that work out?
Do you think what you put forward in some of these reports was... how long was it valid for? Like, what's the shelf life of that research?
So that's a very good question. The underlying assumption you rolled out here is very valid:
AI is a changing scene, right? And anyone who says, I can predict AI for the next five years,
should have access to a crystal ball of future prediction, which I don't think anyone has. So it's kind of an unfair ask for anyone
to predict for a long,
even like two, three years ahead,
what AI can look like, right?
And just look what happened to us in the last two years,
like how many wrong bets existed
among the experts in the field, right?
Now, this doesn't mean that someone that wants to start now
shouldn't talk to experts, right?
You want to take your bets, right?
It's speculative to some extent, because we don't know where the technology goes.
And it can go all the way down to what is causing that black box approach to AI.
But that's the fundamentals.
It's to some extent, it doesn't mean that you should not talk to experts.
We are in finance, right?
There is speculation.
We are all used to probability and statistics. We understand the fundamentals and we talk to experts.
Now, two things here. Number one, when you talk to experts, depending on the use case,
where you are and what you want to do, that expert should have the capability to answer that.
And what I can say as a common factor for AI experts to go to, no matter which industry you are in or at what level: make sure they are, number one, an expert in the engineering aspect of AI.
Now, you may want to talk to people who are also experts in finance domain or organizational transformation topics or other topics or HR topics.
But don't miss out on the fundamental requirement of
being deeply expert in AI at the engineering level.
What do you mean by that on the engineering level, the actual chipset and all that?
No, no, no.
I mean the algorithms, the AI algorithms, right?
That's table stakes.
Like people who have written codes, done data science, understand neural networks, understand
fundamentals of this technology, because that intuition is helping them predict what can happen.
What are the use cases?
What's the value?
What is the data required?
What are the trends in the future?
Now, no one can, as I said, predict perfectly, but that's table stakes for being able
to talk about AI as an expert.
And, back to your question, about the time I was at McKinsey:
I was glad that on the project I worked on, which was AI transformation for private equity
funds, at least I came from a PhD in engineering. So I had that aspect of it, and it showed
itself in how we designed the framework for AI transformation for private equity, with an angle and understanding of the engineering part.
And that results in more reliable, longer-lasting reports, as opposed to people I see,
especially sometimes in the consulting industry, who talk about AI and have never really
done anything concrete with AI: written code, done research, or otherwise.
But I can see that both ways, right?
I don't need to understand.
We could argue this.
I don't need to understand the self-driving algorithm in Tesla to want to buy Tesla stock or something like that.
Right. So or I don't need to understand the scheduling assistant algorithm
to use that tech.
So it seems like, right, and I've struggled with this personally.
Do I go down a deep hole with AI and learn all these tools
and learn how to code some of them and do it?
Or just wait for Microsoft's Copilot or Google's new tools, right?
Do I just wait for that to integrate into my life
seamlessly instead of learning it myself? So I can see both sides of that.
So I agree with you, but there's two differences here. First, when I say someone should know about
AI, I mean, at the expert level, you go to ask your questions, right? That expert knows AI, right?
And second, the use case I'm talking about
is not about investment in AI companies
and which one is going up or down.
That's some level of understanding.
I'm talking about a deeper understanding
on using AI for your organization,
like building versus buying,
what use cases are deliverable.
Take the keynotes I give at conferences;
I gave a keynote at Finovate Fall, for example,
and SuperReturn and others.
One framework that I update every six months is where are we in the scope of AI deliverables?
It's a framework we can take and compare use cases against. Is it going to be a good use case
right now at high quality, high accuracy, without a lot of human in the loop? Or is it a use case that AI cannot yet address
at major performance without a human in the loop, right?
For example, the video analysis, the audio chatbots,
the writing documents, analysis, financials, insight extraction.
These are different functions, and we should understand where AI is
and for which of them it can do a better job. If you want to go into that deeper question
of how should I use AI in my daily routine, especially in organizations, should I invest
in buying versus building, how much data cost will be imposed? So the expert responding to
those questions needs to understand AI.
But at different levels;
you don't need to go all the way to coding and such
if you just want to answer investment questions.
And what were the private equity firms
or what are they doing now?
Are they saying, I want to use AI to identify new investments?
I want to use AI to streamline my paperwork and borrowing docs and whatnot, all the documentation? All of the above?
What were they, and what are they now, looking at to improve their operations?
Yes, yes. Let's take a look back at the history of the last 10 years. It's
a very interesting journey that finance, especially private investment, has gone through. The idea of
using AI or algorithms for finance and trading is not new. It's existed for the last 20 years or even
more, right?
Listeners following managed futures funds on this podcast know that well, right? They were doing it since the 80s.
Yes, exactly.
And as a job description, like a solid job role on Wall Street as a quantitative researcher,
it's been there for 20-plus years.
And it's a very mature field.
I have friends from my PhD who went to Wall Street and work for hedge funds and others.
So that trend existed.
And I would say like eight years ago, eight to 10 years ago, people in the
private investment space, like private equity, thought, okay, maybe we can use AI in the
same way that hedge funds use it to predict stocks. We can predict the value of an asset to do an M&A or
to invest in.
And just back up, sorry. I keep calling it private equity; you
kind of refer to it as private investment. So you're including private equity, VC, private
bank investments, kind of all these different things that are outside the exchange-traded
markets?
Exactly, exactly. Even real estate and others, right?
Yeah. Okay.
So the idea came that, okay,
can we predict the value of an asset with AI
across all of these different asset classes that you mentioned? Now, there were some good
results. There was a lot of research going on. There were some bigger firms who were using these
things. And almost all the megafunds were using this before ChatGPT was hot, like EQT with Motherbrain,
KKR, and other general partners. All of them were using it, particularly for
the single use case that I mentioned. Why that? Because if you look at that single use case, it's
first of all the bread and butter and core business for these funds. The core business
of investment is finding those opportunities. The rest of it is basically performance around it.
The second thing is that if you look
at AI and you match it to what it can do for investment, it's perfectly matched to this use case,
because what is AI? AI is gathering a lot of information, finding a pattern, then running that
pattern of success against another set of data, and in this way finding the needle in the
haystack. That's
exactly what an investor does. Investor looks at the history or whatever, forms a thesis,
forms a pattern. This is how I'm going to invest. Then he goes out, he collects a lot of data about
different assets that exist. He pattern matches what he thought is important for investment or
successful to those new data and says, okay, these are the top five, top 10 I want to do the diligence on or invest in. And this is, again, finding the needle in the
haystack for him. So if you look at these two, AI, what AI does at the core and what an investor
does at the core, just perfectly matched each other, right? So that's why it was very natural
to have that use case of finding good opportunities. And we make this.
Would you say... we talk a lot with different managers on here;
to me, that's more, I'm replacing 10,000 workers that are scouring documents and looking for these opportunities.
So it's not necessarily that I'm relying on the AI
for its crazy insight that I wouldn't have thought of.
It's more doing a lot of work in a short period of time.
We will get there.
Everything I said was history.
We are now getting to today.
So that was up until ChatGPT, which became a U-turn in the industry, right?
Before ChatGPT, it was mostly focused on finding opportunities.
But once ChatGPT and OpenAI commercialized the set of natural language processing algorithms at mass and made it
available to everyone at low cost, suddenly there was a U-turn because people realized exactly what
you said, Jeff, that, hey, I can have this beautiful device, this beautiful technology
to process my documents, find out any issues in a contract, write a new investment memo, write a marketing email, write my LP query,
monitor my performance of my portfolio companies through the documents they sent to me.
A lot of things suddenly became available at a low cost that were not at the core.
And NLPs before ChatGPT were one of the more complicated, complex and expensive algorithms,
right? So commercializing that was definitely a U-turn. So what happened after that was that not only did
the big funds, the big firms that used to have AI teams using predictive AI for finding assets,
suddenly also try to use generative AI for this day-to-day work improvement, but also a lot of
smaller funds who didn't have the luxury of AI teams found it accessible and
affordable to use these generative AI tools, based on ChatGPT directly
or derivatives of ChatGPT, to help them with document writing and many more use cases.
I mentioned a few: due diligence, market research, portfolio monitoring, exit
preparation, document writing, LP queries, LP reporting, investment memos, all of these things. And you can
see it's present across the investment cycle, all the way from forming a thesis to executing, exiting, and reporting
to LPs.
Everything can be impacted by this generative technology.
So you're kind of separating between predictive AI and generative AI?
So predictive was kind of that, hey, I'm replacing a room of 200 people, and this is
just going to do the work more quickly, versus generative was I'm coming up with new ideas
based on it?
I would say it's predictive and generative.
The predictive one is more on alpha finding,
is finding opportunities,
finding good assets to invest in,
predicting the performance of an asset,
predicting the chance of fundraising from an LP,
predicting the chance of hiring someone,
finding someone who can join your portfolio company.
Finding something and predicting its performance, be it an asset, an LP, or an individual: that's predictive.
Doing things day to day better, like writing reports, analyzing reports, analyzing unstructured
data, reading, providing market research: that's generative AI.
That's the power of ChatGPT and generative AI.
So it's beautiful that these two use cases...
Almost backwards at the moment.
It's beautiful that these two categories of use cases
match perfectly to these two technologies of AI.
And then I almost want to say before that, right?
So you said in the last 10 years,
20 years ago, they were already using predictive analysis
and computers based on numbers, on just prices
and data, right? And so the leap was to use natural language processing. Like, we've
done it with numbers for years, now we're going to do it with words, basically.
Yes, that's right. And unstructured data, yes. And you know, it's very fundamental. This is an interesting
observation. Now we are relying on the power of language,
and language as a model of communication,
and everything we can do with language.
But there is a lot of research going on
that maybe the fundamental AGI we want to make,
the great AI agent we want to make,
it should not be built on language.
Maybe it should be built on audio or video sensors, right?
You know, like 3D.
There's a good amount of research going on at Stanford where they think,
and there are companies that started off of that,
that they're going to replace ChatGPT
because they have fundamental agents
based on a different modality, which is more powerful
and more complete for addressing things.
Like, for example, language is limited for analytics and math and analysis, right?
You can do function calling, this way and that way, to add these functions to ChatGPT.
But at the core, language models don't have that.
If you ask what is 2 x 2, ChatGPT doesn't do the math in the background unless it does function calling.
It just looks in the history,
like how many times it has seen two times two equals four, right?
Right, and it might say some, yeah.
And it might say something strange or hallucinate.
So yeah, that's an observation.
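A minimal sketch of the function-calling pattern Mohammad describes. The names here (`model_reply`, `multiply`, the `TOOLS` registry) are hypothetical stand-ins, not any real LLM API; the point is only the loop: the model emits a structured tool request, and the host computes the answer deterministically instead of the model pattern-matching "2 x 2" from training text.

```python
# Hypothetical sketch of function calling; `model_reply` is a pretend LLM,
# not a real API. Arithmetic is delegated to a host-side tool rather than
# pattern-matched from training data.

def model_reply(prompt: str) -> dict:
    """Pretend LLM: emits a structured tool call for math it can't do natively."""
    if "2 x 2" in prompt:
        return {"tool_call": {"name": "multiply", "args": {"a": 2, "b": 2}}}
    return {"text": "(some generated text)"}

TOOLS = {"multiply": lambda a, b: a * b}  # host-side tool registry

def answer(prompt: str) -> str:
    reply = model_reply(prompt)
    call = reply.get("tool_call")
    if call:  # the host executes the tool and returns a grounded result
        result = TOOLS[call["name"]](**call["args"])
        return str(result)
    return reply["text"]

print(answer("What is 2 x 2?"))  # -> 4
```

Real tool-use systems follow the same host-executes-the-call loop, with the model trained to emit the structured request.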
Real quick, because you mentioned hallucinate.
When I use some of the tools, I say don't hallucinate.
That doesn't seem to work
right? It still hallucinates. It'll be like, well, the market was down because the Fed surprise-cut rates or something. And I'm like, no, they didn't, what are you talking about?
So I encourage people to always get the source from the experts, as I said. But there's
a good amount of interviews; especially people at Anthropic are very open and talk all about the type of training they do with LLMs,
right from the top, before they release Claude.
And do watch their podcasts and others;
they talk about it actively.
And they say things like, at a top level,
they should really turn the knob on how much the machine turns out to be polite
and accepting versus rude. So it's
literally knobs that are turned on the language models, right, tuned better and better
at that high level.
And see, isn't that... that scares me, right? That they have knobs
they're controlling at the high level, which, by the time it gets down to the people using the
tools... which leads into the whole, like, how do we control this stuff? Well, let's get there
later.
But interesting, you said they're going to use different modalities, visual,
right? We can know, right, if I'm reading something, I'm reading a text, versus I'm telling it to you
over video, there's a whole different interaction, right? You can see my facial expressions, you might
think, oh, he's lying, or he's joking, right? Like, the
text might miss the humor or something. So that's interesting, that the video could in theory pick
up on those... video and audio pick up on those intonations and all of those non-text clues,
right?
No, perfect. Exactly that. Exactly that. And language is a construct of many, many years of human development, right?
We have come to these protocols of communication called language.
And it has a lot of structure around it.
And it allows for pattern matching, pattern recognition.
That is the fundamental.
Like if you go back to the history of ChatGPT, how they started and what they did, like OpenAI, I mean,
OpenAI fundamentally, and do watch some of Sam Altman's
earlier comments and podcast interviews
that talk about the earlier history.
It was not clear that language was the modality to focus on necessarily
for the goal of making that superintelligent agent, the AI agent, right?
So they had to navigate a bit. Reinforcement learning was one idea, there were
other ideas, and eventually language picked up, because it's this modality and this protocol
that humans have worked many, many years on to perfect as a form of communication.
Plus, there's a lot of data around it, like everything on the internet,
and you can train a machine on that massive amount of data.
So the thesis was: if you have a lot of data, more data, more compute,
bigger training, deeper and deeper neural networks, bigger and bigger models,
if you combine them together, you can crack the highly intelligent AI solution.
That was the fundamental thesis.
Now, where to apply it?
It could be language, right?
Because language has all of this availability, and it's to some extent structured, so perhaps
easier to decode and predict.
So you've mentioned a few times this super agent, what are you calling it, the mother of all AI agents. Is that, I mean, is that a goal or is that a fear?
That's the topic of AGI. I'm sure that people...
Yeah, maybe define that quickly for my friend, George.
Yeah, so general-purpose AI,
like AGI, is something that OpenAI actually talks about,
and do we get to that or not?
There's a lot of debate around
what AGI is and whether such a thing even exists.
And I don't want to get into that debate.
We can always debate about what it is.
Is it a clear milestone?
Is it consciousness of the machine?
Is it what it is?
But without getting into all that extra conversation,
the idea is roughly to have an AI
that can do many, many tasks, right?
That can be almost at the level of an advanced human, right?
That's the idea of AGI, right?
How do we get...
And AGI is artificial intelligence, right? Why is the G
in there?
The general part is basically,
the idea is that right now ChatGPT
is still not very general.
There are still a lot of limits in how
you work with it and so on. It's not
a human, an advanced human that can do a lot of tasks,
right? So getting
to that general AI is the goal
set for the AI community by OpenAI and other companies right now.
Where one bot could be, tell me how to do a million different things?
Exactly, right.
Versus now, it seems like I
need different bots that are kind of pre-programmed with different parameters to do specific things.
It's limited in scope.
Like you can have an AI very scoped and trained for just writing your investment memo, just predicting the price of a stock, just doing this.
And you cannot give them high level tasks.
Hey, should I invest in this company or not?
Go find all the information, all the sources, write the documents, manage the process, right? It's the idea of getting to that high-level AI, like an advanced human who can do everything.
When I talk to finance people, I like to use this analogy, right?
If you hire an analyst right off the bat from undergrad, he's not a general-purpose expert yet in your field.
But if he stays in your field for 20 years and becomes very
advanced in thinking and has seen other patterns, now, with general purpose, he can think about
investment, he can work with LPs, he can write documents, he can predict patterns. So that's the
transformation you are seeing in AI. If you look at the letter that was released for o1, GPT-o1,
the main argument, the beautiful and elegant argument,
was that the previous AI, the previous ChatGPTs,
were almost an undergrad who was able to collect data and read.
o1 is a PhD.
What's the difference?
A PhD can analyze and think critically
and think creatively and write, have a thesis,
and reason, fundamentally reason, about
a complex task, right?
Now we have got to that level.
And it was a different technological approach from previous ChatGPT improvements.
It was reinforcement learning and a lot of chain of thought and so on.
So that is the progress and that progress continues, right?
Now, being a more and more advanced person, at the level of, let's say, Nobel Prize winners in
every field: can we have an AI at that level, who can think at that level and bring innovation even
to science and research and technology? You tell that AI agent, that Nobel Prize winner
AI agent, go find a new cure for, let's say, cancer, and it designs experiments, it designs
everything, and it thinks about it, and it reads the results. That's AGI. That's what AGI means.
But how do they make that
jump, right? Because if you're learning on the past, how can you create the future, right? Like, it seems...
If I come back to, say, I'm training on a data set of prices in crude oil, and I create a model that's going to predict crude oil prices,
and then the next 10 years look nothing like my data set
that I trained on, right?
And I have issues that I didn't expect, whatever.
They found oil in Antarctica
and we have more than we ever knew we needed.
Whatever the case might be, right?
I don't have the full future picture.
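Jeff's concern can be illustrated with a toy sketch, using synthetic numbers (not real crude oil data) and a deliberately naive model: a forecaster that extrapolates the average historical step does fine within the regime it saw, and is far off after a shock it never saw.

```python
import random

# Synthetic "price" history: a gentle uptrend with noise (illustrative only).
random.seed(1)
train = [50 + 0.1 * t + random.gauss(0, 1) for t in range(100)]

# Naive model: learn the average step from history and extrapolate it forward.
mean_step = sum(b - a for a, b in zip(train, train[1:])) / (len(train) - 1)

def forecast(last_price: float, steps: int) -> float:
    """Predict within the pattern seen: continue the historical drift."""
    return last_price + mean_step * steps

in_regime_error = abs(forecast(train[-2], 1) - train[-1])  # noise-sized error
shock_price = train[-1] / 2                                # unseen regime change
shock_error = abs(forecast(train[-1], 1) - shock_price)    # error is huge
print(in_regime_error < shock_error)  # -> True
```

The model is "correct" in-sample because the future resembled the past; the moment the data-generating process changes, extrapolating old patterns fails.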
What are the AI community's thoughts on that?
Yeah, that's a very good question. The reflection of that in finance is exactly what you said:
events that are less frequent in the history. Let's say there's suddenly a revolution
somewhere in the world, right? Yeah. And the impact of that on the stock market, right?
Can the AI predict that and so on? The short answer is exactly like you said. AI is
the technology of predicting within the data, within the patterns it has seen. So it's not
for counterfactual analysis, right? Having said that, there are disciplines for counterfactual
analysis and understanding, right? Through utility-based modeling or model-based AI or others, we can
go out of the sample set and have a broader view of the world.
And there are a lot of techniques in this, including reinforcement learning and synthetic
data.
The idea of reinforcement learning, people probably have heard about it; it broke into the news
through AlphaGo, right?
What was AlphaGo? AlphaGo was, hey, we have an AI machine that is trained to play Go, and there's a lot of data.
But the real breakthrough was when they said, you know what,
let's do something different.
Let's let the machine play against itself, and every time it plays against itself, it finds
a new world it has never had before, new data, right?
A new top-of-the-line Go, a new strategy, a new position.
So that's creating synthetic data?
The machine created synthetic data in reinforcement learning and fed it back.
And this way it explored many, many worlds that didn't even exist, and became so powerful.
So I'm saying that, yes, your comment is correct, but the AI
community is also trying different techniques, as I mentioned, to try to address that. And that's an
interesting observation, because there was the NeurIPS conference this year. NeurIPS is the
number one conference in AI. It's a blood battle to get published there; I publish my
papers there, I know. It's so competitive to submit
to that conference and get accepted.
Congrats.
Thank you. One main
conversation this year was: are we running out of data as the fuel for AI progress? Yeah.
Look at the last 20 years on AI.
The main driver of AI has been this single thesis of more data into bigger and bigger neural networks.
That's been the major prediction of AI.
That's been the major fuel for AI and major approach for AI development.
What is OpenAI?
A massive amount of language data on large, large-scale models, with tuning
and training. Now, the conversation was: are we running out of data? Because the data in
this world is limited, right? Have we trained on everything that we could find on the internet
and Wikipedia and everything?
Legally and otherwise, right? So, right, that's all another discussion, probably
for another podcast: who owns that data to train
on. Yeah. So yeah, that's a good conversation. I tried to stay away from this one particularly,
but the conversation was that are we out of data in this world, right? But then there's this
good argument that yes, the AI community addresses this problem, and we have done it before.
And there's so many ways around it, synthetic data, reinforcement learning, other types of AI solutions.
So I don't think AI is going to stop because of lack of data, but I think it's a major question for AI scientists to think about.
And part of that is what you said about counterfactual data and the worlds that we have not seen.
Yeah, that breaks my brain thinking about it, right? How can you come up? But how could anyone, how could a human come up with something new with only their past experiences too?
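The self-play idea described above can be sketched in a few lines. This is a hypothetical toy, with tic-tac-toe standing in for Go and random moves standing in for a learned policy: even starting from zero recorded games, self-play keeps generating positions that were never in any human dataset.

```python
import random

# The eight winning lines on a 3x3 board.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] != "." and board[i] == board[j] == board[k]:
            return board[i]
    return None

def self_play_game(rng):
    """Play one game of random self-play, recording every position visited."""
    board, player, positions = ["."] * 9, "X", []
    while winner(board) is None:
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            break  # draw
        board[rng.choice(moves)] = player
        positions.append("".join(board))
        player = "O" if player == "X" else "X"
    return positions

rng = random.Random(0)
seen = set()
for _ in range(500):
    seen.update(self_play_game(rng))
# Every position in `seen` is "synthetic data" the machine generated for
# itself; none of it came from a human-recorded game.
print(len(seen))
```

A real system like AlphaGo Zero replaces the random policy with a neural network and feeds these self-generated positions back into training, which is the loop Mohammad describes.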
So let's talk a little bit about what do you see in finance in particular, alternative investments, private investments in terms of are we going to get rid of all the junior employees?
Is there just one big machine to run everything or is it just add on?
Or is it just going to be an add-on, tools that help? How is it going to affect the jobs in finance?
So I think, in this question you pose, there's nothing particular to finance that separates it from the broader question about the labor market and how AI will impact it.
I do think that this technology is transformative in the way we work, right? And what it means is, it's just like previous technology shifts. The technology shifts have
been so fast in recent years that all of us have lived through one, if not multiple of them: from PCs, personal computers, to cell phones, to broadband internet
being available everywhere. All of these technology shifts, we have lived through them, right?
In our short life spans, we have seen them. And in the longer life span of humanity,
We have the invention of fire, wheel, steam engine, electricity.
What was that?
Jimmy Carter, they were saying he was the first president born in a hospital.
I was like, that was crazy.
The first president born in a hospital was still alive here today
where we have generative AI.
So I think that all of us have a good sense of this question
by looking at our history of humanity
and our direct experiences that we have had.
And it's just going to be transformative in the way that we work.
I don't think it's going to replace humans fully,
but I think it's a massive change in the way humans work.
I was giving a lecture last quarter at Wharton for their MBA class, and another one at Columbia Business School.
I taught those classes, and I gave a lecture at the FinTech Club at Stern as well.
And especially in those classes, you talk to like analysts, associates and VPs of those funds who are now in the MBA programs.
And they all ask this question,
like, what should I do to make sure I'm ready for this change?
And is it going to replace me?
Am I going to be out of job?
Because if you're a more senior person,
you expect AI to take time to get to your role,
if you're a senior CIO.
And the answer is that you definitely need a skill shift.
You definitely need to ramp up, and you definitely need to think about how it can work.
And there will be eventually solutions for finance, right?
Be it AIX2, be it others, but eventually we will figure out that solution, right?
And you should be able to work with it, right?
So does that change your mindset
of you don't need to know how to do something?
You need to know how to ask queries, essentially?
You need to know how to use
the tool. It can be asking
queries, it can be
managing the work stream in the
platform, right? It's just
imagine when Excel didn't exist for finance
and now suddenly Excel exists.
Of course you know Excel because in every single interview, right?
Do you know Excel?
That's a question, right?
Yeah.
That's the same thing with AI.
And understand the fundamentals of AI, because I think people should not shy away from the technicalities of AI.
I don't think it's actually super complicated to learn those technicalities at the abstract level, like the conversation we had earlier.
And it helps you to understand the buzzwords
and just especially younger generation,
they are in a world that they're going to see
massive shifts in technology,
not one, like multiple of them.
So better understand the fundamentals of this.
Now, back to your question,
I don't think it's going to replace the humans.
There is going to be new ways of work for humans.
Like, yes, we have had all of the inventions in the history of technology, from fire and the wheel all the way to cell phones and the internet,
but we are all busier and busier with more work, right? And it's more creative work, rather than
going to the farms and just working the land. These days we can think about investments
and opportunities and resource allocation and frictions in the market
and how to use AI, right? So there's going to be more room for humanity to be more creative,
solve bigger problems and more interesting problems. And then at the firm level, I'll ask
this a little different way. What are you seeing? Well, I'll do two things. What are you seeing as
the adoption, right? In terms of across the, let's call it generally the hedge fund space
or the private market space.
Like what is the adoption of AI?
That's a difficult question because some might just be using small pieces of it.
Some are using it for big things.
So you can answer that one,
or, I'll give you a choice, answer: what are most of the ones
trying to use it getting wrong? That one seems
more fun. Okay, can I take both?
Sure, take both.
So these are intertwined questions, right? I'll respond to both of them.
So the finance industry is doing relatively well in AI.
And I'm impressed every time I give these keynotes
in the conferences or the articles I write
and the feedback I hear,
like the community is moving forward.
Like, I remember a year ago
when I was giving talks at SuperReturn,
versus the recent one where I gave the keynote.
Last year, it was all the fear of AI and hallucination and security and danger, and now it was all about: hey, how can
I scale my experiments? What is next? Tell me what's next. How can I scale these beautiful
things I did with AI?
You think that was all basically ChatGPT? Like, put it in the hands of millions of people
and it becomes less scary?
I think multiple things happened.
First of all, ChatGPT dramatically,
like significantly, improved in its quality of work.
It was really nice to see that the fundamental,
the operating-system providers,
which are ChatGPT, Anthropic, Mistral, Llama,
and others, are doing a great job
in addressing some shortcomings:
especially the reasoning,
especially the multimodal reading,
especially the control of hallucination and alignment.
And there's a way to go yet.
So it passed the bar for adoption in the industry.
But it was also a lot of work by not only ChatGPT,
but the derivative companies that use that operating system for
providing a solution, right? What does it mean? ChatGPT is the operating system. If you want to
make an analogy, it's like a dictionary for translation,
so it's not a translator itself, right? With ChatGPT, if you want to translate with a dictionary, you can do it,
but it's painful and it takes a lot of time. But if someone takes a dictionary as an operating
system and makes a translator on top of that, suddenly there is a solution that can really
work for translation, and the adoption goes higher, right? So for the finance industry,
I think multiple things are happening. Number one,
finance is at the focus for AI innovators,
researchers, entrepreneurs,
because finance has a lot of data,
structured data, unstructured data,
use cases, smart people.
So it's naturally a place
for a lot of AI researchers,
practitioners, entrepreneurs
to start from, right?
And you see that Microsoft Copilot has Microsoft Copilot for Finance.
That's the only domain-specific Copilot that Microsoft released, right?
The other thing is that the finance people roughly had the idea of using
AI and data and getting this.
So they were waiting for it, somehow.
So I wouldn't say finance is necessarily
the best and foremost adopter of technology generally.
But I think in AI space, they have done a good job.
And it's going to continue.
Like the firms are experimenting.
I don't think there's a consensus in the industry
on what are the main use cases
and what are the main solutions and what are the best practices, right? Like if you go to this industry and ask about data analytics,
Excel is the word you hear, right? If you ask about, I don't know, like if you go out and ask
about sales, maybe Salesforce is the solution you hear, right? So it's not yet there with AI.
The game is not settled and everyone is exploring.
Now, what it means for practitioners is that if you want to start with AI, you have to do a little homework for yourself, because there's not one solution for all yet.
You have to do a little homework, understand the use case, understand what is good for you, decide your own way of working on that,
provide the solution you want, build it if you want.
Going through that experiment, that's why, back to your question about consultancy,
when I say work with experts who can help you get there,
at this point, that expertise is not only on AI engineering and research,
but also in transformation of organizations with AI,
and having seen the patterns.
That gets to your second question:
what has gone wrong for the many other folks in the industry,
similar CEOs and CIOs and CFOs and COOs in hedge funds and private equities?
What did go wrong for them?
Like, what can I learn from their practice?
And there's a lot of things that are not necessarily
the core engineering of AI,
like how to define use case,
how to design the solution,
how to roll it out, how to manage the expectation, how to manage the cultural change and behavioral
change, how to scale it up. So many times people, CEOs, come to me and they say, hey,
we bought this solution and that solution, and we paid, and no one uses it. What's wrong, right?
Yeah. Salesforce. I joke all the time. It's the most successful hated company of all time, right?
Nobody wants to or likes to use it.
These are the types of questions where, maybe at the fundamental level,
they should understand the technology of AI,
but there are also other aspects to it:
the behavioral change, the cultural change,
the organizational change.
And that's the type of expertise required for successful AI.
And I would say that we see that day in, day out.
And we work at AIx2. One thing we're proud of: we have this logo on our website, "one week to impact." And if you think about what happens
in "one week to impact," what is it? I mean, it's a click of a subscription to a software. What about
one week? It's one second to subscribe to a software. Why do I need one week? That one week
is exactly designed to cross that barrier, from subscription to a software to significant, scaled impact in your organization.
And how to do it? I would say there's a lot of nuance that has gone into designing
that: a lot of experiments, observation, systematic surveys of the industry, working with our
clients, that got us there.
Now that's the state of the industry.
Maybe in five years it's so streamlined,
so public-knowledge, such common information, that you don't need that.
It's so clear.
Oh, you're going to say Salesforce, let's say for sales.
And this is how it works.
And there are so many online YouTube videos about how to use Salesforce.
But at this point, it's not yet there for AI.
But the impact and the adoption are just growing.
Every conference I go to, I see more and more.
And it seems, real quick, it seems there's a bit of a split, right?
And though it was all these quants, as we call them, like you said, quantitative finance guys have been around forever,
they're using AI to generate new models and do testing and all that stuff. But they've kind of
been doing that for a while; this, I think, just speeds that up. Maybe for idea generation, I don't
hear a ton about that, like for new trade ideas or some models. But then it
seems like everything you're saying is, hey, there's this whole business side that can be
vastly improved. Which hedge fund people generally hate, the business side, right? They're just like, hey,
I know how to make money, I want to make money for people. Oh, I have to deal with clients, I have to do
paperwork, I hate all that side of it. So yeah, AI is a huge, huge unlock for that side of the business of,
hey, you don't have to worry about that
as much as perhaps you used to.
Exactly, Jeff.
It's exactly like you said.
And exactly like you said,
there's this alpha generating side
or alpha predicting side.
And then there's this operational efficiency
or business side, if you want to call it.
And AI has just opened its impact
and its way to the operational efficiency
of the funds, hedge funds, private equities, VCs, and others.
And it requires a lot of these internal processes, document writing.
And a lot of hedge funds come and talk to us about the same sets of operational-efficiency use cases,
how they do them with generative AI.
And to your point about the traditional, older use case of predictive AI for
alpha prediction and alpha finding, even that. Like, people come and say: for this trading desk,
we want to have a summary of all the data online, like all the sentiment analysis, for example,
all the alternative data, all the reviews
in social media, all the news reports about an asset, so you can feed it into our alpha-finding
models.
So it's also getting there.
Like thesis generation, I would say less.
It's a very high level question to ask an AI to provide a thesis for you at this point.
But if you break it down to smaller tasks, AI can solve it. And for the data
feed into those alpha-finding models, that's been something, because there's one
power of generative AI like ChatGPT that people miss: it's not only a chatbot, it's a hugely powerful
machine for turning unstructured data into structured data. Something that would take huge, significant manual effort.
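As a minimal sketch of that unstructured-to-structured idea, here a tiny keyword lexicon stands in for the LLM call; the lexicon, ticker, and headlines are all invented for illustration. The shape of the pipeline is the point: free text in, structured rows a quantitative model can ingest out.

```python
import string

# Stand-in for a generative-AI extraction step: map free-text headlines
# to structured rows. A real pipeline would call an LLM here; a keyword
# lexicon keeps the sketch runnable. All names below are hypothetical.
POSITIVE = {"beats", "surge", "record", "upgrade"}
NEGATIVE = {"misses", "falls", "probe", "downgrade"}

def extract(headline: str, ticker: str) -> dict:
    """Turn one unstructured headline into a structured sentiment row."""
    words = {w.strip(string.punctuation) for w in headline.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return {"ticker": ticker, "headline": headline, "sentiment": score}

rows = [
    extract("Acme beats earnings, shares surge", "ACME"),
    extract("Regulator opens probe into Acme", "ACME"),
]
net_sentiment = sum(r["sentiment"] for r in rows)  # one feature for an alpha model
print(rows, net_sentiment)
```

Swapping the lexicon for an LLM call changes the quality of the extraction, not the structure of the pipeline, which is why this pattern scales to news, filings, and social-media feeds.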
Yeah.
Which, to me, I always said: okay, you're using AI to generate models and such, so why
hasn't it told you to buy Vancouver real estate and whatnot?
Right.
So there are little things like that. Okay, but why isn't it grabbing any data it
can and saying, hey, here's something that fits with what you're trying to do with your model?
Here's a trade idea.
Exactly.
So, yes.
So, nicely, you put it nicely that there are two domains, business and alpha generation.
And each of them can be impacted by generative AI and chat GPT.
Obviously, on the operations side, there's a whole new domain and a whole new way of thinking.
But even in the alpha finding,
there's a lot of use cases
that people come and talk to us.
Now, what has gone wrong?
The other question that you have.
I chose both of them,
so I have the responsibility to respond now.
A lot of times, what goes wrong is,
first, choosing the wrong use case. People have wrong expectations of what AI can do.
They think AI can come through and find a risk analysis for all the risks in their
assets. That's high-level thinking that AI cannot do yet. Right. Second,
if you ask it, like, what am I missing from a risk standpoint?
Yes, that's like a huge question. Yeah.
Instead, break it down: break the risk down into a hundred elements,
get clear inputs, clear prompts, and ask the machine to synthesize. Unfortunately,
the machine is not able to do that high-level thinking yet. Maybe in the future we have these AGIs or advanced AI agents
that we discussed, and they can do the high-level thinking, right?
Does it ever drive you crazy, the completely unrealistic expectations? Like, why can't it do this? What's wrong with this
stuff? You're like, do you understand how much it's already doing?
Yeah. It's not only at the level of finance. I see some startups in the Bay Area
that are formed by people coming from a business background rather than engineering. They have
this idea that AI is going to solve, for this simple example
I give, risk prediction for investments, and let's make a startup around it. And they hire some data
scientists and want them to build that machine for them, because the founder was business-side and
doesn't even understand this is not there yet. You can invest the resources and
everything, but it's not yet ready for that. You should start from another use case, and eventually
you will be able to address that. So these are some examples of wrong expectations. And that
goes back to not understanding the fundamentals of technology and where it is. But also the other
domain that goes wrong is adoption and scale in the organization, which is a
lot of cultural and behavioral change, right? Many things should come together to have a behavioral
change and start using a new platform, right? And part of that is the communication lines, descriptions of the roles, addressing the concerns of
employees about replacement, and others: proper evangelists and advocates in the organization.
There are so many ways in AI, in organizational transformation.
At McKinsey, we had a playbook for these kinds of things and all the techniques,
and some of them really work for AI. Is it top down or bottom up, how do you design it? That's something that I see go wrong a lot
of the time. It's not because the technology isn't there; it's that the organization has not
defined a good path for making this change in the organization, right?
Right. And it could be a
significant change, right?
It is a significant change. And it's kind of a thing that requires some level
of learning. It's not like it happens in a moment. I always give this example: people say, hey,
I'm going to wait for AI to get better, then I'll start. I say, hey, AI is not like a Tesla, a
machine that you don't buy now, you buy in a year, and then suddenly you have a Tesla. No, you start
with AI, and there's a learning curve and experience, especially with organizations, to learn and build that culture.
And it takes some time for people to get there, right?
So the sooner you start with the solutions, the sooner you are onboarded and you learn and people get the ideas of AI.
And then you can go with this journey with the organization, successfully start.
So this is the other area that I see go wrong. So that's why I think
for finance specifically, the type of people who work on this should be a mix: people who
are really deep in engineering and research, not only engineering, because AI is changing
quickly and a researcher has the capability to see beyond just the current engineering and
APIs and solutions to where the research is going, plus someone who understands the fundamentals
of investment organizations and the behavior of people there and how technology can come
through, the type of consultancy we talked about.
A mix of these two, I think, is the minimum required for people in the finance industry,
be they hedge funds, public or private, private equities, VCs, and others, to use AI effectively.
Close it out with what could go wrong?
What's going to stop this freight train, right?
I'm talking with a guy who sold his AI startup.
He thinks if any like Fortune 500 company comes out
and says, hey, we're pulling back on AI investment,
like Nvidia stock will go down 50%.
So not to have you make market predictions here,
but just overall in the whole ecosystem,
what can you see that slows this down, if anything?
So I'm not going to comment on whether AI is hype or not, and what happens if,
like, a Fortune 500...
Yeah, yeah. It's a good conversation: where is AI, how much has it
delivered, how much revenue has it generated versus investment, and whether it's hype or not. We can go to
that conversation.
What it seems like now: if you're a CEO, the playbook's obvious.
You need to say you're investing in AI.
You need to be doing things to stay relevant,
whether it's actually adding anything to the bottom line seems secondary.
And the question comes down to how much AI has really delivered
versus how much is expectations for the future.
So we are betting on the future versus right now. And there's a good amount of work that should go into
providing value from the operating systems of AI. I think of that example I gave, from dictionary
to translator: there's a lot of work that should happen to get to the translator level. But I take your
question from the angle of what can stop this train from moving on,
right? What can stop it? I think one clear A-B test and experiment we have is the Europe
versus US, right? In US, you see AI is rapidly growing because of supportive regulations,
right? Regulations really matter. Like the regulations in the,
I can say now, the last administration.
Yeah.
Because it's happening now.
So the regulations were limiting.
And to some extent, even in the Bay Area here,
for startups and people in the scene
who honestly don't care much about politics,
that was probably the first time I saw these people
really caring about politics,
because the regulations were really impacting their startup life. What kind of data
you can use, what kind of AI you can use. And you can see that A/B test from a bigger perspective between
Europe and the US: what's the growth of AI innovation between the two over the last 10
years, and what's the gap now. So it's, I think, an interesting observation. The current administration,
I would say it's fair to say they seem to be very pro-AI and reducing regulations and providing
support for this kind of technology shift. So it's going to go even faster. The competition is
another angle. I hope the competition exists. I'm very happy to see that it's not only ChatGPT
anymore. It's not only OpenAI anymore. It's Anthropic, Mistral, XAI, and others. And I hope the competition keeps going,
which is a driver of success and innovation and better outcome, I guess.
The other things that can stop it are compute and data, because these two are the fuel for growth, as I said.
And compute, tied in with energy usage also, perhaps?
Energy, as well as the real compute, like the GPUs
that we need and what NVIDIA provides for the world, right? I wouldn't say energy is a major
bottleneck now, but compute, like the demand for NVIDIA,
if it can get to all the researchers in the field
and everyone can use it and the price comes down enough.
I think generally, all in all,
there's not a signal of a major slowdown
or roadblock here.
Not for data, not for compute, not for energy.
But these are areas that we should actively work on
as we grow
to ensure that we can see the impact of this technology
into day-to-day life.
You're supposed to say, like, as soon as the robots take over
and there's a World War III with the AI versus the humans.
That's a good thought.
I remember back in my PhD, before all this was happening,
we were working on AI, and even then there were these ideas about AI. I was working on a
grant, an NSF grant, on cybersecurity and AI, and there was this question of AI taking
over, the robots coming and making humans extinct. I think these are, at this point, very
much sci-fi,
right? Because what would that even look like?
Who knows. That's a whole other podcast as well. But so you're fading that; that's not a real threat, at least not the closest threat that
you're facing?
Yeah.
It seems to me the bigger problem is the AI flying the planes
and driving the cars, and there's a mass problem: something
happens, a bunch of people die or something, or it gets hacked.
Hacking AI, and the non-alignment of AI, is super important, because if you cannot predict some of the behavior of an AI machine, and it suddenly
does things it is not expected to do, the examples especially are high-stakes control,
like driving or flying, right? But from a public standpoint, right?
If like, okay,
2,000 people died when these planes
collided or something.
And then some Congress people come out
and say, hey, put the brakes on.
Yeah. But I think AI
community, all in all, has taken a very
more conscious
approach to the topic of alignment.
It's probably the biggest, hottest topic of research here in Stanford.
And AI alignment and safety, I think, has been taken more seriously.
And alignment is alignment with us humans?
Alignment is with the task that you do.
If you're supposed to drive a car and not collide with others,
then just don't do that, right?
Don't think that's...
So I think especially with the Anthropic push
and the nice competition we see on the field,
there's more and more importance for AI alignments.
With open-source AI, there's access to the models by everyone,
like the Llama models and others.
So it helps the community to work on that more actively.
I think they're all good pulses
that we see in the community.
So I'm relatively optimistic about where AI will go.
Awesome. I love it.
I think we'll leave it there.
We'll link to your website in the show notes
and people can go find out more.
Any last thoughts before we wrap it up?
I'm also available on LinkedIn.
People can reach out.
I'm always happy to talk with people
who are excited about AI, research, engineering, finance,
and the mix of them,
or creative ideas in this space.
And I would say that
we are definitely going to see this change.
Finance is a beautiful area that's at the forefront of AI;
it focuses a lot of innovation, in research and in business,
and a lot of solution-building is happening in this beautiful field.
There's a lot of interesting problems for everyone
from a research standpoint to engineering to business.
So if you're excited about them, I'm very happy to work with people
and talk to people in the same area.
And thanks for having me on this podcast.
You've been listening to The Derivative.
Links from this episode will be in the episode description of this channel.
Follow us on Twitter at RCM Alts and visit our website to read our blog or subscribe to our newsletter at rcmalts.com.
If you liked our show, introduce a friend and show them how to subscribe.
And be sure to leave comments.
We'd love to hear from you.
This podcast is provided for informational purposes only and should not be relied upon
as legal, business, investment, or tax advice.
All opinions expressed by podcast participants are solely their own opinions and do not necessarily reflect the opinions of RCM Alternatives, their affiliates,
or companies featured. Due to industry regulations, participants on this podcast are instructed not to
make specific trade recommendations nor reference past or potential profits, and listeners are
reminded that managed futures, commodity trading, and other alternative investments are complex and
carry a risk of substantial losses. As such, they are not suitable for all investors.