Yet Another Value Podcast - AI in Investing with Daloopa's founder Thomas Li

Episode Date: April 30, 2025

In this episode of Yet Another Value Podcast, host Andrew Walker shares a webinar conversation with Thomas Li, CEO and co-founder of Daloopa, diving into how AI is transforming the workflows of fundamental investors. They explore real-world applications across hedge funds and investment banks, highlighting both the promise and current limitations of large language models in financial analysis. From note synthesis to risk modeling and center book evaluations, Thomas outlines the practical realities of AI implementation, discusses adoption across firm sizes, and explains how contextual data, not just algorithm quality, is becoming the differentiator. Whether you're a solo analyst or part of a multi-manager platform, this episode offers a grounded perspective on where AI in finance is heading.

[00:00:00] Andrew introduces the episode as a repost of a webinar with Daloopa on AI and investing.
[00:01:58] Thomas Li outlines AI's strength in generating language vs. processing structured financial data.
[00:06:43] Discussion on practical AI use cases like cross-referencing notes with earnings calls.
[00:10:12] Andrew asks how to structure analyst notes for better AI input and efficiency.
[00:12:38] Comparing large pod shops and long-only firms in terms of AI adoption and internal tools.
[00:17:34] Why foundational models are commoditized and context is key to AI application value.
[00:22:18] The crowding factor as a risk vector and how pod shops hedge against it.
[00:29:01] Generating alpha today: human edge through timing, perception, and behavioral insight.
[00:35:07] Long-term value of internal data and modeling analyst performance over time.
[00:41:49] How AI might evolve: foundational models vs. application layer as the value driver.
[00:46:22] Adoption outlook: AI use is growing, but nuanced finance problems slow full automation.
[00:52:14] Importance of internal champions (agency) to drive meaningful AI integration.
[00:57:30] Center books at pod shops use AI to backtest and analyze analyst effectiveness.
[01:02:40] Closing thoughts on AI's trajectory and data as the real moat for firms.

Links:
Daloopa: https://daloopa.com/yavp
See our legal disclaimer here: https://www.yetanothervalueblog.com/p/legal-and-disclaimer

Transcript
Starting point is 00:00:00 All right. Hello. Today, I'm posting a video of a webinar that I did with Daloopa talking about AI in investing and its implications for fundamental investors. We posted this webinar last week. It got a lot of, it got a lot of good feedback. So we figured, hey, we've already got the video file. I might as well post it on the podcast channel. So anyone who's interested in my work or just thinking about AI and investing can go ahead and listen to it and get some ideas. I thought it was just a really interesting, informative conversation, and Thomas, Daloopa's CEO, sees how companies and investors across the board, from very small-scale hedge funds to super large investment banks, prop trading desks, hedge funds that are managing billions, are using AI. So I thought it was a really informative conversation and I'm happy to share it on the channel. Hello, everyone. Welcome to the webinar. I'm Andrew Walker, the host of the Yet Another Value Podcast. With me today, I'm excited to be talking on this webinar with Thomas Li, the CEO and co-founder of Daloopa. Thomas, look, I'll give an overview and then I'd love to dive into it with you. One of the most frequent questions I have
Starting point is 00:01:11 when I'm talking to my friends, when I'm thinking about investing, is: look, AI is everywhere now, you know, it's getting used for everything. I think ChatGPT recently surpassed Google as the most used search function. But, you know, in terms of finance, I'm always wondering, hey, how are, you know, my peers, my competitors, how are people using AI to improve their work? Because if you're not using AI as a financial analyst, I personally think you're going to get left behind. But there's so many different ways to use it,
Starting point is 00:01:39 and I'm always worried as a one-man shop or a small shop that somebody's figured out a way to use it 10x better, 100x better, 1,000x better. So I really wanted to, I talk to you guys frequently, and I really wanted to talk to you because I remember when we met back in 2020 over Zoom in the dark days of August, you were talking about using AI and using LLMs and everything inside of the Daloopa products. And you have a very wide view and range of a lot of different institutions through your perch.
Starting point is 00:02:07 So I'm really curious how you're seeing AI get used by the best. So that's a broad overview. That's why I'm excited to talk to you. I'll kind of flip it over to you for high-level thoughts on what you're seeing in AI's use in finance. And we can dive in from there. Yeah. Hey, Andrew. Thanks for having me on.
Starting point is 00:02:26 So there's a couple questions there. And I think the first level set that I'll provide is: how do you think about what AI is built for, what it is capable of doing? And the foundational models keep getting better and they keep doing something different. But the reality is, at the end of the day, what AI really is doing is a prediction model
Starting point is 00:02:49 on what the next item is. And over time, we've gone from the next item being the next word to the next sentence, to the next pixel, to the next set of pixels, to the next paragraph. It's able to generate, check its own work, regenerate, check its own work, generate again, so it gets better and better. But I think at the very, very core of it,
Starting point is 00:03:08 what hasn't changed in the AI world is: based on everything I know, what is the next object? How do I generate the next object? And the key to all of this, where it becomes really smart and why it's so much better than search, is the concept of generation, right? As humans, what we are typically doing when we have conversations, when we have thought processes, is that we're generating stuff, right?
Starting point is 00:03:32 I am creating new sentences as we speak. And it's very human-like. The problem is not everything we do is generation, right? For instance, if you're a financial analyst and you're sitting there trying to model, a lot of what you're doing isn't generating stuff. It's actual processing. And it's not processing on how to write an email, or processing on how to pitch a story to your PM or whatever.
Starting point is 00:03:56 It's actually processing to just understand something better, right? It's going through numbers, extracting data, thinking through inconsistencies between how cost is growing relative to how revenue is growing, thinking through, like, how CAC is changing for a software business, and thinking through how occupancy rates are changing for a hotel company, and maybe thinking through how all of those might come into one big picture together. That's really what an analyst's work is about, and that's not generation of concepts
Starting point is 00:04:26 or objects. And that's fine. Like, our brain has different compartments for different things. So when I think about what AI is really good at, and it is really good at the generation piece, there are a lot of applications where you want to use that, right? But trying to use that for everything is probably not a good idea, right? I think every time, and this is like a common human fallacy, right: when you're a hammer, everything looks like a nail.
Starting point is 00:04:50 When you're a really, really shiny hammer, everything now definitely looks like a nail. Right? So we are definitely in a little bit of that, but very quickly you start to realize that if you try to use ChatGPT to do a lot of tasks, it just fails spectacularly in the way that it succeeds spectacularly at some other tasks. One common example I always tell people is something I absolutely hate doing is trying to plan a trip. And if you try to plan a trip, because of the volume of, like, literal words that I have to go through, like travel reports and reviews, and I want to read Reddit, and I want to read, like, Google reviews and whatever. Like, that just takes a lot of time to process and plan. And ChatGPT can solve that for me by giving me, like, a one-pager, like,
Starting point is 00:05:33 here's what you should do if you want to go to Croatia, right? That's incredible. But if I'm saying, hey, I want to build a financial model, I want to capture all the intricacies in a financial model, all of a sudden that becomes really, really hard for ChatGPT to do, and it just fails. It starts generating things that seem correct, but I don't need it to seem correct. Because this is a public company with public disclosures, I just need it to be correct. So when I think about how to use AI, it really comes down to what AI is built for
Starting point is 00:06:05 and what the philosophies of AI are, and applying those philosophies. So, one really smart use case I've seen a lot is, let's say you have a bunch of internal notes, right, that you've written about a business and your internal understanding of a business. And you say, okay, I want to correlate my internal understanding of the business with every single earnings call that has happened in the last four quarters since I started covering this company. Are there any discrepancies? AI is phenomenal at that, right?
Starting point is 00:06:36 But if you said, hey, I've built this model and I want to see if this model is inconsistent with any of the changes in the financials that the company has disclosed, like maybe there's revenue acceleration or there's cost deceleration or whatever, then it just doesn't work. The whole thing just doesn't work. It just generates a bunch of nonsense, and you get nowhere. The reason it doesn't work is because if you realize that AI is not there to extract data, it's there to generate objects, then you realize that the data piece of it simply doesn't work. But the summarizing and the logical thinking across two language-based objects works really, really well. So let me, you gave a really interesting use case there, right?
Starting point is 00:07:25 So I'm an analyst. I've got a company I've been following. I might have, you know, pages and pages of notes on them. You're saying that's one of the really effective use cases; can you just dive into that? So the effective use case you're saying is: upload your notes into ChatGPT and then tell ChatGPT, hey, these are my notes on the business. Tell me if there's anything inconsistent with what you're seeing in the way the company is talking about it over on the earnings calls.
Starting point is 00:07:51 Is that one of the use cases you're seeing that's interesting? Yeah, I mean, that's a pretty common use case. I mean, most buy-side firms don't do that because they are not allowed to upload anything to ChatGPT, let alone... I wasn't even thinking about that. I haven't thought about just, like, taking my own thesis, which I write all the time, my own thesis on a company, uploading it to ChatGPT, and then saying, tell me if this thesis is disproven by something that's happened in the past six months.
Starting point is 00:08:14 Is that kind of what you're thinking? Yeah, generally you want to be more specific than that. So you would say, like, here is the latest earnings transcript. Here is the latest conference transcript. They were presenting at a Morgan Stanley TMT conference, for instance. Here's the transcript for that. Here are my notes prior to these two events happening. Where are the inconsistencies?
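To make that concrete, here is a minimal sketch of the prompt pattern Thomas describes, using the OpenAI Python client. The model name and file paths are placeholders, not anything the speakers specifically endorse:

```python
# Minimal sketch of the notes-vs-transcripts "blacklining" prompt.
# Assumes the OpenAI Python SDK; file paths and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = open("my_notes.txt").read()                      # written before the events
earnings = open("latest_earnings_transcript.txt").read()
conference = open("conference_transcript.txt").read()

prompt = f"""Here is the latest earnings transcript:
{earnings}

Here is the latest conference transcript:
{conference}

Here are my notes, written prior to these two events:
{notes}

List every place where the transcripts are inconsistent with my notes."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```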
Starting point is 00:08:36 It's like a really smart blacklining function if you think about it. Right. So that works. Let me ask one more question, because I want people to have a takeaway, like how they can start, how they can improve their use of AI from this process. So if you're uploading your notes in there, as you said, like I've joked with AI, it's garbage in, garbage out; a lot of what you're going to get out is what you put in. So how would people structure their notes in a way that would make this one use case you're talking about, which seems really interesting to me,
Starting point is 00:09:03 how would they structure their notes in a way that would make this maximally efficient? I think the good thing about how these language models are set up is you don't actually have to structure your notes in any way, shape, or form. Because, I mean, when it was, like, ChatGPT 3, there was some structuring that you should do. But with the latest iterations of the models, as we've moved away from just the regular transformer architecture, the structuring becomes less and less important. Like, I won't bore you with the technical details, but as we've moved away from transformers, basically what that means is, instead of generating one item, we can generate multiple, and it can check its own output
Starting point is 00:09:42 and then regenerate the output, right? So that looping process allows it to loop over two words or 50 words or 15 words or whatever. So how you structure it doesn't matter to the AI anymore, so long as it's words. Where it doesn't work, unfortunately, is when it's numbers; then it just kind of doesn't work, because the logic for numbers isn't quite there yet, or isn't there at all. But yeah, for notes, it works incredibly well. Let me, I'm going to stick with this theme. So one of the cool things about Daloopa is you guys work with a bunch of different clients, right?
Starting point is 00:10:14 You have big investment banks, big hedge funds, small hedge funds, really running the gamut. So the reason I ask this is you mentioned it when we were talking about that. You said, hey, a lot of buy-side firms don't do this practice that I think is pretty useful, because of internal restrictions on uploading notes. Now, I would not be surprised if the analysts are going and writing their own notes and uploading them on the side. But one thing I'd love to start with, how do you see, let's keep it to investors. How do you see big pod shops that probably have a big
Starting point is 00:10:44 research budget and are probably trying to be at the bleeding edge versus, let's say, you know, more traditional sleepy long-onlys. How do you see differing use cases with AI and how they're interacting with AI between kind of those two broad buckets of customers? Yeah. So actually, I wouldn't categorize it that way, because what we've seen is the big pod shops and the big long-onlys have both really just, like, put their foot down and invested in AI. I think they've seen the benefits.
Starting point is 00:11:20 It's like, to your point, it's like Google all over again. If you don't use Google as a research analyst, you're just not doing your job, right? It's like that all over again. So I think there is no hesitancy between the pod shops and the long-onlys in saying, we got to do something here. We got to figure out how to create applications and how it works. But you're absolutely right on the research budget front. There are firms with huge research budgets and there are firms with much smaller research budgets.
Starting point is 00:11:44 Generally, big AUM firms, doesn't matter if you're long-only or long-short, have much bigger research budgets. And the big discrepancy that we're seeing is whether or not they build internal tools. So the thing about AI that is very different from a search engine is you won't, you will never build your own search engine, because the foundational algorithm of a search engine was always locked within the confines of Google and Microsoft. Right? Like, till today, we don't have access to Google's
Starting point is 00:12:31 search algorithm. But we have access to a ton of foundational models. I mean, we can literally pick foundational models today and three weeks later find something that's four times as good and half as expensive. And you can switch on a dime, right? So it's almost like if Google just released their algorithm and 20 other companies are releasing their search algorithms too. What that implies is we can all build search in the way we want to build search. We can build extremely fine-tuned search. We can build internal search. We can build external search. We can build all sorts of search. Right. So that's what's going on with AI. The cost of building an internal AI capability now is super low because of what people like to call ChatGPT wrappers, right? A ChatGPT wrapper is really just software that you build on top of a foundational model that OpenAI provides or Anthropic provides, like one of these guys provides. So what is very obvious now is there are a whole suite of buy-side firms that are willing and able to build these internal tools.
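As an illustration of the "wrapper" idea, here is a minimal sketch of one internal entry point with interchangeable foundational models behind it. The client libraries are the real OpenAI and Anthropic Python SDKs, but the model names are placeholders and the routing logic is purely illustrative:

```python
# Sketch of a "ChatGPT wrapper": one internal interface, swappable models.
from openai import OpenAI
import anthropic

def ask(prompt: str, provider: str = "openai") -> str:
    """Route one prompt to whichever foundational model is best/cheapest today."""
    if provider == "openai":
        resp = OpenAI().chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    resp = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Switching providers is a one-argument change; the firm's prompts,
# data plumbing, and guardrails all stay the same.
print(ask("Summarize the key risks in these notes: ...", provider="openai"))
```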
Starting point is 00:13:17 So the question is why, like, why would you build these tools? Why would you not use ChatGPT? I think a year ago, the answer to the question was compliance. I think when people are hesitant on a lot of software, the first answer to why you don't do anything is always compliance. But over time, as people sort of understand this better, real economic decisions start getting made. So what we are seeing today, the real economic decision, is what we call the context problem. So if you wanted to do that note comparison example that we talked about,
Starting point is 00:13:51 right, you want to compare what you've written with what the company has disclosed. You need access to two different sources of data. You need access to the company's transcripts, which are not actually available on the public internet. So if you try to do this with ChatGPT, it actually doesn't work, right? And you need access to your own notes, which also do not exist on the internet. So if you try to do this with ChatGPT, it doesn't work. So obviously, if you go into ChatGPT today, even if you uploaded your own notes, it doesn't work, because it doesn't have access to the Morgan Stanley TMT conference. It doesn't have access to the earnings call transcript of the company. Amazingly, it doesn't even have access to the 10-Qs and the 10-Ks. Right? It has access
Starting point is 00:14:29 to blogs and posts written about public companies, but not the actual filings of the public companies. It doesn't pull them off, like, EDGAR or anything? No, it does not. Hmm, interesting. I didn't realize that, because I've definitely searched and I feel like I've gotten links from EDGAR before, but I didn't realize it wouldn't pull from that. You would have gotten links from a blog that links to EDGAR, but it doesn't actually go into EDGAR. Hmm. Why is that?
Starting point is 00:14:52 So AI is very good at processing information, but the act of document obtaining is still an old-school technique, right? Like, AI doesn't solve the problem of, like, how do you go get a document? How do you go get a document from the SEC? How do you go download an investor presentation from a company's website? All of these are just very manual, difficult problems. There are systematic ways to get it, but it's not a large language model problem to solve. Well, so if you went to ChatGPT and you said, hey, can you pull me Nvidia's latest investor relations deck?
Starting point is 00:15:33 If you went on Google and you asked that question, you get it immediately, because it gets you to Nvidia's IR website, and then there is a PDF button and you hit that and you download it. If you try that in ChatGPT, you just won't go anywhere. So that's, like, another problem with AI. The document obtaining piece is very difficult, but Google has effectively solved the problem super, super well.
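For what it's worth, the filing-retrieval piece really is conventional engineering rather than an LLM problem, just as Thomas says. A minimal sketch using SEC EDGAR's public submissions API (Nvidia's CIK is 1045810; the SEC asks for a descriptive User-Agent header, so the contact below is a placeholder):

```python
# Sketch of old-school document retrieval: list a company's recent
# 10-K/10-Q filings straight from SEC EDGAR's public JSON API.
import requests

CIK = 1045810  # Nvidia's SEC Central Index Key
headers = {"User-Agent": "Sample Research contact@example.com"}  # placeholder contact

url = f"https://data.sec.gov/submissions/CIK{CIK:010d}.json"
recent = requests.get(url, headers=headers).json()["filings"]["recent"]

for form, accession, doc in zip(
    recent["form"], recent["accessionNumber"], recent["primaryDocument"]
):
    if form in ("10-K", "10-Q"):
        folder = accession.replace("-", "")
        print(form, f"https://www.sec.gov/Archives/edgar/data/{CIK}/{folder}/{doc}")
```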
Starting point is 00:15:55 But the real problem is you don't have access to all these different buckets of data. So you want your foundational model to sit on top of all of these different buckets of data, so that if your analyst asks that question, hey, here are my notes, what are the inconsistencies? Not only is it able to compare them with the Morgan Stanley TMT conference and the latest earnings release, it also should theoretically compare the data between your model and what the company has reported. It theoretically should be able
Starting point is 00:16:27 to do it for consensus. It should do what an analyst is really trying to accomplish, which is: how has my company changed? That's the real question. So what we are seeing the really big shops doing is they're building this. They're saying the most difficult part of building this historically has always been the foundational model, the logic, but now someone has invented the logic. So all I need to do is feed it the information. Feeding it the information is a harder problem than you think, but it's solvable, because you can just leverage the fact that your analysts generate the notes. You can leverage the fact that you can purchase transcripts from data vendors. You leverage the fact that there are guys like Daloopa that have done the work of extracting all the data into a database.
Starting point is 00:17:16 So you can say, hey, I would like to compare how my estimates are relative to company historicals. Am I getting more accurate over time or am I getting less accurate over time? Right. So now this exercise is completely doable, assuming that you have access to a foundational model, your analyst's actual model, and the fundamental data from Daloopa. Now, in one sentence, you can answer the question. But historically, that work is, you know, a research associate's, you know, two weeks' worth of just manual grinding through Excel.
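A minimal sketch of that estimates-versus-actuals exercise. The two CSV files are hypothetical stand-ins, one exported from the analyst's own model and one from a fundamentals database (for example, a Daloopa export); the column names are invented for illustration:

```python
# Sketch: is the analyst getting more or less accurate over time?
import pandas as pd

est = pd.read_csv("my_estimates.csv")      # columns: quarter, metric, estimate
act = pd.read_csv("reported_actuals.csv")  # columns: quarter, metric, actual

df = est.merge(act, on=["quarter", "metric"])
df["abs_pct_error"] = (df["estimate"] - df["actual"]).abs() / df["actual"].abs()

# Average absolute error per quarter; a falling series means improving accuracy.
print(df.groupby("quarter")["abs_pct_error"].mean().sort_index())
```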
Starting point is 00:17:56 How are you seeing, you know, it strikes me that if I went into a fund right now, let's use age gaps, right? There might be a 50-year-old portfolio manager who's been doing this for 25 years, a 35-year-old, you know, portfolio manager or very senior analyst who's been doing this for 10 years, and then a 25-year-old analyst or post-MBA, you know, very early. And each of them, like, you know, I could imagine the 35-year-old, which I kind of fall into that bucket, you know, we came up and AI came in once we had already gotten our process going. The 50-year-old, like, the internet came when they'd already kind of gotten their process going, so maybe they're less adaptable, or maybe they're more adaptable because they're just willing to throw it out there. And then the 25-year-old is just, like, AI came while they were in school, completely native. How are you seeing those three, obviously, all of them have different job responsibilities, how are you seeing those three buckets differ in their use of AI? Yeah, it's a great question. I think it comes down to, like, how strategic the person is. And over time, you notice that the more senior a person gets, because of job requirements and demands or whatever, and experience, you tend to get more strategic in how you think about your business, versus at a very junior, I-just-graduated-from-college level, you're just trying to get the work done, right?
Starting point is 00:19:01 You're just working on activities versus working on long-term strategic visions. What we've noticed is the people with long-term strategic visions are, like, basically saying, this is a monumental shift in how things work; we got to adapt to the shift. Because you don't want to be the one bank who didn't move to Microsoft Excel when Excel happened. You don't want to be the one research associate who said no to Google when Google was happening. You certainly don't want to be the person stuck on BlackBerry when everybody was moving to iPhones. Right? There are these, like, monumental shifts. And I don't think the age gap matters. I think actually the senior people we talked to fully recognize the power.
Starting point is 00:19:39 In fact, the more senior the person, we've noticed, the more likely they are to want to be AI adopters. And the more junior the person, the more likely they are to be skeptical. You know, one of my friends pointed this out; he was like, look, if you're a senior person, right? If you're the PM and you just have everyone email you the research, and you kind of give them two questions back and then say yes or no, you might be more likely to be an AI adopter, because you're just like,
Starting point is 00:20:09 oh, I'm just replacing calling the 25-year-old on the phone with putting it into ChatGPT; like, it's the exact same thing. I ask a question, then give it a yes or no. Whereas if you're that 25-year-old, you're used to, hey, I read every 10-K all the way through. That's what my boss taught me. I read it all the way through. Like, I need to have all these special insights. You might be a little more skeptical or a little more scared to push off those insights.
Starting point is 00:20:31 That raises another one. With investment banks especially, I remember when AI first, started coming out, there were a lot of people who are like, oh, AI, like all investment bankers famously when you're a junior, you're basically just making pitch decks nonstop, right? A lot of people are like, oh, AI, we don't need juniors anymore. And then the counter to that was, hey, maybe you need a thousand more juniors. Maybe you need them doing a lot more. So which direction are you kind of seen, especially on the junior level, the head count
Starting point is 00:20:56 AI pushing. Is it pushing towards less because AI can take over so many of the jobs or is it pushing towards more? If I switch from investment banking to research, I'd say, hey, maybe you need more. people calling up franchisees to say, how are your sales this order and putting that in to give them more bespoke information to AI? Which direction do you think it's going? I think that's a great question. I think the reality is like, that's not how we see how customers think. And I would draw a parallel to investment banking pre-Microsoft Excel. People forget Microsoft Excel existed,
Starting point is 00:21:27 was created in the generation of what would we know today as the MD generation. So when the MDs of today, or the SMDs of today were analysts, they were in the migration phase from Lotus Notes to Microsoft Excel. And if you've ever used Lotus Notes, it is really not meant to do what spreadsheets are supposed to do today, what we think of its financial models. So basically to your question, what we've noticed
Starting point is 00:21:55 is the amount of hours worked doesn't really change. When Lotus 1-2-3 became Microsoft Excel, the argument was, now we can build models way more efficiently. It's so much easier. We can build much more detailed models. We don't have to work 100 hours a week anymore. Guess what? People still work 100 hours a week. When PowerPoint became way better, you have hotkeys to align logos. Like, all these things become easier. Should you be working less in theory and be more efficient? Yes. But the reality is, in the apprenticeship model that Wall Street is, that's just not how it works. Right. Because when firms think
Starting point is 00:22:30 about their revenue structure, the question is never, hey, can I replace this piece of cost, can I save costs? Firms fundamentally are growth engines. We're always thinking about: how do we expand the business? How do we be more competitive? How do we become first to market? How do we scale our firm? So when you find a new capability that has tremendous cost savings, as AI does today, the question is, like, okay, what do I do with the cost that's saved? What do I do with the associate's time that isn't spent updating models, that isn't spent summarizing 10-Qs, that isn't spent transcribing, you know, earnings calls?
Starting point is 00:23:05 What do I spend that time on? Can they be more productive? It's rarely, okay, now I need four fewer associates. Let's shrink the firm down. Let's keep our revenue steady state and let every other bank, like, outgrow us. One of my biggest worries with AI, or any product that automates the process, is, you know, I think a lot of people have told me, hey, if you're, you know, using Daloopa, if you're covering 35 companies and you need earnings reports
Starting point is 00:23:35 and you just need a quick 35 models, like, great, go get somebody else to have that model built. And then you can study it and, like, think about it. But if you're like, hey, this is the company I want to bet on, you need to go build that model by hand, because building it by hand and the act of actually thinking it through really helps you, like, think through your assumptions, understand it, get a better feel for it. One of the worries I have with AI is, you mentioned it
Starting point is 00:24:07 earlier, where you had an analyst who was just, like, tossing everything into it. I guess there's two worries. There's the echo problem, which I'll come back to in a second. But the second is you're just, like, outsourcing all your thinking. And then when you come to, like, a real edge case, you kind of haven't thought it all through, right? You think you understand it, but you understand it only at a very high level. There's the famous example where the guy goes and gives a hundred speeches, and one day he has his bodyguard go give the speech for him, because his bodyguard has just got it memorized, but the bodyguard can't understand it. I guess, how are you seeing people, especially more junior people, I'd say, avoid that problem where they've outsourced so much of their thinking? Yes, they're getting a lot done, doing their jobs, but they're not actually learning and maybe developing. Yeah.
Starting point is 00:24:45 So I think we don't actually see a lot of people outsourcing their thinking process. We do see a lot of people outsourcing, like, to AI, right, the mundane parts of the job. And finance is a really mundane job. Like, anyone who listens to this and works in finance probably won't disagree with that. There's just a lot of the job, hours upon hours, that you basically spend spinning wheels. And it's necessary, but it is very boring. And I think a lot of that work is getting farmed out to agents today, or should be farmed out to agents for those who aren't doing it. Do I see people trying to farm out the intellectual piece?
Starting point is 00:25:29 Yes, but it's not very common, because finance is also the industry of corner cases. Rarely do you look at something and say, oh, yeah, the same thing has happened before. Yes, history kind of rhymes, but not really, not all that much. And there's a lot of human assumptions and human judgment that needs to happen. And frankly, that's the fun part of the job. That's why people want to be in finance. So that's the piece of the job that we rarely see people trying to automate away. What people are really interested in automating away is the stuff that gets in the way of that.
Starting point is 00:26:03 So, you know, Daloopa exists because we update people's models, because we help generate, like, auto comp sheets and industry models and whatnot. But there are also cases where you can say, you know, your average earnings call is about 15 pages long. It takes a long time to read that, right? So if you are trying to say, I don't cover this industry, but I want to just understand how tariffs are affecting, you know, this entire industry, because this industry feeds into what I cover; do I really want to spend, you know, 16 hours reading through every transcript of every company? Probably not, because I just need the key points anyway. But I can very quickly run a summary system where I can get the highlights. I can Q&A through it for 30 minutes and just get all the information that I need.
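A minimal sketch of that kind of summary pass: one targeted question asked of every transcript in a folder. It reuses the OpenAI client from the earlier sketch; the directory layout and model name are placeholders:

```python
# Sketch: batch-summarize a folder of transcripts with one targeted question.
import glob
from openai import OpenAI

client = OpenAI()
question = "In five bullets, summarize everything management said about tariffs."

for path in sorted(glob.glob("transcripts/*.txt")):
    transcript = open(path).read()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a cheap model is fine for summaries
        messages=[{"role": "user", "content": f"{transcript}\n\n{question}"}],
    )
    print(f"--- {path} ---\n{resp.choices[0].message.content}\n")
```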
Starting point is 00:26:44 Right. So that's saving on process, but not saving on the intellectual horsepower. That actually brings me nicely to my next question. You know, my biggest worry is you use AI to get to a yes on something, right? And then you've got the echo problem. And I've had this before, right, where me and ChatGPT really got into it, and I asked it a bunch of questions, and then, you know, I said, hey, what was the earnings number in this quarter or something, and it gave me a number. I was using that to build a lot of
Starting point is 00:27:55 stuff. And then I go back and check. And it said, hey, earnings for 2023 were $500 million. And you go back to the 2023 filings and you're like, hey, revenue was $200 million in 2023. I'm pretty sure they couldn't have earned $500 million. You know, and it was, like, $25 million or something. So how are you, I guess when you talk to clients and they're using AI, the two questions are: how worried are they about the echo problem, and what are they doing to ensure it doesn't happen? You know, you could imagine a deep research project where you and a team are spending a week on it, and then you come out at the end and there was an echo problem at the beginning and all of your premises were completely mistaken, or something like that. So how are you seeing them think about the echo problem, worry about the echo problem, and avoid the echo problem? I think the reality is we don't actually see a lot of people worry about the echo problem. All right, so Thomas, let me ask you: I asked you how your pod shops versus long-onlys differed, and you actually said they're kind of using it the same. Let me ask you this in a different version.
Starting point is 00:28:26 Your best clients, your clients who you think are incorporating AI the best, what are one or two things they're doing that your clients who are spending a lot of money, but who you don't think are incorporating it as well, aren't? What are they kind of using AI for that your average clients aren't? Yeah, great question. I think it's what we call the context problem, which is: there are people who realize that the difference between a great algorithm and the second best algorithm is really not that big, right? Like, the nuances are really, really, really small, but it's how you drive context into the
Starting point is 00:29:01 algorithm, which makes all the difference. So, you know, once you realize that what makes an output really good is giving the foundational model access to a ton of data and creating guardrails around how to display the output, that's really what makes the output really powerful, versus picking the best algorithm, the cheapest algorithm, spending time on the foundational model piece. And this is not just for hedge funds. Like, you see this across the entire stack,
Starting point is 00:29:31 I think, across, like, hedge funds and VC funds that invest in, like, AI and whatnot. The foundational models are important, really, but the differences are small, right? Where the money is made, where the rubber meets the road, where the applications are created, is: how do we assemble the most amount of context around the model I pick? Because if you think about the Google analogy I used before, if this were closed source, right, if all the foundational models were closed source, then yes,
Starting point is 00:30:02 we should end up with a Google, where the guy with the best algorithm just takes the entire market. But we don't have that world. We have an open source world where the foundational models effectively are, I wouldn't say free to use, but cheap to use, right? And anybody can use them to build any product. Then it comes down to the product. This is like the internet, or like mobile data, right? Mobile data is cheap for anybody to use to build a business on top of. Then you get evaluated on the merits of your business and the creativity with which you are leveraging the power of mobile internet. That's where the world changes. So the context problem is: how do you build a product that solves the problem for your customers using the most amount of data that you can
Starting point is 00:30:48 assemble. As opposed to thinking about the actual AI problem, you should be thinking about a product problem. So it's so easy to get stuck in the, oh my God, AI is new, let's use AI, and forget the fact that, at the end of the day, people build businesses because they want to solve problems. So if you can solve a problem, it doesn't matter if you're using a hammer or a saw, so long as you're solving the problem. What is one thing that someone on the outside of one of these big shops with a big research budget, someone like me, would be surprised that my competitors at big pod shops are using AI for nonstop?
Starting point is 00:31:30 What would surprise them? What is a use case that is surprising in your mind? Yes. Well, that's a good question. I mean, I don't know what you see, so it's hard for me to figure out what is surprising. Forget me then. Just on average, if somebody's listening to this and they are a part-time investor,
Starting point is 00:31:52 you know, they've got a day job. They like investing. I do hour-long deep dives on podcasts and they're really into that. They would just, their mind would be blown if they heard, oh, professional investors, like, just do all of that on AI now. I think the most surprising thing is how little AI adoption there is. That's probably the biggest surprise. Great.
Starting point is 00:32:12 Let me ask my next question then. What is one thing where you think there has been hesitancy or slow uptake on AI at investment firms, I mean, again, with huge research budgets, that you think would be a material improvement in people's jobs if they would kind of spend a little bit more time focused on shifting it to AI? Yeah. I think if we can get around a hurdle: if you kept querying an AI agent, the AI agent can figure out what you're trying to do, right? The compliance hurdle, which, by the way, is a very real problem. Because if you had,
Starting point is 00:32:50 if, let's say, you had an enterprise account with ChatGPT, and someone in the back end asked ChatGPT, based on all the queries that you've seen, what do you think the next big investment is? ChatGPT probably has a really, really good guess. So you have that problem, right? So it's a very real problem. And if you're a chief compliance officer, you should be freaked out, because that means someone could theoretically be front-running all your trades.
Starting point is 00:33:14 So that's a big part of why adoption is low, not because there is no desire to adopt, but because there's hesitancy to adopt a totally, like, running wild model, right? People kind of want their own internal model. That being said, if you look at how we use Google today, we don't seem to really care, right? If Google really wanted to figure it out, they could. But the way most firms have gone around that is essentially partnering up with a vendor
Starting point is 00:33:48 and have the vendor be so compliant, so trustworthy, that even though they could figure it out, you know they wouldn't. And in my mind, I'm thinking a company like Bloomberg. If Bloomberg really wanted to figure out what trades you're going to put on, they know. Because half these trades happen on Bloomberg anyway. But you trust that they won't. Let me... So if you and I were talking, let's say we're talking 60 years ago, right? Security analysis was not very sophisticated. You could outperform, maybe this is more 100 years ago, but you could outperform in the Ben Graham era by, you would go and you say, oh, isn't that interesting? That company has $100 in cash on their balance sheet and they trade for 50, and they also, like, generate good earnings and pay out a dividend yield of 12%. You could outperform by doing that. 60 years ago, there was no Reg FD.
Starting point is 00:34:38 Insider trading was a lot, a lot more common. You know, you look at the history of how Buffett took over Berkshire. The CEO basically promised him that he was going to tender at one price. He did it a quarter cheaper. And Buffett was so incensed that the CEO would lie to him about a tender that he had advance knowledge of, he took over Berkshire, right? So that'd be 60 years ago. 50 years ago, you could still do very well by looking at a company and saying, oh, this
Starting point is 00:34:59 trades for eight times price to earnings, its peers trade for 15 times price to earnings: I buy, I short. All of that has been done away with; insider trading by Reg FD and stuff, but a lot of the other stuff has been done away with by quantitative models. You know, one thing I tell investors, especially college people who come to me and pitch a stock,
Starting point is 00:35:16 they'll say it trades at 10 times earnings, and you'll be like, that's great, but that's not a thesis. You gave me something that a computer can spit out. The quant models are all over that. I worry that a lot of the edge cases, the thinking-it-through, AI is increasingly better at that. And we talked earlier about AI displacing jobs. I'm not talking about that anymore. I'm more worried about how you generate edge as these AIs get better
Starting point is 00:35:36 and better and can replicate and can think faster and pull through different sources. And, you know, biotech company A announces results. It takes the best scientists I know, you know, several hours to look through the results and really analyze them. AI could do it in a quarter second and be trading that stuff. What I'm driving at is: is there room for humans in three, five, seven years in public finance? Yes, there's room in private finance, in structuring everything; there's handshake deals and all that type of stuff.
Starting point is 00:36:04 But in public finance, is there going to be room for humans? Yeah, absolutely. There will be a ton. I mean, when you think about the hardest funds to get into, they are very often the fundamental funds. I think about the funds that, as an LP, you absolutely want to get into because the Sharpe ratios are just off the charts. They typically are the super big ones, the super big multi-managers. Now, why? Like, why the multi-managers?
Starting point is 00:36:28 Like, why are they able to consistently outperform? And if you look at the returns of some of these multi-managers, it is so uncorrelated, right? Markets are good, they're up 15%. Markets are bad, they're up 20%. And it's like clockwork. And it's every LP's dream, because it's the true promise
Starting point is 00:36:45 of uncorrelated, hedged returns. Everybody calls themselves a hedge fund; not everybody actually provides hedged, uncorrelated returns. So why are they able to do that? I think, at the end of the day, it's really sitting down and asking yourself, what are the true sources of alpha out there? And a true source of alpha needs to be
Starting point is 00:37:04 something that exists systematically, is true, and is something that you can protect for a reasonable period of time; not forever, but a reasonable period of time. Buffett's famous, like, true source of alpha is basically holding horizon. He's willing to buy a company and let the company compound, right? And because he has a horizon so much longer than everybody else, he's able to just hold. And I would argue, you know, if you are a retail investor, you have that same source of alpha too, a source of alpha that most professional investors don't ever have, right? Because their marks happen daily, monthly, weekly, whatever. But a professional investor has sources of alpha that retail investors can never get. And one of the single largest
Starting point is 00:37:51 sources of alpha is risk modeling. You are able to factor out risk by doing all sorts of risk models. Let's say you run principal component analysis, and you can basically figure out the few main factors of risk, and you can hedge them out. And what you're left with is alpha. And if you're able to say, okay, what I'm left with is alpha, then how do I crank up the number of trades? How do I increase trade velocity and just play a statistical game, right? Which, for the most part, is what the multi-managers are doing.
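A minimal sketch of that factor-stripping idea, using PCA on a cross-section of returns. The returns matrix here is random noise purely for illustration; a real shop would use actual returns and a proper factor model:

```python
# Sketch: strip the dominant statistical risk factors from one stock's
# returns via PCA; what's left over is the candidate "residual alpha".
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 50))  # 250 days x 50 stocks (fake data)

# Eigenvectors of the covariance matrix act as statistical risk factors.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues sorted ascending
factor_returns = returns @ eigvecs[:, -5:]       # the 5 largest factors

# Regress one stock on the factors; the residual is the hedged-out piece.
stock = returns[:, 0]
beta, *_ = np.linalg.lstsq(factor_returns, stock, rcond=None)
residual = stock - factor_returns @ beta
print("share of variance explained by factors:", 1 - residual.var() / stock.var())
```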
Starting point is 00:38:23 They're saying, we will do a lot of trades. We're not here for home runs. We're not here for 10-year, 20-year, or 30-year compounding returns. But what we are here for is uncorrelated, hedged, residual alpha that I can lever on a cheap basis and provide uncorrelated, consistent returns for LPs. So can an AI actually create those little sources of hedged alpha? Not really, because that's just not what the foundational models are trained to be able to do. A lot of those sources of alpha are not in a Billions sense where, hey, you know,
Starting point is 00:38:58 you go and crack into someone's warehouse and see there's inventory just, like, flowing out the windows. A lot of those sources of alpha, historically at pod shops, have been just channel checks. Lots of expert calls, which are proprietary, right? You get an expert call, you learn. Now, there's a lot of sourcing, a lot of expertise in: do I trust this expert, am I asking the expert the right questions, everything. Channel checks going into the field; you know, yes, there's the satellite imagery of the parking lot, but you can also go into the store and see, hey, are these stores clean or dirty, whatever it is. How much do you think those types of proprietary data, you know, people getting out into the field, grabbing things, seeing things that maybe AI can't,
Starting point is 00:39:42 how much do you think that increasingly becomes the job, especially of the analysts? Versus, you know, also, if you watch Billions, most analysts are at their computers from 8 a.m. to 7 p.m., clicking away or building models or something. Like, do you think the job shifts more into more qualitative people skills, going to generate proprietary data to put into the AI, or am I missing something, imagining something, creating a story? So I think the alternative data is not a source of alpha. I think there was a point in time where it was, 2017, 2018, 2019; it was absolutely a source of alpha. Today, I don't think it's really a source of alpha. Are you saying alternative data you pay for? Or, because I think I was more talking about alternative data that you generate on your own, right?
Starting point is 00:40:22 Like, that's more proprietary than alternative. Yeah, even the proprietary stuff, like channel checks, like all of those, I think because it's done in such high volumes. So, yes, absolutely, when it was first being done, the satellite data, the credit card stuff, absolutely huge sources of alpha. Today it's just what everybody does. It's like reading a 10-K: it no longer is a source of alpha. Whereas Buffett would famously say, like, back in the day, if you read a 10-K, like, that is a source of alpha, because just knowing what the company does is better than the average. Sometimes I wonder if it's a source of anti-alpha, where you should just, like, close your eyes,
Starting point is 00:40:52 don't look at the 10-K, and, you know, there's a lot of information there that you don't want to know and it'd be better to just not know it. Yeah, but what I think is true today, and what the multi-managers are so good at adapting to, is this fundamental idea of what makes a stock move in the short term, right? I think what makes a stock move in the long term is very well studied. Most people appreciate the fundamentals and multiples and interest rate environment and whatnot. But what makes a stock move in the short term actually is much simpler than that:
Starting point is 00:41:26 buyers and sellers. If there are a lot of buyers and not a lot of sellers, the price of a stock goes up in the short term. And so if you're an analyst, really what you're trying to figure out is what the next guy is trying to do. So an AI can help you get all the knowledge about, hey, do you think this stock is a long? Fundamentally, what are they saying? Is there a discrepancy? But ultimately, what you need to do as an analyst is to basically play the game. You're saying, based on the information I have: who do I think has access to the same amount of information that I do? Who do I think is on a two-day delay from the information that I have? Who do I think is on a five-minute delay? If I were to buy the stock now, who am I selling it to five minutes from now, 30 minutes from now, four days from now, two weeks from now, a month from now? Right. How does the demand
Starting point is 00:42:09 and supply game shape up? The best funds, are they spending more time on that game theory? Hey, simple hypothetical: this company announced awful earnings. The stock is down 30%, but it's generally a good company. If I buy now, this afternoon, the quality long-onlys are going to have reviewed the thing, talked to management, and they're going to... Are they spending more time on that game theory aspect? Who's my buyer in five minutes? Who's my seller in five minutes? Who's my seller two days from now? More on the game theory aspect than anything else? I don't think they're spending more time, but I think all the best funds do this religiously. Is it implicit or explicit in the process? It should be explicit. So if I was at a pod shop
Starting point is 00:42:54 and I went to pitch my portfolio manager, it is that explicit: I would have at the top, like, here's who I plan to sell the stock to? Is that kind of how explicit it is? Yeah, yeah. I mean, because when you sell, you don't know who you're selling to, but it would be like, here's who I think the buyer is going to be, like, here's what I think makes the stock go up. And what makes a stock go up could be fundamental, but rarely is the right answer just fundamentals. It's, you know, based on everything I see; like, I'll give you, like, a good high-level example, right? So let's say, based on your channel checks, you're like, wait, there is a structural shift, because people are afraid of, let's say, potential retaliatory tariffs on this piece of my business. I don't think a lot of people
Starting point is 00:43:38 appreciate this piece of my business. The company is about to go to a conference in a week. I'm sure they will either talk about it or be asked about it. So my catalytic event is seven days from now, when management goes to this conference. That's where my channel check today comes to fruition. We have a seven-day window. Right. I definitely get that. But then when I hear that, you know, just to echo what we said five minutes ago, channel checks are no longer, like, a source of alpha. So my worry would be, hey, our firm believes that, but Thomas's firm believes that, and, you know, firm XYZ and everyone else believes it. So maybe the answer is we buy, because all of them are shorting, because they think the company is going to come out negative on tariffs, but I don't think
Starting point is 00:44:21 they're going to be as negative on tariffs. So when they come out, all of them are looking to cover. So we're selling into their covering. Like, you know, that type of, like, pod monkey knife fight, it just gets really complex. So how deep into the 'he said, he thinks that I'm doing this, so I'm going to do this' are people getting? Pretty deep. I would say pretty deep. I would say that's a big part of the job of trying to figure out what the market is actually going to do. How much of the market is other pods playing with each other, right? Who's sitting at the poker table is the most important thing to figure out. And once you figure out who's sitting at the poker table, you need to figure out who's in the hand.
Starting point is 00:44:57 Let me ask you another question on that. There have been a lot of people recently talking about the pod shops. And, you know, a big piece of the pod shop, you mentioned, hey, they're going to go to this investor day, call, whatever it is, and I think they're going to say this. I mean, a huge piece of the pod shop right now is: I think this company, like, the channel checks are great. All credit card data is through the roof. There is no way they're going to miss the quarter. I'm going to buy, because there's no way they're going to miss the quarter. They're going to have a huge beat. Outlook's going to be great. And then you'll see these companies, and the way I've heard it described very much is, like, the week before earnings,
Starting point is 00:45:30 the stock almost can't do anything but go higher, because every pod shop's buying. And then, even when they have great earnings, sometimes the stock will be down, because, like, all the pod shops are so full on the beat and then they have to sell. So I guess, from a game theory perspective, are you seeing people, like, start to reverse that, or how are people playing that? Because it does seem, I've seen some companies report great quarters, and then everyone's just so full of them, the stock's down 10% the next day, you know? So you are absolutely right that that happens. But what I think a lot of people don't see is that what you just described is an actual risk factor. It's called the crowding factor. And you can model out crowding factor. So if you are
Starting point is 00:46:08 really a well-established pod shop, what you are almost always doing is actually hedging out your crowding factor, because it's such a big vector of risk. So is it fine to be in crowded names? Absolutely. But then what you want to also do is short a bunch of crowded names too. So from a crowding factor perspective, you're neutralized. Now, how good people are at executing, like, neutrality around crowding factor is something else entirely, because it's really hard to short highly crowded stocks or go long, like, highly crowded shorts, because everything fundamentally is telling you to not do it, right? But that's why the long-short market neutral guys are so good at what they do: it's because
Starting point is 00:46:48 they recognize that this is a big risk vector and they take the other bet as much as they take the crowding bet. Yeah. So a lot of people don't realize that crowding is actually a risk vector that is modeled across all the pod shops, and it's pretty well neutralized.
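A minimal sketch of crowding as a modeled, hedgeable exposure. The tickers, weights, and crowding scores are all made up for illustration; a real shop would estimate crowdedness from ownership and positioning data:

```python
# Sketch: measure a book's net exposure to a "crowding" score and size
# an offsetting short in another crowded name to neutralize it.
import numpy as np

tickers  = ["AAA", "BBB", "CCC", "DDD"]
weights  = np.array([0.30, 0.20, -0.10, -0.40])  # +long / -short book weights
crowding = np.array([0.90, 0.80, 0.20, 0.10])    # 0..1 crowdedness score

exposure = weights @ crowding
print("net crowding exposure:", round(exposure, 3))  # > 0: net long crowded names

# Naive neutralization: add a short in another crowded name (BBB here)
# sized so the book's net crowding exposure goes back to roughly zero.
hedge = -exposure / crowding[1]
print(f"hedge: short {abs(hedge):.2f} of {tickers[1]}")
```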
Starting point is 00:47:34 No, you know, I just keep thinking back to the DeepSeek day, when DeepSeek came out and they announced, hey, you know, we did this model. I don't know if people fully, like, there's debates on how much went into it and everything, but it was crazy that Nvidia, the typical AI plays, were not down the most, you know; it was the power plays, Talen, Vistra, all these big power plays, that were down the most. And yes, they were exposed to AI, but what it really was was this hidden crowding factor, where if you had any exposure, you could just be, hey, I'm going to go long these IPPs because I get extra AI exposure for that. So I just always think about that day and then all the beats that we were talking about. I've got a few more questions, but you talk to way more clients than I do, way more people; you incorporate this. So I just want to, you see where I've taken this conversation and everything. Is there anything, when it comes to AI and investing, that we haven't even touched on, or we've kind of glanced over, that you think, you know, listeners, obviously people listen to a webinar because they want to learn more, that you think listeners should be thinking about, learning about, as they think about AI and investing? Yeah. So something I get
Starting point is 00:48:16 asked a lot, especially by, like, senior management at banks and very big hedge funds, is what my view is of AI going forward. And I don't think they care about my view. I think what they care about is, like, let's figure out what all the AI founders are thinking, because the world is probably somewhere in the ballpark of where they all think the world's going to go. Right. So I always find it interesting to not only provide my view, but also ask what they have been hearing from other people in my seat
Starting point is 00:48:45 around where they think the world is going to go. And the short version of it is fascinating, because there isn't close to a consensus on where the world is going to go. There are a group of people who think that the most important piece in AI is really just spending money on the foundational models, and the biggest foundational model wins. It's a winner-take-all model, and one guy is just going to win. I think you see that with, like, the funding rounds, right? You see extreme funding rounds happening. Some of them very, very justified.
Starting point is 00:49:20 Some of them are not quite as justified, but you see a lot of those. I think that's one. You also see a second group of founders, and I think I fall into this camp, where we basically say, look, there are enough foundational models out there that it will probably be competitive for an extended period of time, if not always be competitive. So for these foundational models to collapse into a monopoly seems unlikely. I'm thinking about cloud when I say that, right? Like, there was a period of time where people sort of believed that there could only be one cloud provider.
Starting point is 00:49:57 And today, we know that that's not true. You can't have too many, because the cost of building cloud is insane, but you can use Google or Amazon or Azure or Oracle almost interchangeably today. So I think AI goes into that world. And where the rubber meets the road, where I think the money gets made, is on the application layer: how do you leverage the fact that these foundational models exist to start building applications? Said differently, the question really is, is the future of AI the creation of these telcos, right, the guys who actually create AI, or is the future of AI the software and the
Starting point is 00:50:37 social media apps that get created on the back of the fact that these telcos exist and supply you with the infrastructure to do what you need to do? No question Instagram cannot exist without AT&T and Verizon, right? It just wouldn't work. But also no question that Facebook is a bigger company than the infrastructure providers that have enabled Facebook to be what it is today. So it's two different bets, right? Some people think the value accrues to the Googles of the world, and that's the way it shakes out. And some people think it goes to the applications of the world, the software companies of the world. Natural question off of that. As you mentioned, there are some people who think it's winner-take-all: ChatGPT, their AI gets so good and so far ahead that eventually it's just, hey, ChatGPT is better than
Starting point is 00:51:29 pick your five competitors. And because of that, it gets more data, gets more usage, and then two months later it's significantly ahead, and it just scales, and everyone uses ChatGPT. There's another view where, hey, yes, ChatGPT, but all these little add-ons and apps that you're building on top of it are where the real value is. I could imagine where you said, hey, I've built this integration on top of ChatGPT, and it's so much better at picking stocks than ChatGPT alone or something, right? Yeah. The question I want to ask you: when you see these pod shops, whoever it is, spending
Starting point is 00:52:02 big money building AI integrations, AI apps, that type of stuff, do you think two years from now it's all kind of wasted, because general-purpose ChatGPT will just be so much better that it incorporates everything? Or do you think the way of the future is that every pod shop has this huge research budget and their own little integrated apps they've built on top, running on their proprietary data, that are actually really bespoke and useful? Does that make sense? Yeah, that all makes sense. And I'm very confident in this answer, because I think the people with the best ability to build AI are going to be the people with the most access
Starting point is 00:52:40 to data. So the very big pod shops have access to a lot of data sources, both because they buy a lot of data and because they generate a lot of data in their day-to-day workflow. That will allow them to build internal AI tools that are just superior to what the smaller shops, who don't have those resources, can build. So do I think ChatGPT is going to get ten times better? Absolutely.
Starting point is 00:53:04 It's going to; I mean, the trajectory has been insane. There's no reason to believe anything else. But do I think ChatGPT, by being a better model, can all of a sudden get access to proprietary data? No, I don't think that world exists, right? If you are trying to solve internal problems for yourself, you will always need internal data. You will always need to feed your AI with internal data.
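As one concrete illustration, here is a minimal sketch of what feeding an AI internal data can look like: retrieve a few relevant internal notes and put them in the model's prompt. Everything here is hypothetical; the notes are fabricated, the retrieval is deliberately naive keyword overlap rather than real embedding search, and the final prompt would be handed to whichever model API a firm actually uses.

```python
# Minimal sketch of context injection: a model with the right internal
# notes in its prompt vs. a frontier model with none. Notes and retrieval
# (crude keyword overlap) are purely illustrative.

def retrieve_notes(question: str, notes: list[str], k: int = 2) -> list[str]:
    """Rank internal notes by keyword overlap with the question; top k win."""
    q_words = set(question.lower().split())
    ranked = sorted(notes, key=lambda n: -len(q_words & set(n.lower().split())))
    return ranked[:k]

def build_prompt(question: str, notes: list[str]) -> str:
    context = "\n".join(f"- {n}" for n in retrieve_notes(question, notes))
    return ("Use only the internal research notes below to answer.\n"
            f"Internal notes:\n{context}\n\nQuestion: {question}")

internal_notes = [
    "2025-03 note: ACME Q1 channel checks suggest units tracking ahead of plan.",
    "2025-02 note: ACME pricing pressure easing in Europe per distributor calls.",
    "2024-11 note: WIDGETCO losing shelf space at two major retailers.",
]

prompt = build_prompt("How is ACME tracking into Q1 earnings?", internal_notes)
print(prompt)  # hand this to whatever LLM endpoint you use
```

In this picture the model is almost interchangeable; the proprietary notes are what make the answer worth anything, which is the second-tier-model-plus-context bet Thomas makes next.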
Starting point is 00:53:29 I'm actually even willing to bet that a second-tier foundational model with the right context fed in will solve more problems than the best foundational model with no access to data. Last question. You know, there's always a debate in AI: hey, have the scaling laws hit a wall? Have we hit the peak? Are we slowing down, all that? And sometimes the answer will be yes, and then they'll hit a new vector.
Starting point is 00:53:53 And it'll be like, oh, it's accelerating again. But setting model improvements aside, with AI in finance, right, when you're working with the big banks, the big pod shops, obviously 24 months from now there's going to be more AI than there is now. But do you think usage is accelerating from where we are today? Or, I could also imagine a world where you said, hey, they're already doing a bunch of AI, and 24 months from now, will they be doing more AI? Yes, but it'll be from 50% to 52%, not from 50% to 250%. Does that make sense? It does. So interestingly, I'm referencing a lot of banks when I say this. When I think about a lot of conversations with management at different banks, the big investment banks, I think what they
Starting point is 00:54:43 all collectively have acknowledged is that the real problems they want to solve might not be AI-solvable problems, but they want these problems solved nonetheless. So: to what degree can some of my problems be solved using AI, right? I think there is very serious willingness to adopt AI right now. We're seeing that, right? We're getting bought by all these banks. The seriousness and the willingness to put money down are absolutely there. The question is, are the problems being solved?
Starting point is 00:55:22 And I think where we are today with foundational models is that they solve some problems, but there is a whole litany of hyper-nuanced finance problems that you simply cannot solve. I can't speak for other industries, but finance is nuanced, right? A model is nuanced, an investment memo is nuanced. Why you buy a stock is nuanced.
Starting point is 00:55:43 It's not just, oh, I think the company's going to grow, I think the company is going to be a bigger company tomorrow, therefore I buy the stock. It's not that simple, right, especially in the public markets. There are expectations. There's demand and supply of shares. There's liquidity.
Starting point is 00:55:57 There are risk vectors. There's what happens if a news event comes out: do you double down or do you sell? How do you know you're right or wrong? So there are all these nuances in finance that need to be taken into account, and a lot of the models today have not been trained on these nuances, at least not yet, right? So if you're sitting at the top of a bank and you look at all the problems that surface to you every day, rarely are the problems big, simple blocks of problems. They are always nuanced, right?
Starting point is 00:56:27 So you went to banks there, and I might be showing my bias here, but look, I realize investment banks are not the same as the big commercial banking side, but I would probably guess that investment banks are a little slower and a little behind in incorporating AI versus pod shops and long-onlies, just because pod shops and long-only investment firms are smaller. They're flatter, they're more flexible, they probably adapt a little bit quicker. They're certainly less regulated. So I just want to frame the question slightly differently. You mentioned banks and basically said, look, banks are leaning into AI, and even for problems
Starting point is 00:57:05 that might not be natural fits for AI, they're figuring out ways to attach AI to them. I just want to ask the question again for pod shops, for investors writ large. If we fast forward 24 months, the way I said it, AI usage is going up, no doubt about it. But is it incremental from this point? Or do you think 24 months from now we'll be saying, oh my God, these AI guys don't even touch the trading screens anymore, right?
Starting point is 00:57:30 They just put in all their notes, and then they say to the AI model, I think this company is a buy. And then the AI model says, oh, Andrew's a dummy, he thinks it's a buy; it's a short, and we've got all this data. Or, wow, this is really great, we can combine Andrew's work with Thomas's work. He thinks it's a buy, he thinks it's a buy.
Starting point is 00:57:44 Boom, it's a double buy or something. Like, is it accelerating or is it incremental progress? Yeah. So 12 months ago, I had the same thought you just mentioned, which is that banks are slower than hedge funds, because why would anything else be true? We've noticed that that's not true, but it's not false either. We've noticed that the guys who move fastest are the guys with the most agency. Where there is a single person who's invested in building out the process and building out the product for the firm and saying, let's go, those are the firms that are doing it.
Starting point is 00:58:14 So we've seen that at big investment banks. We've seen that at big pod shops. We've also seen that at very, very small funds. And it's really agency. Because if you buy AI expecting that tomorrow it's going to monumentally change your business model, it's not going to. It could, but it's probably not going to. If you buy AI thinking, oh, I'm just going to get rid of ten of my associates tomorrow because it's going to do the job,
Starting point is 00:58:41 that's just not really how it works. I mean, that's a dream, but it doesn't come to fruition all that often. But if you say, I'm going to believe in the future, I'm going to build today and iterate with my people on how to enhance their experience, how to help them research companies better, cover more companies, react to situations faster, and you iterate through that, chances are you're probably going to be right to some degree. But that is a process, and you have to be committed to the process. And being committed to a process, in an industry where generally we are relatively
Starting point is 00:59:16 short-term in how we think about everything, requires agency. It requires someone at the top to say, we're going to do this. It might take 12 months, it might take 36 months, but we're going to do this. This is like when the internet happened: I'm not going to miss out on this wave and play catch-up later, because that's going to be painful. Actual last question, and then I'll turn it over to you for closing thoughts. It does strike me that a lot of what we talked about through this conversation was that the people who are doing AI best are putting their information, proprietary notes, everything into AI and letting AI run with it. Obviously, if our firm was you and me and we were putting in notes, that's two people. If it was you and me and five other people, that's seven, versus 70. Are you finding the big
Starting point is 01:00:01 firms, and let's assume everybody's very competent at using AI, are big firms of 50 to 100 people using AI better than small firms of two to ten people, just because they have more people putting in data? So there's that famous data-scale advantage they're getting. Is that a thing? Or, I mean, you can tell me one versus five, yes, there's a difference. But what about five versus 50? Is it vanishingly small? Or is it even better, because more data really serves the purpose here? Yeah.
Starting point is 01:00:31 So loosely speaking, there are two types of hedge funds that we serve. One type has a center risk model or a center book, and the other type does not. And the hedge funds that have a center risk model or a center book have very, very sophisticated people sitting on these center models or center portfolios. And what the center portfolio is oftentimes really trying to figure out is which analyst is good under what circumstances, right? Maybe Andrew is really, really good with mid-cap internet companies going into earnings, but does not have a good track record with large-cap internet companies going into conference season, right? And can we both back-test and forward-
Starting point is 01:01:15 test these hypotheses, right? Can we do it without data mining? So if you're a center book, and you know the quirks of all of your analysts and all of their coverage, then you can say, when Andrew takes a mid-cap internet company into earnings, I'm just going to double the stake. And when Andrew takes a large-cap internet company into conference season, I'm just going to halve it, or I'm just going to hedge out all the factors on the back end, right? And there are a lot of these biases that you might not even know about. Maybe you just like stocks with high dividends in low interest-rate environments, and there's a bias, and you don't even realize it, right? So a really good center risk model is going to be able to figure that
Starting point is 01:01:52 out. The second thing the center risk model is trying to figure out is basically performance evaluation, right? Are my guys getting stocks right because they're lucky, or are they getting them right because they're right? Are they getting stocks wrong because they're unlucky, or are they getting them wrong because they're wrong? Is there a way to systematically prove that he or she is a really good analyst, and this just wasn't their day? They took four companies into earnings, all four were wrong, but God damn, they were actually precisely right in their assessment; there was just some big macro event that rocked the book. So a good risk model tries to figure all of that out.
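For a flavor of what that exercise can look like, here is a toy sketch of the bucketing a center book might run: group each analyst's historical calls by, say, cap size and catalyst, and measure the hit rate per bucket. The call records are entirely fabricated, and a real center book conditions on far richer variables and guards against data mining with out-of-sample forward tests.

```python
# Toy center-book back-test: bucket calls by (analyst, cap size, catalyst)
# and compute hit rates. All records are fabricated for illustration.
from collections import defaultdict

calls = [  # (analyst, cap_size, catalyst, call_was_right)
    ("andrew", "mid",   "earnings",   True),
    ("andrew", "mid",   "earnings",   True),
    ("andrew", "mid",   "earnings",   False),
    ("andrew", "large", "conference", False),
    ("andrew", "large", "conference", False),
]

buckets = defaultdict(list)
for analyst, cap, catalyst, hit in calls:
    buckets[(analyst, cap, catalyst)].append(hit)

for key, hits in buckets.items():
    rate = sum(hits) / len(hits)
    # Crude sizing rule standing in for the center book's decision:
    # double the stake where the demonstrated hit rate is strong,
    # halve it or hedge out the factors where it is weak.
    action = "double the stake" if rate > 0.6 else "halve / hedge factors"
    print(key, f"hit rate {rate:.0%} over {len(hits)} calls -> {action}")
```

With five made-up calls the "hit rates" are obviously noise, which is exactly Thomas's point about needing both back-tests and forward tests before sizing on a pattern.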
Starting point is 01:02:32 This might be a conversation for a different day, a different time, but I am curious about that. I gave the example of the crowding factor where you have a company, you know they're going to beat, so you're long it, they beat, and then sometimes you see the stock come down on pristine earnings because it's so crowded. How does a book, in your experience, judge: hey, consensus was five cents, Andrew thought it was going to be 20 cents, it was 20 cents spot on, the outlook was incredible, and the stock was down 10%?
Starting point is 01:03:02 And let's ignore the case where, you know, Trump announced 250% tariffs, so the whole market was down. But he got the positioning wrong. How do they judge positioning versus forecasting? Yeah. So positioning is hard to judge directly, but you can basically judge everything else, and what is left is positioning. It's sort of like PCA, principal component analysis, back in the day. How do you know if an analyst is precisely right or wrong?
Starting point is 01:03:26 Well, the short answer is, you know what the analyst's forecast is, you know what the historicals or the newly updated historicals are, and you can do a side-by-side, right? You can do, and every analyst does this, a forecast-versus-actuals or budget-versus-actuals, right? The thing is, we know each analyst does that, but that analysis is not consumed on a global level, at a fund level. So what risk books want to do is say, hey, for every single analyst's and every single PM's forecast versus actuals, can we consolidate it all so that we know who has the widest variance and who has the narrowest?
Starting point is 01:04:06 And we can adjust it by sector. There are some sectors where the variance for revenue is basically nothing because it's all contracted revenue, and there are some sectors where the variance for revenue can be relatively huge, right, because they're consumer companies chasing trends and fads or whatnot. So how do we adjust for those spreads and figure out whether, you know, Andrew is a really good analyst who gets the numbers right but the positioning wrong, or Andrew's numbers are just totally off and, even though he got the stock right, either he got lucky or he got the positioning right, or some combination of both? So you can actually strip those factors out.
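As a rough illustration of that consolidation, here is a sketch that scores each forecast miss against its sector's typical revenue variance, so a contracted-revenue utility and a trend-chasing consumer name can sit on one comparable scale. The analysts, numbers, and "typical variance" figures are all hypothetical.

```python
# Sketch of fund-level forecast-vs-actuals consolidation: each revenue miss
# is scaled by an assumed "typical variance" for its sector, so low-variance
# (contracted revenue) and high-variance (consumer) names are comparable.
sector_typical_miss = {"utilities": 0.01, "consumer": 0.08}  # assumed levels

forecasts = [  # (analyst, sector, forecast_revenue, actual_revenue)
    ("andrew", "utilities", 100.0, 100.5),
    ("andrew", "consumer",  200.0, 230.0),
    ("thomas", "consumer",  150.0, 153.0),
]

for analyst, sector, fcst, actual in forecasts:
    raw_miss = abs(actual - fcst) / actual               # percentage miss
    adjusted = raw_miss / sector_typical_miss[sector]    # in sector-typical units
    print(f"{analyst:7s} {sector:9s} raw miss {raw_miss:6.2%} "
          f"-> {adjusted:.2f}x sector-typical")
```

Stripping out what the sector-adjusted miss explains is the "judge everything else" step; whatever P&L is left over after forecasting accuracy is accounted for is, loosely, positioning.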
Starting point is 01:04:46 Thomas, we've gone over time. I really enjoyed it, and we went to a lot of different areas that I didn't think we were going to, like getting schooled on risk. But Thomas Li, founder of Daloopa, this has been really insightful. And again, you've just got a really wide view of how a bunch of different firms are using AI. So I really appreciate you taking the time, and I'm looking forward to catching up soon. Of course, Andrew. Yeah, thanks for doing this. This was fun.
Starting point is 01:05:08 All right, this is Andrew coming in to wrap it up. Thank you for joining and listening to the full podcast. I really enjoyed the conversation about how AI is transforming finance. Hopefully it gave you some fresh ideas for improving your workflow. Look, I'll just wrap it up with Daloopa. If you're interested in learning more about them, seeing how they can help you focus on analysis,
Starting point is 01:05:28 save you time, and enable smarter, faster decisions incorporating AI, you can sign up for a free account by visiting daloopa.com slash YAVP. That's D-A-L-O-O-P-A dot com slash Y-A-V-P. You can get set up and get a free trial there. Thanks so much. A quick disclaimer: nothing on this podcast should be considered investment advice. Guests or the host may have positions in any of the stocks mentioned during this podcast. Please do your own work and consult a financial advisor. Thanks.
