I am Charles Schwartz Show - The Surprising Future of AI with Fathom’s Founder - Richard White

Episode Date: October 15, 2025

In this candid and fast-moving episode, Charles sits down with Richard White—founder and CEO of Fathom AI, the top-rated AI note-taking platform on G2 and HubSpot—to unpack the truth behind the AI gold rush. Richard shares why only 5% of internal AI initiatives actually succeed, and what separates innovation from illusion in today’s hype-driven market. Together, they dig deep into the harsh realities of building and buying AI software—from skyrocketing failure rates and short model lifecycles to the “LLM treadmill” that forces companies to constantly rebuild just to keep up. Richard breaks down why most big corporations are struggling to adapt, how startups can outmaneuver them with speed and focus, and why the future of work will favor the few who learn to “think with AI.” The conversation stretches beyond business—exploring the coming social upheaval, the rise of one-person billion-dollar companies, and the ethical crossroads of automation, employment, and human creativity. Both Charles and Richard keep it honest, funny, and forward-looking as they challenge listeners to rethink what it means to lead, learn, and stay relevant in the AI age. This isn’t just another talk about artificial intelligence—it’s a survival guide for entrepreneurs, employees, and visionaries navigating the most disruptive technological shift since fire itself.

KEY TAKEAWAYS:
-Why the future belongs to those who learn to think with AI, not compete against it
-How automation is paving the way for one-person billion-dollar companies
-The ethical and human implications of an AI-driven economy—and how to stay grounded amid disruption
-The mindset shifts needed to stay relevant, creative, and adaptable in the AI age

Head over to provenpodcast.com to download your exclusive companion guide, designed to guide you step-by-step in implementing the strategies revealed in this episode.

KEY POINTS:
01:18 – The 5% reality check: Richard opens up about why 95% of internal AI initiatives fail—while Charles unpacks what separates companies that truly innovate from those just chasing the hype.
04:42 – From bootstrapping to breakthrough: Richard shares the journey of building Fathom AI into a top-rated platform—while Charles highlights the timeless lessons in execution, focus, and product-market fit.
08:15 – The LLM treadmill explained: Richard reveals how rapid model updates force companies to constantly rebuild—while Charles reflects on why adaptability is now the ultimate competitive edge.
13:28 – The illusion of enterprise AI: Richard breaks down why corporate AI projects struggle to deliver results—while Charles explores how small, agile teams can move faster and smarter.
20:04 – Thinking with AI, not competing against it: Richard discusses how humans and AI can complement each other—while Charles reframes the idea of “AI replacement” into one of “AI augmentation.”
26:51 – The one-person billion-dollar company: Richard predicts a future where automation and leverage allow individuals to achieve massive scale—while Charles examines what this means for the workforce and leadership.
34:22 – Ethics, disruption, and the human future: Richard warns about the social impact of AI’s rapid acceleration—while Charles challenges listeners to shape technology with purpose, empathy, and accountability.
41:10 – Staying relevant in the age of acceleration: Richard closes by sharing how curiosity and lifelong learning keep innovators ahead—while Charles reminds us that the real advantage isn’t AI itself—it’s how you use it.

Transcript
Starting point is 00:00:00 Welcome to the Proven Podcast, where it doesn't matter what you think, only what you can prove. Richard proved it. In a time where everyone's trying to be successful in AI and they're rushing around, he did it five years ago. He's the CEO and founder of Fathom. He's also a really great guy, until he starts telling you the unforgiving truth of what's actually going to happen with AI in the next 24 months. It's terrifying. Anyway, I hope you enjoy it. The show starts now.
Starting point is 00:00:24 Hey, everybody, welcome back. I am excited to have you on the show, Richard. Thank you so much for joining us. Hey, thanks for having me. So for the four or five people who don't know who you are, can you explain what you've done and what your success is to date? Sure. I'm the founder and CEO over here at Fathom AI.
Starting point is 00:00:37 We are the number one AI note taker on G2 and HubSpot. No one likes taking notes on their meetings. And so we have basically an AI that will join your meeting, record it, transcribe it, write the notes, write the actions, fill in your CRM, you know, slack it to you, email it to you, you name it so that you can just focus on the conversations and not doing a bunch of kind of data entry work. So I think most people are familiar with your product.
Starting point is 00:01:02 I think the stuff we're going to talk about now is stuff that people aren't familiar with: the reality of AI. A lot of people think AI means artificial intelligence. It also means always incorrect. There's also a side of this that you believe about what it means for you as well, and some of the harsh realities of what AI does. Can you kind of share what some of those harsh realities are? Yeah, I mean, I think one of the things, you know, I've been doing software for 20 years.
Starting point is 00:01:25 And AI has completely upended how we think about building software. It made it much more of like an R&D process now, whereas before it was more of like a manufacturing process. It's also made the failure rates much higher, right? Like, you know, it sometimes takes a long time to ship an AI feature because it'll fail three times before you get something to work. And so that exists both when we're building features for our product,
Starting point is 00:01:48 and it also exists when we're trying to buy AI products to basically, you know, move our business forward. We actually have a goal at Fathom of getting to 100 million in revenue while staying below 150 employees, and so we have this big emphasis on efficiency and automation. And it's interesting, because I just gave this talk where I expected to give a talk about how, you know, we've transformed everything with AI, and we actually have like a 60% failure rate on AI initiatives. So I think there are a lot of really interesting gotchas when you're trying to build or deploy AI solutions. So what you're trying to tell me is that AI isn't the holy grail.
Starting point is 00:02:23 All of a sudden, I'm not going to start floating and curing cancer because I was bored on the toilet one day. That's not how things actually work. Damn you, man. You've ruined it all for us forever. I'm so sorry. So as you go into these, you're talking about failure. What do you mean by failures? I mean, 60%? I wouldn't get on a plane that had a 60% failure rate. I mean, I would get married, because that's about a 60% failure ratio. But okay, we won't get on a plane that has a 60% failure rate. What do you mean by a 60% failure rate in AI? I mean, so actually there was an MIT study that just came out and said the average company right now actually has a 95% failure rate on AI initiatives. What I mean for us is basically: did it produce the outcome we wanted? And I think that's actually the hardest part. In AI land, it's easy to get it to produce something.
Starting point is 00:03:06 It's easy to get the AI to spit out something. Right. The hard part is getting it to spit out the right thing. And what is the right thing? So for example, in our business, right, you know, you could build an AI that gives you an accurate summary of a meeting that's six pages long. But accurate may not be enough. Like, that's too verbose.
Starting point is 00:03:22 I don't want six pages. You know, it's a 10-minute meeting. I don't want six pages. So there's this whole new nuance of quality that I think is hard for us to judge. We're not used to judging it, right? We're used to software being binary. It works or it doesn't. I click the button.
Starting point is 00:03:36 The thing moves on the screen, right? And now we're in this world where I click the button and it spits out some words and like, are those the right words or not, right? It makes a judgment call. Is that the right judgment call or not? And so I think one of the things that's really changing everything is we have to rethink how we evaluate tools because we have to actually get in there. And it's almost like evaluating a hire, right?
Starting point is 00:03:57 It's more like a hire, because you're basically buying thinking, not features, now. And so it's kind of upended how we think about purchasing products. So I can't even get ChatGPT not to put dashes in the damn responses that it gives me, and I can't tell you how much cursing I've done at that thing. You're talking about something significantly higher. How do we get it to produce content that we actually want, or go from that 10-page, you know, dissertation that's so verbose, into what we want? How do we do that at the home level for your everyday consumer?
Starting point is 00:04:28 And then also, you know, as the CEO of a very successful company, because every single meeting I'm in, your damn software is there before anyone else joins. Thanks for that. I'm a little bit angry at you about that. How do we do that at both the personal level and the professional level? Yeah. I mean, actually, that same study said that the success rate for things like ChatGPT is actually 40%, which is still not great, but way higher than 5%. And I actually think, yeah, it's actually
Starting point is 00:04:51 easier for individuals to use, because individuals are basically taking ownership of that output, right? Like, it's like, oh, it's writing this email for me. And yeah, I hate that it always puts the em dashes in there too, but I can at least remove them. Where it becomes problematic is when we're using these things at scale and no one's been properly equipped to QA the thing. Right. You know, we have a whole team at Fathom whose whole job, all day, is to play what I call kind of an AI version of Jenga, where all day we are experimenting with, you know,
Starting point is 00:05:25 basically models and use cases, right? Is this model good at this use case? Can this model find action items from a transcript? And I call it Jenga because if you push on a block and it gives any resistance, you give up. You find another block that moves smoothly, right? Because there's this weird kind of problem you've got now: we've got so many models with differing kind of performance parameters,
Starting point is 00:05:49 cost parameters, and so many different things you want to do. So it's a really big problem where you almost need a full-time team, whether you're building it or evaluating it, to, you know, evaluate multiple vendors in parallel and try, okay, we're going to hire three vendors, we're going to put each of them on a 90-day pilot, which, by the way, we make every vendor give us a 90-day pilot for AI, and we're going to have a whole team that QAs it. And when we don't do that, it almost never works.
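A rough sketch, in Python, of the "AI Jenga" evaluation loop described above: push one concrete use case (finding action items in a transcript) against several candidate models and keep only the ones that pass cheap checks. The model names, the call_model hook, and the checks are hypothetical placeholders, not Fathom's actual tooling.

```python
import json

# Hypothetical candidates; swap in whatever models you are actually evaluating.
CANDIDATE_MODELS = ["model-a", "model-b", "self-hosted-c"]

PROMPT = (
    "Extract the action items from this meeting transcript. "
    "Return a JSON list of short strings.\n\nTranscript:\n{transcript}"
)

def call_model(model: str, prompt: str) -> str:
    """Placeholder: route this to whatever provider SDK you actually use."""
    raise NotImplementedError

def passes_checks(raw_output: str, max_items: int = 10) -> bool:
    """Check that it spit out the *right* thing, not just *something*."""
    try:
        items = json.loads(raw_output)
    except json.JSONDecodeError:
        return False                      # wrong format: the block "resists"
    if not isinstance(items, list) or not items or len(items) > max_items:
        return False                      # empty, or accurate but too verbose
    return all(isinstance(i, str) and len(i) < 200 for i in items)

def find_working_models(transcript: str) -> list[str]:
    """Try each block; if it gives any resistance, move on to the next."""
    working = []
    for model in CANDIDATE_MODELS:
        try:
            output = call_model(model, PROMPT.format(transcript=transcript))
        except Exception:
            continue
        if passes_checks(output):
            working.append(model)
    return working
```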
Starting point is 00:06:27 So when new GPTs or new models come out, there are so many times where I personally have spent so much time training my old model and trying to teach it, saying, hey, do this, do that. I have very specific calls for it to do that. When a new one comes out, do you guys over at Fathom have the same puckery motion that we have on our side, where we're like, oh, God, everything's about to blow up again? Is that something you guys are facing as well? Yeah, on two dimensions. Well, I mean, one, we get excited, because usually the new models unlock something for us, right? For example, GPT-5, for all the lackluster reaction it got from the market, did actually solve a problem for us. Hallucination rates are way down, and that actually opens up a whole new class of problems that we were trying to solve before but couldn't.
Starting point is 00:06:58 But it causes other problems too, in that none of these models are forward compatible, right? You get something working on GPT-4; it's not necessarily going to work the same on GPT-5. And even more problematically, and I think this is something that everyone in the industry has started to realize, the EOL cycles on these LLMs are now measured in months. So Anthropic puts out Sonnet 3.5, and six months later they put out Sonnet 3.7.
Starting point is 00:07:22 Sonnet 3.7 is more powerful, but now there's a limited amount of GPU compute in the world, right? And so they're shifting all of their compute to this new model. So now you end up on what we call the LLM treadmill, where if you don't upgrade your models, all of a sudden you find out you're getting all these errors because there's no compute to service them. And so now you're spending as much time upgrading your models
Starting point is 00:07:44 as you are basically building new stuff from scratch. So the maintenance load on these tools and processes is way higher than anything you've ever seen in software.
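A minimal sketch of one way teams cope with that treadmill: every feature resolves its model through one small registry, so an end-of-lifed model becomes a one-line config change followed by a rerun of saved regression prompts. The provider and model names and the run hook are assumptions for illustration, not a real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBinding:
    provider: str            # hypothetical label, e.g. "provider-x"
    model: str               # hypothetical model id
    max_output_tokens: int = 1024

# The single place you edit when a model hits end-of-life.
REGISTRY: dict[str, ModelBinding] = {
    "meeting_summary": ModelBinding("provider-x", "model-v1"),
    "action_items":    ModelBinding("provider-y", "model-small"),
}

def run(binding: ModelBinding, prompt: str) -> str:
    """Placeholder: dispatch to the SDK for binding.provider."""
    raise NotImplementedError

def regression_check(task: str, cases: list[tuple[str, Callable[[str], bool]]]) -> bool:
    """After any model swap, replay saved prompts with their pass/fail checks."""
    binding = REGISTRY[task]
    return all(check(run(binding, prompt)) for prompt, check in cases)
```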
Starting point is 00:08:00 It's one of those things, and I'm going to date myself here, but the original Warcraft 2, because I'm that old. You probably played it back in the day. Before I would go and attack the orcs, or I would attack the knights or whatever it was, I would save. My military formation, if this doesn't go well and I go attack and I die, I could just go back again. I wish that existed
Starting point is 00:08:17 inside ChatGPT, or any GPT, where we're like, okay, let's try and do this. Quit giving me dashes, or I want you to work it this way. And then for some reason AI becomes always incorrect and it just goes off on a tangent. I'm like, excuse me, sir, can you just go back 30 seconds? That would be nice.
Starting point is 00:08:34 And then it sounds like what you're saying is, hey, I built this great thing, can I pick it up and drop it in over here as well? And it seems like both of those things are absent in the market, even at the highest levels, which is where you are. Yeah, that's right. I mean, I think a lot of the advice I give to companies is, like, if you can, try to solve a problem by building something in-house, but know that that in-house solution has like six to nine months of shelf life,
Starting point is 00:08:56 and know you're going to throw it away and probably buy from a vendor at some point, right? But by building it in-house, you have a better sense of, like, cool, we at least know we got it to do the one small critical thing that you had to do, right? A lot of vendors throw a lot of things at you, right? We've got 10 different features, and two out of ten of them work, sort of thing. But yeah, this is kind of this whole new, again, this whole new paradigm, and it's very much an R&D lab. It's very much not an assembly line, right?
Starting point is 00:09:21 It's like, it's not as predictable as what we had before this in SaaS. I wish this was something new in tech, because I remember, I'm old enough to remember when, you know, we had the dot-com boom and everything was going onto the internet, right? Oh, this is going to be amazing. And then, you know, Pets.com is going to be amazing. And this is going to be amazing. And obviously, it all blew up.
Starting point is 00:09:40 So not just on a personal level but a professional level, companies you thought would be Fortune 500 companies, that were going to be there forever, would be gone two, three weeks later. Are you seeing that with established companies, where they're sitting there going, oh, shit, the light at the end of the tunnel is not a light, it's a train? We've got to adjust, because what works today just won't even exist. And how short is that time? I mean, I think the exciting thing as an entrepreneur right now is that a lot of the big companies are really struggling to release good AI features, because it breaks their paradigm of how to do software, right? They're used to this assembly line where it's like,
Starting point is 00:10:14 how do we build software? We say we want to build this feature, we spec it out, we build it for three months, and then, oh, you know, we click the button and it moves, you know, 10 pixels to the left. We're done. Right. And it requires a whole new way of doing QA that most of these companies are not good at doing, which is why I think if you look at most of the big new AI features from one of these companies, they're really mediocre, right? Because they just don't have the muscle in the company for judging what quality is, right? They don't know how to judge basically subjective quality, and they're still looking at it from their kind of objective lens: did it do the thing, did it spit out words? Yes? Great, passed QA, ship it, sort of thing. So I actually
Starting point is 00:10:55 think there's a challenge if you're buying software, because a lot of the bigger incumbents actually have inferior products to the new startups. New startups have their own problems, right, of, like, you know, instability and whatnot. But if you're an entrepreneur, I actually think it's a fantastic time, because the incumbents are completely out of their depth on how to build software in this new era.
Starting point is 00:11:28 So I think it's exciting, actually, as much as it is also terrifying. Yeah, I think the best example I've heard of this is: imagine you're on a train that's going as fast as possible, and you're on one car of the train, and you're holding on as much as you can. But all of a sudden, it's going to unhook, and that car is going to be gone. So you better jump, or, you know, good luck. I wish you nothing but the best, because that's just the reality that we're going to be in. So as someone who's kind of the tip of the spear, who has become very successful with what you're doing and has created a company that, as much as I do hate your thing showing up to all the meetings, is something that everyone uses. Um, where do you see AI? Because everyone's like, oh, my God, it's the greatest thing since fire. And there's other people who are like, oh, my God, it is fire. It's going to burn down my house.
Starting point is 00:11:54 There seems to be people who are very polar opposite. Either you're completely madly in love with AI or, oh, my God, it's the devil incarnated. And they have this paradigm shift. Where do you see it going? Since you are, again, you're in there, you're with the CEOs. You know what's going on better than even someone, the regular person would be. How does this look in five years? Yeah.
Starting point is 00:12:12 I mean, what I would say is, this is to me the greatest technological shift of my lifetime. Bigger than mobile, bigger than social. I'd even say bigger than the internet itself, right? It's real. I agree there. For all the failure rates and stuff like that, the denominator is huge, right?
Starting point is 00:12:29 Everyone's trying stuff, because this is the closest thing I've seen to magic. One of the challenges is, like, yeah, you know, I have board meetings and we're kind of talking about, like, what's our five-year plan, what's our 10-year plan? I don't know. If you get to AGI in five years, does anything really matter, or can you really plan beyond AGI-type things? Smarter people than I... I think the real open question right now in the market, you know, my kind of core friend group, the same folks that I leaned on five years ago, before gen AI got good, to make me feel confident building a business betting on gen AI getting really good,
Starting point is 00:13:05 It's kind of like, we started this company in 2020, and in 2021 we launched. We put AI in the name of the product, and all my investors were like, what are you doing? Everyone hates AI. And it's easy to forget, that was only four years ago. AI was being marketed in 2015, 2016,
Starting point is 00:13:19 and it was terrible, right? It was basically fraudulent kind of stuff. But now we're at this point where everyone's like, oh, my God, AGI is going to happen in two years.
Starting point is 00:13:30 And, you know, there's some people who still believe that we're going to keep accelerating. I think that group of people that I'm kind of surrounded with is split about 50-50 between: we're going to reach a plateau of what you can do with the current tech,
Starting point is 00:13:40 and: we're going to find, kind of Moore's Law style, the next, you know, step up. It's clear that we're getting diminishing returns from the current generation of transformer-based AI, like GPT-5. I think everyone kind of sees all the latest models are now more optimized for efficiency. They're not, like, wildly smarter than the previous model, but they're cheaper to run, which is important. Right.
Starting point is 00:14:01 For the companies running them, yes, for their margins and all that sort of stuff. I've kind of taken the approach that we have to assume that things are going to slow down, because if we assume they're going to continue to accelerate, it's almost impossible to plan for anyways. And again, I think GPT-5 was a good data point of, like,
Starting point is 00:14:22 okay, it seems like we're trending towards a plateau, and we're waiting for whatever the next thing is after transformer models alone. But it is the most volatile market I can ever imagine, right? Like, you know, this company has objectively been a rocket ship by the last 10 years' standards, and we're now just doing pretty good by modern standards, where you see companies go from zero to 100 million, to a billion in revenue, in two years, right?
Starting point is 00:14:50 And then go back down to zero two years later, right? Like, look at the Gators and stuff like that. So it is an insanely volatile market, full of tons of opportunity, but how long-lived those opportunities are, I think, is to be seen. Yeah, and to your point of what this means for the human race, I will give a little bit of pushback. I don't think it's just better than the internet. I don't think it's just better than the Industrial Revolution. I think the only thing comparable to this is fire.
Starting point is 00:15:14 As far as the human race is concerned, that's the scale of what this can do. Now, fire is good and bad. It can burn down your entire village, yes, but it also makes good food. We could, you know, do these things. As far as I'm concerned, from what I've seen with it, AI is as good as fire. Now, what that means going forward, hmm, good luck. I wish you nothing but the best, because it's going to be pretty, pretty interesting. You mentioned there's companies that go from zero to a billion-dollar valuation
Starting point is 00:15:40 and then two weeks later, gone. Do you think we're going to see, in our lifetime, the first $100 million company run with a single employee? Do you think that's going to happen? Yeah, I mean, I think Sam Altman talks about the first billion-dollar company with a single person, right? I think that's highly possible. And then you can extrapolate all the concerns you have about, like, societal upheaval and wealth inequality from that pretty easily, right? But yeah, no, I think that's perfectly reasonable to expect.
Starting point is 00:16:00 Yeah, and I think, and this is something that people need to understand, this is no longer a luxury. We don't get to sit back and say, hey, I wonder if this is going to happen, I wonder if this is going to affect me. This is going to create wealth distribution issues on the equivalent of basically India, when you look at how people are distributed, especially here in the United States. You're going to see that. For those of you playing at home who might not understand everything that Richard's talking about and what he's doing: you do not have the luxury to sit on the sideline. So either you're going to be panhandling, or you're going to embrace AI, because this is just what it is. This is the electricity. So if someone's walking into that and they're like, oh, my God, this is terrifying. You're telling me that, hey, I need to embrace it, but then you're telling me the company's going to disappear in five months. When you're an entrepreneur, you're like, oh, God, I have to go into this. I know I have to go into this, but I could get punched in the face, or I most likely will.
Starting point is 00:16:57 How do you advise entrepreneurs? How do you advise business owners? Hey, are there some proven tactics that work? Let's do these, just do these for now, make sure that if you do get knocked on your butt, you can get back up somewhat gently and, you know, go from there. What are the things you advise? I mean, honestly, I think there's never been a better time to start something that's really narrowly focused, right? You hear a lot about the big platforms that are, you know, again, going from zero, like a Jasper or whoever, going from zero to 100 million and right back down. But the real beauty of this stuff is you can really tailor it to specific use cases, specific problems, and you can build faster and cheaper and better than
Starting point is 00:17:35 you ever have before, right? You don't have to have, like I have, a team of six engineers anymore to build something useful. You can just be a pretty good, you know, hobbyist prompt engineer, plus some Magic Patterns and some prototyping tools, and you can build something of value, right? And so, you know, I remember 10, 15 years ago, everyone was kind of doing the, was it the Lean Startup stuff, where they're selling stuff before they really even built it, and, you know, that got taken to an extreme. But now you literally can really narrow down and find a very specific niche and you can build a really
Starting point is 00:18:09 good, and I know this is kind of a pejorative in a lot of markets, but, like, a lifestyle business out of: great, I've got the best new software that solves this one burning problem for car washes, right? Like, yes. And I actually think that's where a lot of the gold is. I actually think a lot of the gold is at the application layer. A lot of the investment and noise and all that stuff is all kind of at the foundational layer; it's all about who's building the big infrastructure stuff. But that's a, you know, billionaire's
Starting point is 00:18:47 game; you need a lot of money up front to do that. I think there's a lot of money to be made at the application layer, where you're sitting on top of these tools, and if you can get good at combining them. And that's where I think that person that's going to be, you know, the single-person company doing 100 million in revenue, I don't think they're going to be a foundational model. I don't think they're going to be something like Fathom. I think they're going to be something that sits above something like Fathom, right, or above these foundational models, right? It just finds a really good niche that just happens to catch fire. So I think that's for the entrepreneurs. I think for the employees, there needs to be this conversation of what's happening, because you're seeing, in their orgs, you're seeing people where entire divisions are getting eradicated, people with master's degrees from Ivy League schools are trying to get jobs at McDonald's right now,
Starting point is 00:19:20 and they're terrified. And I think they rightfully should be. This is, welcome to this new world. When I was coming up, being an entrepreneur was not sexy. They did not like that idea. Being into comic books, not sexy. Being a dork, not sexy. And then all of a sudden, now we're like, it's our time, our time has come. Same thing with entrepreneurs. The employees that I know are terrified. They fundamentally go back to their old model, which is, I'm going to go get another degree. I'm like, that's not going to help you.
Starting point is 00:19:49 That's over. Those times are gone. So what do you say to those, you know, mid-level managers, senior directors, VPs? What do you say to those guys who have said, I've built, and I have busted my butt to fit into this model, this process, this American dream? And as George Carlin said it really well, he goes, it's called the American dream
Starting point is 00:20:10 because you have to be asleep to believe it. If you no longer believe this model and you are no longer built for this and the thing you were built for does not exist anymore. How do you adapt? Yeah. I mean, that is the question,
Starting point is 00:20:23 I remember, you know, I was a big proponent of, I was telling everyone who would listen about UBI 10 years ago, and I was worried about truck drivers back then, right? I was like, truck driver is the number one profession in like 30 or 40 states, right? And it's going to, you know, it's going to go away soon.
Starting point is 00:20:33 And it's kind of funny. It's really hard to predict these things. I think everyone was assured that that was going to be the first thing, the first kind of industry to get disrupted. And yet here we are, 2025,
Starting point is 00:20:49 and actually, no, it's artists, it's copywriters. It's pretty soon going to be lawyers, middle-level management. It's all knowledge work. Therapists. Yeah. Yeah, exactly. And so, you know, what would I say? You know, honestly, there are no easy answers.
Starting point is 00:21:05 I would tell you, like, your fear is well founded, first of all, right? And unfortunately, I'd love to sit here and tell you that you've got nothing to worry about. I think you do, right? You know, I think what you're seeing when you look at what's happening: college enrollment is down, trade school enrollment is up. And I think the people that are kind of solving this from first principles, the folks coming out of high school, are looking at that and saying, gosh, you know, there's never been a better time to be in the trades. Now, am I going to tell some VP, like, hey, you know, you should go
Starting point is 00:21:35 back to junior college and become a, you know, a plumber? I think that's a tough sell, too. I think there's a middle ground where, if you really become a student of this stuff, I still think there's a lot of opportunity over the next couple of years, again, at the application layer, where you could be the person that helps companies get from the 5% success rate that we're seeing to a 25% success rate, and there will be a lot of opportunities there. I think it really depends a lot on where you are in your career. I mean, you know, I've been building software for 20 years, and I've always thought that, like, you know, I can always fall back:
Starting point is 00:22:05 I know how to organize people to build great software. I'm not sure that will even be a skill set in five years. No. That's scary. You know, I'm very much planning, like, if I don't have kind of an exit or retirement plan over the next five, ten years, we need to be thinking about what value we can provide beyond that. But I do think, very tangibly, I think trades will be coming back in a big way.
Starting point is 00:22:26 I think, you know, there's a lot of opportunity for people to learn how to become experts. You can be an expert in replacing your own job with AI. That gives you a job over the next couple of years. So, you know, we've talked about entrepreneurs. We've talked about employees. We've talked about where we think this is going and how this is the new fire. What are some of the conversations that none of us are having, or rather,
Starting point is 00:22:46 that none of us other than you are having, in these boardrooms with these people who are, you know, very much the tip of the spear? What are the things that you guys haven't made as public yet, if you can: this is, hey, this is what we're talking about, and these are the things that keep us up at night. Because we know what keeps the entrepreneurs up at night, we know what keeps the employees up at night; here's us as, you know, founders, this is what keeps us up at night as well. I mean, I think
Starting point is 00:23:09 the boardroom conversations are more about, like, the pace of AI change, and kind of, like, you know, how quickly... It used to be you'd build a software company and usually have at least 10 years before someone really disrupted you, and now it's like five years, pretty soon it's going to be two years, where there's so much technological change that it just undoes you. You know, valuations for SaaS businesses, you look at them today versus five years ago. Oh, my God. Yeah.
Starting point is 00:23:34 Tanked, right? So in the boardroom, I think there's a lot of conversation about that. Again, about, like, AGI and what would that mean? Could that just, you know,
Starting point is 00:23:43 render a lot of businesses irrelevant? I think the conversation we should be having is the one we're kind of tiptoeing around, which is, how do we as a society handle this? There's a really good short book called Manna, M-A-N-N-A, by this guy Marshall Brain. Do you remember HowStuffWorks.com? Awesome website.
Starting point is 00:24:02 The guy's actually from my hometown of Raleigh, North Carolina. He wrote this like 25-page book, and it was kind of a tale of two cities. One city, actually set in the U.S., that was like a dystopian AI future, where the robots are in the ears of the humans telling them exactly what to do: walk 10 steps this way, turn over the burger, that sort of thing. And another city where it's like, oh no, a lot of the gains from AI are more shared across society. It's a little hyperbolic, right? But I think it's a really interesting thought experiment, because this is coming. And, you know, I don't know that we'll get
Starting point is 00:24:34 as dystopian as one example or as utopian as the other. But I think everyone's busy fighting, trying to put the genie back in the bottle. The genie's not going back in the bottle. We need to talk about where we want to put guardrails
Starting point is 00:24:48 and push the genie in one way or another, right? Like, and so, I think the other thing we're talking about is also kind of like AI regulation. The other thing is like, you know, I think a lot of folks in tech land voted for Trump.
Starting point is 00:25:02 And one of the reason they voted for Trump is because he wouldn't regulate AI. And a lot of folks see that basically there's an arms race between us and China around AI. And if there's this belief, right or wrong, that if China gets to AGI first, if you believe in Western-style democracy, bad things happen, right? And so I think that's another, there's like, you know, kind of so many different levels to this upheaval. But those are the three I would think about. So where do you think things are going? because people do have this dystopian fear that all of a sudden it's going to be Terminator, right? You're going to have the day, it cuts over, and then the robots are going to take us over and turn us into cottage cheese.
Starting point is 00:25:40 Where do you think, and what's more realistic for that? I think all the paths are still open. You know, I don't, you know, I think... It's not the answer I wanted to hear, but okay. Yeah. I just peed myself a little bit there. You know, I think we would be foolish... I think there's a lot of folks in AI land that are concerned about AI safety.
Starting point is 00:26:03 Like, a lot of the kind of open revolt that they had at OpenAI a year ago was about this fear that, like, this thing was founded on the premise of AI safety, and it seems to have gotten off that mission, sort of thing. So a lot of people way smarter than me seem to be very concerned with that. And so, you know, I don't want to be alarmist, but I think we should all be, like, alive to the danger. This feels like the critical moment in human civilization, and everyone needs to educate themselves a little bit and do what they can and make sure we're nudging ourselves in the right direction. So for all of you who have just caught the podcast: we've decided that we're all going to die, we're all going to be out of jobs, it's completely over, and it's a horrible time to be alive. Okay, so let's try to give people a little bit more hope about what's going on. So there's a lot of conversation about what AI can do and what AI has done, and not only just the basic stuff with business, but what's been done medically. Like, hey, we've made discoveries and we've pushed the envelope with that.
Starting point is 00:27:02 And, hey, we've looked at a problem that couldn't have been solved by humans for a hundred years, and it's done in 27 seconds. So there are some amazing things with AI. Can you share some of your favorite ones that you've seen, kind of in person, where you're like, oh, my God, I can't believe it just did that or figured that out? I mean, I think you just touched on the big one, just a lot of the stuff you're seeing happening in health care, right? Where things that used to be really expensive, right, like analyzing scans, the early detection... Like, our health care system, my father was
Starting point is 00:27:31 in emergency medicine for 30 years. He was the first one to tell you, we are really reactionary in health care, for a number of reasons, but first and foremost, it's very expensive to be basically proactive in health care, because, you know, someone's got to analyze these scans, you've got to look at these blood markers, you've got to do all these things, both on the kind of preventative maintenance, preventive medicine stuff, as well as research. And this is going to drive down the cost of all that stuff dramatically, to the point where, you know, you don't have to be rich to get kind of life-extending care well ahead of some acute medical crisis.
Starting point is 00:28:07 Um, I think that's going to be the thing we're going to look back at and be like, wow, we're going to cure, hopefully cure or greatly reduce the harm of, a lot of diseases in a very short period of time. But it's going to be kind of the Wild West in the meantime, because our medical regulations haven't really caught up with that, right? We don't really know how to handle it. But I think that's probably one area you can look at and point to and be like, there's going to be a lot of good done there. I think, you know, for all the disruption that we're going to see with self-driving cars, that's also going to be a place we're going to point to, right? Like, you know, the number one cause of death of people, the number one use of
Starting point is 00:28:43 urban land. Like, you think about housing affordability. Think about what happens when you don't have to dedicate, you know, 40% of your city to parking. Think about what happens when people aren't getting in car accidents left, right, and center. So I think, on the other side of this crucible, there are a lot of things that we'll look forward to. In the same way, you look at the Industrial Revolution and stuff like that: there were a lot of painful things in that transition, a lot of terrible things happened, but humanity was better for that transition in the end, right? But it will be painful. I don't think we have to go as far back as the Industrial Revolution either.
Starting point is 00:29:16 Even with the IT boom, when technology kicked in, people were like, oh, my God, this is going to wipe out jobs. Yeah, it did. When tech rolled out, when we had the dot-com boom and everything took off with the internet, it wiped out whole swaths of jobs. But the job that you're at right now did not exist before that, the jobs that I did, the careers I had. So, yes, it will wipe out a ton of shit. It'll also create a ton. And I think, to your medical point, there's a difference between our DNA and everything else, and there's a difference in how we measure those things.
Starting point is 00:29:44 Some things don't change, because even if you die of cancer, your DNA, that's your DNA. But the other stuff, we can analyze and say, hey, you know what? We say that everyone should take these medicines. However, based off your stuff, your individualized goodies, you should be taking this. I was sitting with the CEO of a company that does that. We broke it down. He's like, yeah, let's run your blood work. And within a day, he's like, okay, this is what you need to stop eating right now.
Starting point is 00:30:08 I was like, yeah, but that's supposed to be healthy. He's like, yeah, for everyone but you. Don't eat that. He regrettably did not say that I can have ice cream every day, so I'm still mad at him. I was like, why can't I have ice cream every day? What the hell? So there is that. So we get it. We understand, I think, for every single level that we're at, be it employee, entrepreneur, founder, there is this optimism and there's also a little bit of fear. So as we get through that, I think having the tools and the techniques right now... Like, what are the things, what are the tools that you're using, other than, obviously, everyone needs to use your
Starting point is 00:30:41 software? I get it. Uh, please stop using it on my meetings, you bastards. Um, so everyone needs to use their software. What are some of the tools that you use every day, and how do you use them differently than everyone else? I mean, you know, I think everyone thinks that in Silicon Valley we have a whole different set of tools than everyone else. We actually don't. Yeah, we're all using ChatGPT. We're using things like Magic Patterns, another one I love, where it's an easy way to, you can basically, it's an AI for generating prototypes, like if you want to mock up a user interface for something.
Starting point is 00:31:15 So we build a lot of prototypes with tools like that. You know, at the high end, I think the secret is, to actually build good products with AI, you actually end up using multiple models. Like any given feature in Fathom, whether it's generating meeting summaries or finding action items or, you know, answering questions based on transcripts, there's a pipeline, and we're using four different models from different providers in that pipeline. We use, you know, some from Gemini, some from Anthropic, we use some self-hosted ones.
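As a rough illustration of that multi-model pipeline idea: each step of a meeting pipeline is bound to whichever model tested best for it, and the steps can come from different providers. The provider labels, model names, prompts, and the complete hook below are placeholders, not Fathom's actual stack.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    provider: str      # assumed labels only, e.g. "gemini", "anthropic", "self-hosted"
    model: str
    prompt_template: str

PIPELINE = [
    Step("summary",      "gemini",      "gemini-model", "Summarize this meeting in under one page:\n{transcript}"),
    Step("action_items", "anthropic",   "claude-model", "List the action items as short bullets:\n{transcript}"),
    Step("crm_fields",   "self-hosted", "local-model",  "Extract deal stage and next step as JSON:\n{transcript}"),
]

def complete(provider: str, model: str, prompt: str) -> str:
    """Placeholder: route to the right SDK per provider."""
    raise NotImplementedError

def process_meeting(transcript: str) -> dict[str, str]:
    """Run every step; each one may hit a different provider and model."""
    return {
        step.name: complete(step.provider, step.model,
                            step.prompt_template.format(transcript=transcript))
        for step in PIPELINE
    }
```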
Starting point is 00:31:44 So at the high end, when you're actually building really sophisticated stuff and trying to build, you know, the highest-quality AI, right, and take it to market, it's a whole other thing. But individually, frankly, there's actually so much word-of-mouth adoption of these tools, right? It's why all the tools go from zero to a hundred million so fast, because they're so good, that there aren't a lot of, like, secret tools that people are using, right? It is a lot of plain ChatGPT and, you know, Make, et cetera, et cetera. Yeah, I think the other thing that's really important is when you pick a new tool, you have to understand that what you used to do, how you used to operate, will also have to change. And the simplest example I can give of this is we gave very specific PowerPoint presentations that looked a very specific way, with a very specific level of polish. That took a ton of time.
Starting point is 00:32:37 We used a tool called, and again, I don't do sponsorships or affiliations, I refuse to do it, so this is without that: there's a tool called Gamma. And my team got a hold of it, and I was like, okay, this looks completely different. They're like, yeah, but we created 300 slides in a week versus a month and a half. And I was like, okay, I guess our slides now look different.
Starting point is 00:32:55 So having that adaptability was vitally important. What are some of the ones that you've used where you're like, hey, okay, yes, I used to do it like this, and it doesn't work anymore? That was actually the example I was going to give, right? Which is exactly that: oh, I want my slides to
Starting point is 00:33:10 look like this. Gamma's great at getting slides out. Are they going to be exactly what I had before? No. No. That's the thing. It's like ChatGPT and Google search. Is it exactly what I got out of search results? No, it's actually better, but you have to be flexible and, like, rethink: what do I actually need out of this tool?
Starting point is 00:33:27 Right. Yeah. Gamma would be the same example I'd give, right? Like, honestly, I do kind of waste more time now on there, generating fun AI images with it. I do too. I'm glad you brought that up, because I didn't want to be the first shameful one to say that. I spend way too long in there just messing with the images, because it's fun.
Starting point is 00:33:46 I'm like, wee. Yeah, I'm an image-and-two-words-on-the-slide kind of guy. And our branding, uh, actually we just rebranded and we put astronauts in it. Probably the reason we put astronauts in it is because I've had so much fun: in every deck we've had for the last nine months, I've got astronauts fencing on the moon, astronauts fighting monsters, astronauts, you know, doing math with their helmets on. Like, I love it, right? Yeah. Yeah.
Starting point is 00:34:09 Fun is not to be discounted in the workplace. It's worth doing. It's still got to be fun out there; get that rocking and rolling. I'm glad that you stepped up and said that you too are a dork like me, so I appreciate that you stepped into that world for me. So when people are sitting there and they're looking at this, one of the things that they're concerned with is, you know, if I go to Google and I type in, what's the best food in my city, I'm going to get thousands of answers. With ChatGPT, I'm going to get one. Right.
Starting point is 00:34:38 People are a little afraid of that. Like, okay, we're now getting it so I don't have the option to think on my own. I'm now being told and I've now had the data so synthesized down to this one thing, is that something you're concerned with as well? Because if I go to the library and there's one book on history, I know I'm missing a lot. Yeah, I mean, there's a big concern about like, you know, we've already had this kind of bifurcation I feel like of what reality or truth is in America in a certain degree, right? What do you mean?
Starting point is 00:35:06 I'm not going to get into that. But, well, yeah, it is interesting. Like, for as much as you need it to get things right, there are certain corner cases where it's really, really bad, right? You know, I think my girlfriend the other day was looking up someplace that would, like, sew something for her, right? And it gave her three answers, and all of them were completely made up. I mean, that one's at least easy to spot, because you can easily verify, like, oh, that's not a real place. But it is a little scary, because we are outsourcing judgment. That's why we like it, because we're outsourcing judgment, right? Because who wants to go through a thousand restaurant recommendations, right?
Starting point is 00:35:44 I just want three; let me pick from three. But yeah, we're outsourcing judgment to this AI. And that's why, again, I'm grateful at least that there are reasonable competitors. And it does seem to be that there isn't as much moat in building foundational models as we thought. Now, there's a ton of moat in that, from a consumer brand perspective, ChatGPT has 98% of the market. But I would encourage people to, like, get a second source, you know, whether it's Gemini, whether it's Claude, like, you know, when you're skeptical, go get a second source, Grok, you name it, right? Like, I think all the smart people generally are diversifying; they don't rely on just one to answer the question, right, for that very reason.
Starting point is 00:36:28 I also think if you are trapped in one ecosystem, it's by your own choice, because no one traps you at this point. And to your point with, you know, the girlfriend asking for a place to sew something, I'm like, yeah, okay, Schmocko, now go check Yelp and compare your options. You're going to get better answers. So having that cross-reference is important. It's one of the things that I've coded into mine, which is one of the things I love about GPT so much. I'm like, okay, if you give me an answer like this, always do this after. And outside of dashes, it seems to resolve it. I think everyone I know will just celebrate so much when the damn dashes and emojis are no longer included. Stop it. No one writes like that. That doesn't sound like a human. What is wrong with you?
Starting point is 00:37:01 Stop it. No one writes like that. That doesn't sound like a human. What is wrong with you? So if anyone, by the way, on a side note who's listening to this knows how to get rid of the dashes permanently, please send me a message. I will pay you for it.
Starting point is 00:37:14 I actually don't even know how to get rid of the em dashes. I think I read something that they're aware of this. They're like, we're not sure how this got in there. It feels like it's the AI's fingerprint, sort of thing. I don't know. It really is.
Starting point is 00:37:30 And it's funny, because I will sit there and I will tell it over and over and over and over and over, and now I'm just like, un-dash this. Because I just can't teach it. So when people are like, oh, AI is so intelligent, it can learn, it's like, no, it can't even get rid of dashes right now. So just rein it in here, sweetie.
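Since the hosts ask: one blunt workaround is to stop fighting the model and just post-process its output. A tiny sketch; it only rewrites the text after the fact and teaches the model nothing.

```python
import re

def undash(text: str) -> str:
    """Replace em/en dashes used as asides with a comma; purely cosmetic."""
    return re.sub(r"\s*[—–]\s*", ", ", text)

print(undash("It's simple — fast, reliable – and cheap."))
# It's simple, fast, reliable, and cheap.
```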
Starting point is 00:37:47 So for those of you who are sitting there: okay, we've covered tools, we've covered adaptability. As those are going through, let's talk about what's next for you, not in five years, but the immediate next 90 days, because again, you're kind of the tip of the spear with what you're doing over at Fathom. What are the next 90 days for you,
Starting point is 00:38:03 having conversations with your staff? Because you have to lead differently, because now we're in an AI age. How do you lead differently? How do you show up differently in that environment? How do you build 90-day plans? Because anything beyond that, you're, yeah, come on, we don't know. I mean, you know, we've been fortunate in some ways that this kind of has come back to our strengths. Even from the beginning of the company, I've always been like, we only build 90-day plans. I actually think that in a lot of companies, planning is this, like, art of self-deception and false precision, right? Where it's like, if you're doing technology, even before AI, you really can't know exactly
Starting point is 00:38:35 where you're going to be in a year. And so it's important to have hypotheses about the future, right? We believe the future will take this and not this, right? You know, but then we kind of react, we're more reactionary on a local level. You know, I mentioned our goal earlier
Starting point is 00:38:52 if we want to get to 100 million revenue and have less than 150 employees, that's way easier to achieve when you start from 10 employees than you start from 500 employees, right? Right. And we're also a fully remote business. And so we're kind of pushing the envelope on two dimensions. Like, how do you use AI to
Starting point is 00:39:10 basically streamline communication in a 100-person org that doesn't ever see each other in person more than once a year? But I'll tell you, you know, right now it's still, I think, really exciting times. I mean, for our business, the thing we've been really excited about is not just writing notes for meetings. That's never been our goal. Our goal is: what happens when we can get all of your meetings, all of your team's meetings, all of your company's meetings into one data repository? Because it's a really big data set. It's really hard to move. You know, historically it's never been captured, certainly not structured. But if you get all that into one place, we're finding that the modern LLMs can actually do really interesting
Starting point is 00:39:48 things. Like, we did an example with a prototype the other day where we said, hey, you know, Fathom, tell us, what's the history of transcription engines at Fathom? And it went back through every all-hands, every engineering meeting for four years, and it wrote a six-page article about everything we've ever done. You think about that for knowledge management, right? Yeah. Also seeing where your loopholes are and where your vulnerabilities are. Say, hey, you know, you've listened to four years of my conversations. I don't remember what I had for dinner last night, let alone anything else. So being able to sit there and analyze: okay, where are there holes in our thinking? What have we missed? That's mission critical. That's something that, because again, I love picking on Fathom because it shows up and it annoys me all the time. It says, I want permission. I'm like, bugger off. But the ability to do that and then query everything down the road, that data set is invaluable.
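A very simplified sketch of that "query everything" idea: store every meeting as a dated record, pull the chunks relevant to a question, and hand only those to a model to write the answer. The crude keyword scoring and the synthesize hook are stand-ins; a production system would use embeddings, a vector index, and a real LLM call.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Meeting:
    held_on: date
    title: str
    transcript: str

def relevant_chunks(meetings: list[Meeting], question: str, k: int = 20) -> list[str]:
    """Crude keyword scoring over paragraph chunks of every stored meeting."""
    terms = {w.lower() for w in question.split() if len(w) > 3}
    scored = []
    for m in meetings:
        for chunk in m.transcript.split("\n\n"):
            score = sum(chunk.lower().count(t) for t in terms)
            if score:
                scored.append((score, f"[{m.held_on} {m.title}] {chunk}"))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]

def synthesize(question: str, context: list[str]) -> str:
    """Placeholder: send the question plus the retrieved chunks to an LLM."""
    raise NotImplementedError
```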
Starting point is 00:40:24 it shows up and it annoys me all the time. I say, I want permission. I'm like, bugger off. But the ability to do that and then query everything down the road, that data set is invaluable. Right. Once you get to a point where it's like, you know, everyone hates meetings, but we love having great conversations, right? And I think what we're moving towards a world where you can have meetings
Starting point is 00:40:42 and just kind of speak things into existence. We could talk about it, and when we get done with the meeting, it's done: the SOW is written, the email's drafted, the Gamma or PowerPoint deck is already queued up, sort of thing, right? And we get to a world where we get this really interesting dissemination of knowledge across
Starting point is 00:40:58 the org, in like a fun way. One of the things we're experimenting with is, you know, everyone hates sitting in all these meetings where, like, I didn't need to hear most of this. How do we sort of build everyone a customized podcast that listens to every meeting adjacent to your function and gives you kind of a digest of what's happened across the org today? There are just so many fun things you can do now that you literally couldn't do even six months ago with the LLMs we had then. So I still think, you know, I still wake up every day feeling pretty optimistic. Yeah, some days I look out my window and
Starting point is 00:41:28 I feel less optimistic, but, like, I feel like we'll get there. Humans always solve things at the absolute last possible minute, but we usually pass the test. So... Churchill said it really well: the Americans always do the right thing after they've done everything else.
Starting point is 00:41:43 Exactly. So that's kind of where we are on this. And I'm like, oh God, here we go, here we go. All right, just survive and hold your breath long enough, and we'll get through that one. How are you dealing with, because a lot of, and this is getting away from the AI a bit, you've created a very successful brand,
Starting point is 00:41:56 a very successful company, and it's all remote. A lot of founders, a lot of owners of companies have problems with that. Be it, you know, how do I keep my team motivated? How do I keep them honest? How do I keep them unified? How do I build this cohesive culture? So how have you survived and thrived in that environment? I think, you know, so one of the reasons why I have this goal around 100 million with less than 150 employees is I've had a lot of very successful friends that have gone IPO, gotten to really big companies,
Starting point is 00:42:18 with less 150 employees is I've had a lot of very successful friends that go IPO, get to really big companies. and all of them say, gosh, when I tell them we're like 80, 90 people, they're like, oh, I miss that. That was so much fun. And I always ask them, when did it stop being fun? And they're like, well, you know, the answers vary, 100, 150, 200, but it's all in that range. And I hypothesized from talking to them, like, there's some point at which you switch from a high trust environment to a low trust environment. And, you know, I picked 150 for our goal because that's like the Dunbar number, which is like this theoretical limit. of how many real friends you can have. And so I kind of think when you get above that number, it's impossible for everyone to be friends in your work, and you're almost inherently going to be a low trust environment. And so I think it's interesting.
Starting point is 00:43:06 I see all the same stuff where it's like, I let my employees work from home, and they're not really working that hard, and da-da-da. Oh, that's because you have a low-trust environment. And I don't exactly know what creates high trust versus low trust. I mean, I think it's a cultural thing, right? I think it's a lot about how we lead and how we communicate and how we motivate folks.
Starting point is 00:43:29 But I do know you should just be aware of what environment you have. And you're right, if you have a low-trust environment with your employees, one, maybe you should get curious about how that happened. And two, yeah, maybe you do need to get people back in the office, because if you can't trust that they're going to put in their work, then you need different structures to evaluate them. But I think we've been very fortunate in that we have an amazing team that loves the work they do, and that are each given enough autonomy and trust.
Starting point is 00:43:56 I think high-trust environments happen because when we hire people, I tell our team, tell our execs, you should trust by default. If you didn't want to trust them by default, you shouldn't have hired them. So trust by default. Give people room to run.
Starting point is 00:44:10 You shouldn't be prescriptive that the deck needs to look exactly like this. Is it 80% of what you thought it was, but 100% of what it needed to be? Yes. Right. And I think that's an important factor: 80% of what you thought it was, 100% of what you
Starting point is 00:44:27 needed it to be. Right. And I think when hiring people, one of the best pieces of advice I ever heard was: would you trust this person to feed your children? In other words, if you got in an accident and you couldn't provide for your family, would you trust that these people could do it for you? If you can't say yes to that, then you have failed in the hiring process. So I guess my question is, as you've built this high-trust environment, which takes time and personalities and very specific things, how do you go about getting rid of someone who does not fit into that environment? Our goal is usually 90 days. Like, you usually know at 45, 60 days, and then, you know, just out of an abundance of caution,
Starting point is 00:45:11 I think you can go as long as 90 days. You really can't go longer than that. But that's our goal, right? I mean, I think it's generally pretty quick. The nice thing is, once you have a high-trust organism, the organism will reject any organs that don't seem to fit in with it. As long as you've got a good way to have listening posts that are not just you, and that's what gets hard, I think, as you get bigger.
Starting point is 00:45:37 It's like, how do people trust that they can tell me, hey, this new executive you brought in is not our DNA, sort of thing? But the organism knows, if you can find a way to observe it. It's interesting that you do 45 days. I'm much faster than that. Yeah, we're very quick. I mean, my grandmother said it really well.
Starting point is 00:46:00 When you're dating someone, you will know within three weeks. And if you don't know, you know. And she was just bulletproof with that. And I miss her greatly. She's no longer with us. But when it comes to hiring someone, normally within the first 48 hours, and we don't pull the trigger that quickly.
Starting point is 00:46:16 But within the first 48 hours, you've got enough of an ick. You've got enough of, okay, I don't know if I want a second date. This might not have been thought through. So I love that you have a big heart and you have high empathy. So it must be tough for you to move on from people. Well, what I'd say, actually, is that number used to be lower. But then every time we looked at it, we said, any time we find out it's not a fit in the first week, that is a real indictment of our hiring process.
Starting point is 00:46:43 A thousand percent. And so I think now we're generally getting to, okay, we think our hiring process is pretty good, which means no one should be failing inside of three or four weeks, right? We shouldn't be seeing that. It shouldn't be anything that crazy. But you can't test for everything in the hiring process, right? That's where I think, okay, even with the best hiring process, those issues will show up a month in. That's when it's, oh, they were on their best behavior in the hiring process and we got unlucky with our references and stuff like that. Right.
Starting point is 00:47:11 We normally give people tests. We're like, hey, I need you to do this, I need you to do that. We kind of go through that process. Like, hey, do these things. And we still have people actually do a test of what they'll need to do. And so that helps us out with what we're doing. So as we go through this, and as things are changing as an organization, and for you, as you've had a level of success that you never thought you were going to have, doing something you never thought you were going to do, what's next?
Starting point is 00:47:38 What's the next big thing that you're like, hey, I really want to accomplish this? You know, I think one of my superpowers as an entrepreneur is, like, I have these built-in blinders, sort of thing, right? I get so passionate about what I'm working on. I actually think one of my superpowers is just getting passionate about things. And I like to hire passionate people, because passionate people get passionate about anything. You can get passionate about plumbing, going back to our transition-your-career conversation. You know, I think if you told me, hey, Rich, go
Starting point is 00:48:09 be a plumber, I'd get so excited about fittings and the right parts and stuff like that. And I think right now there's just so much. It's the most volatile time to build, and it's also the most fun time to build. I do, on a personal level, get really passionate about what I see happening in public discourse and what I'll hesitate to call politics. I met with another entrepreneur yesterday who told me he's running for city council. And I think he expected me to be disappointed, or maybe kind of confused by that. I was like, that's amazing. Not enough people of high character and good judgment go into politics, because they judge it to be EV negative. And it is
Starting point is 00:48:55 EV negative. That's not why you do it, right? You do it because you've gotten so much from society that you feel like you should give back. And, you know, I think there's a lot of stuff that I would love to do in that sphere in the future, because I think our country could use some help. I think it could use some high-judgment people that are not out for themselves. A thousand percent. It's interesting, because it's a similar conversation I had over the weekend. We were talking about, hey, we've all been very blessed, we've all been very successful. Maybe it's time to give back and maybe course-correct some of the things that have been going on, not just for this generation, but for many, many administrations.
Starting point is 00:49:36 We're going back double digits. You know, it's like, oh my gosh, we've got it. We have to pivot this, and it's time to have these people take over and do something different. So, other than you running for president in the next 27 minutes, if someone wants to track you down and they want to learn more about you and they want to connect, because I'm just super grateful that you shared this stuff, what's the best way? How do they get a hold of you? How do they get a hold of Fathom? What's the best way?
Starting point is 00:49:56 Yeah, check out Fathom. Fathom.com. It's free to use. Please give it a shot. And then you can find me on the only social media that I use, which is LinkedIn. So find me on the stodgiest of the social medias, LinkedIn. That's where you're at. I love it. I got you.
Starting point is 00:50:10 I really appreciate you coming on. Thank you so very much. Charles, this is awesome. Thanks for having me. Absolutely. All right, guys. That wraps up our episode with Richard.
I want to thank him for coming on and sharing some insights on where things are going and the unforgiving truth of what's next with AI. It has two very specific paths, and it's our ability to dictate where that goes. All right, guys. See you in the next one.
