We Study Billionaires - The Investor’s Podcast Network - TECH006: Open-Source AI That Protects Your Privacy w/ Mark Suman (Tech Podcast)
Episode Date: October 29, 2025

Mark Suman, co-founder of Maple AI and OpenSecret, shares his insights on how to build cutting-edge AI without sacrificing user privacy. From secure enclaves and attestation to real-world use cases in law and finance, Mark outlines the technical and ethical foundations of private AI, and why efficient inference, not just open models, is the next big frontier.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:01:57 - How Mark's Time at Apple Shaped His Vision for Secure, User-First AI
00:06:06 - Why Verifiable AI Matters for Protecting User Data
00:07:38 - The Privacy Threats of Centralized AI Models
00:15:26 - What Maple AI Does That Other AI Tools Don't—End-to-End Encrypted, Verifiable Privacy
00:17:30 - The Threat Models Maple Addresses and How Enclaves + Attestation Work
00:19:51 - Why Inference Speed and Efficiency—Not Open Weights—Are the New AI Battleground
00:24:05 - Where Decentralized AI Fits Into Today's Landscape
00:25:12 - A Step-by-Step Guide to Getting Started with Maple
00:29:13 - How Users Change Behavior When They Trust the AI System
00:32:42 - The Risks and Critiques of TEEs—and How Maple Answers Them
00:37:35 - Which Professions Benefit Most from Private AI
00:45:27 - Mark's Vision for Verifiable, Private AI Over the Next Decade

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
X Account: Mark Suman.
Website: Maple AI.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts.

SPONSORS
Support our free podcast by supporting our sponsors: Simple Mining, Unchained, HardBlock, Kubera, Vanta, Shopify, reMarkable, Onramp, Public.com, Abundant Mines, Horizon.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Transcript
You're listening to TIP.
Hey everyone, welcome to this Wednesday's release of Infinite Tech.
Just like Bitcoin separated money from the state,
decentralized inference is now separating AI from big tech.
It's a quiet revolution shifting control of intelligence itself
from the centralized data centers to individuals and small developers
who can run powerful models privately, securely, and anywhere in the world.
Today I'm joined by Mark Suman, founder of Maple AI,
to unpack how this is being made possible through trusted
execution environments, secure hardware that protects both data and computation. It's a glimpse into
the foundation of a truly open AI ecosystem. And so without further delay, let's jump right into
the interview. You're listening to Infinite Tech by the Investors Podcast Network, hosted by Preston Pysh.
We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of
abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond,
empowering you to harness the future today. And now, here's your host, Preston Pysh.
Hey everyone, welcome to the show. I'm here with Mark Suman, and I'm really excited to have this
conversation, sir, because this is such an important topic, like crazy important. And I think
it's only getting started, but I think everybody's going to come to the realization how important
this topic is in the coming five to 10 years. So welcome to the show. I'm excited to have you here
and really excited to get into this. Thank you. Yeah, I'm excited to be on here. I've listened to
your show quite a bit. So it's cool to be on here and chatting with you. So I'm honored, sir. I'm
honored. Let's start here because I'm fascinated by your background. You worked at Apple for many years
as a software engineer working on privacy, machine learning, and computer vision.
On the privacy front, I think this is something that is super relevant to where we're going
to go with open source, decentralized AI, which is what you're building here with Maple AI.
But what did you see while you were there at Apple that encouraged you or gave you the
motivation to go out and start what you're doing right now?
Yeah, sure.
So privacy has been part of my career from the beginning.
I started off doing online backup software for people back in like the early, I don't know, the 2000s, right?
The aughts.
And it was all about how do we save your computer into this new cloud thing that everybody's talking about.
But we wanted to offer people a private way to do it because like you could backup all your photos to someone's computer.
And that person who runs a computer can see everything.
So we would provide people with this private key that they could use on their computer and encrypt everything before they sent it to the cloud.
That's kind of where I got my start.
And so privacy was always kind of part of who I was.
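That early client-side model, encrypt locally with a key only the user holds, then upload only ciphertext, can be sketched as follows. This is a toy illustration only: the stream cipher below is built from SHA-256 just to keep the example self-contained, whereas a real backup system would use a vetted AEAD cipher such as AES-GCM or ChaCha20-Poly1305.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. For illustration only;
    # real systems should use an audited cipher, not hand-rolled crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; calling it again with the same key/nonce decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The client encrypts locally; only ciphertext ever reaches the backup server,
# so whoever runs that server sees nothing useful.
key = hashlib.sha256(b"user-held private key, never uploaded").digest()
nonce = b"backup-2025-10-29"
plaintext = b"family photos and tax documents"
ciphertext = encrypt(key, nonce, plaintext)
assert encrypt(key, nonce, ciphertext) == plaintext  # round-trip recovers the data
```

The point of the design is that the key never leaves the user's machine, so the cloud operator only ever stores opaque bytes.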
Fast forward to when I joined Apple.
And on day one, my new manager sits down with me and says, I want you to build this thing
that we're going to use in the retail stores.
But we have to do it in a way that's totally private because Apple cares about privacy, right?
It's one of the core things.
What I can say, like, it truly is one of the core tenets of Apple, seeing it on the inside.
So from like probably the third week of my project, I was engaged with a privacy lawyer.
and they were kind of part of the journey throughout the whole thing.
And it's like, okay, how do we build this thing?
Normal companies would just capture someone's face and capture their identity
and look at their banking transactions and all these things, right?
Normal companies would do that.
Apple doesn't do it that way, right?
We have to separate all this stuff.
We have to find ways to do it that are totally privacy preserving.
So it made things difficult.
We had to innovate and invent new things that nobody was doing.
We had to invent totally new tools for tagging and annotating AI training data
and machine learning training data in ways that were totally privacy preserving.
So it's some really cool stuff.
And I will just say like it's truly part of who they are.
So when I look at all this AI race and you see who's emerging as the dominant players,
you got Google, you got OpenAI, you got xAI, which is really coming from behind.
And then you have Apple.
Like it was almost like an ongoing joke amongst friends with the Apple intelligence
and how slow it's been and how they're just really, they don't seem to be playing in this space.
You look at all the GPUs that these companies that I just named are standing up and buying and all the
hardware infrastructure that they're building out.
And Apple just kind of seems to be standing on the sidelines.
Do you think that this is because of their focus on privacy and why they haven't been able to play
in that space?
Or is it just the bad leadership?
What is this?
Like, how do we understand what's going on over there?
Yeah, I'll definitely put out the plug there that I am not speaking on behalf of Apple.
I'm not representative of Apple, right?
But from my own personal view and things that I can talk about that have been publicly
mentioned, there's definitely a privacy angle to it because everything moves slower.
Apple wants to do it in the way that's most private.
You know, they announced their Apple private cloud compute, what, two years ago now.
And so they're using secure enclaves.
They're trying to do it in a way that's verifiable.
They don't open source their code.
And so you're trusting third party auditors who have maybe seen images of the server code.
So there is still a layer of trust there.
But they're trying to do it in a way from what I can tell that is responsible.
I don't want to use the word responsible because in the AI space,
responsible tends to mean censorship.
But they are trying to do it in a way that cares about the user.
So I think that's an element.
And then also Apple is just a giant company, right?
They have over 100,000 employees.
And so you're going to have different organizations.
you're going to have the typical turf wars that go on in any large organization where someone
wants to build out their headcount and maybe there's five competing products being built
internally and one gets funding and the other one doesn't.
So there's probably a lot of that going on too.
I'm not going to dismiss that aspect of it.
So I think it's both of those at play here.
Yeah.
Yeah.
Help educate us just on, well, let's start here.
So when you're just looking at this from the viewer's standpoint of open source, decentralized,
whatever terminology, help us understand what you think the best terminology for this is,
preserving your data, everything that you put into these AI models, is it possible?
I'm assuming the answer is yes.
But give us your 60 second overview of how you see the world as it comes to AI moving forward
and how people can preserve and protect their data that they're feeding into these models.
Sure.
The word that I would use is verifiable, which coming from the Bitcoin space,
it's always don't trust verify. And I apply that exact same thing to AI. And verifiable can mean
different things, right? It's not prescribing a specific technology. It's prescribing an ideology of
being able to understand, inspect, look at what's going on with your data. So verifiable could mean
open source code. It could mean running the LLM locally on your computer. If you want to take
advantage of really powerful cloud compute, then running in the cloud, but use something like
secure enclaves and
trusted execution environments that can give you mathematical proof, you know, that the server
matches the open source code that's on GitHub, that kind of thing.
So really, it's being able to inspect.
It's being able to verify everything that you're running, whether it's the LLM, whether it's
your data in your AI memory that you're hosting.
You want to be able to look at everything so that nothing is kind of hidden in there that
you don't know about.
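The "don't trust, verify" idea applies even at its simplest level: checking that an artifact you downloaded, say open model weights, matches the digest its maintainers published. A minimal sketch (the weights bytes and the digest source here are hypothetical placeholders):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    # Digest of the artifact exactly as downloaded.
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, published_digest: str) -> bool:
    # Recompute the digest locally and compare it against the value
    # published alongside the open-source release; constant-time compare.
    return hmac.compare_digest(sha256_hex(data), published_digest)

# Hypothetical example: in practice the published digest would be copied
# from the project's release notes, not computed locally like this.
weights = b"...downloaded model weights..."
published = sha256_hex(weights)
ok = verify_artifact(weights, published)            # matches: True
tampered = verify_artifact(b"altered weights", published)  # mismatch: False
```

The same pattern generalizes from a single file hash up to verifying that a whole server build matches open-source code, which is what attestation does at the hardware level.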
Help us understand the threat here.
Like how bad could this get five, 10 years from now?
is this something that people should be very concerned with or just moderately concerned with?
Because I can tell you, I have an open AI account.
I'm feeding stuff.
I find myself feeding stuff into this that three or four years ago, I'm like, ah, no,
I wouldn't be using it that much.
But I find myself using these models all day long, asking it questions.
I can fully understand how it just knows everything about me at this point based on the things
that I fed it.
And I'm not proud of that.
It's a thing of convenience that I find myself and I suspect many people that are listening
to this are also participating in.
And I'm concerned that you're going to get to a point where you get so addicted to using
these types of models that it's really, really hard to wean yourself off of it.
So what is this threat?
What does this look like?
And is there a point in the future where everybody comes to a realization of like how
terribly dangerous this really is?
Or do we just keep going down this path where we just keep feeding it more and more?
Yeah. I mean, I don't fault anybody for using these tools. Convenience is an amazing thing, right? It's why technology exists. Technology comes around to make things more convenient for people. It adds value into their life. And so they grasp onto it. And then there's always tradeoffs. And so I have an OpenAI account. I've got a Grok account. I use them. And I use them in a way that I'm trying to minimize my exposure, my privacy exposure, right? I also obviously build Maple with my co-founder. And so I use that for different
purposes. But the threat that I see, you talk about like five to 10 years down the road,
the difficult thing, as I heard it described recently chatting with a friend, is that you kind
of give away your thought process to a proprietary AI service, right? It takes that. And there's
really no getting it back, right? They have it now forever and they can make as many copies as they
want to. And then they can choose to put that into their model. They can choose to manipulate it
if they want to. They can do whatever they want to with that data. And you're just not getting it back.
So, five, 10 years down the road, if you look at what's unique about you as a human, it's
really the way that you think.
Your face is unique for sure, but you could probably find a pretty good doppelganger out
there that looks similar to you.
But your memories, your thought process, the way that you perceive the world is probably
the most unique thing about you.
And if you've kind of given up that thinking process to another machine that has now captured
it and can train off of that, we might be giving up the thing that makes us uniquely
human. It's, I think a threat could be viewed in that lens of, are we turning over some of
our humanity to a proprietary system? And I'm working on a long form article about this right now.
It's a phrase I'm calling subconscious censorship. And we can dive into that if you want to.
But it's really this notion that these proprietary systems capture your memories and capture your
thought process. And then they can be instructed, given directives, to alter your memory
to be more mainstream or be less mainstream. You know, they can guide you and direct you
how they want. And that's really the threat, right? Is the model could, if the desire is there,
to let's say it's somebody very influential that's using one of these accounts, let's say that you
want to start shaping and transforming what they think. You can go in there and exploit it in a way
because you kind of know what their desires are and what their interests are. And then you just
kind of slowly like correct it by putting it into a rut and in the direction that you want it to be
shaped. Is that really the deep concern or the risk for some people that are using these over time?
Yeah. And we've seen those methods already used with social media feeds, right? We all talk
about the algorithms and how we don't like them. We don't like the 4U tab on X. We'd rather have
the chronological timeline. Same thing with Instagram. People got very upset when Instagram flipped over
to an algorithmic timeline, but at the same time, we just keep using it. And so we've seen how,
just by the way that they order the posts, they can affect your emotional state, right?
So maybe there's a piece of good news that you're going to see on your timeline,
but for whatever reason, they're motivated to keep you in an angry state.
So they're going to show you something that's really maddening right before that good news.
So now you're in a totally different headspace when you receive that good news,
and maybe it downplays it.
If you take some of those tools of persuasion that they use and apply it to AI,
well, now AI knows you intimately.
And so it knows where to, like, place an anchor of a false
fact in this output that it gives you. So it's guiding you and now it's emotionally triggering
you and it can do all these things in a very subtle way and it can repeat them and do them and
change the permutation of it in thousands of iterations over the course of weeks, months,
years that we're working with it until the point where we don't realize that we've been
kind of guided into this rut as you call it and led a different direction.
Today I don't suspect that, you know, these large models that we're using, call it xAI or even
OpenAI, I don't think they're being used in a way that's trying to psychologically manipulate
people, but I think it can go there really fast. And I think that's the concern, right?
Is when does the government get their hands potentially in something like that and then start
using it for exploitation as opposed to just an everyday tool that's being tailored to help
you out or to make your life better? Yeah. Yeah. I'll jump in there. I don't want to be like super
doom and gloom. Yeah, neither do I. Right. Yeah. I tend to
view technology as this amazing gift that we have and this thing that we're building. And so I love
it. And obviously, I love using AI because I'm working on that daily in my life. And so I want to
just call out this vulnerability that I see. And I think that the mitigation is really verifiable
AI. So if we use models that have been trained on open data sets and we can see the weights,
we can see the biases that went into training. And then we can see the code that we're running
and we can run it locally on our own data or we're running it in the cloud with something
like trusted execution environments.
These kinds of things allow us to see what's going on.
Now we can kind of have our cake and eat it too.
We can use these powerful models and this powerful technology without having this risk
that we are going to be led away because I think you're right.
I think right now we're not seeing that kind of manipulation going on.
We have seen them introduce things like advertising and shopping experiences.
and a few weeks ago, it was, hey, you know, ChatGPT is going to work while you sleep.
And when you wake up, you're going to have recommendations in front of you,
which is basically like, we're going to give you advertisements in front of you in the morning
of all these cool things that you want to get.
And you can see how that can quickly turn into something more.
Yeah.
They're building this tool that could be used in other ways.
And I would prefer to basically not give myself to that system and instead go down a path of openness
and verifiability.
Yeah.
In the example that you provided earlier, as far as like social media, it seems that AI has been used for almost like a dark way when you start talking about social media, what's coming into the feed.
But as far as the chat context windows and the interaction, it seems like we're still very early in building that all out so that it hasn't necessarily entered into that form yet.
I think anybody listening to this is saying, I like the
fact that it kind of knows me and can give me just tailored, custom-made responses, right?
I love that. I just wish that I knew it was partitioned or on my own server and that nobody
else could see that data and it was something that was inside of my control. I think everybody
listening to this would agree with that statement. So walk us through, because that's what you're
trying to do, right? Walk us through how you're trying to do this. Okay. Yeah. And I think what you just
described is what most people want, right? This is the tale as old as time with privacy technology
and freedom technology: we know that we should probably be better about our data and
better about our information that we share, but it's just so useful and so convenient. And we see
so much productivity gains from using some new technology that we're willing to kind of like close
our eyes and plug our nose as we use them. And I'm just as guilty as everybody else with
doing that because we have to make tradeoffs in our personal life to do that. And I mean,
we haven't really talked about the potential for data leaks, right?
And maybe I'll just drop this in here really quick and then I'll answer your question
directly.
But we've seen with ChatGPT and with Grok, most recently, both of them had bugs in their software
where chats were being indexed on Google search results.
And so I don't know if you saw this was like a month or two ago.
So people were searching for stuff and finding personal chats that they had made.
These were specifically around the share button.
So in ChatGPT, you can click share.
It gives you a private link that you can send somebody.
and now somebody else can read that chat.
And so it was still meant to be private between you and someone else, but Google was
picking up on those and now people could search.
And it would be things like somebody's chatting about their marriage, you know, and marriage
difficulties they're having.
And then they send it to their spouse and say, okay, here's what our AI therapist on
ChatGPT told us.
And now their marriage details are spilled out onto the internet.
So it's when you give somebody else your data, that's a risk you're taking on,
have stuff like that happen.
What we're trying to build is verifiable AI that people can use so they can see everything through
the process. We build everything in the open. All of our code is online before we push into the servers,
before we push anything as an update that you can download. You can see the code first so people can
inspect it. And then we also know that local AI is really the most private AI, something you can
run on your phone, you can run on your computer. It's never going to get more private than that.
Turn off the internet, talk directly to it. You can inspect it before you use it. That's the utopia
here right there. But not everybody has a powerful enough device yet to do that. It costs tens of thousands
of dollars to run these biggest open source models. So we're trying to give people an in-between.
We run secure enclaves in the cloud. We push our code there. And then what it does is it gives
you, it's called an attestation, which is really just a mathematical proof. And it's a way to match.
So it's a way to say, okay, you have this code that's open source on GitHub. But how do I know
that you're actually running that exact same code on your servers? So there's a lot of other private
AIs out there that say, hey, here's our open source code. We're private. We're not tracking what you do,
but you can't actually check. You can't verify that. So we're trying to be as transparent as possible.
And so we provide that mathematical check. So when you go into Maple, you get this little green
verified check mark. You can click on that. You can see all the details. It's really similar to
that lock icon when you go in your web browser and log into your bank. You get that secure socket layer
lock icon, HTTPS. I view this as like the next iteration.
The internet started open with HTTP and everybody was just like going around websites,
viewing them.
And then when we started having usernames and passwords, we had to come up with something better.
So we did HTTPS.
Now I view this as a third iteration.
I'm calling it HTTPSE for Secure Enclave, which is my own fun little moniker.
But it's really this way that we can now verify the code running on the servers because
we're trusting the cloud with so much more every day.
So we need to provide a way for people to verify that.
So that's effectively what we've built with Maple.
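The attestation flow described here can be sketched at a high level. This is a simplification with stand-in values: real enclave attestation (for example AWS Nitro Enclaves or Intel SGX) uses a vendor-signed certificate chain rather than the shared HMAC key below, and the expected measurement would come from a reproducible build of the published open-source code.

```python
import hashlib
import hmac

# Hypothetical values for illustration only.
EXPECTED_MEASUREMENT = hashlib.sha256(b"open-source server build v1.2.3").hexdigest()
VENDOR_KEY = b"stand-in for the hardware vendor's signing key"

def sign_attestation(measurement: str) -> bytes:
    # The enclave hardware signs a measurement (hash) of the code it is running.
    return hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).digest()

def client_verifies(measurement: str, signature: bytes) -> bool:
    # The client checks two things before sending any data:
    # 1. the signature is genuine, so the claim really came from the hardware;
    # 2. the measured code matches the published open-source build.
    genuine = hmac.compare_digest(sign_attestation(measurement), signature)
    matches = hmac.compare_digest(measurement, EXPECTED_MEASUREMENT)
    return genuine and matches

good = client_verifies(EXPECTED_MEASUREMENT, sign_attestation(EXPECTED_MEASUREMENT))
rogue = hashlib.sha256(b"modified server with logging enabled").hexdigest()
bad = client_verifies(rogue, sign_attestation(rogue))
```

In the transcript's terms, the green check mark is the user-facing result of a check like `client_verifies` succeeding: the server proved it is running the exact code that is on GitHub.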
But I'll say one more thing and then I'll let you respond if you want to.
But what we're trying to do is we know that people don't want to give up their convenience
just for the sake of privacy.
It's a really tough sale to make.
Very tough.
Yeah.
Right.
It's really tough.
So we are going to build effectively ChatGPT, but it's going to be better.
We're building ChatGPT and it's going to have privacy at the core.
And so we are going to give people
all of those core amazing features that they get out of ChatGPT and Grok, but they're also going to have privacy built into it rather than someone who is harvesting their data for other business purposes.
Let's take a quick break and hear from today's sponsors.
All right. I want you guys to imagine spending three days in Oslo at the height of the summer. You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every conversation you have is with people who are actually shaping the future.
That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year bringing together activists, technologists, journalists, investors, and builders from all over the world, many of them operating on the front lines of history.
This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology under censorship and authoritarian pressures.
These aren't abstract ideas. These are tools real people are using right now. You'll be in the room with about 2,000 extraordinary individuals, dissidents, founders, philanthropists, policymakers, the kind of people you don't just listen to but end up having dinner with. Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech, and financial sovereignty, immersive art installations, and conversations that continue long after the sessions end. And it's all happening.
in Oslo in June. If this sounds like your kind of room, well, you're in luck because you can attend
in person. Standard and patron passes are available at OsloFreedomForum.com with patron passes
offering deep access, private events, and small group time with the speakers. The Oslo
Freedom Forum isn't just a conference. It's a place where ideas meet reality and where the future
is being built by people living it. If you run a business, you've probably had the same thought lately.
How do we make AI useful in the real world? Because the upside is huge, but guessing your way into it is a risky move.
With NetSuite by Oracle, you can put AI to work today. NetSuite is the number one AI cloud ERP, trusted by over 43,000 businesses.
It pulls your financials, inventory, commerce, HR, and CRM into one unified system.
And that connected data is what makes your AI smarter. It can automate routine work, surface actionable insights,
and help you cut costs while making fast AI-powered decisions with confidence.
And now with the Netsuite AI connector, you can use the AI of your choice to connect directly
to your real business data.
This isn't some add-on, it's AI built into the system that runs your business.
And whether your company does millions or even hundreds of millions, Netsuite helps you stay ahead.
If your revenues are at least in the seven figures, get their free business guide,
Demystifying AI, at NetSuite.com slash study.
The guide is free to you at netsuite.com slash study.
NetSuite.com slash study.
When I started my own side business, it suddenly felt like I had to become 10 different people
overnight wearing many different hats.
Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely.
That's why having the right tools matters.
For millions of businesses, that tool is Shopify.
Shopify is the commerce platform behind millions of businesses.
around the world and 10% of all e-commerce in the U.S. from brands just getting started to household
names.
It gives you everything you need in one place, from inventory to payments to analytics.
So you're not juggling a bunch of different platforms.
You can build a beautiful online store with hundreds of ready-to-use templates, and Shopify
is packed with helpful AI tools that write product descriptions and even enhance your product
photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify.
Sign up for your $1 per month trial today at Shopify.com slash WSB.
Go to Shopify.com slash WSB.
That's Shopify.com slash WSB.
All right, back to the show.
The challenge on that last part, Mark, is people hear that.
they know that these large models, especially the newer ones, call it GPT-5 or Grok 4 Heavy,
they're training even more powerful ones. Getting access to that in a way that you can leverage it in an open source kind of way is the challenge.
And people are just looking at the performance of these models.
I know I've played around with these models and I'm saying, you know, the open source version or the open weight versions are just not even close to what these newest models are doing.
Is that a temporary something that is going to continue to persist for the next two or three years?
And then all of the sudden, the open source or the open weight versions are going to be up to speed with where some of these larger models are?
Or are they always going to kind of be outpacing where the open weight models are at?
I think that's the concern that people have before switching over to something like this.
Yeah, it's a valid concern, especially in the early days, the open models were significantly worse than the proprietary ones.
But we've seen an acceleration.
I mean, ChatGPT came on the scene two and a half years ago, maybe three years ago is when
it really caught on.
And we've seen the open models catch up a ton in that time frame.
You know, they were like 50% as good, then 75% now in the 90% range.
You have a coding model, Qwen3 Coder, which is scoring just as good as some of the proprietary
models on programming in some areas, not all areas.
And we're seeing a point where like the benchmarks, however you want to define that, are really
getting similar. And then it comes down to just using it and seeing how it behaves for you. And really,
most people don't need to have that extra like 3% in their model to really get a lot of value out of it.
And then the other thing we're seeing is you bring up GPT-5, and arguably GPT-5 is an incremental
increase over GPT-4. And a lot of people will complain and want them to go back to GPT-4 and make
it available again. And some people have done like introspection into the routing technology behind GPT-5
and think that there's actually still a lot of GPT-4 just under the hood that's helping
to power version 5.
And so you look at that and you say, okay, maybe their progress has slowed down just a little
bit.
And then also they open sourced GPT-OSS, which is really just kind of GPT-4o under the hood.
And so you're seeing that open source from their standpoint is starting to catch up.
And then you have this whole market dynamic of the Chinese models.
You have DeepSeek, you have Qwen, you have these other ones.
I'm blanking on the other one right now.
But you have these other ones that are coming up.
And in order for them to compete with these big proprietary models, they're going open first.
And they're trying to be as good so that they can compete.
So I think we are seeing this world where the open source catches up just enough that it becomes just as valuable to a regular person, a mainstream person, as these other models.
What's the business model for them to go open source like that?
That's the challenge that I'm continuing to struggle with.
What's the incentive for them to go that route?
like, how are they going to make money doing that?
It's a good question.
One that I'm still trying to figure out.
I think that if you are China and you don't have access to chips or you don't have
access to some of these things, then, you know, maybe you're trying to build this model that
you just want to spread around the world.
Really, I guess one thing that I come down to is if ideologically you want your view
of the world to be out there and used by everyone.
And you know that the American models are not going to have your viewpoint.
And so you're going to build that.
And the best way to get out there is just give away for free.
So that could be one thing.
The difference with the open source and running the open models is you can see exactly
what's going on.
And so you can kind of see that and build around that propaganda.
So I don't have a good answer for you on what's the business model other than it seems
like if you can't compete from a proprietary standpoint, then you go open and you spread your
message far and wide and try to catch up that way.
Do we get to a point where these large language models kind of start peaking out and it just doesn't make sense to train them with even more, you know, they're plowing so much energy into these latest models.
Is there a point where they get like peaked out, call it two years from now and there's no longer this race to just build an even larger compression of all the information on the internet?
Because that's basically what we're doing, right?
Yeah.
It's hard to guess five, ten years from now, even two years from now, because AI's moving
so fast, but I think what we start to see is a slowdown of the general models and then we
start to see more specific models. The first thing we're seeing is with coding, right? People want
to program and very quickly we've seen that a model that is fine-tuned at just software engineering
is performing generally better. And so we're seeing a divergence there. And I think we start to
see a medical field and legal field and all these other different industries come up with
their specific models. I know there's people working on therapy ones. And then we have these
general models that act as routers in the front. So you come up to it and you start talking to it
and it pulls in the specialists. And we can really go deep and dive deep, you know, when we have
those specific models. So I think that might be the next thing is how do we have specialization
and then how do we have these general models, kind of wheel and deal and be the general contractor,
if you will. How do you think through stitching all these different models together?
So when a person creates an account, walk us through the process of creating an account at
TryMaple.ai, which is the website that you've built in order for people to run their own AI locally.
Walk us through what that is.
And then more importantly, how are you stitching the different models together to provide
the experience that the user has?
Help us understand that.
Sure.
Yeah.
So first off with Maple, right now we're only in the cloud.
We want to provide local stuff as well.
It's like a hybrid local cloud where all your data is encrypted locally first on your device.
And we use a private key.
Coming from the Bitcoin world, we understand the power of a private key.
Same with Nostr, right?
And so we apply that here.
So you're chatting with AI locally on your phone.
It encrypts it and then sends it to the cloud for processing.
In the cloud, a secure enclave uses your private key, decrypts it, feeds it to the AI,
which comes back with a response; it re-encrypts it with your private key and sends it back to you.
So we are not in the middle.
We can't see anything going across the wires.
Only the secure enclave sees the personal data, but you can go look at the source code
and see that there's nothing going on there.
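The round trip Mark describes can be sketched as a toy pipeline. This is an illustration only: the key handling and cipher here (`keystream`, `xor_cipher`, `enclave_process`) are hypothetical stand-ins, not Maple's actual code; a real deployment would use authenticated encryption (e.g. AES-GCM) running inside an attested hardware enclave.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a keystream via repeated hashing (toy, NOT real crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

def enclave_process(key: bytes, nonce: bytes, ciphertext: bytes) -> tuple[bytes, bytes]:
    """What runs inside the enclave: decrypt, infer, re-encrypt under a fresh nonce."""
    prompt = xor_cipher(key, nonce, ciphertext).decode()
    response = f"echo: {prompt}"  # stand-in for actual model inference
    resp_nonce = secrets.token_bytes(12)
    return resp_nonce, xor_cipher(key, resp_nonce, response.encode())

# Client side: encrypt locally, ship only ciphertext, decrypt the reply locally.
user_key = secrets.token_bytes(32)
req_nonce = secrets.token_bytes(12)
ct = xor_cipher(user_key, req_nonce, b"Summarize my contract")
resp_nonce, reply_ct = enclave_process(user_key, req_nonce, ct)
print(xor_cipher(user_key, resp_nonce, reply_ct).decode())  # → echo: Summarize my contract
```

The point of the shape, not the cipher: only ciphertext crosses the wire in both directions, so the operator in the middle never sees plaintext.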
So how do we tailor that experience?
Right now, we just give you a model picker and you're having to choose.
And we have a lot of users telling us they get almost like low-key anxiety from like trying
to pick which one's the best model that I should use right now.
So part of our big 2.0 push that we have coming up is we want to build something that
helps guide the user and say like, I am in big brain thinking mode right now.
So I'm going to click on this thing and it's going to drop me into a model that helps me do that.
Or I'm just in quick trivia mode.
I want to look up something.
So it's going to drop me in there.
In the beginning, it'll be like just an easy picker to do for the user, but then we'll switch over to an auto mode where they just chat and it knows what to do.
And you just put a simple classifier in front of it.
So it looks at your prompt and it can quickly determine itself what should be used, just like you would.
You know, you would say, I'm in this mode.
Here's what I'm thinking.
And you would pick that.
Well, it'll do that for you.
So we want to get more automatic with that, but always provide these advanced features that people can turn back on and be more selective if they want to.
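The "simple classifier in front" Mark describes can be sketched like this. The route names and keyword lists are hypothetical; a production router would use a small trained model rather than keyword matching, but the shape is the same: classify the prompt, then dispatch to a model.

```python
# Hypothetical routes; a real deployment would map each to an actual hosted model.
ROUTES = {
    "deep_reasoning": ["prove", "analyze", "strategy", "tradeoffs", "design"],
    "coding":         ["bug", "function", "compile", "stack trace", "refactor"],
    "quick_lookup":   ["who", "what year", "capital of", "define", "convert"],
}

def classify_prompt(prompt: str) -> str:
    """Naive keyword classifier: pick the route whose keywords match the most."""
    text = prompt.lower()
    scores = {route: sum(kw in text for kw in kws) for route, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "quick_lookup"  # cheap model as the default

print(classify_prompt("What year did the Bretton Woods system end?"))   # → quick_lookup
print(classify_prompt("Help me refactor this function, it has a bug"))  # → coding
```

In the auto mode he describes, this decision happens invisibly per prompt, with the manual model picker kept as an advanced override.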
Love that.
The next thing that I personally want to see, so with OpenAI, I can, you know, I'm having a chat and I'll say, this is really important.
I want you to remember this for other context windows and other discussions that I have with you, right?
It then compresses whatever that is.
It puts it into, my understanding is, like this memory bank.
Are you guys working on something like this because I think this is something that people
really want to have, but they want to have it in a private way.
I know I personally want to have this in a private way.
I hate every time I do this.
But I also find myself going back to similar conversations where that context is really important
and I hate just I have to type it in every time I'm in a new context window.
So is this something you guys are working on?
And where do you see the roadmap for something like this if ever?
Yeah, definitely.
What you just described there are two different implementations that kind of help out with the same thing.
So one way that ChatGPT does it is they have these custom GPTs where you can set up this
thing and it has a lot of the context pre-built into it. It's basically like you typed in a system
prompt with all the stuff that you want. And for people listening who maybe aren't like super
into AI, a system prompt is basically the instructions that you give to the model. You give to the
AI to say, hey, when I talk to you, I want you to kind of be in this personality or this frame
of mind or as this character as I'm talking to you. So you can get silly and you can say like,
I want you to talk like a pirate to me. So every time you chat with it, it's going to talk like
a pirate. That's like the extreme example, but more nuanced, you can be like, hey, I am going
to have a legal discussion with you right now. So I want you to be a lawyer. I want you to be a contract
lawyer and I want you to have these qualities about you. So to me, that's one part of what you just
described, and we definitely do want to do that. We have a system prompt you can edit. We show
you what the system prompt is. We want to have multiple ones in the future where you can customize
those, and maybe a drop-down or something, and say, hey, I'm in legal mode right now, switch into that
mode. The other aspect you just described is the memory side of it. And we are definitely
working on that. We're going to have an open source memory component to it. We don't know exactly
which direction we're going to go yet with it, but it's going to be something where you will
see, okay, here's everything that the AI has learned about you. And AI memory is really fascinating
because I like to view it as you're sitting down with a biographer, right? Say you're Steve Jobs
and you want to have everybody know about your life so you get the best person out there to
write your biography. That's what's happening with you every day as you're using ChatGPT or any
AI product. It is sitting down and trying to learn everything it can about you. Here's how
he thinks. Here are his childhood memories, yada, yada, yada. The difference is in a proprietary
system, you don't get to read that biography. You don't really get to see what's in there. They will
show you an interface that says, oh, here's the things we know about you and we'll even let you delete
it. But there's no guarantee that that's actually happening, right? If you delete that thing out
of there, it probably still remembers it, but it's just like, oh, we'll tell them that we're not
going to use it, but we could use it if we want to. What we want to build is a truly sovereign
AI memory where you can go in and see what we remember about you, not we, what the system
remembers about you. And then you can edit it, you can add to it. And then that will
get pulled into future chats. And so with those two combined, the system prompt personality thing,
that's more proactive. You can say, I want you in this mode. Whereas the memory side is more passive.
It's like, hey, this is my context about me. So use it selectively as you see fit. Let's take a quick
break and hear from today's sponsors. No, it's not your imagination. Risk and regulation are
ramping up and customers now expect proof of security just to do business. That's why Vanta
is a game changer. Vanta automates your compliance process and brings compliance, risk,
and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2
or running an enterprise GRC program, Vanta keeps you secure and keeps your deals moving.
Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation across
more than 35 security and privacy frameworks. Companies like Ramp and Ryder spend 82% less time
on audits with Vanta.
That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform
I'd want in place.
Get started at Vanta.com slash billionaires.
That's Vanta.com slash billionaires.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and Plus500 Futures is the
perfect place to start. Plus500 gives you access to a wide range of instruments: the S&P 500,
NASDAQ, Bitcoin, gas, and much more. Explore equity indices, energy, metals, forex, crypto, and
beyond. With a simple and intuitive platform, you can trade from anywhere, right from your phone.
Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been
waiting for. See a trading opportunity? You'll be able to trade it in just two clicks once your
account is open. Not sure if you're ready? Not a problem. Plus500 gives you an unlimited
risk-free demo account with charts and analytic tools for you to practice on. With over 20 years
of experience, Plus500 is your gateway to the markets. Visit Plus500.com to learn more.
Trading in futures involves risk of loss and is not suitable for everyone. Not all applicants
will qualify. Plus500: it's trading with a plus. Billion-dollar investors don't typically
park their cash in high-yield savings accounts. Instead, they often use one of the premier
passive income strategies for institutional investors, private credit. Now, the same passive
income strategy is available to investors of all sizes, thanks to the Fundrise income fund,
which has more than $600 million invested and a 7.97% distribution rate.
With traditional savings yields falling, it's no wonder private credit has grown to be a trillion
dollar asset class in the last few years. Visit fundrise.com slash WSB to invest in the Fundrise
Income Fund in just minutes. The fund's total return in 2025 was 8%, and the average annual
total return since inception is 7.8%. Past performance does not guarantee future results,
current distribution rate as of 12/31/2025. Carefully consider the
investment material before investing, including objectives, risks, charges, and expenses. This and other
information can be found in the Income Fund's prospectus at fundrise.com slash income. This is a paid
advertisement. All right. Back to the show. For me, the latter there, where it's really kind of
understanding your past and just understanding the essence of who you are and what it is that you want
is super powerful and useful as I continue to interact with this. I'm curious what the challenges
are from an engineering standpoint to put something like that together, because it seems like
it could dominate as you go into a new context window, talking about a brand new topic, that
when it has this memory that it's pulling from to be customized responses, that it could
potentially dominate the next conversation that has nothing to do with the memories that
you're asking it to hold. How do you think through the problem? And I'm just kind of curious,
is that going to be a really difficult feature to kind of roll out as you continue to build
this out?
Yeah, absolutely.
That is one of the biggest challenges with AI memory right now is that it overweights and
overemphasizes something that you give it.
Because you think about your brain, you've been storing up memories for decades in
your brain.
And you know, without realizing it, you know when to selectively pull on something and
when to not pull on something and use it in the decision that you're trying to make.
Whereas AI, maybe it has, right now, like two pages of information about you.
And so that one memory you tell it is suddenly like
one of the most important things it knows about you. So it's going to like overly push it into the
things that you're using. So we're trying to figure out how do we downweight that and how do we
not have it influence everything. I think a lot of that comes down to you have to get really
good at annotating information when it goes in and say, okay, this little memory you have,
this memory is really focused around finances or this is focused around health or, you know,
being outside, mountain biking or something, right?
And you want to classify that.
So when you're having a chat, it knows, hey, I can totally ignore that right now
because that has nothing to do with this topic.
Rather than, like, oh, you're mountain biking today, hey, you should bring along your
financial advisor and have a good chat about your IRA or something.
Like, that would be really dumb.
So we do have to figure that out.
And I think it goes back to that verifiability thing is I want to know what the AI is doing
and which memories it's suppressing and why, rather than a closed, opaque system
that is going to make that decision for me, because maybe that memory is really important
and relevant right now to something we're talking about, but it is choosing to not make it relevant
for whatever reason. Could be accidental, could be profit driven, could be nefarious.
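The annotate-then-suppress idea can be sketched as follows. Everything here is hypothetical (the tags, the `Memory` shape, the exact-overlap test); real memory systems typically use embedding similarity rather than hand-written tag matches, but the mechanism is the same: classify memories when they're written, then pull in only the ones relevant to the current chat.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    tags: set[str]  # annotated at write time, e.g. "finance", "health", "hobbies"

# A tiny memory store the AI has accumulated about the user.
memories = [
    Memory("Prefers low-fee index funds", {"finance"}),
    Memory("Mountain bikes on weekends", {"hobbies", "health"}),
    Memory("Has a Roth IRA with Fidelity", {"finance"}),
]

def relevant_memories(chat_tags: set[str], store: list[Memory]) -> list[str]:
    """Pull in only memories whose tags overlap the current chat's topic,
    so a finance memory never barges into a mountain-biking chat."""
    return [m.text for m in store if m.tags & chat_tags]

print(relevant_memories({"finance"}, memories))
# → ['Prefers low-fee index funds', 'Has a Roth IRA with Fidelity']
```

The verifiability point maps directly onto this: because the store and the filter are inspectable, you can see exactly which memories were suppressed and why.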
One of the things that I've heard more recently is that inference is where the competitive
moat is going to be moving forward in the coming five to 10 years. So for people listening,
you have, and correct me if I'm wrong here, Mark, if I'm not describing this correctly.
But when you think about AI, you've got the training of the model itself, which we were talking about earlier with some large language models.
And then you have the inference, which is using the trained model that you spent all this energy on and poured into all these GPUs that compress all of it into a single large language model.
The inference is using that trained model to generate the answers that you're getting back.
And every time you prompt, you know, you ask a question and it goes into this model, this happens every time you prompt.
It requires GPU memory and you have to run it through that entire model and then it pops
out the answer.
And this is the inference process.
And what I've heard is that in the future, the speed of that inference, and then the cost
to do it as efficiently as possible but still give you the high-quality answer, is where
the competitive edge is going to separate the winners and the losers.
I'm curious if you agree with this.
And then if you do, how do you think about that inference piece with your company and
being able to provide an efficient and quality fast response to people as they're doing this
in an open weight, decentralized kind of way.
Yeah.
So inference versus training, the costs involved and the power involved are very different, right?
So the training part costs, I don't know what the exact numbers are.
Let's just say 10x.
It might be bigger.
It might be smaller.
But yeah, it's like it takes 10x the amount of resources to train something, just like as a
human.
It takes all this time to train you over decades for you to live life and learn all these things.
And then eventually you can sit down and have a fruitful conversation like we are right now, right?
And so it's easier for us to have this conversation than it was for us to learn everything we
learned up to this point so that we are capable of having this conversation.
So inference should be viewed that way.
Now you're ready to have a chat.
And when I look at what's the moat, what's the unique thing that's going to be competitive,
that is just the user experience.
And so these apps that we're building on top of the inference are going to be the competitive
moat, and what different qualities they have. And we're already seeing that with ChatGPT and
some of these others. They're trying to build apps on top of their inference layer now that really
pull people in. The latest is the SORA video app that's pulling people in and trying to make it
more engaging, right? As far as inference goes, I think that just only comes down and cost over time.
And even though we're going to get bigger models, we're going to build chips that are more
efficient for processing those models. Apple, even though they don't have the right AI solution
yet according to the market, they have built these chips into every single device that are just
highly specialized at processing these models.
So one thing that we're looking at with Maple is doing a hybrid approach where you actually
have smaller local models that run incredibly fast and are extremely cheap to run because they're
just running on your spare cycles on your device.
And they will do a bunch of the initial processing and on some of the most sensitive
information.
And they will come up with the most efficient prompt to give to the cloud model.
So you might go in and bang out this massive prompt, paste in a whole PDF of information,
and then the local model will crunch that all and say, okay, this is all good and dandy,
but really what I need to pass on is a smaller chunk.
It'll pass it on to the cloud.
That'll get processed on the more expensive servers and then come back to you.
And I think in a model like that, inference continues just to drive into the ground as far as price goes
and gets faster as well.
And so we end up with a better user experience.
And so the people that can control that kind of user experience are going to have a better
moat, a better competitive advantage.
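The hybrid flow described above, a cheap local model distilling a big pasted-in prompt before anything leaves the device, can be sketched like this. Both model functions are stand-ins I've invented for illustration; a real pipeline would run an actual small LLM on-device rather than the duplicate-sentence heuristic used here.

```python
def local_model_distill(raw_prompt: str) -> str:
    """Stand-in for a small on-device model: drop duplicate sentences so
    only a compact prompt is ever sent onward to the cloud."""
    seen: set[str] = set()
    kept: list[str] = []
    for sentence in raw_prompt.split(". "):
        s = sentence.strip()
        if s and s not in seen:
            seen.add(s)
            kept.append(s)
    return ". ".join(kept)

def cloud_model(compact_prompt: str) -> str:
    """Stand-in for the expensive enclave-hosted cloud model."""
    return f"[cloud answer to {len(compact_prompt)} chars of prompt]"

# User pastes a repetitive wall of text plus one actual question.
raw = "Here is my whole PDF pasted in. " * 10 + "What is the termination clause?"
compact = local_model_distill(raw)
assert len(compact) < len(raw) / 4  # the cloud sees a fraction of the input
print(cloud_model(compact))
```

The economics follow from the shape: the spare-cycles local pass is effectively free, so the metered cloud inference only ever sees the distilled remainder.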
Yeah.
I recently read that xAI has custom ASICs that they're building just to improve the speed
of the inference.
And I guess it's 10 to 20 times faster than some of the GPUs, the best GPUs on the
market right now just because they custom made it for that specific task, which is fascinating.
And it also just, I guess it leans into this idea that I think it is just
challenging to compete with some of these larger ones, you know, like xAI; it's going to be a bloodbath
of competition to compete against because they're going to be able to provide such quick and
efficient and quality responses because they're going out and doing things like this that
are very capital intensive. Like, I'm just looking at this whole space and I'm curious your
thoughts on just how expensive some of this stuff is. And, you know, we saw the thing with OpenAI
and Nvidia and, help me out with the other one, Oracle.
Yep.
And then Broadcom is in the mix too.
Yeah, the numbers are crazy, Mark.
Absolutely nuts.
How do you see that kind of resolving itself and like, where is that going?
Is it just going to keep going more?
Yeah.
It's crazy.
I mean, I saw somebody comment over the last 24 hours about the whole Broadcom and
OpenAI thing, where it's like, hey, Broadcom, we want to buy all these chips from you,
but we don't have the money to pay for them.
And so they basically say, let's do a press release together.
Broadcom stock goes up.
Now their market cap has gone up $150 billion.
And it's like, boom, there's your money that you needed.
So we'll loan you our market cap, basically, to help you out.
So it's kind of this crazy thing, a lot of money being tossed around.
Where does this resolve?
Oh, man, I wish I had a crystal ball to understand.
But I think that we are going to have big players out there.
We're going to have these people who are building these big, massive mainstream solutions.
And I also think that you look at the government contracts that have come to these major models, right?
They gave 200 million to xAI, 200 million to OpenAI, 200 million to Anthropic.
I think Meta got that too.
I can't remember.
So there are bigger things that play here with Department of Defense and other governments around the world.
So I think that there is going to be a need for large-scale systems like that.
There's also a need for other people to build out systems.
And I think there's a world where they all exist together.
I don't think this is going to be a race to where there's going to be one win or take all.
Because really, there are so many different ways to approach intelligence in this life.
And there are so many different avenues and so many different needs that Grok's not going to be able to solve them all.
ChatGPT is not going to solve them all.
And so I don't know where all the money resolves.
I think we're definitely going to have a bubble at some point that's going to pop.
It's a view very similar to the internet.
And so we're going to have all these companies that overinvest.
And then there's going to be a retraction
and a retracement back, and the winners are going to remain.
So I don't have a perfect answer for you on that, but I just think that there will be some
overinvestment, but I don't think it's going to pop and go away.
There's too many benefits.
People have seen too much productivity from AI, too much value coming out of it, that it's
going to survive.
It's just which people and which companies will remain standing.
It's hilarious because it's almost similar to, like, a meme coin pump, of them raising
capital and transmuting the common stock pump into capital, but then investing it into the hardware.
It's like, I'm sure you've seen the charts of, like, OpenAI goes over to Nvidia and they give
money to Oracle and it's all circular. It's this weird thing going on.
It's crazy. What's one of the most challenging things for you right now building out this business?
Well, we're trying to keep up and we're trying to get feature parity with arguably one of the
biggest companies in the world right now. Yeah. And so OpenAI and
others, they have billions of dollars to throw at designers, at engineers, and trying to build
the best user experience possible. So in order for us to get people to care about privacy, we
have to give them a tool that's as convenient, as usable, as ChatGPT, as a baseline. So that's
really the biggest challenge right now. That's where we're racing is we're trying to figure out
how do we pick off the most important features? Because we can't build them all right now.
It's just me and another person. We can't build them all right now. And so let's selectively
pick the most important ones, get those to feature parity,
and then keep building from there and keep growing.
So, you know, we're going to be raising some money soon, hopefully, and that'll help us hire a few more people.
But even then, we're never going to match these bigger companies as far as team size.
But we're using AI against them.
We're using AI to help us build faster than we could.
And we only launched back in January.
And we're getting ready to do our 2.0 launch probably next month.
And it's come a long way just in those last nine months.
And so the next nine months, the next year, are going to see drastic changes to what
we're building, in a positive direction.
That's the crazy part, the reflexivity of the AI itself as it's getting better.
You just get more and more powerful.
And in a way, having maybe a smaller team, you're able to just kind of focus in on those
features that are most important.
The things that I'm seeing on the programming front, especially with the Google model,
seems to be almost unfathomable what it's one-shotting with a prompt.
Can you just kind of help us understand what's transpired just in the past year with respect to your ability to program and write code?
So there is a lot of salesmanship going on when it comes to the vibe coding space.
And so a lot of progress has been made.
And there are definitely stories where people go on and say, I wrote like a few sentences and they gave me an entire app that I can use.
And those are great, I think, for a proof of concept, we've definitely seen a lot of great things in that space.
But getting an app that you wrote with one paragraph of text into production that
millions of users can use, that has covered all the edge cases and stuff, that's a totally
different story.
So I definitely see a lot of memes where it's like, oh, and software engineers are so cooked.
But really what I think the great power is is software engineers using AI and accelerating
their abilities.
So that's what we're seeing.
I don't want to throw shade at the companies that are doing vibe coding for people who
are not software engineers, because what I see is a huge value add right there. Being a software engineer
myself for decades, I get approached all the time. Someone's like, I've got a great idea for an app.
I want to build this. Can you please go build this? And they'll draw on a little piece of paper.
They'll maybe build a PowerPoint presentation, but they try to give me requirements. And I can see
very quickly that the requirements haven't thought of everything or the idea just is kind of off base.
So where vibe coding comes in now is they can take that. And instead of coming to someone like me,
they can give it to an AI.
It can build them the proof of concept.
They can play with it and they can say, oh, this is a piece of garbage or this is a great idea.
Let me iterate a bit on this.
And so now when you go to approach someone to build it for you, you've got this really refined
proof of concept that conveys your idea and has thought through a lot of the initial things.
And then for us, we've taken AI and built it into all sorts of parts of our process.
So we're using tools locally where we are running coding environments, you know, IDEs locally
with something like Claude Code. We've tried out Codex from ChatGPT. We're using Factory right now
also as kind of a new hotness that's come on. So we're using all of these. We also use Maple.
We've got Qwen3 Coder, where we've got that plugged into the IDE. So we're using all these tools.
And then when we check in code to GitHub, do a pull request, we have two other AIs that hop in there
as code review agents. And so they're both reviewing the code and they come from two different
models, two different companies. And so they give a different perspective. And so they drop in their
comments and say, hey, this line of code, maybe, you know, you should think through this more,
there's a potential bug here, that kind of stuff. And then we go in and we say, hey, Claude,
respond to these comments from the pull request. And so you have these agents that are
really helping out. But ultimately, in the end, we are the ones reviewing the final say on the code.
And we might say, hey, we don't like the approach they all took. So let's get in there and bang
things out a little bit differently. But truth be told, I think probably like 90% of our code,
maybe 95% of our code is written by AI with the human
there directing it, guiding it, inspecting it, and making sure that it comes out correctly.
Would you say that your time has been multiplied 10x? Like, what used to take you 10 hours
you can do in one hour now? Yeah, I haven't measured it specifically, but that's definitely the vibe
that I feel. I look at what we've built in the last nine months and launched. And we have
a sizable number of users now, a sizable amount of revenue coming in. And I think about
trying to do that before. I've been part of multiple startups. I look at another startup that
I was with and it took a lot longer just to get the product to market. It took almost an entire
year of just understanding and writing initial versions and then throwing those away and doing
different versions to where now we're just, yeah, we're so accelerated. Like I said, it's just
two of us. And so if we were doing this prior to AI, we probably would have had to have
two more people, three more people to get to this point. So we're definitely seeing an acceleration.
Yeah. On the home hardware, running this locally, like right now you're saying you're doing it
in the cloud, you send the encrypted data over to the cloud, it then processes it, it sends it back
encrypted. But for people, let's say 10 years from now, I'm curious how you see the home market
for running your own server. I know as a bitcoiner, I run my own node. There seems to be some
synergy there for people that are do-it-yourselfers, that like to do these types of things.
But is this going to become something that's almost like a heater in your home or like any
type of appliance that's in your house? Are people going to have their own data that's stored
locally that's run with some type of ASIC or GPU that's a specialized piece of hardware so that we
can not give up our data? Is that where this is kind of moving? Do you see that trend going there?
Or do you think that that's still a very hard ask to ever expect out of people that might not
have the technical competence or desire for privacy? What are your thoughts? Yeah. I would
love a world where everybody had their own home server plugged in and it's doing all this for them.
So, you know, 10 years from now, I could see the technology catching up to where we can make
really easy plug and play. You just, you get this little box, you plug it in, you connect it to your
Wi-Fi or plug it in via Ethernet. And now everything you do on your phone, everything you do
on your computer, everything you do on your watch or, if we're on glasses, whatever device
is the input device and the output, it's talking to your home server and doing everything locally.
I think we will definitely be there from a hardware perspective and from a user experience perspective.
That'll be possible.
The difference is, will people do it?
I don't know.
I mean, there's definitely incentive to keep the cloud model running.
And if you want to look at a parallel to this, you can look at email where we all have the capability to run an email server at home.
We can host our own server.
We can be totally sovereign and have total control over it.
But we don't.
We still just go on to Google Workspace, give it our domain name,
and now our email is run by Google, because it's just so convenient and they handle all of the
DevOps and the IT headaches that would come along with running our own email server at home.
So I would love to think that that's the future is home AI stuff and maybe it could be.
Maybe this is finally the line in the sand where it's like, you can have our emails, but you can't
have our brains.
Our brains need to live at home.
And I hope that's the delineation that people are not willing to give up that aspect of them.
So I just don't know, but we will definitely technically be there. It'll be possible.
Last question I have for you: you mentioned Nostr earlier. There's a lot of opinions as to what in the world Nostr even is.
But one of the talking points that's constantly shared is just it's an identity layer.
And so you're talking about having a private key, public key pair to ensure that the encryption is actually being conducted between the cloud and your request coming from your phone.
or your computer. Do you see Nostr as playing an important role? Because that's an inherent feature
of the Nostr protocol, which is this public key, private key relationship that you can sign
anything with. So I see it all coming back to that word verifiable. And that's really the power
of Nostr, is verifying that this communication truly came from me, right? And so that's the private
key, public key promise that we have. And whether or not it's Nostr as an open source brand that
ends up being the thing, I think that concept is what's important. And so being able to say,
hey, this little piece of memory that went into my AI, that's signed with my private key. And so I
know that I'm the one who put it there and that it came from me. Right. And so I think that's where
verifiability takes us. And so that could be online communications. I post something. You know,
I want to sign our maple builds and have those signed with a private key. So I want to kind of
integrate this throughout the entire process.
And I think that's really where it shines. Whether or not it becomes a replacement for
Twitter, that remains to be seen.
Right now it's a niche protocol, but I think the power is beyond that.
People who are looking at it as a Twitter clone are not seeing far enough into the future, where
it's really all about verifying that this communication came from the person they
say they are.
Wow.
Any other parting comments or things that you think are important for the audience to know
about what you're doing?
Sure.
I think you should view AI as a toolbox, right?
So you have a toolbox.
You have different tools to use for different things.
So I would say I'm not asking people to throw away ChatGPT.
I'm not asking them to throw away these other services.
Instead, I'm asking them to add Maple to their toolbox.
And that way, when you are talking with ChatGPT and you're like,
I don't really like the fact that I'm sharing my children's names and personal information
about them, you can switch over to Maple and you can have the exact same conversation
with models that are just as powerful, or 95% of the way there,
powerful enough for you.
And it's very refreshing.
You get this refreshing feeling knowing that this is just a private room with you and an AI.
And nobody else is listening.
Nobody else is recording that information and it's not being sold to anybody.
You're not being influenced in any way.
And so I would just say, go to trymaple.com.
Get the free account.
You can upgrade if you want to and support us, whatever.
But grab it, have that extra tool in your toolbox, play around with it,
and start to see where it takes you and what you gain from it.
Mark, I think you're working on one of the most important things in the world right now,
truly.
I wish you all the best.
And I can't wait to try it out myself.
We'll have a link in the show notes for people, but it's trymaple.com
if you want to go to the website and try it out.
And this stuff is so important.
I think it's only going to get more important.
And hats off to you for what you're building.
And I really appreciate you taking the time to come on.
the show for the conversation. Definitely. Thank you, Preston. Appreciate it. Thank you for listening to
TIP. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes.
To access our show notes and courses, go to theinvestorspodcast.com. This show is for entertainment
purposes only. Before making any decisions, consult a professional. This show is copyrighted by the
Investors Podcast Network. Written permissions must be granted before syndication or rebroadcasting.
