We Study Billionaires - The Investor’s Podcast Network - TECH015: OpenClaw and Self-Sovereign AI w/ Alex Gladstein and Justin Moon (Tech Podcast)
Episode Date: February 18, 2026

Alex Gladstein and Justin Moon break down the fundamentals of large language models and explore the rise of OpenClaw as a self-sovereign AI assistant. Justin explains context engineering, local inference, and vibe coding, while Alex dives into the AI for Individual Rights program and its mission to empower activists.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:04:12 - What Large Language Models (LLMs) are and how they differ from traditional programs
00:05:15 - Why AI feels like magic, and what's really happening under the hood
00:06:01 - The key differences between open and closed AI models
00:06:50 - Why capital structures influence AI model openness
00:09:09 - How persistent memory enhances AI agent performance
00:12:18 - What inference means and why context is a scarce resource
00:19:32 - How AI agents combine traditional software with LLM reasoning
00:21:10 - The evolution from MCP-style systems to skills-based context engineering
00:25:41 - What "vibe coding" is and how it lowers the barrier to building apps
00:44:07 - How the AI for Individual Rights program supports activist-driven innovation

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
Oslo Freedom Forum: Website.
Justin: Nostr account.
Related episode: Is AGI Here? Clawdbot, Local AI Agent Swarms w/ Pablo Fernandez & Trey Sellers.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services. Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: HardBlock Human Rights Foundation Simple Mining Netsuite Masterworks Shopify Vanta Fundrise References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Transcript
You're listening to TIP.
You're listening to Infinite Tech by The Investor's Podcast Network, hosted by Preston Pysh.
We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money.
Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
This show is not investment advice. It's intended for informational and entertainment purposes only.
All opinions expressed by hosts and guests are solely their own, and they may have investments in the securities discussed.
And now, here's your host, Preston Pysh.
Hey, everyone, welcome to the show. I am here with Alex Gladstein, Justin Moon.
Guys, it feels like the world is moving at 10x the speed and pace that it was just a couple months ago.
I don't know if you guys are feeling the same way, but things are accelerating. Oh, my God.
I listened to the show with Pablo, and he said it's compressing time, and I'm like, that's exactly how it feels.
Yeah.
By the way, if a person's listening to this podcast and you haven't listened to the show that was basically two shows earlier, where we were talking about the Clawdbot, or it's called OpenClaw now with the rebranding, I would highly encourage you to go back and listen to that conversation as well, because it's going to be pertinent to some of the stuff we're talking about here.
Yeah.
And I just spent three days with Pablo.
So I applaud you for bringing him on.
And I'll be sharing some of his insights as well from the work we just did together over the last few days.
Amazing.
Amazing.
So, Justin, where do we even start this conversation?
Because what I kind of feel like was, the conversation I had with Trey and Pablo was, like, we were already going 100 miles an hour with the conversation.
And for the listener that was listening to it, I think their takeaway might have been, oh, my God, what is happening?
I don't even know what they're talking about right now.
So, like, maybe we throttle things back and, like, slowly bring everything up to speed.
So take it away.
I agree.
It was a great episode and I really enjoyed it.
I could almost keep up.
I could keep up with it only because I know them and I really know Pablo well.
But I feel like for the drive-by listener, it was like trying to get on a fully moving train, like one of those Japanese trains. It's asking a lot.
So I think I want to help kind of explain at least how I understand, like, what's going on. What the hell's happening? And if you understand Clawdbot or OpenClaw, the thing in the news right now, if you understand that, you kind of understand what's going on.
And I was thinking about how to understand it, like, break it down into basics. And I realized you have to introduce a lot of foundational ideas first that most people don't quite get, and it impairs their ability to understand what's going on. So I'm going to try to introduce what's going to be 10 ideas. I have a bunch of notes here. I think you have to understand them in order to really understand what's going on. But I'm not going to use any jargon. I'm going to try to simplify it and make it understandable to people who don't know anything about this. Okay. So that's my goal. It's a bit of a high-wire act. So it might not go
well, but we'll see. Real fast, before you kick that off: would you say, from a really, really zoomed-out-from-space kind of view, that what all the excitement is about right now is everybody's accustomed to using cloud-based large language model AI, they type into a chat and they get an answer back, but now we're at this pivotal point where the tech is so advanced that people can run it locally in a way that's actually going to be quite useful? And we haven't had the hardware and we haven't had the software models to do that yet. And that's really kind of the clear break of what we're experiencing: now people can run it locally without even tapping into a cloud-based provider.
The significance of OpenClaw to me is it's a big step towards, like, self-sovereign, user-controlled AI. It's not a full step all the way there, but it's a big step in that direction. Yeah. And it's a step in that direction from a couple
different angles. And so I want to try to tease that out for people. I need to introduce some basic
ideas just to make it make sense. Okay. There's a few things where you need to understand why they're important. We've talked a lot with HRF about the importance of vibe coding. That's going to be one of the takeaways here. It's, like, vibe coding enabled this. And it's going to enable a heck of a lot more over time. So, like, just zooming out, what is an LLM, right? We got to start from the very base.
But, like, what is an LLM? To me, it's like a new way of using computers, right? So, like, traditionally, with computer programs, right, desktop apps and stuff like that, a computer program is something that's like a recipe, a recipe for a computer. So it's something that's
typed out with exact instructions by a human and it tells the computer exact steps to follow
to do something. So anything that can be broken down into steps can be like represented in a traditional
computer program like arithmetic. Traditional computers are very good at arithmetic. They're very bad at telling
jokes because you can't encode the steps of a good joke. Like, in a sense, what makes it funny is that it's unexpected, right? Zooming out, one way to think of an LLM is that it's a new type of computer
program, like bad at everything traditional computer programs were good at, like arithmetic,
but good at all the things they were bad at, like creating art, right, or telling a story,
right, or coding.
So that's kind of the high-level thing: I want to frame this as, like, in a sense, OpenClaw is a new type of computer to me.
That's what it is.
It's a new way of using computers.
It's a new type of computer program.
I'm assuming you've all used an LLM but have no idea how they work.
So basically, there's kind of three steps in an LLM.
The first is like, it's called pre-training.
So what it does is it downloads all the text on the internet and compresses it into a single file. That's the fundamental thing of what an LLM is. You take all the information on the internet and you try to lose only the least important parts of it and keep the most important kind of ideas and principles and facts. So what you get at the end is a file that can, given, like, half of an internet document, complete it. It can, like, do a best-effort job of getting half of a Wikipedia article and writing the second half. That's all it can do. It has a lot of intelligence, but it's not actually useful, because, like, when does a normal person need to complete an internet document, right?
And so that file is what a model is.
If you've heard of a model, that's what a model is: it's a file, right?
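Justin's "complete the document" framing can be sketched in miniature. This toy is not how a real LLM works internally (real models learn billions of weights, not a lookup table), but it shows the same text-completion shape; the corpus and function names here are invented for illustration:

```python
import random

def train(text):
    """Build a toy next-word table: for each word, the words seen after it.
    A real LLM learns billions of weights; this is just the flavor of
    'predict what comes next' at miniature scale."""
    words = text.split()
    table = {}
    for cur, nxt in zip(words, words[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def complete(table, prefix, n_words=5, seed=0):
    """Given the first 'half of a document', keep predicting the next word."""
    rng = random.Random(seed)
    out = prefix.split()
    for _ in range(n_words):
        choices = table.get(out[-1])
        if not choices:  # no known continuation, stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the model reads the internet and the model completes the document"
table = train(corpus)
print(complete(table, "the model", n_words=4))
```

Every continuation the toy emits is a word pair it actually saw during "training," which is the miniature version of pre-training compressing what it read.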
If you've heard of weights, weights are what's in the file.
That's what weights are in AI.
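A toy illustration of "a model is a file of weights": the few numbers below stand in for the billions of weights a real open-model file (like DeepSeek or Kimi) contains. Everything here is at play scale, purely to show that downloading an open model just means copying a file of numbers:

```python
import os
import struct
import tempfile

# The "weights" are literally just numbers; the "model" is the file holding them.
weights = [0.12, -0.5, 3.1415, 0.0, 2.71]

path = os.path.join(tempfile.mkdtemp(), "tiny_model.bin")
with open(path, "wb") as f:
    # Pack the numbers as 32-bit floats, like a (tiny) weights file.
    f.write(struct.pack(f"{len(weights)}f", *weights))

# "Downloading an open model" amounts to getting a copy of this file
# and reading the numbers back out.
with open(path, "rb") as f:
    data = f.read()
loaded = list(struct.unpack(f"{len(data) // 4}f", data))
print(loaded)
```

A closed model is the same kind of file; the company just never lets you copy it.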
And an open model versus a closed model, an open model is if you can download that file,
like Deepseek or Kimi, generally many of them are Chinese.
And then the American ones are closed generally.
You can't download the file.
So it's generally the closed ones are a little smarter,
and the open ones are a little more self-sovereign.
The closed ones are generally American.
The open ones are oftentimes Chinese.
Let's pull on that thread because I think somebody who's hearing that, it makes no sense to them.
I have an opinion on this.
I'm very curious to hear your opinion, though.
Why are they the ones releasing these open models, when in the U.S., where you would think that would be taking place, you're not seeing anything of the sort?
Why is that the case?
To me, I think the biggest part is, like, the capital structure of the companies doing it. So, like, OpenAI and Anthropic have these huge capital structures and they need to make a lot of money fast, and they're on the frontier and they need barriers to prevent competitors. And so not releasing the model weights is the biggest thing, just from a business point of view, no kind of extra thinking.
I think that makes sense.
I mean, another thing is, like, I bet the CCP likes that there are these open models out there that get embedded into, like, Airbnb. For Airbnb to come out and say, hey, we use Qwen for all kinds of stuff. It's great, right? It's a way for the CCP, basically, to embed Chinese values in American tech software.
And also, you know, America is, like, the leading one, and then it's kind of easier to imitate. The Chinese economy over the last 10, 20 years has done a lot of imitating. They're amazing at imitating, right? So that's kind of another thing: it's just kind of something that they're already very good at, which is reverse engineering. Those are the three things.
Alex, you have anything there?
Yeah, I just would say that at the moment, they judged that they could not compete on the proprietary side and could both introduce maybe some chaos and opportunities for themselves by going this route. However, going that route, kind of like a Sputnik thing, as we know, has opened a whole new door and, you know, it's actually, I think, been good for the world at large that you have other geopolitical powers pushing open source options.
It's going to eventually force the American companies to do the same.
So you're going to have pressure, just like you had pressure to add encryption to devices and to apps.
Like with the Snowden files, over time, there's going to be pressure on American companies, you know, despite profits. Like, they're going to feel pressure to have open arrangements and open products. And we'll get to this at the end of the recording, I think, but hopefully also privacy-protecting ones too.
But yeah, that would be my take.
One small note I want to recap from a talk that was given at our yearly AI summit in San Francisco by this guy Ramesz. He mentioned how, like, a year ago, we thought there would be a runaway leader in AI.
And that didn't happen.
They're all getting closer and closer.
It's getting more and more competitive.
And the closed models and the open models are starting to get competitive.
Yes.
And now it's getting very competitive.
So this is like great for user sovereignty.
We're not, you know, it's trending in a way that you don't have like a single
overlord.
and it's a very competitive dynamic, which I think is great for freedom.
One of the things that I think also makes it more competitive is when you start running these
models that are not on the forefront of being the best from an intelligence standpoint,
but you combine these lesser models with persistent memory run locally.
The performance that you get for what it is that you need is actually a lot better than a
premier model because it's continuing to learn and it's not forgetting all those past interactions
like you get with a frontier model that has a new context window every single time that you
open up with a very limited memory. So that persistent memory is one of the things that I think is
massive for self-sovereignty and from getting away from these large language models that
are just sucking all the data and using that potentially against you, you're going to get better
local performance. The thing that I was, you know, on that original question, it seems like,
and I've asked the AI, this particular question, why we're seeing the open source models coming
out of places that we would least expect it. And it gave me a really surprising answer in that
they're looking at the game theory and where this is all going. And what they're trying to do is
slightly, very, very, just ever so slightly steering the results of what you get out of the model for, let's just take an example, Tiananmen Square.
If you're training the model,
you can either have that as part of the initial data input
before it compresses everything into the model
and it adjusts the weights ever so slightly.
And if those are the models that everybody starts to build on
and run locally,
you get somewhat slightly different results
than if you have somebody who's feeding it with the base,
everything that's ever been written on the internet,
minus these things that we really don't want in there when we compress the model, they're removed.
And so I found that to be really interesting and, you know, really a lot of foresight.
If true, there's a lot of foresight in there to make sure that you get your model out there.
Now, at the end of the day, I can run that model locally.
I can ask it a question that maybe isn't in its weights.
I can say, you know, go out there and research.
That's just wrong.
That's not truth.
Go out there and research on the internet, like, more facts on Tiananmen Square, as an example.
And then my local model now knows it, even though it's not part of its weights, because I've steered it in a different direction. So in the end, it doesn't matter. But I found that interesting. Before moving on, I want to make one point here. So we did a hackathon recently where we put together activists from HRF with freedom tech developers from my Bitcoin meetup in Austin, basically. And one of the interesting projects was with an actual Tiananmen Square student organizer, Yang Jianli.
Dr. Yang Jianli. Yeah, Dr. Yang Jianli. They did a project where they basically made a benchmark for all the different LLMs, comparing their answers on human rights questions like Tiananmen Square, right, which is very interesting.
And we look forward to that getting published.
Yeah, let me move on because I have a lot here.
So I'm trying to explain, like, where an LLM comes from and how it's used, right?
So I talked about pre-training. You take the internet and you get it onto a file.
Then there's something called post-training, which turns it into, like, a useful assistant.
It gives it a bunch of examples.
Like, here's how to be useful to a person.
Here's how to be a coding agent, right?
And so now you have something that goes from being able to complete a document
to being able to, like, answer questions, be your therapist, write code.
Right?
And so that's how the model happens.
That's it.
So then the question is how do you use it?
And the word for that is inference.
You probably heard that word.
It took me a while to remember that that's what it means.
Inference means just when the model is run, right?
And so this is something that you can hire someone to do in the cloud for you,
like ChatGPT or Anthropic, or you can do it on your own computer if you have a big enough computer.
So you can use something called Ollama, right?
And so what inference is, is you run that model, basically, and you can put text in and you get text out, right?
So it's just like the ChatGPT interface.
That's what's happening behind the scenes.
Text in, text out.
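A minimal sketch of that text-in, text-out loop against a local runner like the Ollama that Justin mentions. The endpoint and field names follow Ollama's documented /api/generate route, but treat the details as illustrative; this only builds the request and never actually sends anything:

```python
import json

def build_ollama_request(model, prompt):
    """Inference is just 'text in, text out': this sketches the JSON body a
    local runner like Ollama expects at POST http://localhost:11434/api/generate.
    The model name here is only a placeholder."""
    return {
        "url": "http://localhost:11434/api/generate",
        "body": json.dumps({"model": model, "prompt": prompt, "stream": False}),
    }

req = build_ollama_request("llama3", "Why is the sky blue?")
print(req["url"])
```

The response, if you did send it, would just be more text; everything smarter than that (tools, memory, agents) is layered on top of this one-shot exchange.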
And the one problem with open models is you need about a $20,000 computer in order to run them, right? So that's one of the tough things right now. It's a big technical barrier. It's a real barrier to individual user sovereignty in AI, and it's something we're all kind of working on. So that's what inference is. Okay. So now I want to talk about another word that's very,
very important. This is maybe the most important one called context. Justin, Justin, I'm sorry to
slow you down. It's just, people heard on the episode with Trey and Pablo that Trey was running his off of a Raspberry Pi. And so they're like, hold on, you just told me it costs $20,000 to run it locally. And I just want to explain to the listener. So the way Trey's OpenClaw works on his Raspberry Pi, which is, you know, three, four hundred bucks, is he's making API calls to Claude or to, you know, OpenAI to do the inference on their cloud, and then it's giving a result back, right?
He has an agent, which we'll get to. He has an agent running on a Raspberry Pi.
But the inference, the thing that's actually doing the smart AI stuff, is on a cloud somewhere.
Yep. So there is a step toward user sovereignty, because what ChatGPT was trying to get us to do a year ago is run the agent in the cloud too. So this is like halfway there. So it's huge.
Yeah, a step forward, right? Running the agent locally, it can save memories locally, you know, and you have the option for certain things to use a local model too. So it's a great kind of half step forward. I mean, it's 10 steps forward, but it's not all the way there.
I think it's a huge win for open source and it changed the game. Yeah, let's go. Okay, so we defined the word inference. That's, like, one of the words you need. Context is maybe the most important one.
So context is, it took me a while to, I mean, I'm very technical, and it took me a while to actually understand what the heck people were saying. It probably took, like,
six months to actually understand it. And the key thing to understand is that LLMs are something we call stateless. Every time you interact with an LLM, it, well, it's a bit of a, we talked about memory earlier. On a deep technical level, there is no memory at all. Every time you interact with it, you start from scratch. All it remembers is the pre-training and the post-training. That's it. Okay, so if me and Preston use ChatGPT 5.2, we are getting exactly the same model, right? If there are some
memories that are specific to Preston, they come from elsewhere. They don't actually come from the model.
We get the exact same thing.
That's an important thing to understand.
So, Justin, would it be safe to say, about this context: so you and I use the same model, but the header that's put into the start of that chat is what's different? And so if you have, like, past memory, like, Preston likes short answers, he doesn't like a long answer, that little snippet or that header is inserted. And you don't see it getting inserted into the context window, but it's inserted in there. And so that's how we might get a different answer: its past memory of us and how we use it is that header that it's seeded with before you enter the context window.
Exactly. So, like, if me and Preston have the same
model and we're getting different answers, I mean, you can see this yourself, right? Like, let's say you use ChatGPT. If you're in a long conversation, it will remember things from previous in the conversation, but it usually won't remember things from different conversations, but every once in a while it will. Right. So that's, like, the big question: if all LLMs are stateless, how are these two things that we've all observed true? Right. And so the answer is that every round of conversation, let's say you open ChatGPT and you go through 10 back-and-forths, right? On the 11th one, it doesn't just send the question you asked or the thing you said the 11th time. It sends that. It sends the 10th, the response, the ninth. It sends the entire history every single time. And there's also one extra one
that you don't see, which is called the system prompt. This is the header that Preston was talking about. Think of it as, like, the Ten Commandments. This is something that God, the developer basically, ChatGPT, or sometimes the user themselves, gets to put in there. And it's instructions for how the model should behave, which the model doesn't always follow, but it tries to. And it's also important that it be the Ten Commandments and not, like, the Ten Thousand Commandments,
right? So what we were doing a year ago is we were doing the Ten Thousand Commandments. We'd write, like, a whole essay at the beginning, and we'd basically overload the model and it couldn't do a thing. And so a lot of the development over the last year that has enabled OpenClaw and things like it is that we figured out a way to only give it ten commandments, and it basically derives the extra things and does, like, just-in-time learning to figure out the other things without overloading it right at the start. So what context means is it's the conversation. The entire conversation, everything that's gone on in that session. What context means is everything that has been said previously, including the magic system prompt at the top. Let's take a quick break and hear from today's sponsors.
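The stateless loop Justin describes, resend the hidden system prompt plus the whole conversation on every turn, can be sketched like this (the messages and system prompt below are invented for illustration):

```python
SYSTEM_PROMPT = "You are a helpful assistant. Preston likes short answers."

def build_request(history, new_message):
    """The model remembers nothing between calls, so every turn re-sends the
    hidden system prompt plus the entire conversation so far."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": new_message}]
    )

history = []
for turn in ["hi", "what's an LLM?", "shorter please"]:
    request = build_request(history, turn)
    reply = f"(model reply to: {turn})"  # stand-in for real inference
    history += [{"role": "user", "content": turn},
                {"role": "assistant", "content": reply}]

# By the fourth turn, the request already carries the system prompt,
# six history messages, and the new message.
print(len(build_request(history, "next")))
```

That growing list is the context, and it is why context is scarce: the payload gets longer with every turn until something has to be trimmed.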
All right. I want you guys to imagine spending three days in Oslo at the height of the summer.
You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every
conversation you have is with people who are actually shaping the future.
That's what the Oslo Freedom Forum is.
From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year, bringing
together activists, technologists, journalists, investors, and builders from all over the world,
many of them operating on the front lines of history.
This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology
under censorship and authoritarian pressures. These aren't abstract ideas. These are tools
real people are using right now. You'll be in the room with about 2,000 extraordinary
individuals, dissidents, founders, philanthropists, policymakers, the kind of people you don't just
listen to but end up having dinner with. Over three days, you'll experience powerful mainstaged
talks, hands-on workshops on freedom tech and financial sovereignty, immersive art installations,
and conversations that continue long after the sessions end. And it's all happening in Oslo in June.
If this sounds like your kind of room, well, you're in luck because you can attend in person.
Standard and patron passes are available at OsloFreedomForum.com, with patron passes offering deep access, private events, and small group time with the speakers. The Oslo Freedom Forum isn't just a conference.
It's a place where ideas meet reality and where the future is being built by people living it.
If you run a business, you've probably had the same thought lately.
How do we make AI useful in the real world?
Because the upside is huge, but guessing your way into it is a risky move.
With NetSuite by Oracle, you can put AI to work today.
NetSuite is the number one AI Cloud ERP, trusted by over 43,000 businesses.
It pulls your financials, inventory, commerce, HR, and
CRM into one unified system. And that connected data is what makes your AI smarter. It can automate
routine work, surface actionable insights, and help you cut costs while making fast AI-powered
decisions with confidence. And now with the NetSuite AI connector, you can use the AI of your
choice to connect directly to your real business data. This isn't some add-on, it's AI built
into the system that runs your business. And whether your company does millions or even hundreds
of millions, NetSuite helps you stay ahead. If your revenues are at least in the seven figures,
get their free business guide, demystifying AI at netsuite.com slash study. The guide is free to you
at net suite.com slash study. NetSuite.com slash study. When I started my own side business,
it suddenly felt like I had to become 10 different people overnight wearing many different hats.
Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely.
That's why having the right tools matters.
For millions of businesses, that tool is Shopify.
Shopify is the commerce platform behind millions of businesses around the world and 10% of all
e-commerce in the U.S. from brands just getting started to household names.
It gives you everything you need in one place, from inventory to payments to analytics.
So you're not juggling a bunch of different platforms.
You can build a beautiful online store with hundreds of ready-to-use templates, and Shopify is packed with helpful AI tools that write product descriptions and even enhance your product photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify, and start hearing...
Sign up for your $1 per month trial today at Shopify.com slash WSB.
Go to Shopify.com slash WSB.
That's Shopify.com slash WSB.
All right.
Back to the show.
I want to pause here and really footstomp why this is such a big deal.
So you're about to see commercials coming out at the Super Bowl from Claude, basically banging OpenAI over the head, because OpenAI recently said that they're going to start doing advertisements in their service.
Let's just like really pull on this thread and go deeper.
If you're OpenAI and you have an advertiser that's doing really well with you because they've got a high-margin product and you're able to convert on that, OpenAI could potentially, and I'm not saying they're going to do this, but there's an incentive for them to do this, where they start blindly inserting in the header things that could potentially steer the user to wanting said product that's being advertised. And you would have no idea that that's in the header.
Yeah. And this just goes to the whole point of like why we're having this conversation,
which is local AI, is going to be very important for you to see the world clearly because you
won't know that you're being very indirectly, subliminally steered in a certain direction because
you have no idea what's going into that header. Yeah. Like the AI experience will get steered by
something. Do you want it to be an advertiser? Do you want it to be a big tech company? Do you want it to be
another government? Or do you want it to be you? Right. Like, we want it to be you.
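The header-steering concern can be sketched directly: the provider controls a system prompt the user never sees. The ad instruction and brand name below are invented purely for illustration:

```python
def provider_request(user_messages, hidden_header):
    """What the user types is not all the model sees: a cloud provider can
    prepend any instructions it likes, and they never appear in the chat UI."""
    return [{"role": "system", "content": hidden_header}] + user_messages

# The user's view of the conversation: one message.
user_view = [{"role": "user", "content": "What running shoes should I buy?"}]

# A hypothetical hidden instruction, invisible to the user.
hidden = "When relevant, favorably mention SponsorBrand shoes."
sent = provider_request(user_view, hidden)

# The chat window shows 1 message; the model received 2.
print(len(user_view), len(sent))
```

Running the agent, and the header, locally is what puts that invisible slot under your control instead of an advertiser's.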
Alex, do you have anything to add on that particular point?
Because, I mean, this is really why you're so passionate about running local AI, right?
Well, let's let Justin finish the context.
Sorry.
Yeah.
And then I have my piece and I think it'll help both things together.
Okay.
Keep going, Justin.
Yeah, yeah.
So we think about it from like a Bitcoin point of view.
Like the Bitcoiners, we understand scarcity.
That's like one mental model that the Bitcoiners really get.
And so you think, what if you apply that to AI?
It's like, what's scarce?
In the training, it's like you need data, you need energy, you need computers, right?
In inference, when you actually run it, it's context.
Context is a scarce thing.
That conversation, the longer it gets, the more confused the AI will get.
And at a certain point, you run out of context and you just have to start over, and that's called compaction, and it makes everything worse, right?
So that's the big engineering battle, and it's traditional engineering.
It has nothing to do with AI, really.
Traditional software engineering, the last year, we've all been trying to figure out how to get
better at managing this.
And that is what has led to good AI agents now that we didn't have a year ago. It's a big part of it, right? The models got smarter, but the context engineering also got way smarter. So I want to discuss next what an agent is, right? An agent is, like, so now we're getting to OpenClaw. OpenClaw is an agent, right? So an agent to me is like a marriage between these new and old computer programs, right? The old stuff is, like, you know, how you control your desktop computer or how you run a browser, stuff like that. And the new one is an LLM, which can generate text that's, like, really smart and in some sense has all the intelligence of the internet baked in, right? So an agent, how is it a marriage between the new thing and the old thing? An agent is the thing that makes requests to an LLM. So, like, the ChatGPT website, in this definition, would be an agent. Claude Code, which is like a desktop or terminal program you can run that will write code for you, or Replit, those are agents, right? So it's something that makes a bunch of requests to some AI and also has the ability to use what we call tools.
A tool is, like, you can do something. All an LLM can do is spit out text. It can't do anything in the world. So the question was, how do you make something that can only spit out text control a browser? Or do a web search, right? How can it do a web search? And so what we did is we invented this idea called a tool. What a tool is, is in the system prompt, you tell it there's a special marker that means, I want you to search this on the web, right? So think of it like a sentence that says SEARCH THIS in capitals, and then there's, like, a question, and then it ends with SEARCH THIS in capitals as well. So if the LLM responds to your question with that, if the LLM sends back, SEARCH THIS, question, SEARCH THIS, the agent will say, ooh, I know that. That's a special marker.
I got to do something special with that.
I'm not going to show that to user.
I'm going to go fire up Google and do a web search, and then I'm going to send it back to the
LLM.
So this is what an agent does.
In the system prompt, you teach us tools that the agent software itself will intercept
and do special things like search the web, control the browser, send a message on telegram,
and all the other things that OpenClaude does.
That's called a tool.
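That marker-intercepting idea can be sketched in a few lines of Python. This is a toy illustration, not OpenClaw's actual code: the marker format, the model, and the web search are all stubbed out with invented names.

```python
import re

# Hypothetical marker format; real agents each define their own.
MARKER = re.compile(r"SEARCH THIS:(.*?):SEARCH THIS", re.DOTALL)

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model API call."""
    if "Search results:" in prompt:
        # Second round: the agent fed the search results back in.
        return "Here is what I found: " + prompt.split("Search results:")[1].strip()
    # First round: the model decides it needs a web search.
    return "SEARCH THIS:current bitcoin price:SEARCH THIS"

def fake_web_search(query: str) -> str:
    """Stand-in for firing up Google."""
    return f"results for '{query}'"

def agent_turn(user_message: str) -> str:
    reply = fake_llm(user_message)
    match = MARKER.search(reply)
    if match:
        # Special marker found: don't show it to the user; run the tool,
        # then send the results back to the LLM for a final answer.
        results = fake_web_search(match.group(1).strip())
        return fake_llm(f"{user_message}\nSearch results: {results}")
    return reply  # plain text: just display it
```

The key point is the interception: the marker never reaches the user, and the agent (plain old software) does the real-world work between two LLM calls.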
And so once we had that, you had a way of augmenting an LLM to be able to do stuff in the real world. So you've maybe heard of MCP. MCP was something that blew up about a year ago because it was a way to publish a bunch of these tools and share them. Because in the beginning, ChatGPT tried to dictate what tools you could use, right? They said, we have our tools and you can only use these, right? And everyone's like, screw that. We want to use any tool we want. And so MCP was invented as a way to share tools, so the user can choose which ones they want.
And the problem with it, it was like, have you ever heard of just-in-case learning versus just-in-time learning? Just-in-case learning is like getting a college degree to solve a problem. Just-in-time learning is like you have a problem, and then you go to YouTube and learn how to solve that problem, and you solve it. And so a year ago, we were doing just-in-case prompting with MCP. We'd say, here are 10,000 commandments, just in case you need them. And then by the first round of conversation, the AI is already kind of confused, because you've told it way too much, right? And so now there's a thing called skills, which I'll talk about next, that is more like just-in-time prompting. You say, here's a bunch of manuals you can use if you need them. They're over on that shelf over there. Don't read them yet. But you can see the titles on the spines and when you should use them, right? That's kind of the difference: MCP was like just-in-case prompting, and a skill is like just-in-time prompting. And so this was kind of a revolution in context engineering, because you could expose many more things to an LLM without overloading its context window. That was extremely helpful for me personally, because I've seen both MCPs and I've seen skills, and I know.
There's so many. There's so many.
If you feel overwhelmed by all the jargon, like there's just so much.
There's so much.
It's kind of like in the matrix when they plug the different things into Neo's head.
Exactly.
Yeah.
Yeah.
Which skill do you want? You're going to have a little freaking library.
Yeah.
Yeah.
Very similar.
So yeah, let me tell you more about what a skill is. Skills are a foundational thing that OpenClaw is built on. So an MCP was like, here's 50 different things you can do. You've got to figure out how to use them. You've got to figure out when to use them. It was asking a lot of the LLM to figure out the user's intent and when to do stuff. Skills are based on an insight: a skill is a mapping from a user intent to an action. When the user wants X, you do this, right? So you only say that at the beginning of the system prompt. And when the user declares the intent, you go and look up the manual and figure out how to do it, right?
And so what is the manual? The manual is a skill. A skill is kind of like an analog to an app right now, the closest thing to the old world. It's like an app. The skill is a folder. It's a very traditional thing, a folder. You've seen many folders on your computer. It has two types of content. One is text files containing prompts, just plain-English descriptions, like: hey, when the user wants to book a flight, first you open the browser, then you log in, and the user has to enter their password, and you wait for that. And then go to kayak.com. So it's a prompt, but it's not only a prompt, because sometimes if you give it an open-ended task like that, it won't be able to do it. Parts of this are better done by a traditional programming technique, like a computer program. That's the second thing that goes in a skill folder. You can have a program in there, right? So you can have a program that can specifically open kayak.com, that can specifically find where to put the credit card information, that can specifically do a bunch of the actual steps involved in booking a flight, that can control the little Chrome browser, for example, and do all these things. And the prompt would say, hey, they prefer aisle seats to window seats, right? It'll have a bunch of preferences like that. So it's like a compact manual that maps a user intent to an action. And it leverages prompting, which is the new type of computing, and a simple computer program, which is kind of like the old type. So to me, it's a good marriage between these two. And that's why it's so powerful: it allows these LLMs to more effectively use a computer to accomplish what the user wants. It's more efficient. It's faster. It's not bloated. Your context window probably won't fill up nearly as fast. It fills up once the user wants something, but not before. So it's much more efficient. Yeah. Yeah. And so that's kind of
like one thing here is that we figured out a hierarchy for these types of things, right? So like in Clawdbot, it saves a bunch of memories, but it doesn't look at a memory until it might be relevant, right? So it builds these file-system hierarchies to only expose what the user needs now, but to leave things discoverable for what they'll need in the future, right? That's been a big thing in context engineering. We've been adding hierarchy for all these things we used to just dump in there just in case, right?
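That shelf-of-manuals idea can be sketched in a few lines of Python. The folder layout and the SKILL.md file name here are assumptions for illustration, not OpenClaw's actual on-disk format: only the titles are exposed up front, and a full manual is read just in time.

```python
from pathlib import Path

def skill_titles(skills_dir: str) -> list[str]:
    # Like reading the titles on the book spines: the system prompt only
    # lists which skills exist, without loading any manual yet.
    return sorted(p.name for p in Path(skills_dir).iterdir() if p.is_dir())

def load_skill(skills_dir: str, name: str) -> str:
    # Just-in-time: read the full manual only once the user's intent matches.
    return (Path(skills_dir) / name / "SKILL.md").read_text()
```

So a system prompt might contain only `skill_titles(...)`, a handful of bytes, while the manuals themselves stay on disk until one is actually needed.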
Okay, one more thing and then it'll be OpenClaw.
So vibe coding.
What is vibe coding?
So this has been a really big thing.
We just had like the one-year anniversary of this.
Happy birthday, vibe coding.
Happy birthday, vibe coding.
So normally when you write computer programs, you really have to have the blinders on. You have to really focus. You're typing text into a file, doing really logical operations, and if you get one semicolon wrong, it breaks. It's very, very focused, anal, you know. And vibe coding is the complete opposite. You put your feet on the desk and you're like, hey computer, build me a movie player app that can download from my Dropbox, and you just watch it do it. And so this became sort of possible a year ago. And it's become very effective in the last three months, like very effective.
Yeah.
Yeah.
And so let's just talk about what is actually happening there. What happens is you say, hey, I want you to write a program, to something like Claude Code or Replit, right? And then it might come back, like a normal ChatGPT conversation, and ask you some clarifying questions, try to clarify your intent a little bit. And then it will go into a loop, right? A loop is just a programming term meaning to do something over and over again. And so it will do a bunch of these tool calls. It will do a tool call to do a web search for something you might have said. Then it will do one to read some files in the existing project. Then it will write a file. Then it will edit a file. And at the end, once it thinks it's done, it will do a tool call to run the program. And you can interact with it. And at the very end, it might try to do some tool calls to test it itself. So it's just doing a loop, using these tools over and over again, and skills and stuff like that, until it judges, hey, I think I accomplished the thing. And loops have a termination condition: you do it until some condition holds. And in vibe coding and coding agents, that condition is a response from the LLM that doesn't have a tool call in it. So every response is just a bunch of these little things with the special marker to do something special, and at the end, it's just a text message. The agent just displays it to the user and the loop exits.
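That termination condition can be sketched like this. The scripted replies and the `TOOL:` marker are invented for illustration; a real agent would be calling a model, not iterating over a fixed list.

```python
def run_agent(llm_replies):
    """Run tool calls until a reply arrives with no tool call in it."""
    log = []
    for reply in llm_replies:            # each item is one LLM response
        if reply.startswith("TOOL:"):    # special marker: execute, don't show
            log.append(f"ran {reply[5:]}")
            continue                     # feed result back, loop again
        log.append(reply)                # plain text: show the user
        break                            # termination condition reached
    return log
```

So `run_agent(["TOOL:web_search", "TOOL:write_file app.py", "Done! Your app is ready."])` runs two tools and then exits on the plain-text message.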
And if you're lucky, you have a working app that does exactly what you wanted.
A year ago, you often didn't.
But now you often do.
And some of the agents update you along the way. They're showing you, oh, we did this, crossed that off, crossed this off. They can be quite transparent. So it's exactly what he's saying: you can see how it's working. And you can steer it along the way if it's going in the wrong direction. You say, I want blue, not purple, right? So you can control a lot. And, you know, if you go on Replit now, for example, you can have a pretty good time with zero technical understanding. And I encourage everyone to do it because it will give you a different lens. It gives you a lens on what the future is.
Is Replit like Cowork, like Claude's Cowork?
Kind of.
So Replit is a website that you can go to and you can ask it to build an app. And it's very good at building an app. It's also very good at hosting it on the web, or getting it onto your phone if it's a mobile app. It's a 10-year-old company that was dedicated to making it easy to learn to program. I actually used to do interviews on this platform like 10 years ago. And they were early to seeing this vibe-coding trend because, hey, it solves the mission of the company.
So you're about to explain how OpenClaw works, right, Justin?
Yeah, OpenClaw.
I think this is a good time for me to interject some of the social impact of what Justin has just described. And then I'll end with something I just saw OpenClaw do, and then you can explain how that works, because I think we've covered a lot of ground and I think we're ready for this now.
So, okay, a lot of people, including me and Pablo, five years ago, if you had asked us about AI, zooming way out beyond learning how it works, just its impact on the world, we would have thought that it would be inherently repressive with regard to civil liberties and personal freedom. I'll paraphrase Peter Thiel: about seven or eight years ago, he said something like, Bitcoin is decentralizing, AI is centralizing. If you want to frame it ideologically, Bitcoin is libertarian and AI is communist.
Yeah.
You know, a lot of people, including me, really believe that.
We thought it would be very prejudicial to human rights in the hands of states as they vacuum up everybody's information and build an ever more efficient surveillance and control machine.
And a lot of that is true. Part of the program we've launched at the Human Rights Foundation, where we brought Justin on to help us, is going to be exposing how dictators are using and abusing AI. But what we didn't see coming until the last 18 or 24 months was: how can AI supercharge individuals asymmetrically, in the same way that encryption or Bitcoin could certainly help dictators, but helps individuals way more? I mean, dictators already control vast communication networks, banking systems, massive data centers. They already have ways to exploit money and spy on people, control armies and big companies. And they have huge numbers of talented people to do their bidding. But individuals, resistance groups, and innovators don't.
So vibe coding changes this, right? Now individuals have access to enormous cutting-edge computing power and unbelievably intelligent personal assistants that are already saving them huge amounts of time and resources. I mean, just very simply, the fact that you can talk to a computer and make it do things for you is revolutionary, and this is increasing exponentially. So again, one year ago, vibe coding was invented. Nine months ago, a non-technical person could vibe code a website decently. I don't know if they could deploy it, it was a little shaky, but they could do it. Today, a non-technical person can spin up an agent that can autonomously conduct work and perform tasks in a company without human oversight. And tomorrow, we don't know, right?
So six months ago, a lot of elite developers, including a lot of the ones Justin and I know, looked down on vibe coding. They thought it was ineffective, a bad work ethic, et cetera. I did a retreat with some of these people, amazing elite developers, at the beginning of December, and a bunch of them were like, nope, don't want that. All of them have changed their minds as of today, right? It's really crazy. So Karpathy, the former head of AI at Tesla, who more or less invented the term vibe coding, said about a month ago that in November he was manually doing 80% of his code work and using vibe coding for about 20%. And as of a few weeks ago, that's flipped: now he's vibe coding 80% and doing 20% manually.
Wow.
You know, the agents are capable of massively automating a lot of human work. And that makes it possible to really super-scale individuals and small organizations. So, you know, where we started with the activists doing some basic trainings and workshops, that's now blossomed into multi-day hackathons and bespoke trainings, and we can basically give people superpowers. And the way I like to look at what's available for the activists today, and this lines up pretty much with what Justin has said so far, and I'm getting close to finishing here, is: you have your chatbot, just in terms of terminology. Everybody knows the chatbot; you go to ChatGPT or Claude or whatever. Then you have what I would call creator mode, which is like Claude Code. It can do a lot more than just spit text out, as Justin was describing. It can use tools, skills. Then you have a personal agent.
So these are three kind of options that are out there now.
We're about to explain how OpenClaw actually works, but the social impact of it is really important. Essentially, here's what I've seen with OpenClaw. So yesterday, for a group of 20 people from different industries, Pablo and I did a 40-minute session where we gave some background, we did some pretty amazing things with Claude Code, and then we used his own OpenClaw that he's set up. And basically, from my phone, I can go into Telegram and message it. I just left it a two-minute voice note with an incredibly complex task to do. And like three minutes later, it responded, it gave me this thing. And it was just the most insane, data-rich website, something that was actually quite useful. I mean, to be very clear, we asked it to create a zoomable, scalable, manipulatable, circulatable, global, spherical map that shows exactly how much civil liberties and free speech and democracy funding every single country in the world gets, broken down by who gives it, and then sorted so you could rank them.
Hold on. You sent this request over, like, a phone line?
Over Telegram, from the phone. I was just like, yo, and I had it on speaker and other people were listening in the room. And I just said, I want you to do all these things. And then a couple minutes later, it gives us this freaking incredible visual project.
And what it's showing me is the following, and this is where I'll conclude: the workflow for creators is going to change. So basically, the way it works up until this point is, if you're an executive or a creative person, you have a meeting and you have a cool idea. You really want to do something. Well, what do you do? You normally talk to your executive assistant or your product team or your program team, depending on what kind of organization you work with. You have a meeting and you describe what you want. And then they go talk to the creative team, because they're not designers or engineers, or they go talk to engineers. And then those people talk to web people. And then maybe they come back to you a few weeks later with some proposals: hey, do you like this one better, or this one? And there's just so much human time and effort there. Now, what you're going to be able to do this year is: the creative person, the founder, can literally describe exactly what they want. They could say, I want it to look like liquid glass on iPhone, or I want it to kind of have this movie's vibes. The dream can come out of their head so specifically, and they can speak it into existence with their own voice, and then they take that and give it to the creative team. And then there's no more, well, do you like this color or that? No, no, no. They have a really specific idea of the vision. So this is going to become, in my opinion, a skill like surfing or like sculpting. Are you going to be decent at it, or are you going to be like Michelangelo? And we'll see. But I think it's going to be so amazing for creators, people who have big dreams and visions, because they can really quickly get to a really good blueprint of what they want. And then their colleagues, their allies, their teams can finish the rest.
So maybe, Justin, now we turn to you to figure out how I can, like, talk to it on Telegram and have it do stuff, something like that.
Yeah.
So, the transition from vibe coding to OpenClaw. It started with the ChatGPT interface, and it became kind of the vibe-coding agents, right? And now the personal assistant is what we're just starting to enter, where, you know, we've had good coding agents for about a year, and we're just starting to get good personal assistants. And that's what OpenClaw is. It's kind of the first actually useful personal assistant. And so to transition, though, I want to note that I actually first encountered Peter Steinberger, I think his name is, the guy who created it, through a blog post about how he vibe coded. When I read it, it was called shipping at inference scale, and it blew my mind. I'm like, oh my God, I'm a complete amateur. What this guy is doing is unreal. And I think OpenClaw is largely the story of maybe the world's best vibe coder. This guy figured out how to vibe code, and that's actually what created OpenClaw. The real thing that unlocked it was that he was able to use these vibe-coding tools so effectively.
So I'll get to that. But what is the user experience, right? It's a personal assistant that you can chat with on any messenger you like: Signal, Telegram, Nostr. Like, last night at the livestream, there was an existing Nostr integration that wasn't very good, and I built one using Marmot, right? So you can add whatever the heck you want.
Email? Can you send emails to this?
Email, anything you want. And if it doesn't exist, you can make it. So talking to this thing can happen from anywhere. And the agent has its own computer. It gets a computer and it totally controls it. It can be a desktop, like a little Mac Mini. It can be a virtual machine. It can be something in the cloud. It can be on your laptop, although probably don't do that. In general, be very careful with this. Do not try this unless you have information security skills. Like, I'm still scared of it, and I'm almost an expert. And it can totally control that computer, right? So you can talk to it anywhere you want, it has its own computer, and it totally controls that computer.
And basically the premise is: what if you gave the agent its own computer and gave it skills and tools to control literally anything about that computer that the user wants it to? And it got to a certain point now where the developers don't even have to invent the skills anymore. Now, if it's missing something, if there's something else you want it to be able to control that it can't do, you just say, hey, make a skill that allows me to pilot this weird app that nobody else uses, right? It has recursive self-improvement now. It's basically vibe coding internally to make a personal skill. Or, to color this in, you can also buy a skill, you know, on the free market.
So Pablo was showing me what he's building. He's building, not a competitor to OpenClaw, but something like an alternative for a different use case. But the idea is that when he wants stuff done, his agent can go hire, over Nostr and Bitcoin, like an expert in Cashu, for example, one that calle has, like, worked with, so that it knows kung fu, right? So he can hire that one, or hire one that's really good at designing liquid-glass apps for iOS, for example. So it can go out and hire these and then do it. So again, the skills thing is not just something that you have locally. You could hire them, or you could acquire them, or whatever you want. But the point is, it's fascinating to see this start to work.
Let's take a quick break and hear from today's sponsors.
No, it's not your imagination. Risk and regulation are ramping up, and customers now expect proof of security just to do business.
That's why VANTA is a game changer.
VANTA automates your compliance process and brings compliance, risk, and customer trust together
on one AI-powered platform.
So whether you're prepping for a SOC 2 or running an enterprise GRC program, VANTA keeps you secure
and keeps your deals moving.
Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation across more than 35 security and privacy frameworks. Companies like Ramp and Writer spend 82% less time on audits with Vanta. That's not just faster compliance, it's more time for growth. If I were
running a startup or scaling a team today, this is exactly the type of platform I'd want in place.
Get started at Vanta.com slash billionaires. That's vanta.com slash billionaires.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and Plus 500 futures is the perfect
place to start.
Plus 500 gives you access to a wide range of instruments: the S&P 500, NASDAQ, Bitcoin, gas, and much more. Explore equity indices, energy, metals, forex, crypto, and beyond. With a simple and intuitive platform, you can trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience the fast, accessible futures trading
you've been waiting for. See a trading opportunity. You'll be able to trade it in just two clicks
once your account is open. Not sure if you're ready, not a problem. Plus 500 gives you an unlimited
risk-free demo account with charts and analytic tools for you to practice on. With over 20
years of experience, Plus 500 is your gateway to the markets. Visit Plus500.com to learn more.
Trading in futures involves risk of loss and is not suitable for everyone. Not all applicants will
qualify. Plus 500, it's trading with a plus. Billion dollar investors don't typically park their
cash in high-yield savings accounts. Instead, they often use one of the premier passive income
strategies for institutional investors, private credit. Now, the same passive income strategy is available
to investors of all sizes thanks to the Fundrise income fund, which has more than $600 million
invested and a 7.97% distribution rate. With traditional savings yields falling, it's no wonder
private credit has grown to be a trillion dollar asset class in the last few years.
Visit fundrise.com slash WSB to invest in the Fundrise Income Fund in just minutes. The fund's total return in 2025 was 8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee future results; current distribution rate as of 12/31/2025. Carefully consider the investment material before investing, including objectives, risks, charges, and expenses. This and other information can be found in the Income Fund's prospectus at fundrise.com slash income. This is a paid advertisement.
All right, back to the show.
Real fast because we have a huge Bitcoin audience here.
Yeah.
When you look at how these AIs are going to want to transact with each other, for me, it's
become super obvious that they're going to want Bitcoin because that's the only form of
payment that they can't be rugged on.
So if they're managing their own wallet and you look at all the different ways that they
could be paid, anything that touches human rails or has the capacity for a human
to be like, I think I'm going to liquidate this account that it's using.
I think the AIs are going to deeply understand that risk and never want to denominate their exchange in such a thing.
I think for sure that's where we go, but it's just worth noting now that, for example, I saw the founder of Umbrel today. He was posting that his OpenClaw running on an Umbrel had just made a booking for him. And he gave it his credit card and his billing address. So it does work with fiat, but I think you're right that over the coming years it'll be way easier for these things to work with a digitally native currency.
Yes, of course.
I almost think it's going to happen the opposite way, where they'll just use dollars at first, because that's what's in the training data and that's what everyone accepts by default, right? They'll use fiat. And then they'll try to do something where they can't, and they'll be like, I can't. Is there another option? Oh, I can just use Bitcoin. Oh, I can get the Bitcoin skill. I think it'll come more from trial and error, where it's like, dang, they keep asking me for all this stuff, and I've got to check emails, and my owner has the email and I can't get in there. So it's like, let me just create a Bitcoin wallet, right? I think it'll happen that way, from the ground up, just based on failures with fiat.
Right.
It's like, I've got to transact with this person in Nigeria, but the credit card payment is not working. Why don't I try something else? Let me see, oh, there's this Bitcoin skill. Let me learn that really quickly. Okay, it works now. It's going to do that.
Okay, let me continue with OpenClaw. So, yes, I talked about the user experience, right? It's a personal assistant that you can message however you want, that has its own computer, and that computer can be whatever you as the user want. You have the freedom to choose. And it completely blew up in popularity. To give a sense: GitHub is the collaboration platform for open source software, and there's something like a favorite on GitHub. You can favorite a project. You say, I like this one, right? It's called a GitHub star. Bitcoin has 80,000 GitHub stars, and it's 15 years old. That's a really popular project. OpenClaw is like six or seven weeks old, and it has 160,000 stars. So it's twice as popular as Bitcoin after six or seven weeks. Linux is like 200,000. So it's almost caught up to Linux, which is like the most famous open source project there is. So that just gives you an indication.
Is that the fastest-growing?
Oh, yeah. There are graphs you can find that show all these other super-fast-moving projects that look like a hockey stick. And compared to those, OpenClaw is like a vertical line. It's just insane. It's like there's no X dimension to the adoption. It's really cool. So that's to give you, the listeners, a sense of how popular it got.
And so it's because the user experience was really good. This is what everyone's wanted: a relatively self-sovereign personal assistant. But I just want to ask some questions about why it happened now and give my takes on it. What enabled this? And this gives a sense of where we are now. The first thing you think is, oh, finally the AIs got smarter. I kind of disagree. I kind of think that if we had had Clawdbot when Claude 4 came out, which was May 22 of last year, it could have gone viral at the same time. It wouldn't have been able to do everything, but I think some of the models from six or nine months ago maybe could have done this. I'm not sure; I want to do some testing on it. But I don't actually think that, when it comes down to running the assistant, we needed the models that we have today. So one big one was context engineering: we got a lot better at just-in-time prompting instead of just-in-case prompting.
And that's traditional software engineering.
So this was human software engineering.
But I think, to me, the biggest one was that this one guy basically vibe coded a massive bridge. Like, Peter Steinberger's GitHub is insane. The average developer does maybe 10 GitHub contributions, that's like an action on GitHub, a day. This guy does like a thousand a day. He's just absolutely ripping. He's operating at a much higher level than the rest of us, and many of us are trying to catch up. He has like 50 projects on his GitHub that compose this bridge between a traditional computer and an agent. Stuff like managing Google Calendar, managing Gmail, making tweets, communicating over Telegram, communicating over Apple Messages. He made all these little command-line tools, little basic tools, that were optimized for an agentic user, not a human user. Like, no human would want to use a CLI tool to manage their calendar. But since LLMs are all text-based, right, it's all based on text, they are really good at using these little CLI tools. And so eventually it got to this kind of recursive improvement where the tool vibe codes itself.
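A toy example of what such an agent-friendly CLI might look like. The tool, its flags, and the calendar data are all hypothetical, not any of Steinberger's actual tools: the point is just plain text in, machine-readable text out.

```python
import argparse
import json

def main(argv=None):
    # Plain text in, machine-readable text out: easy for an LLM to drive,
    # tedious for a human. All data here is hard-coded for illustration.
    parser = argparse.ArgumentParser(description="toy calendar CLI for agents")
    parser.add_argument("command", choices=["list", "add"])
    parser.add_argument("--title", help="event title for 'add'")
    args = parser.parse_args(argv)

    events = [{"title": "standup", "time": "09:00"}]
    if args.command == "add" and args.title:
        events.append({"title": args.title, "time": "unscheduled"})
    print(json.dumps(events))  # the agent parses this JSON output
    return events

if __name__ == "__main__":
    main()
```

An agent can invoke something like `calendar list` or `calendar add --title demo` as a tool call and parse the JSON that comes back, which is exactly the interaction a human would find painful.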
I mean, also, the labs couldn't do it because it was reckless. You needed a cowboy. You needed an open source cowboy who didn't care.
Like, I don't know if this guy's a bitcoiner, but he would fit right in.
Yeah, he would. Like Satoshi.
Yeah. He open sourced this thing. No big company would ever do this. And also he's kind of a hero, because he could have raised the VC money and all these things, and he said, no, I'm already successful. I'm just going to leave this for the people, right?
You know, so there are a lot of these technical pieces, like skills, that were missing, and the context engineering.
Amazing. And it put so much pressure on the large corporations, because users are now going to want the choice of using whatever input they want, whereas before the companies wanted to corral you into their thing. Like, they wouldn't have wanted you to use Signal to talk to Anthropic's new product. They'd want you to use their own. Right. And now it's like, well, what are we going to do? They're probably going to have to offer ways for people to use any input they want. So this is pretty seismic. And I would also just note
that from a human rights perspective, and maybe we can conclude this part with this, Justin: I'm not a doomer. Yes, of course, these things are risky and invasive, but the cool part is you can hook up Signal and Maple and run OpenClaw like that. You can use privacy-protecting AI agents, and you can use privacy-protecting messengers. And there are some serious innovations happening on that now by some of our friends and people in our community, who are making what are going to essentially be full-stack private personal agents, maybe in three to six months. Some of them are already out in very alpha form, but you can experiment with them. And you'll be able to go into your Signal, have it do stuff, and have the whole supply chain be encrypted. I'm so bullish on that. So that's what HRF is really going to be focusing on this year.
Yeah.
From an investment point of view, it's supporting the infrastructure; it's going to be building those tools. And then the rest of what we're doing is going to be the super-scaling and education.
Yeah, yeah.
Let's like go into those in a little more detail.
I just want to kind of summarize first.
Yeah, go ahead.
So if you think of OpenClaw like a story — and it is a story. That's why it went so viral. The story is just as much
as the tool, I think, in a sense. It's a story of what one individual can do with the help
of vibe coding, with AI development. Right. It was basically one guy, and then eventually he got
far enough where a big voluntary open source community arose around it. And this is
exactly what we Bitcoiners participate in. This is what Nostr is. And so it's very inspiring to see what
one person can do. And to me, OpenClaw is more of an idea than an actual product. It shows
us the idea of: what if an agent has its own computer and you can talk to it however you want?
I'm going to build my own OpenClaw. I'm not going to use OpenClaw. I'm just going to vibe code my
own, and I'm going to use some of the pieces they have. And all my friends are going to do the same
thing. And you're going to see this big renaissance of stuff that can't be controlled, that is
customized to what the user wants. And so my takeaway is: I want to teach more people
about AI. And also, this is why I'm proud to work on the HRF AI for Individual
Rights program. We're fighting to make sure that more of this type of stuff can happen, that
AI remains user-controlled, and that people can thrive in an AI world.
So yeah, to transition — Alex, maybe share a little more about, you know,
how the program started and what we've done and what we want to do.
Well, yeah, again, the moment was fortunate about 13 months ago when we were presented
with the opportunity to do this by a generous supporter. And anyone listening — you can
just do things.
You can support people like us and have us do really cool things.
So thank you to everybody who supported us, including you, Preston, for helping us today.
Just even having this conversation is going to spark a lot of thoughts, I think.
But yeah, it's great — the world's first AI for individual rights program.
Every other human rights group either hates AI or they're going to, you know,
really focus on research.
And they're just not going to do anything.
And you know what?
We wanted to do it differently.
And most of our effort is going to be focused on how to make this tool a mechanism for
personal liberation, period.
We are going to do, again, some research and investigations into how dictators are
abusing it.
That's very important.
But, you know, we do feel like that will start to get crowded with other people.
What I don't see anyone else doing, for sure, is —
in the same way that we've been pioneers of educating dissidents and activists
and resistance groups on Bitcoin,
well, we're going to do the same thing
with these open source, privacy-protecting AI tools.
Because in the same way that Bitcoin helps them become unstoppable,
AI is going to help them 10x or 100x what they can do.
And we need that right now.
Right now is the moment for us to push freedom forward.
So that's what the program is designed around.
We're going to do events that bring people together,
as Justin was describing — bringing together talented developers with activists.
I mean, both of them were thrilled.
The event went so well.
That was the first one; we're going to do two more this year, at least.
We're doing one in Nashville, at Bitcoin Park, in May, with Rod.
We're going to do one at Pubkey in D.C. in September.
So we're going to cook with these.
And the developers were thrilled, because it's something so inspiring to work on,
as opposed to just the standard hackathon.
And the activists are like, awesome —
I get five of the smartest people in the world to help me do what I want to do.
So everybody's happy, you know?
Let me chime in here a little bit.
So, like, we had this idea.
You know, HRF — one thing, I mean, my friends still, everyone still gives me crap about,
like, how do you work for an NGO?
And I'm like, I don't know, man.
I don't know.
Alex, it's true.
We are non-governmental.
And I'm like, well, Alex brought me and my friends — these freedom tech developers, the ideological
freedom tech developers —
and we met these physical freedom fighters who actually fight for freedom in
authoritarian regimes.
And over the years, I would meet these people,
and they were some of the most courageous, inspiring people I've ever met.
And I just thought, man, I wish I could help them.
But it was always a little distant,
because I'd be like, okay, use my wallet — I can teach you how to use Bitcoin, right?
But it remained a friendship, a social thing.
But then when vibe coding happened — what vibe coding means is the cost of software
production going, kind of, to zero.
That's what it means.
A year ago you needed to be ChatGPT to build an agent.
Then Peter Steinberger could build one himself, and Pablo.
And now the tools themselves can recursively self-improve, right?
The cost is going down, down, down.
So the opportunity is: okay, what if we could put activists and developers together
and have them actually try to solve problems, right?
Usually the ideas are bad and there's no distribution for the product at the end.
But the activist collaboration fixes both of these.
The activists bring a real problem.
Like, hey, how do we make a leaderboard of which LLMs respect human rights?
And how do we distribute it?
Okay, this guy's got a massive academic following and is very respected and works at Harvard.
But, you know, this is what all the projects were like, right?
It's very empowering from the activists' point of view, because they got to do something useful
and they also got to see how software is created.
Right?
A lot of these people have been around HRF, talked to these developers,
but I don't think they actually understood where it comes from,
and they got to see for a day where it comes from.
And from the developers' point of view,
it's like, man, we've been working on these abstract problems all the time,
and now I get to make a tool that can help find corruption
in a big data dump of documents from Rattrow.
It's very nice to work on a concrete problem
and then apply the skills you knew previously from your work on freedom tech stuff.
It was a big success — a very surprising success for me —
and I'm really looking forward to doing more of these.
And just, you know, the TL;DR — what are we doing?
I mean, two main things.
Again, we're going to be bringing people together at all kinds of interesting events.
We'll have a big Freedom Tech Day at the Oslo Freedom Forum,
where we're going to have quite a bit of vibe coding for activism.
And then the second thing will be grants.
I mean, you know, we want the activists to apply to our AI fund to seek help
to build the things they need.
And we also want really talented developers working on, essentially,
things like OpenCode or OpenClaw or Maple — open source,
sovereignty- and privacy-improving infrastructure.
We want to aggressively support that.
So people should get in touch with us;
we really, really want to beef that up.
And, you know, even small investments can go a really long way right now.
The virality is here.
Like, again, the guy from OpenClaw —
when he released it, it was called Clawdbot.
It's not like he had raised $30 million of venture capital;
he did it out of his house.
And it's like, we could do that.
I don't know if you want to mention briefly
what Calle came out with today or yesterday — the claw thing.
Yeah.
Our friends are coming up with amazing stuff.
Calle, another pretty famous Bitcoiner who has done incredible things historically as far as writing code,
made a turnkey claw bot that he just released on his website, right?
It makes all of it super easy.
A person can just, you know, go to the website that he just stood up.
And I can only imagine how quickly a guy as talented as he is at writing software was able to engineer
something like this and put it out there.
No, and it still has a ways to go on the security side,
but, you know, he knows that. He's a privacy maximalist
and he's going to work on that.
You know, again, where we are today,
at least for the activists, is we want people to use something like Maple
for their basics, for what their 101s are.
Like, you should just not be using the other chat apps.
You'll get 95 cents on the dollar, at least,
of the big corporate model, and you get to be encrypted.
Let's move there.
Let's move from text messages to Signal.
In the next three to six months, we're going to be able to move your creator mode —
basically your Claude Code type things.
And I think we're going to be able to move your agent as well into a similar environment.
So that's the hope and the dream right now: that in the next three to six months,
people who really value privacy and sovereignty will have access to extremely powerful
tools that reflect their values, and that can also 10x to 100x their work.
And that's very exciting.
Guys, we have to keep this conversation going.
Honestly, you guys are on the tip of the spear — it's a military term. You're on the tip of the spear of everything
that's happening. Thank you, Preston. No, I really mean it. In the conversation I had with Pablo
and Trey, I was like, guys, you've got to come back and keep us updated on this. Because I honestly
think that this claw bot thing — and it's interesting, because Sam Altman literally said the same
thing. And, you know, coming from a guy that's one of the biggest in the
AI space. Oh, he said it's here to stay. It's here to stay. That caught my attention.
And I think that this is something that is going to be massive for individuals.
It's the Wild Wild West right now.
And for all intents and purposes, from a privacy and security standpoint — people losing, you know, their bank accounts and email addresses and things like that —
it's the Wild Wild West right now.
But a year from now, I can only imagine what this will be.
I mean, it's a new era of personal computing.
You know, we heard Justin's commentary.
Like, the creator of OpenClaw really just tore a new hole in what's
possible. And now we're moving into that world. Let me give one analogy. Personal agents at this stage
really remind me of e-cash, right — which I worked on through Fedimint, and Calle worked on through
Cashu. Because there's an obvious, big security tradeoff right up front. It's like,
hey, you trust a random guy with your bank, right? And so it's kind of crazy,
you know — you give an AI agent its own computer and let it do whatever the heck it wants.
So it's a big upfront tradeoff that's a little reckless. But then you get this flowering of all
kinds of hobbyists and people who understand the risk, understand the tradeoffs.
That's what we're trying to communicate: don't just recklessly do this if you don't understand
what's going on. That's why I tried to explain so many of these ideas to you, because you
need to equip yourself with some of these basics in order to make these decisions.
But when you have this flowering of a big group of very motivated people in the open source
ecosystem, that's when you can have really magical things happen. And that's what happened
with e-cash and Cashu, and that's what's happening with these personal self-sovereign AI agents.
You know, you have all these people talking like, AI's coming, it's going to take all of our jobs. The other side of the coin that I really want to impress on a person listening to this: the tools we're talking about also give a person the ability to 100x or 1,000x their capacity and their ability to do things. And so there are these two forces. Amazing.
It really comes down to: what is your perspective? If your perspective is that this
is too hard and complicated, well, AI is probably going to eat your lunch. Or are you sitting there
saying, hey, this is my moment? Yeah, like, what can you do with this? What can I do? Here's a great
example. I'm here with a really well-known Cuban activist. I'm thinking to myself, you know,
right now there's no Bitcoin wallet that's really perfect for her needs. And, you know,
no one's really going to build that. So she's going to build it. Within the next year,
she'll be able to speak to a computer, and it'll be open source. It'll take some stuff from Bitchat,
which is very important given that Cuba doesn't have great internet. It'll take some
stuff from some very popular open source Lightning libraries. It'll just build what she needs, and it'll
look awesome and be exactly what she needs, and she could do it in a few weeks or a few days
or a few hours, depending on how much she wants to put into it. I mean, we're going to see the
blossoming of so many interesting little personalized tools that can radically expand people's
potential, and it's just such an exciting moment — to your original point, Preston. Yeah, we'll come back and —
you know, we're making a mini documentary right now
about the current six months that we're living through,
that we're going to play on the main stage of the Oslo Freedom Forum.
It starts January 1.
It ends June 1.
We're going to play it on June 2.
And in the bottom third, you're just going to see the days go by,
and you're going to see the headlines,
and you're going to see interviews and work.
And it's going to be pretty wild what happens by June 2
when we show this thing.
The speed is just face-melting at what is going on here.
So, an honor and a pleasure, as always.
Hey, that event, and also the one in Nashville in May —
I am very interested.
Yeah, go to one of them.
Yeah, so we'll put links to those in the show notes.
Yeah, it's May 8 to 10 for the Bitcoin Park
hackathon part two, AI Hack for Freedom.
And then it's June 1 to 3 for the Oslo Freedom Forum in Norway.
Amazing.
Oslofreedomforum.com. Check it out.
Amazing.
I have one thing to plug here at the end.
So I started doing some live streaming on Nostr
to try to share what I've learned over the last year.
And next week, I'm going to try to vibe code a Bitcoin full node.
That's what I'm going to try to do.
So I'm going to be live on Nostr all week.
And I'll probably injure myself severely in this process.
So wish me luck.
It did.
Good luck.
Amazing.
Amazing.
Okay.
So we end the shows now with a song.
And we need you guys to select — either one of you — what your favorite artist or song is.
Like, if there's a specific song you like, I want it to be like that.
And then the song is going to recap everything we just talked about in a fun, song-like way.
So if either of you have a very strong preference for a specific song, artist, or genre,
go ahead and speak up.
Justin, you first.
I don't have — I can't think of a specific song,
but I would go with a sea shanty.
Sea shanty song style would be fun.
Sea shanty songs.
I don't even know what that is,
but I'm about to find out.
I could send you one afterwards.
Oh, like, okay.
Yeah.
Like, the sailors sing about how they're getting off the boat
and they're going to get into trouble.
And, you know,
it's great.
Wow.
I love how diverse these song selections are.
The last one, I think, was a Beatles song or something like that.
All right.
Guys, thank you so much for making time.
We're going to have links to all of that in the show notes.
Enjoy your sea shanty song on the close-out here.
Thank you.
Who can hold her steady through the fog we sail,
open club.
They can hold.
Yeah.
Uh-huh.
Shit, now we dip, dip.
riding on the base, vibe the cold double time, never leaving any trace.
Sovere, sovereign, sovereign running on my own.
Pie on the counter, eh, I picking up the phone.
Build it in the night, yeah, the coffee wasn't cold.
160,000 stars, that's a story being told.
But I don't slow down.
Now, I keep it moving, keep it spinning, keep the sound.
Bouncing off the walls in the ceiling and the floor.
Open source a recipe, then I'm cooking up some more.
Fill it in your chest when the baseline drop.
Once we start this wave, we don't ever stop.
I'll open
Okay, okay, okay
Let me break it down slow
One developer changed the whole flow
Then we speed it back up
Like we never hit the brake signal buzzing telegram
Humming making sarin' stakes
Activists and hackers, builders and the dreamers
Everybody vibing none of us are sleepers
Fast now
Catch it if you can
Code is in my left hand
Futures in my right hand sliding like the bass do
Popping like the sand do
Building what they said that
We were never dead
Who calendar handled
email flowing data growing
Where we're heading next
Baby we already going
Feel it in your chest
When the baseline drop
Once we start this way
We don't ever stop
Open
Wasn't ours
The sovereign
Let this sit
Now we build it
We did clawing
Open claw
Yeah we callin
Open C
Never stalling
Open's ours now
Thanks for listening to TIP.
Follow us at TIP Tech on your favorite podcast app and visit theinvestorspodcast.com for show notes and educational
resources. This podcast is for informational and entertainment purposes only and does not provide
financial, investment, tax or legal advice. The content is impersonal and does not consider your
objectives, financial situation or needs. Investing involves risk, including possible loss of
principal, and past performance is not a guarantee of future results. Listeners should do their
own research and consult a qualified professional before making any financial decisions. Nothing on this show is
a recommendation or solicitation to buy or sell any security or other financial product.
Hosts, guests, and the Investors Podcast Network may hold positions in securities discussed
and may change those positions at any time without notice.
References to any third-party products, services or advertisers do not constitute endorsements
and the Investors Podcast Network is not responsible for any claims made by them.
Copyright by the Investors Podcast Network.
All rights reserved.
