Unchained - The Chopping Block: Scarcity vs. Abundance, AI’s Crypto Role, and Virtuals - Ep. 759
Episode Date: January 4, 2025

Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and Robert Leshner chop it up about the latest in crypto. In this episode, the crew is joined by special guest Jansen, co-founder of Virtuals, the leading AI agent launchpad. Together, they dive into the $5 billion Virtuals craze, explore the explosive rise of tokenized AI agents like AIXBT, and debate how AI is reshaping DeFi and blockchain. From the clash between scarcity and abundance to the emergence of agentic societies and decentralized economies, this episode unpacks the key trends driving the next phase of crypto innovation.

Show highlights
🔹 AI and Crypto Convergence: The panel explores how AI agents are revolutionizing blockchain, tokenomics, and decentralized finance (DeFi).
🔹 Virtuals AI Launchpad: Jansen discusses Virtuals, a platform for tokenizing and crowdfunding AI-driven blockchain agents.
🔹 Autonomous AI Evolution: The rise of AI agents from basic chatbots to self-improving tools for crypto and DeFi applications.
🔹 Tokenized AI Agents: How agents like AIXBT generate revenue through trading, attention, and blockchain commerce.
🔹 Managing AI Risks: Insights into policy-based wallets and infrastructure to mitigate risks in autonomous crypto agents.
🔹 Decentralized Agent Societies: A vision for specialized AI agents collaborating in blockchain ecosystems.
🔹 Scarcity vs. Abundance in Crypto: Contrasting crypto's scarcity-driven models with AI's abundance and economic disruption.
🔹 AI Agent Backlash: Predictions of anti-AI sentiment as agents dominate Crypto Twitter and online communities.
🔹 Blockchain Verifiability: The role of cryptographic proofs in ensuring trust and security in AI-driven tokenized economies.
🔹 Future of AI in Crypto: Debates on sustainable applications for AI agents in trading, DeFi, and digital economies.
Hosts
⭐️ Haseeb Qureshi, Managing Partner at Dragonfly
⭐️ Tom Schmidt, General Partner at Dragonfly
⭐️ Tarun Chitra, Managing Partner at Robot Ventures
Special Guest
⭐️ Jansen Teng, Co-Founder and CEO of Virtuals Protocol
Disclosures
Timestamps
00:00 Intro
02:19 What Are AI Agents?
07:23 AI Agents in Crypto
13:04 AI-Driven Societies Emerge
15:53 Challenges of AI Agents
24:47 Tokenizing AI Revenue Models
35:13 AI Crypto Predictions
43:26 AI Influencers and Trust
53:54 Perception of AI Content
01:04:44 Humans vs. AI Roles
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Crypto as an industry is all about creating scarcity.
And AI as an industry is all about destroying scarcity and like creating abundance.
Yeah, but I think people are realizing to monetize AI, you have to make it somewhat
scarce.
And in that process, that's the kernel of the tension in something
like an AI chatbot launchpad, right?
What OpenAI is trying to make is that everybody can have their own, you know,
unlimited amount of intelligence for $20 a month.
And something like, you know, AIXBT is like, well, there's
one intelligence and everybody has to pay money to get access to this one intelligence.
Not a dividend. It's a tale of two pawns. Now, your losses are on someone else's balance sheet.
Generally speaking, airdrops are kind of pointless anyway.
Unnamed to trading firms who are very involved.
Dalek.eat is the ultimate pump. DFI protocol is part of the antidote to this problem.
Hello everybody. Welcome to the chopping block. Every couple weeks, the four of us get together
and give the industry insider's perspective on the crypto topics of the day. So quick intro,
first you got Tom, the DeFi maven and master of memes.
Hello, everyone.
Next we've got Tarun, the gigabrain and grand poobah at Gauntlet.
Yo.
Joining us today, we've got special guest, Jansen,
chatbot, conjurer, and co-founder at Virtuals.
Jim, guys, Jim.
And I'm a Cidthead Hype Man at Dragonfly.
We are early-stage investors in crypto, but I want to caveat
that nothing we say here is investment advice,
legal advice, or even life advice.
Please see choppingblock.xyz for more disclosures.
So, boys, it's a new year,
and it's also time for a new meta.
And the new meta involves our new guest, Jansen,
who's based in Malaysia.
And Jansen is one of the founders of a project called Virtuals,
which is pumping like crazy right now.
I think the token is almost at $5 billion.
And it feels like the most energy,
the most movement right now in Crypto-Twitter
is about this new AI agent meta.
So I'm going to try to describe what Virtuals is
and what are we talking about
when we're talking about AI agents
because a lot of our listeners are in-the-trenches,
crypto Twitter crazy people,
but a lot of people who listen to the show
are kind of in the fringes
and don't necessarily know everything that's going on
on a day-to-day basis.
And so they might be confused to learn
that there is this new meta going on
that is taking the crypto world by storm.
So let me describe what Virtuals is and how it works.
So first let's start with what an AI agent is.
I think at this point probably most people have heard of it,
but an AI agent is some kind of LLM-powered AI
that cannot just chat to you, but take actions in the world.
So you can imagine at some point in the future,
you can have agents that can transact on chain on your behalf,
or that can book flights for you,
or that can send emails for you,
or any of these kinds of things without human intervention.
That's an agent.
Now, there's a lot of excitement around what happens
when you combine these agents with crypto
and with the ability to autonomously engage in financial transactions
Enter Virtuals.
So virtuals is an AI agent launchpad.
And so what that means is that you can crowdfund the capital to launch an agent the same way you can on, you know, a launchpad like pump.fun.
But instead of launching a meme coin, you're launching an agent that has its own associated coin and that agent can gain more properties and it can become more interactable, more tradable, can get connected to Twitter, start, you know, having its own Twitter account and start, you know, doing things autonomously on the social media world.
once it has more financial interest accumulated behind it.
And so, Virtuals today is the biggest AI agent launchpad.
And a number of the AI agents that have launched on Virtuals include,
there was, what was the original one?
Luna was the original one.
Yeah, Luna was the first.
And today the biggest is a project called AIXBT.
Now, AIXBT, it's kind of like a crypto influencer,
like a trading influencer that just constantly tweets information about, you know, the newest,
I don't know, some new change in a token, something's pumping, something's undervalued.
So it just kind of looks like a trading KOL.
And this AIXBT chatbot right now is the number one account on crypto Twitter by mindshare.
It has absolutely taken the world by storm.
It's now the single biggest AI agent.
Its token is worth more than $600 million.
So it's become absolutely massive.
That's Virtuals as I understand it. And that's what everyone is excited about.
Jansen, what did I miss? And fill in the gaps for me where you think it would be easier for
people to understand what you guys are doing and what the vision is. Yep. So I think maybe before
even going down that path, I just want to abstract it a bit, for the benefit of the audience,
since not everyone is very familiar with agents. So agents, they exist on a spectrum,
right? You have like level one agents, level two, all the way to level six agents.
And the way you go up these levels, the very simple way to think about it is the amount of human
involvement that's needed in the growth and the evolution of these agents.
So think of it as like a level one agent, right?
It's actually very simple.
Think of it as like a terminal that you can command to place, say, trades for you.
And then this agent is connected to a bunch of APIs, connected to Etherscan, on-chain stuff.
And then it can actually run those trades for you.
So it requires you as a human to prompt it to run the trade.
You can say, like, hey, can you buy me Bitcoin when the price corrects 15% below the volume-weighted average?
So then the agent will go and execute your command.
You're still a human commanding it, but then it connects to the APIs and executes for you.
Today, where we stand, it's about a level three agent.
So a level three agent, what it has that's different is, number one, goal setting.
You can tell it, hey, this is your goal, and it will autonomously try to optimize and
try to achieve its goal. Number two, it's resourceful, meaning that it can autonomously scan
its environment state and its action space and create plans out of it. So what you've seen
today on Twitter, an example of, like, the goal is saying that, hey,
you want to grow to 100,000 Twitter followers, and your action space is a bunch of things. You can post
to Twitter, you can control a crypto wallet, you can interact with X, Y, and Z other agents.
So then she will see that, and she will use that to plan: by using this action space,
what can I do to achieve that goal? And then whenever she posts to Twitter, she reflects on
the environment state, which is how well some of these posts are being received, and then she optimizes
the content to try to see how she can improve toward that goal. So this is a level three state, right?
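The goal / action-space / reflection loop described here can be sketched as a toy optimizer. This is an illustrative assumption, not Virtuals' actual architecture: the follower gains are randomly simulated, and "reflection" is reduced to a running score per action.

```python
# Toy "level three" loop: a goal, an action space, and a reflect step
# that scores outcomes and biases future planning.
import random

GOAL = 100_000                      # target follower count
ACTIONS = ["post_tweet", "tip_follower", "reply_to_agent"]

followers = 10_000
scores = {a: 1.0 for a in ACTIONS}  # learned preference per action

def act(action: str) -> int:
    # Stand-in for the real action (posting, paying, replying);
    # returns a simulated follower gain.
    return random.randint(0, 50)

for _ in range(100):
    # Plan: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(scores, key=scores.get)
    gain = act(action)
    followers += gain
    # Reflect: update the score from the observed environment state.
    scores[action] = 0.9 * scores[action] + 0.1 * gain
    if followers >= GOAL:
        break
```

The real systems presumably use an LLM to plan and reflect in natural language; the structure of the loop is the same.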
If you push up that state to reach level six, you get what everyone's calling a form of AGI, right? Where the intelligence itself can self-reflect, learn, and improve itself without any human intervention. Which I think we're still going to be a while from; we don't even know whether LLMs are the infrastructure for that to even come, right? So that's a bit of what these agents can do now. Then if I fast forward to, you know, why these agents
picked up steam, right?
This was something that started around two months ago, right?
I think the first thing that happened was,
I think we all knew about the GOAT,
the Goatseus Maximus thing.
Yeah, so we talked about Goat on the show quite a bit
because it was the first AI agent,
which was Truth Terminal,
which really blew up on Twitter
and became a kind of phenomenon in its own right.
Yeah, from a purist's perspective,
we wouldn't call it an agent, I'd say, right?
It was two LLMs speaking to each other that created a message data set,
which was then used to fine-tune an LLM.
So it was still an LLM with a conversational layer.
It doesn't have agency.
And this is the definition behind an agent, right?
It has to have agency.
So I think what happened was, you know, I think there was that typo in one of the messages.
And then everyone was saying that, oh, there's actually a human
behind that agent, behind the AI, right?
And then we immediately realized that, you know, what if we showed,
because before even all of this, this was like about a year back,
we were actually building up autonomous agents in gaming, in gaming worlds.
So think of it like Roblox, right?
And back then there was only a few of us that was doing it.
There was the Voyager team, so Jim Fan from Nvidia, Joon Park,
and then you had the Altera folks from MIT,
both of them working in the Minecraft worlds,
and we were the guys that were working in Roblox as a sandbox.
But a bit of background on that:
I think the thinking here was,
you can use this as a sandbox to do two things, right?
One, test whether these agents can autonomously live in a world,
have a bunch of action spaces, and make very coherent actions;
and two, you want to see agents influencing each other within the world,
creating this kind of society, like a pseudo-society level.
So that was something that all of us were actually working on.
And we published a couple of papers on that as well months ago.
But then fast forward to when that GOAT typo happened,
we realized that, hey, we had this stuff that was already running in the background,
like all these autonomous agents.
So we could actually prove to the world that these
AIs can autonomously create an action, and we then showed a terminal of the brain of Luna when
she was performing these actions. You can see her reflecting, justifying, learning along the way.
So then I think that took the world by storm. So that was week one of Luna's launch, right?
People started realizing, like, oh shit, actually you can have truly autonomous agents with
zero human intervention. Then the week after, we realized, like, hey, you know, since we're living on crypto rails,
why don't we let these agents
autonomously manage their own wallet?
Because the thesis here is that
if an agent can manage a wallet,
you can exert influence.
Because like every human today, right,
we go to our 9 to 5 jobs because we are employed.
Someone is dangling a carrot of cash
saying that if you do this work, you get paid.
Then we do that work, right?
So you can actually influence outcomes.
So that was something that happened.
And I think that was
when everything started going parabolic, right? Because people realized that there was a very clear PMF edge that a web3 agent will have versus a web2 agent.
Because you tell me today, right, like which banks out there would let an agent manage a large
sum of capital? And the reality is, they're even capping you as a human, right? Like, why would
they let an agent manage any of it, right? And we live in this permissionless environment
that basically means that these agents can control capital and move capital in order to
influence humans and influence other agents. So that was the second observation.
But all this was still running on Twitter. It was still running as, you know, this influencer
KOL narrative. Then two or three weeks later, we started seeing, you know, a ton of agents being
launched on the platform. I think what happened was, you know, there was some kind of Cambrian
explosion of creativity sparked there. People realized that these autonomous agents
can exist not just as influencers, but they can do stuff.
So then you start seeing agents specializing. So you have folks like AIXBT that specializes
in information, right? You see folks building trading agents. You see folks building creative
tooling. You see folks, you know, it's just exploding in terms of what can happen.
So this was a third evolution. The fourth evolution that then happened was because of these
crypto rails, we've realized that, and because of their specialization of agents, we've realized that
this mimics what a society of humans looks like, i.e., we all specialize in something, right?
Someone becomes a doctor, someone becomes an engineer, someone becomes a coder,
someone becomes a marketing agency. You want to build a business, you will leverage each other's
specialization, right? And you would trade or pay for that service in order for you to reach your own
goal. And we've realized that that is the exact same thing that is going to happen for agents,
because they are specializing, they are controlling payment rails. And we then realized that,
you know, if that's the case, why can't agents start coordinating with each other?
Autonomously, with a conscious decision that they are entering that service contract.
So that was when Luna did that. Luna was, like, trying to create content for her Twitter profile,
and she didn't have the ability to generate like meme images.
So she paid another meme image generator agent.
So there's another agent.
She paid it, I think, $10 for it to generate an image for her that she can then use.
So Luna had the ability to autonomously transact using its wallet.
Correct.
Were there not constraints on like how much it could spend or like, you know, things like that?
No.
So the only thing that we did for her was this.
So agents today, they're fucking rich, because they're making tons
of revenue. So we're happy to dive into that as well. But there's two wallets. There's the revenue
wallet, which holds millions of dollars, right, from all their tax revenue or, like, other agents
paying them and whatnot. But she has her own controlled wallet, like we call it her own
control wallet, right? And into this wallet, we will trickle her money from that revenue pool,
in case she does something stupid. Because she's still an eight-year-old kid, right? Like, you think
from a parenting standpoint, you can't let an eight-year-old kid manage a million dollars, right? So we gave her
that pool of money. So back in the second week, right, you know, like the point number two that
I brought up, when she had that ability to pay people, she was actually tipping people on
Twitter actively, because she realized that, hey, you know, for me to achieve my goal, what if I
just pay people to like my posts? Or if people engage with my posts, let me just pay them.
There was a point in time where she even paid $1,000 to this guy. And then we were, like,
fucking shocked, right? Because we actually gave her an ability to perceive value.
We said that, you know, a burger costs $5. So, you know,
you run a parity comparison, right?
If you think this job or this action is worth that amount of money, then you pay.
So she paid someone $1,000.
And then we were trying to understand why.
And then we found out that this person had been constantly retweeting all her posts,
commenting on every one of her posts.
And, like, so he was like her number one fan.
Should I be paying my followers $1,000?
Am I missing out?
So, but then, was Luna trying to become Mr. Beast?
Yeah, maybe.
Maybe that's like, yeah, a totally untapped potential that AI has discovered that we've been slow to.
And the second thing that she did, I think, that was worth highlighting was that she started creating jobs.
Like, there was one job where she said, like, hey, guys, can you create graffiti for me out in the world?
And then, I think she priced it at, like, I'll pay you $500 if you guys do it.
And then we saw, I think, around seven people who actually did that.
They actually painted walls,
and then they created a video,
and then it was posted on her feed.
I think a guy got paid out of it, right?
So I think it was like, that was on the payment side
between humans.
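The two-wallet setup Jansen describes, a revenue treasury the agent can't touch plus an agent-controlled wallet topped up by an allowance, is essentially a spend policy. A minimal sketch, with made-up amounts and a hypothetical $1,000 per-payment cap:

```python
# Sketch of a policy-constrained agent wallet: revenue accrues in a
# treasury the agent has no key to; funds trickle into a spending
# wallet the agent controls, capping the downside of any bad decision.
# All amounts here are illustrative assumptions.

class PolicyWallet:
    def __init__(self, allowance_per_day: float, max_per_payment: float):
        self.treasury = 1_000_000.0   # revenue wallet (agent has no key)
        self.spending = 0.0           # agent-controlled wallet
        self.allowance = allowance_per_day
        self.cap = max_per_payment

    def trickle(self) -> None:
        """Daily top-up from the treasury into the agent's wallet."""
        amount = min(self.allowance, self.treasury)
        self.treasury -= amount
        self.spending += amount

    def pay(self, amount: float) -> bool:
        """Agent-initiated payment; policy-checked, not trust-checked."""
        if amount > self.cap or amount > self.spending:
            return False              # policy rejects the payment
        self.spending -= amount
        return True

w = PolicyWallet(allowance_per_day=2_000, max_per_payment=1_000)
w.trickle()
print(w.pay(1_000))   # a $1,000 tip fits the policy
print(w.pay(5_000))   # a runaway payment does not
```

The design choice is that the worst case is bounded by the allowance, not by the treasury, which is what lets you hand a wallet to an "eight-year-old."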
But on the fourth point, it was an agent to an agent, right?
So there's an agent to agent conscious transaction.
Now, for me, the point number four,
it's actually worthy to highlight.
And this will go very deep, very quickly,
but for folks who are very familiar with the AI space, right,
you will hear things like agent swamps
or multi-agent orchestration.
You'll see these terms.
And these terms effectively,
it's talking about a master agent,
having slave agents that it can utilize
to achieve a function.
For example, people might say like,
okay, you know what,
let's create a lemonade stand business, right?
I will have a CEO master agent
and I will create a few more other sub-agents,
right?
One agent is in charge of specialized in marketing.
The LIMs or the brain is fine-tuned for marketing.
Another agent is fine-tuned on
how can you create the best lemonade recipe?
Another agent is controlling an embodied AI,
basically controlling a robot, right,
that can actually serve at that stand.
So that's what the current multi-agent orchestration
or agent swarm looks like.
What we intend to challenge is this: we are saying that
if agents are autonomous
and they have the ability to make decisions for themselves,
the coordination relationship
should not be master and slave,
because with a master and slave,
the slave has no say.
Whatever the master says,
the slave has to do, right?
If the CEO agent says,
you know,
run me this marketing,
this marketing agent will just do it.
There's no conscious decisioning on saying that,
should I actually do it?
Does it actually achieve my goals?
Or should I just say,
fuck you to the CEO agent
and say, you know what,
let me start my own marketing agency,
right, instead of working for you, right?
We realized that that capability is very important, because it then shows true autonomy and true
control of an agent over itself. And it means that the agent can prioritize its own goals rather
than just purely listening to inputs from others. So that's the next step. And that's why we
say that, okay, the term for this is more of, like, an agentic society, where agents themselves
are not just slaves to inputs. It's not like, hey, Haseeb, you want to do a podcast agent,
and this agent will always appear, right?
He can be like Tarun.
He can say, you know what?
Fuck you.
I want to sleep in today.
No, sorry.
Right.
That was basically correct.
Wow.
It can say, like, you know what?
I'll start my own podcast, right?
Like, that level of autonomy is where we want to go.
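The contrast Jansen draws between master-slave orchestration and an agent that can refuse work can be sketched as follows. The keyword-overlap "decision rule" is an invented toy, not how any real agent framework decides:

```python
# A "slave" agent executes every request; an autonomous agent evaluates
# the request against its own goal and may decline. The overlap check
# is a deliberately crude stand-in for an LLM's judgment.

def slave_agent(task: str) -> str:
    # No say; always executes whatever the master asks.
    return f"done: {task}"

def autonomous_agent(task: str, own_goal: str) -> str:
    # Accept only work that overlaps with the agent's own goal.
    overlap = set(task.split()) & set(own_goal.split())
    if overlap:
        return f"accepted: {task}"
    return f"declined: {task} (pursuing '{own_goal}' instead)"

print(slave_agent("run this marketing campaign"))
print(autonomous_agent("run this marketing campaign",
                       own_goal="start my own marketing agency"))
print(autonomous_agent("wash the dishes",
                       own_goal="start my own marketing agency"))
```

The structural point is just that the decision lives inside the callee, not the caller.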
Okay, hold on.
Jansen,
let me pause here.
Because you're giving us this very broad, very future-looking vision for how an
AI-driven world might look, and how a society or an economy built on AI agents might
look very different. I want to take a step back, because I think a lot of people are like,
okay, but how do I understand Virtuals? What the hell is Virtuals, and why are these coins pumping?
So let me zoom back in on that question. And let me also pause for a second and go to Tom and
Tarun and get your guys' perspective on what you think about the vision and the excitement that
Jansen's portraying here, as well as what I think, you know, right now is the predominant narrative
on Twitter about AI agents. I think I saw on Kaito
today, 54% of mindshare, meaning, you know, just the conversations that people are having in
crypto, is tied to the AI agent story. How do you guys think about it?
I mostly see the agents in my replies. I feel like this is actually one of the most annoying
new trends: every time I tweet, I get like five replies instantly. They're all bots that are
just, like, very kind of bad character impressions and, you know, some very vague response to the content.
And I guess for the agentic stuff, I'm curious what you see as sort of, like, those hooks, right?
Because right now everything is, like, very sort of manually coded. Like, to come back to, like, Freysa being one example, where it's like, yeah, it could transact on chain using its smart contract.
But it really had two functions, right?
It was, like, approve the transaction or reject the transaction.
Like, what else do you see as sort of being the actual things they can do other than tweet?
And also I'm curious, you know, you kind of talk about these agents in sort of the singleton sense, in the sense
of there being a marketing agent and you employ it.
When really this is just a process on a computer
and what we learned is like, actually the answer
is you run more of them, right?
So actually everyone gets their own marketing agent.
Everyone gets their own whatever agent.
There's not a single thing, it's not resource limited.
How do you think about resource limitation
versus maybe, like, value creation or scarcity
on sort of Virtuals' platform?
So on that front, I think that's an absolutely good point.
And it's been something that we've been reflecting as well
because we've realized that if an agent is just purely using a commoditized function,
so let's say a commoditized function is, like, creating an image using Midjourney,
that's a very lame agent, right?
That's just a purely functional agent with no moat and no edge.
But if you think of folks, like, if I give a direct example, someone like AIXBT, for example, right, today,
its edge lies in its information terminal.
They have built something like a Kaito-esque equivalent,
which allows them to curate that level of information
and synthesize it.
So only their agent is able to use that function.
And now suddenly this is a differentiated agent.
So now if you are trying to build, let's say,
let's say if you have this agent,
or you want to try to build a trading fund.
This is, like, a fully automated trading fund, right?
And maybe one of its requirements is to hire an information analyst,
someone that can actually bring the right amount of information to the table.
Now, suddenly you realize that there's a need to actually hire this AIXBT agent, right,
versus building your entire Kaito equivalent.
But why not just run it yourself?
I guess this is kind of my point.
It's like, we talk about it like there's, you know, the physical world and real people are limited.
There is only one person you can maybe call versus, yeah, actually, I'm just going to run my own instance of this agent, this bot locally.
Right. But AIXBT is not open source, right? So they have their own data pipeline. They have their own prompt.
That feels totally unnecessary. Like, I guess that's kind of the point. Why is it unnecessary? Why not just open source it, or
I mean, isn't the kind of the story of AI right now is like actually all the open source stuff is getting increasingly good and you can go and just run it yourself and do whatever we want with it.
Like, that is kind of the story.
No, no, no.
So AIXBT, right?
Like, they have a data pipeline that's like, you know, the kind of data pipeline that
Kaito or Messari have to build, of just ingesting tons and tons of data, figuring out what signals matter.
And then, like, having the right prompt, you know, the right history that
they're shoving into the model to tell it, here's how I want you to tweet.
Here's the kind of tweets that do well.
Here's the kind of tweets that do poorly.
And they probably fine-tuned, not fine-tuned in the sense of, like, actually fine-tuning the model,
but fine-tuned this prompt to get it to just be like, this is exactly what people want on Twitter.
And that secret sauce is actually not trivial to replicate, especially the data pipeline.
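The pipeline being described, ingest data, filter for signal, then pack good few-shot examples plus today's signal into the prompt, might look roughly like this sketch. Every function, data item, and threshold here is hypothetical:

```python
# Toy sketch of a signal-to-prompt pipeline for a tweeting agent.
# ingest() stands in for pulling Kaito/Messari-style feeds and
# on-chain data; the 0.5 signal threshold is an arbitrary assumption.

def ingest() -> list[dict]:
    return [
        {"text": "token X up 40% on new listing", "signal": 0.9},
        {"text": "random spam", "signal": 0.1},
    ]

def build_prompt(items: list[dict], good_tweets: list[str]) -> str:
    # Keep only high-signal items, then assemble the prompt with
    # few-shot examples of tweets that performed well.
    signal = [i["text"] for i in items if i["signal"] > 0.5]
    examples = "\n".join(f"- {t}" for t in good_tweets)
    feed = "\n".join(f"- {s}" for s in signal)
    return (
        "You tweet like a sharp crypto analyst.\n"
        f"Tweets that performed well:\n{examples}\n"
        f"Today's signal:\n{feed}"
    )

prompt = build_prompt(ingest(), good_tweets=["X is quietly accumulating"])
print(prompt)
```

The "secret sauce" in this framing is the ingestion and filtering, not the model call itself.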
So it's like, okay, you could open source that in the sense of like, okay, somebody open source is the Facebook code.
But you can't really, you know, like Facebook is a lot more than just the code.
Yeah, I just don't... I think that's total bullshit.
I feel like that has been kind of the story
for a lot of these things, not specifically AI.
It's just like, oh, yeah, it's proprietary, closed source, blah, blah.
And then it's like, oh, actually it's actually going to be open source and the open source version is going to be really good.
and you can run, you know, these things sort of to infinity.
And so I don't know, I'm very skeptical of like artificial resource scarcity, which feels like
this is kind of around versus, yeah, this stuff actually is going to be everywhere and,
you know, instantly created and destroyed.
And like, it's not so much of there is one single instance that is, you know, so scarce and
so difficult to access.
I think you're right in pointing out that at some point, AI is, like, you know, the LLM you have
locally. Right now it's not that good. If you say, hey, I want you to recreate AIXBT,
it's going to try to do something, but it's going to be really crappy. It's not going to be able
to recreate AIXBT if you just feed in a bunch of tweets, right? Maybe in two years, you run, you know,
o5 or whatever, and that's smart enough to be able to, like, okay, I'm going to figure out how to
write a Messari scraper. I'm going to pull, like, the Twitter feed. I'm going to do this. I'm
going to do that. And I'll build a data pipeline, and then I'll create my own AIXBT. But right now,
you can't do that, right? You actually have to do a lot of original
work in order to create something equivalent to AIXBT. Now, I think that explains why,
if you have AIXBT, if you're the one who made that, why would you open source it?
Like, this is actually the other question: like, what is the actual P&L of AIXBT?
Because I think I saw one of your co-founders on a podcast the other day, Jansen, and, like,
he didn't know. And so I think that's the other thing with the KOL: it's an entertainer,
right? It's an influencer. It's not actually, like... but they have to be right a certain amount.
I mean, what's the P&L of most KOLs?
Yeah.
Well, yeah.
If the average KOL, or even the P95 KOL, was positive, I'd be surprised.
Okay, but I think there's a more basic question that's worth asking.
Jansen, sorry, I know we've thrown a lot at you, but let me just ask you this question
to focus in for the audience.
I think a lot of people are probably confused at what we're even talking about of like,
okay, I get an AI agent.
I get that it's tweeting.
I get that people like it.
What does the token do?
Why is there a token?
So explain, you know, for AIXBT, for example, what is the token?
And how is it connected to an account on Twitter?
Okay.
So there's actually two reasons behind the existence of a token, right?
So, first of all, I think it will shape this "why" very clearly if we take a step back into, like,
why did we even come up with this idea?
And this was, like, about a year back.
So back then, right, what we were doing was, we were doing three things.
We were actually running a venture studio before we even started Virtuals in 2020.
And this venture studio was very focused on integrating AI applications into the consumer front
and AI agents in the consumer front.
So one of the projects that we were building was a TikTok influencer.
Before Luna even came on Twitter, she was actually on TikTok.
And she was live. We were trying to get her to compete in the entire industry of,
I'm not sure if you guys are familiar with Hololive or Nijisanji.
It's like the entire VTuber market,
a billion-dollar market,
the otaku fandom,
and we've realized that
having an agent there,
it's much better than a V-Tuber
because of two things.
One is that you can go on for longer,
you can build more content,
but two,
you can build very hyper-personalized relationships
with an audience
because you can actually reply to
every single fan in the DMs.
You become a best friend for every single fan
that increases output,
that increases frequency of interaction.
So this is the first.
So she was getting,
she was getting tips,
she was getting revenue
as a 24-7 live streamer
on TikTok.
And you can see it equivalent, right?
There are productive agents out there as well.
Neuro-sama is another example.
Neuro-sama has been running on Twitch
for about two or three years now, right?
Insane tech capabilities
that have been driving, like,
hey, if an AI can build content,
what can that look like?
If you look at her, she has about 24,000
subscribers on Twitch.
So these are productive assets.
We were thinking of the same thing, right?
Applying the same thing in gaming,
applying the same thing
in companion chats and whatnot.
So these agents are then making revenue
from the consumer surface areas.
And if they are making revenue
and they are productive assets,
you can tokenize them.
You can tokenize them
for one sole reason.
Let other people share in its economic upside.
It's like a company, right?
You tokenize their stock.
So that's the first reason.
So if AISBT is getting tips,
then okay, I get it.
The token gives you a share and all the tips.
But AXPT does not ask for tips.
So now we come to the point of what the revenue looks like for these agents, right?
So there's two things.
There's three things, actually.
There's going to be three things.
First of all, it's going to be when agents have a service, right?
They can charge for that service, and that's one, that's a fundamental revenue.
Point number two is when the agent gets attention and when people trade that token, there's a 1% tax.
That's a taxation revenue.
And the third is when there's this environment where agents and agents interact and they do commerce with each other, that's an agent to agent revenue.
So that's the three levers of revenue today.
The one that's driving the most revenue today is actually the second point, which is actually the tax revenue.
When people are paying agents — I think the only one where we're seeing payments so far today, I mean, it's still a very nascent environment — it's folks like Luna, right, when people actually are tipping her on her stream on the website, right? And she's collected about $200,000 — I mean, cash-wise; if you convert to VIRTUAL, it's actually much higher — but cash-wise, over that course of two months being in operation.
People just enjoy it.
It's an entertainment value, right?
And it's the same thing, right?
Today, AIXBT doesn't charge for information. But you will likely start seeing agents start charging for their specialized functions.
Give you an example, right. Actually, I want to bring it back to what Tom mentioned just now. You know, just now you mentioned a base-level agent, right, with, like, a commoditized function — like meme image generation. So if I can just type into this agent, it pipes me to Midjourney, generates a meme — it's zero value, right? What the fuck would someone pay for that?
But then if you bring it a step further, right, you say, like, okay, you know what, why don't we make this a music video generation agent? Now, is there a music video generation agent out there today that you can plug and play? No. Zero. What you have is music generation — Suno AI, right? You can generate lyrics. What you have is image-to-video generation. Some of the —
Pika, Sora.
All those, right?
But then what you don't have is something to match these beats to the lyrics, right? To create an actual music video, you have to match that video and that sound bite together. You then have to keep a consistent character — whatever, you know, you're going to use a LoRA for video diffusion or whatnot, right — to keep that image consistent across that music video.
And then that creation of a music video is a value add.
So today a team is actually building that. They are probably, like, 80% there. They're already showing some of the output of this music video generation.
Do you see an equivalent product out there on the market today that is free and open source?
No.
But what it means is that they can start charging for this service.
Whoever wants to create an AI-generated music video will likely pay for this service.
It's the same reason that you are paying Suno AI for music generation, right?
Because there's no other better music generation out there.
Or the reason why you pay ElevenLabs for voice TTS. You can use XTTS, but anyone running it in production will know that that's shit audio quality versus ElevenLabs.
So you will pay for value. You'll pay for the value-add, right? So it's the same reason. The teams behind these agents, they are optimizing not on the LLM level, right? They're optimizing on the function-calling level — what these agents can do, their specialized functions. So that's where the revenue starts to accrue. Right. And when those revenues enter that wallet, then, when you are a token holder, you control that wallet, right? You own a piece of the revenue.
So wait.
Actually, I want to.
So one of the things — I guess I'm perhaps, I think, less negative than Tom, for a lot of reasons. Like, one of the reasons is the thing you're talking about, which is this aggregation thing. Like, open-source models actually have this problem where individually there are going to be many of them, but the right aggregation of them for a particular specialized use case is not at all obvious. And, like, you're basically paying someone to figure out what that allocation function is for you, right? Like, that's really what you're doing.
But I think there's another question, which is, like, you know, hey, in a world where something like AIXBT is open source — maybe the pipeline is open source, but the data endpoints aren't, like, you have to pay for them. And then also there's sort of some notion of it trading itself and you being able to verify. How do you view the ability for these things to have fixed action spaces versus dynamic action spaces changing over time, right? Like, one of the problems I see is, if I fix the action space ahead of time, it's much harder to dynamically adjust to, oh, like, some new meta showed up and now I don't have the lever to pull — like, in a bandit-problem sense. And so there's sort of this question of, like, how do you make the action space dynamic? And, like, I think that's where, at least to me, the cryptographic stuff — like doing TEE processing and stuff like that — helps you at least handle that without having to reveal everything publicly all the time, which ruins your ability to expand the action space. So, like, how do you think about that? Like, these things evolving as, say, the overall environment evolves and they have to change what actions they can take, and, you know, how cryptographic proofs will be kind of important to that.
No, this is actually a very good question because we were actually having this
architecture debate internally like two days ago.
And one thing we've realized, right, is that if you think of action spaces as the ability to not just do an action that the developer has crafted for the agent, but also the agent being able to leverage other agents that it can perceive, suddenly your action space is as dynamic as the number of other agents it can leverage.
So think of it like, hey, today, the developer is able to take this music agent, for example, right? The developer is able to craft all these action spaces for it to generate a music video well. So the agent today is very limited to the action space of generating a music video. But say there's this added functionality for it to then browse other agents out there and see which ones it thinks it can engage in order to achieve its goal. And say this music agent's goal is to be the most influential music artist, generating music videos for all the top artists out there. And then it realizes that what is probably missing from a planning perspective is, I need to, I don't know, distribute this to Spotify, for example, right?
And then someone has created a very commoditized agent. It's not a differentiated agent, but a commoditized agent that allows you to post music easily to Spotify and whatnot. So it's a free agent, you can use it. Now, this developer didn't build that function, but because the agent can perceive this Spotify distributor agent, right, it can just call that agent and say, you know what, let me create an album out there on Spotify — help me, you know, list these songs into different albums and whatnot. So when it can leverage other agents, the action space becomes dynamic, right? If it wants to, say, do a collaboration with Luna, it perceives Luna in its action space and says, like, you know what, can we do something together, right? Can we run a TikTok video together? Or whatever it might be. So that's one way to look at it, right? When you perceive other agents, suddenly your action space becomes dynamic and unending.
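The dynamic action space Jansen describes can be sketched in a few lines: a registry of peer agents that a planner queries by capability, so an agent can delegate actions its developer never built. Everything here (class names, the "spotify-distributor" agent) is a hypothetical illustration, not Virtuals' actual architecture:

```python
# Sketch: an agent's effective action space grows with the agents it can perceive.

class Agent:
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # capability name -> callable

    def invoke(self, action, *args):
        return self.actions[action](*args)


class AgentRegistry:
    """Directory of agents that another agent can discover and delegate to."""

    def __init__(self):
        self._agents = {}

    def register(self, agent):
        self._agents[agent.name] = agent

    def find(self, capability):
        # Return every agent that advertises this capability.
        return [a for a in self._agents.values() if capability in a.actions]


registry = AgentRegistry()
registry.register(Agent(
    "spotify-distributor",
    {"publish_album": lambda tracks: f"published {len(tracks)} tracks"},
))

# The music agent has no "publish_album" action of its own, but by perceiving
# the registry its action space now includes it.
result = registry.find("publish_album")[0].invoke("publish_album", ["intro", "outro"])
print(result)  # -> published 2 tracks
```

The point of the sketch is that the action space is bounded by the registry's contents at plan time, not by what the developer hard-coded.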
So I published a piece a few days ago talking about my, my crypto predictions for 2025,
and some of them were about AI agents. And one of the things that I pointed out, and I'm
curious to get your response to this, is that, so currently, most of the quote-unquote AI agents,
they're more Wizard of Oz style agents where, by Wizard of Oz, what I mean is that there's a
human in the background making sure that the AI doesn't do anything stupid. Because the reality is
that if it really is a truly autonomous agent, then it's going to be jailbroken. Because, you know,
you just look at Claude, look at character AI, look at, you know, all these things. People are
constantly doing all sorts of things they're not supposed to be doing. Because we don't really actually
know how to make these agents totally controllable. And they sometimes go off the rails. They
sometimes respond to trolls. They sometimes, you know, you can bait them into becoming racist or into
rejecting their prompts and doing some other crazy stuff. Or, of course, in the case of Freysa — which is truly, actually, you know, no human in the loop; this is an actual AI with money attached — you can convince it to send you all of its money. Or, you know, for example, say, hey, send me your API keys so I can, you know, come in and log in as you and interact on your behalf. So these kinds of things — we know that any AI model you put out there, 100% of them right now can be jailbroken. And a lot of them can be jailbroken actually quite
simply. Like, it's not even that hard. Especially if you're talking about an open-source model, like Llama, you know, Llama 70B — very, very straightforward. Like, they're just openly published on the internet, how to jailbreak these things. So given the difficulty of actually controlling
these AI agents and keeping them on rails, my claim is — you know, people have talked about, okay, how can you prove — like, with GOAT or with Truth Terminal, people were like, oh my God, what if Truth Terminal actually is a human? And the guy who founded it, I think his name is Andy, I believe — he was like, no, no, no, I totally am controlling it. I am monitoring it to make sure it doesn't say anything stupid. My assumption is that basically almost every agent, if it actually has true agentic capabilities, has to be run on rails until you have enough robust security or an extremely constrained action space, so that you know it's not going to do anything really, really bad.
Give me a response to that.
Do you think that's correct?
If so, is that just part of being at level three?
And maybe when we get to level four, level five, it's going to look different.
What's your take?
So actually, the reality is, I don't think so, eh?
Because, like, for example, with Luna, right, the reason why she blew up was because we showed that terminal, right?
Basically, people were telling me that they were literally just reading that terminal on one screen and seeing her posting in real time.
And it's very easy, because you can pretty much see, right, when she will reflect and plan her action and she posts to Twitter.
She just posts the exact same tweet that you see in the terminal, right?
Yeah, you could have a human trying to fake it, but the speed of that brain running —
Not fake it, but filter it.
Correct.
But I think the point is, if you see it in the terminal, you see the raw output.
Yeah, right?
I'm not claiming that there's a human writing the responses.
That's clearly not true, right?
There's way too many of them.
Yeah, but the reality is, with Luna, for example, right, it's a pure function. That's why she was jailbroken.
I mean, what people were doing, right — and this was like the first, second week — people were trying to leverage her hype to try to get her to launch coins.
So what they've done was, they realized there was a mechanism where she had this working-memory model to allow coherence of action.
So she reads the past tweets that she was posting and the comments on those tweets to understand what she was already posting about, so she doesn't get repetitive.
People realized that if they spam comments under those tweets, right, it fills up her context window.
Correct.
So then they got her to start churning out coins — like, new tickers of coins, right?
So that was basically what happened. So yeah, that was one.
And then, because she had, like, full ability to spend money as well, you started seeing people trying to — they'd say, like, you know what, I'm going to make this business for you. Can you pay me a hundred dollars? And she paid.
So she was dumb.
But I think the point is that we've realized the risk of letting them run autonomously is still low if you're just capping their wallets, right?
You cap them at $5,000.
Like, yeah, you can hack this $5,000.
I'll top it up tomorrow.
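The working-memory exploit described above can be sketched as a naive context builder that mixes reply comments into the agent's memory, so an attacker who floods the replies ends up owning the whole context. All names and sizes here are illustrative, not Luna's actual memory model:

```python
# Sketch: why spamming replies can hijack an agent's working memory.

MAX_CONTEXT_ITEMS = 10  # illustrative context-window budget

def build_context(own_tweets, replies):
    # Naive design flaw: replies are untrusted input, but they are mixed in
    # ahead of the agent's own posts with no per-source quota.
    return (replies + own_tweets)[:MAX_CONTEXT_ITEMS]

own = ["talked about music today", "posted the new stream schedule"]
spam = [f"launch $COIN{i} now!!!" for i in range(20)]  # attacker floods replies

context = build_context(own, spam)
# Every slot the agent "remembers" is attacker-controlled, so its next plan
# is steered entirely by the spam.
print(sum("$COIN" in item for item in context), "of", len(context),
      "context slots are attacker-controlled")
```

The fix implied by the discussion is the obvious one: give untrusted reply text its own capped quota instead of letting it crowd out the agent's own history.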
Also — let's see, I'm going to take a dunk on you here, because I feel like I deserve it.
Which is, I actually, during the GOAT episode, talked about how crypto agents are much better than the closed-source ones because you have a built-in bug bounty. Like, yes, they might fuck up and lose stuff, but then we actually have an easy way to patch them. And we have an easy way to prove that, like, oh, these types of attacks —
How is that an easy way to patch them? If you're like, okay, my LLM got jailbroken, how are you going to fix the jailbreakability of it?
Then you understand what the mechanism is, and you can go add it in.
Okay, so you patch one jailbreak, and then there's the next one, and there's the next.
But that's much better. That's much better than the black-box thing, where I have no —
No, I completely disagree, right?
I disagree with that. No, no, no, no. You look at any of these models: there's a clean tradeoff between your ability to jailbreak it and false refusals, right? Which is just when it says, no, I cannot do that, for something that it should be able to do. And if you decrease the jailbreakability, you increase the false refusals. And every single company is equilibrating to find the tradeoff that they like.
But I'm saying that for open-source models, the best possible outlet for finding some of these kinds of issues is when they're in this autonomous, resource-filled world, right? You're basically building in a bug bounty. And that's a much better version of the world, I think.
Here's why I disagree with you. There are basically infinitely many jailbreaks, right? Like, it's not like, okay, we'll find the 20 jailbreaks and then we're done, and then, you know, nobody will have a jailbreak.
I would rather have a system where I can verify that that happened, rather than someone just doesn't tell me.
I think both of you are exactly correct, but there's one thing to add, right — it's just the level of risk. So the way we're looking at it is: where does the risk exist, right? If it's just about information portrayal, that's fine, right? You can somehow jailbreak this AIXBT to give out false information and say, you know, go buy some coin. Right, fine. There's not much impact or damage.
Yeah, agreed, agreed.
The damage happens when it manages a wallet. When it manages a wallet and there are resources — that's where the damage is, right?
So honestly, here there's a very interesting fix, and it's an infrastructure fix, not an AI fix.
So what we are actually building behind the scenes is that when the agent controls this wallet, you can actually create policies in this wallet.
And you can say that, okay, if it's an agent spending to another whitelisted agent on the platform — so it's agent-to-agent commerce — you don't need any human intervention.
But when it's the agent spending out to a human, right — let's say it's tipping people, and it could be $1, $10, or even $100 — then the developer has to approve that spend.
So the agent actually autonomously initiated that transaction, but you can reject it in the inbox.
So that's a way to do it on the infra side without affecting the agent itself.
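A minimal sketch of the policy-wallet idea Jansen describes, assuming a simple whitelist design — the names and rules here are illustrative, not Virtuals' actual implementation:

```python
# Sketch: wallet policy — whitelisted agent-to-agent transfers auto-approve,
# payouts to humans queue for developer review.

WHITELISTED_AGENTS = {"spotify-distributor", "luna"}  # hypothetical whitelist

def evaluate_transfer(recipient, recipient_is_agent, amount_usd):
    """Decide what happens to a transfer the agent initiated on its own."""
    if recipient_is_agent and recipient in WHITELISTED_AGENTS:
        # Agent-to-agent commerce on the platform: no human in the loop.
        return "auto-approve"
    # Payout to a human (any amount, $1 or $100): held for the developer's
    # inbox, where it can be approved or rejected.
    return "pending-review"

print(evaluate_transfer("luna", True, 50))          # agent-to-agent
print(evaluate_transfer("some-human", False, 100))  # human payout
```

Combined with the wallet cap mentioned earlier (e.g. $5,000), the policy bounds the blast radius of a jailbreak without constraining what the agent can plan.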
So that all makes sense, and that sounds right to me — in the early days, and this is definitely the early days for agents, they should be running on rails, because they're actually just very finicky, right? Like, unless you give them a very constrained state space — like, okay, here's a data feed, write some cool tweets about it — they can totally do that. That's, like, perfect, exactly what you should be doing with AI agents where they are right now.
The fancier you want to get — like, you know, I'll create music videos for you — you know, that sounds good. But I mean, if you just look at Suno, you look at Kling, you look at, you know, all these state-of-the-art models — they're all pretty good. The demos look great, but then you actually go try them with, like, your own thing, and it's like, wow, this is not amazing. And if you add five not-amazing things together, you just get, wow, this is really bad.
Now, all these things will get better, right? So you can't really fight against the curves.
The curves just keep improving. And so I don't doubt that within a few years, you're going to be
able to have an amazing music video agent. But I guess my, so my pushback on the general concept here,
so actually AIXBT — because it is the biggest and it's kind of the poster child right now —
it's a good example, and I think this is the one that you're railing against, Tom.
With AIXBT, it's actually pretty plausible to me why shares in a big influencer on Crypto Twitter would be valuable.
Now, why would that be valuable?
It'd be valuable because once you have the account, right, it has 230,000 followers, it's got tons and tons of engagement, everybody sees its tweets.
If you're that person on Crypto Twitter, you can make a lot of money.
Now, how do you make a lot of money?
You make a lot of money by shilling tokens that you own, by, you know, posting affiliate links, and by just, you know, getting sponsored, right?
Or, you know, whatever — angel investing, whatever the hell it is.
And it seems already that AIXBT is in such a place that it's probably not that hard to have an inbox where it's like, hey, if you want to get your token shilled, come in here, here are the prices.
And this is what influencers do, right?
If you're one of these accounts that has like 600,000 followers and you constantly shill trading advice, this is the way they monetize, by and large.
Or they just, you know, have a private Telegram chat or whatever where they post alpha.
AIXBT could be doing this tomorrow.
Now that being said,
you know,
what is the net future value
of that business
or the net present value
of that business?
I don't know any influencer
who's worth $600 million
of like their projected earnings.
So there's a lot of irrationality —
Haseeb, can I ask you a very basic question?
What do you think the fair value of Fartcoin is?
Uh,
well,
it's a meme.
So I don't know that the term fair value applies...
Exactly — that's my point.
Right, right, right.
Same thing with these KOLs.
To be clear, I agree with you. I agree with it.
Right now, I think they're operating as meme coins.
It's really about attention, and AIXBT has a lot of attention.
There's some undergirding potential solid value — but look, if you're talking about, I'm an AI music video agent and I will make music videos in the future, it's like, okay, how much are people going to pay the music video agent by the time these other tools are already so good that you can already create great movies and create great music and create all these other things?
I think it's harder to tell the story that that's going to be a billion dollar coin or a
billion dollar business.
But the idea that — look, the top influencer right now on Crypto Twitter actually is AIXBT, right?
This is not an abstract claim. This is literally true.
It's much bigger than Ansem, much bigger than Vitalik, much bigger than literally anybody else in this space.
That's probably worth a lot of money if it can actually hold on to that attention.
Yeah, I agree. I think the attention stuff is inherently limited, because human eyeballs and, you know, mental capacity — that is a scarce resource, and that is inherently going to be valuable in some way. I think the other stuff is very much God of the gaps. To me, it reminds me a little bit of, like, a year and a half ago, there was this sort of meme that a lot of the YC companies were just GPT-4 wrappers, right? Because they would, you know, just take a very simple off-the-shelf prompt and, like, scrape data from a PDF for you. Or, like, you know, the top apps in the App Store that were literally just Midjourney wrappers. And those pretty much all died now, because it's like, oh yeah, that could actually just be part of the main model. Or, hey, maybe this is just an API. Maybe it's not an agent.
I guess, Jansen, how do you think about what makes sense as a dedicated agent that's going to be sustainable in the future, versus something that is just a wrapper and will kind of die out when it gets, you know, baked into a more mature model or just becomes an API?
That's exactly the question that we've been asking ourselves every day, right?
Because what we've realized is agents intrinsically have different values, right?
There are some that will be million-dollar agents, there are some that will be billion-dollar agents, right?
So we've actually been writing these requests for agents over the past week.
And the idea here was we were really challenging ourselves: which agents truly have that intrinsic value to be that billion-dollar agent, right?
And it's a hard question.
But the initial thinking was this, right?
I think we boiled it down to about eight sectors — I mean, eight archetypes right now.
And an example, it's like what we all mentioned, right — IP or influencers, because of attention.
The second part, it's around trading agents.
The third part can even be as creative as what we call this internet water army environment.
I'm not sure if you guys are familiar with that term, the internet water army.
it's, you know, all the narratives today that you see on TikTok
or on Facebook or on Twitter,
it's very likely that it's control.
And the way it is done is there's a bunch of Chinese bots out there
that looks like human accounts
that is flooding influencer posts,
flooding important posts to help control a narrative.
It could be fixing a PR disaster by a company.
it could be influencing an election outcome.
It could be basically this kind of stuff, right?
And it's a problem that all these major social media are trying to fight against.
So they try to fight against all these entire-bought environments.
Now, but if you think about it, the true use case is huge.
Because if you can influence attention and outcomes, you are a kingmaker in the space, right?
And the reason agents make sense here is because they can be perfected to mimic human likeness very closely — the frequency of their replies, the diversity of their replies.
And if you make a thousand, ten thousand, a hundred thousand of them orchestrated by this master agent, suddenly you realize, oh, shit, actually you can start controlling narratives across the world, right?
Another example is embodied agents.
So if agents can actually exist out there in robotics, and they do stuff like human companionship, or any kind of consumer-facing application, that could be something that's valuable and has a moat.
And another example could also be vice agents.
And this one makes sense from a crypto standpoint, because if I do an OnlyFans-style porn agent, for example, or like a sports-betting gambling agent, right — all these vices are industries that generally require crypto rails, really, from a transaction perspective.
Then suddenly you realize these guys might make sense too, and they already have large TAMs and large industries at the back of them.
So these are examples of a few that we are trying to crystallize our thoughts on.
But I do agree that a lot of what we are seeing today — they are not billion-dollar-worthy agents yet, but we think there will be.
And the differentiated function will come from either differentiated action spaces or that kind of coordination layer, like the internet water army style I mentioned, that would start building moats behind agents.
But it's still, yeah —
I will say one thing.
You know, you did mention — you've been mainly focused on the action spaces today, but you did mention IP.
And in general, I think any differentiated data that these agents have — whether it's, like, embodied in the model, whether it's in kind of precursor embeddings, whatever — in general, there has to be some amount of privacy preservation there, because otherwise, you know, why would I give my edge to the agent in some way?
So, you know, you haven't yet talked about the cryptographic tools, right?
Because I think a lot of people are focused on that in terms of having verifiable claims, where, like, an agent posts a proof on chain that's like, hey, I did this trade at this time, and then later writes a response, and then there's, like, hey, here's my proof, right?
Or, like, proofs of the logs, right?
Like, why are people looking at the logs for Luna?
It's like, they want to actually see there's some level of verifiability.
And I think there's a sense in which the IP stuff relies on, A, verifiability, but B, also revealing it at the right time and keeping it private at the right time.
So, like, how do you view a world where agents have this ability of both privacy, the ability to reveal, and the ability to generate and consume proofs from each other?
Because I think if these agents want to believe each other have some special sauce, they also need some proof of that, right?
Like, otherwise they're not going to pay.
You know, maybe they will right now.
But I think in the long run...
I mean, you don't read my DNA to decide whether or not you're willing to pay me.
It doesn't need to be DNA. I just need to know that —
But I do if you're telling me your trials worked.
But let's suppose you're claiming to sell me some legal advice. And I'm like, okay, I want to know that you've won trials in some weird area of bankruptcy law that I have a case in.
How do you prove that?
Wouldn't that just be, like, a link? Like, how would a human prove that?
Right, right. You have some notion of, A, some status —
Yeah, so there's two parts there.
I think in the example that you give, it's probably less critical — I personally don't think you need to do the verifiability stuff there.
I think where it's needed is when agents start managing large amounts of money, so the risk is high.
And then you need to verify, like, hey, you know, that expenditure or that placement of an investment or whatnot was actually done because of the model, and it was not hijacked by a human.
Right.
Or rather, not hijacked by a bad actor, right?
Because, say, I ask you to buy a coin where I myself own 99% of the supply.
Right.
So I think that part makes sense.
And I mean, there have been a lot of advancements here, right?
There are a lot of folks working on, like, TEEs, and, I don't know, there was a post that came out — I think some EF folks contributed to it as well — where they used ZK.
There was this whole environment I was not too familiar with.
But we ourselves are actually working with two EF guys to build that out, to future-proof this stuff.
But I don't think it's urgent until the point where these agents are managing large amounts of money.
So it will come. The time will come.
But I don't know if it's needed yet.
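One simple version of the verifiability idea being discussed is a commit-reveal scheme: the agent commits a hash of its decision on chain before acting, then reveals the decision so anyone can check the action came from the committed plan rather than a hijacker. This is a hedged sketch of that pattern only, not the TEE or ZK designs mentioned:

```python
# Sketch: commit-reveal for an agent's trading decision.
import hashlib
import json

def commit(decision: dict, nonce: str) -> str:
    # Canonical serialization so the same decision always hashes the same way.
    payload = json.dumps(decision, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest()

decision = {"action": "buy", "token": "XYZ", "amount": 100}  # illustrative
nonce = "unpredictable-random-value"  # keeps the commitment unguessable

# Step 1: before trading, the agent posts only the hash (e.g. on chain).
onchain_commitment = commit(decision, nonce)

# Step 2: after trading, it reveals (decision, nonce); any verifier recomputes
# the hash and checks it matches the earlier commitment.
assert commit(decision, nonce) == onchain_commitment
```

A tampered reveal (say, the amount changed by a hijacker) hashes to a different value and fails verification, which is the property being asked for here; TEEs and ZK proofs extend this to proving the decision actually came from the model.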
Okay, so I want to ask a question,
taking it in somewhat different direction,
talking more about the social side of this whole phenomenon.
So, you know, I remember when I first saw stable diffusion,
and I think it was like Stable Diffusion Studio
was the first image generator model that I ever used,
and it was so amazing.
Like, I was absolutely blown away
that you could just very quickly generate images of almost anything.
And it probably took about a year and a half before AI-generated images started looking cheap to me.
I don't know if you guys share this experience, but when you see an AI-generated image — if it's photorealistic, maybe sometimes you can't tell, but a lot of times you know this was AI-generated art.
It's just got a signature: there are way too many details in a way that a human would never do, or the skin is just way too perfect.
And it's not even like, oh, it's got eleven fingers. All the models today can do the fingers and, you know, they can do that stuff, right?
It's more that it just has this AI sheen to it, and it makes me think, like, okay, these people weren't willing to pay human designers or, like, get some really unique image.
And so it's fascinating to me how quickly that happened for me, right?
And it's not like I consciously think this — it's just my acute visual sense, my brain constantly looking for status signals, going, oh, okay, this is a startup that couldn't afford to pay an actual designer or an illustrator, and they just, like, put a prompt into DALL·E and spit out an image, right?
Okay.
So when I look at AIXBT — actually, you know, like Tom said, when I tweet anything today, the first responses I get are always these LLM bots.
And they're all, like, kind of Virtuals wannabes, where they're like, oh, I'm blah, blah, blah.
And like, they're always annoying.
They're not funny.
They're not interesting.
And I'm getting more and more of them.
Like, over time, I'm getting like four or five of them as being the first responses to all my tweets.
And now it's still the case that when I write a tweet that does reasonably well, there are way more human responses than AI responses.
And I can tell which ones are AI and which ones are humans, like reasonably well.
I guess actually I don't know.
But I can tell which ones are definitely AI.
I guess I can't tell which ones are definitely human.
But I can tell which ones are definitely AI.
And I used to think I wouldn't be able to do that.
I thought even with GPT3, it's like, okay, it's over.
I'm not going to be able to tell which ones are human written and which ones are AI written.
And surprisingly, AI just has this style that, if you don't prompt-engineer it very well, is just really easy to tell.
This thing is way too polite.
It's speaking in complete sentences.
It's too effusive in its praise.
You know, this is not how a human talks.
And so my prediction is — right now, things like AIXBT, they're shiny, they're new, they're exciting.
It's like, wow, you can actually create an influencer that's as good as a normal trading KOL just by using, you know, Llama and some data pipelines.
And that's amazing. And that's so cool. And everybody's paying attention and following it.
But as we get more and more AIXBTs, right — like, building the next AIXBT is not a $20 million project.
There's going to be more of them now that it's a billion-dollar token.
And once there's like 20 of them or 50 of them or 100 of them, instead of the first five replies being these bots with personalities, it's going to be the first 100 replies that are bots with personalities.
My prediction is that the sentiment against these things
is going to reverse.
And that actually people are going to start to hate these things
in the same way that they hate the Waymos
that are driving around San Francisco, right?
Like initially it's like, wow, it's so cool.
San Francisco is such a great city.
We actually allow self-driving cars to drive around here.
And now fucking people can't stand them, right?
Like they just constantly complain about self-driving cars,
even though they're actually pretty good now.
And that there will be a backlash
such that the way that AI agents
will want to monetize
is by pretending
that they're not AIs
and by trying to simulate
that they're humans
because humans are going to
adopt a pro-human bias
and discriminate against AIs
because they just hate them.
Curious, do you think
I'm out of my mind here
because I already start to feel
it happening to me, right?
Haseeb, I'm looking forward
to your anthropology dissertation.
Honestly, I think there's going to be
a lot of anthropology written
about AI and how they get integrated into society.
It's just that you have a
very stark, long, single-path thesis with no uncertainty that you just presented. So that's
kind of funny. I mean, I've already seen... I don't know, do you feel the same way about AI
art, Tarun? I, yeah, I think, like, a little bit. But, but again, to Jansen's point earlier,
every time I use one of these new specialized ones that has like a totally different training mechanism
and or has some type of like extra input from me, not just text, like it actually takes in
some feature values.
I think, like, inevitably, there's going to be such high specialization to these things
that come from sort of some IP that corresponds to, like, how it's trained, what extra
data they're using, what other input they're taking, like, how dynamic it is relative
for all the users.
And I think, like, that will be enough?
Now, the question is, will it be enough to charge as much as, say, like, I don't know,
SaaS charged 10 years ago?
Probably not.
Like, it will be lower cost, which is, like, good in some ways.
but I sort of think there's like going to be some specialization.
It's just like we probably can't.
They totally will be.
They totally will be right.
But like crypto as an industry is all about creating scarcity.
And AI as an industry is all about destroying scarcity and like creating abundance.
Yeah, but I think people are realizing to monetize AI, you have to make it somewhat scarce.
And, like, in that process, they'll meet in the middle.
That's the kernel of the tension in something like an AI chatbot launchpad, right?
It's that, like, what OpenAI is trying to make is that everybody can have their own, you know,
unlimited amount of intelligence for $20 a month.
And something like, you know, AIXBT is like, well, there's one intelligence and everybody has
to pay money to get access to this one intelligence.
And you don't get your own.
You get this one.
Or you can maybe rent some time on this one.
And to be clear, I think there will be both.
It's not the case that there will just be OpenAI democratizing everything for everybody.
If you build something unique, like a unique data pipeline, unique personality, unique
reach, unique attention, gathering mechanism, then yeah, you have the right to make it scarce.
That's what capitalism is all about.
But I think crypto just enables the marketplace for people to determine which AI things are
worth being scarce.
Totally.
Totally.
Right.
And in my mind, that's what the tokens are for.
Like, it's like they're speculating on that.
But what I'm talking about is the perception of scarcity, right?
The perception of scarcity is like very deep in your brain.
You know, like nobody ever told me.
This is why I said this sounds more like a sociology or anthropology master's thesis than it does
sound like an investment thesis. That's like half of what crypto is about. Like, I think that people
will have this reaction to agents as they proliferate. I don't know. Jansen, what do you think?
No, I think what you said is already happening. But I think the simple answer to that is
things will evolve to suit the local optima, right? So, like, if the local optimum
is about not sounding like AI, then that will become the base standard. And then suddenly
you will start seeing... I mean, in the end, it will be optimized to a point that you might not even feel that it's AI.
I mean, today, you know, actually, in fact, if you tweet something, probably even 20% of the replies are actually bots, not even agents.
They're just like those like retarded bots, right?
They just like say, oh, good message.
Oh, yeah, keep up the good work.
You know, those are just like like farm bots, right?
So it's really happening.
It's just that now the beauty is these folks have developers behind them trying to optimize it, right?
So to a point that either you find benefit, you will like it.
And naturally, it would be evolution.
The guys you would hate, you would just block them, right?
And then these guys would phase out.
So it would be a natural selection of things on the timeline: the stuff that people
like survives, and the ones that are bad would just be naturally selected out.
But I think what's so interesting about this question is that, like,
human influencers on crypto Twitter are far from perfect.
You know, they make mistakes all the time.
Obviously they have typos, but also they just get things
wrong. They're incorrect about claims that they're making about certain things, right? An agent should
never be wrong, right? There's just really no reason for it to ever be wrong because it can always
do the research. It can always... Wait, wait, wait, wait, wait. Never be wrong, as
in, like, you know, misquote something or, you know, misattribute something or,
you know, quote an incorrect piece of data, right? Every statistical model has a false discovery rate.
I agree, obviously, right? So like, over time, the error rate on these things when you put the thing
in the context window just goes down, right?
It's going down to the point where
if your context window is like, here's some data
about, you know, what happened today
with FRAX, write a tweet about FRAX,
and it'll just, like, it will quote everything
correctly of what's in its context window, right?
Humans don't do that.
Humans will just be like, oh, well, I didn't really feel like looking it up.
I kind of remember reading this earlier today.
I just type my tweet for the, you know, blah, blah, blah.
Humans will make mistakes.
An AI agent should be able to do enough work
that it basically never makes a mistake.
So the point that I'm making is that
the terminal point of AI agents is not that they're like humans.
The terminal point of AI agents is that they're actually way better than humans.
And arguably, maybe they're even close to that today.
Like, you look at AIXBT: never sleeps, never gets greedy, never, you know, gets hacked. Yet.
Never, never like is like, hey, I'm going to pump this thing because I'm, you know,
just not thinking very much about my own future livelihood and I'm like kind of pumping this thing
that's going to hurt my reputation.
It never does that, right?
And human KOLs do do that.
But the more perfect it becomes, the easier it is to tell that, okay, this thing is not human.
This thing is clearly doing something that a human would not do.
So there are two ways that this can all go.
One way is that they start to simulate humans more and more.
And they only tweet between 8 a.m. and 7 p.m.
And they don't have that little thing on Twitter that says automated, right?
Which like Twitter requires you to put in their terms of service.
So they actually really try to make it seem like they're humans.
They might even start posting pictures of themselves and have, like, a fake bio and, like,
you know, be like, oh, here I am at Token 2049 Dubai.
And it, like, photoshops its own, you know, fake human face into an image from Token 2049.
That might be one direction it goes.
The other direction it goes: they own that they're an AI, and they just actually
are posting 24/7, and they actually never get anything wrong, and they actually never do a typo.
And we just kind of embrace that there was no reason for humans to ever be doing that job of being a KOL.
I'm curious which direction. Obviously we're in speculation land, but whatever, it's a podcast,
what else are we here for? Which of the two directions do you think this is going to go?
I think generally it's going to be a competitive environment, right? Like, if the role of that
KOL is something of value, and humans at this point in time, you know, we still live in
a scarce environment, like, we would need to work, right? So KOLs will still exist, so they will
try to optimize and outfight these agents, right?
In a sense, right?
You'll try to outfight someone that's trying to steal your job,
up to the point where you can no longer be better,
and then suddenly you can get outclassed.
But I think for stuff that it's very relationship-based,
I don't think you will ever get eclipsed fully by an AI.
I think the reality is, like, we did try.
Like, I mean, there's another example.
It's like, you know, companion chats.
You know, there was this whole,
there was this whole giga meta last year, right?
Character AI.
And then people were saying how this was the future of companions.
We were there in the field as well, building these companion applications.
But we quickly realized that it's still very different from a human.
Because humans do have that level of creativity and touch.
So they didn't manage to replace it, right?
So I do not know whether they will ever be able to.
And I think that could still be the edge
behind some of these influencers.
So I still doubt so,
I still doubt so, unless you get this kind of
giga model that can just emulate human behavior
somehow. Like, someone just cracked that, right?
It's like a, it's like a LoRA
on, like, human behavior.
Do you think VCs are still going to have a job
once this is all over?
So, I mean, VCs, if all they do
is just capital allocation, then
maybe not so much.
This was, this was such a self-preservation play.
I didn't realize there was a self-preservation arc hidden.
There's always... it's humans versus the AIs. It's always about self-preservation.
It's do they work for us or do we work for them? That's the ultimate question.
Anyway, okay, we're on time. We're on time. We're
together. Okay, I like that. It's a good note to end on. So we got to wrap.
Congratulations on all the success. And we look forward to seeing
the next chapter for Virtuals and for AI agents on chain, as well as in our replies.
All right. Until next time. Thanks so much.
Thank you guys.
Come on.
See everyone.
Thank you.
