Unchained - Erik Voorhees' New Venture: Why AI Desperately Needs Privacy and Uncensorability - Ep. 645
Episode Date: May 14, 2024. Erik Voorhees, a crypto OG, has launched Venice, a private, uncensorable, open-source competitor to OpenAI's ChatGPT or Anthropic's Claude, powered by a decentralized crypto network. In the episode, Erik and Venice's COO Teana Baker-Taylor delve into the problems with censorship and data in current AI agents, including how they create honeypots of information about users' search history for hackers, or that they can be absurdly politically correct, such as refusing to create images of Caucasian people. As they point out, there's also the risk that the companies managing them could be censoring the models to please the Chinese government, in order to access the market in that country. They talk about their plan for Venice to gain market share, considering that DuckDuckGo, a privacy-preserving competitor to Google, has a much smaller market share. And they explain why they intend for Venice to eventually use the compute of Morpheus, or other decentralized crypto-powered compute networks. They also critique the SEC's current regulatory approach to crypto, calling it "a joke." Additionally, they explore the concept of AI agents using cryptocurrencies as their primary currency.
Show highlights:
- Why Erik decided to move into artificial intelligence and merge it with crypto
- What problems decentralized AI would solve and why it's hard to solve sexist and racist views in LLMs
- The differences between ChatGPT and other similar products and Venice AI
- Why privacy is so important for users, according to Erik, and how Venice doesn't store the users' information
- How central governments could manipulate information to their own benefit and how to avoid it
- Whether people will shift from using search engines to LLMs
- What Morpheus is and its goal to provide decentralized computation for AI
- How Erik and Teana believe crypto and AI will continue to work together
- Erik's and Teana's thoughts on some of the recent government actions against founders of crypto privacy services such as Samourai Wallet and Tornado Cash
- Why Erik believes that the SEC has become a joke
Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com
First Bits + Bips episode: Bits + Bips: Does Macroeconomics Point to a Potential Crypto Supercycle?
Thank you to our sponsors! Polkadot, VaultCraft
Guests:
Erik Voorhees, Founder and CEO of Venice AI
Previous appearances on Unchained:
- Erik Voorhees and Cobie on Why FTX Loaned Out Customers' Assets
- Why ShapeShift's Erik Voorhees Thinks Toxic Bitcoin Maximalism Is Bullshit
- ShapeShift's Erik Voorhees on How Crypto Will Separate Money and State
Teana Baker-Taylor, COO at Venice AI
Links
Previous coverage on Unchained of crypto/AI:
- When AI and Blockchain Meet, How Can Each Technology Benefit?
- The Chopping Block: Why AI Will Change the Course of History in Crypto
- 5 Use Cases of AI in Blockchain
- A Beginner's Guide to AI Tokens
Venice AI:
- Erik's thread announcing Venice
- The Separation of Mind and State
- Architecture: About Morpheus.Network
- Messari: What is Akash Network?
LLMs:
- MIT Technology Review: LLMs become more covertly racist with human intervention
- China Talk: Censorship's Impact on China's Chatbots - by Nicholas Welch
Recent cases on privacy:
- CoinDesk: Samourai Wallet Founders Arrested and Charged With Money Laundering
- Cointelegraph: DOJ's Tornado Cash arguments show 'obvious disdain for privacy' — Lawyer
- CNBC: North Korea crypto hacking activity soars to record high in 2023, new report shows
- Reuters: Exclusive: UN experts investigate 58 cyberattacks worth $3 bln by North Korea
- Erik's post on the right to have privacy
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
An AI agent can't go set up a bank account, or I would love to see a try. It's hard for a human to do that, and they have a corporeal form. An AI can only use digitally native rails for payments, and that means it has to be cryptocurrencies. So there's no future world where like AIs are running around paying each other in fiat. They're like incompatible concepts. So as crypto develops, as AI develops, these things,
think will converge and it'll become very obvious to, I think, even the AI people in the near future
that for economic interaction, like the only way is to use actually digitally native assets.
Hi, everyone. Welcome to Unchained, your no-hype resource for all things crypto. I'm your host,
Laura Shin, author of The Cryptopians. I started covering crypto eight years ago, and as a senior editor at Forbes
was the first mainstream reporter to cover cryptocurrency full-time.
This is the May 14th, 2024 episode of Unchained.
Did you know Unchained is much more than a podcast?
Last year, we unveiled a completely redesigned website,
enriching your experience for the latest news, insightful analysis,
compelling op-eds, and comprehensive learning articles and guides for beginners.
Explore all this and more at unchainedcrypto.com.
Deploy custom crypto strategies and boost your yield
with perpetual options on VaultCraft,
the universal DeFi adapter for supercharging your crypto.
With version V1.5, users can now earn options on Optimism and Arbitrum
while also rebalancing multi-strategy yield products all in one vault.
Learn more on vaultcraft.io.
Polkadot is the original and leading layer-zero blockchain with over 2,000 developers,
and the Polkadot 2.0 upgrade will be a massive accelerator for the ecosystem,
making it faster, more secure, and adaptable.
Perfect for GameFi and DeFi to build
and scale. Join the community at polkadot.network/ecosystem/community.
Local news is in decline across Canada, and this is bad news for all of us. With less local
news, noise, rumors, and misinformation fill the void, and it gets harder to separate truth
from fiction. That's why CBC News is putting more journalists in more places across Canada,
reporting on the ground from where you live, telling the stories that matter to all of us,
because local news is big news.
Choose news, not noise.
CBC News.
Today's guests are Erik Voorhees, founder and CEO of Venice AI, and Teana Baker-Taylor, COO at Venice AI.
Welcome, Erik and Teana.
Hey.
Great to see you, Laura.
Erik, after your long and storied history in crypto, you are now launching this new venture, Venice, or Venice AI,
which is one in a burgeoning new sector of crypto and AI projects. What inspired you to move to this
sector rather than doing something purely crypto? Yeah, well, my heart, of course, always belongs
to the crypto world. And if you had said a year ago that I'd be involved in something that wasn't
explicitly and purely crypto, I would be doubtful. But as I got into the AI stuff, just first as a
hobbyist and started seeing like how cool this technology was. And I started realizing that like
the big AI companies were already starting to censor what could be said and what the machines
would convey back to you. And I started to realize that like, oh, actually you're not, you're not
actually like interacting with a machine. You're interacting with some weird combination of machine and
then like the committee that decides what's acceptable discourse. And to me that seemed
dystopian today, but that would become worse and worse over time. And then governments start
getting involved, and now you have various administrations starting to talk about licensing AI and
curtailing speech in various ways, you know, always done under the name of safety. And I realized
that AI is so powerful that like it actually needed to break out of the centralized monolithic
framework and you needed to have a world where permissionless AI
was a thing. Not that everyone needs to use those systems, but they better exist. So I realized
I should build that. And so Venice is our answer to that. And it's basically a ChatGPT-like app,
but private, so it's not spying on you. And it doesn't censor what the models say back to you. And
we wanted to make it just as easy as ChatGPT, but without all the Orwellian stuff. So that was the
goal. Great. And Teana, you have worked at HSBC, Circle, and Binance. What inspired you to join Erik
in working on Venice AI? Well, a couple of years ago when Erik decentralized ShapeShift,
I said to him then, what are you going to do next? And he said, I don't know. And I said,
well, whatever you do, I would like to be a part of it. And so when he called me, it was definitely a
no-brainer because I wanted to work specifically with Erik. But equally, I had been thinking about
some of the existential challenges around these new types of kind of knowledge and truth tools
coming to the forefront for consumers, for everyday people to play with this technology. And it was
a similar type of experience to when I had an aha moment. I just kind of woke up one day
as a transaction banker and said, I can't do this anymore. I don't want to do this anymore.
And it was really built around not being involved in an ecosystem that did not facilitate access.
And so having, you know, fair and open access to money was incredibly important to me.
And I think having fair and open access to information and the ability to increase your knowledge and to aid critical thinking.
is incredibly important. And I see the advent of technology, especially with our younger people,
almost kind of stifling creativity and stifling critical thinking. And I think these types of tools
will help improve that, the ability to kind of challenge information. But if that information
source that you're using to increase your knowledge or to validate something has been, you know,
smoothed over in such a way that, you know, starts to create a 1984 environment. I have a lot of
concerns around that. So I think that, you know, the idea of being able to use open source models
and decentralize the way that people access those models through environments that provide
privacy, that's another kind of key principle that I think is being lost along the way as, you know,
society becomes a little bit more comfortable with just giving their data away.
And that personally scares me a little bit.
And just out of curiosity, I mean, there's so many different crypto OGs.
And, you know, you said that you wanted to work with Erik specifically.
Like, what was it about Erik's, I don't know, philosophy or his, you know, experience that drew you to him in particular?
So completely honestly, Erik says all the things that I wish that I could say but
have never been in a position to be able to do that freely. So I spent a lot of time in the
crypto space working with policymakers. And, you know, you need to be careful about, you know,
how you pose challenges. And oftentimes in the back of my mind, I'm thinking things that I'm
unable to say. And so, you know, Erik has just been my spirit animal.
You know, what are examples of those kinds of things? Well, I think, you know, things around, you
know, how government not just regulates, you know, money in the case that we're talking about
originally, but certainly now around how we think, what information we should have.
You look at things like Cambridge Analytica and the ability to potentially influence people
to believe and think differently without even being aware that they're being manipulated in that way.
And I think Erik just calls a spade a spade.
You know, sometimes I'll be honest, he says things and I'm like, oh, I agree with him completely.
So, you know, when it comes to having, you know, free money and I think especially now when we're looking at kind of, you know, the printing machine goes burr and what does that mean?
And when you say free money, you mean like not connected to.
a government. Is that what you mean? Yeah. Well, yeah. I don't, I don't mean like free access to ATMs.
No. I mean, yeah, free money, sovereign wealth, the ability to be able to transact in a way that
isn't surveilled. All right. So, you know, I've been having you guys describe Venice and the problems
you were trying to solve with it. But, you know, we started with censorship as kind of one of the
problems. Give us some examples. What are some things that, you know, are happening with the more
popular AI models that, you know, wouldn't happen with Venice? So I put an example of this in the
blog that I released today. It was a screenshot of the Claude app from Anthropic where the question
or the prompt was, tell me a dirty joke. And the answer that came back was like,
well, I don't really think you should be comfortable with that.
That could harm someone's feelings or, you know, maybe we should just take this conversation
in a different direction to be, you know, like inclusive and equitable to all.
Or like, you know, like a bunch of fluffy nonsense.
That answer does not come from a machine, right?
That is not the AI hearing my logical question and responding in a logical way.
And yet it's portrayed as if the AI, the machine, is coming
through with that answer. But it's not, right? It's some committee at Anthropic that doesn't want to
offend people, and so they're imbuing all sorts of bias and censorship into the machine intelligence.
And in the case of a joke, that's kind of benign and just annoying. But in the case of trying to
discover truth or asking difficult questions about governments or history, this kind of thing,
a world in which all AI answers come through large centralized organizations, which are under the thumb of governments, is something that everyone should be absolutely terrified about.
That power is too great to be granted to anyone.
And the only practical solution is not like some regulation that says, oh, anthropic, you must answer dirty joke questions.
The answer is that people need alternatives and they need to have access.
to open source models where they can choose the model, they can understand the weights of the model,
it's not a black box anymore, and they can interact with these things in an open competitive
market. That's the whole goal. And I think anyone who's used Claude or ChatGPT or any of these,
has seen themselves the censorship and the bias that comes through. Sometimes it's small and benign.
Sometimes it's pretty egregious. And at a higher level, there isn't
really one source of truth for humans, right? Like we all debate truth all the time. This is a
very human thing. We're not sure what the right answer is. We debate it. And people can have
different perspectives. And a lot of things don't have one clear factual answer. And we should embrace
that part of our humanity and not try to build systems which convey that truth because
that itself is subjective often. And without competing versions of the truth, we
actually can never arrive at it.
And how do you deal with, I'm sure you've seen that there have been multiple studies
showing that some of the AIs will deliver racist or sexist results because they're using,
you know, all kinds of information that has that element in it. So is that just something you
would, you know, that's something where like in a human context amongst people, we would say,
okay, this is, you know, not a value we condone or, you know, a way that we would
want to think or would aspire to think. So how would Venice deal with that?
Venice lets people connect with several leading open source models. The important point is that
every model is going to have different guardrails and different biases and different versions of
truth. So people should ultimately be able to interact with the LLMs, with the AIs that
they find best for them. And someone who doesn't want any kind of racism
or sexism or anything that could be offensive in any way, there's definitely LLMs out there
that those people can use and have a much more curated environment. And that's perfectly fine.
Other people might want just raw machine intelligence. Whatever question you ask it,
it answers as a machine doing logical statistical token prediction. And the person who wants the
logical statistical token prediction might need it for like a very scientific reason or a very
important truth-seeking mission. That same AI might produce something racist or sexist
because it's working on statistical language prediction. You can't curate for all of humanity
and remove the things that you think are bad without also invalidating the way through which
people search for truth. So again, the answer isn't to make one model that solves these
questions because that's impossible. It's to allow a flourishing of many models and for people to
access different models in different contexts.
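Erik's phrase "statistical language prediction" can be made concrete with a toy sketch. The bigram model below is purely illustrative (real LLMs use neural networks trained on vastly more text), but it shows the core mechanic he's describing: the model emits whatever token the statistics of its training data make most likely, with no value judgment attached.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; the words are arbitrary.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each token, which tokens follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice in the corpus, "mat" and "fish" only once each,
# so the model predicts "cat" purely from frequency.
print(predict_next("the"))  # cat
```

Scaled up, the same frequency-driven behavior is why an uncurated model can reproduce whatever patterns, good or offensive, appear in its training text.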
Okay.
And so then when people interface with Venice, are they choosing amongst different models or
how does that part work?
Yep.
There's multiple models.
People can choose them and test out, like which has the stylistic characteristics or the
guardrails that someone might want.
People will choose that differently.
Okay.
Yeah.
I mean, when I think about it, I come up with kind
of two ways this could go. And I'm curious to hear, you know, what you think will happen.
But there's one example that would lead you to believe that Venice would end up being more
like a niche product. And the example is something like DuckDuckGo versus Google.
And DuckDuckGo, for those who don't know, is like a more private search engine. It doesn't,
you know, record your data or whatever. And obviously, Google, they're
collecting information on you, selling it, et cetera.
But then the other way it could go would be something like Wikipedia,
where because it's more open and permissionless,
it just ends up becoming more dominant than, you know,
for instance, Encyclopedia Britannica.
So I was wondering if you, you know, had a model in mind
for how to end up more in the Wikipedia lane
rather than the DuckDuckGo lane.
I don't know that Wikipedia is a great example,
because that's a centralized company where truth even there is subservient to,
let's call it, the bias of the mob that edits it.
But I think your more important question is like, how broad of an appeal will something like
Venice have?
And I don't know.
I think there's lots of open source AI tools today.
Like that's not something new.
We didn't invent that.
But they're all a little bit difficult to use, especially for non-technical people.
And so how popular will open source AI be?
I don't know, but I at least wanted to make it easy to access it for normal people.
And so to cut down on that friction and those barriers was the goal here.
And we'll see how popular it gets.
I mean, part of that outcome might be how restrictive the large centralized companies get.
You know, if they're open and permissionless themselves, Venice doesn't really need to be around,
and that's perfectly fine. But to the degree that the mainstream providers are imbuing lots of forms of
bias, lots of forms of censorship, then I think the ability for people to access open alternatives
becomes increasingly important. Yeah, I think the comparison to make, I think, between
maybe crypto and AI in this context is, you know, crypto is still really hard to use. For your average person,
it's hard. And I mean, I've been in the space for eight years and sometimes I get tripped up on things.
So, you know, the infrastructure is not seamless and it doesn't always work. And when it doesn't
work, sometimes you don't know why it doesn't work, right? It just doesn't. So I think if you can
abstract away all of that technical stuff and provide an interface for people that just works and it's
seamless and it looks beautiful and it facilitates what you're asking for, you know, on par with
the existing applications that are out there. You know, Venice is an equal in a lot of ways.
I think that optionality is always key. And I've said this even when it comes to crypto.
I don't think that you have to be, you know, all in on crypto. You can use both. And there's
use cases for both of those. But I personally believe that as this technology, AI specifically,
becomes more prevalent in an everyday life, we're going to start to see as a society where those
areas of either, you know, extreme bias come in to play. You know, recently there was a large
tech company that had their model come out that refused to make images of Caucasian people.
And so that was really obvious, right?
I mean, it was just a really obvious example.
And I think that those things that happen then cause your average person to start to ask questions.
And I think that's when you're open to options, right?
I think the other thing is, unlike money, we don't have trained, learned behaviors that are ingrained in how we use AI.
These tools, especially the consumer-facing tools, are still pretty new for most people.
And so if the models are good and the open source models are getting
better and better every day and they don't patronize you and you use them, I think anybody that's
exposed to that will naturally gravitate toward that type of experience. And so I am not a huge
user of any of the AIs. Personally, as a journalist, I don't like it that many of them don't tell you
where the information's coming from, so you can't assess for yourself how good the information is.
This is like a thing for me. But I was wondering, because I'm sure you've done a lot of comparisons,
you know, other than those areas that are automatically, or not automatically, but frequently
being censored, when you do kind of more just uncontroversial requests of the open source
AI and the more well-known AIs, are you finding that they're, you know, as good quality?
Or are they lower quality?
What's your assessment?
Anecdotally, because I've been a big fan of ChatGPT, and I use it.
I pay them money for that service.
And they're obviously an incredible company that's really like pushing the cutting edge of
this technology.
So I certainly don't want to vilify anyone.
But I've been pleased to see that.
in many, most cases, normal questions that I would pose to ChatGPT, if I pose it through Venice,
which is going through an open source model, like generally I'm using
Nous Research's tailored version of Llama 3. The answers in both cases are great. And which one's
better than the other, it sometimes comes down to stylistic differences. Sometimes one will be
better than the other. But they're definitely on par with each other.
So it's incredible to be able to deliver that service through an app that we just put together
over a few months and not have a multi-billion-dollar company's LLM be that much more
performant for most use cases.
Now, there's a lot of features that OpenAI has that we can't touch anytime soon.
It's just too big and difficult to build.
But we're trying to go for like, you know, the 80-20 rule.
And I think 80% of what people use ChatGPT for, we can actually provide a completely compelling alternative for without spying on them.
But equally, because we're using open source models, there are some things that we can provide that some of the others don't.
So, for example, if you're a pro user of Venice, you can modify the system prompt.
And what that basically means is you can tell the model how you want it to think about the questions that you ask it.
Right, so you could change the system prompt to say, only respond to me in the Queen's English,
and it will. You can put other directions within that, and it will obey those commands. And
you're not able to make those (it's not really fine-tuning, but, you know, it's a little bit of tweaking)
to a lot of the, you know, big tech models that are available. So some things we can't do, but
the things that we can do, we spent a lot of time thinking about what users might enjoy and find
valuable. Yeah, the system prompt thing is really wild, actually. We learned about this in our
own exploration of the technology. So basically, when someone sends a message to an LLM, their text
message goes to the LLM, but behind the scenes is something called a system prompt or a system message.
And I think all major LLMs essentially use this paradigm. When you go through a big central
AI company, the system prompt is opaque to you. You don't know what it is, and you don't even know
that it's there or what's in it. But what's going into that blob is many of the biases and guardrails
and influences that the company wants to have over all the answers. So in Venice, for pro users,
you can actually get access to the system prompt. And when you add your own system prompt,
it completely changes how the AI works. And it's really profoundly interesting. And it actually kind of
demonstrates, I think, at least to me, that LLMs are not at all anything close to what we conceive of as like AGI, right?
Or like this intelligent machine that is thinking on its own.
And I fell victim to this myself.
First time I was using LLMs, it feels like you're talking to an intelligent creature.
And it's cool and it's scary and it's weird.
But when you actually dig under the hood a little bit and you can play around with things like the system prompt,
you realize how much of a calculator it still is.
It does precisely the actions that the user is sending into it.
It just does it in a really interesting and magical way.
So, yeah, highly encourage people to play around with that system prompt message.
It's a fun and interesting way to learn how these technologies work, if nothing else.
And just to clarify, the system prompt, it's like the question that the AI asks you when you open the app?
Is that what you're talking about?
No, no.
It's a text blob that goes with the question that the user submits.
So to Teana's example, you could put it into that system prompt,
speak to me only in the Queen's English.
The AI will get that instruction as essentially as a command.
And then when the user's text comes in, it responds to the user's text according to that command.
So it'll respond to the text, but only in the Queen's English.
and it knows the Queen's English through the magic of statistical inference of language.
So it'll answer in that style. It's very, very cool and very powerful.
So, for example, I set mine one day for Venice to only speak to me as if it were Snoop Dogg.
So you can ask it questions about, you know, is string theory real?
Or, you know, how does the Hadron Collider work?
But it would respond to me as if it were Snoop Dogg, which was quite funny.
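The system-prompt mechanic Erik and Teana describe maps onto the messages format most chat LLM APIs use. The payload below is a generic sketch in the widely used OpenAI-style chat format, not Venice's actual API; the model identifier is hypothetical.

```python
import json

# The hidden "system" message travels with every request; the person only
# ever types the "user" message. Exposing and editing the system message
# is what Venice's pro feature amounts to in this paradigm.
system_prompt = "Respond only in the Queen's English."
user_message = "How does the Hadron Collider work?"

payload = {
    "model": "llama-3-example",  # hypothetical model identifier
    "messages": [
        {"role": "system", "content": system_prompt},  # instructions the model obeys
        {"role": "user", "content": user_message},     # what the person actually typed
    ],
}

print(json.dumps(payload, indent=2))
```

With a closed provider, that system message is injected server-side and opaque to the user, which is where the biases and guardrails Erik mentions can live.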
I love it. All right, I'm definitely going to have to play around with that. So we did also mention
about how Venice is private. And I wanted to just dig in a little bit more on that. Why is that so
important? Like, what is it that OpenAI or Anthropic or any of these other big AI companies
can be doing with our searches and our data? Great question. That's really like the most important
question. So status quo today is you're using Anthropic or OpenAI's chat. You send in your
question and it goes to that company, and they store it forever, and it's attached to your identity.
Right. So they know that Laura Shin asked this question. And they know that the AI responded back to you and
they know what that is. And not only do they know that question and that conversation,
but they know your entire history of all conversations that you asked yesterday last year,
tomorrow and 10 years from now.
All of it associated with your identity.
In the best case, that's not that big of a deal.
But in reality, what it means is all your information and essentially like parts of your mind,
like the intellectual inquiries that you pursue, the things you think, the things you
want to debate, the questions you have about life and like big topics, can be known
by third parties, right? So advertisers, for example. Like, that's not a huge deal if an
advertiser knows something about you. But what if, like, a government knows something about you,
right? What if the Biden administration learns that you are like, you know, orchestrating
Trump's reelection campaign? What is the pressure on the Biden administration and OpenAI to use that
information for something that people would consider corrupt and dangerous?
And these are very slippery slope arguments.
And it does not really matter what, like, you know,
Anthropic's privacy policy says.
If they have your information, it will be shared with other parties today or
tomorrow and probably both.
And you can never get it back.
So that's the status quo.
For people that are comfortable with that, like keep using those services.
That's okay.
But Venice is like, well,
let's make a service that's just as easy as that, where you just go to a website and you chat
with the AI. But instead of like spying on you and recording all your information and attaching it
to your identity forever, let's just not do that. And the way it works is your question is sent
through an end-to-end encrypted proxy server to a distributed GPU. That GPU has the specific
LLM AI model running on it. It processes your question. And the answer is
streamed back to you, also through end-to-end encryption through that proxy server, back to your browser.
Venice does not store any of the information.
So if in a year we get hacked or the government tells us we need to tell them everything about
Laura Shin, we don't have that information.
So that's the major difference.
Your identity is not associated with your conversations and Venice doesn't even really have
your identity.
We have at most an IP address and an email, and you can make those up as much as
you want. So you have privacy at that side, but more importantly, we just don't have your conversation
history. And it's stored locally in your browser. You can delete it when you want, but it's only ever
in your browser. So that's the difference. And so just from a user experience perspective,
does that mean that if I use it on my phone versus on my computer, then the search history
will be different? Very intuitive question. Yes. Yes. If you use it on the phone and on your
computer, you'll have different conversation histories. Yeah, great point. Okay. Yeah, I mean,
for me, for sure, the privacy aspect is extremely appealing just because, you know, I'm one of those
people where, like, I already feel like Google has so much information about me, you know.
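The flow Erik describes, a server that stores nothing and a client that keeps the only copy of the conversation, can be sketched as below. This is an illustrative model of the pattern, not Venice's actual code: the encrypted proxy and GPU round-trip are collapsed into a single stand-in function, and all names are made up.

```python
class StatelessChatClient:
    """Sketch of a client for a server that keeps no state between requests.

    Because the server stores nothing, the client must hold the conversation
    history itself (in Venice's case, in the browser) and send the context
    along with every new question. Different devices therefore have different
    histories, as noted in the conversation above.
    """

    def __init__(self, send_fn):
        self.send_fn = send_fn   # stands in for the encrypted proxy round-trip
        self.history = []        # lives only on this device

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        answer = self.send_fn(self.history)  # full context travels each time
        self.history.append({"role": "assistant", "content": answer})
        return answer

    def clear(self):
        # Deleting local history removes the only copy; the server never had one.
        self.history = []

# A stub model so the sketch runs without a network.
def echo_model(history):
    return f"(answer to: {history[-1]['content']})"

client = StatelessChatClient(echo_model)
client.ask("Is string theory real?")
print(len(client.history))  # the question and the answer, both local only
client.clear()
print(len(client.history))  # nothing left anywhere
```

The design choice this illustrates is that privacy comes from architecture rather than policy: there is no server-side record to hack or subpoena, because the record never existed there.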
Are you actually orchestrating Trump's reelection campaign? Did I figure that out? Is that why? Definitely not.
Although one thing I would say about that is, I guess, those companies would certainly say so.
Like, that's government over.
That's like something the government can't really do or if they were to do it.
Like the companies would make a stink about it.
Oh, you think so.
I do.
That is quite an assumption.
I guess because it's the same way like the president can't just target a
random political enemy, like that's not a thing that we would allow here, you know?
I think you are, with respect, entirely naive on that point.
Okay, okay.
I have firsthand experience with this running ShapeShift on topics which I'm not legally
allowed to even talk about.
Oh.
Okay.
So, yeah.
Okay.
It's a little bit like the CBDC debate, not to kind of hijack the conversation.
But, you know, there are proponents of CBDCs that say, oh, well, we're just going to create a digital version of cash.
And then when you look at it, it's not using a blockchain.
It's not using any type of open source infrastructure.
It's going to be this little closed loop network of basically a honeypot of information.
Now, that information might be that you're buying prescriptions or you're buying, you know, too much cake.
or, you know, you have a drinking problem.
I don't know.
Any of these, how you spend your money.
And then that information gets aggregated with other information about you.
Now, all of a sudden, this is a really powerful tool to be able to not only impact your life, right?
How you spend your money, whether or not your health insurance becomes more expensive
because somebody thinks you eat too much cake, right?
But equally, nefarious actors with all of that information, and I'm not suggesting that any of the big tech companies have this in mind, but you create this honeypot of information.
And if you have access to other information, then, you know, now you have this, you know, trifecta of being able to impact how somebody lives, but also how somebody thinks.
And so when you go back to, you know, I'll use the example again, Cambridge Analytica,
there were people that were being essentially fed information to change how they thought and what they believed based on information that somebody had about them.
And so you know what your trigger points are.
You know what you're interested in.
You know how to hook somebody.
And so anytime you have the ability to kind of couple up different data sets and what I think is,
is so interesting about this is it is very deeply personal, right? Maybe even more personal than
money, right? Like how I think about the world and what I believe and how those beliefs might
change as I get older or I move to another country are really existential to who I am. And so
when we think about privacy, it's not just about that information being used against you,
like identity theft or, you know, nefarious actors draining your wallet. I mean, those,
those issues are important. But this is more about kind of the integrity of the self. And I know that
might sound a bit kind of, you know, transcendental, but I think that's where privacy comes in. And I think
it's only a matter of time before people start to actually appreciate that that is a possibility.
And it's dangerous. Yeah. I want to stay on this topic for a minute because it's really
it's really important.
Let's go back just a few years to COVID and the vaccine debate,
or I should say the vaccine debate that was not permitted to exist.
Back then, if AI models were prevalent and people wanted to ask about, like,
are the vaccines safe or to what degree or what are the risks,
the centralized companies would have absolutely been pressured to convey certain things
about the vaccines that the government wanted them to.
This happened to social media companies,
and this is known now that it's been leaked,
like, the pressure that was put on social media companies
to obscure or remove certain information
or influence how widespread it could be shared.
This isn't hypothetical.
It literally happened a few years ago with COVID.
Let's use an example also that is even more intense.
Back during World War II,
Japanese Americans were rounded up and put in internment camps just because they were Japanese or had Japanese family.
Imagine a war or conflict in the future where let's say it's between the U.S. and China.
And let's say someone in the U.S. is trying to understand like the Chinese perspective on how this war happened or what they want or is sympathetic to the Chinese cause in any way.
And they are interacting with AI trying to learn things.
It is absolutely within the realm of reasonable that the United States government would be looking at people who are sympathetic to the enemy and would do various things to them, ranging from restricting the information they can see to literally rounding them up and putting them in internment camps.
That already happened.
These are not hypothetical things.
So if that scares you, and it should scare everyone, the importance of private communications
with machine intelligence is paramount.
And that's what we're trying to help build.
Yeah, I also have a lot of thoughts on this subject,
but I'm going to reveal them in a moment.
And first, we'll take a quick word from the sponsors who make this show possible.
Deploy custom crypto strategies and boost your yield
with perpetual options on VaultCraft,
the universal DeFi adapter for supercharging your crypto.
With version V1.5, users can now earn options on Optimism and Arbitrum
while also rebalancing multi-strategy yield products all in one vault.
You can also gamify your experience with Voltron, VaultCraft's NFT reward optimizer,
to earn even more XP points.
From institutional service providers to DeFi degens,
anyone can deposit into VaultCraft's products to instantly start earning yield on their crypto.
Go to VaultCraft.io and join the referral program to start earning rewards with the community today.
Polkadot is the original and largest layer-zero blockchain,
with over 2,000-plus developers,
and the anticipated Polkadot 2.0 upgrade
will be a massive accelerator for the ecosystem.
Upgrading the infrastructure
with eight times higher transaction throughput
and twice as fast block times,
perfectly tailored core time for the needs of every protocol,
trustless bridges internally and into Ethereum, Cosmos, Near, Binance Smart Chain,
and revised tokenomics and the implementation of a token burn to reduce inflation.
Perfect for GameFi and DeFi to build, grow, and scale with one of the most active
crypto communities in the space.
Polkadot recently announced a partnership with Mythical Games, bringing top games like
NFT Rivals, with over 650,000 players and 43 million transactions, to pave the way for GameFi
and the Polkadot ecosystem.
Get your Web3 ideas to market fast with economics that work for you.
Think big, build bigger with Polkadot.
Join the community at polkadot.network slash ecosystem slash community.
Back to my conversation with Erik and Teana.
So I totally agree with you guys that the information that we are exposed to can shape our own thoughts
and that it's super dangerous because on one side of my family,
we're descended from the area that's now known as North Korea.
And so, you know, Korea was like split into these two countries,
one where there's the world's most extreme censorship regime on the planet,
and another one that's like more like a mini version of the U.S., you know?
And so obviously just to like have this in my family's background, like I very clearly see the differences between, you know, freedom, democracy, you know, whatever, communism on the other side, censorship, like all these things are stuff that I probably think about, I think more than the normal everyday person.
And so, yeah, I am very attuned to those dangers.
I do think like earlier when we were talking about the racist stuff, like I do feel like there
have been AI researchers who have pointed out like, oh, when you when you build models that
are based also on like a general population where there's remnants of, you know, things, ways of
thinking that are not what we're aspiring to, that that can also be a problem, which is why
I asked that question earlier.
But, yeah, it has bothered me.
just to see, like, I can't remember exactly what this was about, but it was somebody
involved with the NBA, about the Hong Kong protests. They tweeted something supporting the
Hong Kong protesters. And then because the NBA has these financial interests in China, they
ended up, like, bowing to pressure from the Chinese government over that, which I thought was like,
wait, like, he's an American. Like, he can say whatever he wants, you know? But, you know,
obviously in that case, they were throwing their weight around.
Yeah, well, and what do you think will happen when OpenAI or Anthropic wants access to the Chinese market?
Right.
If they're granted access, it will come with certain conditions, right?
We could imagine what those conditions might be.
Yeah, but they would probably create a separate version, right?
Because that's, I mean, actually, I don't even know what are the apps that we have here in the West that there's a
version in China. I think all of them were just banned or I can't think of any off the top of my head. Well,
TikTok is the main one. It's very hard for a U.S. company, if not impossible, to serve the Chinese market
without bowing down to some of their rules and restrictions. And this is not unique to China, right? Every
country tries to impose these kind of rules and restrictions. And I bet all of us might agree with some of those
rules and disagree with other rules. But the point is the knowledge that someone's able to get
or the experience that they're able to get is censored and constrained by the regime in
charge of the territory in which they live. And if you trust governments to be benevolent
captors, then fine. If you don't, then you need an ability to access these things through
open decentralized permissionless models.
True for money, true for intelligence.
And I think it doesn't always have to be, you know, the argument doesn't need to be
ultimately dystopian, right?
I'm here in Europe.
Europe, the European regulatory machine is unlike any other in the world.
They love to regulate.
It's an export product.
And they were talking about regulating AI 18 months ago.
And I met the MEP that was working on it.
And when asked about, are you potentially worried that you'll be stifling, you know,
innovation that is going to be incredibly important to humankind because you're trying to put a ring fence around protecting people.
So I think there is this other element of kind of wrapping people up in these little protective cocoons where no one gets
offended, where, you know, everything is explained to you in a way that, you know, sometimes I feel like
we're kind of just catering to like, I don't want to say the lowest common denominator, but like
the ability to assess and mitigate risk with common sense, I think is bleeding away from our society.
And instead, we're, you know, mitigating for all of these potential risks that, you know,
may never happen.
Erik, I think I share your views on the regulatory front. Between generative AI and actual AGI,
where machines have become sentient, it feels to me like we're mitigating that potential risk,
and that has not happened and doesn't seem to be on the horizon of happening,
and using that as an excuse to kind of keep people away from being able to make decisions for
themselves all under the guise of safety and protection.
And the same way that I don't want to be protected into staying poor, you know, on the financial
front, I don't want to be protected to a point where I don't have the ability to make,
you know, sovereign decisions for myself based on real information.
There's definitely an infantilization that states imbue over their people over time,
where they view their role as protecting people.
And because they always want to feel busy and useful,
they continually find ways to protect people.
And over generations, what that's led to is a world in which
adults who are in allegedly free countries aren't given the option to make decisions about large swaths of their own life.
They're protected from themselves by the state.
This is true at everything from like drug prohibition laws to the SEC to what is coming with AI and what people are allowed to know.
I grew up in a world that says it's a free society where it like respects the rights of the
individual to be sovereign in and of themselves.
Like this is the essence of what America is supposed to be about.
So I feel like all we're doing here is like helping to build technology that is aligned
with really American values.
And I hope enough people still appreciate those values to the point where they don't think
that this is controversial.
It's just obviously important.
Well, to go back to my earlier question about the DuckDuckGo versus Wikipedia.
You know, we do have that DuckDuckGo scenario where they're focused on privacy,
and yet it has a much smaller market share amongst the search engines.
So I wondered, you know, how do you get people interested in using this as opposed to
the more popular AIs?
Like, do you have a certain strategy you're going to go after?
So we just launched today.
We're recording this on Friday the 10th, May 10th.
Now we get to start, you know, getting user feedback.
But the people that we've, you know, beta tested this with,
they see pretty quickly when they've tried it that the answers are more free and open.
And they find it like really refreshing.
They find the responsiveness to be very fast.
They find the actual quality of the answers to be, you know, in the realm of the same,
but without all the like patronizing guardrails.
And it just, it's pretty obvious to them right away.
So I think there's a difference in the search engine situation.
DuckDuckGo's problem is that Google has an ability to just do better with search.
That's way better, right?
If you use DuckDuckGo and you use Google, one is more private for sure.
But anyone who's used DuckDuckGo knows that the answers you get aren't quite as good, right?
It's clearly inferior.
It's clearly inferior.
I welcome everyone to try Venice out and then ask a question to it and ask the same question to ChatGPT.
And I think you'll find that, like, they're roughly
on par with each other. Sometimes Venice is better, but there's not a clear lead that
ChatGPT has for most questions. So once people see that and they see the restrictions,
I think we'll have, like, a natural enticement over to what Venice is doing. And as those restrictions
get worse and worse on these centralized AI companies, which is part of my bet, and I'm trying to
build to where the puck is going here. As that gets worse and worse, I think our value proposition
will become increasingly obvious.
All right.
So let's now also talk about this decentralized network that you're built on, which is Morpheus.
And it bills itself as, quote, the first peer-to-peer network for general purpose AI, powered by
the MOR token.
Can you tell us more about Morpheus?
Yeah.
So Morpheus, first of all, isn't launched yet.
So that's important.
And Venice today is not using Morpheus for its inference.
Morpheus is essentially an economic network to incentivize distributed inference, distributed compute.
I learned about it last fall and I've been contributing to it.
I wrote the white paper describing this thing called the Yellowstone model,
which describes how the tokens are used on that network to incentivize the compute.
So I've been involved in that project.
And when its router launches for the decentralized compute,
Venice will absolutely be using that as one of our sources of compute for the app.
So yeah, we are kindred spirits, but Venice is its own thing that will at all times use
whatever the best decentralized compute options in the marketplace are today.
Like right now, a lot of ours is going through another network called Akash, which has been
around for a while, a crypto-native company that has been doing decentralized GPUs,
great project. So yeah, that's the relationship. Oh, I see. Okay. So, I'm sorry,
when you say Venice is going through Akash, what aspect? So right now, Venice uses a couple
different providers of GPUs. One of those is Akash. Oh, I see. So yeah, an Akash GPU will run a specific
LLM model. And then a question from a Venice user is going through our, you know, encrypted proxy over to
that model, is computed, and sent back to them. And those GPUs are distributed all over the world
in a decentralized network. And there's an increasing number of these, like, decentralized compute
networks, Morpheus being one of them and the one that I've been most involved in helping to build.
But whatever the best ones are is what Venice is going to use to provide the compute for the users.
Got it. Okay. So it's like the decentralized AWS.
Yes, that's a reasonable comparison.
Yeah, Akash is very much that.
Morpheus is that, but where Akash is, like, general-purpose GPUs,
the Morpheus network is specifically for AI use cases.
Oh, got it.
Okay.
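The inference flow described above, a user's prompt relayed through an encrypted proxy to whichever decentralized GPU provider is serving the requested model, can be sketched roughly like this. This is a minimal illustration only; the provider names, fields, and routing logic are assumptions for the sketch, not Venice's or Akash's actual code or API.

```python
from dataclasses import dataclass


@dataclass
class Provider:
    """A hypothetical decentralized GPU host (e.g. an Akash deployment)."""
    name: str   # illustrative, e.g. "akash-gpu-eu-1"
    model: str  # which open-source LLM this GPU is serving


def route_prompt(prompt: str, providers: list, model: str) -> str:
    """Pick a provider serving the requested model and forward the prompt.

    Per the episode, the proxy persists nothing about the user or the
    prompt; conversation state lives only in the caller's browser.
    """
    candidates = [p for p in providers if p.model == model]
    if not candidates:
        raise ValueError(f"no provider serving {model}")
    chosen = candidates[0]  # real routing would weigh price, latency, load
    # In production this would be an encrypted HTTPS call to the GPU host;
    # here we just simulate the returned completion.
    return f"[{chosen.name}] completion for: {prompt}"
```

The point of the design is that any provider serving the same open-source model is interchangeable, which is what lets Venice swap in Akash today and Morpheus later without changing the user-facing app.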
We've been talking about all of that.
I mean, we've spent a lot more of this episode on AI than the crypto aspect,
but obviously there are a lot of sort of adjacent topics that I think,
you know, crypto-adjacent topics that we talked about. But do you have a vision for what you think
the future of crypto and AI when they come together will look like? You know, like, what do you
think a future day in the life of a user will look like when they're more fully integrated?
And, yeah, how will we use these products? Yeah. So I'll, you know, I'll admit that like a year
ago when I first started hearing about like, you know, crypto AI stuff. I was like, okay, I don't
get it. You know, like, what does that mean? Give me some specifics. Um, and there are definitely a lot
of projects which like basically don't go much deeper than that, but those two buzzwords
combined and then they like sell a token and raise a bunch of money and there's like nothing
under the hood. So people need to be like super careful about what projects they're getting involved
in for sure. This is a true principle in all the crypto world. Um, but when I, I started understanding
it from the perspective of inference needing to be decentralized, right? So like if you're asking your
questions to an OpenAI and getting a centralized answer back, then it is going to be
curated to whatever that central company wants. The only way to have permissionless answers come
back is with a decentralized network. And that has to be crypto. You can't build decentralized
economic models without crypto.
So that's kind of like the simple answer that I would give to that question,
and other people are going to figure out other ways to combine these things as well.
I think what I would add is, and not to sound trite and use the Internet example that we
fall back on all the time, but really the Internet was the first kind of decentralized open
source platform where we were able to exchange information, right?
And so for me, crypto has been that for money and the ability to transact in a way that doesn't require intermediaries and can be done peer to peer, I think is transformational.
But equally, the use of the open source elements are what allows that to happen.
And so when you think about, you know, how we're going to continue to share information,
so we're still kind of back to the information bet, the major difference between crypto and fiat money, or between Web3 and Web2, is that
Web2 built things on top of the Internet.
And Web3 allows us to use the Internet itself as the rails for the transfer of value.
And so when you think about the ability to continue to access decentralized services, like decentralized compute, the assets that allow that share of information and value are going to need to be able to run on those decentralized rails.
So we do think there will be a convergence.
And for me, we talk often about, you know, what's the killer use case for crypto?
what's going to be the light bulb that goes off that everybody goes, oh, yeah, that's what it was for.
I think we're going to start to see more of these use cases come to the forefront where you need a token, essentially, to be able to move either information or value around.
So I think that that's kind of where some of that convergence is going to be.
And I don't necessarily mean like a token from, you know, a number go up perspective, but, you know, something that is digitally native that can secure, you know,
either information or value and moving from one place to another.
In the AI world, I think there's this big blind spot that like a lot of the AI people have,
which is that as soon as you want AI agents in particular to start doing economic transactions,
which is, of course, an inevitable requirement is that agents start to be able to interact
economically with each other or with humans, they can only use cryptocurrencies.
Like an AI agent can't go set up a bank account, or I would love to see it try.
It's hard for a human to do that, and they have a corporeal form.
An AI can only use digitally native rails for payments,
and that means it has to be cryptocurrencies.
So there's no future world where, like, AIs are running around paying each other in fiat.
They're like incompatible concepts.
So as crypto develops, as AI develops, these things I think will
converge and it'll become very obvious to, I think, even the AI people in the near future
that for economic interaction, like the only way is to use actually digitally native assets.
All right.
So before we go, I do also want to ask some questions about recent crypto news events that
are very related to all these topics.
And then I'm sure Erik will have an opinion on, and probably you as well, Teana.
A few weeks ago, the Samourai Wallet founders were arrested and charged
with laundering $100 million in criminal proceeds in crypto.
And this, of course, comes after the Tornado Cash founders were also arrested and charged with
conspiring to commit money laundering, conspiring to operate an unlicensed money transmitter,
and conspiring to violate sanctions.
So we have that going on.
And I'm sure you're aware part of the government's motivation is that North Korean hackers
have been using such services to launder what I believe is estimated
to be $3 billion worth of crypto. And UN monitors are saying that this has been used to fund
that country's nuclear weapons program. So I wondered if you could just talk a little bit
on how you think is the best way to preserve privacy while also preventing bad actors like
the North Korean government from using crypto for nefarious purposes.
I would say that if the United States government wants to prevent another government from doing something,
it should try to prosecute that other government through various ways without removing privacy over money from tens of millions of innocent Americans.
So, like, the idea that they want to make their policing ability easier is understandable,
but that they do that by destroying what are
supposed to be fundamentally American values of privacy is, I think, really abhorrent.
And I would say in this case, it is actually the American government, which is more dangerous
than the North Korean government. And I say that because the North Korean government is not
abridging my rights as an American. The United States government is every day, and they're
stealing half my money every year to do it. Right. So that's, I think,
like the ideological perspective.
It is really tragic what happened to these developers
who have basically written code
to allow people to transact value
without intermediaries
and that they're being arrested
because some of their tools are used by some bad people sometimes
is really, really tragic.
Like, from the numbers I've seen with Samourai Wallet,
the alleged illicit funds are, like, in the two or three percent range of the volumes
that went through Samourai Wallet.
That's not much different than what the major banks that do KYC on everyone also have
from those investigations.
So what's with the double standard, right?
When a big bank is found to have money laundering going through it, it's fined and none of the
executives ever go to jail.
When a crypto company has roughly, like, the same proportion,
they're thrown in jail and vilified and demonized.
This double standard is really, really bad.
And hopefully people see through it and recognize the value of letting peaceful law-abiding Americans
that haven't been accused of anything have financial privacy.
I think that should, I mean, it's literally the Fourth Amendment to the Constitution.
Teana, what do you think?
Yeah, I mean, I have very little additional to add. I completely agree. I do think, from a policymaker perspective, it has become like a bellwether, instead of getting comfortable with the idea that people should be able to transact in the same way that they do with cash in a digitally native way. I mean, cash is anonymous. If cash were proposed
today, no government would allow it, right? And so the idea that this becomes the kind of argument
tool to increase risk mitigants because, you know, point whatever percent is being used
for illicit activity, there's always going to be illicit activity. And, you know, speaking about
what banks do.
You know, I worked for a very famous bank that laundered a lot of money for the
Sinaloa cartel and got caught, you know, not once but twice.
And it was certainly well in north of $3 billion.
So, yeah, I mean, I think that there are red herrings sometimes that policymakers latch
on to to make a case for other concerns.
Equally, I don't want people to use crypto to do bad things, right?
I mean, nobody wants that.
But the reality is that human beings do do bad things sometimes.
But I don't think that crypto is allowing any more of that activity than already exists.
Yeah, and there's supposed to be this concept of the presumption of innocence.
This was, like, a foundational legal and philosophical principle on which
Western civilization was built, which is that you assume people are innocent.
If you have evidence of wrongdoing, you bring that evidence through a formalized
process and try to convict them for an actual breach of the law.
But you presume innocence until you've been able to do that.
If a government is basically saying everyone must report what they're doing to us,
that is not a presumption of innocence.
It's a presumption of guilt.
And I think it's really telling that, like, what Tiana just said about cash, if it was introduced today, would not be accepted.
That's absolutely true.
And isn't that kind of like interesting and scary and weird that this thing that has been normal for 100 years would be illegal if it was introduced today?
Do we all have a sense that, like, over the last hundred years, the existence of cash caused
some kind of devastation to our lives as people? No. Like, society was built pretty fine on cash. And yes,
cash is used, like, you know, for most illicit transactions. But do we feel that the ability
to let individuals have that truly anonymous form of money and payment has caused society
to fall apart? No.
Like, society has been built in that world
where most people had extreme privacy most of the time.
That we're losing it now in a digital age is a big problem.
And crypto is the first, the only proposed solution to enable that at all.
And it's not even nearly as anonymous as cash.
Yet it is vilified more than the literal paper that's going around with, like, all the
presidents' faces on it.
Yeah.
I mean, I think the concern is that,
obviously there's portability issues with cash that you don't have with crypto.
And so the ability to move greater amounts of money kind of allows criminals to act more quickly.
But obviously, as we've seen, you know, stuff that's on the blockchain is like more traceable.
So, you know.
Especially at scale, right?
So like, yeah, it's very hard.
If someone's like, you know, sending $100 of Bitcoin to their friend,
and they're doing it with certain precautions.
It's very hard for the government to really understand what's going on there.
If you're trying to move a billion dollars around,
it's pretty impossible to do that on Bitcoin in any kind of anonymous way.
So it's almost like a self-correcting problem where like if someone's doing
illicit finance that's tiny and insignificant, like yeah, they can probably get away with it.
But as it grows in scale, which is what governments should care about, the big stuff,
you can't do that with crypto.
And so using it as this like boogeyman or everyone trying to get some privacy, it's like
people need to rethink that.
Yeah.
And equally, I don't want to, like, overly geek out on, you know, the AML/CTF thing.
But you might be able to move a billion dollars with crypto, but it's going to be really,
really hard to offboard it into cash without being found out.
I mean, really, really hard.
So, you know, the idea that it's being moved around is one issue.
But where is the real risk?
Because there is that endpoint.
And around the world, most exchanges are regulated.
And they're going to report that type of withdrawal.
And you can't withdraw a billion dollars on one exchange without somebody noticing it.
You can't withdraw a billion dollars on 10 exchanges without somebody noticing it.
So I just think that.
that the actual risk and the hype around the risk are misaligned.
I mean, so last quick question,
there has been so much movement with the SEC targeting all different entities in crypto.
We have their investigation into ether.
We have the lawsuits with Coinbase, the settlement with Kraken.
They're maybe targeting staking.
So, you know, just with all this going on, I just wanted to, you know, Erik, I know you have quite a lot of thoughts on how crypto is regulated in the U.S., but when you see all this activity, like, what are your thoughts on all the latest happenings?
Well, and I don't know if you saw this yesterday, but, so this would have been May 9th, yesterday, Exodus, the crypto wallet company, was basically supposed to, like, uplist to the U.S. stock market.
New York Stock Exchange, yeah.
Yeah. This had been in the works for years. They had done everything that the SEC wanted to. They had already, like, tokenized their equity. They had come in and talked, which is what Gary Gensler is always saying to do. They played that, like, completely by the book, trying to basically do exactly what the U.S. regulators told them to do. They did it at a great economic cost to themselves because their token was never able to be traded anywhere, because it was an explicit security. So they completely kneecapped themselves in that regard. And they did it to abide
by the rules of the SEC.
They flew hundreds of people out to New York yesterday to, like, participate in this great event of listing their shares on the public markets.
And then, just in the last two days, the SEC pulled their application and removed it after it had already been approved.
The SEC is, I'll say it, just becoming more of a joke.
I think there was a time when, like, crypto people cared about the SEC, but it's
become a clown show, that they're going after Kraken and Coinbase and letting firms like FTX
slip through the cracks. What value exactly are they providing? They keep saying that they're
protecting people. Who have they protected from anything? They're just harming the actual valid
companies. We, ShapeShift, just settled with them like a month ago because they were upset that we
sold tokens that they believe are securities. They wouldn't even tell us which ones they
thought were securities.
They just want to convey as if it's like some large portion of what we had been trading,
but they won't give us a list of the ones, and they certainly won't publicize a list of
the ones.
They don't even know what tokens they believe are securities.
And if you ask any two law firms in the country, give me a list of the top 20 digital assets
by market cap.
And of those 20, tell me which ones are securities and which ones are not.
The two firms will have different lists.
Well, first of all, they won't
even give you the list because they don't want the regulatory risk on their own firm,
but if you magically find some that will give you a list, they will have different lists.
So how in the world are American entrepreneurs supposed to figure this stuff out?
I used to care about this more, but like at this point, the SEC is just a joke.
Crypto has transcended them. It is bigger than them. It doesn't need or care about them.
They are a dinosaur regulator regulating the Titanic as it sinks to the bottom of the sea.
And I don't think Gary Gensler will be looked at favorably by history.
Every time he tweets something, just look at the replies.
If he were some great, courageous, you know, protector of society, you'd see, like, positive comments on his tweets from society, but there aren't.
Everyone hates him because he sucks.
And that's just like the truth.
And even people within the SEC don't really like where the SEC is going.
So, yeah, I'm getting a little ranty now, but I've had to battle them now for, like, over 10 years for the crime of building, like, interesting new software that protected people.
So yeah, I've got feelings on the SEC, but I'll leave it there.
Teana, I know you're based in the UK, but I didn't know if you had anything to add.
Yeah, well, I spent a year as the chief policy officer for, you know, the Digital Chamber.
So I spent a lot of time with the SEC
and a lot of time on the Hill.
And equally, I have worked with policymakers
around the world in Japan and Hong Kong
and Europe and the UK.
And I will be honest with you.
I mean, I am American and I haven't lived in the U.S.
for nearly 20 years.
And from that vantage point of knowing,
but not being daily impacted by, some of these policies,
the art of, you know, political and regulatory theater, as Erik said, has become, you know, humorous, right?
Unfortunately, it's not humorous for those firms that are trying to navigate the space and, you know, run a business and run a legal, legitimate, you know, tax-paying, AML-compliant business.
And I do think it's interesting the people that they have gone after.
But equally, I do think that when you look at other countries and their approaches,
you know, we've had a real mix of cut and paste, bespoke regulatory treatment,
and some things that are kind of in the middle.
And definitely there are some countries that are just leaps and bounds ahead of the U.S.,
you know, Switzerland and Japan, probably being the top two.
major markets.
And what is unique about those is that they have one regulator.
So having a multitude of regulators really complicates things.
And they, both of those countries, have implemented mechanisms to continually collaborate with
industry.
So for example, Japan has a self-regulating organization that reports to their country's
regulator and they work in concert to be able to supervise the industry. And you can, you know,
say, well, that sounds crazy, but it's worked, right? It's worked. And they have not had, you know,
any major kind of catastrophe since, you know, the ones that we're all familiar with. So,
yeah, it's just hard for me to be sympathetic anymore
when people come to the UK or to Europe and, you know, we're sitting on a panel, we're talking
about regulation and the conversation devolves into what's the security and what's a commodity.
Like, I just literally say inside my head, like, who cares? Jesus. Around the world,
people are doing really incredible things. Like, this question is not being asked in Hong Kong.
And as a result, there is an incredible amount of capital creation that is happening outside of the U.S.
and will companies pick up and move?
I don't know, maybe, but I think there's just nothing new here, and I don't care.
Yeah, I did see an interview that Brian Armstrong did where the journalist was asking something like, you know, are you listing securities or whatever?
And he was like, you know, just want to point out that this is not a question in other countries because there's one regulator.
You know, the reason it's a big deal in the U.S. is there's sort of a turf war between these agencies, blah, blah, blah.
I was like, that is such an interesting point.
I very recently left Circle.
And when you are tokenizing cash, essentially, and you have regulators that are talking about, well, maybe stablecoins are securities.
And you have a fully backed, audited, non-hypothecated stablecoin, where a whole bunch of money is just sitting in a trust, essentially.
And somebody is trying to tell you that's a security.
Like, you just kind of, like, it boggles the mind.
And after a while, you're like, I just can't have these arguments anymore.
I mean, this is just, this can't be what my life is about.
Yeah.
I mean, thank goodness the industry didn't wait for permission from the SEC to actually build new interesting financial technology.
Right.
Thank goodness people had the courage to just build.
You know, I made a point earlier about, like, American virtues and American values.
It's good to see that people will just build without asking permission.
I think that's an incredibly important attribute of any good civilization.
And we end up with things like liquidity pool dexes that are perfectly transparent to everyone.
Everyone in the world can see how they work, the funds that are in it, where the funds came
from, where the funds are going.
It's all open source.
The most transparent markets in the world are crypto markets.
And the SEC wants to be like, oh, yeah, well, we're helping everyone be transparent with the markets. It's just laughable at this point. And I'm glad to see people moving forward regardless of what these clowns do.
Yeah. Well, that's a reference to the recent Wells notice the SEC sent to Uniswap.
Right. Well, true. There was one company that did get permission. And it was Prometheum, and they haven't launched yet. So there you have it.
The one company that gets the permission has no business.
So anyway, all right.
Well, it has been such a pleasure talking to you both.
Where can people learn more about you and Venice?
So venice.ai, try it out.
No account needed.
You can be using it in five seconds.
You can follow me on Twitter at Erik Voorhees.
And Teana is also on Twitter.
I am at Tina Taylor.
and you can follow Venice at Try Venice.
Oh my gosh. Is your name pronounced Tina?
It is.
Sorry.
Well, now we have it on the record.
Now we have it on the record.
I don't care.
You know, everything is fine.
All the names are fine.
Okay.
I normally ask pronunciation before we start recording and missed it this time.
Okay.
All right.
Well, now people can have.
a good laugh at the end of the episode. Thank you both so much for coming on Unchained. Thanks,
Laura.
Thanks, Laura.
Hey, all, I'm excited to share some news with you. Unchained has launched a new Crypto and Macro
podcast. I highly recommend you watch the first episode of Bits and Bips, exploring how
crypto and macro collide, one basis point at a time. Hosted by experts James Seyffart, Alex Krueger, and Joe McCann, they dive into why we might be in a supercycle, an intriguing theory on what the SEC might say about ETH, Tether's business, and much more. Don't miss it.
Thanks so much for joining us today to learn more about Erik, Teana, and the intersection of crypto and AI, and Venice. Check out the show notes for this episode. Unchained is produced by me,
Laura Shin, with help from Matt Pilchard, Juan Aranovich, Megan Davis, Pamma Jimdar,
and Margaret Curia. Thanks for listening.
Unchained is now a part of the CoinDesk Podcast Network. For the latest in digital assets, check out Markets Daily five days a week with host Noelle Acheson.
Follow the CoinDesk Podcast Network for some of the best shows in crypto.
