Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Brave: Building the Private User-Friendly Internet - Kyle Den Hartog
Episode Date: October 16, 2025

Brave has spent a decade building a privacy-first browser that empowers users with tools like ad-blocking and fingerprinting protection, now serving 100 million monthly active users. Security engineer Kyle Den Hartog joins Friederike to unpack the centralization traps in digital identity, from email's spam-driven dominance to one-size-fits-all DeFi lending rates, and how Brave counters them with BAT's user-rewarding ad model, zero-knowledge personalization, and wallets that act as privacy guardians. Kyle warns of on-chain transparency's risks to consumer behavior, advocates for intent-based "vendor relationship management" advertising, and draws historical lessons on censorship's chilling effects amid rising regulations like EU chat controls. He shares Brave's vision for seamless private payments and user-controlled algorithms to reclaim the open web from Big Tech monopolies.

Topics discussed in this episode:
- Introduction
- Kyle's background in security and identity
- Why identity and privacy matter
- The history and centralization of digital identity
- Email as a cautionary tale for decentralization
- Brave's privacy-first vision and 100M users
- BAT: Rewarding users for attention
- Challenges and evolutions in Brave's ad model
- Zero-knowledge for intent-based ads
- Brave Wallet: Privacy by default
- On-chain privacy pitfalls and wallet solutions
- Browser wallets vs. built-in security
- Censorship, regulations, and history's lessons
- Fixing social media algorithms
- Brave's 5-year vision

Links mentioned in the episode:
- Kyle Den Hartog, Security Engineer at Brave: https://x.com/PryvitKyle
- Brave Browser: https://brave.com/

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io

This episode is hosted by Friederike Ernst.
Transcript
To put it quite frank, I think we're doing it wrong right now in the Web 3 space.
Email is actually a decentralized protocol by design.
You look at Google, you look at Microsoft, you look at Spamhaus.
If you couldn't send an email to somebody using Gmail, then you stopped running your own server.
And that created the natural centralization, even though you could be doing everything in a decentralized way.
There's tens of millions of users who use Brave, but there's a couple of thousand websites that use BAT for incentivization.
If you look at how much you actually get for watching an ad, it's tiny.
What do you think the challenges here are?
I think you're hitting at the heart of the problem.
Excellent question.
Welcome to Epicenter, the show which talks about the technologies, projects,
and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Kyle Den Hartog,
who is a security engineer at Brave.
You all know Brave, the privacy-first browser.
And before I talk with Kyle, let me tell you about our sponsors.
This episode is brought to you by Gnosis, building the open internet one block at a time.
Gnosis was founded in 2015, and it's grown from one of Ethereum's earliest projects into a powerful ecosystem for open, user-owned finance.
Gnosis is also the team behind products that have become core to my business and so many others', like Safe and CoW Swap.
At the center is Gnosis Chain.
It's a low-fee layer 1 with zero downtime in seven years, and it's secured by over 300,000 validators.
It's the foundation for real-world financial applications like Gnosis Pay and Circles.
All this is governed by GnosisDAO, a community-run organization where anyone with a GNO token
can vote on updates, fund new projects, and even run a validator from home.
So if you're building in Web3 or you're just curious about what financial freedom can look like,
start exploring at gnosis.io.
Kyle, thank you so much for coming on.
Thank you very much for having me.
Cool.
Kyle, it's been a pleasure using Brave for many years now.
We had Brendan on ages ago.
So we'll dive into kind of everything that's happened at Brave since then.
But maybe before we do that, tell us a little bit about you yourself.
What's your background?
Yeah.
So my background kind of comes from a little bit of a mix of cryptography, security, and then expanding into Web3 as kind of the niche
specifics. So originally I started out as a penetration tester as an intern right out of university
and then expanded into digital identity with that from taking a computer security course.
So I spent about five years working in that space. And in that, I kind of expanded my knowledge
on cryptography and web standards, working as an editor of the verifiable credential
specification. And then through that and kind of the touch points of digital identity to Web3,
that's how I ended up at Brave to come and help on the security side of things within the wallet
and within the browser. And these days also helping a little bit on the search side.
What was your take on Web3 when you came in? I know kind of like it's a very divisive
subject in the security arena. Yeah. For me, I've always found it to be kind of user empowering
and the concept of self-sovereignty, I think, is really embodied by the community.
I think the way that we represent the principles at times doesn't always match what we say that we
could do. But as I've learned with anything, nothing is perfect. You're always striving to be better.
And so I think that's really what I'm trying to contribute to the space as well is how do we make
sure we get closer and closer to these different principles that we say that we're working on.
So at Brave, kind of your work centers around identity and privacy.
Tell us about what made you interested in these topics in particular.
Yeah.
So it largely actually comes back to the computer security course that I took in university.
So when I was interning my final year, my computer security professor presented me with an interesting problem.
He said, how could we build a voting system on a blockchain?
And I kind of ran down a rabbit hole with that and started looking at digital identity from that perspective.
And so it kind of opened my eyes to the idea of how identity plays such an integral role in all of what we do online.
We have to be identified whenever we're interacting with a website, whether that's just an IP address so that they can figure out who to send a response to, up to building authentication and authorization systems, passwords, passkeys, all of these sorts of things.
They all come back to the concepts of digital identity of knowing who says what about whom.
And so that really kind of led me down this path of understanding, okay, so if this sits at
the base layer of everything, understanding it is very useful and very impactful to everything
that we do on the web.
DID, decentralized identity, it's been here since the beginning of Web 3 as a dream.
Right. But so far, all the solutions that we've seen have been pretty centralized.
Can you give us an overview of what the ecosystem looks like?
Yes. So I think what's interesting is actually going back before even decentralized identity,
as we call it today, and looking at the history of digital identity.
Taking a step back to OpenID Connect: originally the design of that was meant to be more decentralized (well, it would technically be referred to as federated), because the goal of it was to allow bloggers to authenticate between their different
sites so that they could add comments within their systems. So the original design of a lot of
these pieces were designed in that sort of way, but what happens typically is that centralization
is easier to implement and it allows you to provide better security controls. And so the actual
implementations themselves ended up going more towards a centralized approach, creating natural hubs of centralization over them.
Like sign in with Google and sign in with Facebook and kind of all the likes, right?
Yep, yep.
And in addition, you know, the main problem that really kind of drove this was actually
enterprises doing single sign-on systems within all their services that they wanted their
employees to be able to log into.
So it's kind of really trying to take that concept further today, but do it within a more decentralized
approach because what motivated the concept of DID actually came back to web payments.
So one of the problems that was encountered with web payments was that it was built on email.
Well, if you, you know, lose your email address or your email address gets hacked,
then somebody is able to basically steal access to the web payments.
And so that's where we needed a different identifier.
And that's where the creation of the decentralized identifier specification came into play
was creating this kind of alternative method to be able to control access to the identifier use for web payments.
These days, it's doing less of that and it's kind of expanded into many different areas.
But it's kind of that same concept of understanding the underlying aspect of the identifier itself.
If you kind of look at the market segmentation today, when people are on the Internet,
what do they actually use to identify themselves with?
It all comes back to the email.
What I find very interesting about this is email is actually a decentralized protocol
by design.
If you look at the design of it, anybody can set up their own domain and anybody can send
an email to anybody and anybody can run their own email server in the same way that
anybody can get their own public key, anybody can run their own blockchain node,
and anybody can send payments or assets around on the blockchain protocol.
What's interesting is wondering how did we actually centralize email?
Well, it really comes back to: you look at Google, you look at Microsoft, you look at Spamhaus.
They were all trying to address the problem of spam protection.
Well, they ended up winning out because they provided the best spam protections.
And by doing that, everybody decided, hey, let me go use it.
So then their implementations within the protocol naturally created centralization because if you couldn't send an email to somebody using Gmail, then you stopped running your own server.
You just had too much maintenance headaches and managing it yourself.
And that created the natural centralization, even though you could be doing everything in a decentralized way.
Yeah, that makes sense.
I'm sure there's economies of scale here, but kind of they taper off, right?
Kind of like whether you kind of scan 50 million emails or 500 million emails, kind of at some point, kind of your spam detection doesn't get any better.
So if kind of spam protection is the bottleneck, how many distinct email services do you think we could have? Or could Gmail kind of offer a plug-and-play
spam filter to kind of run locally on your own server? How do you think kind of we could,
we could end up in a much more federated email system?
So I think this is one of the things that I'm still trying to figure out what the answer is.
I read a good blog post recently and I can't remember who it was.
But one of the things that they pointed out was that naturally over time as you add complexity, you will centralize because you gain a certain level of expertise.
And that node provides better guarantees than everybody else.
So as you address more and more problems, fewer and fewer people can participate and do it at a high quality level.
And so naturally you're going to create these sorts of things.
So I think the idea that you're exploring there is how you can do this adversarial interoperability type, plug-and-play approach, where you create the openness of the protocol such that anybody can just go buy it instead of having to build it themselves. And then that pluggability is what keeps the
network more decentralized by design. So Brave was founded in 2015 by Brendan Eich and Brian Bondy.
And kind of the original mission was very much kind of to build a privacy first browser.
How do you think that vision has held up over the years? I mean, it's been 10 years, right?
Yeah, yeah.
Well, I think the numbers kind of speak for themselves.
We actually just announced today that we had 100 million monthly active users.
So, you know, the vision is going and it's growing.
You know, you can look at our growth rates to see how many more people are kind of moving in and what are the advantages of it.
A lot of people don't think about privacy in the sense of like, you know, privacy nerds like myself do.
They think about it in a practical sense of like, hey, I don't see ads on YouTube.
That's kind of nice.
Like that's literally one of the number one features.
But behind the scenes, what they're actually getting is canvas fingerprinting protection, Shields built in,
all the different, you know, capabilities that we have.
We have a built-in VPN to be able to do IP address blocking,
proxying of services behind the scenes.
You know, like when we have to communicate with Google Safe Browsing services,
we're proxying those communications.
All of these things are things that people are getting by default
because we care about making sure that people have those capabilities built in from the start.
And so we bring them in with the no YouTube ads,
but along the way, we actually help everybody
to get better privacy on the web.
Why do you think people don't care about privacy?
I mean, there are people who care about privacy, right?
But by and large, the majority of people don't care at all, right?
Even for things that are clearly exploitative in some sense.
So people use Gmail, despite the fact that Proton Mail offers essentially the same service and is also free.
Why do you think privacy has to be shoved down people's throats?
I think a lot of it comes back to what I refer to as the power user problem.
So caring about privacy requires toggling and changing a lot of things in a lot of cases.
So that naturally comes into the picture of I'm a power user.
I know how to go in.
I understand what those toggles are doing and I configure them.
So this is where I think defaults come into play in a lot of cases,
is knowing that you get it,
but that somebody else is taking care of it for you.
Going back to the email example,
I could run my own email server today.
I'm technically capable.
I understand how it works.
I could probably figure out ways to bypass the different spam restrictions that come into place.
but my time is worth money, you know, like my time is worth the ability to, you know, have the
meme of touch grass.
You know, I like to go out and I like to play golf and stuff.
So it's a choice between do I want to go play golf or do I want to manage my email server?
So when you come into those sorts of restrictions, I think that's the reason that so many people
end up defaulting to that, well, I'll just pick the easy and convenient thing, even if that means
that, you know, I'm giving up some of my privacy or making some other tradeoffs and some
sort of way. So in that sense, I think that's where it comes back to, you know, the maintenance
headache is really what it's about. And it's not about people not wanting privacy. It's about
people wanting it as long as it's still convenient. Would you say that Proton Mail is less
convenient than Gmail? In my experience, no. Billions of people use Gmail and millions of people use
Proton Mail. It's the vertical integration that you see with Google, that you see
with Microsoft, their ability to integrate their different services together to make them all work.
I think that gives them a natural network effect that creates kind of their walled gardens.
And slowly over time, everybody hits a friction point. Like the reason that I switched from
Gmail to Proton Mail in the first place was actually because my free Gmail account hit the 15 gigabyte limit. And I was like, well, if I'm going to have to pay something, why not just pay an extra $2 a month and get, you know, free Drive, free VPN, free Proton Pass? Like, I ended up saving money because I dropped my LastPass account. I'm not paying for the Google Drive. I still get email. And so, like, the net benefits that I got from Proton were better as soon as I kind of grew out of the free part of the Google system. Okay. Yeah. Initially, kind of the
idea at Brave was also kind of that attention should be monetized, right? Because kind of like every
time you get something for free, you don't really get it for free, you're kind of being squeezed
in some way that you can't immediately tell. And kind of the way that Google does that, obviously,
is it kind of, it data farms you and then tries to sell you things or have other people sell you
things. In 2017, Brave kind of did this ICO, the Basic Attention Token, BAT.
And kind of the idea was that if you choose to kind of be data-mined or look at adverts and so on,
you would be compensated for that, right?
Tell us about what the idea was and how that has evolved over the years.
So I think where you have to first start is back at the economics of why did we go down
the advertising path in the first place.
What it really comes back to and some of the things that,
you see argued by Google within the web standard space is that the advertising model actually
creates the open web. If you had a paywall at every single site that you showed up at,
you would create a friction point where I have to decide, do I want to spend a penny or not
every single time I visit some site. The advertising model has created that, but along the way,
it's also meant that we need better targeting because that's what the customer was demanding.
The advertisers are demanding the ability to know that they're getting,
you know, as good of value from the dollars as they can.
So that's the advertising model to understand.
And then what we looked at was, okay, how do you make sure you maintain the open web,
but you do it in such a way that it's actually privacy preserving?
Well, we know that the advertising model works.
It funds the entire open web today.
But to achieve the privacy preserving principles, what we started to look at was, you know,
how can we do this directly within the browser?
So within the browser itself, we're able to do these confirmations where we can basically send the entire advertising catalog down to the browser.
And then the browser can do the selection of the advertisement.
And then using zero knowledge proofs built on a modified version of the Privacy Pass protocol, we're actually able to do the confirmations to be able to say, yes, this ad was viewed.
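The confirmation flow Kyle describes relies on blinded tokens. As a rough illustration of the underlying idea, here is a textbook RSA blind-signature round trip in Python. The IETF Privacy Pass family does include a blind-RSA token type, but the tiny parameters, messages, and structure below are simplified stand-ins for illustration, not Brave's actual protocol:

```python
# Toy RSA blind-signature flow, conceptually similar to the blinded tokens
# in Privacy Pass style protocols. Textbook RSA with tiny hardcoded primes,
# illustration only -- NOT Brave's actual implementation.
import hashlib
import secrets
from math import gcd

# Hardcoded demo primes (far too small for real use).
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

def h(msg: bytes) -> int:
    # Hash the confirmation message into Z_n.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. Client blinds the token so the server never sees it.
token = b"ad-view-confirmation-nonce"
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. Server signs the blinded value (it learns nothing about `token`).
blind_sig = pow(blinded, d, n)

# 3. Client unblinds; the result is a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Later, the signature can be redeemed to confirm "this ad was viewed",
#    unlinkable to the signing event because the server only saw `blinded`.
print("signature verifies:", pow(sig, e, n) == m)
```

The key property is in step 4: the issuer can verify a token it signed without being able to tie the redemption back to the browser that requested it.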
So then the question becomes, okay, so we've got the advertising.
we've got the privacy model.
How do we incentivize users to move into the system?
Well, eventually when you are like thinking about the economics of this again,
the only way that you can go if you've reached the point of free
is to start paying people and subsidizing them.
So that's where kind of this split model comes into play
is that trying to take into consideration the equity and the alignment of this,
but also taking into consideration the competitive advantages that can come into play. So I guess essentially what it comes down to is this is a natural
evolution of maintaining the open web and doing it in an economic way, but also making it so that
it's user first so that users can choose to opt into it if they want to. And so as soon as you
try and tackle all three of these hard problems at the same time, that's kind of how you end up
on this sort of model. How well do you feel it's working? Because there's tens of millions of users who use Brave, but there's a couple of thousand websites that use BAT for incentivization. If you look at how much you actually get for watching an ad, it's tiny, right? I'm a Brave user. I use Brave as my primary browser, even on mobile, where it's gotten much, much better, by the way. But I've only watched ads for test purposes. You get such tiny amounts, like 0.005 or something.
What do you think the challenges here are?
I think you're hitting at the heart of the problem.
This is a very tough problem.
So first of all, excellent question.
The way that I would address this is,
I think it comes down to this: who you're able to attract as advertisers determines a big part of how much value you can return to users.
Something that's not well understood
within the advertising space
and within that vertical is that Google basically controls this entire market.
You know, that was a big part of this antitrust case that came into play,
is, you know, how do you create these sorts of restrictions around these natural monopolies and stuff?
And so with that, you come into the question of, like, how do you attract new advertisers
and how do you, you know, make sure that they want to buy these different ad units?
You know, they're used to selling search ads, but when you go in there and you say,
hey, we've got this other type of ad unit that's built directly into the browser.
Are you interested in it?
And then you also come in and say, oh, by the way, we can't give you cohorts of data about
the user because we're just doing that all locally on the browser and we're not sending
that back.
You run into this kind of restriction around capabilities.
And so what we've seen is that there's a certain number of people who really kind of get it.
And then for some of the other companies, it's become a little bit more restrictive. So oftentimes people will complain about, oh, you know, Brave is showing me all this
crypto stuff. I have come to think that a part of that is because the crypto industry really
understands this ad unit and understands the value of it. And so because of it, you know,
when we're dealing with a new tab page advertisement, they're interested in it because they understand
that there's a lot of people within the Web3 space that use Brave as a browser. And so then they're
able to attract people to their products and services based upon those things.
Theoretically, it doesn't have to work that way, though.
I mean, Ford has been able to use these advertisements and been able to expand on it because
they've understood it.
And so in that way, I think it really comes back to understanding that so that you can
drive the value into the ad unit so that you can return more value to it.
Because that's the thing about the design of the BAT token: you know, we're doing essentially a revenue-share agreement with the user. We're taking some of what we earn and we're passing it on to the user.
So as long as the value of that advertisement can continue to go up,
then you can continue to pass that value onto the user.
You think kind of like the crux of the matter is an addressable market that's not quite ready for this yet? Because if I think about myself, in principle, I think I'm a juicy customer.
So say, for instance, I book a holiday or something.
If I were to be able to kind of self-describe myself and say, I'm a 39-year-old German, I'm a mother of four kids, I like going on holidays, and I'm somewhat price-insensitive; I'd like to go somewhere where the sun shines and where I don't have to be on the beach all day, where I can go on a hike or something.
In principle, kind of this is something that's information I would be happy to divulge about myself
if it led to a targeted advertising where I'm not pitched things I would never consider going to.
And B, if I were to see some of the kickback here, that would be even better, right?
How much do you think we can modularize this?
And I'm sure there's probably an AI play here somewhere.
Yeah, yeah.
I mean, you're already seeing it from everybody on the AI space,
but I'll cover that in a second.
I think it's better to talk about like what we're actually doing right now,
which is like essentially we've pivoted more towards this offer wall approach
and being able to display these offers directly to users, to be able to offer discounts for transacting with BAT, and do things like that.
That's been one part of our consideration for this.
We still have the ad units that are in play as well,
but it's basically trying to figure out
how do we add utility to the model
such that it's still coming back to that.
So I think that's the first thing to understand
is like how do you create an ad unit that is more attractive?
And I think what you highlight there is exactly the theory
that I've heard promoted by Doc Searls.
It's a concept called vendor relationship management.
He's had this for, I don't know, at least a decade.
But it's basically to be able to intent cast.
And like what I mean by that is advertise to vendors.
I'm interested in buying this.
Can you compete for my business to offer me a better offer?
So, you know, say for example, I say I'm interested in buying a pair of shoes.
Then Nike can go, hey, I'll offer you a 10% discount for buying my Nike shoes, and then Adidas will go, I'll offer you a 15% discount for buying my shoes.
I love this. It's kind of, it's intent-based advertising. Yes. Yes, it's exactly that.
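As a toy sketch of that intent-casting model: the user broadcasts an intent, vendors respond with offers, and the client simply picks the best effective price. The vendor names, prices, and discounts here are made-up illustrations:

```python
# A toy "intent-cast": the user advertises an intent to vendors, vendors
# compete with offers, and the client selects locally. All values below
# are illustrative assumptions.

# The user's broadcast intent.
intent = {"category": "shoes", "budget": 120}

# Offers returned by competing vendors for this intent.
offers = [
    {"vendor": "Nike",   "price": 120, "discount": 0.10},
    {"vendor": "Adidas", "price": 120, "discount": 0.15},
]

def best_offer(offers):
    # Lowest effective price wins the user's business.
    return min(offers, key=lambda o: o["price"] * (1 - o["discount"]))

winner = best_offer(offers)
print(winner["vendor"])  # Adidas wins with the larger discount
```

Note the direction of information flow: vendors only ever see the intent the user chose to broadcast, not a behavioral profile.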
And so I love the theory behind this. And I think that's kind of the vision of where we're
kind of moving towards with this offer wall approach is being able to do this, but doing it in such
a personalized way that you don't need to advertise your cohort of information the way the FLoC-based approach that was tested out by Google Chrome does.
Essentially, what they're trying to do is they're trying to advertise three different flags
or interests to the site or to the advertiser.
And then the advertiser chooses an ad based upon that.
And the unfortunate reality is that three interests are roughly enough to still single out or isolate who you are and identify you in some sort of way, just based upon the probabilistic bit entropy of this. So, to explain that at a little bit higher
level so that people kind of pick up what I'm saying: if you were to take somebody's name, date of birth, and their zip code, the likelihood of being able to figure out who that person is comes out to something like a 95th-percentile capability, you know, just by taking three pieces of information about them and combining them together. You know, that's just the way the math works.
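The arithmetic behind that claim can be sketched quickly. The population and attribute-cardinality figures below are rough illustrative assumptions, not census data, but they show how a few low-entropy attributes stack up against the bits needed to single out one person:

```python
# Back-of-the-envelope arithmetic for how quickly quasi-identifiers
# combine to single someone out. Population sizes are rough
# illustrative assumptions, not census figures.
import math

world_population = 8_000_000_000
bits_to_identify = math.log2(world_population)  # ~33 bits singles out one person

# Approximate information content (in bits) of common attributes:
attributes = {
    "date of birth (~365 days x ~80 years)": math.log2(365 * 80),
    "ZIP code (~42,000 codes in the US)": math.log2(42_000),
    "gender": 1.0,
}

total = sum(attributes.values())
for name, bits in attributes.items():
    print(f"{name}: {bits:.1f} bits")
print(f"combined: {total:.1f} bits, vs ~{bits_to_identify:.1f} bits to single out one person")
```

Three mundane attributes already land within a couple of bits of uniquely identifying someone, which is why leaking even a short interest vector is risky.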
I think that's a big part of the question. Now, to get to your question about the AI side of things,
I think this is where everybody's still exploring what can actually happen. You know, there's
concepts that are coming into play with advertisers or people who are creating rewards-based
systems wanting to have their ad shown within the actual LLMs and within the chat box and stuff
like that. And so we see advertisements coming into play in that sort of way. One of the
struggles that you're going to run into, though, is that essentially you're going to run into
this RAG model, where you're augmenting the actual chat with context from the marketing data itself.
And so then how do you know that what's actually being presented from the chat interface is not
just marketing material and that it's unbiased in some sort of way? And so I think this is one of
the interesting questions that I'm trying to keep an eye on what people are doing with this,
because, you know, part of what happens when people are leaving search and going to more of these chat LLM approaches is that the potential number of advertisements you can show is actually going to drop over time.
So that may be an opportunity for us, because we're already kind of repositioning the entire advertising market in the first place, because fewer people are visiting sites and, you know, more stuff is being done through the actual LLM and chatbot interfaces.
And in that sort of way, an advertiser is looking at it going, hey, our previous ad units that we were used to being able to buy through Google AdSense and stuff are already being replaced; maybe we should go consider exploring some of these ones that are happening in the browser and the new ones that are coming out through the LLMs.
Yeah, and I mean, we already see this,
that kind of SEO is being overtaken by GEO, so optimizing for generative engines and LLMs.
So, yeah, 100%.
I think kind of like part of the appeal to me is the zero knowledge component.
So in principle, I don't object to being advertised to, right?
So kind of like if I say, okay, I'm here and I want to buy a holiday, I want a direct flight, I want it to be somewhere warmer than here, and I want to be able to see an elephant, in principle, I'm inviting people to send me offers for that.
What I object to is the fact that this goes into a targeting brief about myself.
Okay, this lady is willing to drop a pretty penny on a nice holiday.
So in the future kind of target these things at her.
And I think a lot of this can be solved by zero knowledge technology.
Tell me where you are at there and kind of where you think this is going to go.
Yep.
I think the way that we've constructed this system works quite well for this
because you don't need to share the information to still get the personalization.
So in concept, essentially what you can do is be able to utilize the interests of things
that are happening within the browser to do that classification and matching all client-side.
So if you have the entire list of potential ads,
and they've already been tagged as related to a particular topic,
then you can essentially still do that intent-based model,
but do it all client-side directly within the browser itself.
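A minimal sketch of that client-side selection, with a made-up catalog and locally inferred interests (nothing here reflects Brave's actual data model):

```python
# Minimal sketch of client-side ad matching: the full catalog is delivered
# to the client, interests are inferred locally, and the selection never
# leaves the device. Catalog contents and scoring are illustrative.

# Full tagged ad catalog, as it might be pushed down to every browser.
catalog = [
    {"id": "ad-1", "topics": {"travel", "hiking"}},
    {"id": "ad-2", "topics": {"gaming"}},
    {"id": "ad-3", "topics": {"travel", "beach"}},
]

# Interests classified locally from browsing activity -- never uploaded.
local_interests = {"travel", "hiking", "photography"}

def pick_ad(catalog, interests):
    # Score each ad by topic overlap and pick the best match client-side.
    scored = [(len(ad["topics"] & interests), ad["id"]) for ad in catalog]
    best = max(scored)
    return best[1] if best[0] > 0 else None

print(pick_ad(catalog, local_interests))
```

Because both the catalog and the interest profile live on the device, the server only ever learns (via the blinded confirmation) that *some* ad was viewed, not which interests selected it.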
So in that way, like, you're just changing the information model first,
such that you don't even have to share the information.
And then the zero knowledge comes into play to do the confirmation aspects
itself. And so I think that's what's more interesting about it is rather than going down
kind of Mozilla's exploration path where they're trying to do all of this inside of trusted
execution environments and still sharing the data itself, modifying the information model is
the best answer here because the best way to guarantee your privacy is to never share the
information in the first place. Yeah, 100%. So you guys have branched into offering a wallet, which kind of also comes hand in hand with having to store that token somewhere, and Talk. So how do you connect all of these dots?
Yeah.
So a big part of it is what are the features that people want to use on the web today?
And how do we build them such that their user first in principle?
Like that is the guiding light between all of our different product features and capabilities
is really how do we do something that focuses on the user.
user and addresses the user's needs. So when you look at the wallet, it's essentially exactly that.
We see that where the web is going is that we need payments. I mean, this isn't new. Like the first
attempt of web payments is like PayPal if you think about it. And then PayPal was kind of the first
generation. And now we've seen that we've got, you know, the Stripes and Blocks (formerly Square) as kind of the second generation. You've also got Google Pay and Apple Pay that fit with this, and now crypto is kind of the third evolution of this.
So really we're addressing the same user need, but we're doing it in a slightly different way.
So that's kind of where that comes into play is, you know, what's slightly different?
Well, it's designed to be more open.
It's a permissionless system by design.
With the current design of, you know, the Stripes and the PayPals and such, one of the biggest headaches that we run into is the compliance factors and the regulations that come into play. Traditionally you've had to extend the banking compliance regulations onto the centralized providers. But when you're doing peer-to-peer payments, the application of the laws just doesn't quite work in the same way. And so that's how the wallet comes into play.
And it's the same thing with Talk: people are inherently social.
They want to get together and meet.
You know, we're on a video call right now.
And so in that sort of sense, it's one of those things where you're wanting to interact with people.
And so how do you make sure that that's happening in a way that is addressing their needs?
So some of the things we built into Talk that were a little bit different: NFT gating, specifically around your community. So, you know, technically an NFT community could basically build a web call where you present your wallet itself to show the capability of logging into the actual Talk call.
So let's say I owned a — what are the well-known NFTs — the CryptoPunks. Let's say I owned a CryptoPunk, and it was meant to be an online meeting of the CryptoPunks.
I could present and sign a message that shows, hey, I own a wallet address that owns a Cryptopunk,
and then I'd be able to have access into that gated community because of the NFTs I own.
And so a lot of this comes back to: how do we build social communities, and build capabilities that users actually want, to do what they're already doing today on the web?
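The gated-call flow Kyle describes can be sketched roughly like this. This is a toy TypeScript sketch, not Brave Talk code: the "signature" and the NFT-ownership lookup are mocked, whereas a real gate would have the wallet sign a fresh nonce (e.g. `personal_sign` or Sign-In with Ethereum), recover the signer address server-side, and query the NFT contract on-chain.

```typescript
// Toy sketch of NFT-gated entry to a call. The ownership table and the
// "signing" below are stand-ins for on-chain lookups and real ECDSA.

type Challenge = { nonce: string; statement: string };

// Hypothetical on-chain state: which addresses hold which NFT collections.
const holdings: Record<string, Set<string>> = {
  "0xalice": new Set(["cryptopunks"]),
  "0xbob": new Set<string>(),
};

function issueChallenge(): Challenge {
  // A fresh nonce per login attempt prevents replaying an old signature.
  return { nonce: Math.random().toString(36).slice(2), statement: "Join the CryptoPunks call" };
}

// Mock signing: stands in for the wallet signing the challenge text.
function signChallenge(address: string, c: Challenge): { address: string; signed: string } {
  return { address, signed: `${c.statement}:${c.nonce}` };
}

function admit(proof: { address: string; signed: string }, c: Challenge, requiredNft: string): boolean {
  // 1. The proof must cover *this* challenge (replay protection).
  if (proof.signed !== `${c.statement}:${c.nonce}`) return false;
  // 2. The proven address must hold the gating NFT.
  return holdings[proof.address]?.has(requiredNft) ?? false;
}
```

So a holder of the gating NFT gets in, anyone else is rejected, and an old signature can't be replayed against a new challenge.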
How do you tie in the wallet with the identity? Because in principle, identity is really touchy.
And privacy on chain is hard and kind of connecting everything then also with kind of this monetary layer.
How do you see that?
To put it quite frankly, I think we're doing it wrong right now in the Web3 space. What I mean by that is: privacy isn't valuable inherently; it's valuable because of what it grants us. It is a means to an end. Privacy is a way to achieve greater control over what you present to other people. I oftentimes refer to that as agency: I had the agency to determine these things. Similarly, you want to be able to
segregate certain aspects of who you are and who you share with people. As an example,
you know, I enjoy golfing. When I go talk with my golfing friends, they don't have a clue about what I do for work. That's not because I can't share that with them; it's because it's not a point of common interest. And so in that sort of way, it's just kind of rebuilding the capabilities that we
already have today within normal life digitally. And I think that's where we get this wrong is that
today on, you know, in the Web3 space and on chain, we're actually assuming that everyone wants to
share everything about them. The reality is, though, there's a lot of dangers to that. And also,
that's not very well aligned with kind of just how things work. One of the things I love to point to
is, you know, the business aspects of this. If I'm a business and I'm receiving payments from all
of my different customers, I don't want it to be known, like, basically my real-time revenue data
on-chain. Like, that's financial intellectual property that I just don't want
shared. So if the chain shares that by default, that's a problem and I'm not going to want to use it.
Similarly, I don't want my competitors to know who I'm buying my supplies from.
You know, if I'm buying from a service provider, or I'm buying ingredients for a restaurant or something like that, and I'm getting a certain price because I've negotiated wholesale discounts, I don't want that to be public information. That's my competitive advantage: being able to produce at a cheaper rate. Similarly, you know,
if I'm paying my employees, maybe I don't want the employees to know how much I'm paying the other
people that are working there. So this is all on-chain data that's existing. And as long as you can
link that data back to some real world person, then you're basically creating a problem for yourself
in some sort of way. And that's the part that we can't really understand today because it's an
emergent property of the data that exists. So that's where it becomes a headache. Like,
looking at the Web 2 space as an example in the advertising model, today we talk about data.
Well, what is that actual data? Well, it's behavioral data. We want to know what people are using,
what YouTube videos they're viewing. We want to know which sites they're visiting. That's what a
third-party cookie is doing. So we're building these behavioral profiles of these people and then
taking that behavioral profile and then using that in order to target them with
advertisements.
How would that work in the Web 3 space?
Well, essentially what people would do is not just the behavioral data, but also the
economic data.
So I know where you're spending your money, not just what you're viewing.
And I'm taking that information and I'm building a profile around you.
And then I'm going to airdrop you an NFT as an advertisement.
That's the world we're creating if we don't do stuff as a private by default system.
100%. The transparency that we have on blockchain is not compatible with consumer behavior that I would want to endorse at all. How do you fix it?
I've got some ideas.
I can't claim that they're perfect, but they make sense to me.
And I think it's time that we start experimenting with them. Some of the things that I think should be done: within the wallet itself, I think the
wallet needs to act essentially as a user agent on behalf of the user to provide those privacy capabilities in the same way that our browsers do today. So our browser doesn't advertise our browsing
history to every single site that we visit. The browser makes certain choices in order to try and
protect the user on their behalf. I think it's the job of the wallet to be doing the exact same thing.
So protocols are getting built to try and do these sorts of private transactions. But rather than expecting the user to show up at the Privacy Pools protocol, make sure they're swapping all their transactions into Privacy Pools themselves, and then jump over to the next site so they can actually spend the money — jumping through three different sites — that should really be the responsibility of the wallet: integrate those features directly so that it's just happening on the user's behalf.
Why does this matter?
Well, when you start factoring in things like what's happening with the x402 protocol and wallet pay — basically the ability to transact automatically through microtransactions — all of that is going to be linking your browsing history on chain in some sort of way.
So say, for example, I show up at a news article about some topic and I pay for it. Well, that's being advertised on chain, because I made a payment.
Then I show up at the next website and I do the exact same thing and then the next site
and the next site and the next site.
All of a sudden, the behavioral profile that we just spent ten years trying to get rid of with third-party cookies is now on chain permanently, for the rest of time, unless the user figures out how to avoid the footgun by switching accounts every single time and carrying that burden themselves.
So going back to that conversation that we had a little bit earlier of,
like privacy works when it's convenient, that's where I think it's the wallet's responsibility
to solve these problems for the user.
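The linkability problem Kyle describes can be made concrete with a toy model. The addresses and site names below are hypothetical; the point is just that on a transparent chain every payment is public, so one reused payer address lets any observer reconstruct the reading history that third-party cookies used to capture.

```typescript
// Toy model of on-chain payment linkability.
type Payment = { from: string; to: string }; // both fields are public on-chain

// What any chain observer can compute: every site this address has paid.
function profileOf(payer: string, chain: Payment[]): string[] {
  return chain.filter(p => p.from === payer).map(p => p.to);
}

// Three article micropayments from the same account...
const publicLedger: Payment[] = [
  { from: "0xreader", to: "news-site-a" },
  { from: "0xreader", to: "news-site-b" },
  { from: "0xreader", to: "news-site-c" },
];
// ...yield a permanent behavioral profile. Rotating to a fresh address per
// payment, or routing through a privacy protocol, is what breaks the link.
```

Here `profileOf("0xreader", publicLedger)` recovers all three sites, which is exactly the wallet-level leak a privacy-preserving wallet would have to prevent by default.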
A lot of these problems can already be addressed.
It's a lot harder to do so with smart contract wallets, which are preferable for other reasons, right? If you have an EOA, Privacy Pools and the like are much simpler than with a smart contract wallet, but then you lose recovery. And I mean, in principle, on a smart contract wallet, you would want to have it such that the private key never actually leaves the enclave. You don't have to carry seed phrases from one place to the next; you just add another signer.
How do you see this from an IT architectural standpoint?
Yep.
I think again it comes back to: what do you expect the user to know? Is the user expected to understand smart contract logic? No — you can't expect any user to understand that, right? For the user, it just has to work. But as the system architect, you have to think about it: can I still have recovery in some sense with an EOA, or do I go smart contract and then deal with all the headaches that I have with Privacy Pools and the like?
And where, as an IT architect, where would you fall on that spectrum?
I think more of it needs to be done on behalf of the user by default, but they have the option
to opt out when necessary. I think that's kind of the user first principle that I would take.
So as an example of how I think about this today, as a toy: say I wanted to send money to somebody privately, from one chain, with one token. Let's say I have USDC on Ethereum mainnet and it needs to arrive as USDT on Solana. That introduces a lot of complexity. So how do we make this work? Because, A, we're talking about two different smart contract systems. We're talking about a swap and a bridge that need to occur. We're talking about different tokens. And we're also taking into consideration the fact that the site, which is probably an e-commerce website, really doesn't have Web3 expertise, so they're not going to be formatting that transaction for you in the first place. So how do you make that work? The way I think about it today is that the abstraction at the RPC endpoint of the wallet — window.ethereum.request, where the site actually makes the request — needs an intent API built into it. We need to be developing high-level APIs where the site can say: hey, send me $3 on this chain. And maybe I offer three different options instead, in the same way that a POS terminal says: I'll take Visa or Mastercard, I don't really care, just pay me the money. So you do that.
And under the wallet is where all the complexity is hidden. The wallet takes that RPC call that's
being made to it. And it goes: great, I can build an EIP-5792 batch transaction that performs a swap and a bridge all at the same time, and performs it through a privacy protocol so that it comes out on the new chain in the proper currency at the proper amount, and it just ends up at that address. And then you send back, as a response to that async call that came in from the site: here's the transaction hash on the Ethereum chain and here's the transaction hash on the Solana chain, so that they can verify that the transaction actually occurred. And once that happens, they continue through the rest of their business flow.
In that way, like the site is doing what it's an expert at, which is selling goods and services.
It doesn't have to care about the design and formatting of transactions that are being sent to the wallet.
And, you know, the wallet is acting as a responsible participant within this system, helping the user make sure they understand what they're actually doing when they send this money to the site at the end of the day.
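The intent-API idea above can be sketched as follows. To be clear about assumptions: the method name `wallet_payIntent` and all of the request/response shapes here are hypothetical, not an existing `window.ethereum` method; EIP-5792's `wallet_sendCalls` is the real batching primitive a wallet might build such an API on.

```typescript
// Hedged sketch of an intent API at the wallet's RPC boundary.
type PaymentOption = { chain: string; token: string };
type PayIntent = { recipient: string; amountUsd: number; accept: PaymentOption[] };
type Receipt = { chain: string; txHash: string };

// Toy wallet: assume it holds funds on Ethereum, picks the first option the
// site accepts, and hides any swap/bridge/privacy routing underneath.
function handlePayIntent(intent: PayIntent): Receipt[] {
  const choice = intent.accept[0];
  const receipts: Receipt[] = [];
  if (choice.chain !== "ethereum") {
    // Source leg: in reality a batched swap + bridge (e.g. via EIP-5792),
    // ideally routed through a privacy protocol before the funds land.
    receipts.push({ chain: "ethereum", txHash: "0xsource" });
  }
  // Destination leg: funds arrive at the recipient in the requested token.
  receipts.push({ chain: choice.chain, txHash: "0xdest" });
  return receipts; // the site verifies these hashes on each chain
}
```

The design point is the division of labor: the site only states the outcome it will accept and verifies the returned hashes; all chain-specific complexity lives in the wallet.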
Yeah, that makes a lot of sense.
Do you think we'll continue seeing browser plug-in wallets in the future? Because to all my non-Web3 friends, browser plug-ins seem really dodgy as ways of keeping money.
I think it will happen until the browsers decide to get into this layer.
What I mean by that is part of the reason that Web3 continues to exist is because
Web 2 has not taken an interest in this space. Google and Apple are not playing in the crypto space. They might have small teams operating on this stuff, but stablecoins might change that — you know, the new legislation that's coming into play. They've been very averse. But in the same way that Google watches what we do and sometimes copies some of our ideas, the same thing is happening across the entire Web3 space: when they feel like there's a legitimate use case where they
can make money within that business, they'll get into it. In the same way that you're seeing,
you know, Stripe, who started in the Web 2 payment space and decided to get into Web 3
payment space, they're doing the same thing. Because they see advantages or they see that, you know,
Web 3 is starting to eat their lunch in some sort of way. And so they want to make a competitive
play to try and stop those sorts of things. So I do think that the browser serves a role within this.
And I think this is why Brave is kind of early to that game of showing what's possible, especially when you start looking at, you know, the x402 design, where you actually need to understand the HTTP headers.
Well, an extension doesn't have the easiest capabilities there. And part of our design principles for this is: we're not going to sit in the middle and intercept every single communication. Technically we could do that with extensions today. We could be intercepting every window.ethereum request and making sure that our wallet appears instead of MetaMask. We've intentionally not done that, because that's not user first.
That goes against our principles. So I do think that there is a role that browsers play within this, whether that's simply at the security and key management layer, so that competition can exist between the different extensions, or whether they decide to vertically integrate further and take an approach more like what Brave has done: building it directly into the browser to offer better security guarantees, because we can operate within the browser process and have a greater level of control over what happens with the memory and such.
So I think it'll happen,
but I don't know when.
Okay.
Brave is built on Chromium, right? Like most modern browsers. Do you see risks in relying on a Google-maintained codebase here?
No, because we have some advantages there; we evaluated the different aspects that came into play. I don't know if people know this, but originally — I believe this was before I joined — we started on Gecko, tried it, and realized there were actually limitations to building on the Gecko engine, which, for people who aren't familiar, is Firefox's engine. And we pivoted away from that specifically because the Chromium model moves much faster;
they integrate a lot more features.
And part of the trouble within the browser space is that most web developers typically build their websites to work compatibly only with Chrome itself.
So the fact that it is an open source code base and we can go in and modify it as we wish,
such as turning off flags or adding new features directly ourselves, allows us kind of a competitive
advantage to be able to play within this space so that we can focus on the things that we think
matter most to users while being able to easily just toggle things on and off when necessary.
So I think it's actually more of an advantage than a disadvantage. And if Google decided to close the Chromium source, it's not going to impact just us; it's going to impact Edge — Edge is built on this as well. So just imagine Microsoft and Google fighting that one out. I think the likelihood of that actually happening is very, very small, because the way Chrome works anyway is that they build the open-source Chromium layer and then a closed-source Chrome layer on top of it.
Another danger that looms is from the regulatory side. I don't know how much you're in the loop with European politics, but here we have the chat control vote that's happening — well, actually, it will have happened by the time this comes out. It happens October 14th, and Germany is currently the deciding factor. And basically, for everyone who doesn't know, it would force any application to have a backdoor, to be able to give access to unencrypted messaging. So, I mean, obviously that raises really tough questions for the open internet. And as always, it's done under the guise of, I don't know, child protection and whatnot. And whenever you say, I'm pro-privacy, people go, well, do you want kiddie porn? It's like, no, it's not about kiddie porn. It's about normalizing privacy. So how do you see Brave's role in helping users navigate these challenges?
Yep.
I think it comes back to user first by design.
And I think this is where a lot of these things involve tough tradeoffs: what happens when a user actually is malicious? What happens when a user actually is a criminal? You know, and who's responsible for taking care of that?
Ultimately, we're getting down to the deeper philosophical questions.
And what I actually like to do rather than focus on the here and now moral crisis of today
is actually look at the history of the past.
I think it's much easier to detach ourselves when you look at history
and be able to understand what's happened before
to understand how we can apply those lessons to today.
So the thing that I like to cite is actually the Inquisitions.
So the Inquisitions were basically the Catholic Church deciding what was considered within the boundaries of the religion. And as part of that, the Inquisitions allowed a set of Inquisitors to censor concepts within the different nations they had influence in. That lasted for 700 years. So, you know, we just, in theory, got done with a much bigger problem — one that ended around the 1800s, spanned numerous generations, and modified life as we see it today. The example I like to point to is Galileo, who was right, by the way, about heliocentric theory, and was declared a heretic by the Catholic Church.
What actually happened with that?
Well, Descartes became concerned about the impact of this,
and so he modified his mind-body theories because of it.
So what lessons can we learn from this?
The lesson today is our concepts of the mind, psychology, mental health,
all of these things were modified because of the censorship that occurred previously.
And this is what happened 300 years ago.
There is kind of this butterfly effect that occurs: had Descartes been able to actually publish his real theories, we might have a completely different historical record because of this.
And so this is the impact of censorship that comes into play today.
What I like to point to is actually the historian I read all of this from — I didn't do this research myself. Ada Palmer has done an excellent job of documenting all of this, and it's really helped inform my thinking, particularly because, beyond the history and its impact, she also points out what we can take away from it.
And so one of the things she points out is that most censorship isn't actually effective at scale. The Inquisitors couldn't censor every single book; they couldn't cross a line through every single copy once the printing press came out. Instead, what they did was use an enforcement model of scaring people away from participating — or, in other words, what we call the chilling effect today — to create censorship through self-censorship. That's the exact same thing we'll see with these chat controls and age verifications and things of that nature. The goal isn't to scale the censorship model in the same way — though it certainly will scale better than somebody individually crossing out lines, and the algorithmic capability to censor is something to worry about as well. But ultimately it comes back to: how many Descartes are we going to have, where people choose to self-censor their ideas because they're worried they don't sit within the Overton window of today, even though they might fit within the Overton window of the next 50 years?
Yeah, that's a tough discussion to have, isn't it? If you look at the public discourse over the last, say, 10 or 15 years, it feels like it's become a lot more divisive, and it relies a lot on self-censorship — things that are, in my eyes, completely within the realm of what someone should be allowed to say are in some way suppressed by society. What's your view on this? And how do we revert this development?
Yeah. So to get us back on topic, I'll tie it back to how it
impacts the web.
Thank you.
You're welcome.
So if we consider the thesis that Facebook originally started with, and most social media
platforms started with, it's meant to be kind of the town hall of basically everybody
participating and sharing their ideas.
So if it's meant to be speech, who gets to decide what is said? Ultimately, it comes back to an inherent social contract that exists between each of us as we communicate. You know, we all set boundaries. When somebody talks about something, we may say, hey, I'm not interested in that conversation, or we may change the topic. How do we do that online? That's really what it comes back to.
What tools are we putting in the hands of people in order to do that on social media
today and who has the power to enforce those sorts of tools?
I think Tim Berners-Lee in his new book actually highlights this very well. One of the problems is that the engagement model and the business model of a lot of these social media sites are designed in such a way that they create what are essentially classification filters — or, as he refers to them, collaborative filters. What that means is they build a profile on me, see what interests me and what causes me to engage, and then present that content to some other user.
So how, as a user, do I get to decide when I've had enough of that content, and enforce that? That can either happen in a centralized way — I can go to Facebook and say, you know, disengage me from these topics; I can manually go through and dislike things and try to modify my algorithm in some way. Or — and this is an idea I've been exploring, very similar to what Bluesky moderation lists are doing — could you utilize ad-blocking lists, or some sort of filter lists or moderation lists, to automatically block this content client-side, in such a way that the algorithm sees it as: well, the user didn't engage with this, so I'll stop showing them that content?
And then what happens is you naturally create more unification within the collaborative filters on these different sites, in a way that prevents the divisiveness, because it creates a feedback loop of more collaborativeness as people choose to disengage from divisive content.
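The client-side moderation idea above can be sketched as a simple feed filter. The rule format here is a toy of my own, not Bluesky's or Brave's actual list syntax: anything matching a subscribed list never renders, so an engagement-driven ranker records zero engagement with it and, over time, stops recommending that content.

```typescript
// Sketch of client-side moderation in the spirit of ad-block filter lists
// and Bluesky moderation lists (toy rule format, not any real list syntax).

type Post = { author: string; topic: string };
type ModerationList = { blockedTopics: Set<string>; blockedAuthors: Set<string> };

// Build a feed filter from a subscribed list; applied before rendering,
// so blocked posts never produce engagement signals.
function makeFilter(list: ModerationList): (feed: Post[]) => Post[] {
  return feed =>
    feed.filter(p => !list.blockedTopics.has(p.topic) && !list.blockedAuthors.has(p.author));
}
```

Subscribing to a community-maintained list of divisive topics would then trim the feed entirely on the client, which is what keeps the mechanism under user control rather than the platform's.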
I like this in principle. I think the fact of the matter remains that people like the divisiveness of it, right? Some people relish it, because then you clearly know you're on the right side. One measure that I've always been a fan of is forcing big social media companies to offer an API for other people to come in and lay alternative sorting algorithms over the top. Take Twitter: Twitter basically has two settings. One is the For You tab — and I don't know whether it's different for anyone else, but for me that's basically TikTok-style videos, the kind of content you have to look at because it moves, but you hate yourself for it. So I don't look at that tab. And then the other tab is Following, and that one basically shows you everything. And you want some sort of moderation there, because I don't want to know every time Reuters publishes an update on something; 98% of them are not relevant to me and I do not want to be shown them. So having some modularity here — plug-and-play systems where I say, okay, I get to choose from these algorithms, or I can pay for a premium one, or whatever. I think this is something where regulation can really come in and do something useful for people, because social media platforms are naturally emergent monopolies, right? You can't say, okay, we'll split up Twitter the way you did with, I don't know, United Fruit; you can't say you get this part of the country and you get that part of the country. It doesn't work, because it's just much better if more people are on it than if you have two smaller ones.
Where do you fall on that?
I think doing it at a regulatory level likely just shifts who decides the algorithm from private to public services.
And so then you're back to the question of, do you trust the public institutions more than the private institutions?
Oh, no, no. I want other people to be able to offer these. I want to be able to plug in the Kyle algorithm for Twitter, and then I get the Kyle feed, prioritized by whatever your filters are. I don't want governments to take over this part at all. I just want big social media companies to have to open up this API.
Yeah, that makes sense.
I don't know, are you familiar with Brave Goggles within our search engine?
No, I'm not.
So we do exactly this. We have the ability to do what's called boosting and downranking of different URLs within the search engine itself, so that you can actually modify your search results. Some people use them, some people don't. There are lists being built; we put out some public lists as well. But this is exactly that, because in our view, this is user first. This is how you use a search engine so that you can see what you want to see. The best way to explain it: you can subscribe to a Goggle that's for the left, and then you could turn that Goggle off and turn one on for the right. So you can view political content through different biases, but you see it in a transparent way — you know what the results of the engine are, and you get to control it.
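For readers who want to try this: a Goggle is a small filter file of URL patterns and ranking instructions. The snippet below is illustrative only — the site names are made up, and it only loosely follows the published Goggles quickstart syntax — so check Brave Search's documentation for the exact directive set.

```text
! name: My reading preferences
! description: Boost sources I trust, bury the rest (illustrative example)
$boost=3,site=journal-i-trust.example
$downrank=2,site=content-farm.example
/sponsored/$discard
```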
I should check this out. That's very cool. Now, what if it were forced on everyone?
So this is where I am worried about that. I worry that when governments step in and try to force this kind of requirement, there's this subtle pushback that occurs from these different tech companies, depending on whether they're actually willing to participate or not.
Look at how Apple has chosen to engage with Open Web Advocacy and some of the regulations around the requirement to open up WebKit and such. Apple has been maliciously compliant in the way they've done it: they've met their compliance requirements, but they've done it in such a way that it doesn't actually give us, as users, the capability to decide what we actually want. And that's because their business interests don't align with it.
Similarly, you know, what you have to take into consideration is: how do you set the defaults? How do you choose these sorts of things for users? Because the power of defaults still exists. That's the whole reason the default browser-choice screens within the EU have worked well and have helped Brave: no longer can Google or Apple say you have to use Chrome or Safari on mobile devices. Now we have these consent screens — a user choice screen where people can choose their browser.
So I think the same problem arises with algorithmic decision-making: who do I delegate that responsibility to? And you're going to see this natural dynamic where some lists are going to be excellent. You know, maybe I'm the dedicated person who just likes modifying my algorithm all the time, and I do it for my own needs, and then other people come to me and I say: sure, I'll help you. I'll build it for you.
But then what happens when a business steps in and goes: hey, I'm going to be a full-time operation that just builds moderation lists, you know, to protect safe-search content, and then I'll sell that to parents? Then it's no longer about trying to help people regulate what their kids see; instead it's: hey, as a parent, you have to pay for this. And you're probably not going to do that, because you're choosing between buying your kids milk for the week and buying the moderated TikTok algorithm they're seeing behind the scenes — and they'd probably rather eat before having access to TikTok, even though TikTok is where they spend more of their time. So I'm concerned that it sets up perverse incentives. But in principle, I fully agree that we need algorithmic transparency and we need user choice. I'm just not sure how you do it in a way that aligns business incentives with that without going down the regulatory approach and introducing secondary effects that we just don't understand yet.
If you look into the future for Brave — say you look ahead five years — what would success look like for Brave?
I think a big part of it is user growth. We need to continue to see that user growth, because the more users you have, the better the services we're able to provide, and the more engineers we can have on hand to build more capabilities. So continuing to grow in that sense, I think, is a big part of it.
Diversification of the different products and services that we see generating revenue. Today we have premium services — you know, Brendan posts this on Twitter every month when we publish our growth numbers. We've got premium services, we do crypto deals, we've got search ads; we've got many different lines of business. And being able to expand in those different ways helps us grow as a
business so that we can focus on, you know, the user's needs. And then with that as well, being able
to utilize that user growth to be able to go influence what's happening within the tech standards
and those sorts of things today. So, you know, one of the things we are very acutely aware of is that having more users allows us to step into the W3C and argue for different sorts of things. Google, because they own a natural monopoly within the browser space, also own a natural monopoly on the direction the web goes. We can choose to turn off features if we want, but the reality is, if Chrome ships it, that's what web developers build against.
So being able to continue to grow and continue to get users using our stuff
allows us to advocate on behalf of the user more to push back against the way that
Big Tech is working today and kind of force a new competitive nature to exist so that, you know,
we don't need regulators to step in.
Instead, we can actually use just kind of the natural marketplace to be able to compete and force Google to have to change how they act.
Yeah.
And personally, what are you excited to be working on next?
For me, I love that the stablecoin stuff came about. I want private payments. That is the thing I care about most: private payments on an open protocol. As Zooko put it best on Twitter: permissionless private money, like cash. I want that to be the default for all Web3 stuff. And I want to take the Web3 stuff back to meet the Web2 people where they're at, show them the capabilities, and create a competitive advantage, so that the natural credit card duopoly we have today with Mastercard and Visa is forced to compete against the crypto rails.
So to me, that's where I want to spend some time.
I'm also very interested in what's happening with social media: how can we help build these things, acting as a user agent, to improve them? So I'm keeping an eye on what's happening at Bluesky — some of the new challenges they're facing, how they're approaching the problems and trying to reinvent things — because those are the two big problem spaces that intrigue me, and I think they'll have the biggest impact on users over the next decade.
Roger that. Famous last words. Thank you so much for coming on, Kyle. It's been a pleasure.
Yes. Thank you very much. I appreciate it.
