The Jordan B. Peterson Podcast - 460. AI, Internet Scams, and the Balance of Freedom | Chris Olson
Episode Date: July 1, 2024. Dr. Jordan Peterson sits down with cybercrime expert and CEO of The Media Trust, Chris Olson. They discuss the key targets for cybercrime, dating and other online scams, what legislative measures for internet safety might look like, and the necessary sacrifice major companies need to make for a better digital ecosystem. Chris Olson is the CEO of The Media Trust, a company founded with the goal of transforming the internet experience by helping technology and digital media companies create a safer internet for people. Under his leadership, the company invented the world's first digital data compliance, Children's Online Privacy (COPPA), and website/mobile-app malware scanning technologies. Through infrastructure in 120 countries, The Media Trust protects billions of people every month from digital attacks. Fortune 10 to hundreds of small and medium-sized tech and digital media companies leverage The Media Trust to protect their customers from digital harm and unwanted data collection. - Links - For Chris Olson: Website https://mediatrust.com/ The Media Trust Social Media: LinkedIn https://www.linkedin.com/company/the-media-trust/ X https://x.com/TheMediaTrust Chris' Social Media: LinkedIn https://www.linkedin.com/in/chrisolsondigitaltrustandsafety/ X https://x.com/3pc_chrisolson?lang=en
Transcript
Hello everybody.
Today I have the opportunity to speak with Chris Olson, who's CEO of the Media Trust
Company. His company occupies the forefront of attempts
to make the online world a safer place.
He mostly works with corporations to do that,
mostly to protect their digital assets.
But I was interested in a broader-ranging conversation
discussing the dangers of online criminality in general.
A substantial proportion of online interaction is criminal.
And that's particularly true if you include pornography
within that purview, because porn itself
constitutes about 20 to 25% of internet traffic,
but there's all sorts of criminal activity as well.
And so Chris and I talked about, for example, the people who are most vulnerable to criminal
activity, which includes elderly people who are particularly susceptible to romance scams
initiated on dating websites but then undertaken off those sites,
and also to phishing scams on their devices
that indicate, for example,
that something has gone wrong with the device
and that it needs to be repaired, in a manner
that also places them in the hands of criminals.
The sick and infirm are often targeted
with false medical offers.
17-year-old men are targeted with offers
for illicit drug purchases, and juvenile girls
of 13 or 14 who are interested in modeling careers, for example, are frequently targeted
by human traffickers.
This is a major problem.
The vast majority of elderly people are targeted by criminals on a regular basis.
They're very well identified demographically. The criminals know their ages,
they know where they live, they know a lot about their online usage habits, and they have personal
details of the sort that can be gleaned as a consequence of continual interaction with the
online world. And so I talked to Chris about all of that, and about how we might conceptualize this as a society
when we're deciding to bring order to what is really the ultimate borderless
Wild West community. And that's the hyper-connected and possibly increasingly pathological
online world. Join us for that.
Well, hello, Mr. Olson.
Thank you for agreeing to do this.
We met at the Presidential Prayer Breakfast not so long ago
and we had an engaging conversation about the online world
and its perils and I thought it would be extremely interesting
for me and hopefully for everyone else to engage in a serious conversation
about, well, the spread of general criminality and misbehavior online.
And so do you want to maybe start by telling people what you do and then we'll delve more
deeply into the general problem.
Great, yes.
And thank you, Jordan.
Thanks for having me.
I'm the CEO and founder of the Media Trust Company, not intended to be an oxymoron. Our primary
job is to help big tech and digital media companies not cause harm when they
monetize audiences and when they target digital content. So let's delve into the
domains of possible harm. So you're working with large companies.
Can you give us who, like what sort of companies
do you work with?
And then maybe you could delineate for us
the potential domains of harm.
Yeah, so I work with companies that own digital assets
that people visit.
And I think maybe to set a quick premise, cybersecurity is a mature industry
designed to monetize the CISO, the Chief Information Security Officer, generally protecting machines. So there's
a mindset geared to making sure that the digital asset is not harming servers, the company or
government data.
Our difference is that we're helping companies that are in digital. So think big media companies, we're helping them
protect from harming consumers, which is the difference between
digital crime, which is going to target people, and cybersecurity,
which is generally targeting corporates and governments and
machines.
So now, does your work involve protection of the companies themselves
against online criminal activity, or is it mostly aimed at stopping the companies themselves
from, what would you say, mostly, I suppose, inadvertently harming their consumers in pursuit
of their enterprise and their monetization?
Yeah, that's a great question, and I think that's where the heart of the matter is. So our primary
job is to watch the makeup of what targets digital citizens' devices. The internet is made up of
roughly 80% third-party code. And what that means is when a consumer is visiting a news website, when they're checking sports scores, when they're visiting
social media, the predominance of activity that's running on their machine
is coming from companies that are not the owner of the website or the mobile
app that they're visiting. That third party code is where this mystery begins.
So who actually controls the impact on the
consumer when they're visiting an asset that is mostly made up of source code and content
coming from other companies? So our job is to look at that third-party content, discern
what is good and bad based on company policies, based on what might be harming the consumer,
and then inform those companies what is violating
and how they can go about stopping that.
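To make the third-party code point concrete: a page's own markup typically pulls in scripts from many other hosts, and the first step in auditing that supply chain is simply separating first-party from third-party sources. Here is a minimal, hypothetical sketch in Python (the URLs and hostnames are invented; this illustrates the concept, not The Media Trust's actual tooling):

```python
# Hypothetical sketch: classify <script> sources on a page as first- or
# third-party by comparing each script's hostname to the page's hostname.
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class ScriptCollector(HTMLParser):
    """Collects the src attribute of every <script> tag on the page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def classify_scripts(page_url, html):
    """Return (first_party, third_party) lists of script hostnames."""
    page_host = urlparse(page_url).hostname
    parser = ScriptCollector()
    parser.feed(html)
    first, third = [], []
    for src in parser.sources:
        host = urlparse(urljoin(page_url, src)).hostname
        (first if host == page_host else third).append(host)
    return first, third

# Invented example page with one first-party and two third-party scripts.
html = """
<html><body>
  <script src="/app.js"></script>
  <script src="https://ads.example-adnetwork.com/tag.js"></script>
  <script src="https://cdn.example-analytics.net/track.js"></script>
</body></html>
"""
first, third = classify_scripts("https://news.example.com/story", html)
print(third)  # → ['ads.example-adnetwork.com', 'cdn.example-analytics.net']
```

In a real scan, each third-party host would need to be resolved further, since those scripts can in turn load code from yet more domains.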
What sort of third party code concerns
might they face or have they faced?
What are the specifics that you're looking for?
Maybe you could also provide us
with some of the more egregious examples
of the kinds of things that you're identifying, ferreting out, and attempting to stop.
Yeah, so I think putting any digital company
into the conversation is critical.
So we're talking about tech support scams
and romance scams targeting seniors.
That is an epidemic.
If you're a senior and you're on the internet on a regular basis,
you're being attacked, if not daily, certainly every week. That is now a cultural phenomenon.
There's movies being produced about the phenomenon of seniors being targeted and attacked online.
It's teens. So a 17-year-old male is being bombarded with information on how to buy opioids
or other drugs and having them shipped to his house.
If you're a 14-year-old female
and you're interested in modeling,
you're being approached by human traffickers.
The sick and infirm are frantically searching
the internet for cures.
While that's happening,
they're having their life savings stolen.
So our job is to watch that third-party content and code,
which is often advertising.
It's basically a real estate play on what keeps the consumer
active on the digital asset to find that problem
and then give it back to the company.
I can jump in quickly on how we go about doing that.
So we become a synthetic persona.
We've been doing this for not quite two decades,
but getting on 19 years.
We have physical machines in more than 120 countries.
We know how to look like a senior citizen,
a teenager, someone with an illness.
And then we're rendering digital assets as those personas,
acting more or less as a honeypot to attract the problem that's coming through
the digital supply chain, which runs on our devices.
And I think that's gonna be a key part
of this conversation as we go.
Most of that action is happening with us.
And so it's difficult for tech companies
and media companies to understand fully
what's happening to us.
That's the point of their monetization, right?
That moment in time.
So our job is to detect these problems
and then help them make that go away.
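The synthetic-persona approach described above can be caricatured as follows: build a profile that looks like a vulnerable user, then observe which targeting rules fire against it. The sketch below invents both the persona attributes and the campaign rules purely for illustration; real ad-targeting and scam infrastructure is vastly more complex:

```python
# Hypothetical sketch of the persona/honeypot idea: define synthetic
# audience profiles, then check which (made-up) targeting rules would
# fire against each one. All names and rules here are invented.
from dataclasses import dataclass, field

@dataclass
class Persona:
    age: int
    interests: set = field(default_factory=set)
    region: str = "US"

def matching_campaigns(persona, campaigns):
    """Return the campaigns whose targeting rules match this persona."""
    hits = []
    for name, rule in campaigns.items():
        lo, hi = rule["age_range"]
        if lo <= persona.age <= hi and rule["interests"] & persona.interests:
            hits.append(name)
    return hits

# Invented example campaigns, including one mimicking a scam's targeting.
campaigns = {
    "tech_support_scam": {"age_range": (70, 95), "interests": {"news", "health"}},
    "sneaker_ad":        {"age_range": (13, 30), "interests": {"sports"}},
}

grandmother = Persona(age=85, interests={"news", "gardening"})
print(matching_campaigns(grandmother, campaigns))  # → ['tech_support_scam']
```

The honeypot value comes from the asymmetry: the scam content only renders for profiles that look like its victims, so a scanner has to *be* the grandmother, not merely crawl the page.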
Right, okay.
So, okay.
So you set yourself up as a replica
of the potential target of the scams
and then you can deliver the information
that you gather about how someone in that
vulnerable position might be interacting with the company's services in question to keep
the criminals at bay.
Let's go through these different categories of vulnerability to crime that you described.
I suspect there's stories of great interest there.
So you started with scams directed at seniors.
So I've had people in my own family
targeted by online scammers,
who were in fact quite successful at making away
with a good proportion of their life savings in one case.
And I know that seniors in particular,
who grew up in an environment of high trust, especially with regards to
corporate entities.
They're not particularly technologically savvy.
They're trusting.
And then you have the additional complication, of course, in the case of particularly elderly
seniors that their cognitive faculties aren't necessarily
all that they once were.
And they're often lonely and isolated too.
And so that makes them very straightforward targets
for especially people who worm into their confidence.
You talked about, was it romance scams on the senior side?
It is romance scams on the senior side.
Okay, so lay all that out, tell us some stories
and describe to everybody exactly what they would see
and how this operates.
Okay, so a senior is joining a dating website
just as a teenager or someone in the middle age would do,
they're looking for romance.
There are people on the other side of that that are collecting data on that senior potentially
interacting with them.
Once they get enough information on that particular senior, they're going to start to find them
in other ways.
"Send me emails and information. Let's move off the dating site."
They're going to start calling them on the phone.
As that starts to evolve, it's that information collection, getting them to do certain things
online that sucks them deeper and deeper in.
From that moment forward, they become very much wed and emotionally oriented towards
that person that they're involved with, and the theft goes from there.
Right. So you go on a dating website,
as say someone in your 70s,
you're lonely and looking for companionship.
There are scam artists that are on those dating websites
as well, who must have what?
I suspect they probably have keywords
and profiling information that enables them to zero in
on people who are likely targets.
Do you know how sophisticated is that?
Like, do you think that the criminals
who are engaged in this activity,
how good is their ability to profile?
Do you think they can identify such things
as early signs of cognitive degeneration?
I think this is organized crime and they have their own algorithms and processes to identify
people.
I also, to your earlier point, people believe what they see on computers.
They're following what's being provided to them, which makes them relatively easy marks.
So once that process starts, they're reeling them in.
If they lose a fish, that's no problem
because they're going after so many in any given day.
They also have infrastructure in local markets
to go deal with people personally.
So this is a very large criminal organization
that has a lot of horsepower to identify and then attack.
Right, okay, so do you have any sense? See, I hadn't thought about the full implications of that.
So obviously, if you were a psychopathic scam artist, posing as a false participant
on a dating website would be extremely, potentially extremely fertile ground,
not only for seniors who could be scammed out
of their savings, but you also mentioned, let's say,
younger people who are on the website who might be useful
in terms of human trafficking operations.
So do you have any sense, for example,
of the proportion of participants on a given
dating platform that are actually criminals or psychopaths in disguise?
Let me give you an example.
You undoubtedly know about this, but there was a website, I can't remember the name of
it unfortunately, I believe it was Canadian, that was set up some years ago
to facilitate illicit affairs.
And they enrolled thousands of people,
all of whose data was eventually leaked,
much of that to great scandal.
The notion was to match people who were married secretly
with other people who were married to have illicit affairs.
They got an awful lot of men on the website
and almost no women.
And so they created tens of thousands,
if I remember correctly, fake profiles of women
to continue to entice the men to maintain
what I believe was a monthly fee for the service. Ashley Madison,
it was called. Right. And so obviously, a dating website would be wonderful hunting grounds for
any kind of predator. And so do you have any sense of what proportion of the people who are participating on online dating sites are actually predators, criminals?
I don't know what percentage of the participants on the sites are predators, but where we come in in our expertise is that everyone that is visiting is giving information into sort of the digital ecosystem.
And so the issue from there is that they're then able to be targeted wherever they go
online.
So there's information that's being collected from the site that they're visiting that is
then moving out into the ecosystem so that wherever they go, they're being pulled back
and targeted.
In an example like Ashley Madison, a criminal may be able to get the digital information
about the people whose data was stolen,
come back to them six months later,
coming from another website via email or SMS text,
and then press the attack at that stage.
For us, in becoming a digital persona,
our job is to look like someone based on the information
that sites have collected about them.
So we look like an 85 year old grandmother
living in a senior community.
When you become that type of profile,
no matter who else is engaging with you online,
the algorithm and the content that is gonna be served to you
is coming from criminals, regardless of their activity
on that particular site that you're visiting.
It's simply based on who you are.
So artificial intelligence has been in use
for digital media and targeting people since 2010 or 2011.
So the initial use case was collecting data on us.
That was the key initial step for AI utilization.
The second step was then turning that around
and targeting people better, right?
So AI was first used to collect information, right?
Make things interesting behind the scenes for people.
Second, creating better audience segments
which enable that targeting.
This third phase that's happening today, you see ChatGPT and the LLMs in regular use.
The third big stage is writing content on our devices on the fly.
So regardless of where the criminal actor is,
regardless of how they're moving into the ecosystem
and what initial buying point,
they're able to find that person,
write content on the fly that's particularly tailored
to what the digital ecosystem knows about them,
to create the situation where they then respond
and the criminal activity can occur.
Right, and so what that implies as well then, I suppose,
is that we're going to see very
sophisticated LLM criminals, right, who will be able to, this is the logical conclusion
of what you're laying out, is that they'll be able to engage. So I just saw a video, it's gone viral, released about three weeks ago, that portrayed the
newest version of ChatGPT.
And it's a version that can see you through the video camera on your phone and can interact
with you very much like a person.
So they had this ChatGPT device interacting with a kind of unkempt, nerdy sort of engineer character who was
preparing for an interview, a job interview, and the ChatGPT system was
coaching him on his appearance and his presentation. And I think they used
Scarlett Johansson's voice for the ChatGPT bot. It was very, very flirtatious,
very intelligent, extremely perceptive, and was paying attention to this engineer who was preparing his interview,
like, what would you say,
the girlfriend of his dreams would,
if he had someone who was paying more attention to him
than had ever been paid to him in his life.
And so I can imagine a system like that set up
to be an optimal criminal,
especially if it was also fed all sorts of information
about that person's wants and likes.
So let's delve into that a little bit.
How much of a digital footprint do you suppose,
like how well are each of us now replicated online
as a consequence of the criminal or corporate
aggregation of our online behavior?
So the typical senior, for example, how much information would be commonly available to
criminal types about, well, the typical senior, the typical person, typical 14-year-old for
that matter?
Right. The majority of their prior activity
that they've engaged in online.
So corporate digital data companies'
job is to know as much about us as possible
and then to target us with information
to maximize profit, right?
That's the core goal.
Criminals have access to that data
and they're leveraging it just like
a big brand advertiser would.
So they know it's a grandmother
and they're gonna put in something
that only runs on the grandmother's device,
which makes it very, very difficult
for big tech and digital media companies
to see the problem before it occurs.
I think another thing that's really important to understand is this is
our most open border, right? So we've got an idea of national sovereignty. There's, you know, lots
of discussion on whether or not our southern border is as secure as it should be. Our actual
devices, our cell phones, our televisions, our personal computers, are open to source code and information
coming from any country, any person at any time,
and typically resolved to the highest bidder.
Right, right.
So the digital world, the virtual world,
is it a lawless frontier?
I mean, I guess one of the problems is,
like if I'm targeted by a criminal gang in Nigeria,
what the hell can I do about that? I mean, the case I mentioned to you of my
relative who was scammed out of a good proportion of
their life savings,
that gang was operating in Eastern Europe. We could more or less identify who they were, but
there was really nothing that could be done about it. I mean, these are people who are operating, well, out of any
physical proximity, but also even out of, hypothetically, the jurisdiction of, well,
say, lawmakers in Canada, police services in Canada. And so, how lawless is it?
How should we be conceptualizing the status of law
in the online and virtual world?
Yeah, and I think this is where the major rub is.
So I'm gonna walk back and talk about cybersecurity as an industry first.
So cybersecurity is relatively mature. It is now geared to monetizing the chief security
officer or the chief information security officer. What that means, it's providing products
and services designed to protect what they are paid to hold dear, which is the corporate
asset. So the machines and the data for the
corporation. If you're part of the government, which is where we're going to go in the conversation,
then your job as a CIO or a CISO is to protect government machines. Governments will tell
you that they're protecting you, right? They're protecting you from digital harm. What that
means today is they're protecting your data on the DMV website. That's basically the beginning and the end of cybersecurity and digital protection.
There's legislation occurring, coming from attorneys general, from state legislatures,
and from the federal government in the US to a degree. Other countries seem to be further ahead,
seeking to protect people from data collection. And that's your GDPR in Europe.
Many states in the United States are putting some rules in place around what corporations
can collect, what they can do with the data.
The predominant use case is to provide a consumer with an opt-out mechanism.
Most consumers say, okay, I want to read the content.
They're not doing a whole lot with the opt-out compliance.
So that's not been a big help to your typical consumer.
But it's really the mindset that's the problem
and the mindset of corporate and government
that is at issue.
And so governments need to tactically engage
on a 24-7 basis with digital crime
in the same way that they're policing the
street. So the metaphor would look like this. If grandmothers were walking down
the street and being mugged or attacked at the rate that they're getting hit
online, you would have the National Guard policing every street in America. The
government needs to take a step forward. When I say the government, that is:
governments need to take a step forward and do a better
job at tactically policing on behalf of people.
And that does not mean that they're going after big tech or digital media companies.
It means that they're protecting people with the mindset that they're going to go ahead
and cooperate with the digital ecosystem to do a better job to reduce overall crime.
Right.
So your point appears to be that we have mechanisms in place, like the ones that are offered by
your company, that protect the corporations against the liability that they would be laden with if the data on their
servers was compromised. But that is by no means the same thing as having a police force that's
accessible to people, individual people, who are actually the victims of criminal activity. Those aren't the same things at all.
It's like armed guards at a safe in a bank compared to police on the street that are designed,
that are there to protect ordinary people
or who can be called.
Is that, have I got that about right?
Yes, and digital crime is crime.
So this is when you're stealing grandmother's money,
that is theft.
We don't need a lot of new laws.
What we need to do is actively engage
with the digital ecosystem to try to get in front
of the problem to reduce overall numbers of attacks,
which reduces the number of victims.
And to date, when we think about digital safety,
it's predominantly education
and then increasing support for victims.
Victims are post-attack.
They've already had their money stolen.
Getting in front of that is the key.
We've got to start to reduce digital harm.
I've been doing this for a good number of years, and the end of that conversation does
reside with the local and state governments, and ultimately, the federal government in
the United States is going to have to find resources to actively protect beyond having discussions
about legislating data control or social media as a problem.
Okay, so I'm trying to wrestle with how this is possible,
even in principle.
So now you said that, for example,
what your company does is, and we'll get back into that,
is produce virtual victims, in a sense,
false virtual victims so that you can attract the criminals
so that you can see what they're doing.
So I presume that you can report on what you find
to the companies so that they can decrease
the susceptibility they have to exploitation
by these bad actors.
But that's not the same thing as actually tracking down
the criminals and holding them responsible
for their predatory activity. And I'm curious what you think about how that's possible even in principle.
First of all, these criminals tend to be, or can easily be, acting at a great distance in
jurisdictions where they're not likely to be held accountable in any case, even by the authorities, or maybe
they're even the authorities themselves.
But also, as you pointed out, more and more it's possible for the criminal activity to
be occurring on the local machine.
And so that makes it even more undetectable.
So I can't easily understand,
and you're obviously in a much better position to comment on this,
how even in principle there can be such a thing as,
let's say, an effective digital police force.
Like even if you find the activity
that someone's engaged in,
and you can bring that to a halt
by changing the way the data is handled,
that doesn't mean you've identified the criminals or held them accountable. So what, if anything,
I can't understand how that can proceed even in principle.
So the digital ecosystem is made up of a supply chain, just like every other industry. There are
various steps that a piece of content
is gonna go through before it winds up on your phone.
So it's running through a number of different companies,
different cloud solutions, different servers
that put content out.
Okay, they're intermediaries.
And so a relationship between those digital police
with the governments and those entities on a tactical basis
is really the first step. Seeing crime and then reporting that back up the chain
so that it can be stopped higher and higher up, towards ultimately the initiation point of
where that content is delivered. So it seems fantastic, but it is possible.
The criminals need to use intermediary processes in order to get access
to the local devices.
And so you're saying, I believe, that those intermediary agencies could be enticed, forced,
compelled, or invited to make it much more difficult for the criminals to
utilize their services.
I guess that might actually be effective.
But does that aid in the identification of the actual criminals
themselves?
Because, I mean, that's the advantage of the justice system, right?
Is you actually get your hands on the criminal at some point.
Yes. And I think ultimately it does.
So you have to start and you have to start to build the information
about where it's coming from.
You then have to cooperate with the private entities.
Our digital streets are managed and made up of private companies.
It's not a government-run internet.
All of the information that's fed to us, at least in Western society, is coming from these private companies.
And so I think rather than having an antagonistic relationship between
governments and private companies where they're trying to legislate to put them
into a position, that may be appropriate for certain rules and regulations.
It may be appropriate to raise the age of accessing social media from 13 to 16 or 18.
And that is a proper place for the government to be legislating.
On the other hand, an eye towards reducing crime is critical.
And the ethical and moral mindset among all of the parties, and that's governments through
our corporations, has to be solely on protecting people.
And I think that's something that is significantly missing.
It's missing in the legislation.
It's missing in cybersecurity.
It's not something that we've engaged in as a society.
So there are a few countries and I think even a few states in the US that are looking at
a broader whole of society approach. That whole of
society approach is a mimicking of how the internet and the digital
ecosystem works, which is certainly a whole of society activity, right? So it is
the thing that influences and affects all of us every single moment of every
single day. Engaging in that, looking across the impact of society and doing better via cooperation
is a critical, critical next step.
How often do you think the typical elderly person
in the United States, say, is being first communicated with by criminal agents,
and then how often successfully?
What's the scope of the problem?
The scope is if you're a senior citizen,
in particular if you're a female senior citizen,
roughly 78 to about 85 years old,
we see that two and a half to 3%
of every single page impression
or app view is attempting to target you with some form of crime or influence
that's going to move you towards crime. So it is highly, highly significant.
In some ways, looking at this is shooting fish in a barrel to make a dent.
So you're concerned that the
legal system isn't going to be able to find the criminals. There is so much to detect and stop
and so much room to turn them off quickly, right? That we can gain a significant reduction in digital
crime by working together and considering society as a whole instead of the different pockets and
how can we legislate or how can we try to move a private company
to do better on their own?
Okay, so let's, now let's, okay,
so we talked a little bit about the danger
that's posed by one form of con game
in relationship to potential criminal victims,
and that was senior romance scams.
What are the other primary dangers that are posed to seniors?
And then let's go through your list.
You talked about 17 year olds who are being sold
online access to drugs.
That includes now, by the way, a burgeoning market in
under-the-table hormonal treatments for kids who've had induced gender dysphoria.
So you talked about seniors, 17 year olds
who are being marketed illicit drugs,
14 year olds who are being enticed into let's say modeling,
and people who are sick and infirm.
So those are four major categories.
Let's start with the seniors again,
apart from romance scams, what are the most common forms
of criminal incursion that you see?
The most common form is the tech support or upgrade scam.
And essentially, the internet knows that you are a senior.
When you're going to a website that you and I would visit,
instead of having a nice relationship with that site
and reading the content and then moving on to something else, you're getting a pop-up or some form of information
that's telling you there's something wrong with your computer.
You either need to call a phone number or you need to click a button, which then moves
you down to something else that is more significant.
This is happening millions and millions and millions of times per day.
And it is something that we can all do something about.
Attempting to educate seniors to not listen to the computer when it's telling them to do something is not working.
No, well, no wonder. I mean, look, to manage that, it's so sophisticated.
Because, you know, once you've worked with computers for 20 years,
especially if you grew up with them,
you know when your computer, your phone,
is telling you something that's actually valid
and when it isn't, it doesn't even look this,
a lot of these criminal notifications,
they don't even look right.
They look kind of amateurish.
They don't have the same aesthetic that you'd expect
if it was a genuine communication from your phone.
But man, you have to know the ecosystem
to be able to distinguish that kind of message
from the typical thing your phone or any website
might ask you to do.
And educating seniors, it's not just a matter
of describing to them that this might happen.
They would have to be tech savvy cell phone users.
And it's hard enough to do that if you're young, much less if you're outside that whole
technological revolution.
So I can't see the educational approach.
The criminals are just going to outrun that as fast as it happens.
So yeah, so that's pretty, so 3%, eh? That's a lot. That's about what you'd expect. Yeah, it is highly significant.
And I think getting in front of this problem requires cooperation with states, moving tactically toward the idea of a police force looking at digital. And I think one of the things that both sides, whether it's private companies or states, need to wrap their heads around is that there's going to be a cooperative motion to do better with people in mind.
Yeah.
All right.
So let's move to the other categories of likely victims.
So you talked about romance scams and also computer upgrade, repair, and error scams for seniors. Are there any other domains where seniors are particularly susceptible?
Also, I think what I'd put into context
is a lot of the data collection that results
in people getting phone calls with a voice copy
of their grandchild, right, which then
ultimately is going to result in a scam.
It is that digital connection that is the leading point that drives the ability to commit
those types of crimes.
The ability to marry their grandson or granddaughter's voice with their digital persona,
and then finding a phone number
that they can use to call them.
So there's a lot of action happening
just in our daily interactions
that's ultimately being moved out into the ecosystem
that we have to take a look at.
That is not easy to fix, right?
So that's a-
Right, well, and then you're gonna have, right?
Well, you're gonna have that deep fake problem too,
where those systems that use
your grandchild's voice will actually be able to have a conversation with you targeted to
you in that voice in real time.
And we're probably no more than, well, the technical capacity for that's already there.
I imagine we're no more than about a year away from widespread adoption of exactly that
tactic. So I've been talking to some lawmakers in Washington about such things, about protection of
digital identity. And one of the notions I've been toying with, maybe you can tell me what you think
about this, is that the production of a deepfake, the theft of someone's digital identity to be used to impersonate them, should be a crime that's equivalent in severity to kidnapping.
That's what it looks like to me.
Because if I can use my daughter's, if I can use your daughter's voice in a real-time conversation
to scam you out of your life savings, it's really not much different than me holding her at gunpoint and forcing her to do the same thing.
And so, I don't know, like if you've given some consideration to severity of crime or
even classification, but theft of digital identity looks to me something very much like
kidnapping.
What do you, like, any thoughts about that?
Yeah, for me, I would simplify it a little bit. Using Section 230 or the First Amendment to try to claim that the use of our personal identity to do something online, when it's a crime, doesn't make sense. So if it's being used, we want to simplify this first. We don't necessarily need a broad-based rule on identity first. We simply state that if someone's using this for a crime, it's a crime, and that is going to be prosecuted if you're caught and detected, which then goes back to actually catching and detecting that.
The way that that also-
That uses the pre-existing legal framework and doesn't require much of a move.
But I'm concerned that the criminals will just be able
to circumvent that as the technology develops
and that was why I was thinking about something
that might be a deeper and more universal approach.
I know it's harder to implement legislatively,
but that was the thinking behind it anyway, so.
Yeah, for us, there is a path that leverages that content
to bring it to the device.
And I think understanding that mechanism
and how it's brought forward versus looking at the content,
and I'll give you an example of what's happening in political advertising as we speak.
Understanding the pathway for how that content is delivered is ultimately how we get back
to the criminal or the entity that's using that to perpetrate the crime.
The actual creation of the content is incredibly difficult to stop.
It's when it moves out to our devices that it becomes something that we need to
be really paying attention to.
So in political advertising up to October of this past year, our customers asked us
to flag the presence of AI source code.
So the idea there was they didn't want to be caught holding the bag as the server of AI-generated political content, right? Because that just looks bad in the news. Someone's letting someone use AI, it's going to wind up being disinformation or some form of deepfake. By October, we essentially stopped using that policy, because greater than 50% of the content that we were scanning had some form of AI.
It may have been to make the sun a little more yellow, the ocean a little bit more blue.
But using that as a flag, right, to understand what's being delivered out, once you get over
50%, you're looking at more than you're not looking at.
That's not a good automated method
to execute on digital safety.
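Olson's point here is a base-rate problem: once a marker appears in most of the content, flagging on it stops narrowing anything down. A toy sketch with made-up numbers (my illustration, not The Media Trust's actual tooling):

```python
def review_queue(total_creatives: int, marker_prevalence: float) -> dict:
    """Flag every creative carrying the marker (e.g. AI source code).
    The review queue scales with marker prevalence, not with actual harm."""
    flagged = round(total_creatives * marker_prevalence)
    return {"flagged": flagged, "unflagged": total_creatives - flagged}

# Early on, the flag isolates a reviewable minority of creatives.
early = review_queue(1_000_000, 0.10)

# Past 50% prevalence you're "looking at more than you're not looking at":
# the flag no longer separates anything worth reviewing.
late = review_queue(1_000_000, 0.55)
flag_is_useless = late["flagged"] > late["unflagged"]
```

The prevalence figures are hypothetical; the point is only that a binary flag loses its triage value as soon as it fires on the majority of traffic.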
So as we move forward,
we have a reasonably sophisticated model
to detect deepfakes very much still in a test mode,
but it's starting to pay some dividends.
And unquestionably what we see is using the idea of deepfakes to create fear
is significantly greater than the use of deepfakes. Now that's limited to a political
advertising conversation. We're not seeing a lot of deepfake serving in information or certainly
not in the paid content side. But the idea of fearing what's being delivered to the consumer is very much becoming part of a mainstream conversation.
Yeah, well, wasn't there some insistence from the White House itself in the last couple of weeks that some of the claims that the Republicans were making with regards to Biden were a consequence of deep fake audio,
not video, I don't think, but audio.
If I got that right, does that story ring a bell?
And I think where we are at this stage in technology
is very likely there is plenty of deep fake audio
happening around the candidates.
So whether you're Donald Trump or Joe Biden,
or even local political campaigns, it's really
that straightforward.
I think on the video side, there are going to be people working on it left and right.
I think it's the idea of using that as a weapon to sow some form of confusion among the populace.
Some doubt.
Right.
Some doubt is going to be dramatically more valuable than the actual utilization of deepfakes
to move society.
Oh, that's, you do, eh?
So you do think that even if the technology develops to the point where it's easy to use,
so you think that it'll be weaponization of the doubt that's sowed by the fact that such
things exist.
And we've been watching this for a very, very long time, and our perspective is coming at this from digital crime and safety in content.
Safety in content typically means
don't run adult content in front of children,
don't serve weapons in New York State.
They're not going to like that.
Don't have a couple walking down the beach in Saudi Arabia.
Their ministry of media is gonna be very unhappy
with the digital company that's bringing
that kind of content in.
There's eye-of-the-beholder safe content, drugs and alcohol, right, targeting the wrong kinds of people.
So we look at this from a lens of how do you find
and remove things from the ecosystem?
If we continue down the path that we're on today,
most people won't trust what they see.
And so we're discussing education.
They're gonna self evolve to a point
where so much of the information that's being fed to them
is just gonna be disbelieved
because it's gonna be safer to not go down that path.
I'm wondering if live events, for example,
are going to become once again extremely compelling
and popular because they'll be the only events
that you'll actually be able to trust.
I think so.
I think it's also critical that we find a way
to get a handle on kind of the anti-news
and get back.
The entities promoting trust in journalism, that is a very meaningful conversation and
it is something that we need to try to get back to.
It's much less expensive to have automation or create something that's going to create
some kind of situation where people continue to click.
That's a terrible relationship with the digital ecosystem.
It's not good for people to have that in their hand.
And with the place where digital crime is today, if you're a senior citizen, your relationship
is often net negative with the internet.
You may want to stick to calling your kids on voice over IP
where you can see their face, lots of different ways
to do that in video calling,
but doing other things on the internet,
including things as simple as email,
it may be more dangerous to engage
than any benefit that you're gonna get back.
And I think as we move closer to that moment in time,
this is where we all need to be picking up
and focusing on digital safety, focusing on the consumer.
I think corporates are going to have to engage on that.
Okay, okay.
So let me ask you a question about that,
because one of the things I've been thinking about
is that a big part of this problem is that
way too much of what
you can do on the net is free. Free. Now the problem with free is that, let's take Twitter,
for example. Well, if it's free, then it's 20% psychopaths and 30% bots, because there's no
barrier to entry. And so wherever, maybe there's a rule like this,
is wherever the discourse is free,
the psychopaths will eventually come to dominate
and maybe quite rapidly, the psychopaths and the exploiters
because there's no barrier to entry
and there's no consequence for misbehavior.
So like we're putting together a social media platform at the moment
that's part of an online university and our subscription price will be something between
30 and 50 dollars a month which is not inexpensive although compared to going to university it's
virtually free. You know and we've been concerned about that to some degree because it's comparatively expensive
for a social media network, but possibly the advantage is that it would keep the criminal
players at a minimum, right?
Because it seems to me that as you increase the cost of accessing people, you decrease people's ability to do, well, low cost, you know, multi-person monitoring
of the sort that casts a wide net and that costs no money.
So, what are your thoughts about the fact
that so much of this online pathology is proliferating
because when we have free access to a service,
so to speak, the criminals also have free access to us.
Am I barking up the wrong tree or does that seem...
Does it mean that the internet is going to become more siloed
and more private because of that?
I think it's going to go in two ways.
So one, you will find safety in how much money you spend.
And that's already true.
So when there are paywalls, paywalls within even large news sites, the deeper you go into the paywall,
the higher the cost to reach the consumer, right? Not just coming from the consumer, but even through advertising and other content producers, the lower the activity of the criminal, because it's more expensive for them to do business.
That is true.
Right, okay, okay.
That's been true throughout.
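The cost-barrier logic can be made concrete with toy campaign economics (illustrative numbers of my own, not figures from the episode): a scam campaign is only viable while the cost of reaching each person stays below the expected take per person reached.

```python
def scam_campaign_profit(impressions: int,
                         cost_per_impression: float,
                         victim_rate: float,
                         avg_take: float) -> float:
    """Toy economics of a scam campaign: revenue from the few victims
    reached, minus the cost of reaching everyone. Raising the cost per
    impression (e.g. paywalled inventory) prices the criminal out."""
    revenue = impressions * victim_rate * avg_take
    cost = impressions * cost_per_impression
    return revenue - cost

# Same campaign, same victims: cheap open inventory pays,
# expensive paywalled inventory loses money.
cheap = scam_campaign_profit(1_000_000, 0.001, 0.0001, 500)   # profitable
pricey = scam_campaign_profit(1_000_000, 0.08, 0.0001, 500)   # loss-making
```

All four parameters here are invented for the sketch; the mechanism, not the magnitudes, is the point.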
I think the other requirement,
because we're very acclimated to having free content,
is that the entire supply chain is gonna have to engage.
So when you think through who is responsible
for the last mile of content that's able to reach
our devices inside of our home?
Right, is that the big telcos,
is that the companies that are giving us wifi
and bringing data into our houses?
Right now they're putting their hands up, saying it's not our job to understand what happens to you on your device.
If anything, there's a data requirement that says,
we're not allowed to know,
or we're not allowed to keep track of where you go
and what comes onto your device.
There's a big difference between monitoring where we go
online and what is delivered into our device.
And this is missing from the conversation.
Privacy is critically important, and privacy is about how we engage in our activities on the internet.
The other side of that is what happens after the data about us is collected.
And that piece is not something that is necessarily private.
It should not be broadcast what is delivered to us.
But someone needs to understand and have some control over what is actually
brought in based on the data that is collected.
And that is a whole of society, meaning all of the companies, all of the entities that
are part of this ultimate transaction to put that piece of content on our phone and our
laptop and our TV need to get involved in better protecting people.
One of the primary issues is there are so many events,
trillions of events per day on all of our devices,
that even when you have paywalls,
the problem is so huge
that you can always find access to people's machines
until we get together and do something better about it.
Okay, okay, okay.
So paywalls in some ways are a partial solution, but they're...
Okay, so that's useful to know. Now, do you have... I want to ask you a specific question, then we'll go back to classes of people who are being targeted by criminals.
I want to continue walking through your list. Do you have specific legislative suggestions that you believe would be helpful at the federal
or state level?
And are you in contact with people as much as you'd like to be who are legislators who
are interested in contemplating such legislative moves?
The reason I'm asking is because I went to Washington probably the same time I met you.
And I was talking to a variety of lawmakers on the Hill there who are interested in digital
identity security.
But it isn't obvious that they know what to do because it's, well, it's complicated, you
might say.
It's extremely complicated.
And I think the big tech companies are in some ways
in the same boat.
So do you already have access to people sufficiently
as far as you're concerned who are drawing on your expertise
to determine how to make the digital realm a less criminally rife place?
I would always like more access.
What I find is that the state governments are really where the action is.
And when I say, and they're closer to people, right?
So the federal government is quite far away from, you know, a grandmother or
someone in high school.
The state governments know the people who run the hospitals.
They know people at senior communities.
They understand what's happening on the ground.
They're also much closer, if not managing
overall police forces, right?
So that may be down at the county level
or other types of districts,
but they understand a daily police force.
So I think what we're seeking is to influence states
to take tactical action.
And if that requires legislation,
what that would be is putting funds forward
to police people from digital crime
the same way that they're policing, or helping to police, crime against people in their homes, walking down the street, on our highways, in our banks, the typical type of crime.
We're 20 years in from data collection, data targeting,
third party code, kind of dominating our relationship
with our devices.
It is the one piece that governments really haven't started
to work on a whole lot.
The United Kingdom, on the other hand, has three different agencies that have been given the authority to tactically and actively engage with the digital ecosystem.
So those are the companies that make up the cloud that serve advertising and serve content
that build websites and e-commerce systems.
They're finding problems and then they're engaging tactically
with that digital supply chain to turn off attacks.
It's the beginning, right?
And are they doing that in a manner that's analogous
to the approach that you're taking?
The creation of these virtual victims and the analysis of-
I think mostly it's receiving feedback from people
that are being targeted and getting
enough information about those to then move it upstream.
Legislation that would say that a synthetic persona in a particular local geography counts
as crime, that would be a big leap for governments to take.
That would be very, very useful in the ability to go out and actually prosecute.
But I think that's going to be a very, very difficult solution.
I think the problem must be addressed in cooperation with big tech and digital media.
And that as a police force in a local market, content is targeted locally.
It's geofenced, right?
So something is going to be served into the state of Tennessee differently than it is served into New York State.
As that information is gathered, it should be given to those who can turn off attacks
quickly, that is crime reduction, and then ultimately be working together where if there's
certainty that there is a crime and the companies that are part of the supply chain have information
on the actual criminal, that they're sharing
that in a way that one, they're not getting in trouble for sharing the information, but
two, they're collectively moving upstream to that demand source that's bringing the
content to our device.
I think that becomes a natural flow at some point in the future, the faster we get there,
the better.
And I want to make sure that I'm making this clear. That's not about protecting a machine.
That's about protecting the person
at the other end of the machine
and keeping that mindset is critical.
Right, right.
Okay, so let's go back to your list of victims.
So you were talking about,
you mentioned 17 year old males
who are being offered the opportunity to buy drugs online.
So tell us about that market.
I don't know much about that at all.
So, and do you have some sense of how widespread,
first of all, if the 17 year old is being targeted
to purchase illicit drugs,
are they being put in touch with people
who actually do supply the illicit drugs?
Like has the drug marketing enterprise
been well established online?
And can you flesh that out?
What does that look like?
Yeah, so this is a place where the biggest tech
and digital media companies have done a very good job
removing that from digital advertising
and targeted content on their pipes.
But that is still something that's happening
every single day and actually growing,
predominantly through social media channels or interactions between the person who's going
to end up selling the drugs and the person could be in any country. This is coming through
the mail or it's leading to the streets and making a purchase. But what I can give you,
if I'm going to get these numbers right, roughly 2,000 deaths from fentanyl or similar drugs
in the Commonwealth of Virginia in 2023.
And the belief is that greater than 50%
of those drug transactions began online.
So it is a predominant location
for the targeting of people to buy,
informing people that the drugs were available,
and then ultimately making the sale.
Okay, okay.
And they break down the demographics in the same way,
making the presumption that males,
in all likelihood of a particular age,
are the most likely targets.
And using, I wonder what other demographic
information would be relevant if you were trying
to target the typical drug seeker?
I don't know.
Okay, so then you talked about 14 year old girls
who are being targeted by human traffickers.
And so, and you mentioned something about modeling.
Yeah, so if there's going to be a core profile, you know, a low-hanging fruit profile for human
traffickers, young females that are interested in things like modeling, fashion, presenting
themselves out on social media are going to be a high, high target.
Often that is going to be the people
that are finding their way to become friends with them
and using that information psychologically
to have a relationship.
But there's also the algorithms in place
that enable entities to put code on device,
which allows them to continue to track them
and find them off platform, off social media as well.
Right, and what's the scope of that problem?
Do you like, you said that it's something in the neighborhood of 3% of the interactions
that the typical elderly woman between, you said, I think 78 and 85, something like that.
3% of the interactions they're having online are facilitated by criminals. What's the typical situation for a 14-year-old girl who's been doing a fair bit of fashion
shopping online?
That is incredibly difficult.
We don't have that data.
Okay.
Okay.
Incredibly difficult to find, but it's something that's happening on a daily routine basis.
Okay.
So it's something for people to be aware of if they have adolescent girls who are interacting
online.
Yeah, yes, I imagine they're targeted in all sorts of ways.
And then you mentioned people who are looking for medical information online, who are sick
and infirm, and then obviously in a position to be targeted by scammers in consequence
of that.
So, so that, and if you want to put it into context of frequency,
senior citizens are the highest targeted.
And then the next highest targeted segment
are going to be those searching for some form
of medical solution to a problem.
Oh yeah, okay, so they're number two, okay.
And that is a heavy desperation moment.
They're also often traveling back and forth
to health facilities,
whether they're in a hospital directly or they're moving back and forth between doctors' offices.
That information is made available. You can buy people who visit health places on a routine basis.
And then they're very easy. They're desperate. And so they're very easy to suck into a problem.
It really ranges from stealing money,
so having access to bank accounts,
to phishing attacks where you're suggesting
that they become part of a program,
you're gathering more and more information on them
to then do future attacks, to selling scam products.
So one of the great phenomena in digital media during COVID, especially in the first maybe nine, 10 months, was targeting seniors, and then people with any form of illness, explaining to them how COVID is gonna do something very, very bad to you: buy this product now.
And those were scams.
So the product rarely showed up.
It certainly wasn't very, very useful.
Those may be kind of a low-level problem on a per-crime basis.
But when you look at it across society, the impact is spectacularly huge.
Okay, why is the impact spectacularly huge if you look at it in that manner?
Well the numbers add up, right?
So they're spending more and more and more money, which is a big, big issue. But it also feeds the mindset of, I'm going to, the computer is going to tell me something,
it's going to create some sort of concern within me. If they weren't looking at the
computer, it never would have occurred to them to look for the problem in the first
place. And so in addition to stealing our money, it's stealing our time and it's creating a great sense
of fear that people are then living with
and kind of walking around all day wondering,
my computer told me this thing, I'm very concerned about it.
It's continuing to feed more information.
The more you click, the more afraid you become,
which becomes a very, very big impact on society.
So I don't know if you know this, but it's an interesting fact.
It's an extremely interesting fact in my estimation.
Do you know that sex itself evolved to deal with parasites?
I did not.
Okay, so here's the idea. I mean, I don't know if there's,
there are very few truths that are more fundamental
than this one.
So parasites are typically simpler than their hosts.
So they can breed faster.
And what that means is that in an arms race
between host and parasites, the parasites
can win because they breed faster, so they can evolve faster.
So sex evolved to confuse the parasites.
Imagine that the best way for your genes to replicate themselves would be for you to breed
parthenogenetically.
You just clone yourself.
There's no reason for a sexual partner.
When you have a sexual partner,
half your genes are left behind.
That's a big cost to pay if the goal is gene propagation.
The parasite problem is so immense
that sexually reproducing creatures, and that's the bulk of creatures that there are, are willing to sacrifice
half their genes to mix up their physiology
so that parasites can't be transmitted perfectly
from generation to generation.
So the parasite problem is so immense
that it caused the evolution of sex and creatures
will sacrifice half their genes to prevent it.
So what that implies, like we have this whole new digital ecosystem, right, which is a biological
revolution for all intents and purposes.
It's a whole new level of reality.
And the parasite problem is very likely to be overwhelming.
I mean, we have police forces, we have laws,
we have prisons to deal with parasites in their human form.
But now we have a whole new ecosystem that is amenable
to the invasion of the parasites
and they are coming like mad.
I mean, in all sorts of forms.
I mean, we don't even know how extensive the problem is
to some degree because there's not just the criminals
that you talk about, they're bad enough or they're bad.
But we also have the online troll types
who use social media to spread derision
and to play sadistic tricks and games
and to manipulate for attention.
And we know that they're sadistic, psychopathic, Machiavellian, and narcissistic,
because the psychological data is already in, they fall into the parasite category.
And we also have all that quasi-criminal activity like pornography.
And so it's certainly possible that if the internet in some sense is a new ecosystem full of new life
forms that it could be swamped by the parasites and taken out. That's what you'd predict from a
biological perspective looking at the history of life. And so this is an unbelievably deep and
profound problem. See, I kind of think this is one of the main dangers of this untrammeled
online criminality is that societies themselves tend to undergo revolutionary collapse when
the parasites get the upper hand. And it's definitely the case that by allowing the unregulated
flourishing of parasitical criminals online that we risk,
we really risk destabilizing our whole society.
Because when those sorts of people become successful,
that's very bad news for everyone else.
It doesn't take that many of them to really cause trouble.
So anyways, that's a bit of a segue into-
Well, it's pretty fascinating. A couple
of quick points here. So, so one, the primary concern for the
entities in digital is on their content versus the consumer. So
there's content adjacency. The largest flag we have for content that's brought by third parties, that's going to run on someone else's content, is the Israel-Hamas conflict. The reason for that is less about having a person get upset than it is for having a large brand like Coca-Cola or Procter & Gamble have other content that's going to run near that Israel-Hamas context. Right, just in vicinity.
Yeah.
Yeah.
And that is worrying about pixels, or perhaps the name of a corporation, more than the impact on the grandmother, right, who's going to be hit in the next impression with the crime.
And so we're still in a nascent spot within the tech infrastructure where those who would
provide the capital
to provide us with all of those free services are dominating the conversation.
That's part of why a government needs to step in and say we're going to focus on crime.
What that also does, getting back to a parasitic evolution: what's the sacrifice that big tech, digital media, and the corporates, the brands, are gonna make in order to protect grandmothers?
Right now, the bigger concern is about what might be fake
because it's wasting a penny or a fraction of a penny
when a pixel is delivered to an end device.
The spend is about monetizing each individual nanosecond and pixel that's going to run in front of us, versus protecting the consumer.
And I think this is an incredibly myopic viewpoint.
Digital safety for the brand is about making sure the picture of their product is in a
happy location while grandmothers are losing bank accounts.
And I think that evolution is going to require a sacrifice.
I think the companies that engage in digital safety
and many big tech and digital media companies
go way out of their way to do a good job protecting people.
Ultimately, they're going to win, because the relationship with us is going to be so much better, more protected, and more trusted that they're just going to wind up interfacing with us better than those who are trying to protect their own.
Right, right, right, right, right.
Well, that makes sense to me.
That's an optimistic view because, I mean, fundamentally, what makes companies wealthy,
reliably over the long run is the bond of trust
that they have with their customers, right?
That's what brand, that's really what a brand worth is
in the final analysis.
I mean, Disney was worth a fortune as a brand
because everybody trusted both their products
and the intent behind them.
And so that's a very hard thing to build up,
but it is the basis of wealth.
I mean, trust is the basis of wealth.
And so it's interesting to contemplate the fact
that that means that it might be in the best interests
of the large online companies to ensure the safety
of the people who, rather than the safety of their products,
the safety of the people who are using their services.
That's a good, that's an interesting thing to think about.
Okay, so let me, maybe we can close
at least this part of the discussion
with a bit of a further investigation
into these virtual personas that you're creating that work as the false targets of criminal activity. Tell me about them, and tell me how many of them, approximately if you can,
I don't want to interfere with any trade secrets,
but like, how many, what kind of volume of false personas
are you producing to attract criminal activity?
And is that something that can be increasingly AI mediated?
Yes, so it is.
We use manual processes, but we also
use AI and continuous scanning of digital assets
to keep those profiles active.
So our job isn't so much to become a grandmother
to the world.
It's to have certain components that enable big tech or ad
serving or content delivery to perceive us to be that.
And so we're really kind of gaming back to the system
to find those objects or those persona kind of
classifications on device, whether that's actual phones
or televisions or actual computers, and then to run the
content with as much of that as possible.
So we're running millions of combinations of potential
consumers.
Some of them are many, many profiles at the same time,
because it's not going to discern between different
activities as long as you have something that leans towards what they're looking for.
But then what gets very interesting is a predominance of the content is an auction model.
And so you have to fit within price points of what the criminals are trying to attract as well,
which is not always people with a lot of money.
It's everyone in the ecosystem.
And so we're, we're very much becoming a very nuanced set of personas.
Millions of these, a very, very critical component is geography, right?
So they're going to target a specific town differently.
I don't know if you've had any offers from your government to buy you
solar panels. There aren't actually a lot of government programs
that are gonna pay for your solar panels.
Those are typically some forms of scams
and they're directed at a local market.
What's very interesting right now is that
rather than using AI to design content
to pull you in better,
we're seeing more and more of similar content designed
so that it's harder to pull it down once it's identified as bad.
Right, so they'll make 30 copies
where there used to be one or two.
So you can pull down 15, 20,
and there's still gonna be 10 or 15 left.
Right, right, right, right.
So what a strange world, eh,
where we have the proliferation of AI-enabled victim decoys
to distract AI-enhanced online criminals from preying on real people.
Yes.
AI is used for safety to defend us from AI.
We have hit that moment.
Yeah.
Well, so then, you know, I was just trying to contemplate briefly what sort of evolutionary
arms race that produces, right?
Hyper victims and super criminals, something like that.
Jesus.
Weird.
Well, there is some worry that ultimately it's a horsepower game, right?
So when it's AI versus AI, the more computer horsepower you have, the more likely it is
that your AI is going to win.
And at the Media Trust, our job is to make the digital ecosystem safer for people.
We're not all that concerned about one AI beating another AI, unless that's in the context
of having a grandmother not lose her bank account.
That is the core gist of how we look at it, which is different than an enamored relationship
with technology, seeking technology solutions for a technical problem.
This is a human issue.
And with that, the personas are human reflections back into the delivery of content.
It's not about the machine.
How are you feeling about your chances,
or our chances for that matter,
of control over online criminality?
And how successful do you believe you are
in your attempts to stay on top of and ahead of the criminal activity
that you're trying to fight?
For our customers that prioritize digital safety, the vast majority of what
might run through to attack someone is being detected and removed.
They need to have the appropriate mindset.
They need to be willing to go up onto the demand source
to remove bad activity that's gonna be coming down.
You don't just wanna play whack-a-mole.
You have to engage in that next step.
Those that do are very successful
and create safe environments.
It is not possible to make this go away.
The pipes, the way that the internet works,
the way the data targeting works,
it's just not something you can eliminate entirely.
But there are companies that are in front of this
that will withhold millions of dollars in revenue
at any given moment to prevent the possibility
of targeting something and having something bad happen.
But there are a lot of companies
that are not willing to go that far.
I think right now in some of the bigger companies,
we see a lot of risk towards this:
who's gonna win the ChatGPT race,
who's gonna win the LLM race.
There is so much at stake in that from a competitive
and revenue perspective.
The companies that can monetize that the best are going to start to leap forward.
When you're looking at the world from "how does my technology win" versus "how do I safely
get my technology to do the things that I want," that's when you start to run a lot of
risk.
We're in a risk-on phase in digital right now.
Right, but your earlier claim, I think,
which is worth returning to, was that
over any reasonable period of time,
and there's the rub, the companies that do what's necessary
to ensure that their users can trust their interactions with them
are going to be the ones that are arguably best positioned to maintain their economic advantage in the
years to come.
And I think it's...
Yes, and those that are willing to engage with governments to do a better job of ultimately
finding the bad actors and taking them down.
They're going to be a big part of making the ecosystem better, rather than insulating themselves and
hiding behind the sort of legal-risk regime that's not going to want to bring data forward
to clean up the ecosystem.
Okay.
Well, for everybody watching and listening, I'm going to continue my discussion
with Chris Olson on the Daily Wire side of the interview, where I'm going to
find out more about, well, how he built his company, how his interest
in understanding and preventing online crime developed, and
also what his plans for the future are.
And so if those of you who are watching and listening are inclined to join us on the Daily
Wire side, that would be much appreciated.
Thank you to everybody who is watching and listening for your time and attention and
thank you very much, Mr. Olson, for, well, fleshing out our understanding of the perils and
possibilities that await us as the internet rolls forward at an ever-increasing rate.
And also for, I would say, alerting everybody who's watching and listening to the,
what would you say, the particular points of access that the online criminals have at the moment,
when we're in our most vulnerable states: sick, young, seeking, old, all of those things,
because we all know people who are in those categories and are looking
for ways to protect them against the people that you're also trying to protect us from.
So thank you very much for that.
Thank you.
Thanks for having me.
You bet.
You bet.
And again, thanks to everybody who's watching
and listening, to the film crew down here in Chile today
in Santiago.
Thank you very much for your help today, guys.
And to the Daily Wire people
for making this conversation possible.
That's much appreciated. Thanks very much, Mr. Olson. Good to talk to you. Thank you.