American Thought Leaders - The DeepSeek Psyop Explained: Nicolas Chaillan
Episode Date: February 12, 2025. The Chinese AI app DeepSeek recently became the most downloaded iPhone app in the United States and caused U.S. tech stocks to plummet. President Donald Trump described it as a "wake-up call" for American companies. So what's really going on? Is DeepSeek as powerful as people think? Or is there a bigger story here? In this episode, we sit down with AI expert Nicolas Chaillan, former chief software officer for the U.S. Air Force and now founder of the generative AI company Ask Sage. Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.
Transcript
The Chinese AI app DeepSeek recently became the most downloaded iPhone app in the U.S.
and caused U.S. tech stocks to plummet.
President Donald Trump described it as a wake-up call for American companies.
So what's really going on?
Is DeepSeek as powerful as people say?
Or is there a bigger story here?
They were able to disrupt half a trillion dollars of market cap, shorting the stock and making a whole bunch of money at the same time, with really nothing to back it up.
Just noise and completely manipulating the market. That's scary.
Today, I'm sitting down with AI expert Nicolas Chaillan, former chief software officer for the U.S. Air Force and now founder of the generative AI company Ask Sage. He's worried that the U.S. Department of Defense
is falling woefully behind China when it comes to adopting AI technology in the military.
What you see is the CCP adopting, with an incredible amount of velocity and speed, Baidu GPT and DeepSeek models, even the new Alibaba models, into their military networks.
This is American Thought Leaders, and I'm Jan Jekielek.
Nicolas Chaillan, such a pleasure to have you back on American Thought Leaders.
Good to see you. So DeepSeek has been
described by some as a Sputnik moment, a second Sputnik moment for America. What do you think?
I don't think that's that easy, right? When you look at what happened, effectively, you see
China, particularly the CCP, manipulating markets. And, you know, it's kind of interesting to see how quickly U.S. companies and also investors
in different markets reacted to this news, fake news, really, which was, OK, China has
created a better model with a very small investment in GPUs, using, you know, older generations of NVIDIA GPUs.
And none of it is true.
When you start digging into what happened, you find that that company is really led by investors that have been investing around crypto for a while; they had access to about 50,000 of NVIDIA's latest H100 chips.
And also the models are not that good.
Not only is there tremendous bias baked into the models,
of course, coming directly from CCP propaganda,
but you also see something pretty insane,
which is they ingested an immense amount of data
coming from OpenAI and other companies,
which, you know, everybody does.
But at the end of the day,
what you also see is these models being trained
to pass the benchmarks that are used to decide
whether or not, you know, they are better.
And quite honestly, when you use them in real life with real use cases that we do here,
we find pretty quickly they are not up to par and quite behind what you see with the latest models
from OpenAI and Google and even Meta. So, you know, I think it's important to realize that they're
leading in many fields and they know how to manipulate opinions and markets, which is,
you know, they shorted the stocks, they made hundreds of billions of dollars by doing this
announcement. And so they are smarter than us and play a better game to manipulate
what's going on in the United States
and even in Europe.
But still, this is not a Sputnik moment
when it comes to AI.
We need to be at the top of our game
and we need to make sure the government
is adopting the best US companies' capabilities.
But it does not mean that China is leading right now when it comes to these models.
But it still is something that we need to pay attention to because at the end of the day, they might be winning.
Nick, I'm going to get you to unpack a few things for me before I go further.
For example, you said that this DeepSeek AI is being trained against the benchmarks. Before we talk about that,
tell me exactly what it means to train an AI for those of us that are uninitiated.
Well, most of the time, the way these large language models are trained is literally by ingesting massive amounts of data, to pretty much capture everything that exists.
And in fact, they're running out of data.
So they are creating new data
by using large language models
to actually create new content
because we're running out of data to ingest.
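To make that synthetic-data idea concrete, here is a minimal sketch, assuming a generic OpenAI-style Python client; the model name, seed topics, and prompts are illustrative assumptions, not anything DeepSeek or OpenAI actually uses:

```python
# Minimal sketch: using an existing LLM to generate new text that could later
# serve as synthetic training data. Assumes the `openai` Python client is
# installed and OPENAI_API_KEY is set; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

SEED_TOPICS = ["contract law basics", "GPU scheduling", "naval logistics"]

def generate_synthetic_example(topic: str) -> str:
    """Ask a model to write a fresh passage on a topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Write a short, factual explainer."},
            {"role": "user", "content": f"Explain {topic} in two paragraphs."},
        ],
    )
    return response.choices[0].message.content

# Each generated passage becomes one more document in a synthetic corpus.
corpus = [generate_synthetic_example(topic) for topic in SEED_TOPICS]
```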
And so it's not surprising
that you see all these lawsuits,
not only with OpenAI, but also other companies.
And now you're seeing DeepSeek also ingesting
effectively data directly from OpenAI using their APIs,
their technology to generate responses,
and including, of course, their documentation
and all these things.
That's pretty common.
And that's the way, you know, these models are trained.
And it's very difficult to then change the way these models are going to behave, because they get this bias from the content ingested, based on the volume of data ingested. So it's very difficult for DeepSeek to then remove facts that are automatically ingested via all this massive amount of data.
And so that's why you see the models initially answering the questions about, you know,
President Xi and all the information that China is trying to suppress. But then they have safety
mechanisms to look at the response and override the answer. And that's how they start hiding
all the moments in history that China doesn't want us to know about.
You know, Nick, on this point, it seems to me like such an odd thing that the model will actually
show you the answer for a split second and then basically say, sorry, I can't show that to you. I was just
looking at a recent tweet from a user who asked hundreds of times about Tiananmen Square, for
example, and kept getting the same answer. And it sort of appeared to, quote unquote, trigger
the AI into saying, look, enough. Don't ask me this question anymore. But it almost seems
intentional, because they absolutely wouldn't have to show you that there was an answer and then hide it. Can you make sense of this?
So the way these special models called the reasoning models work is you see the reasoning
first. And that used not to be the case, right? That's very recent, with the new o1 models coming from OpenAI.
These were the first models that reason first, through a very detailed process of thinking.
And the longer it thinks, the better the response gets, and the more money it costs to generate, of course. And so what's interesting is
they're showing you the thinking, which many models don't; they
only show you the response. And so by showing the thinking, they have no way to
hide what the model is doing behind the scene. And when the response is ready, it triggers their safety mechanisms
to then remove the answer.
But it's too late because of course,
the thinking and the reasoning of the model
is there for everyone to see.
And you can see all those insights
being shared to the user.
And so that's kind of the downside of reasoning models.
They have no mechanisms to hide that.
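As a toy illustration of the ordering problem he is describing, here is a simulated sketch: the chain of thought streams to the user before a post-hoc filter ever inspects the final answer, so the suppression comes too late. The keyword filter is a stand-in assumption, not DeepSeek's actual mechanism:

```python
# Simulated reasoning model: the chain of thought is streamed token by token,
# and only the finished answer passes through an output filter. By the time
# the filter fires, the reasoning has already been shown to the user.

BLOCKED_TERMS = {"tiananmen"}  # stand-in for a post-hoc safety filter

def stream_reasoning_then_answer(thinking: str, answer: str) -> None:
    # 1. Stream the chain of thought as it is "produced".
    for token in thinking.split():
        print(token, end=" ", flush=True)
    print("\n--- final answer ready ---")
    # 2. Only now does the output filter see the complete answer.
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        print("Sorry, I can't show that to you.")  # suppressed, but too late
    else:
        print(answer)

stream_reasoning_then_answer(
    thinking="The user asks about the 1989 Tiananmen Square protests ...",
    answer="The Tiananmen Square protests of 1989 were ...",
)
```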
I mean, that's absolutely fascinating. Explain to me the relationship between the number of chips and the success of one of these models.
Well, essentially, these models are built by ingesting massive amounts of data with very complex machine learning algorithms, and that takes an immense amount of GPU power and electricity to generate.
And so the better the chips, the faster you can train models; and the faster you train models, the more quickly you can release the next generation of models.
You just saw last week OpenAI, with o3-mini, release its latest model, which is far superior, by the way, to any other model. These models took months to update and generate, but if you had older chips, and fewer chips, it would actually take even longer. So it's just a matter of velocity and speed to deliver the latest capabilities faster.
So effectively, these companies are investing in infrastructure that enables them to release their next generation models faster and faster.
And it's a never-ending game, because the GPUs keep getting better and more efficient, so you have to keep buying new hardware. And all that money spent on NVIDIA chips is then used to train the models.
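As a rough back-of-the-envelope sketch of that velocity point: wall-clock training time scales roughly with total compute divided by aggregate GPU throughput, so more and faster chips mean faster release cycles. All numbers below are illustrative assumptions, not anyone's real figures:

```python
# Toy scaling arithmetic: time ~ total FLOPs / (GPU count x FLOPs per GPU).
total_train_flops = 6e24   # assumed compute budget for one training run
flops_per_gpu = 1e15       # assumed sustained throughput of one H100-class GPU
seconds_per_day = 86_400

for num_gpus in (2_000, 10_000, 50_000):
    days = total_train_flops / (num_gpus * flops_per_gpu) / seconds_per_day
    print(f"{num_gpus:>6} GPUs -> ~{days:.1f} days per training run")
```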
Tell me very quickly, how is it that you know that they didn't use this low amount of computing power that they claim?
It's pretty obvious when you look at the research. First of all, you know, a five- or six-million-dollar investment is just plain ridiculous. But really, at the end of the day, that's what the CCP does, right? They lie about every number on the planet. And so, you know, when you see all that access to these beefier chips, and when you look at the background and the knowledge of the people behind DeepSeek, it's pretty obvious to anyone that they know what they're doing and that they have access to a pretty much unlimited amount of compute and also funding. And so, you know, the numbers just don't lie. And the CCP does.
So, Nick, you said that they trained, if I understand correctly,
DeepSeek on the benchmarks themselves to kind of give it the appearance of being more sophisticated
than it actually was. Can you expand on that a little bit, please, if I'm right?
Yeah, I mean, everybody is doing that now that the benchmarks are public, right?
It's pretty easy to pass a test when you know what the test is about.
And so they're going to spend extra time and extra focus on trying to get as good a
result on these questions as possible.
And so it's easy to cheat on a test. It's much harder to pull that off when you have real-life scenarios.
That's the way we test models, and we don't make those tests public
because the minute you do, you lose all that advantage of surprise.
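A minimal sketch of that private-benchmark idea, assuming a held-out suite of prompts with reference answers and a deliberately crude scoring rule; the cases and names are placeholders, not Ask Sage's actual harness:

```python
# Private eval harness: the suite is never published, so models cannot be
# tuned to it the way they can be tuned to public benchmarks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    reference: str  # substring we expect in a correct answer

PRIVATE_SUITE = [  # placeholder cases; a real suite would hold many
    EvalCase("Summarize clause 7 of the attached NDA.", "90-day termination"),
    EvalCase("What interest rate applies after default?", "12 percent"),
]

def score(model: Callable[[str], str], suite: list[EvalCase]) -> float:
    """Fraction of cases where the reference appears in the model's output."""
    hits = sum(c.reference.lower() in model(c.prompt).lower() for c in suite)
    return hits / len(suite)

# Usage: wrap any model API in a str -> str function, then score(fn, PRIVATE_SUITE).
```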
So the bottom line is, you know, they launched this thing
and kind of wowed the world, wowed the markets, wowed users.
But then when you look under the hood, it's really not nearly what it appeared.
It's not. I mean, don't get me wrong, they still had very good outcomes when it came to the training methods, and they did come up with ways to be more efficient and save money. And there's definitely value there. Of course, the fact that it's the CCP behind it, and biased, with a bunch of made-up answers, kind of kills the entire value, in my opinion, because how are you going to trust it? Even in math, even in research. You know, they could train it in a way that, if you ask a question in English, you get answers that are not as good as if you were to ask it in Chinese, for example.
That would be easy to do.
So again, I would never trust it.
And so that defeats the point of using it.
The numbers and the benchmarks and the amount of money that they spent to train it are completely made up. And I don't believe it for one second. It may well not be as much money as was spent on OpenAI's o3 or o1, but it's still way more than they're claiming; a thousand times more.
So when you left the Air Force,
when you left being the software chief over there back in 2021, I think around the time when we first spoke online, you said you believed that the U.S. had really lost the war, or was on the cusp of losing the war, in AI.
But you sound more optimistic today.
So it's always interesting. I always said that the U.S. was doing very well, but the DoD was losing. And I think that's a little detail that's often lost in translation with my French accent. And I think that's the most important fact. What you see is the CCP adopting, at an incredible amount of velocity and speed, their latest Baidu GPT and DeepSeek models, and even the new Alibaba models, into their military networks, at scale, across classification levels, on very complex weapon systems.
And, you know, while the models are less good and less capable than the U.S. models, unfortunately, the U.S. has a massive lag and lack of adoption of the best-of-breed U.S. capabilities into the Department of Defense. And so what you end up seeing is the U.S. leading compared to China when it comes to companies. But when it comes to defense, which is quite honestly the most important use of AI we could think about in 2025 and beyond, you see DoD being at least five years behind China. And it's compounded by the fact that these models augment and empower the velocity of people, up to 20 to 50x.
So one person turning into 50 people.
And let's face it, the CCP already has more people.
And now they turn those people into 50 times more, thanks to being augmented
and empowered by AI. It's almost impossible to compete, particularly if you don't have access
to AI capabilities. And so where I'm very concerned is the amount of money spent,
particularly during the Biden administration, for four years, on AI ethics, and other massive amounts of money wasted by research teams building their own DoD capabilities, like NIPRGPT, that are just years behind what you see companies do and really push us even further behind China.
I'm wondering if you could actually explain this. I really like how you talk about this idea of AI
actually increasing the velocity of the human being. So, you know, just in a more layperson example, right, this is far beyond using a chatbot.
Yeah, it's a life changer for me.
It became my chief of staff, my accountant, my lawyer.
I had the best lawyers in the United States.
And even when I send them some, you know, contracts to review, they have very few comments compared
to what AI gave me. So it's pretty mind-boggling to see what you can do when you go to the next
more advanced AI, you know, models and when you know how to use them. Because that's really the
issue is you see a lot of, you know, citizens giving up, frustrated that they can't get to the right outcomes.
And I tell people, look, blame yourself, you're not using it right.
We have a lot of free videos on our website to help people get started with Gen AI.
It's a life changer, but it needs some learning.
And honestly, it is not rocket science.
On average, it's going to take about three months
for someone to get the hang of it.
But it's super powerful.
You know, for me,
my company was created by Gen AI.
My logo, my website, my application, you know;
90% of the code is generated by Gen AI.
We estimated I would have needed 35 developers
full-time to build what we built with two people in eight months. And the entire company, including
the logo and everything we do in marketing and LinkedIn and even, you know, Google search and advertising is 100% driven and designed
by Gen AI, giving us feedback, options, ideas, course of actions.
It's your chief of staff on steroids.
It's a way to save an immense amount of time, when you know how to use it, for pretty much every non-blue-collar job. And it's funny because, you know, you kept hearing for years that technology was
getting rid of, you know, all the blue collar jobs. And what you see happening is the exact opposite.
You know, the jobs with the highest likelihood of being disrupted by AI are actually non-blue-collar jobs, particularly coders and, you know, accountants and lawyers.
Things that, you know, if someone had told you this 10 years ago, people would not have believed it.
And, you know, people went into coding and software to have, you know, a safe, secure job for 20 years or 40 years.
And what you see is a very high likelihood that most basic coding jobs will be replaced by AI.
I mean, that's absolutely remarkable. And the other thing you said when we talked offline was that you actually get the AI to give you a range of options when
you're querying it. Like you don't ask it, okay, give me the answer, right? Because my big concern
is you don't want the AI to tell you what to do. That sounds like a really bad idea.
Yeah. And also, you bring your bias into the questions. And that's probably the number one issue and mistake we see people make. We call that prompt engineering. It's really learning how to prompt and question things. And words matter. Instead of saying, can you do X, you could say, do X.
So, simple things: say you want five courses of action for a problem. Instead of saying, give me one solution, or, give me what you would do for X,
say, give me five options to tackle this problem. You're going to get way more well-defined answers, but also more options to navigate.
Sometimes I tell them, well, give me a mix between number two and three, and how would you do that?
And so you can then become also the driver.
And so the way I think about it is,
you become the orchestrator of the AI, you guide it.
So you still need to have your brain
and your desired outcome at the center of the puzzle here,
but you still need to be able to navigate it
to get you options that maybe you
didn't think about. And if you show up with biased, already pre-made answers, and you're guiding the bot to go to those solutions, then you're limiting your options by limiting the choice of answers. So you want to be open to it pushing you outside of your comfort zone.
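A short sketch of that prompting pattern, assuming the OpenAI Python client; the model name and prompts are illustrative, and the same multi-turn pattern works with any chat API:

```python
# Ask for several distinct courses of action, then steer in a follow-up turn.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"  # illustrative model name

history = [{
    "role": "user",
    "content": "Give me five distinct options to cut our cloud spend by 30%, "
               "each with one risk and one first step.",
}]
first = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The human stays the orchestrator: blend options rather than accept one.
history.append({"role": "user",
                "content": "Combine options 2 and 3. How would you do that?"})
second = client.chat.completions.create(model=MODEL, messages=history)
print(second.choices[0].message.content)
```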
Give me a few examples of use cases that work for you. One, which is fascinating to me, you just mentioned: looking for holes in contracts, right, to make sure your contracts are rock solid.
One that I use a lot is for research.
I use perplexity.ai.
I know Ask Sage has a similar capability,
which I'm in the process of exploring right now further.
What are the other just sort of use cases
to give people an idea?
Yeah, so I mean, there's so many options, right? Number one, it really depends on what you do on a day-to-day basis. You know, we have coders using it to do the entire coding, research, testing, cybersecurity, compliance; all this compliance paperwork we've got to fill out, not only in defense, but also with climate-change requirements and all these different paper reports we've got to generate.
That's a great way to get it done much faster.
You talk about summarization, extraction of insights, contracts.
I mean, it's not just writing.
We use it to write all our proposals and response to bids.
My company responds to a lot of government bids.
We went from five days on average down to 32 minutes to respond to a government bid
with 98% accuracy on the first try.
You take contracts, it's on both sides.
It's writing your own contracts, but it's also reviewing contracts that you receive from third parties.
And for me, it's been a great way to ask it to see what I should pay attention to and what pitfalls to avoid and what I should be concerned about and which clauses should be reviewed more deeply by humans. And so it's a way also to navigate the noise and kind of save a whole
bunch of time, you know, instead of manually reviewing something. Just last week, I was
reviewing a contract I signed, and I had forgotten what the terms were when it came to the termination of the contract. Is it, you know, one year? What is it? And I just attached the contract to Ask Sage and said, hey, tell me what's the deal with the termination clause. And in two minutes it gave me a quick summary of the contract's termination clause. I saved probably, you know, 20 to 40 minutes of figuring this out, and it took me one minute.
You know, every aspect of my life as a CEO,
whether it's marketing, my LinkedIn posts,
I train it on all my previous LinkedIn posts
and all my articles.
And so now it speaks, you know, with my tone;
it almost has a French accent, you know.
So, you know, it's all about how you use it.
And unfortunately, people don't have access to all the tools,
particularly that we built in our product,
to customize the behavior and the tone.
And so many people using something like ChatGPT, for example,
sound like robots.
I mean, you can tell it was written
by Gen AI. And, you know, I welcome anyone to look at my posts and compare, say, two years ago,
my LinkedIn posts to my posts yesterday or the day before, and tell me that you can tell they
were written by Gen AI. And I bet you, you cannot.
I mean, absolutely fascinating. Let's jump back to DeepSeek. Bottom line is, you're saying that the US models, like OpenAI's and perhaps others, are actually superior. Do I have that right?
Yeah, I mean, we tested it in real-life use cases. You know, benchmarks are really mostly useless. I mean, they do a decent job of at least differentiating the junk from the good models,
but they are not very good when it comes to the details of real-life scenarios.
And there's nothing better in life than a real-life use case, right?
And doing a deep dive with real data and real research.
And so, when we test models... we have 150 models on Ask Sage now, which is pretty insane; both commercial models from OpenAI, Google, Anthropic, Meta, and so on, all the way to pure open-source models.
And we put DeepSeek on Ask Sage because, you know, the Department of Defense and the intelligence community wanted to research this securely. And, you know, you've got to be close to your enemy to know what's going on. And I saw so many people freaking out, you know, that we were putting DeepSeek on Ask Sage. Number one, we did it securely.
It's self-hosted and siloed and sandboxed.
But, you know, if you don't know what your enemy is doing,
how are you going to be able to take action?
It's just foolish to think we should put our head in the sand and hope for the best.
And so that's the first thing we did is we gave access to researchers
with very clear guidelines on how to use it and how not to use it.
And that's been a great tool to be able to try to understand how they built it.
And to see the bias. And honestly, the bias is actually super revealing, because you can then look into what they try to suppress, which gives you a hint about what they care about, right? And so when you find what they're hiding, it's a great way to keep digging to see what
else they're trying to hide.
So it's actually a super interesting tool for intelligence.
Right.
But again, the way we build this entire stack is to be secure from the ground up and completely
air-gapped and siloed.
So there is really no cyber risk.
The responses are biased, obviously, and each model is going to have bias, right?
But we put it through real-life use cases in coding, cybersecurity, compliance, data analysis, and look, it's doing fairly well on many questions. But in many, many things, it's not as good at coding as other models like, you know, o1, o3, or Anthropic's Claude 3.5 Sonnet. And so, I mean, there's so many options today, which, by the way, is why tools like Ask Sage are essential, not only for the government, but also for companies. So you're not getting locked into one technical stack. We don't know which company is going to be leading tomorrow, and you don't want to be
locked into OpenAI or anybody else.
My job is to give customers diversity of options so they can try things out and see what sticks.
You want to have those options, and you want to have them quickly.
And more importantly,
you want to be able to train your data once
and use any model.
So we built this abstraction layer so that you can ingest your data, all your enterprise business data, all your knowledge base, decoupled from the model.
So that way, if there's a new model tomorrow
that comes out from the same company you're already using,
or it's a completely new company,
coming up with a disruptive way to do things
and you want to use it right away,
there's no change to make to any of the work you've built.
And that's a game changer for companies.
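A minimal sketch of that decoupling, with hypothetical class and method names; Ask Sage's actual layer is not public, so this only illustrates the general pattern of keeping the ingested knowledge independent of the backend model:

```python
# The knowledge base is owned by the abstraction layer; any backend model can
# be swapped in without re-ingesting data. Names here are hypothetical.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ModelAgnosticAssistant:
    def __init__(self, knowledge_base: list[str]):
        self.knowledge_base = knowledge_base  # ingested once, model-independent

    def ask(self, question: str, model: ChatModel) -> str:
        context = "\n".join(self.knowledge_base)  # naive; real systems retrieve
        return model.complete(f"Context:\n{context}\n\nQuestion: {question}")

# Swapping backends requires no change to the ingested data:
#   assistant = ModelAgnosticAssistant(my_docs)
#   assistant.ask("What is our refund policy?", model=vendor_a_backend)
#   assistant.ask("What is our refund policy?", model=vendor_b_backend)
```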
And that's why we see such massive success. Because everybody else, right, all our competitors, are pushing this model where they're pushing only their own models. So, you know, Microsoft with OpenAI and all the Microsoft models, Anthropic, and so on; all these companies pushing their own models, of course, with their own bias.
We are agnostic.
And I think when you're a business or when you're in the government even more, you don't
want to be locked into anybody.
So what is the difference between someone loading DeepSeek on their phone or on their
computer directly and loading it through AskSage?
I mean, just for the very basic user. Yeah, so that's a huge difference, right? So the hosted
DeepSeek app is using Chinese hosting and servers. So all your data and everything you do, including your keystrokes and everything you type, is then sent to China and the CCP, because they have a mandate to share with the government. Which means they could even potentially look at what else you're doing on your phone or your device.
So be very careful there. I would never use it, ever.
And so, you know, that's very different from hosting the open-source version on your device, or on Ask Sage, like we did. That's completely controlled and hosted in the US, and no data is flying back to the CCP.
So you're still going to get the same bias and the same, you know, made-up answers for some of the questions. Although what we found is the open-source models have less bias, funny enough, than the hosted DeepSeek model. For example, the Tiananmen Square answer on some models is not blocked for the open-source DeepSeek version.
But yeah, if you go on the hosted app and you download DeepSeek on your phone, it's going to be blocked. And so that's interesting. It would seem to be a safety layer that they added on top of the model, not into the model.
And so again, when you host the model on your device,
whether it's your laptop, you can download it and host it,
which again, not everybody knows how to do that.
And that's why Mr. and Mrs. Everybody
are gonna go and use the app.
And that's where the damage is done, because all your data flows to China. And that's why companies like us exist and host it for you, so you can just use us and query the model instead of hosting it yourself.
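For readers who do want to run the open-source weights themselves, a minimal sketch using the Hugging Face transformers library; the distilled model ID is an assumption to verify (availability, license, and hardware requirements) before running:

```python
# Self-hosting sketch: the model runs locally, so prompts never leave your
# machine. Assumes `transformers` (and `accelerate` for device_map) installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed model ID
    device_map="auto",  # place weights on whatever hardware is available
)

out = generator("Explain what a reasoning model is.", max_new_tokens=200)
print(out[0]["generated_text"])  # nothing here is sent to a remote server
```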
So how does DeepSeek compare with TikTok in terms of a threat?
Number one, DeepSeek has clearly used data coming from TikTok to train itself. So it's interesting how, you know, that access to data is paying off.
As you know, the next weapons, particularly AI weapons, are all driven by data.
And so the more data you have, the more powerful weapons can become.
And, you know, it's foolish to give the CCP access to all that TikTok data.
And so you see some of the results here with DeepSeek having access to all that data.
It's very similar, right, to the risk and the threats of TikTok. You know, I was disappointed when I saw that Americans were downloading Chinese apps the minute TikTok got
banned and seemed not to understand what we're facing. And I think there's a lack of education
when it comes to the threats and the risk that the CCP is bringing, not just to your data,
but also your family and the nation as a whole.
And most people think, oh, I have nothing to hide and I don't care if China sees my stuff.
But it goes beyond that.
And people don't comprehend how China is able to use that data to then better understand the population, to create political weapons of misinformation, and to manipulate markets.
A perfect example is the DeepSeek announcement: the way they were able to disrupt half a trillion dollars of market cap in a day, shorting the stock and making a whole bunch of money at the same time, with really nothing to back it up, just noise, and completely manipulating the market.
That's scary, right?
Because if they get good enough to manipulate the entire market in the United States and Europe,
that's going to become a real threat to the economy of the nations.
And they are doing that by understanding how to communicate and how to position messaging to Americans. And there's no better way than seeing how people react to videos and
information on TikTok and other apps, because they can see what works and what doesn't work
and how people respond to those videos and data points. And that's then used to do these kind of
campaigns, you know, that can completely destroy the economy of the United States.
Unbelievable. Nick, one thing that just struck me, it was so weird somehow that all these users,
when TikTok was going down, were jumping to an overtly communist party app, named that way even. And it just struck me, was this not just kind of
a demonstration by the CCP of being able to subtly influence through TikTok, through messaging,
through finding the people that are most susceptible to be influenced this way, to go
to that specific app? I mean, it just struck me that
isn't this itself, you know, kind of the case study of how people can be manipulated. I mean,
I saw videos of young people saying, it's amazing people have housing in China. Almost everybody has
it. We've been lied to all along. It's such a horrible country. You know, just kind of wild stuff on the face of it. But
it just seems so unlikely and bizarre that they would be running to this,
you know, RedNote app. What are your thoughts?
Well, you can see right away the impact of letting our kids use it. We started calling for the ban of TikTok back in 2018, and that's what happens after six years of brainwashing. You know, you see the effect right there, and you're 100% right. A lot of people are now completely brainwashed, not just on the CCP and China, but also on dozens of other subjects. You look at Palestine and what's going on, and how many more hateful, antisemitic messages you can find on TikTok.
And that's not an accident. That's by design, built into the system. And it has always shocked me that, you know, people would use something that is banned in China, but yet is allowed elsewhere. And China is smart enough to know the damage it would cause to their kids, and they don't let people in their own nation use it. But we were stupid enough to let that happen.
Well, and so DeepSeek, obviously, it's not taking in as much information.
It's not ingesting through a camera, I guess. It's just ingesting kind of knowledge or what
sorts of questions people have. But basically, I think you've been suggesting that it's kind of the same model of data acquisition, and then the ability to influence, potentially, through the types of answers it gives.
Can you flesh that out for me a little bit?
Well, number one, in the terms and conditions,
they mentioned they can log your keystrokes.
So everything you're typing,
not just what you're typing into the app,
but they could potentially log everything you're typing.
So that could be an immense amount of information.
Particularly when you do speech-to-text, you still have those keystrokes sent to the operating system. So they still get the text typed into the boxes, the inputs of your apps. So even if you use voice, they could still get the text coming out of the speech-to-text app. So you're talking about an immense amount of data that they would get access to. And you're right. You also get to control
what right and wrong is, right? And you can make your own story. And as you know, whoever
controls history controls the world. Are the responses custom made for people?
And I mean, in general, in AI models, you know, are AI models learning about who you
are and customizing their responses to you specifically at this point?
Or are the answers more in general?
And does DeepSeek have this capability, from what you can tell thus far through your tests?
So, no model learns on the fly.
The way this works, when we activate this feature,
either on OpenAI or other platforms like Ask Sage, we have it as well: you can activate learning, meaning when you're searching something and it finds that the answer is relevant to the user, we can log it to keep that insight in a database. And that database is then used
to augment the knowledge of the model. It's never into the model. It's on top of the model. Now, some companies offer a free service
and in exchange, you're giving away the rights to your data to the company. And then they can
use the training data to ingest it into the future release of the models, but not the current model, right? It's only used for the next training cycle.
And so we don't do that at Ask Sage, because, number one, we never want to train on customer data. We have very sensitive DoD and US government data, and banking and healthcare customers. We never train models with customer data. Most do; we don't.
But it's never in the current cycle, right? So it takes months to train a new model. So you're
never going to see magically your data showing up into a model that other people can use.
What instead people do is they create this knowledge base called a vector database where
they store all that insight about you so that the model can get you better answers.
And we use it often to give context to the bot when it comes to who you are and what
you're working on.
It's always great not to have to re-explain who you are and what you're doing, so you get better, faster answers. But for us at Ask Sage, we activate it on demand, and you can create what we call data sets, which is like a bucket of data, where you can have different topics, like folders, and you can ingest files and whatever data you want in there to augment the knowledge of the model, so it knows better, you know, what you're doing.
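A toy sketch of that vector-database pattern: snippets are embedded once, the nearest ones are retrieved per query, and the result is prepended to the prompt. The bag-of-words embedding keeps the example self-contained; real systems use learned embeddings and a proper vector store:

```python
# Retrieval-augmented prompting in miniature: store snippets as vectors,
# find the closest ones to a query, and inject them as context.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# The "data set": ingested once, living outside any model.
snippets = [
    "Ask Sage was founded by Nicolas Chaillan.",
    "The contract termination clause requires 90 days notice.",
]
index = [(snippet, embed(snippet)) for snippet in snippets]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:k]]

query = "What does the termination clause say?"
prompt = f"Context: {' '.join(retrieve(query))}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt is what actually goes to the model
```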
I mean, this is a whole brave new world, isn't it, with AI that we're just beginning to scratch the surface of?
It's going to change a lot of things. For me, in the last two years... I was not an AI guy whatsoever. I was, you know, security, software, cloud. I created, you know, 14 companies in 25 years, and was the chief software officer of the Air Force and Space Force, and chief architect at DHS after I moved here from France. I created my first company when I was 15. And I had never seen a technology, including the smartphone or the cloud, that impacted my life on a day-to-day basis as much as this.
When you really understand it... the issue is, people often started to play with this and gave up after a couple of weeks, or just didn't grasp the opportunity and the scale of this and what you can do with it. And honestly, they need to double down. Because, you know, I'm very much worried when I hear people like Sam Altman or Mark Zuckerberg talk about the impact on jobs. Within five years, a good 50 to 70 percent of existing non-blue-collar jobs are going to be impacted drastically by AI. And the people that are going to make it are the people embracing it and augmenting themselves to become, you know, 20, 30, 40 people's velocity.
And that's going to be a big disruption.
And the world is not ready for it.
And we can debate all day long whether it should happen or not.
But it will.
You know, and if we don't do it, China is going to do it.
And so regardless, it's going to happen.
And so my take is, people need to jump in as much as possible and learn prompt engineering and everything they need to know about how they can use these technologies to become augmented-AI people.
And that's going to be the next generation of workers.
Well, I, for one, would like to kind of keep an air gap between myself and the AI.
It's very important, of course, to look at privacy and safety and security, and how you decouple, kind of, the way you're going to use this and the way you're going to let it give you insights. At the end of the day, it should probably not replace humans.
Although there are basic tasks that, you know,
we demonstrated are fine.
But when it comes to real-life decision making,
you know, the human needs to be in the driver's seat.
But it doesn't mean you don't also understand
the pros and cons of the technology. And unfortunately,
what you've seen is a lot of people focusing on the cons, instead of saying, okay, that's a limitation of the technology; how do I create my next company to overcome this and find a way to go above it and fix it?
That's what we've done here at Ask Sage.
We found solutions to keep pushing out the limitations and the issues we were seeing with Gen AI.
And now, you know, we pushed it to universes
that we didn't think were possible.
And it's so sad to see people always looking at the bad things
and not seeing those as opportunities to create value.
Tell me a little bit about the intersection of AI and nuclear capability. This strikes me as
something that's very important because you mentioned some of the limitations right now of AI usage in DoD.
Yeah, you know, there's a new administration that is eager to focus on all these fields, including hypersonics, quantum computing, AI, drones, right? And already, you know, even the nuclear deterrence of the United States is in shambles because of some delays we've been seeing in some programs that are, again, lagging behind schedule and way over budget.
We're talking 10x over budget.
And so it's concerning.
We need to do better.
We need to be less complacent. We need
to have more urgency to get things done. And more importantly, we need to understand the kind of
fights we're going to be fighting moving forward. And I'm not sure it's going to be much about jets
and bombers, although they still need to be, you know, at the top of their innovation game. But it's going to be also about, you know, software and drones
and AI capabilities to empower humans to make better, faster decisions.
And quite honestly, right now, the money spent on those domains was mostly wasted on paper research, like DEI research on AI, or ethics.
We spent so much money debating whether or not to use AI in the DoD,
which is just mind-boggling to me.
China is not wasting time pondering
whether or not they should use AI in their weapon systems. And if they have it,
and we don't, it's the same as saying, maybe we just don't do nuclear systems anymore. And China
has it, and so be it. But I don't think that's a good answer. And I don't think anyone would
want to be living in that world.
Nick, I have to ask you about this ethics piece. I mean, it's one thing to say whether it
should be used at all. It's a different thing not to have some significant ethical guidelines. And
you're right. Those guidelines may not exist in communist China at all. But you can kind of
imagine, yes, autonomous weapons systems driven by AI that do what they want. You know, there's been very popular movies made about this, right?
We definitely don't want that.
Clearly, we have to have some kind of ethical framework.
We do, but it doesn't mean you spend 100% of your budget on ethics.
You know, if I were to pick, I would probably spend, you know, 5% of the budget on ethics
and 95% on capabilities.
And look, we're so far away from autonomous weapons powered by AI. You know, we're still trying to use it for contracts and for basic, you know, uses, saving people's time and headaches.
You know, I had a great story with someone working as the executive officer for the secretary of the Air Force.
And he called me, you know, almost in tears, saying, you know, it was 5 p.m. and I was going to spend hours writing this report for the SECAF. And then I remembered we have Ask Sage, and I was able to do it in 15 minutes. And now I'm going to see my kids and I'm going to be able to spend time with my kids, thanks to you. That's, you know, what we're about.
And that's so far away, you know, from, you know, people using it on weapons. And by the way, I agree that we should be very cautious
when it comes to putting AI on weapons.
The fact is, we're going to have to get there smartly, but we're going to have to get there, because China is going to do it.
And we demonstrated when we did dogfights with jets
that humans lost every single time against jets powered by AI
with no human on board.
And so humans don't have the ability to move as fast
and make decisions as fast as technology.
And so we're going to need to find a way
to compete against these new weapons.
And you don't want to be the one playing catch up.
You know, one thing that I think we all know
from World War II and the nuclear weapons
is being the first was important
and playing catch up is never good.
And so we can't just dismiss the importance of getting autonomous weapons powered by AI with still control from humans.
But the fact is, at some point, when you get into a fight and it's a face-to-face fight between two weapons, two AI-powered weapons, the human, to some degree, needs to allow the fight to begin and then get out of the way to let us win.
And so that's the world we live in.
We can pretend it's not happening.
We can pretend, you know... and everybody says, well, you know, we could have China sign some agreement and treaty to say they're not going to do that, but they will sign it and do it anyway.
Right. So we can't take that chance.
And quite honestly, we're so far away from those weapons, which scares me even more because China is not.
And so we need to keep that in mind as well.
Fascinating. Something I've been seeing a lot of videos of recently is these incredible light
displays, which are actually something like tens of thousands of drones being used in coordination.
And people have been noticing and I've been thinking about the military potential of this
kind of technology. Have you thought about this?
And where are we at with that?
I pushed, back in 2018, for the creation of a special office to go after the defensive side, and also the offensive side, of swarming technologies.
Honestly, we have not done nearly enough to even comprehend what can be done
with those swarming technologies. The speed and the cost of these devices are mind-boggling,
and most people don't even know how quickly these things can move. And the time it would take to react. By the time you even understand
what's going on, the attack is already over and you have nothing left to do. So I can tell you
it's probably one of the biggest threats, and not just from the CCP, but also from terrorist
organizations. Quite honestly, the cost being so low nowadays and the technology being so accessible,
there's really no barrier to entry.
And the fact that we're not spending
significant amount of money in the defense budget
and at DHS as well to go above and beyond
to understand and have answers to prevent those attacks is very concerning.
Give me a scenario of what one of these attacks might look like.
You know, sky's the limit with these attacks. They could put explosives on some of these drones, but even just using them to crash into things... there's so many things that can be done with swarming technologies to disrupt, you know, air traffic control, to disrupt airspace. There's almost nothing you cannot do with it, from putting weapons or bombs on them, or just dropping them from the sky to hit people and objects.
It's a very concerning capability.
And when you see how well-coordinated these can be
and disconnected as well, which means if they lose control,
they can still continue to behave and achieve whatever it is they were programmed to do.
So a lot of the technology we use to disrupt their capabilities would not be impacted
because they would still fly and go do what they're trying to do.
And many commercial drones are designed to fall out of the sky if they lose the connection to the controller, or to just go back and land where they took off.
The military version of these drones can be programmed to continue doing the last instructions they were given. And so a lot of the technology we use to disable those drones would just not work efficiently
to stop these attacks.
And honestly, people go back to basic means, like eagles and other things. But again, we don't have nearly the volume of eagles or birds to go attack these drones. And the net technology and all that, this is not realistic against a massive swarming attack. And so I think we really need to wake up.
So there's a huge interest, as we've seen in the last several weeks, in the US government right now in cutting costs, in getting rid of bloat; this whole DOGE. Last night, I was listening to the first DOGE report, I guess, shared as a Space on X. What you're describing sounds like it requires a lot more military spending. How do you view that?
I don't think it is more military spending.
I think we need to waste less of it on the wrong things.
You know, my take is probably 70% of the defense budget is wasted on the wrong things.
So we probably have too much money, if anything.
It's just spent on the wrong things by the wrong people that
don't understand technology, don't understand the next battles we're going to be fighting.
Stuck in time, 50-year-old way of thinking, outdated way of thinking. We partnered with
NATO for years when I was in the building at the Pentagon. And I would be shocked by countries like Singapore and smaller nations
that don't have the luxury of wasting their money, but yet had very good capabilities
because they had very little money to spend and they had to spend it wisely. And there's such a
thing as having too much money, and that's what we have. We don't need to increase it, and we certainly can save a lot of money.
The only issue I have with DOGE is, you know, bringing people in from the outside. Like me when I started, back in 2018: I had zero engagement with the government. I knew nothing about it, and it took me two and a half years to be dangerous, to understand it enough to know what actions to take.
And, you know, they need to surround themselves with people that have kind of this mentality of breaking things smartly and not just breaking things without understanding the impact of it.
And so they really need to start bringing people that have a good understanding of
what's going on in DoD particularly. I love that they're spending time on the other civilian agencies,
but let's face it, the DoD is a beast and you need to go after it with the right people. And
you cannot just bring people that have never dealt with it
and hope to have any kind of good success
within two, three years
if you don't bring the right talent.
Nick, this has been a fascinating conversation.
Tell me briefly where people can check out Ask Sage,
and congratulations on the success
with this new company.
Yeah, we're one of the first companies to be built and powered by Gen AI.
And we demonstrated... you know, we were two people, my wife and I, when we created it.
Since then, we grew to 20 people.
But, you know, getting to the kind of valuation we got to with two people plus Gen AI demonstrates
hopefully to your audience what they can do with the technology
and create their next business, their next innovation. I built things I didn't know how
to build and didn't know how to do it, but I was able to do it by being empowered by AI.
And if they want to check us out, it's asksage.ai. They can go create an account for free and go try it out.
Any final thought as we finish?
The most important thing is to make sure that the DoD starts adopting AI technologies.
And I'm not just saying, you know, my company, they can pick whatever product they want to use.
They need to pick best of breed.
We need to stop spending money building and creating government-made junk, which is exactly what we've been doing for the last four years.
Instead of investing and collaborating with industry, the government is not good at building products and not good at innovating.
We can't keep up.
There's so much bureaucracy, so much paperwork.
It's impossible for the government
to build technology at the speed of relevance.
And so they need to partner with best of breed companies.
And if you want to have a chance at winning against China,
we need to reignite the partnership
between the private sector and the public sector.
And honestly, we're so far behind.
And Silicon Valley refused to work with DoD many times.
And yet, when a few companies are willing to collaborate with the Department of Defense,
DoD continues to use research and development money, illegally,
to compete against the commercial companies.
And that's not how we're going to win.
So if there's one thing that the administration needs to look at, it's why teams like AFRL are spending millions building their own technology in a vacuum instead of partnering with Google, OpenAI, you know, whatever company, right? Pick your poison, but use best of breed. Don't reinvent the wheel,
and let's augment the capabilities to empower airmen and guardians and our service members
to be more efficient instead of building stuff in a vacuum. Well, Nicolas Chaillan,
it's such a pleasure to have had you on again.
Thanks for having me.
Thank you all for joining Nicolas Chaillan and me
on this episode of American Thought Leaders.
I'm your host, Jan Jekielek.