Endgame with Gita Wirjawan - Randy Goebel: Why the Global South Can Still Lead on AI
Episode Date: May 6, 2026

AI today is powered by massive models, massive energy, and massive capital concentration. But what if this trajectory is essentially unsustainable? And what if the Global South could redefine the future of AI? Randy Goebel, an AI scientist and advocate of open science, explores the sustainability crisis behind large language models and the costly cycles that define AI's history. He also emphasizes that for Southeast Asia and the Global South, the future of AI depends not on hardware, but on building and sustaining intellectual capital.
----------------------
About the Guest: Randy Goebel is a computer scientist and professor of computing science in the Department of Computing Science, University of Alberta, Canada. He's known for his work in artificial intelligence (AI), machine learning, logic-based reasoning, and explainable AI (XAI). Goebel also helped found the Alberta Machine Intelligence Institute (AMII) and has been deeply involved in AI research and teaching.

#Endgame #GitaWirjawan #RandyGoebel
------------------------
You might also like:
https://youtu.be/xUdCSq4W1Kk?si=mVh92mCGznE4GEm
https://youtu.be/WNcW2jHGrtk?si=dVprUeByatSeOfFh
https://youtu.be/m8h7zojuM5Q?si=7qx3ICQ6Z0oYs11Z
Transcript
How do you encourage a society to choose to do good?
It's never been the case on our planet in our society that open knowledge could not be misused by those who choose to misuse it.
So it's a deeper philosophical question that you understand very well, and that is...
Randy, I'm a little troubled by what I call the paradox of abundance.
Are you convinced that with respect to AI, it's going to be defying whatever we might have been witnessing with respect to the paradox of abundance on economic capital and a paradox of abundance on labor as it relates to intelligence?
They're serious.
We have to convince decision makers how serious they are and show them the value that accrues from doing things differently,
which can avoid negative consequences.
This is the one you said:
the localization of power, authority, and intelligence
for the few, just like we see with capital.
Hi, friends, it's a pleasure to tell you that my book,
What It Takes, Southeast Asia,
has been released in English and Bahasa Indonesia.
You can buy it through books.endgame.ID or at any of these stores.
Now, back to the show.
We are being watched by close to 2,000 people here
who are eager to hear your wisdom and views, of course, with respect to AI.
Oh, my God.
Randy, I want to start off with a rather structural problem statement.
And I want to put that in the context of Southeast Asia,
which is electrified only to varying degrees,
between 1,000 and 10,000 kilowatt-hours per capita.
Unfortunately, only two countries out of the ten,
i.e., Singapore and Brunei, are electrified at 10,000 kilowatt-hours per capita,
which is really the threshold for the next level of modernity.
Unfortunately, Indonesia is only at 1,300 kilowatt-hours per capita, Malaysia at 5,000,
and all the rest are well below 3,000 kilowatt-hours per capita.
AI uses a tremendous amount of energy.
One forgets that when you use Grok, DeepSeek, Gemini, or even ChatGPT,
that uses about 10 to 50 times more energy than a simple search on Google.
And if you're in the film industry, if you use Sora as a platform to create
sophisticated AI-generated images, that requires about 10,000 to 50,000 times more energy.
There just seems to be a structural limitation here for developing economies around the world.
And I call that the Global South.
And Indonesia is a big part of that.
And the Global South makes up about 84% of the planet.
How do you deal with this?
Because to build up pre-existing capacities in Southeast Asia, just to go up to 6,000 kilowatt-hours per capita
for the eight countries in Southeast Asia, it's going to require building out
about a terawatt's worth of power generation capacity, which will cost about two to three
trillion dollars. We don't have that kind of money. It'll take us more than a hundred years.
It just tells me with an intuition that Southeast Asia is going to be left far behind,
just from an energy standpoint. Please share your views.
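(The arithmetic behind the quoted figures can be checked with a quick sketch. This is not from the conversation itself; the two-to-three-dollars-per-watt build cost is an assumed unit figure, roughly typical of utility-scale generation, chosen to reproduce the quoted range.)

```python
# Back-of-envelope check of the quoted build-out figures.
# Assumption: new generation capacity costs roughly $2-3 per watt to build.
capacity_watts = 1e12          # ~1 terawatt of new generation capacity
cost_per_watt = (2.0, 3.0)     # assumed USD per watt, low and high

low_usd = capacity_watts * cost_per_watt[0]    # 2e12, i.e. $2 trillion
high_usd = capacity_watts * cost_per_watt[1]   # 3e12, i.e. $3 trillion
print(f"Estimated cost: ${low_usd / 1e12:.0f}-{high_usd / 1e12:.0f} trillion")
```

At that assumed unit cost, a terawatt of capacity lands squarely in the two-to-three-trillion-dollar range the host cites.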
Thank you. First, my opportunity to use a little Bahasa
Melayu: Selamat pagi.
Terima kasih atas
jemputan.
("Good morning. Thank you for the invitation.")
There, I've used what I have.
But it's a privilege to be invited and to be able to talk here.
So thank you for that.
I think the first thing, to create context for
your very pressing question about energy, is to convey the perception that modern large language models
are an artifact of the current state of artificial intelligence science, and they may not be on the
trajectory toward a long-term, more stable use of intelligence in that way.
The good news, when people talk about the deflating AI bubble, is that most
of the improvements of the last two years are about how to do more with less.
So it's very important for the audience that you have, and whose
interest I understand,
to first always question whether the current state of affairs is on a sustainable trajectory.
As an AI scientist, a basic AI scientist, the answer for me is no.
One of the examples that we discussed that I would like them to hear is that when IBM built
the program Deep Blue,
the leading part of it
was built by somebody from our group
in Alberta machine intelligence.
IBM,
when Deep Blue
beat Garry Kasparov,
said the future of AI
is more hardware and faster,
deeper search in
game trees, and if you want to succeed,
buy more IBM hardware.
That proved to be wrong.
It was knowledge of the game
that actually helped
computer programs.
I use that instance,
and I want to try to convey it,
to say that
it's currently true,
and some of my
scientific friends would say
that the current methods
of compiling
information into large language models are the first thing we think of and perhaps the least
efficient way of gathering knowledge. Just think about how school children learn. They don't
need that kind of power to be able to learn to read, for example. So I think that's the first
thing to keep in mind. In the short term, some of the issues that you've raised, and that I know
have been spoken about around the world, have more to do with how that trajectory gives access
to people, to help benefit society. And I think that's the first thing to consider:
that trajectory of the present AI use of massive volumes of power. I personally, and many of my
colleagues around the world, think that's a short-term observation of doing things in
the most naive way possible. I'm still in the camp that believes the data intensity that you alluded
to, or the marginal data intensity, is not likely to intersect with the energy intensity.
AI is really about algorithms, data, and cloud.
All these will require, I think, significantly more marginal data, I mean, energy intensity.
I don't foresee that happening, at least in the next 10 to 20 years, for most parts of the world, call it the developing economies,
to be able to intersect with the kind of data intensity that you're alluding to.
The most important thing, I think, of the aspects you bring up related to data
is that we've seen these spiraling cycles in the science of artificial intelligence.
In the 80s, we believed that we could capture expert knowledge,
that is, knowledge of real experts, by working with an expert
to be able to articulate the rules that govern their
reasoning. So that's when we built those systems. DENDRAL was a computer program guided by a
Nobel Prize chemist to reason about organic chemistry. MYCIN was guided by a computer
science PhD and a clinician, actually from where I sit now in Edmonton, to do diagnosis of
bacterial infections. But the cost of gathering that knowledge was
incredibly high
because first you identify
an expert, and then you try to get them
to spend the
man-years of time
it takes to codify the rules that they claim,
and can be verified, to be their knowledge.
That's really the seed of the blossoming of machine learning:
to say,
can we automatically
begin to gather
and learn information
and short-circuit that energy
investment? That's how
most of the 80s and early 90s played out.
Now we have quite good,
quite simple,
but quite good machine learning mechanisms,
which are the basis of building large language models.
And what we're finding out is
they're also very expensive.
In fact,
we can quantify their costs very easily,
because we take rudimentary algorithms of transformers
and we repeat them
trillions of times,
in a very, very straightforward and very naive way,
to build the knowledge required to answer
the questions we pose to those language and audio and video models.
And so I want you to think about that as another cycle.
Now, at the core of what you said that's most important is about data.
And I think what people are starting to learn is that
we're talking about data sovereignty.
So some of the biggest lawsuits in the world are between jurisdictions and large technology companies
against their unguided, unregulated use of copyrighted material.
But that copyrighted material, by current laws, is owned by the people who created it.
And so one of the things we're observing is that data sovereignty, you controlling your data, is an essential aspect of changing the paradigm of only the big technology companies and the jurisdictions they serve are the ones that will win.
We see this.
It's no mistake that the Chinese announcement of DeepSeek
was announced as constructed at one-hundredth the cost of ChatGPT, for example.
Everybody has observed the media attention in which Sam Altman's company is now caught up.
That's just an instance of what's starting to change: people are starting to figure out that we
can do better, we can use the energy more efficiently, to be able to achieve the same
results. But I think that has to be coupled with what you said about data, which is essential.
The data sovereignty of Indonesia with respect to health should be exploited by your health
system, not bought or siphoned off by some other company who wants to sell you back
their smart health systems, if that gives you some sense of it.
Just to pick up on this notion of data sovereignty.
It relates to who's supposed to be regulating;
it relates to who's going to be shepherding or guiding
this trajectory, right?
Now, I want to just bring up an observation:
that the regulators, even in developed economies,
much less developing economies and underdeveloped economies,
they don't seem to have the kind of comprehension
about what needs to be regulated.
Right?
Always.
So we can talk about data sovereignty all we want,
but if the people that are supposed to be within the regulatory framework
do not have the necessary comprehension
of what needs to be regulated. I call this, oftentimes, anachronistic, numb regulatory oversight.
You can see it in the United States when people in the Senate, they don't know what questions to ask.
When they're sitting in front of a 28-year-old who's come up with a new LLM, and I think that's going to get even more exacerbated in developing economies, much less underdeveloped economies.
How do we deal with this, not just in the context of protecting data sovereignty, but protecting other elements of AI so that it can be shepherded a little bit more judiciously going forward?
I think there are a couple of instances of ways to do that.
So, for example, in all of the work that I do with colleagues in medicine,
the clinical researchers, we see that data in Canada is very tightly, perhaps too tightly, guarded.
But the good news is that the data is not for sale and cannot be taken away for other systems,
so that we learn about how that data can be used by AI
systems to improve health.
We have talked for 20 years about so-called learning health systems,
where you continuously learn from the data you have.
That gets the attention and helps
address the anachronism you talk about, by saying:
if you're going to invest in health,
you first have to ensure you have access to the information you use,
to show the value that emerges from the applications in health.
And I think that's the start of it.
So, among areas that are controlled by jurisdictions,
in most jurisdictions: health, law and judicial processing,
and sometimes logistics and manufacturing rank high.
When you start to control access to your own data,
you can only inform those decision makers, as you said, that are sitting in some political committee,
talking to the 28-year-old; you can only inform them when they can measure the impact in terms of dollars and cents,
in health systems and judicial systems, for example.
So I think that is what I take as the premise.
Now, remember, part of that is scientific work
that goes in that direction.
And so my faith, as someone supporting open science and open AI, is that as long as we can focus on
jurisdictional value, that will produce the argument against the anachronistic treatment of
technologies in general, I hope.
We see signs of that in Canada, where the jurisdiction of control of health is not federal
but provincial, state by state if you like. And you start to see some provinces that, because they have
better use and control of their data, have better health care than others. And so now you start
to see the jurisdictions sharing data across jurisdictions. So I think
that's an instance of one of the answers to your question.
And it's not a simple answer.
And it's a very complex question, of course.
I want to take this to a more philosophical level.
Okay.
Some of the pre-existing platforms in the United States, particularly,
they've chosen to be closed-source and for-profit.
They've pivoted from being open-source
to being closed-source, from being not-for-profit to for-profit.
Yes.
A for-profit ideology subjugates you to short-termism, right?
Yes.
And you contrast that with DeepSeek in China, which so far has been pretty resolute
in wanting to be open-source and not-for-profit.
We can argue about the orders of magnitude in terms of how much it cost them.
But I do believe that, philosophically, a platform like DeepSeek, which insists on continuing as open-source and not-for-profit, will resonate a lot better and a lot more with people in the Global South, who still earn a lot less than $13,000 per capita per year.
Is that the right kind of philosophical observation going forward?
I think it's the right one, and the subtlety of it is that it's right for a couple of different reasons.
You can contrast that with some of the discussions that come out of some of the leaders of the big tech oligarchs in the U.S.,
and a little bit in Europe, not as much.
And this is conflating
safety,
AI safety,
with AI control.
So you will have seen one of my Canadian colleagues,
Yoshua Bengio,
claim that he doesn't like the idea of open source,
because that means
all the people who want to use that open source
to do damage, with dual use,
have free access to it.
That's always been
the case. It's never been the case on our planet, in our society, that open knowledge could not be
misused by those who choose to misuse it. So it's a deeper philosophical question that you understand
very well, and that is: how do you encourage a society to choose to do good? So if you couple that
with the open source and open use of those technologies,
then I think you have information that moves forward.
But if you take the position that close source means you have control
and nobody can do bad things,
then you get Google saying, well, do no harm,
but we've dropped that slogan because we want to make money
from our interest in smart missiles,
for example.
So I think that part is very important.
But it's a hard road.
You have to educate.
You already said that education.
What does it mean for someone
to be able to get access
to open weight models and other things?
They still need to have an enablement
to a threshold that allows them to create value.
It just seems paradoxical to me,
because open-sourcing anything dovetails into the ability, the sheer ability, to democratize
information, ideas, and hopefully public goods.
And the United States being the second largest democracy in the world seems to be on a path
of de-democratizing from an AI standpoint.
Just by way of close sourcing it.
It's just by way of just sort of like concentrating profit in a certain corner.
Whereas paradoxically, at the same time, China, which is an autocracy, seems to be resolute on open sourcing the AI platform, which I think will dovetail more to the democratization of information, ideas, and public goods.
Yes.
And I think this will play out.
at least in the midterm,
almost for sure in the long term,
a lot better to the global south,
which I think deserve the democratization of public goods
more so than ever, more than others.
I absolutely align with you on that.
It's kind of ironic because, who knows,
as Canadians, we think of ourselves
as the mouse living next to the elephant.
And what we
see in the so-called
hypocrisy of China
is the exercise
of a system
that has to
keep 1.4
billion people happy
and lift them up, right?
That's a necessary thing.
To do so means being a little
autocratic.
And I think that you're exactly
right.
They're in it for the long term.
And we're starting to see
the edges fray on the opposite model in the United States, because it's being disassembled.
People are disassembling things.
The current predictions by economists who understand this much better than I are saying that
the valuations of some of the technology companies, the bubble will burst or at least
deflate.
Will it be the kind of bubble burst that we saw in the year 2000 with respect to the
internet companies? Talk about that. You know, the hyping, or call it the overhyping, of the
AI platforms or companies in the U.S. and elsewhere. Oh, yes. It seems to be pulling back in recent months
on the back of the recognition that there are structural energy availability issues.
Absolutely. And then there is this notion that perhaps it's not going to be the ultimate
democratizer. What do you think could pan out
from this pulling back from the hype with respect to some of these AI companies?
One of the things I see is that there is definitely a divide in the world of modern AI scientists
between those working in service of the for-profit companies.
I had mentioned previously, in some papers and some exchanges,
that investment in AI has been surpassed by private funds rather than public funds.
But the public scientists have never had more passion for exactly what you point out:
more passion for keeping things open and reducing our dependence on companies that keep things secret,
because those will ultimately fail.
We already see the edges fraying, as we pointed out.
And so I think the good news is that there is new fuel for the passion of public scientists
to say we can do better, and we'll do better, even with fewer resources
than those scientists inside of the big companies.
I see that every day in every conversation I have with the public scientists I know and trust.
Deep Seek was a Sputnik moment.
I call that non-linearity.
It is highly inspirational to people in the global South.
How soon do you think we might see somebody in Papua coming up with an LLM that's going to be orders of magnitude cheaper, orders of magnitude more
efficient, orders of magnitude better in whatever shape or form?
Do you foresee further Sputnik nonlinear moments happening in the global South as we might have
witnessed?
I see them.
I think they will happen with greater and greater frequency and regularity.
If people like you lead in the thought
change that has to happen, it's going to come down to the creation of talent and the
attraction of talent that can do this. Because great talent always does better than great
hardware, if you think about it that way. The innovation from clever people
working on grand challenges, without the resources of Google or Meta or
Amazon or Microsoft or Apple, has repeatedly in history produced all kinds of innovation that nobody ever thought would happen, because it never had to happen inside of those other companies.
So I think that that will happen with much more frequency.
You already know about the company Openmind that Rich Sutton and I and a few others have started; the premise is the long-term free use
of artificial intelligence, and the scientific output of it, for everyone to use.
And, like what you said, the trajectory of that is growing.
I'm passionate about it because I hope it's sustaining itself and can sustain itself.
But it's exactly on point with what you said about whether the Global South can participate.
Absolutely.
Randy, I'm a little troubled by what I call the paradox of abundance.
We have seen in the last few decades
an abundance of economic capital.
But the abundance of economic capital has become so concentrated,
it has not become adequately democratized.
Then we have recently seen
the abundance of labor on the back of robotics.
The cost of labor has come down asymptotically to zero,
but the abundance of labor has been more concentrated as opposed to democratized.
And now we're sort of staring at this increasing abundance of intelligence,
the cost of which is theoretically asymptotic to zero,
but it doesn't become adequately democratized.
Are you convinced that with respect to AI, it's going to be defying
whatever we might have been witnessing with respect to the paradox of abundance on economic capital
and a paradox of abundance on labor as it relates to intelligence.
I think we have to redouble our efforts to guide and to channel modern AI
to be able to avoid exactly what you said.
Always
coming to mind
are two examples
that I've used
in talks around the world
related to that
impact on labor
and
the consequences
inside that labor.
One was, people may
remember,
when Gutenberg first delivered the
printing press
in Germany
in 16...
whatever it was, 1612.
1450.
Even earlier. The first recorded criticisms of it were from monks,
where the lead monks were critical that the printing press would make the scribes,
the monks that rewrote the Bible for distribution by hand,
into tired, lazy monks.
That never happened.
Fast forward 400 years: Charles Babbage was asked about the consequences, because he was
counted a computer scientist as much as an economist.
He was asked about the impact of the Jacquard loom.
And the figures I have are that, before the Jacquard loom, there were something like 1,400 people
weaving cloth during the Industrial Revolution in Great Britain.
All of them were men, because those looms required a very strong upper body to throw the shuttle back and forth.
What Jacquard did is he enabled women to use looms, because they were so much more efficient.
The population of women weavers went from 200 to 3,600, producing better cloth with better designs.
Now, I take those in history
as having had criticism leveled at them for the same reason
that we worry about the management of labor, robotics, and AI.
But inside of those things are always new opportunities.
I haven't seen any AI method, however simple or complex,
that doesn't create novel situations to be adapted and applied by humans to make society better.
It's exactly to your point about who's in control and how the democratization comes.
And I've seen jurisdictions invest.
I've noted some in our conversations.
Two of the Japanese ten-year projects were both public and private,
that combination for the first time in Japanese history:
the Fifth Generation Computer Systems project and the Real World Computing project,
aimed at the basic democratization of computing and information.
So if we can create those out of the jurisdictional culture,
then I think there's a chance that we can address the challenges that you note
because they're serious.
We have to convince decision makers
how serious they are, and show them the value that accrues from doing things differently,
which can avoid the negative consequences.
This is the one you said: the localization of power, authority, and intelligence for the few,
just like we see with capital.
What would be the simplest advice that you have, in the context of these technological oligarchs,
who can mobilize tremendous amounts of economic capital, technological capital, and political capital?
What would be your advice for civil societies around the world?
Especially those in developing economies.
Because I think that's true.
Go ahead.
Well, the easiest way to do it is to top-down it, right?
But given that the top doesn't have a good comprehension,
the long term, I think, will have to involve the bottom-up building of this culture of thinking,
or at least a rethinking of the pre-existing paradigm, which may have some fallacies, or a fallacy.
You are on a mission of basically educating the world about what is positive and what is negative
about AI.
But you're basically just confronting these behemoths, or technological oligarchs,
that have those necessary resources: call it technological capital, economic capital, political capital, even geopolitical capital.
What would be your advice to civil society?
It's a good way to phrase it, because one of the things you'll note is that none of those successful for-profit companies have achieved what they have without a talent pool, without intellectual capital.
And the intellectual capital wanes and disperses as the companies get successful, because they can buy what they need now.
They don't need to culture it.
One of the strengths of Southeast Asia that I see is the opportunity to relatively economically efficiently develop intellectual capital.
That's why Rich and I started Openmind Global in Singapore, for example.
It's a tiny piece of the puzzle, but I think, to address that issue,
it has to be done bottom-up: by education first,
more conversations like this Endgame one,
more emphasis on small-capital novelty in applying AI to jurisdictionally
motivated problems, and more sharing of the public knowledge that we
get from public scientists. We can easily make the network of public colleagues operate more efficiently.
I think you need to find those things and build a sustainable talent pool. That's also a contrast
in some of the big Chinese companies compared with some of the big American companies, the same as the
cultural aspects you referred to earlier: the talent pool for them is the most significant
part of their capital.
So I think that's
part of the bottom-up thing.
Let me
inject a little bit more realism
into the narrative.
Most members of the global
South don't have the necessary
cognitive capacity
to differentiate between good and bad,
to differentiate between opinions
and facts.
Those are just
obvious
vulnerabilities to the pushes that are being undertaken by these technological
oligarchs.
Now, you've mentioned Singapore.
Singapore, I think, has been a little bit different in the sense that it's been more prudent
in concocting the necessary regulatory framework as it relates to AI.
What do you think could be the lessons that the other nine countries in Southeast Asia could
learn from what Singapore has been doing?
I think part of them
comes from lessons that
Singapore has learned, but
they're not so different from
lessons that are being
applied in Europe.
For example, think about European
AI regulation: the General Data
Protection Regulation,
some years ago now, and the
EU AI Act.
The focus of the
regulation is
to avoid doing
damage to human societal structures.
And I think that's partly what I see from what emerges in Singapore.
And it's an easier laboratory to work in, because it's so small and well-contained,
compared to Europe and North America, for example.
So I think you're right.
You need something concrete like that.
Another practical thing is that I've had experience hiring
and raising people. When I was the CTO of a German software company,
we decided to build development offices in Kuala Lumpur,
where I lived and worked for a while.
And we could get very good young computer scientists,
but not if we didn't sell them on the idea of doing something grand,
even though they would make less money than if they moved to Singapore.
Singapore, at that point, at least for the Malaysian students, was like California is to Canadians.
But I think it's the second pillar of that bottom up thing.
One is have a focus on the talent pool.
Second, figure out what keeps that talent pool passionate and sustainable in the jurisdiction they're in.
And every jurisdiction has the capacity for talent to be nurtured.
But the oligarchs of the technology companies exploit what you just said about people not being able to determine the difference between fact and fiction.
That's exploitable.
And the only antidote is to build a more intellectually capable talent pool, I think.
Well, I'm in a camp that believes that the short term requires activism,
the midterm requires legislation, and the long term requires education.
And I'm also in the camp that believes that the reason why perhaps Europe has been a little bit more stringent with respect to data privacy, with respect to AI, ostensibly is because they've not been a significant beneficiary of software development and AI.
whereas the United States has been a massive beneficiary of software development and AI.
So I would probably speculate, not hypothesize yet, that at some point in the future, midterm or long term,
the degree to which developing economies will embrace AI will largely depend on whether or not they're going to be beneficiaries economically of the advancement of AI, right?
And at the moment, I'm still thinking that we in the developing world, I think, are still quite exposed to elitization of the economic order.
The Internet has been an elitizer as opposed to an equalizer.
It democratized information.
It did not democratize ideas.
It did not help with respect to the democratization of public goods.
I think AI is at risk of further elitizing the pre-existing
economic order, which was already elitized. Now, at the rate that it's going to get so
elitized, I think it's going to become evident that developing economies are not going to be
massive economic beneficiaries of AI. That, I think, will be the pivot, when we take the necessary
measures or steps to pull away from this frenzy of an AI journey. Is that the right
way to think? It's actually a wonderful idea. I like your activism, legislation, and, what was the noun you
used for the third? Which one? Frenzy? You had activism... Activism for short term,
legislation for midterm, education for long term. Right. And so I think that's part of the
solution, the formula you need to do that.
Let me give you an example that's very practical.
Most of the judicial mechanisms in every jurisdiction, whether autocracy or democracy or
anything along those spectrums,
most of the judicial systems we have, are slow, they're lethargic, they're inefficient,
and they're aging.
But there's a lot of investment
in artificial intelligence applied to law.
But what I found in my experience
of studying AI in law for more than a decade
is that judges require different tools
than lawyers.
And in your trajectory,
the judges are the people you enable to produce
faster, more fair
decisions, not the lawyers, who are in the business of ensuring that the decisions go according
to how well they're paid. And it's a nice illustration because it's
in your face, it's real. There's no avoiding it, and no rationalizing about one
being the same as the other, or a matter of degree. And so that's an example, based on what you've said,
where if you get the activism to invest in judicial systems,
the legislation to instantiate the views,
and the education to know what the positive outcomes are from that,
that is what I see as one instance of the thread along your trajectory
that says we can demonstrate value to the society
that encourages further investment in that,
rather than invest in law firms being able to control the Supreme Court in any jurisdiction, for example.
Randy, we've only got a couple of minutes left.
I'm going to ask you the last question.
Okay.
How soon are we going to experience AGI?
And to the extent it occurs, how do you think that will have an impact on Southeast Asia or the developing economies?
I think we may never see what the big technology company leaders say they're trying to build with AGI.
We may have it already in some sense, or we'll never have it.
That's a contradiction, I think, that you have to have the capacity to embrace,
because the reality is that if you want the machinery of computers to be in service of society,
they have to understand society better.
And they don't, right?
Robots that interact with the world already understand the world better than
LLMs do.
LLMs understand text or images or audio or video, and that's it, right?
So the reality is how you use the tools to accelerate the creation of value,
whether you call it artificial general intelligence or another term. One colleague calls it B.S.
I mean, if you're from the prairies of Canada and a farming community like me, B.S. means something different,
but he calls it broad, shallow intelligence.
And that's what LLMs are.
And that doesn't mean they're not useful.
So on the question of whether you're being sold on the promise, like Sam Altman is selling,
on the idea that they're on the edge of artificial general intelligence:
they don't even know where the edge is.
But they do know what may rescue their forlorn hopes: to create revenue that might someday in the future surpass their investment in selling the world on AI.
I think that that's going to change.
Maybe the intellectual leadership from people like you will help us sustain what we need to do,
and defeat the capturing of that idea.
Amen.
We'll plow on.
Thank you so much.
Thank you so much, Randy, for gracing our show.
And you can take the rest of the day off now because it's midnight in Edmonton.
Thank you.
All right.
And good luck with the rest of your meeting today.
A big hand to Randy Goebel.
Thank you.
