Tech Won't Save Us - Why Countries Must Fight For Digital Sovereignty w/ Cecilia Rikap
Episode Date: August 21, 2025

Paris Marx is joined by Cecilia Rikap to discuss how countries' dependence on US tech companies is harming them and why they need to get serious about digital sovereignty. Cecilia Rikap is Associate Professor in Economics at University College London and Head of Research at the Institute for Innovation and Public Purpose.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Cecilia (and Paris!) worked on a report offering a roadmap to reclaiming digital sovereignty.
The UK Labour Party forced the chair of the Competition and Markets Authority to step down earlier this year to promote its pro-growth agenda.
A Microsoft executive told a French Senate committee that it could not guarantee data sovereignty if the US government requested information stored on its servers in Europe.
Alexandre de Moraes is the Brazilian judge pushing back against big tech.
The US is sanctioning judges from the ICC (as well as Alexandre de Moraes).

Support the show
Transcript
But what needs to happen is that governments need to also be not just the promoters, but also
by themselves with public institutions, the producers of the technologies that we want.
And they need to create the spaces for the population to discuss about what technologies
to produce, what technologies shouldn't be produced, and why.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week my guest is Cecilia Rikap.
Cecilia is an associate professor in economics at University College London and head of research at the Institute for Innovation and Public Purpose.
Cecilia was in our Data Vampire series, but, you know, has never been on the show properly, so I figured it was a good time to have a conversation with her.
And just to be totally clear, this interview initially appeared on
another podcast called Politics Theory Other, which I was guest hosting earlier this month. And it had
so much relevance to what we talk about on this podcast. And it was a conversation that I wanted to
have anyway, that I was happy that Alex, the regular host of Politics Theory Other, allowed me to
share it with all of you as well. And I would highly recommend checking out his podcast too,
because he has some fantastic conversations there on politics, geopolitics, and so many of the
things that I think you would be interested in as, you know, a listener of this show as well
and just assuming that we probably share similar-ish politics in common, I would imagine.
But anyway, in this conversation, we talk about digital sovereignty.
We talk about the costs and the threats that come with our dependence on U.S. tech
companies for the most part, but that can also be extended to Chinese tech companies in
particular as they become a more powerful and global force.
Cecilia and I start by talking about the U.K. because it does seem like such an emblem
example of how not to pursue tech policy in this moment, also because Cecilia is based
in the UK, so it's of relevance. But then we do broaden that conversation out to look more broadly
at what digital sovereignty would actually mean, some proposals that have been made in Europe
along the lines of developing a more sovereign technological stack and what Cecilia actually
thinks of that. And then to look even more at opportunities in the global south, in places like
Latin America, where projects like this might be able to take off, might be able to make a real
difference. So at a moment where we see these threats from the Trump administration and from Silicon
Valley, I think it's important to have conversations like this to think about the opportunities
that exist to do something different, not just to create a whole set of tech companies in different
parts of the world that are doing just as bad things as the U.S. tech companies are, but to rethink
this model at a fundamental level. And that's something that Cecilia and I would very much like to see.
of course, co-authors on a white paper released at the end of last year, where we were making
an argument for a vision of digital sovereignty that's much different than the capitalist
tech industry that we have today. So if you do enjoy this conversation, make sure to leave a
five-star review on your podcast platform of choice. You can share the show on social media or with
any friends or colleagues who you think would learn from it. And if you do want to support the work that
goes into making Tech Won't Save Us every single week so we can keep having these critical in-depth
conversations on important issues like digital sovereignty, you can join
supporters like Teddy from San Francisco, Miles from Toronto, Ilya from Amsterdam, Allie from
New York City, and Greg from Brisbane in Australia by going to patreon.com slash tech won't save us
where you can become a supporter as well. Thanks so much and enjoy this week's conversation.
Cecilia, welcome to this special episode of Politics Theory Other. Very excited to speak with you.
Happy to be here. How are you, Paris? Yeah, I'm well, thank you. I'm very excited because I'm
getting to talk to you today about a topic that we're both very interested in, which is
digital sovereignty, our dependence on, you know, these U.S. tech companies, which has been building
for a very long time. And I figured a good way to start this, because a lot of Politics Theory
Other's audience will be in the UK, is actually to look at what the UK government has been doing
in terms of its tech policy and AI policy. Because, you know, since Keir Starmer's Labour
government took power, we have been seeing it trample over the rights of local communities in order
to approve data center projects, to try to push AI onto many different parts
of society, and of course, to try to gut copyright regulation so that AI companies do not need
to pay for all of this data that they are using to train their models. So I wonder, you know,
kind of big picture, what you make of the approach that the Labour Party has taken to tech policy
under Keir Starmer. I would say that actually it is quite similar to what we saw
before with Rishi Sunak, which is quite sad because Sunak belonged to the Tories. And with Labour,
one would expect something different in particular for a topic like tech that directly affects
workers, directly affects the livelihoods of people and how they live in communities. And I would say
that basically there are two very broad ways of thinking about tech policy today. On the one hand
side, we have the governments in the world that understand that this technology is crucial,
not only from an economics perspective, but from an ecological, social, ethical, political
perspective. And therefore, they are trying to expand what we would describe as digital
sovereignty. And sometimes they fail, but they at least try it. And then we have the
governments that are completely blinded by the promise of economic growth, by the promise of
AI as the magic wand that will bring this economic growth, that again is supposed to
eventually bring happiness to everyone. And if I need to give you an example of what's the
most paradigmatic case of the latter, this is Starmer. And this is very sad because the UK,
for instance, had been at the forefront in terms of, with its competition and markets authority,
for instance, at identifying and investigating the power of these companies. Ofcom was one of the
first institutions researching the cloud and the power of big tech in the cloud and calling the
world's attention to it. And then we have this government that in 2024, just a day
after Microsoft's UK CEO was appointed chair of one of the UK government's industrial committees,
the Industrial Strategy Advisory Council,
so directly sitting Microsoft inside the decision-making spaces of this government.
Just one day after that, Starmer goes out and promises to make sure
that the CMA, which is the Competition and Markets Authority,
and which, as I was saying, in principle should be autonomous or somehow independent
from the ruling administration,
focuses more on promoting economic growth.
And just after that, he signed this five-year agreement with Microsoft that Satya Nadella traveled
to the UK to celebrate with Microsoft UK.
Satya Nadella is Microsoft's CEO, and he was basically celebrating the fact that the UK government
will now be forever and ever trapped within Microsoft,
because it's not just a five-year agreement for Microsoft digital technologies,
but as we know, once you start using these technologies,
especially on the cloud, it becomes almost impossible to leave.
And this also came with backlash at the UK CMA.
What happened was that the CMA's chairman resigned after this.
The official news was that he resigned after these announcements,
after the government's intervention, basically.
And the person that replaced the former chairman is someone who was
country manager of Amazon UK. So really just making things worse and worse. And they also
announced the dismissal of 100 employees from the CMA, which clearly sends the signal
that they're weakening the UK's watchdog, basically. And if we fast forward to today, a very recent
news is this wide-ranging agreement with OpenAI. And this has already raised concerns, actually,
because it is unclear to what extent the UK ministries will be sharing public data with a U.S. company
and with a private company, regardless of the nationality, ultimately.
So really, I think that the UK has become an example of all the things that shouldn't be done at this moment in time.
No, I think you've laid that out really well, right?
And when you talk about OpenAI and that agreement and the questions about data,
you know, one of the things that comes to mind as well is the agreement through which Palantir was seeking to get access to
NHS data, right? Kind of like the holy grail of data and allowing this very exploitative,
you know, very harmful American company to get access to it. And it feels like as I'm listening
to you describe all of these things, I'm thinking, as you're saying, not just that this is
the example of what you would not want to be doing, but it also makes me think back to, you know,
a few years ago when the UK was talking about becoming "Global Britain" and striking out on
its own because it was leaving the European Union and was now going to, like, chart its own
destiny. And it feels like, as you're saying, you know, we saw it with the Conservatives as
well, but seeing what Keir Starmer and the Labour Party are doing is, you know, not charting
its own course on the global stage or carving out its independence from different blocs,
but kind of switching from this kind of integration with the European Union to a much greater
dependence on the United States, a kind of subservience to the Trump administration, not
wanting to anger them to try to get a good trade deal, but also just doing whatever these
U.S. tech companies want and kind of handing over whatever sovereignty, whatever, anything that
the U.K. has that is valuable to these tech companies to try to attract a bit of investment or to work
with them in some kind of a way. Absolutely. Even the UK's energy, the UK's water. So really, what
else is left to give to the U.S.? And what is really interesting is that this UK administration is
completely subordinating itself to what I would describe as the U.S. power bloc, because it's not just
Trump. It's not just the government. It's the government in coalition, in alliance with U.S.
Big Tech. And although, you know, after Musk left the U.S. administration, some people were thinking,
oh, wow, this is the end of this alliance. And not at all. That's a soap opera between two very
particular characters. Where we really need to look is, for instance, at what has now
become clear with the U.S. AI Action Plan, which is basically the U.S. government saying
that, one, they are going to fast-forward the development of AI, accelerate it regardless of the
environment, regardless of the implications it has on the population and whatsoever. They are also going
to develop more and more applications. And the aim, ultimately, is for all the allied nations
to adopt the U.S. technology. And they are very vocal about
the fact that they are going to develop all the AI, and then the other countries are going
to adopt the US AI.
So they're not even leaving space, let's say, for US companies to develop part of the
technology inside the UK.
So where's the economic growth?
So the question really to Starmer is, let's say that we buy your goal and we say we need to focus
only on economic growth, and that magically will make the world better, regardless of its
implications on the planet, inequalities and so on. Are you really going to get this growth?
Because it seems that they are not even going to get it. Yeah, it seems more like a hope that the
implementation of generative AI tools, whether it is in government or in businesses across the
UK, just as we're seeing in many other countries, you know, we're seeing a lot of talk of
similar things over here in Canada, where I am, it's that those implementations are going to
bring the economic growth, the productivity benefits, the efficiency, rather than
you know, actually developing it and selling the technology, which is what the U.S. companies are
doing and kind of reaping the benefits that way. And then harvesting all the profits, because
actually what the rest of the world will be doing is paying more and more intellectual rents
to the U.S. companies. Even Daron Acemoglu, who won last year's Nobel Prize in economics, so not
precisely a leftist person, or, yeah, someone that belongs to the mainstream to some
extent. Acemoglu has a paper that is called the macroeconomics of AI, and he has
forecasted that over the next 10 years, the productivity growth at the world level associated
with the development and adoption of AI will be 0.53% for the whole world, on average,
which means that if most of that is concentrated in the US, we may actually see decreases in
productivity in the rest of the world. And again, this is all driven
by this imperative of growth that is not only
not equipped for the challenges of our time, but is also very
wrongly measured. Because when we think about productivity or
competitiveness, and this is also relevant, for instance, to the Draghi Report and all
the discussion about this in Europe, this obsession with growth and competitiveness
misses a key point, which is the fact that you can create a lot of value
in your territory. But if then that value is siphoned off through
trade, sometimes through the trade of digital services that don't even pay taxes in your country, it doesn't matter how
much value you create. The result will be that you are perceived as less productive just because
productivity metrics measure value capture, not value creation. So really, I think that this narrative
of economic growth will not deliver. And while it doesn't deliver, it will just make the life
of the majority worse. I think that's such a key point. And I think you outlined it so well,
right? And it brings to mind to me, you know, 10 years ago when there were all of these discussions
about how AI and automation and new forms of robotics were going to transform everything in the 2010s
and destroy all of these jobs. And I remember Aaron Benanav wrote a book where he actually dug
into these claims and found that even though there were, you know, all of these claims that
productivity was going to increase so much in the 2010s, we actually saw less productivity
growth in that decade than in previous decades, you know, a time when there were supposedly
all of these new technologies changing everything. But that is one of the costs of
dependence that you laid out, right? You know, the fact that all of this value can actually
be siphoned off by these companies, the Microsofts, the open AIs that say the UK or other
countries are dependent on. What are some of the other costs that you see of this continued
dependence on US tech companies? In a nutshell, I would start by saying that I see what I
describe as twin extractivism. So on the one hand side, we see all this data appropriation,
and we've been, like, analyzing it; several scholars have been speaking of digital or data colonialism,
but this is only one part of what can be described as the appropriation of intangibles, or intangible
extractivism. The other part is knowledge, because more and more we see governments around the
world, dedicating public money, taxpayers' money, to invest in science and technology, in particular
to invest in computer science and technology, in AI models, digital technologies more broadly.
And all this ends up integrating ecosystems that are controlled by U.S. large tech companies or secondarily by Chinese large tech companies, which means that a lot of effort put into creating new knowledge that could potentially later help these countries to develop their own solutions, tech in their own ways, actually is completely constrained by not only the appropriation that is exercised by these companies, but also by their research agendas.
Because something that is clear is that as much as governments around the world invest in AI and public science in related fields, they all end up answering the questions that big tech companies are imposing in the world in a very soft power way by controlling the AI research network.
So on the one hand side, we have this intangible extractivism.
This is one part of the twin extractivism.
And the other part is that by producing a lot of knowledge that actually ends up being captured,
and used for the profit of a few companies from ultimately just two countries, the rest of
the world, in particular, the historical peripheries of the world, feel more obliged to keep
reinforcing nature extractivism as a way to get the dollars that they need to balance their
economies. And today we see this with an obvious chapter with these data centers. The way that
now this is being implemented is by opening up the doors of the countries, the energy, the
water, destroying local populations, just to install data centers with the promise, again,
that this will generate economic growth. But actually, most of what's needed to install a
data center is imported. In Latin America, for instance, they produce
less than 20% of all the inputs necessary for installing a data center. At the same time, once a data
center has been built, Microsoft says this itself, they only hire up to 20 people per building. So really,
it doesn't generate employment. It's not trickling down in the economy
and generating some sort of change in the productive matrix of these countries.
And at the same time, it's extracting more and more water and energy. And even if some of these
companies will come and say we're using renewable energy, actually this is a big lie. They may
use some renewable energy, but renewable energy is not flowing constantly. So they always have a
reserve of fossil fuel. So they are still consuming fossil fuel. And sometimes they do these
offsets that we know about. So actually, they are buying some renewable energy somewhere in the
world, but that is not actually what they are using to power their data centers. And sometimes
when they reduce the consumption of electricity, it is because they are increasing the consumption of water.
And think of what it means to have football-stadium-sized pieces of infrastructure that are always
with the lights on, what it means for the people that live around that area.
So really, really disruptive.
And this ends up being there for a new form of nature extractivism that at the same time
enables more data and knowledge extractivism.
Because even if you have the data center in your country, it's like having a military base
in your country.
It's like a U.S. military base in your country.
It's not empowering governments to get access to the data, to access the technology really
and understand how technologies work to do something themselves.
So I think that this is part of the other things that are happening,
part of the implications of this completely curtailed, if you want, digital sovereignty around the world.
And there is something else, which refers to the specifics of AI, and in particular generative AI,
because AI is the only technology that gets better the more it is used.
It's a cybernetic machine. This is why we say that it's learning, why
we say machine learning. It's a specific type of learning. It doesn't learn in the same way as humans.
And although we call it artificial intelligence, it's still advanced statistics. It's not that they're
going to replace our brains. But anyway, these are different machines. They are not the mechanic
machines that we are used to think of when we think about how a machine works. And because they
are different, they are the only means of production that can actually appreciate instead of depreciate
when you use it. This means that when we are prompting the algorithm, we are all contributing
to make it better. So we are producing what should be digital commons and end up being captured
by a few companies. So it's not the same as appropriating, let's say, a piece of land, because
the same piece of land cannot be used by many at the same time without reducing our overall
satisfaction, whereas we could all be using this digital commons that we are producing. And actually,
knowledge has this unique feature that when we share it, it expands. So it's not just that we can all share it without undermining our satisfaction. We are actually going to make it better. So really, the effect ends up being that what is now becoming a mainstream method of invention, of creation, is being used more and more for everything. You were talking before about the impact it has on copyright and the arts. So it's used for the arts. It's used for new research in science and technology.
It's been used even to code.
It's used for everyday lives.
So in the end of the day, this method of thinking is becoming mainstream, is sidelining other ways of thinking, and again, it's just controlled by a few.
So the consequences, for me, cannot be compared to, for instance, let's say, how big pharma companies also capture joint research with universities and biotech startups, funded by the public sector, and then profit from our health.
Yeah. And I think what you're laying out there illustrates how significant these dependencies are and the costs of, you know, continuing to basically outsource digital technology to these often American, but as you say, occasionally and increasingly Chinese companies as well, especially depending on where in the world we're talking about. And so I wanted to ask you because you mentioned earlier this kind of alliance between the Trump administration and the U.S. tech companies that we're seeing very clearly and very out in the open in a way that we didn't always
see previously between the tech industry and the U.S. government, right? And so I want to ask you,
are these issues unique to this moment, with an alliance between, you know, the Trump administration
specifically and Silicon Valley? Or are these more longstanding issues that there's now just kind
of a light being shone on because of the political moment that we're in? But these were always
problems that we should have been dealing with before. It's mostly the latter, but there is also a bit
of the former in the sense that, at least in part of my research, when I looked at the relationship
between the U.S. government and U.S. big tech, you can clearly see that, for instance, people
associated directly with the C-suite of Amazon, Microsoft, and Google have been sitting in advisory boards
for many, many years now, deciding the AI policy that then the U.S. government, in particular,
the U.S. Department of Defense, were going to implement. What we saw before, just to talk, for instance,
about the Biden administration, inside the Biden administration, there were clashes of power.
States are not internally homogeneous, and we know this. So we had the U.S. Federal Trade Commission
with Lina Khan and her team, who were trying, to some extent, to tame the power of big tech.
But at the same time, we had the U.S. military, the U.S. Department of Defense,
that had become completely dependent on AI technologies, digital technologies,
purchased as services from U.S. big tech companies.
And this has resulted in having a sort of internal alliance inside the Biden administration
pushing the government to be nicer with big tech.
And an example of how this turned out and why I'm saying that this is longstanding
is that the U.S. executive order to regulate AI, the one that came out with Biden,
was exactly what the U.S. large tech companies wanted, which
was basically to regulate the uses of AI. Of course, AI cannot be used for pornography, for other
illegal uses to make a bomb and whatsoever. But it doesn't say a word about how AI has been produced
and the responsibilities of those producing AI. So it basically diverts the conversation
towards something that doesn't affect large companies and that, of course, they were not
expecting to make a business out of selling AI to teach people how to make a bomb.
So basically what that executive order did was to show the world, if you could read between
the lines, that the US government was already completely aligned with these companies.
Even Harris said during the release of this executive order something like US leads in AI
and it is US American companies that are the ones that will continue leading.
So it emphasized the role of the companies.
It completely neglected, for instance, the role of US universities
in the development of AI and, of course, the role of organizations from around the world developing
AI. So I think that this really comes from before, not only from Biden's administration,
although during Biden's administration we also had the release of a very important report
from an AI advisory group that was chaired by Eric Schmidt, who used to be the chairman
and CEO of Google. And he invited to join this council, this advisory group,
Andy Jassy, who back then was still not the CEO of Amazon.
He was only the CEO of AWS, and others from Microsoft and Google.
And basically, their main recommendations were, one,
accelerate AI adoption wide and large in the public sector and in the country and whatsoever.
Two, in particular, the government needs to invest a lot in AI for the military,
for the defense sector, and beyond, because three, the real threat
is China. So US big companies have been using the rise of Chinese big tech and the fact that the
Chinese government has been claiming that they want to be the AI leaders by 2030 to divert
the conversation away from regulating them towards taking care of China. And this in the Trump
administration basically became like the best possible scenario for them because for Trump,
China is a threat. China has always been an enemy, Russia as well. And just three
days after the inauguration day, which everyone may remember for the U.S. big tech CEOs sitting in the
front row, Trump terminated, ruled out, this executive order. And now what we
see is this AI action plan that is further contributing to deregulating everything, including
what the U.S. Federal Trade Commission had done during Biden's administration. They are going to revise
the investigations to make sure that
nothing that stands is jeopardizing, again, quote-unquote, AI innovation. So this idea,
this crafted narrative by big tech, that regulation stifles innovation and that this applies
particularly to AI. And the AI action plan is also basically emphasizing again and again
that the way to expand AI development and adoption is through the cloud and it's with cloud
providers and agreements with cloud providers that public researchers, that the rest of the world
and so on, and even, of course, also the U.S. Department of Defense will get all the economic
benefits from AI. So really, what we see today is perhaps a government that is much more
internally aligned with big tech, but it's not that parts of previous administrations
were not completely aligned and pushing those in the role of president of the United States
to act ultimately in favor of big tech. Yeah, the integration has gotten closer. And of course,
at a moment where the political stances of the tech industry and their orientation toward
government power, toward regulation, has become very oppositional, right?
You know, they were close to the Obama administration, but that was a very different time
in the development of the tech industry and the types of things that they were looking
for, right, than what we see right now.
And so, you know, you're talking about these dependencies, these issues, and in particular,
because of the alliance between Donald Trump and the tech industry, we're seeing
these growing movements and discussions about digital sovereignty, right? So what does it actually
mean? What does this term refer to? And when we hear governments or different organizations
talking about it, what are they trying to achieve through something like digital sovereignty?
So exactly as you were saying, actually, what happened in this closer alliance is also that for
US big tech, it became problematic to see in particular the European Union advancing regulations
against them. So it was a sort of defensive move, but when we say defensive move, this doesn't
mean that their power was challenged. It was never really challenged. It was like, this is the game now,
and now we move and do this or that, just to continue reinforcing our power. So that's basically
what has happened. And you can see it in the fact that their market capitalization continues
growing and growing. But at the same time, we've seen, as you were saying, different movements
for expanding digital sovereignty.
And this is a complex term, because for some, digital sovereignty basically means what China has done.
So you close your borders, you put all the money out there into the private sector,
you put money inside universities, you develop the technology internally,
and yes, then you have your own national champions.
And if you look at what has happened in China, these national champions operate in the same way
as US big tech. They are now, in the same way, in alliance with the Chinese government.
They share data with the Chinese government for surveillance.
And they also capture knowledge, appropriate knowledge, from Chinese universities.
They capture data from the whole Chinese population and also from countries linked to China
through the digital Silk Road.
So really, these forms of twin extractivism that I was describing before also exist inside China.
So this is far from the discussions that are taking place in part of Europe and also
Latin America when we speak of digital sovereignty. This is not a way to emphasize techno-nationalisms.
And this shouldn't be the aim, actually, in this particular moment in time, with more geopolitical
clashes, I think that more and more, let's say, non-aligned countries should work together,
should work together to address the ecological crisis, should work together to address the social
crisis, which are both intertwined parts of the same crisis, which is ultimately a global crisis
of capitalism. So what do we mean by expanding digital sovereignty from, let's say, a more
progressive way of thinking? And I would first start by saying that digital sovereignty is not
only about accessing a technology or using a technology, but it's more about deciding,
having the power to decide what technology is produced, how it is produced, who gets access
to that technology. And this, of course, requires understanding how the technology works in the
first place, because otherwise, you're not really deciding. And digital sovereignty, therefore,
is also about having the physical infrastructure to produce the technology. When we say we need to be
able to decide what technology is produced, if you don't have the means of production,
you will not be able to decide what technology you are producing. So physical infrastructure,
data centers, submarine cables, satellites, sharing public-led infrastructure. This is needed
also for digital sovereignty. But this is still not enough. And I will
give you one example. Europe, they're trying to identify how to expand digital sovereignty.
However, they've been mostly trapped within an almost techno-nationalist way of thinking,
and they do not fully understand the ecosystemic way in which digital technologies are produced,
how everything is produced within an interconnected network. And basically, for instance,
although they are installing public sector data centers,
within, for instance, the AI Continent proposal,
they have supercomputers, gigafactories, and so on.
Let's say we have AI startups or universities developing models there.
They train the models.
It's public infrastructure, amazing.
And then they want these models to be used.
Who's going to use them?
If they are companies, they will want to sell these models.
Where are they going to sell the models?
The only supermarket for digital technologies today is the cloud.
And the cloud is controlled by Amazon, Microsoft, and Google.
So the cloud is not only infrastructure as a service, which of course is part of the story.
Unless we understand that this infrastructure is the digital equivalent of the pipes for electricity or water today,
we will never be able to expand digital sovereignty. At the same time,
just installing public sector data centers, or data centers in a country,
will not expand digital sovereignty.
There is, of course, a difference between a public data center
where the public sector with democratic institutions
could decide what technology is trained there
and not just say, come and train whatever.
It doesn't matter the energy we use.
It doesn't matter if we have more efficient ways,
less harmful ways of dealing with this problem.
So, of course, it's different having a public data center
from having a private data center in your territory.
As I was saying before, a private data center in your territory
doesn't expand digital sovereignty.
A public data center in your territory can, but it's not enough by itself.
We need an ecosystemic solution.
What some of us are trying to help governments understand and policymakers widely understand
is that they need to complement this with really what can be described as a sort of true public
marketplace where states can procure technology, where open source models can be developed,
where open source technology can be developed, and also proprietary technologies.
We're still living in capitalism, and we need to understand the role of markets,
how to place markets within capitalism, where it makes sense to have them, where we can
have competition, and where there are concentrated forms of power, bottlenecks.
Digital sovereignty is also about understanding these networks, understanding market structures,
and being bold enough to decide that the state needs to be in control, that it needs to be providing
the technology or investing in the technology.
And when I say the state in control, it doesn't mean that the state should do everything,
because this can also lead to problems.
A way to solve this is, of course, with international collaboration,
which also reduces the ecological burden of the solutions.
But at the same time, this may not be enough.
And this is why I think that what we need is more imagination at the level of the institutions
that can be governing these public-led forms of digital sovereignty in a democratic way.
But all this is very mixed up at the moment in the sense that I think we have a window of opportunity,
but it's a very narrow one.
So there are a lot of things in there, and I want to dig into some of them a little bit deeper, right?
So we can get a better understanding of what is going on.
And I want to come to the Global South, the picture outside of Europe in a little bit.
But first, I do want to start with Europe, right?
Because, you know, we've seen proposals for, say, a Eurostack, you know, language that has been
adopted by, you know, the European Commission.
Obviously, there has been talk about, you know, building up European
capacities, both at the kind of European Union level, but also within different national
governments within Europe. How would you compare what you are talking about, you know,
when you say what digital sovereignty looks like from a progressive angle, versus, you know,
the approach that a lot of these governments and the European Union seem to be taking
in really focusing on digital sovereignty as an opportunity for economic growth, or how that
is going to be achieved? I guess if you're thinking about both of those different visions of what
digital sovereignty is or could be, what do you see as the differences between them or as the
contrast there? I think that a key contrast is the hope, the trust they place in European
companies. They think that SAP or Siemens will at some point contribute to European citizens' digital
sovereignty. And perhaps because they haven't done the research about it, I really don't know,
but all my research on these companies points to showing that SAP has become the largest company
in market capitalization in Europe because it has decided that the best business is to sell
software as a service on Amazon, Microsoft and Google Cloud.
These companies are in complete alliance with U.S. big tech companies.
They will not from one moment to the other and probably never decide that they are just
terminating their main sources of profit to do something else.
And I think that ultimately the problem is the siloed way of thinking.
And if you think about the EU and what they've done so far, they have so many acts.
You have the DMA, the DSA, the AI Act.
Now they are discussing the Cloud.
They will discuss the Space Act as well.
All this should be discussed together.
This doesn't mean that you shouldn't have a specific chapter,
but you need to understand how one relates to the other.
Then you have all the antitrust cases and investigations at the same time,
completely detached from each other, because they look at them from a case-by-case standpoint,
but also completely separated from all the industrial policy and aims to develop some form of digital sovereignty in Europe.
So I think that the lack of an ecosystemic view leads them to mistakes, because again, they think in terms of the physical infrastructure,
but they don't see how that will reinforce the status quo and therefore backfire.
They also put a lot of hope in European companies without understanding that not only these large ones but also the AI startups, at the end of the day, have decided that they want to be satellites of U.S. companies.
And this is their DNA.
Companies want to make profits.
So if you want something different that can eventually be, to some extent, combined with some profits for companies in a more competitive way of thinking, you cannot tell the companies to take care of what should actually be a state prerogative.
So I think that the main difference is who is leading the initiative.
It's not just about putting the money.
It's not only about investing.
It's about who will lead in order to get digital sovereignty.
In the Eurostack vision, it's still the quote-unquote economy, the quote-unquote market.
So companies ultimately are going to be the ones that are supposed to bring digital sovereignty to Europe.
Whereas in the framing that I was presenting, and that is in the report that we co-authored with other people,
what we emphasize is that we need a public-led way of building digital sovereignty.
And by public, we don't mean a specific government.
We mean, again, creating the institutions that can empower citizens, communities,
that are also respecting planetary boundaries.
So I think that's the key difference when I compare both approaches.
No, I think that lays it out really, really well for us.
And, you know, you're talking there about investment, but also about competition authorities,
antitrust cases.
I wonder, you know, when we're thinking about this dependence that we
have on U.S. technology, and to a lesser degree Chinese technology, and we're thinking about how we
reduce that dependence and kind of start to claw back some of that power to build out kind of
digital sovereignty or whatever we want to call it. I wonder what you see as the role for
regulation, for going after competition cases, for governments trying to reign in the power
of these companies, and then on the other side of it, what you see from the investment piece,
right? How to actually build up those capacities? I guess what uses do each of
those different approaches hold for, you know, clawing back that power?
To start with, I would say that they need to work together and that we really need to think
of a sort of ecosystemic, network way of thinking of regulation, basically. The
industry operates as a network, a network whose top points are in the hands of a few
large U.S. companies that control, beyond ownership, all the rest, and that also have a
panopticon view, not only of all the other actors of the digital sector, but also ultimately
of the global economy, because when everyone migrates to their clouds, basically, they can
oversee who's consuming more of one service or the other. So they have this choke point and panopticon power.
And therefore, there is definitely a role to be played by regulation, but there is also a need
to create an alternative. And the two need to be working together. So I think that for
competition authorities, one of the key things that needs to change is the paradigm.
Today, competition authorities focus on specific markets, artificially cutting markets and
trying to identify market boundaries, which in a globalized, networked world, especially
in the case of tech, makes no sense, because everything is interconnected.
And then you don't understand why there is a dependence at the end of the system,
because it comes from layers way below that you were not seeing.
So really, I think that they need to change completely their paradigm.
And instead of having a theory of harm that only looks at what's going on inside one market
and that only cares about consumers, they need to look at the network as a whole.
And when we look at networks, the forms of exercising power are different.
They are the ones that I was explaining before:
the capacity to exercise a choke point, the capacity to have a panopticon view,
and the capacity to direct the whole evolution and the type of technology that is developed,
the type of solutions that are being offered, the type of science that is produced inside
the network, which is basically what large tech companies do.
So I think that for competition authorities, the challenge is really huge because they have a
role to play.
They need to require more information, more information about these quote-unquote strategic alliances:
what is shared when two large companies sign a strategic agreement, what is
shared when large tech companies invest massively in hundreds or even thousands of startup
companies, and even what the agreements say when a company migrates to the cloud,
even if it's not called strategic. So really, they need to require much more information
to understand how the rules of this network are decided. We know that they are being decided
by Amazon, Microsoft, and Google. But if they want to identify network power, they need to understand
what rules they are deciding, how they are ruling.
So I think that there is a role to be played by them.
This may require in some cases to do breakups.
For instance, an obvious one, of course, is the case of Meta.
Because Meta, I mean, these were three separate apps,
and they are now interoperating such that Meta can use data from one to feed the other
and so on and so forth.
But in other cases, because there are what in economics we would describe as
natural monopolies, situations where efficiencies of different forms lead to a situation where
it's better to have only one provider than more than one, as in the case of search engines,
for instance, the solution will not come from competition. And competition authorities also need
to understand that although they are called competition authorities, in the U.S., antitrust is
the term that is more used. And I think that there is a reason for that, which is that
competition may not always be the solution. Sometimes, especially in the case
of natural monopolies, and especially when these natural monopolies occupy such key positions
for the global economy, the solution needs to be public. So there should be state-owned companies,
international commons, for the provision of, for instance, a search engine, and so on and so
forth. And competition and market authorities should be the ones promoting these solutions.
At the same time, if we don't build alternatives, it's very unlikely that the population will
simply shift away from Google, for instance. It will not happen. And this is something that needs to
be understood also when thinking of investments and an industrial policy. You can put all the money
to create a competitor. But if nobody wants to go and purchase from that competitor, nothing will
change. And this is also why in our proposal, and this also appears in others like the
Eurostack, there is a lot of emphasis on the role of states as demand. So procurement, state
procurement. And it's not just about making it feasible. It's also about the implications for
digital sovereignty of having governments migrating, like the UK government, to a big tech cloud.
Think about what it means that the everyday operations of the state, hospitals, education, even the military
sector, everything will be operating inside a US large tech company's cloud, which means
that we'll be operating with technologies that the state can use but does not control, does not
understand, does not have real access to, and that it needs to purchase as a
service constantly. So it's not only about the economic burden, which, as you know,
grows and grows, but it's also about the implications it has for learning,
for understanding how things are done. Eventually, one day, the public sector will have no clue
about how to govern, because everything, all the indicators, all the metrics, what data is harvested
or not, how it's processed, how services are provided, and so on and so forth, will be decided
by U.S. large tech companies.
It's a very concerning position, and I feel like, you know, European governments, but I'm sure
governments even beyond were really, really shown the power that these companies can have
when, for example, you know, the U.S. government put sanctions on Karim Khan, chief prosecutor of the
International Criminal Court, and his Microsoft account was cut off as a result of that.
And then, of course, just, you know, much more recently, in front of a French committee, like, you know, in the French parliament,
Microsoft representative basically said that they couldn't guarantee that the U.S. couldn't demand data that was stored on their servers in France, right?
So, you know, these questions are ones that governments are increasingly demanding, and the answers that they're receiving are, you know, not very reassuring ones.
Absolutely. And this is why you start seeing more of these avenues of hope. This is, for instance, when you see some provinces in Germany and also in Denmark saying, we are going to decouple from big tech, in particular from Microsoft, actually. But this is only a starting point. What they are going to do is stop using large tech companies' technologies and become users of
open source technologies that are out there.
But what needs to happen is that governments need to also be,
not just the promoters, but also by themselves with public institutions,
the producers of the technologies that we want.
And they need to create the spaces for the population to discuss what
technologies to produce, what technologies shouldn't be produced, and why.
So I think that it's not just about how to defend ourselves,
but also about how to basically start a completely different game,
because this is the other problem.
More and more, and you see it in the U.S. AI action plan very explicitly, the U.S. government,
large U.S. companies, and even startups that are their satellites, promote this idea of a race,
an AI race, a technology race.
We cannot stop.
As if AI was developing by itself, as if nobody was pushing the accelerator pedal.
Come on, these are the CEOs of these companies deciding that they are going to do more and more and
invest more and more and more. So basically, instead of trying to win a race that is also impossible
to win, because of the specific features of the technology, it's impossible to win a race
against these companies that have been robbing the world's data, the world's knowledge, and
now the world's electricity. It's impossible to outpace them, basically. Their algorithms
are the best ones not because they have the most talented people, though they may have some of the
most talented people inside the company, but because of all the
data that was used to train them and because of all the knowledge that they have captured
from the public sector, from universities, and so on around the world. So really, there are two options
here. So the easiest one, even if it's very hard and we are not seeing it yet, is to start
an alternative and to say we're going to build an alternative with all the features that we were
discussing. The other one is to directly socialize big tech. I don't think that the socialization
of big tech is going to happen any time soon. It should happen because they have captured what
should be, as I was saying before, digital commons that we are all producing collectively.
But because I don't see that scenario happening, because of the role of the U.S., the role of China,
and so on and so forth, while I do see some will from different governments in Europe and in Latin America
to do something different, I think that they really need to work together
on advancing and creating this alternative ecosystem
that shouldn't be focused on winning the AI race.
It should be focused on solving people's problems,
addressing the ecological crisis.
And if digital technologies can help, then do it.
But if there are other ways of solving the problem,
then we should prioritize the other ways,
which will be probably cheaper and also less harmful to the planet.
Cecilia, you also brought up Latin America there. I
know that you have been doing a lot of work down there specifically, you know, beyond looking
at what's happening in Europe. What do we see from Latin American governments and groups in
Latin America in talking about digital sovereignty and trying to promote these initiatives?
Because obviously, you know, the kind of boot of the United States has been felt down there
for much longer than, say, the Europeans, you know, have much more recently been scared about
what the United States is doing to them and kind of how the United States is kind of flexing its muscle
toward Europe, but Latin America is much more familiar with that version of the United
States. So what do you see down there with what is going on with movements for digital
sovereignty? It's very interesting to see that within the progressive governments, there has
been an aim to expand digital sovereignty. And the clearest case is Brazil, of course, which
is the only country in the world that has forbidden a social media platform from operating in its territory,
with clear reasons.
It was not the Chinese way of doing it.
It's just I want to protect my companies.
I don't want the U.S. to be part of it.
But actually, you could operate here before.
You are not playing by the law.
Therefore, you cannot operate in my country.
And this was a decision made by a judge, not by Lula, the president.
And this is important to say.
There are different ways of making policies or regulating in this case.
And the legal system has a role to play.
These companies do not play by the law,
and when they don't, there are things that should be done.
And therefore, Brazil did it.
And this is important to highlight.
It's also important to highlight that Brazil developed an AI plan.
It was called an AI plan, but it was mostly just an assemblage of policies
without a clear hierarchy.
This was one of the problems.
And because they were basically trying to please everyone inside a government that is also
a coalition government, what ended up happening is that the
AI plan included really progressive measures, like having public sector data centers for the
public sector, or a sort of cloud built by the public sector for the public sector in Brazil,
and an AI model developed again publicly, potentially a model that could have been
open source. But at the same time, the AI plan included fostering already existing
collaborations with some tech companies. And what basically happened when the AI
plan was released was that Microsoft, Amazon, and Google super accelerated their promises
of investment in the region, in Brazil, and pushed as much as they could, including lobbying
together with Brazil's state-owned companies in the telecom sector. These companies basically then
lobbied their own government to accept a solution that is supposed to be protecting Brazilian
citizens' data, but is actually just using large tech companies for the provision of
digital services and just encrypting the data with some technology produced by these Brazilian
state-owned companies. This is very problematic because it shows that if a country as large as Brazil
wants to do really something about digital sovereignty, then it will face all these challenges,
all these constraints, and it shows us why the collaboration needs to be international,
why governments need to work internationally and help each other. And I will just give you another
example, which is Chile. In Chile, they don't have state-owned companies. It's a highly
privatized country that has suffered neoliberalism more than others in the region. And Chile,
along with Brazil and Mexico, has been one of the three countries in the region
where the most data centers have been installed so far. And in particular, in the metropolitan
area of Chile, around the capital and the capital itself, Santiago,
the data centers that are already functioning
and the projects for installing new ones
were already anticipating a scenario similar
to what has happened, for instance, in Ireland,
where basically the power grid could not cope
with all the electricity demand of the data centers.
So the government, this progressive government,
again, decided that they were going to try to regulate
or to influence the industry in some way
and they did something that should be celebrated,
which is that they developed a mapping tool.
They built a map of part of Chilean territory,
and they considered for that map
almost 80 key socio-environmental variables.
So not only electricity and water, which are two variables,
but many, many other variables.
For instance, whether there are natural parks in that area,
whether communities live there, and so on and so forth.
And then putting all that together on a map,
they were able to identify the areas
where it was more or less harmful to install a data center.
At the same time, they matched that
with the industry's requirements,
where the industry wanted to install data centers.
And they identified the specific places
where, both from an industry perspective
and from a regulatory perspective,
it was not that bad to install data centers.
This tool exists and to some extent the government is going to try to convince the industry
to use it or to install the data centers where the tool recommends.
But at the same time, the government is not really going to enforce any type of policy
or regulation that comes out of the tool.
And something else happened, which was that Chile was the only country in the region
that had an environmental regulation that required companies installing data
centers to do a sort of check and send a report on how they were managing and dealing
with toxic substances, in particular with the diesel fuel that they have in their
reserves. Then Chile's environment ministry decided that they were going to
raise the minimum threshold of the concentration of toxic substances that a company needs
to have in order to be obliged to present this declaration.
And now that they have increased the threshold, data centers no longer have to comply with this sustainability requirement.
And when you zoom out and see it from a larger perspective, what you see is a government that on the one hand was trying to regulate the industry, but was saying, okay, we cannot prevent data centers from being installed, because the Boric
administration had already been singled out as an administration that was not defending workers, since it was not defending economic growth, and therefore regulating the industry
would be perceived as such. So on the one hand, you see a government that wanted to regulate
the industry that developed a tool that can be also extremely useful for governments around
the world, but that then, probably also because of the internal pushes from big tech
companies, changed course. I'm not saying that there was corruption there. What I'm saying
is that these companies are very efficient in selling the economic growth narrative
in saying, we are investing in your country, you will have economic growth, you will have
this and that, and this is what you will be able to show because you will be part of the digital
economy and thrive in the region, and so on. So because they sell all this narrative,
they convinced even Chile's environment ministry to change the environmental
regulation to favor them. So I think that the cases of Chile and Brazil are very
illustrative of the limitations of peripheral governments, even progressive governments,
and how much they are trapped within this imperative of growth, how much they are trapped
within these promises of employment, of technology, bringing in wealth to the people,
and how much this ends up hijacking every possibility of a meaningful way of developing an autonomous
industry, an industry that can really fulfill people's needs
and can democratize the decision-making over what technologies we will have and which ones we don't want to have.
There is, if you want, one space of hope there, which is Uruguay, a tiny country.
The new Uruguayan administration in its government platform has the promise to use Antel, the state-owned company,
as the cloud provider for the public administration, and to transform Antel, which is responsible
for having 95% of the households in Uruguay with internet connectivity,
so a company that has done great.
And during the neoliberal governments of the 1990s,
it was the population who decided that they didn't want to privatize Antel.
So the fact that it's still public is the result of citizens and their will.
To continue having this company as a public one,
now it may be the time for Antel to transition from a telecom company
to a digital company and provide services to the Uruguayan state,
and to become this example for the rest of the world, even for Europe, to be bold and try to do something similar.
Certainly one good example, you know, in a region where the tech companies, you know, like we see in so many other countries, have had a lot of influence in shaping policy to their benefits.
But as you're talking about, you know, there is a real opportunity here to do something different, to think about these things differently.
Cecilia, I really appreciate you taking the time to speak with me for this special episode of the podcast
that I'm guest hosting. Thanks so much for taking the time. I really appreciate it.
And it's great talking to you.
Cecilia Rikap is an associate professor in economics at University College London.
Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marx.
Production is by Kyla Hewson.
Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry.
You can join hundreds of other supporters by going to patreon.com slash Tech Won't Save Us and making a pledge of your own.
Thanks for listening and make sure to come back next week.
Thank you.