Tech Won't Save Us - How Cloud Giants Cement Their Power w/ Cecilia Rikap
Episode Date: January 2, 2025
Paris Marx is joined by Cecilia Rikap to discuss the ways Amazon, Microsoft, and Google gain power from companies becoming dependent on their cloud services and how generative AI exacerbates that problem. Cecilia Rikap is an Associate Professor in Economics at University College London. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham.
Also mentioned in this episode:
Paris and Cecilia were co-authors on the “Reclaiming digital sovereignty” white paper.
Cecilia wrote a report called “Dynamics of Corporate Governance Beyond Ownership in AI.”
Transcript
But if we really want to have this infrastructure operating as the water pipes, this needs to be provided as a public service, needs to be a public utility.
And for many countries in the world, I think that it's interesting to think of international solutions.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and happy new year.
We don't have a new episode this week.
We have another premium episode from our Data Vampire series that is being made public for all of you to listen to while I take a little break over the holidays.
This week's episode is with Cecilia Rikap, Associate Professor in Economics at University College London. Since recording this conversation, Cecilia and I have actually been co-authors on a white paper calling for governments to reclaim their digital sovereignty and laying out kind of like a policy program for how that might be pursued. I'll include the link in the show notes
if you want to check it out. But in this episode, we talked about a report that Cecilia authored
that looked at how companies exert their power through AI and through the cloud. And I thought
that there were a lot of really important insights in this conversation. Some of them were included
in the Data Vampires series, of course, but some of them were not. And so I figured this conversation
paired well with the one last week with Ali Alkhatib looking into the politics of AI more
generally. And this looks at the corporate power surrounding AI and how these companies like Amazon,
Microsoft, and Google are able to exert and enhance their power through something like the AI boom that we have seen over the past couple of years, but
also this larger push to have all these companies using cloud infrastructure and cloud computing
that is often owned and controlled by one of these three major tech giants.
So I think that this is a really insightful conversation.
I hope that you enjoy it before we get back to your regularly scheduled episodes next week.
But I think that this will be an insightful one
if you still want something to listen to
over the holidays.
So if you enjoy this episode,
make sure to leave a five-star review
on your podcast platform of choice.
You can also share the show on social media
or with any friends or colleagues
who you think would learn from it.
And if you do want to support the work
that goes into making Tech Won't Save Us
every single week
and to get access to future premium episodes like this one that is being made public for you,
you can join existing supporters. And I'm going to say a number of names here again to try to get
through our list and to make sure that people hear their names on the show. So supporters like Boone
in Massachusetts, Layla from Toronto, Grady from Laurel, Maryland, Tara from Portland, Maine, Grungrilla from Germany, Aaron from Ontario, Aaron from San Diego, California, John in Denver, Colorado, Tom from Arlington, Virginia,
another Tom in Winnipeg this time, Corey in San Diego, Vanessa in Seattle, Armando in Sweden,
Ben in Oakland, California, Paul in Lismore, Australia, Danny in East Lansing, and Anderson
in Norfolk, Virginia.
By going to patreon.com slash tech won't save us where you can become a supporter as well.
Thanks so much and enjoy this week's conversation with Cecilia Rikap.
So what role does cloud infrastructure play in the power wielded by major tech companies like Amazon, Microsoft, and Google?
So basically the cloud, if you think about how AI and how every technology is produced, it always needs to be processed somewhere. Although we're constantly speaking about intangible assets and the role of data, the role of knowledge, it's not that they're just immaterial. Actually, they all need devices, computers. And if you want to run large pieces of software, you cannot do it on your computer. So basically what happens is that Amazon, Microsoft, and Google have been developing this narrative that the most efficient way to do it, the cheapest and most flexible way of doing it, is on their clouds. Their clouds basically are a lot of computers. And instead of just buying their own infrastructure, smaller companies and also bigger companies, and also states and universities and you name it, can rent the use of it when they need it.
In principle, this sounds very attractive.
If you think about the transformations of big corporations since the 70s onwards, it
has been a big part of it about trying to reduce
the tangible capital, trying to outsource that, for instance, to countries in Asia. Of course,
the case of China is a prominent one in this history. But also, if you think of the US,
its relationship with Mexico and how it has evolved, it has a lot to do with how big companies
have outsourced part of their manufacturing capacity to Mexican
companies and also offshore part of their own factories to Mexico.
So in this process, basically, of reducing your tangible assets, it becomes very attractive
instead of having to have your own data center on your premises to outsource that to, in
this case, Google, Microsoft, or Amazon.
And they saw this.
They saw this attractiveness of being more flexible.
But what the customers didn't see was that outsourcing your digital infrastructure is not like outsourcing a call center or outsourcing your manufacturing capacity, because it is
very much entrenched within tangible assets themselves, which are also the main
asset of these big companies and are also crucial for running a university, a government
and so on.
And they are also crucial for startups themselves.
So basically what you have is a system in which these companies and Amazon, Microsoft
and Google, because they concentrate together 66% of the global market in this cloud business space, they end up being everywhere.
And the more organizations migrate to the cloud, the more dependent they become. And if you think,
for instance, of startup companies, in particular AI startups, but this goes beyond the AI startups,
they develop their infrastructure on the cloud, which means not only renting space for storing or using servers for processing their software,
but also means buying as a service small pieces of software.
On the cloud, they can also get data as a service. So if you don't have a data set to train your model, you can also rent a data set as a service. And also, of course, they offer platforms as a service, which means that
basically a company will end up writing all the algorithms, all the code on the cloud.
So let's say you choose Amazon Web Services. So you will be doing all your architecture,
your software architecture on the cloud. You will be writing code, but in between, there will be kind of moments when you call a software as a service
that is offered by either directly Amazon Web Services or a third-party company that also
offers its services on the cloud. And I will come back to that in a minute. So basically,
you keep on writing the code and one would think, okay, I can access the technology. That's perfect
and cool. And no, what you do is use a technology that is sold to you as a black box. So all your
software, all your architecture becomes dependent on the cloud, on these different pieces of the cloud, and leaving becomes so expensive and so time-consuming that it is not only impossible for small companies, but also for larger ones. For the small companies, the startups, especially the startups from the tech sector,
what happens is, what are they producing?
In the end, they end up producing a software as a service.
They end up producing something that will be sold on Amazon Web Services
or Microsoft Azure or Google Cloud.
So they depend on these companies and pay constantly to these companies every time
they are using one of their services. And then when they have a product, they offer the product
on these companies' clouds, which means that they are also sharing the profits with these companies.
So it's basically a process where they become more and more dependent as we speak.
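To make that lock-in concrete, here is a minimal sketch, assuming Python and AWS's boto3 library (the services named are real, but the application is hypothetical), of how ordinary application code ends up threaded with provider-specific calls:

```python
# Hypothetical example of cloud lock-in: a small feature built entirely
# out of AWS-specific services. Migrating to another provider means
# rewriting every call below, not just moving the data.
import boto3

s3 = boto3.client("s3")                  # storage rented as a service
comprehend = boto3.client("comprehend")  # an ML model sold as a black box

def analyze_review(bucket: str, key: str) -> str:
    # Fetch a customer review from S3 (AWS-specific API).
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    # Ask Amazon Comprehend for the sentiment. We never see the model,
    # only its output: the "black box" sold as a service.
    result = comprehend.detect_sentiment(Text=body, LanguageCode="en")
    return result["Sentiment"]
```

Each such call is small on its own, but an application accumulates hundreds of them, which is the switching cost described above.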
That's so fascinating. And you explained it so well. Thank you.
What is the appeal for one of these companies to use, say, a data center or cloud infrastructure
over building, you know, computing infrastructure of their own?
So we need to split the question. For the smaller companies, it is very expensive to do it. It is very expensive. And think of a startup: the startup doesn't even know if it's going to succeed or not. And actually part of this process of mushrooming startups in the tech sector
is being explained by the existence of the cloud that has been around for over a decade. Actually,
I think like already more than 15 years now. So basically for them, it enables this idea, this narrative of you create a startup in the garage.
Yeah, you can create a startup in your garage because you have the cloud, because you don't need to invest in all the tangible part of your business.
You just need a laptop and an internet connection.
And then, of course, ideas, brains that are capable of developing this idea. But then if you look at big companies, and I'm particularly interested in showing
how the largest companies in the world
are also becoming dependent on the cloud,
because if these companies also become
technologically subordinated to Amazon,
Microsoft, and Google,
then it is clearly showing us
that this is a big threat for society at large.
And all the more so as other organizations like states start looking at so-called digital public infrastructure, which more often than not means depending on big tech clouds. So basically, a big company can afford a data center. But in principle, moving to the cloud will be cheaper.
What my research shows actually
is that for
companies like Coca-Cola, Nestle, Ikea, also automobile industry companies like Toyota,
big pharma companies like Novartis, Pfizer, and so on, and I'm naming just some because they are the ones that I talked to about this, and there are many others as well. In the beginning, it seems cheaper. But
what happens is that the more you use, the
more you end up paying. And as these companies start relying more on processing data, start using
AI algorithms that are partly developed on the cloud or completely sold as a service on the cloud
to process that data, the more dependent they become on big tech and the more they need to pay
in their cloud bill.
So in the end, it's not that cheap.
To try to compensate for that,
they operate in different clouds,
but because there is no way to interoperate between them,
they need to build sort of a new layer of software on top that connects the different clouds.
So more money is spent on that and more time.
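As a rough illustration of that extra layer, here is a hedged sketch, with hypothetical class names, of the adapter code a multi-cloud company ends up writing just to do one storage operation against two providers that don't interoperate:

```python
# Hypothetical multi-cloud abstraction layer: one common interface,
# one adapter per provider. The class names are illustrative.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class AmazonS3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3
        self.client = boto3.client("s3")
        self.bucket = bucket
    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()

class GoogleCloudStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage
        self.bucket = storage.Client().bucket(bucket)
    def put(self, key: str, data: bytes) -> None:
        self.bucket.blob(key).upload_from_string(data)
    def get(self, key: str) -> bytes:
        return self.bucket.blob(key).download_as_bytes()
```

And every additional service used (queues, databases, ML APIs) needs its own adapter, which is where the extra money and time go.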
And at the same time, while all this is happening,
what they realize is that these are not just, again, technologies that are serving on the sidelines of the business.
A company like Coca-Cola or a company like Nike is relying more and more on data that is processed with AI to develop their own brands. For brand building, you are using AI. AI is also
becoming a method of invention in every discipline, which means that let's say you want to do drug
development like big pharma companies, you're going to use more AI and more data. So this always means expanding the business of big tech. It means also expanding the bill that all
these other leading corporations pay to big tech companies. So why?
Why aren't they doing this internally? And to this point, what I came to conclude is that they are
doing it to escape from uncertainty. I call this the certainty of uncertainty. They don't know when,
but they do know that things will get worse, that we will have ecological disasters coming, that we may have more pandemics, that geopolitics in the world is hotter than ever, or I don't know, not hotter than ever, certainly, but since the Cold War it certainly hasn't been as hot. So all this considered together, it's okay, they end up paying more. But I think that in the end, they prefer to depend on big tech, because if they depend on their cloud, they are not investing all this themselves. And they can
always change and update the technologies faster, which for companies that are also operating as
intellectual monopolies in their own fields, again, with brand building, with constant developments of
encapsulated scientific knowledge, and so on and so forth, getting access to the method for innovation, the method that is becoming the primary one, the one that in a way is being imposed as the mainstream way to keep developing intangible assets, ends up being a sort of best alternative available for these companies. In the end, what enables them and why it's the best alternative
available for them is that it enables them to keep on extracting value from those that participate
in their global value chains, that are their franchisees, that participate in their platforms,
and so on and so forth. And the first time that I identified this was precisely looking at how
companies like Uber or regional e-commerce platforms like MercadoLibre in Latin America, even Salesforce.
So also companies in the tech sector that are large but still rely on the cloud of Amazon, Microsoft, and/or Google, and why they are doing it, because they could do it themselves. They do have a lot of people in tech that are doing other things or parts of the software.
So in the end, I think that it's also an effect of the times we live in.
And we live in these times also because of Amazon, Microsoft, and Google,
because they are sitting on the advisory boards of the U.S. military, for instance, and they were part of a board that advised the U.S. Secretary of Defense on the use of AI. And they were saying constantly, the U.S. state needs to invest more in AI because if it doesn't do it,
then China will outpace us. And this is the real threat. So basically, they are also putting more
fuel on this fire. So, you know, you were talking about AI generally there. Over the past year and a half or so, we have been in this moment where generative AI specifically is the next big thing in the tech sector. How does the proliferation of generative AI and the growth of that as an industry further cement the power and position of these cloud giants, Amazon, Microsoft, and Google? So one thing that happened as soon as ChatGPT was released, and one thing that is different
from the AI that we were living with before, is that it was massively adopted.
Everyone started using it.
And it's very easy to use it for different purposes.
I always make the comparison between ChatGPT and let's say Amazon's search engine or Amazon's
algorithms for setting prices in its marketplace. These are also extremely advanced AI algorithms,
but one algorithm that is developed to set prices cannot then do a translation or create you a nice
picture out of a text and so on and so forth. The uses are much more limited.
And because generative AI enables you in a very easy way to use AI for your own daily life, and also because ChatGPT has a free version, that sped up the adoption process, not only of ChatGPT itself. And this is interesting because, even if all the so-called good things about generative AI are questioned, and it's not as accurate as other AI models, for instance, precisely because of this plasticity, this capacity to be doing many things at the same time, which comes from different training methods and different algorithms in particular. Anyway, because of all the fuss, what has happened is that a lot of companies that were still more cautious and reluctant to adopt AI and to move to the cloud ended up adopting.
And when I say adopt AI, again, it can maybe be other types of AI, not necessarily generative AI.
But just because everyone is talking about AI these days, as a big company, you don't want to be left out. I was interviewing someone from Coca-Cola who was telling me,
well, really, we're still figuring it out. But anyway, in the meantime, we use a generative AI
for a marketing campaign where consumers could directly prompt a generative AI to create a
poster of Coca-Cola and whatever you wanted. If you wanted smiley kittens, or if you wanted a
devil, you could just ask for that with Coca-Cola on the back. So basically what has happened is a
much faster adoption, not only of generative AI, but widely of the cloud and widely of all the
different forms of AI. And because behind all this, we have the power of Amazon, Microsoft,
and Google, not only because of the cloud, but also because they have been investing as venture capitalists in pretty much every single AI startup in the world.
They keep on expanding not only their profits. And venture capital is just one piece of how big tech companies control AI. They have also been capturing academia since at least 2012. They have been, on the one hand, brain draining people
from academia to directly work with them. But also they have other means to make all the scholars,
the leading scholars in the field, be developing AI that then will keep on expanding the profits
of big tech. They do it with a thing that is called a sort of double affiliation. So I said before that I'm associate professor at UCL, but I also keep an associate researcher
position in Argentina at CONICET.
So I have two affiliations.
So the same can be done with a big tech. So you can still work at NYU like Yann LeCun from Meta, but at the same time, you work as chief AI scientist at Meta.
And some people would think, oh, cool. They are like sharing information between the university and the company.
But that doesn't work like that.
Yann LeCun has a non-disclosure agreement. He cannot say what he is discussing inside Meta with the people he's working with at the university. However, he can, of course, and he of course does this, steer the research of the people working with him at NYU. Other scholars are doing the same thing in other parts of the world, and they end up making the universities virtually change their research agenda, or adjust their research agenda towards what is of interest to big tech.
I did some research and I found that there are at least a hundred institutions like
universities, public research organizations that have at least one scholar with this double
affiliation. And on top of that, even if there is no one with a double affiliation, big tech companies are either pouring money into the universities or directly proposing to work together collaboratively, because this is what we do as scientists, as scholars.
We work with others.
We work with others from other universities.
But once you start seeing that all the most prominent figures in a discipline are working for DeepMind, which is part of Google, or are working for Microsoft, or are working for OpenAI, which was funded by Microsoft in 2019, then it's very easy to establish a collaboration with them. And again, you establish a collaboration for a piece of knowledge, but then other pieces will be kept secret. And once all the puzzle is put together, it's just, again, these few companies that profit the most.
You mentioned the venture capital aspect of this in part of what you were saying there.
I was wondering if you could expand on that and explain how Amazon, Microsoft, and Google,
these major tech companies, use not just the infrastructure that they have, but also the
financial resources that they have to invest in a lot of these companies to develop these
relationships with them without doing a full acquisition that
might get more antitrust scrutiny, and specifically how that has played out in the case of Microsoft
and OpenAI.
Absolutely.
And we can actually explain it with that example and then give some more general figures.
So in the case of Microsoft and OpenAI, in 2019 Microsoft decided to invest $1 billion in OpenAI. Of course,
Microsoft, with all the profits it makes annually, has a lot of liquidity and can decide to invest
in many different things. But Big Tech in particular have decided to pour a lot of money
into the startup world as corporate venture capitalists. So Microsoft did this with OpenAI, but the main motive is not financial.
It's not that they want to make more money
just like by investing in the company,
but the way to make more money,
it's actually about how OpenAI is developing technology,
what technology OpenAI was working on
and how Microsoft can steer that development.
I was interviewing someone once who told me,
well, it's a double dip, because you actually get a direct line to the CEO and you can steer
the technology. And by doing it, you can eventually get access to that technology
earlier. So you can adopt it earlier as Microsoft did with OpenAI, but you eventually may also be
able to make extra profits if the company you invested in is successful
and starts developing a business.
And it's actually like that.
It's steering the technology
that smaller companies are developing
and also getting privileged access to what they are doing.
So Microsoft was mainly responsible for OpenAI repurposing its efforts. And instead of launching GPT-4 right away, they postponed that and decided to work on an application of the model that they already had, which ended up being ChatGPT.
And of course, Microsoft got advance access to this technology. Someone from the company told me that they got access six months in advance to what OpenAI was developing.
So they kept track of what was going on. And this is why they were also able to adapt ChatGPT to their different services very fast. And at the same time, what ended up happening is that, because in the end OpenAI is a separate company, it's also easier for rivals of Microsoft to adopt OpenAI.
And they gave me this example of Salesforce, which is an OpenAI customer, but of course
will be a bit more reluctant to develop that type of relationship with Microsoft, which is
a main competitor.
So in a way, it's a win-win situation for the big tech companies.
For them, what they're investing is not that significant.
And of course, they are choosing to invest in startups with a lot of talent.
And they also do it, we've seen it with this acqui-hire of Inflection AI.
They can just go and get the people and make them work inside the company.
But sometimes it's better to leave
them there as a separate company, formally speaking, while they get access to all the
intangibles and can steer the development of the intangibles. After the case of OpenAI became public
and when some regulators started saying that they were going to investigate this case,
Microsoft did a very clever move, which was to actually disclose that they were not just doing it with OpenAI. They were
doing it with many other companies. And they started saying, no, now we're investing in Mistral. But actually, this reinforces the whole dynamic, because you have an established core of a few big tech companies and a very turbulent periphery. And by funding the periphery, they make sure that more companies
will be part of that periphery,
will be competing against each other,
and that none of these companies
will dream about entering the core.
And if you think in terms of figures,
there is this indicator that says that in 2023,
two thirds of all the money that was invested
in generative AI startups came from Microsoft,
Amazon, and Google. And then I also looked at Crunchbase data. And there, from the moment when ChatGPT was released until February 2023, Google, for instance, was funding 2,445 companies as a top investor. They were funding more, but those were the companies that had Google as one of their top five investors. And I make this distinction because one could still say, okay, if they are
receiving just, I don't know, 2% of the funding from Google, how much will Google be able to
influence the company? But if Google is among the top five investors and it's Google also,
it's very hard to say that they will not be able
to influence the company to get access to the technology that that company is developing.
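To make the top-five-investors cut concrete, here is a hypothetical sketch of the kind of filter this implies on funding data; the table and its column names are illustrative assumptions, not Crunchbase's actual schema:

```python
import pandas as pd

# Toy funding table: one row per (company, investor) pair with the total
# amount invested (in, say, millions). Purely illustrative data.
funding = pd.DataFrame({
    "company":  ["A", "A", "A", "B", "B"],
    "investor": ["Google", "VC1", "VC2", "VC3", "Google"],
    "amount":   [5.0, 20.0, 3.0, 50.0, 0.5],
})

def google_in_top5(group: pd.DataFrame) -> bool:
    # Keep a company only if Google ranks among its five largest investors.
    return "Google" in set(group.nlargest(5, "amount")["investor"])

backed = [name for name, g in funding.groupby("company") if google_in_top5(g)]
print(backed)  # ['A', 'B'] -- both have Google among their top investors
```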
And of course, part of the money that Microsoft gave to OpenAI did not go as money
in the bank. Part of that money was computing power. So it becomes a way to make sure that all
these AI startups are developing their models
in the cloud and that then they will offer the models also on these companies' clouds.
I've heard it said that, you know, there's been a lot of discussion about
how expensive and how computationally intensive generative AI is and how a lot of these AI
startups are not actually making any money. But I've seen the suggestion that the AI startups themselves are losing money,
but the Amazons, Microsofts, and Googles of the world are making money
because it increases the reliance on their cloud infrastructure
and makes it so that anyone who wants to develop these generative AI tools
is now buying cloud services or additional
cloud services from them.
Do you think that there's anything to that argument that, okay, generative AI itself
is not profitable, but it becomes profitable for the cloud giants because it drives so
much more business to their cloud businesses?
So when you think of generative AI, there are two things to say.
One is the production of the models. And the other one is the applications that call a model that is already offered as software as a service and build on top of it. And it's true that
developing a generative AI model, if you go and look, for instance, at the paper that was published
by OpenAI when they released GPT-4, they say in the acknowledgments that around 300 people contributed to coding the model. On top of that, they acknowledge people from Microsoft who were also part of working on the development of the model. And they add to the picture that there were testers and people working on adversarial attacks and so on who also participated in creating the model. Because as we know, developing an AI model is not just about scientists and engineers. It's also about a lot of low-paid people coding or classifying, actually labeling, the data sets.
And in the case of generative AI, they also answer questions, because the way to do reinforcement learning from human feedback is by asking the model to answer something. And then they classify whether the answer was good or bad, basically.
So for all that, you also have people.
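As an illustration of what that labeling work produces, here is a minimal, hypothetical sketch of a preference record of the kind such workers generate; the field names are assumptions, not any lab's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    # One unit of the human-feedback work described above: a worker reads
    # the prompt and both model answers, then records which one was good.
    # Large batches of these records are used to steer the model's training.
    prompt: str
    answer_a: str
    answer_b: str
    preferred: str  # "a" or "b", as judged by the human labeler

record = PreferenceRecord(
    prompt="What is a cloud voucher?",
    answer_a="A credit that lets a startup pay for compute on a provider's cloud.",
    answer_b="A paper coupon for discounted groceries.",
    preferred="a",
)
```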
So it's very expensive in terms of the number of people working.
It's also extremely expensive in terms of the processing power that you need to train the model, and also when you query the model. Just as a footnote, but a relevant footnote in terms of energy consumption: a model like Gemini from Google or ChatGPT consumes around 15 times more energy whenever we query it compared to the typical Google search. So it's like whatever you look at, it requires more resources, more and more.
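To put that multiplier in rough numbers: assuming a commonly cited outside estimate of about 0.3 Wh for a traditional search (that figure is not from the episode; only the 15x multiplier is), a single generative AI query lands somewhere around:

```latex
% Back-of-envelope only: 15x is the multiplier quoted in the episode;
% 0.3 Wh per search is an outside estimate.
E_{\text{LLM query}} \approx 15 \times E_{\text{search}}
\approx 15 \times 0.3\,\mathrm{Wh} \approx 4.5\,\mathrm{Wh}
```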
So yes, it's extremely expensive.
And this is also why for a company like Mistral
or Cohere or OpenAI and so on and so forth,
Anthropic, another very prominent name these days,
they all need this corporate venture capital.
They all need these big tech companies
to offer them the
chance to use the cloud, to give them these so-called cloud vouchers. And yes, it is extremely
expensive. At the same time, we don't know yet, because if we get a massive adoption, once you
get the model, you basically can resell again and again the same lines of code. So it will depend on the scale. Once you get a lot of clients,
then you can envision a startup making profits. But of course, this is not a business for many of the startups that are trying to develop these models. And the same with the
applications. Some applications will make some money and at the same time, many others won't. But anyway, in every case, even for those
that fail, while they are trying, they're consuming more from the cloud. So for Amazon,
Microsoft, and Google, no matter the scenario, they're always winning and winning more.
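The economics described here, a huge fixed training cost and a small marginal cost per additional query, can be written as a simple break-even condition; the symbols are placeholders, not real figures:

```latex
% pi(q): profit after selling q queries; p: price per query;
% c: marginal serving cost; F: fixed cost of training the model.
\pi(q) = (p - c)\,q - F
\qquad\Rightarrow\qquad
q^{*} = \frac{F}{p - c}
```

Only a startup that reaches the scale q* ever breaks even; below it, the model developer loses money while still paying the cloud provider for every query served, which is the point about the cloud giants winning in every scenario.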
One of the things that we've heard discussed in the past little while is how, okay, there are
these closed models that OpenAI and some other
companies make that, you know, have these relationships with, you know, Microsoft in
particular, but some of the other companies too. And that is a fundamentally bad thing. But there
are these other companies that are working on open source models. And Meta in particular has
talked a lot about the fact that it is pursuing open source models rather than these more closed ones.
Is that actually a solution to these problems or are there still issues with the open source approach that often are not recognized given how we tend to think of open source as this really positive thing?
So in principle, open source is very positive because it's a way to share knowledge
and knowledge is non-rival. Actually, it's more than that, because the more we share knowledge,
the more knowledge we are producing. We improve the knowledge once we share it with others.
The problem is that in order to do that, you need to have sufficient, what in the economics
literature we call sufficient absorptive capacities. It's a capacity to understand,
to make sense of what you're listening to or reading. You can only do it if you have all the pieces of
the puzzle disclosed. But basically, Amazon, Microsoft, and Google, but in particular,
Microsoft and Google, and also Meta comes into this picture, what they have been doing is putting
pieces of the puzzle in open source. They just put those pieces
in open source. That helps them to gain popularity, also improve their reputation inside the company
because developers want to contribute to open source. Scientists want to contribute to open
source. It is also a way to get people working for free on improving those pieces of the puzzle.
But that piece will only make sense once you put
it together with the other pieces. And some of these other pieces are kept secret. Others are
registered as copyright or patented. So basically in the end, those that profit from a collaborative
development are the same big tech. And this happens particularly in the case of these large
language models. And early this year, Mark Zuckerberg was explaining to the shareholders why they decided to put the Llama models in open source.
So the most advanced AI models that Meta was developing, they were put in open source.
It was like, OK, why are you doing this? Because this seems to be the new technology.
And he clearly explained that this was, in the end, a business strategy.
It doesn't matter where you make the business, as long as you make the business in the end.
So their aim is not to make a business of the model itself,
but their ultimate aim is actually to develop a sort of Android
for generative AI in the sense that Android is also open source.
Being in open source expanded adoption and made Android, except for the Apple world, the norm, the standard.
So Meta basically wants to do the same.
It wants to make its models the standard of the industry,
not because they are the best ones,
but because they are in open source, so it's easier to adopt them. And once everyone starts adopting them, it means, in this sector, coding on top of it, it means creating apps on top of it. And then it will make the business, it will make the business by either creating an app store for generative AI-based apps or also by developing complementary services that are not open source. And one thing
that we need to have in mind is that AI is not just about the model. AI, the way AI exists today,
and especially since 2012, when it became clear that machine learning models, in particular deep learning models, which are models that become better the more data they are trained with, require three things to exist. One, of course, is the talent of the people writing
the code, thinking of the underlying math, and so on. The two others are data and the compute,
the processing power. So if you just have access to the model, but you have no access to sufficient
processing power and no access to the largest and most diverse data sets, there is no risk.
Nobody will become Meta.
And also Mark Zuckerberg said it clearly.
We have the data.
So it's a way for him to say we are not in danger by putting our large language models in open source. The other way around, because JetGPT went first by putting its model in open source is an attempt or, yeah,
a way in which Meta is trying to outpace the adoption of its own model,
knowing that network effects in this industry are crucial.
So I think that this is part of what we were discussing before, about the use of corporate venture capital, or how they are also using scientific research in a predatory way, and so on and so forth.
All this points to a model where these big tech companies operate by controlling without owning, controlling others without owning them.
And this is also why regulators arrive so late, I think. It's because they are focused on what a company
owns and how the company uses what it owns to control others. But if the company is not owning,
but just controlling, the regulators never arrive to the picture or arrive too late.
And this is what we should be looking at. Ways of controlling beyond ownership. It's not that
it doesn't matter that they are owning intellectual property or that
they are expanding also the size of their R&D laboratories. They are doing it as well, but
they are at the core of today's capitalism, not only and not mainly because of what they are doing
in-house. It's not their internal R&D which is changing the world. It's their control of everyone
else that is co-producing this knowledge.
And this is very important also
when we think of regulation
because there is this narrative
that was and that is often imposed
or further developed
by big tech companies in particular
that regulation stifles innovation.
As if we should accept
whatever innovation comes as good, first problem, but also
as if they were the innovators. And actually, these innovations that we get, which are the
ones that they want, are not mainly developed by them. They are actually co-produced by many.
So if they are regulated, all these others will continue innovating. It's not that they will stop because
a Microsoft or a Google is regulated. But of course, for things to change, it's not just about
regulating the companies. You need to develop an alternative. You need to develop, for instance,
an alternative cloud so that these companies, the startup companies, but also, as I was saying
before, the universities, the governments, find an alternative to operate with more compute if they need it, which is also a discussion that we should have.
Like whether we need all these models, why do we need AI?
What's the purpose of AI?
How can AI be put at the service of the people, at the service of the planet, and not just simply consume more, especially in the midst of an ecological crisis?
I was saying before,
the energy consumption of these models is insane. So this is something also to be regulated.
So, you know, as we've been discussing, this model that we have is one where we have a small number of really large tech companies that control a lot of these other
kind of AI startups through their investments that have this infrastructure that so many other companies rely on. What do you see as the alternative to this
current model that we have that better serves the public with these technologies rather than just
the interests and the needs and the bottom lines of the Amazons, Microsofts, and Googles of the
world? Does this alternative require as much AI computation and data collection
as we have under the current model?
So as an alternative, I think that it is important, first of all, to ask, what do we want this
technology for?
So the priority should be to start by looking at the problems, looking at the challenges,
what are the things that we need to solve?
And also identifying that these challenges are not only associated with technologies that need to be developed, but also with political
problems, social, economic problems. So this as a first step. What I'm trying to work on is on this
idea of moving beyond trying to catch up with big tech. I've seen in Europe, I've seen also in peripheral countries, this eagerness
to try to do something with AI, that the public sector really wants to do something and sees
what's happening in the US, in China, and they have this perhaps insufficiently understood
conclusion that they need to catch up in whatever way. And actually catching up in this system is unfeasible; because it moves so fast, it's impossible. It's like the coyote and the
roadrunner. You'll never make it. But it's not only that you'll never make it, you shouldn't
be aiming at making it actually. Again, if we think first about people's problems and ecological
problems, societal problems, then instead of trying to catch up,
we need to start thinking of the concrete uses of AI for healthcare, for contributing to the
ecological transition, not thinking in terms of techno-solutionisms, because that simply will not
happen. But I do think that an alternative therefore needs to come from creating new institutions, new research institutions, ideally international ones, collaborative ones, where AI is developed together, not only with scientists and engineers, but also with social scientists, also with representatives from unions, from social movements, from civil society organizations at large, people that are specialists
in human rights. And this is essential because every time someone is making a decision, like including or not including a parameter, they are making a political decision too. And this is also essential
to decide what models are necessary and what models it's better not to develop, precisely
because we want to, in a way,
reduce the consumption of energy in the world.
It's not just about changing the energy matrix.
It's also about using the energy that is truly necessary.
For this to work, we also need to change our mindset
in terms of data.
Today, we live in a world where it's either
like free harvesting for companies,
and those that are at
the frontier keep harvesting data from us every day, or the alternative to that seems to be the
privatization of data, individuals becoming the owners of their data. And that is dangerous in
many ways. First of all, those that can pay for our individual data, the ones that can pay or will pay the highest price, are the same companies that are controlling the world.
But even without taking that into account, I was saying before that we need to share knowledge to expand it.
So we really need to find what type of data should be harvested and share that data.
If we want to think, again, healthcare, I think it's a good example. If we can put together
healthcare data with social data, economic data, and map the places where people get sicker and
to what extent that is related to the way in which they live, to whether they are just next to a
plantation that is regularly fumigated with agrochemicals, for instance, and provide
more evidence that the use of agrochemicals has carcinogenic effects, for instance.
That is a use of AI that is good for the people and good for nature.
So we need to move towards that scenario.
And for me, the way I try to think of it is a scenario where we embrace the idea of data
solidarity. We need to share our
data, but not whatever data. Some things shall be forbidden, like facial recognition. So there is no
need to collect data on my fingerprint, on my face, but there is a need to collect some healthcare
data and to share it with others. And actually most of the data that we are creating online is already social. Think of the data of a purchase: it involves the seller and the buyer. Or of a post: my post then will be retweeted or liked, and all that is showing interactions. So data is already social, but it's about making it
truly social. And then we have the third part of it, because there's data, there's new ways of developing the models, but we also need computing power. And for that, I think that an alternative way
of thinking about AI comes hand in hand with rethinking the cloud and thinking of a truly
public cloud. I say truly because what they call the public cloud is anything but a public cloud. It's really a private, for-profit business, very profitable for Amazon, Microsoft, and Google.
In China, very profitable for Alibaba, another big tech.
But if we really want to have this infrastructure operating as the water pipes, this needs to be provided as a public service, needs to be a public utility. And for many countries in the world, I think that it's interesting to think
of international solutions, regional solutions, at least, not only because it's very expensive,
but also because really, again, sharing makes things better. These three, like in a way for me,
are like pillars. But if you just build the pillars, you don't have a building. So if you
want a building, what I believe that we need more is to copy what these companies are doing, but in the good sense. And what are they doing? They are planning, and so on and so forth. If they do it, it's because it's more efficient for their purposes.
And actually, I think that therefore states and democratic institutions should embrace this idea
of planning, but not planning as corporations do it, but democratic forms of planning, for development in the global south, but more generally, really developing forms of democratic planning that put these pillars together and that put the pillars at the service of a different social, economic, and political goal.
Cecilia Rikap
is an associate professor
in economics
at University College London.
Tech Won't Save Us
is made in partnership
with The Nation magazine
and is hosted by me, Paris Marx.
Production is by Eric Wickham.
Tech Won't Save Us relies on the support of listeners like you to keep providing critical
perspectives on the tech industry.
You can join hundreds of other supporters by going to patreon.com slash tech won't save
us and making a pledge of your own.
Thanks for listening and make sure to come back next week. Thank you.