Tech Won't Save Us - Smart Tech Is Designed For Control w/ Jathan Sadowski
Episode Date: July 9, 2020

Paris Marx is joined by Jathan Sadowski to discuss the politics of smart technology, how it enables powerful actors to further control the population, and why we should be more comfortable dismantling technologies that don't serve the public good.

Jathan Sadowski is the author of "Too Smart: How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World" and a Research Fellow in the Emerging Technologies Research Lab at Monash University. He recently wrote about how smart tech is a means for corporate control and the need to dismantle urban surveillance systems. Follow Jathan on Twitter as @jathansadowski.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter.

Support the show
Transcript
Not all innovation deserves to exist, and much of it shouldn't have ever been built in the first place.
Hello and welcome to Tech Won't Save Us, a podcast that can't stand the Alexas and the Google Homes and all the smart gadgets that seem to be invading our lives.
I'm your host, Paris Marx, and today I'm speaking with Jathan Sadowski.
Jathan is the author of Too Smart, How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World.
He's also a research fellow
in the Emerging Technologies Research Lab at Monash University, and he's written for a number
of publications including One Zero, Real Life, and The Guardian. Today we talk about his book and how
he outlines how smart technology is fueling a digital capitalism that's invading our lives,
our homes, and our cities, and what that really means for all of us who are
living with all these technologies around us. If you like our conversation, please leave a
five-star review on Apple Podcasts. Make sure to share it with any friends or colleagues you think
would enjoy it. And if you want to support the work that I put into the show, go to patreon.com
slash techwontsaveus and become a supporter. Thanks so much and enjoy the conversation.
Jathan, welcome to Tech Won't Save Us.
Thanks for having me, Paris. Happy to be here.
Yeah, thanks so much. So I wanted to speak with you because you have this fantastic book that
came out recently called Too Smart, where you really dig into the smart technologies that
are increasingly taking over our lives and becoming ever present in the way that we live.
So I think this is really important to speak about, especially from the really critical perspective that you bring to it. You begin the
book by laying out your ideas on digital capitalism and how smart technologies really play into that
and make that possible. So did you just want to start by kind of giving us a broad overview of
your ideas on those topics? Yeah, for sure. So I mean, you're definitely right that you can't
really turn
around anymore without bumping into some smart upgrade of anything in our life, right? And so
part of why I wrote this book is just trying to figure out like, where all this smart technology
is coming from, and like why it's become so ubiquitous, right? And in doing so, I really
wanted to try to understand not in the way that like gadget reviewers do,
where it's like, you've got this individual device, and you kind of look at it, and you
turn it around in your hand, and you figure out how it works. But instead, I wanted to figure out
how all of those separate things kind of plug into a larger kind of system. And that's really
where this idea of digital capitalism came about and understanding smart technology as a way of kind of like materializing those interests and those values that are
part of that larger kind of political economic system.
So in that way, we can really start drawing a network.
We can start kind of threading and connecting together all of these different smart technologies
at different scales and that do different things.
Yeah, no, I think it's incredibly important to bring that perspective to these technologies
and to understanding everything that is going on with them, right? And you write in the book that,
you know, one of the most important things that comes out of Silicon Valley is not so much the
technology, which is obviously very important, but the ideology. And the way that
they kind of fight for our imagination and kind of prescribe our ideas of what the future should
look like, right? Yeah, that's right. I mean, I really understand Silicon Valley as a kind of
ideology machine. And it makes sense even on a technical level, because so much
of the stuff that they make doesn't really work, right? It doesn't function in the way that they
say it does. Like take, for example, something like artificial intelligence, which is really
like the kind of, you know, hallmark example of what an emerging technology is. And it's really
what a lot of Silicon Valley kind of VC and startups and so on
are kind of like shoveling money and attention into. But I've written before calling it Potemkin
AI in the sense that like a lot of this so-called artificial intelligence is not artificial in its
intelligence. It has people hidden inside the machine doing a lot of the actual kind of like
cognitive and manual labor of making these things
appear that way. So in that way, I mean, something like artificial intelligence is a perfect example
of mistaking a technology for an ideology. And that just like runs all throughout so much of
what Silicon Valley really puts into the world of any kind of like meaning or value.
I completely agree. And you talk about in the book, like, it's important to understand this as a political action, right? Because so often these technologies
are presented in a way that makes them seem as though they're divorced from politics, right?
That they are just driving humanity forward. And obviously, where they're leading us must be where
we're supposed to go. Because, you know, it's just technology, right? Like, this is the way it works.
Yeah, and that goes all the way from, you know, a Bezos or whatever, even down to, I mean,
just the lowly, you know, coder, engineer, right? Anytime you want to try to give a political
critique of something like technology, you know, oftentimes the cries are that you're politicizing
it. Oh, you're politicizing this thing, this tool. But that totally is a mistake. And I try to
dispel that myth very early on in the book.
It's not that these things are politicized. It's more recognizing that they are and always have
been political from the very beginning for the very simple reason that all technologies are the
product of human choices, the product of human values. So I mean, that in itself makes them political because some values
are prioritized over others. Some choices are made rather than alternatives. And that's really
what politics comes down to is the question of who gets what, when and where. Right. And so anytime
we make a decision to do something in the world, to put something into the world, whether it's
technology or policy or law, that is a political act. And I think just by understanding and thinking
about technology in that way, it really starts to shape how we understand the things that exist in
the world, the ways in which they kind of shape how we live in the world, but also it starts opening up our
mind to new ways of thinking about what's possible, what we can do, and on the flip side,
what we shouldn't do or what we shouldn't want. Yeah, I completely agree. And obviously,
in order to kind of imagine those alternatives, we also need to understand how these technologies
are currently affecting us in the many ways that they do, right? To help us
see that and to help illustrate it, you talk about how smart technologies specifically impact three
particular areas of our lives, right? You talk about the smart self, the smart home, and the
smart city, and particularly look at how smart technologies are utilized in those three instances
to show us that the claims, the kind of marketing speak,
the PR talk that we often hear about these technologies are not completely accurate about
how they're actually being used. And what is sold to us is often allowing this much greater and I
guess more insidious use of these technologies that many people I think don't completely understand
when they start to use them, right? And so I wanted to go through
those three different instances with you. And to start with the smart self. So I guess just briefly,
how do you see smart technologies being used on the personal level or on the individual level and
the importance of the data that we generate in how these kind of companies understand us?
Yeah, so a somewhat informed or kind of aware reader might expect me to look at something like
the quantified self, right? That was like a really big movement five or six years ago. I remember it
being like, that's what every news story was basically about. It was like, oh, there's all
these people tracking every part of their life, you know, how many hours they sleep, how many
steps they take up to like really crazy, you know, daily blood tests,
things like that. But for me, what was more interesting, once I started digging into these
technologies was not how people choose to use these technologies themselves to achieve some
kind of goal of like human well being or the maximum level that a human can reach, whatever, right? For me,
what became more interesting was how other people, other institutions and organizations were using
these kinds of technologies on people, whether they wanted to or not, with or without their
choice. So that really led me to looking at two main instances of the smart self. One would be more financial in ways of using this. So with
things like credit scoring, that kind of way of like quantifying somebody's worth and judging
what they have access to. And then on the flip side, very similarly, is the way that employers
use these technologies for managerial reasons, to juice productivity,
to kind of keep a tight rein over their workers. For me, where the rubber really hits the road
with the smart self is not when we choose to monitor our sleeping patterns, but when the bank
chooses to monitor our sleeping patterns or our employer chooses to monitor our sleeping patterns,
because then that data becomes out of our hands, right? And the impacts of that data no longer become me getting a
better night's sleep, it becomes me having access to credit, it becomes me having a job, the
consequences of it become much greater. One of the kind of factoids that really stood out to me when
I was reading your book, because there's been a lot of talk about data brokers in recent years and you wrote that the
major data brokers are also the major credit agencies, and I was like, that makes so much sense,
but I just didn't realize it before I read it in your book.
Yeah, a lot of the book is really just trying to draw like this kind of historical material thread throughout these technologies. It's like the big three data brokers like Experian have
their origins as credit bureaus. Some of them are over 100 years old, and they've just changed with
the times because, you know, it's going from collecting filing cabinets full of paper documents
and these dossiers to now you've got data servers
full of profiles, right? But in a way, it's doing the same thing, just doing it to a much greater
capacity. So it makes sense that, you know, the ones that would survive would be the ones that
would not only embrace the new technologies, but innovate these new ways of going from credit
bureaus to data brokers. I completely understand that.
Like, as soon as I read it, I was like, of course, these are the companies who are doing
this, right?
And now one of the specific things that you wrote about in that chapter was the kind of
social credit system that we read about a lot that is supposedly being developed and
rolled out in China, right?
And I feel like a lot of the stories that we get about it are like, oh, of course, this
like Chinese government is rolling out this technology system that is going to track their citizens
in this really invasive way.
We would never do that over here, right?
So I was hoping that you could kind of describe what the social credit system is for people
who don't understand.
And obviously, you wrote about how there's like one implementation of it that is already
taking place right now.
But there's also this broader idea of what it could become in the future.
So I just want to describe, I guess, what that is, the kind of data that gets tracked
and gets fed into it, and the kind of benefits or restrictions that could apply to people
because of these systems.
I was really kind of conflicted about if I wanted to use the social credit scoring system
as one of the main kind of like examples in this chapter on the smart self in large part because
it became a really kind of like popular topic. But there is also so much, I don't know if it's
controversy around it, but more so just like bad reporting around it and kind of like misinformation.
And I try to get at this after describing what it is in the book.
You know, I really try to hone in on this fact that wrapped up in these fears around
the social credit scoring system is to me a kind of Orientalism.
It's looking at this thing that's happening in China, which I think a lot of people would
be familiar with who are listening to this because there was so much attention over, like, this is the dystopian technology of, what, 2018, you know, so long ago. Essentially, all this data about your behavior is rolled into these data banks and into these algorithms that are basically kind of creating
this score of how trustworthy you are, of how credit worthy you are, of how well you fit into
the model of a good citizen in society. So it's just a way of trying to like keep track and
quantify all of these different things in a way that much like a credit score does, which is used as a kind of really easy number to decide,
are you over the threshold for getting a loan or getting this apartment or getting this job?
Instead, the idea is to have some kind of social credit score, a number, a score to make those same kind of decisions,
but about a wide range of things ranging from will you get a visa for international travel to if you're a high
score person, then maybe you'll get the fee waived for renting a car, or maybe you'll get a discount
for eating at this restaurant or something like that, right as a kind of high score perk. So it's
just taking that logic and applying it all over. But as I was saying, a lot of the reporting had
this kind of like Orientalism to
it, where it was this vibe of like, looking at what's happening over there and saying to yourself,
man, I'm so glad I live in a place, i.e. a Western country, which values freedom and liberty and
would never do anything as authoritarian as having a social credit scoring system when it's like the data brokers, the credit bureaus
from the US like innovated this whole system of a credit scoring system. And it's, you know,
the FICO credit score in the US, that's the main one. And it's like everyone has it,
whether you want it or not, you have a FICO credit score. And if you don't, it's because
you're unbanked, and you have no interaction with the
financial institution, which is worse than having a bad score because it means you are just excluded
from the entire system. And so I was really conflicted over writing about it because I
didn't want to add more fuel to that kind of oriental dystopian fantasy that people had.
So in writing about it, I wanted to be really serious
about here's what's going on, here's what the reporting on it is like. But then also, I really
wanted to talk about how we can, again, tie it back to a history of how these things have been
developing. And for me, I think that made it really key and why I decided to write about it is because it's this good example of how to clearly analyze something that is actually happening while also keeping in mind that larger historical context. The United States did this earlier when the technology was more primitive.
So they put together the system that they have and are surely working toward something more like a social credit system.
Whereas China developed it later when there were more capabilities for it to bring more into it and make it more expansive from the beginning.
That's one of the really important things that you bring up in discussing that. And again, it's not something that just
applies to China because these systems work in the same way around the world. And just because
China might have a more advanced one right now doesn't mean that these kind of technologies
don't influence our behaviors everywhere around the world. Because you talked about earlier in
the book how one of the important pieces of kind of digital capitalism and the
smart technology infrastructure is the ability for really powerful groups to further control
the population, right? And it just seems that in creating these kind of systems where all of this
data is being brought in and we can get benefits if we act in a certain way that, you know, is
prescribed in the system to be
positive, then like there's a lot of concern about governments passing laws restricting our freedoms
and things like this. But it seems like embedding them in this technological infrastructure then
kind of bypasses that concern and just nudges us to act in those ways anyway, because we'll get
some sort of benefit from it. Yeah, that's right. What you're
getting at is this kind of like creation of a subjectivity, right? It's this idea that we become
the type of person that the system wants us to become. And that's going to happen no matter what.
So it just becomes then a question of what kinds of systems are those that we're creating, right?
And what kind of selves or persons are kind of built
into that imaginary or that vision of what the outcome of that system is.
I think that's completely right. And so I want to move on to the smart home as well, because
I don't know about you, but I'm one of these people who absolutely hates these Alexas and
Google Homes and like all this sort of stuff. Like I hate all the smart gadgets. I can't stand it. Get it away from me. And so I found your smart home section really
interesting because you talked about how these smart appliances could be creating this kind of
business model like we see in so many tech business models where the product gets given
away really cheaply or even free so that the company can then collect all the data that they get back
from it, right? And so you talk about how this same kind of business model could then be rolled
out in the home space and maybe we'll get like free refrigerators and microwaves and stuff. But
then what we're giving up in exchange for that is all the data that gets collected on how we live,
right? What's the big concern there? I think it should be obvious, but I'll leave that to you to explain. Yeah. And I mean, this really gets into one of the kind of core ideas in the book,
which is this idea of data capital and this kind of imperative of data capital or data collection
that really drives the design and implementation of a lot of these technologies. And I don't know
if we'll ever quite reach the time where you get a
free refrigerator, as long as you just kind of like constantly give the data. I think that's
the logic though. That's the kind of endpoint. We've seen some legal instances or battles already
where, for example, Whirlpool, which is an American-based appliance manufacturer,
tried to bring a fair trade complaint against LG and Samsung, which are these South
Korean appliance manufacturers, because Whirlpool was saying that these foreign manufacturers
were undercutting the market by selling their kind of like smart washing machines and stuff
at a really deep discount because LG and Samsung had already kind of taken this like digital
capitalism step where for them, the real value was getting the
appliance in people's homes, so that then they could constantly collect data about how people
were using those appliances. For them, the hit they might be taking from like the kind of capital
investment of like buying the appliance was offset by the data that they were going to be getting
over the life cycle of that appliance.
So I mean, if anything, that's the kind of thing that we should expect to start seeing happen
more and more. These kinds of subsidies, these kinds of artificial kind of low prices,
all for that data. I mean, we can already see it as well with things like there's insurance
companies that have partnerships with Google Nest. So smart thermostats and smart fire alarms.
Basically, you know, the insurance company says we'll give you this for free.
So that is an instance where you'll get it for free as long as we, the insurance company, can get that data, have access to it and kind of constantly monitor its status.
So we can ensure that your fire alarm is working
properly. So, you know, we can use that data in case there's ever a claim, for example.
This is really the vision of this smart home kind of brought to you by the manufacturers,
the insurers, you know, other parties that have an interest in you having these devices in your home.
It's super concerning. Like I hate it. But another one of the business models that you wrote about
was maybe this product isn't free, but there's kind of this like micro rent, you know, every
time that you use it. And it really brought to mind, I don't know if you read Cory Doctorow's
most recent book where he has like kind of four short stories about like potential kind of techno dystopian kind of futures. But one of them is of this
refugee to America who lives in this like subsidized apartment, but all of the appliances
in the apartment have to use the products that are produced by or approved by the manufacturer
of the product. So like there's this toaster,
but she can only toast like the approved bread or whatever that like cost way more money. Yeah.
So like when I was reading your book, I was like, oh my god, like this is totally a way in where they could do something like that, right? Yeah, it's that kind of DRM or digital rights management,
which is a total concern of Cory Doctorow, right? Exactly. It makes sense,
but he's right in the sense of trying to take that logic or take that system and just apply
it a little bit more widely. And I think that makes sense. I've written about the core political
economy or the way that these kind of digital platforms operate is more akin to a form of
rentier capitalism. A lot of critics and so on will try to look at
kind of like neo-feudalism, right? So we're kind of looping back around to a pre-capitalist period.
Or you look at someone like Shoshana Zuboff with surveillance capitalism. And for her,
the problem with surveillance capitalism is the surveillance part. It's not the capitalism part.
It's this idea that this is just an aberration of a normal, just form of capitalism.
But I think both of those are mistaken, the kind of neo-feudal or the surveillance capitalism framing,
because understanding the way that these digital platforms work and the way that the smart
home works is understanding it as the kind of expansion of rentier capitalism. It's this expansion of this landlord relationship, but applied to the toaster, to your vehicle,
to everything that now it becomes about not owning property, but renting access to the property.
And oftentimes the property in question is software. So that's what a licensing agreement is. It's a
lease, basically. It's a rental lease for the software. So when you buy the smart fridge,
the smart toaster, now any kind of car and automobile, what you're buying is the metal,
the materiality of it, but you're renting the software, which is really the crucial part.
That's what gives it its smarts, but that's also what's
necessary for its operation. And so the manufacturer, then the platform retains a huge,
a massive amount of control over this thing that you thought you bought. And that's because they
want those data rents, right? They want that data rent. Totally. It makes me think of those stories
a few years ago that you'd see
like in the business press about like millennials are the generation that don't want to own
anything. They just want to like rent it every now and then like so they can have it when they
need it or whatever. Right. Like they don't want to actually own things. And it's like a surprise
like that really works well for the business model of these companies. Yeah, exactly. And so
the third piece of this is the
smart city, right? And I think what you write about the smart city is really topical because
of the larger conversation that's going on right now about police brutality and policing in cities
in North America and around the world. And so do you want to describe broadly your perspective on
the smart city and how it isn't so much about just like making urban
life a bit more efficient for urban residents, but you actually call it a captured city. So do
you want to talk about how you actually perceive the smart city and how it's not like what, you
know, the tech press really tries to make us think it is? I will say that a lot of these things,
the smart home, the smart city, it's like they're really ill-defined and amorphous and nebulous.
And that's kind of by design because part of what we talked about before, that kind of like
ideology and imaginaries kind of part of this, a huge part of that is controlling the narrative
and creating the definitions. So if you can define what the smart home means or what the smart city
means, then you can define the kind of solutions and the kind of technologies that
are required to bring it into existence.
And oh, lo and behold, they happen to be the solutions and technologies that you are selling
and that you have the expertise to provide.
So a lot of the attention on what smart cities mean, including my own and a lot of the kind
of critical scholarship around it, has really focused on this vision of a smart
city kind of brought to you by like IBM and Cisco, right? It's this vision of a smart city,
which is housed in City Hall. It really has this attention on local government, on doing good
governance, entrepreneurial governance, on kind of best practices for urban planning. So the vision, whether you're for it or
against it, really focuses on making city life more efficient, more optimized, having these kind
of systems of systems that talk to each other and so on. In my book, though, I really tried to
bring attention to what I think has been largely ignored in the kind of smart city
reporting and literature on this to who I think is actually the real vanguard of smart urbanism,
and that's police. I mean, the whole chapter on smart cities is about smart policing, because
in reality, just as we talked about before, a lot of those kind of IBM and Cisco visions of the smart city are not actually existing in any kind of meaningful way.
They exist in like design concepts and marketing brochures.
What they're selling is an idea that they hope to one day deliver on.
But who's actually delivering on these in a real kind of like technical and material way is policing technologies.
The police departments in cities have access to way smarter technology than anyone else,
right?
And they're actually deploying it in their everyday operations.
It's not just a like, oh, we've got our digital innovation team in City Hall, which is our
kind of like our Skunk Works team.
It's like, no, for policing, this kind
of data collection, analysis, that kind of smart technology, surveillance, like that is a core part
of their operations. It's really transformed how police do their job and what it means to do
policing. I found it really interesting that you kind of paired the increasing use of technology
by police to also the increased use of kind of private policing and the militarization of the police. And you also
talked about how these kind of smart police forces are kind of like having an NSA in every city sort
of thing in the way that they collect so much data on people, right? So I was hoping that you
could explain a little bit more on the role of data collection in policing and how by adopting this kind of smart policing system,
a lot more people then get kind of dragged into the purview of the police.
I think we can do this through an example of a particularly odious company that I think most
people will be familiar with by now: Palantir. We hate them. Yeah.
Yes, as we should, as we should. And even if we just trace the history of Palantir,
we can see in that history of Palantir, a kind of history of the evolution of kind of like modern
smart policing. So I mean, you've got Palantir started, you know, co-founded by Peter Thiel in 2004,
with In-Q-Tel, you know, CIA-backed funding, really funded to do what they have come to see
as their space of social network analysis, right? So it's really kind of coming out of the war on
terror as this kind of like, data analytics apply to finding terrorists and preventing terrorism
by taking this kind of data maximalist approach, collecting as much data as possible from every
source as possible, and infusing it together, right, kind of throwing it together in new ways,
so that you can then start mapping out the entire networks around these terror cells.
So all of the people, the places, the vehicles, the addresses, you know, everything that's connected to this.
So you can start to fill out this full picture of how this decentralized network is actually kind of like operating.
Right. And so that was for a long time, what Palantir was doing is kind of
providing these services to the Department of Defense, the Department of State, the Pentagon,
you know, the CIA, the FBI, right? Like that's who their market was, until at some point,
they realized that that's a really concentrated market. But a much larger market is police departments. And that kind of followed this
trickle down of the counterterror mandate from the tops of the federal government, the Department of
Homeland Security, and so on. That eventually diffused down to now every police department
in every city is also on the front lines of counterterrorism. And so they're getting armory,
they're getting training, they're getting weapons and vehicles from the Department of Homeland Security for free
or like pennies on the dollar. And so it made sense that along with that, Palantir started
marketing their services to police departments, kind of setting up these really secretive partnerships. We still aren't quite sure who all Palantir partners with. A lot of it is kind
of like a slow revealing through investigative reporting and journalism. But so now you've got
Palantir analysts kind of set up in police departments like New Orleans and Los Angeles
and New York doing that kind of like social network analysis, terror cell mapping and stuff,
but now applying it to crime,
applying it to gangs and quote unquote criminals, right?
Kind of treating them in that same exact way,
which means treating the public as insurgents.
So that kind of mindset is already part of
how data is being collected,
how it's being crunched and how it's being acted on.
So wrapped up in that is like new technologies, new ways of looking at the city, but then that
also influences not only how the police operate, but how and where they devote resources, which in
itself also justifies more and more bloated budgets, because you need to pay for all these
analysts and these technologies
and so on. So it's just this constant ramping up, from this kind of militarization, which understands the police as an occupying army in a city, to what I call a city intelligence agency, a kind of new form of the CIA, right, where every police department is its own NSA or its own CIA. So now they act more like intelligence agents out there gathering
data, crunching numbers. What we see with the wave of uprisings and protests right now,
and the reaction to that by the police is that these two models coexist simultaneously. So they maintained that kind of
like army capability, while also incorporating a new kind of CIA capability into how police
departments operate. Literally the last thing that I want to hear right now.
I think that's a really important point, though, especially with how you talk about how what Palantir was doing, and the equipment that police so often use now, was originally used in Afghanistan and in the West Bank, and is now being used by the Indian government in its COVID-19 response to increase tracking, particularly against poor and Muslim people, right?
So I think it's so important to recognize how many of these technologies don't just
come out of nowhere, right?
These technologies are tested on the people who have the least power,
right, the least ability to push back, and then they are rolled out to more and more of the
population over time. That's totally right. We can see that direct link as well. If we look at
Palantir again, it now has partnerships with the CDC in the US and the NHS in the UK for doing contact tracing, right, for kind of surveillance pandemic
response. It's because this network analysis and contact tracing are essentially the same thing; it's just a different name for doing the same work. But perversely, we see this language looping back around. It really stuck out to me a few weeks ago when the Minneapolis chief of police talked in a press conference about, quote unquote, contact tracing arrestees. So using this language of contact tracing, but applying it to protesters that they arrested in order to find and arrest more protesters. My God. So, I mean, these kinds of framings,
they're very flexible and liquid
in the sense that they can latch on
to the most advantageous framing of a thing,
whether it's terrorism or a pandemic.
And that just becomes the way that they frame the same exact thing that they were doing all along.
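[Editor's note: the structural equivalence described here, that "contact tracing" and counterterror link analysis are the same graph traversal under different labels, can be sketched in a few lines of Python. This is purely illustrative and assumes nothing about any vendor's actual software: whether the seed node is labeled a patient or a suspect, the operation is the same k-hop neighborhood expansion.]

```python
from collections import deque

def k_hop_neighborhood(graph, seed, k):
    """Collect every node reachable from `seed` within k hops.

    `graph` is an adjacency mapping: node -> iterable of neighbors.
    The same breadth-first traversal serves "contact tracing"
    (seed = a patient) and "link analysis" (seed = a suspect);
    only the labels on the nodes differ.
    """
    seen = {seed: 0}          # node -> hop distance from seed
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if seen[node] == k:
            continue          # don't expand past the hop limit
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return set(seen) - {seed}

# Hypothetical example: A knows B and C; B knows D.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(k_hop_neighborhood(graph, "A", 1))  # {'B', 'C'}
print(k_hop_neighborhood(graph, "A", 2))  # {'B', 'C', 'D'}
```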
It's so worrying. And what your book really makes clear, and hopefully what this conversation has
really made clear, is that a lot of these technologies are not working in the public
good, but are working in the good of some very powerful actors that really don't have the
well-being and the goodwill of so much of the population at the top of their minds, right?
They're thinking about profits, and they're thinking about power, and they're thinking
about control. And so then you talk about how we should think about technology in a different kind of way, recognizing that technology is political and not just something that happens as the steady march of history, like progress moving forward. So what is your perspective on how we should approach technology, and how we should think about how technology should work in the future?
Yeah, and that's a really good point there, Paris, that I want to kind of reiterate is that I kind
of start the book talking about Silicon Valley, but I never
really talk about Silicon Valley again throughout the whole book because I wanted to really try to
trace and understand all these different interests that are having their say in how these smart
technologies are being designed and how they're being implemented and for whose benefit, right? So for the smart self, rather than talking about Silicon Valley and the quantified self, I wanted to talk about financial institutions and data brokers and employers and bosses. For the smart home, I really spend a lot of time talking about insurers, the insurance industry, and they've latched on to the potential and promise of the Internet of Things and this kind of informatics and telematics for quite
a while. And then for the smart city, again, rather than talking about IBM, Cisco, it's the police.
So I think that's one way that we really need to understand smart technology: it's a very powerful instrument, with the data collection, the network connectivity, the automation and control. And so it only makes sense that already powerful interests would want to get their foot in the door, right? They would want to not only take advantage of things that are being made, but have an active say over how things are made and for what reason,
so they can ensure that their values, their goals, their motivations are kind of built into these
technologies from the very beginning. And so that for me is like understanding that these technologies
fit into a larger political economy of interests,
of imperatives. And through that, that's where we can start to understand the real impacts,
the real consequences that these things have. It's not about how I decide to use it,
or it's not about making the transportation run a little bit more efficient, right? Those things
are nice, but the real impact comes in how other more
powerful institutions decide to use it as a way to materialize their values in the world to kind
of realize the goals that they want to see come about. I agree. And so then if we recognize that
these powerful companies are using technology in this specific way, then as people who want
technology to actually serve
the common good and to actually play a positive role in the world, how should we be thinking about
technology and what technology should be doing in the future? In my book, I've really been talking a lot about this idea that I think we could go a long way just by simply recognizing
that many of the innovations that exist probably should not
exist, right? Not all innovation deserves to exist, and much of it should have never been
built in the first place. Innovation is treated as this fetish, right? It's treated as this kind of
like this object that we idolize. We idolize the idea of innovation. We idolize the innovators.
So it has this kind of fetish quality to it.
Fetish in the sense that it's like we're alienated from it, right?
It kind of like exists on its own, separate from everything else that's happening.
I want to push towards thinking about innovation in a kind of anti-fetish way, right?
As something that can and should be destroyed if it's deemed unnecessary or if it's
deemed not beneficial to the public. It should not be radical to think about throwing it away,
right? I talk about this as a form of like Marie Kondo, but for techno politics, where it's like,
rather than asking, does this thing spark joy? We should ask, like, does this thing contribute to
human well-being or social welfare? And if not,
then it should go into the dumpster, right? And we shouldn't be afraid of that as an option.
But it seems really radical because we've been so inculcated with this idea that innovation
equals progress and progress equals good. And to go against that is to go against the advancement
of civilization or something like that. But if we simply ask the question of like advancement
towards what? Whose advancement? All of a sudden, it starts becoming clear that that pathway is
maybe advancing for some people, but it's leaving a lot of other people behind.
I'm just waiting for someone to like spin off a consulting company
where they say that they're the Marie Kondo
of technology now.
And, you know, try to make some big bucks
from the big companies.
Yeah, I'm waiting for bands of unmakers
going through the city,
tearing out the surveillance infrastructure,
dismantling all of the innovations
that are just simply there
to make our lives worse in various
ways. I love it. I welcome them. Jathan, it's been fantastic speaking with you today. Thank you so
much for sharing your perspective. And I recommend everyone go check out your book. Thanks so much.
Thanks for having me, Paris. It's been a blast. Jathan Sadowski is the author of Too Smart, How Digital Capitalism is Extracting Data,
Controlling Our Lives, and Taking Over the World.
It was published by MIT Press, and hopefully you can find it at your local independent
bookstore or library.
You can follow Jathan on Twitter at @jathansadowski.
You can also follow the podcast at @techwontsaveus, and you can follow me, Paris Marx, at @parismarx.
If you like our conversation, please leave a five-star review on Apple Podcasts. Tech Won't Save Us is part of the Ricochet Podcast Network,
which is a group of left-wing podcasts that are made in Canada. Thanks for listening.