Tech Won't Save Us - How Australia Used Tech Against Welfare Recipients w/ Dhakshayini Sooriyakumaran
Episode Date: August 5, 2021

Paris Marx is joined by Dhakshayini Sooriyakumaran to discuss Australia's robodebt scandal, where automated decision-making was used against welfare recipients, and how exploitative AI implementations are being deployed by governments in social welfare and at the borders.

Dhakshayini Sooriyakumaran is a proud Tamil person and a PhD candidate at Australian National University whose work focuses on digital identification systems and border policing regimes. Follow Dhakshayini on Twitter as @Dhakshayini_S.

🚨 T-shirts are now available!

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. Find out more about Harbinger Media Network at harbingermedianetwork.com.

Also mentioned in this episode:

Dhakshayini wrote about robo-governance and why we need to oppose it.
Robodebt eventually resulted in an A$1.8 billion settlement in favor of welfare recipients.
Robo-planning was another proposed system for Australia's disability insurance scheme that has been canceled. A blockchain trial was also considered.
Australia is trialing a controversial cashless welfare card, and plans to increase the use of biometrics.
Scarlet Wilcock researched the history of "welfare cheat" narratives in Australia.
Canada has been using automated decision making to process visa applications.
Just Futures Law and Mijente released a report called "ICE: Digital Prisons."
Israeli company NSO's Pegasus technology was weaponized against activists, politicians, and journalists.
The Australian Human Rights Commission released a report on human rights and technology, while the European Data Protection Supervisor has called for a ban on biometrics.

Support the show
Transcript
The welfare system is inherently punitive, so when you increase efficiency in a punitive
system, there's only going to be one result.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week my guest is Dhakshayini Sooriyakumaran. Dhakshayini is a PhD candidate at Australian National University,
and she recently wrote a piece about the exploitative implementations of artificial
intelligence tools in government systems called Resisting Robo-Governance. In this week's
conversation, we talk about Australia's experience with what was called robo-debt,
a program by the government to use artificial intelligence to try to crack down on what they
called welfare fraud, but actually subjected welfare recipients to false debts that they did
not actually owe, and then forced them to fight, often for months,
if not years, to try to get these false debts overturned. And obviously, during that period,
a lot of people experienced a lot of stress, as they had to prove that they didn't actually owe
these debts to the government. I think this is a really important case that people outside of
Australia should understand,
which is why I'm happy to be doing this episode today. But we also talk about the broader implementations and the broader scope of these kind of programs
and how they're proposed for other kinds of social benefits, but are also deployed at
the borders and toward migrants.
I think it's really important that we be paying attention to the deployment of
these kinds of technologies because they start by targeting some of the most disadvantaged people
in our societies before eventually being turned on us too. So I think this is a really important
conversation. And I just want to note before we get started, at one point, Dhakshayini talks about
Scott Morrison. And if you're not familiar with Australian politics, he is the prime minister of Australia.
And at the end, Dhakshayini says that if anyone is working on similar issues, they can feel
free to reach out to her.
Obviously, her Twitter is in the show notes, and you can obviously Google her name to find
her university email address.
Tech Won't Save Us is part of the Harbinger Media Network,
a group of left-wing podcasts that are made in Canada.
And you can find out more about that
at harbingermedianetwork.com.
If you like the show,
please make sure to leave a five-star review
on Apple Podcasts.
And if you like the episode,
make sure to share it on social media
or with any friends or colleagues
who you think would learn from it.
Every episode of Tech Won't Save Us
is provided free for everybody
because listeners who can support the show choose to do so. You can join supporters like Anna from
Singapore and Johan from Sweden by going to patreon.com slash tech won't save us and becoming
a monthly supporter. Thanks so much and enjoy this week's conversation. Dhakshayini, welcome to
Tech Won't Save Us. Thank you for having me. I'm really excited to chat with you because you had this really interesting article recently
about the Australian kind of experience with robo-debt and the broader implications of that.
And I think that robo-debt and kind of what came out of it is something that a lot of people
outside of Australia will not be familiar with. But I think it's a really important example
of how these technologies can be used against the poor and against welfare recipients.
So I want to dig into that with you in this interview. And I wanted to start by kind of
laying the foundation for people. So what is Robodebt and how did it come to be in Australia?
What was the process that brought it into being? Great question. Robodebt really kicked
off in a big way back in May of 2015. So before that, even back in say 2001, we had data matching
going on between Australia's social welfare agency and the tax office. And so what they would do is
they'd match data. So they'd get the kind of
annual data from the ATO, divide that down into weekly chunks, and then match it against what the
welfare recipients were reporting. And then if there was a discrepancy, they would review that,
they would contact the social welfare recipient and kind of clarify that discrepancy. So that
has been going on for a long time. And obviously, that's not a very good
method to calculate that discrepancy, right? Because most social welfare recipients have
insecure and casual work and don't have kind of regular fortnightly income. And so this inaccurate
kind of data matching approach that's actually been going on kind of in the shadows for a while
really formed the basis of Robodebt.
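[To make that mechanism concrete for readers: below is a minimal, purely hypothetical sketch of how income averaging can manufacture a debt for someone who reported their income accurately. The dollar figures, the base payment rate, and the taper are invented for illustration, and it assumes fortnightly reporting periods; the actual OCI implementation was never made public.]

```python
# Hypothetical sketch of "income averaging" -- not the actual OCI code, which
# was never made public. A casual worker earns all of their A$15,000 in the
# first 13 fortnights of the year, then correctly reports zero income while
# on benefits for the remaining 13 fortnights.

FORTNIGHTS = 26
annual_ato_income = 15_000.0  # yearly total the employer reports to the ATO

# What the worker actually earned (and accurately reported) each fortnight:
actual_fortnightly = [annual_ato_income / 13] * 13 + [0.0] * 13

# Income averaging assumes the earnings were spread evenly across the year:
averaged_fortnightly = annual_ato_income / FORTNIGHTS  # ~A$576.92

def entitlement(income: float, base_rate: float = 550.0, taper: float = 0.5) -> float:
    """Simplified benefit taper: payment shrinks as fortnightly income rises."""
    return max(0.0, base_rate - taper * income)

# What was correctly paid, given the worker's real (lumpy) income:
paid = sum(entitlement(i) for i in actual_fortnightly)
# What the averaged income implies "should" have been paid:
assumed = entitlement(averaged_fortnightly) * FORTNIGHTS

debt = paid - assumed
print(f"Correctly paid:  A${paid:,.2f}")     # A$7,150.00
print(f"Average implies: A${assumed:,.2f}")  # A$6,800.00
print(f"'Debt' raised:   A${debt:,.2f}")     # A$350.00, despite accurate reporting

# Per the episode, a 10% charge was added if the recipient didn't respond:
print(f"With 10% added:  A${debt * 1.10:,.2f}")
```

[The point of the sketch is the one made above: the annual totals match perfectly, yet smearing a lumpy casual income into even chunks still produces a positive "debt", before the 10% charge mentioned later in the conversation is even added.]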
So then fast forward to 2015. And what happened was the government at the time announced they were going to
crack down on social welfare recipients, welfare fraud is going to be a big focus. And they decided
to basically amplify that data matching. And they did that by saying, we're going to actually go back in time and
historically match anyone that's been stealing from the government. And it was really terrible.
So they decided to expand the data matching, and in the subsequent year, 2016, decided to automate
it. So the colloquial term is robo-debt. The technical term is the OCI, so the Online Compliance
Intervention. And so that algorithm automated not only the data matching, but also the release of debt
letters to social welfare recipients.
And a key factor to keep in mind there is that there were no longer checks and balances:
staff who worked at what is now called Services Australia, formerly called Centrelink,
would normally check each debt before it went out.
But because of a huge reduction in staff, these letters were just going out
at a huge scale. So the scale changed from, say, 20,000 letters in a whole year to 20,000 letters
in just a week. It's absolutely wild to think about the scale of what you're describing there,
right? The number of people who are being affected by this and how that is a significant increase on the past. And as you're describing,
at the same time, there was a reduction in staff at the agency to be able to process these things
and deal with people who are having these letters arrive and being incredibly stressed out about it,
worried about it. So who were the people, what type of people, who were being affected by this? And what were the effects on
those people when they were receiving these letters from the government? It's quite upsetting
to talk about actually. Obviously, social welfare recipients, I've been one myself,
you know, you're really living on the edge, living on little to no income,
really struggling. And these debt letters were anywhere from, say, $2,000, which is a significant
amount of money, but that was in the lower end of the scale, all the way up to, say, $25,000.
And the way that these debt letters operated is that they would add a 10%
recovery fee to the debt if you weren't responsive.
So Robodebt, obviously hugely controversial program. The year that it was launched,
it started becoming the subject of investigative journalism, huge community outrage,
and a lot of these welfare recipients organizing online, launching the notmydebt.com.au website, where they collected over 1,200 horrific stories of people having to quit their jobs to actually deal with responding
to Services Australia or Centrelink. There were several accounts of depression and suicide.
There have been two Senate inquiries into robo-debt, several legal challenges,
and actually it's the subject of the largest class action in Australian legal history.
And so several horrific stories have come out through that. And one that particularly comes
to mind that I did discuss in the piece was the story of a mother, Ms. Miller, who talked about her son, Rhys, who had received
notice after notice, and harassment over the phone from this debt collection company. And she testified
that this is what ultimately led to him committing suicide. And so there have been at least five accounts
of a clear, direct correlation between a debt notice leading
to a death, but I'm sure the numbers, you know,
are vastly greater than that.
So the harm is huge and it still hasn't really been reckoned
with when you think about the fact that the people
that were involved in the class action made incredible efforts organizing and mobilizing, only to receive
$300 per recipient. It's such an insult. So the government's really gotten away with avoiding
accountability essentially for this. It's so sad just to have to reckon with the harm
that was done by this program. A program that I'm sure was designed around this notion that
it was about efficiency and making things easier. And as you were saying, cracking down on welfare
fraud or people stealing from the government and all this ridiculous language that we've been hearing for so long. I remember reading that as activists and as journalists
started to look into this, as they were hearing stories about it, the government was trying to
actually hide what was happening and the scale of what was happening. Can you talk a little bit
about the government's role in trying to do that? Sure. So we've had,
you know, Senate inquiry after Senate inquiry, ministers saying that these harms and these deaths
are not because of robo-debt, that there are so many other causal factors. So they've completely,
you know, avoided accountability. And then finally, you know, only just last year was the class action settlement delivered. And we had Scott Morrison,
you know, being essentially forced to do an apology. But at the same time, if I recall,
the government services minister, Stuart Robert, simultaneously said that regular debt collection
will resume on the other side of COVID-19, right?
So that's where the government's at with this.
So there's no understanding that it's not just the inaccuracy
of this algorithm that's actually the issue here.
It's this coercive debt collection.
It's this punitive service paradigm around social welfare that we have in
this country and in many countries. And so, yeah, it's really unfortunate. And just recently,
in the past few months, it came out that the Morrison government was actually
trying to hide particular documents in relation to robo-debt that had
further detail about what went on.
So yeah, we can see repeated government mismanagement and obfuscation going on for years.
So then you can see that after all these years, even though there has been this class action
lawsuit, even though there has been journalism and reporting being done, even though activists have been collecting stories from people, that I'm sure the full scale
of the harm that was caused by this intervention is not fully known, right? And as you can see,
as you're saying, the government is trying to hide the full extent of the harm that they caused
by implementing this program. And so, you know, as you said, there was a class action,
the largest class action in, I think you said, Australian legal history over the Robodebt
program and the people who were subjected to that system won. So can you talk a little bit more
about the outcome of that lawsuit and is Robodebt still in operation today? Sure. So in terms of the outcome of that lawsuit,
the class action refunded about $721 million to 373,000 people, with a further $112 million in compensation
and $398 million in cancelled debts. But actually some of those participants in the class action
didn't even receive any of the settlement
because the debt collection notices they received
weren't linked to inaccurate data matching.
So we can see how people even ended up with nothing
after all those efforts.
And so in terms of what's happening with the program now,
interestingly, the class action and other legal cases did find that particular data matching
approach, which is called income averaging. So that approach that I described earlier,
that was found to be illegal, but the broader kind of way that the program operates isn't illegal, right? So,
they're actually continuing. Essentially, they've made the data matching a bit more accurate.
They've introduced a bit more oversight and checking, but the coercive debt collection
is still continuing. So, that's not really been widely reported on, but that's certainly the case.
And so, it's very concerning that they've gotten away with this, really.
Absolutely.
You know, I think everyone should be concerned about that, especially as it's pretty clear
that the kind of logics underpinning this program have not gone away and that I'm sure
the legal case was just kind of a road bump sort of on the
way to continuing to do something similar, right? In your article, you described Robodebt as a
predictive policing technology where people are presumed guilty unless they can prove their
innocence. You know, as you're talking about with the matching of these figures from the different
agencies, and the assumption that if those averaged figures do not match, there must be a problem there,
right? And so presumably, that is also tied up in, you know, these false ideas about the objectivity
of technology, the all knowingness of these algorithms, things like that. So can you talk
about the implication of this shift where by using these technologies
and also by cutting back the staff at these agencies, the onus is then placed on the
social welfare recipient to have to prove if there is any kind of problem that comes up in these
flawed systems that are used against them? You're exactly right. What you've described
is exactly right. So the onus was shifted back
onto the welfare recipient. And so really it was assumed that they were guilty. They then had to
spend a lot of time. These are people who are living on the edge, struggling to make ends meet.
And as I mentioned, there were stories of people having to quit the jobs they did have just because
that's the level of time that responding
to a debt letter took.
And so essentially, they were put in a position where they were guilty until proven otherwise.
And I just think it's really helpful to think about this as predictive policing, because
that's what it is.
I think we're used to thinking about predictive policing in other contexts, and certainly racialised communities are the primary targets of that kind of predictive policing.
So in Australia and in many other places, you know, we have programs such as the Suspect Target Management Plan, the STMP, which is a predictive policing program that's known to preemptively target Indigenous children as young
as nine. And so, you know, those are the kinds of schemes that are in play. And a lot of the
literature, I would say as well, really talks about this idea of coded bias within systems
as it relates to race. And that is how this bias primarily presents itself. But it's an interesting
case for Robodebt because I think we don't have a lot of literacy in Australia anyway in talking
about class and how coded bias can also happen across any social category of difference. And in
this particular case, it's very clear that, you know, this algorithm is class biased.
It's assuming that people have a consistent income.
You know, who has a consistent income?
People who are full-time employed, high salary workers, you know, certainly not social welfare
recipients.
And the fact that this data matching has actually been going on since 2001 is really indicative of how deeply ingrained these kinds of biases can be in systems.
What you are describing there plays perfectly into my next question. These kinds of ideas around social welfare recipients, around the poor, do not just come out of the robo-debt program,
you know, that was created in 2015 or 2016. Right. We know that in Australia, but also here in Canada,
where I am in the United States and in parts of Europe, that these ideas about the poor and about
people who receive welfare are longstanding. And you discuss how, you know, they come out of ideas from the 1970s that really target this kind of class
of people in society. And so can you talk a little bit about that history and why it's important to
position Robodebt within it? Oh, that's a great question. And this is something that I think
doesn't get enough attention. We just don't kind of ever really grapple with how far back these really damaging and harmful narratives go.
And so actually we can trace them back even to colonial Australia, really, and the idea of the deserving versus the undeserving poor.
And this idea really that was adopted from Britain that the poor are culpable for their circumstance.
And so that idea of, you know, this fear of dependency,
of welfare dependency, was really baked into the fabric
of the nation.
And then we've seen subsequent regimes to really reify that idea.
And so, as you say, in the 70s, we had a really coordinated and targeted campaign
to craft these narratives around the welfare cheat and the dole bludger, etc. And so,
this wasn't kind of an accident, but we had, you know, a global network of policy think tanks and
other very powerful institutions investing a
huge amount of resource and effort into this. You know, for example, the IPA and the Centre for
Independent Studies here as well in Australia really played a central role in propagating this
distrust really in welfare recipients. And then, you know, we've had really excellent scholarship
from a woman in Australia called Scarlet Wilcock, who really tracked the emergence of these narratives.
So Centrelink as an agency really launched in 1997, and she looks in detail at how the government of
the time really launched these narratives specifically around the welfare cheat
and not only were they about dependency and crafting this real hatred of the poor,
but really these stereotypes were gendered in a huge way and characterised particularly women
and single mothers, of course, most of all, as deserving of punishment. And we've seen this archetype in many other jurisdictions as well.
And it's horrific. So a lot of media characterising those convicted of welfare fraud as greedy,
deceitful, sexually deviant, maternally incapable. And, you know, these have real world impact. So,
you know, women are twice as likely to be convicted of welfare fraud offences compared
to men. And so we can see these narratives having that kind of level of impact. And of course,
you know, you can't really talk about narratives in the welfare system without mentioning
all of the racialized narratives around
First Nations communities and migrants. And so there's been systematic kind of exclusion of
First Nations people and migrants, refugees, and people seeking asylum from the social welfare
system. Yeah, you know, I think those are all essential points, right? And I think what you're
talking about, about the targeting of single mothers, is something that we definitely saw in North America as well, as well as, you know, the kind of racial distinctions within the welfare system and how that particularly targeted people of color. Like in the United States, you know, the idea of the welfare queen was often a Black woman, as I understand it. And so you can
see how these ideas were created at a time so that certain people could be targeted,
created to kind of draw the ire of the population at a time when there
are these kind of larger political economic shifts that are happening. I don't know if you wanted to go into that a
little bit more, like the changes that were happening at this time to begin this kind of
shift toward targeting the poor and cutting back on these programs in a really significant way.
Yeah, the 70s were a really interesting time and it's very contested, you know,
what is neoliberalization? What exactly did it mean?
And I think ideologically it's really about the retreat of the state.
But in actual fact, what scholars and other analysts have really found is that the state
didn't really retreat.
It retreated certainly from some areas, but actually magnified its presence in a big way
when it comes to law, order, security and playing that function.
So we saw a huge rise in investment in policing and surveillance of all kinds.
And so as with anything, this affects racialised communities the most and we've certainly
seen that in the so-called criminal justice system and in border
policing predominantly, but we simultaneously saw that in welfare fraud policing as well.
Certainly. And I think those are essential points as well. So as you write in the article,
it's important to understand robo-debt does not exist on its own, right? I wanted us to start by
talking about robo-debt so people could understand it. But obviously, these technologies are not just
being deployed in this single way. And so it's part of a larger transformation of government
and of service delivery that you called robo-governance in the article. Can you give us some
context on the other ways that these oppressive technological systems are being deployed in the social welfare system? It's really come to my attention and the attention of others that
it's not just an isolated incident that we're seeing with Robodebt, but actually, you know,
the Digital Transformation Agency, as it's called, its remit is really this. It's wide-scale
experimentation. And I believe in its strategic plan
it talks about kind of fully automating service delivery
of government.
And so specific examples that have been really
in the public domain recently, one example is what was
colloquially called robo-planning.
And so what this was is as part of our,
what's called the National Disability Insurance Scheme,
so the disability support scheme,
the government was proposing that the budgeting of supports be algorithmically generated.
And so that's ludicrous, obviously,
and that proposal has been rejected now,
but that was certainly on the table up until just very recently.
There's also a trial of a blockchain payment technology within the NDIS as well. We've got
parallel systems in play like the cashless debit card system, which disproportionately targets
First Nations communities. And that's a system that quarantines 80% of welfare income
to ensure that it's not spent on alcohol, et cetera.
And so it's an extremely punitive and ineffective system.
We've got widespread use of biometrics now
within the kind of digital ecosystem of social welfare.
And just very recently, the government signed
a contract with a very infamous facial recognition technology company called iProov,
which is going to be used for myGovID, which is the kind of main ID system that enables you to
access social welfare and it links you to your tax as well. And so simultaneously to
a range of different developments within the welfare system, we've also had the Australian
government pass a bunch of legislation that's going to make it easier for data extraction and
data sharing between government agencies. We've had a bill that specifically makes it easier for law
enforcement to access data. So we can see that this is a more widespread issue than just robo-debt.
And actually, if you zoom even further back, you can see how AI and algorithmic or automated decision-making is really making its way into
all facets of life. So particularly those where marginalised groups are. So as I mentioned before,
law enforcement and border policing, we can really see it taking off in a big way there.
But, you know, even employment and, you know, who's shortlisted
for a particular position. Yeah. And I understand Canada, for example, uses automated decision
making increasingly for visa processing, which seems innocuous and harmless because they're
saying, you know, it's just speeding up the process for those who would already be very
likely to be granted entry. But, you know, that's just making a bigger and bigger gap between the mobility of different groups, isn't it?
So, yeah, we really can see how robo-governance is something that we need to pay attention to more broadly.
Absolutely.
And I know that some of your work deals specifically with AI and its deployment on the borders.
And I remember when I went to Australia for the first time a number of years ago now, when I arrived, the kind of passport gates and the use of facial recognition technologies at the international border, it was something that I had not encountered before.
It's rolling out now in North America, obviously, and has been for the past few years. But when I encountered it in Australia, it felt
like that was kind of novel to me. So do you want to talk a bit about the use of these technologies
on the border as well? Because as you're saying, I think that this is related to this kind of
broader deployment of these technologies and many different facets of life that are happening around the same time. Yeah, sure. So this kind of huge growth in AI within border policing is not just an Australian
phenomenon, but like you said, it's really a global phenomenon. So the leading work actually
in this space is being done by groups in the US. I would specifically highlight
a Latinx organizer group called Mijente, who's doing incredible work, really mapping out, you
know, what are the different companies that are involved in the value chain as it relates to
ICE, so Immigration and Customs Enforcement. And they've really been able to
articulate this in really simple and easy to understand terms, how, you know, one company,
so like a Thomson Reuters, for example, bizarrely, you know, often we think of them as a media giant,
but they are actually a data broker company. A Thomson Reuters might collect data from a range of different sources
and put it all into one place. Then we have, you know, the infamous Palantir providing data
analytics and then an Amazon Web Services storing that data. And so we can see how in the chain of
the data that leads to the list of people that will then eventually be deported, you've got a range of different actors in play. So it's quite disturbing, but their work, they've just released
a report, I think it's called Digital Prisons, but yeah, certainly check that out. But yeah,
in the Australian context, I would say it's actually less developed than in the US, but it is really concerning and interesting that, say, an IBM is really managing all of that SmartGate data.
They have several contracts across government, across different aspects of government services.
So this is a company that's also deeply involved in the smart cities agenda.
So it's really concerning all of these
developments because increasingly what we're seeing, particularly with COVID, right, where a
lot of these large tech giants are just more and more in bed with governments through lucrative
COVID response contracts. So Palantir, for example, has, I think, contracts with up to 12
or so governments to assist with their COVID responses.
Now, that's really concerning. And really, we're ending up in a world where mobility is going to
be more efficient and easy for the elite and more and more difficult for everyone else.
I think what you're describing there is really concerning, but it's also important to
kind of draw the connections, right? To see how
these institutions and how these companies are connected to one another, how this data travels,
the things that they are pushing for the future of these systems and how that would actually affect,
you know, not just the people who have the most privileged access to travel and ease of travel
and things like that, but everyone else
as well, right? Because I think what you're describing, whether it is technology on borders,
or whether it is technologies being deployed against people on social welfare, is how so many
of these technological systems are tested on the people who, you know, cannot push back against
them, cannot fight them, who are just kind of subject to
these systems. And that is something that we need to be paying attention to instead of just waiting
until maybe it comes for the people who are more powerful and actually have some ability to change
things at a stage when it could actually make a greater difference. Completely agree. And I think
that's one of the core features really that I always try and talk about when we talk about
this concept of robo-governance, which is that it is always tested on marginalized groups first and
then expanded. And that's always happened for a long time throughout human history. But the most
relevant example of what you're talking
about there is really this Pegasus technology
that's been unearthed recently, the Israeli spyware technology
by NSO Group, which was experimented on Palestinians,
who are obviously one of the most oppressed groups
in the world.
And now we can see how that very technology is being used
against activists and
journalists and other groups. And so we need to, like you say, really understand the interconnectedness
and the way that these systems kind of just expand over time.
It's an essential point and such a good example of it as well. You know, I think what you're also
describing here with so many of these
implementations is how this is also a question of democracy and of accountability, right? Because
you explain that, you know, few people know how these systems work or develop these systems,
you know, technologists and certain lawyers. So they gain a greater power in determining how
these processes that govern our lives actually work. It's a further expansion of this technocratic ideal through the justification of efficiency and things working properly and all of these things. As you say, it's happening really in the dark, there's no
transparency whatsoever. It's really a vacuum of regulation. There's no way of really finding out
what's going on. Yeah, we're put in a really difficult position. And I think the challenge
here is that a lot of the thinkers on this that are quite prominent, unfortunately, are calling for kind of self-regulation mechanisms, right? So mechanisms such as, for example, you know, a human rights
impact assessment. And so for someone that is, you know, an impacted individual or community member
or someone that's working in this space, the concept of a human rights impact assessment,
you know, when you're talking about some of the most malicious actors, you know, globally, like a Palantir, it's not likely that a human
rights impact assessment is going to do very much for the people that are affected by this.
And so we really need to think of more creative regulatory mechanisms.
And this is the kind of thing that really needs independent oversight and actual harder regulation to actually unearth what's going on here. It's really critical.
Have you seen any proposals for those types of regulations or progress toward those types of regulations in Australia? Or are there any other examples around the world that stand out to you? In the Australian context, the Australian Human Rights Commission just released their final
report in a long series of reports called Human Rights and Technology. And, you know,
it really articulates, I think, some of the issues that we've been talking about here really well and
in a lot of depth. And in terms of the responses that it's calling for,
it does err on the side of self-regulation and it really is kind of looking to industry
to regulate itself and really relying on this set of AI ethics principles. And AI ethics
principles are really widespread and being taken up quite widely across other jurisdictions.
But yeah, I guess I'm, as I said, more interested in what are the harder legally binding mechanisms
that we can find to address this issue. So the most relevant recent example I can think of is
the draft legislation that's just come out of the EU, which kind of articulates this hierarchy of risk. So high risk AI would be most biometric
technologies. And so they're calling for high risk technologies to essentially be banned. I think
then there's a bunch of caveats and, of course,
enforcement is a huge issue, but that's the kind of starting point that I would like to see
Australia coming from. The Australian Human Rights Commission was saying, you know, facial
recognition should be banned, but I don't see much of a difference between voice biometrics and face
biometrics and fingerprints. And actually, all of these things
are interconnected, and it's the interconnectedness that makes them powerful. So let's talk about
banning the collection of biometrics. Let's take a broader systematic approach.
Yeah, obviously, I love that. I feel like, and I'm sure that you share this,
one of my frustrations is that these technologies are
often proposed. And so often I feel like there's not a consideration of like, is this the type of
technology that we actually want to see deployed in our societies? Or should we be open to saying
that this is a technology that doesn't work for us, that is doing negative things in our societies,
and that we should be okay in saying this is not a technology that we should allow to be deployed because it is having
these negative effects. And if we need things to work in a different way, or if we need technologies
to help people, then we can develop those specific technologies to do that. But we need to be not
just believing that because a technology is created, it
naturally has to be accepted and a regulatory framework created around it.
Sometimes a technology just shouldn't exist. I love that. I agree. I think that some
technologies just shouldn't exist. And many of these technologies, because they have proliferated
so widely, you're exactly right.
We shouldn't be kind of regulating around them. We should be doing our best to rid ourselves of
these technologies. You know, automated decision-making doesn't belong in the welfare
system. The welfare system, like several other systems that we've talked about, is inherently
punitive. It is a punishing system. Every aspect of it is punishing, from, you know, the mutual obligation requirements
to the assessment process, the eligibility process, all of it is quite punitive.
And so when you increase efficiency in a punitive system, there's only going to be
one result.
And so I think we should be making really bold requests of governments to actually rid those kinds of systems of these technologies.
Yeah. And it sounds like what you're saying there is not only rid these systems of the
technologies, but also transform the system so they're working in a very different way
than how they work right now. We've obviously covered a lot of ground. We've talked about
the Robodebt experience in Australia and the effects that that has had, but also how that fits within these broader kind of trends that we've seen over decades. Is there anything you'd leave people with? I think it's important that we collectively organise not just
within the social welfare system but actually across multiple systems. So border policing,
law enforcement and the criminal justice system. I think because many of these communities are
actually one and the same. And so, you know, where I'm from,
Hallam, which is like a little suburb in the southeast of Victoria,
and so young people in that suburb are subject
to predictive policing by law enforcement.
They may be also subject to predictive policing
by what's called Taskforce Integrity, which is like a systematic
effort to kind of target particular geographic groups with police. And so I think just thinking
a bit creatively about how we can come together is something that I'm interested in doing. And
so if anyone's listening to this and also thinking along those lines, I would love to hear from you.
Yeah, I think that's an
essential point. And, you know, going back to that interconnectedness between all of these things and
understanding how these systems work together, and how they affect people when they do. I really
appreciate you taking the time today to talk to me about these issues, and to educate, you know,
people outside of Australia about these things that are going on that we should be learning about
and how obviously these are not just issues
that are happening in Australia,
but are happening in many other countries as well,
even if they might take a slightly different form
than what has happened in Australia.
So thank you so much.
Thank you for having me.
It's been a pleasure.
Dhakshayini Sooriyakumaran is a PhD candidate
at Australian National University.
You can follow her on Twitter at @Dhakshayini_S.
You can follow me at @parismarx, and you can follow the show at @techwontsaveus.
Tech Won't Save Us is part of the Harbinger Media Network, and you can find out more about
that at harbingermedianetwork.com.
And if you want to support the work that I put into making the show every week, you can
go to patreon.com slash techwontsaveus and become a supporter.
Thanks for listening.