Tech Won't Save Us - Chatbots Won’t Take Many Jobs w/ Aaron Benanav

Episode Date: April 20, 2023

Paris Marx is joined by Aaron Benanav to discuss OpenAI’s claims that generative AI will take our jobs, how previous periods of automation hype haven’t resulted in mass job loss, and why we need to ensure it doesn’t further empower employers. Aaron Benanav is an Assistant Professor of Sociology at the Maxwell School at Syracuse University and the author of Automation and the Future of Work. Follow Aaron on Twitter at @abenanav.

Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network.

Also mentioned in this episode:
Aaron wrote about why chatbots won’t take your job for the New Statesman.
Microsoft is rolling out generative AI in its enterprise software services and its Azure platform.
The Writers Guild is proposing contract language on AI in scriptwriting to ensure writers still get the credit.

Support the show

Transcript
Starting point is 00:00:00 You know, every time you hear this proclamation by tech people that they finally closed the gap between computers and human beings, just know that every other time they've said that, it turns out that that gap was much larger than what they had thought. And that goes all the way back to the 19th century when you had these images of robots that were, you know, steam powered that were going to replace human beings. That gap is always larger. Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx. And before we get to this week's episode, just a reminder that this month it is the third birthday of Tech Won't Save Us, so we are running a membership drive to ask listeners like you, who enjoy the show, who listen to it, you know, maybe every week, maybe not every week, but feel like they learn from these conversations, to consider going to patreon.com slash tech won't save us and becoming a supporter. That not just allows me to keep doing this work, doing the research to have these really in-depth interviews with experts on a whole range
Starting point is 00:01:11 of tech topics, but to ensure Eric can keep producing the show and putting it all together and Bridget can keep putting together our transcripts every single week. And we've set a goal this month of getting 200 new supporters over on Patreon.com so that we can make a special series digging into Elon Musk, his history, and we still have plenty of time to get all the way there. And if you want to help us meet it, you can go to patreon.com slash techwontsaveus, help us out this month for the podcast's third birthday, and allow us to keep making the show. With that said, this week my guest is Aaron Benanav. He was previously on the show on November 5th of 2020, episode 34, way back in the early days of the podcast. Aaron is an assistant professor of sociology at the Maxwell School at Syracuse University and the author of Automation and the Future of Work. Now, I wanted to have him back on because we've been having a lot of conversations recently about generative AI, chat GPT, and the potential impacts of these technologies on the work that people are doing
Starting point is 00:02:26 and what it might mean for their jobs into the future. And Aaron, based on the title of his book, as you can tell, has done a lot of research into the impact of automation on work and has been thinking about what this means in the context of generative AI in these more recent developments. Obviously, it will be no surprise to you that we don't think it's going to have the massive impact that some people are hyping it up to make it seem like, and we discussed that in this interview. But I do want to make one note before we get into it, and that is that we don't talk so much about what the generative image and animation tools might actually mean, in part because I want to do a specific episode
Starting point is 00:03:06 on those things in future to talk to someone who has more specific knowledge on those applications. So look forward to that, you know, in the weeks or, you know, month or so ahead. Hopefully I'll find the right person to be able to have that conversation with. But for now, you know, I hope that you enjoy this interview, this conversation with Aaron as we discuss the potential implications of these technologies and compare it back to earlier times when it looked like automation and AI were going to wipe out a bunch of jobs. And then, not surprisingly, that is not actually what happened. And once again, I think that this is going to be another one of those moments, though that is not to say that generative AI will have no impact at all on work. It's just that I don't think that the impact, and certainly Aaron would agree, that is going to be as transformative as some of these boosters and hype people would want us to believe. And so the best thing that
Starting point is 00:04:00 you can do this month to help us out is join supporters like Leonie in Boston, Simon from London, Jim from Seattle, Kevin from Canada, Riley from California, Don Cusack, Vincenzo from Copenhagen, Caroline from Asheville, North Carolina, and Jeremy from Dearborn Heights by going to patreon.com slash tech won't save us, becoming a supporter and helping us hit our goal of 200 new supporters this month. Thanks so much and enjoy this week's conversation. Aaron, welcome back to Tech Won't Save Us. It's really great to be here. Thanks for having me, Paris. I'm very excited to chat with you again. You know, it's been two and a half years since you were last on the show. I just can't believe it's been that long, to be quite honest. Sometimes it's hard to even think I've been doing the podcast for that long. But last time you were on the show, we were talking about what was happening in the mid-2010s when there was all this scare and hype around whether robots were going to take all of our
Starting point is 00:04:56 jobs and what really came out of that. And of course, with everything that has been happening with chat GPT and generative AI lately, I was thinking about that period, especially as there's been more talk of these technologies taking a bunch of jobs. And of course, you know, I jumped into your DM inbox recently to ask what you were thinking about it. And then you wrote an article in the New Statesman recently, kind of giving your thoughts on what was going on here. So I thought it would be a good opportunity to have you back on the show and to discuss all of this with you. And so I want to lay a bit of kind of the
Starting point is 00:05:29 groundwork, a bit of that history out for the listeners, and then we can kind of dive into it. So in 2013, there's this Oxford paper that says 47% of jobs will be lost to automation within one to two decades. And that kicks off a bunch of kind of sensationalized coverage about automation and job loss in the years that follow that, right? And there are a number of other studies that kind of follow on this one that say, oh my God, so many jobs are going to be lost. And there's these questions of like, what is going to happen there, right? The idea is that plenty of jobs are going to be eliminated, including all driving professions like truckers and taxi workers, but also the media covered all kinds of robots serving coffee and taking care of the elderly and operating Amazon warehouses. And even more with the message
Starting point is 00:06:09 that all of these robots were just on the cusp of taking away all these jobs and what were we going to do after. And then that kicked off campaigns for universal basic income and even bigger visions like fully automated luxury communism. But then the expected job losses didn't come. And now we're all hearing about, you know, how there's a tight labor market where there aren't enough workers, even at the same time as there's talks of once again, technology taking all of our jobs. So if we kind of go back to those that period in the mid 2010s, what was going on then that you think really stands out? And why didn't that mass wave of job destruction that many people
Starting point is 00:06:45 were expecting actually come to pass? Yeah, there's a sense of deja vu in our current moment, right? We see this paper that's come out from OpenAI saying that chat GPT and related technologies are going to take, they say, 49% of our jobs, which is 2% more than the Frey and Osborne paper from 10 years ago predicted. It's a really good time, I think you're right, to look back on the original context of the kind of 2013, 2014, 2015 period, which was a major period of automation hype. And to think about what was going on then and what happened with all these claims about robots. And also back then, it's important to say too, there was already this sense that it
Starting point is 00:07:25 wasn't just robots, but also artificial intelligence in the form of more simple machine learning and deep learning algorithms, which are also the basis in many ways of the chat GPT and generative AI revolution. When I went back and looked at the Frey and Osborne paper, one thing I found very interesting is that the paper tried to take a methodology that was applied to the question of offshoring. Like what kinds of jobs in the U.S. were susceptible to offshoring? Could we look at the tasks those jobs involve and guess which jobs are likely to move overseas, where you have remote workers often working for much lower wages, right? So what jobs can be offshored and what can't?
Starting point is 00:08:06 Frey and Osborne took that idea and tried to say, what if we apply the same thing to computers? What if we looked at the kinds of jobs that computers can do, that they could take over, that U.S. workers are currently doing? And I think already you see there what the problem is, right? That like human beings in other countries, they may have a different set of skills. There might also be limitations to the kind of work that they can do from far away, from a distance, but they still have human minds and they have human cognitive capabilities. And the same wasn't true of the computers and robots and machine learning that Frey and Osborne were referring to. The other problem is that in the original paper, as in the one that OpenAI just published, the evaluation of which jobs could
Starting point is 00:08:52 actually be done by computers was just done by a bunch of computer experts who didn't really know anything about what those jobs actually required, right? So there's just something so silly and self-referential about it. And then the other thing about that paper, which is also repeated in the new one, is that they asked the technology itself. In that case, you know, in the Frey and Osborne paper, they asked a machine learning algorithm to categorize jobs and say which ones it could replace. And in the current paper, they asked ChatGPT to do the same thing. And I think that's how you know. I don't know.
Starting point is 00:09:27 When I see that, that's kind of when my bullshit meter goes through the roof, right? Because there's just something so silly about that particular strategy. But in any case, it did lead to this huge wave of hype, as you said. We heard so many stories about machines, robots, and computers that were going to take over all of our jobs, or at least 47% of them, as Frey and Osborne claimed. And, you know, I mean, I'm always looking back at those technologies and seeing which ones succeeded and which failed. And the truth is that the vast majority of them, like 99% of them, have failed. They didn't work out. They were just startups that were riding that same wave
Starting point is 00:10:11 of hype to, you know, ply their wares, and almost all of them failed. And of course, a lot of startups failed, but you don't ever hear the stories about that in the media reporting. Yeah. They just kind of move on, right? There was all the hype and then they don't talk about how it all failed and didn't work out. They just move on to the next thing that gets hyped up and is the next thing that the tech industry wants us to be excited about, right? I think it's so fascinating to hear you describe, you know, that paper, everything that came out of it, but in particular, how, once again, you know, we're asking the technology to predict, like, the impact that it's going to have as if the technology has any kind of brain behind it or any kind of understanding. It's just kind of pulling stuff out.
Starting point is 00:10:53 And, you know, as I was talking about with Emily Bender, it's kind of like a very advanced kind of autocorrect sort of function, right? There's no kind of intelligence behind it. It's not making a real kind of prediction or what have you. You know, it's just this tool that we've kind of created. I think it's important to say in retrospect what went wrong with the technologies, because you see these statistics about robots, like the number of robots in the economy and across all these different economies is rising so quickly and the price of robots is falling. And that was what really excited people and made them think that there was just this incredible
Starting point is 00:11:25 transformation going on. It's important to know that a lot of the robots per worker statistics you see are about robots in manufacturing, which is a sector that accounts for a smaller and smaller share of the workforce. So just on that basis alone, it's important to know that the robots that really haven't worked out are robots in the service sector. You might see some experiments with using robots as, sort of, busboys and staff and so on. But for the most part, robots in services outside of Amazon warehouses, where they do very specific jobs, more like a factory job, haven't worked. And what I tell my students, which is what I think everyone should know, is that the main thing that robots do
Starting point is 00:12:05 is they pick up heavy things and move them from place to place. So whenever you hear about innovations in robots, ask yourself the question, is this robot picking up something heavy and moving it from place to place? And you'll find that that's what the vast majority of robots still do. And that's why most robots are in the car industry. So like 50% of all the robots that are deployed in Europe are deployed in the German car industry. And those robots, they do a few other jobs like welding, painting, some simple types of assembly jobs. Those are jobs that robots have been doing for a very long time. And what all the analysts in manufacturing say is that a lot of the robots we see, they're not qualitatively
Starting point is 00:12:45 different from the robots we've seen over the past 20 or 30 years. And that's when those robots made the biggest difference. The ones like self-driving cars that are operating outside of the factory context, it's all the problems that you mentioned already that are still there and present with generative AI, which is just that these tools are not very good at operating in kind of open and, like, unpredictable environments. They tend to fail a lot. And so all the efforts to kind of bring robots out into those unpredictable spaces have run into those same problems. The robots in manufacturing are usually in cages. They're actually separated on the floor from interacting with other human beings. And so-called collaborative robots, which are allowed to work around human beings,
Starting point is 00:13:26 are legally required to move more slowly and have less force, which just means that they're not good at doing most of the things robots are used for, and they represent a very small portion of the robots. But insofar as robots in services, you know, are operating in these unpredictable environments, they also need to be battery-powered,
Starting point is 00:13:44 like Spot the dog or whatever, the robot dog. I think its max runtime is like 90 minutes or something. So there's all these problems with doing these things with robots. And the machine learning stuff, as your guests have already said before, is just very limited in terms of what it's able to do and what kinds of embodiment it can provide to the robot.
Starting point is 00:14:04 So those are the main technological reasons why these things failed. And as in the past, you know, every time you hear this proclamation by tech people that they finally closed the gap between computers and human beings, just know that every other time they've said that, it turns out that that gap was much larger than what they had thought. And that goes all the way back to the 19th century, when you had these images of robots that were, you know, steam powered that were going to replace human beings. That gap is always larger than what they say, which doesn't mean it might not be closed one day. It's hard to say what the future holds, but most people in the field are not basing their predictions on hype or moneymaking. One researcher said that AI is always 40 years away. No matter when you ask people how far away it is, it's always 40 years, which is just
Starting point is 00:14:51 researchers' way of saying we have no idea how you get there. I think those are all such important points. And it's great to have that context on where these robots are actually being used. You know, your point about the Spot robot makes me think that, okay, you know, we're kind of held off right now from these robots being deployed in really kind of dystopian ways because we don't have the battery technology. But if we get those battery advances that are promised for electric cars, that they keep saying are right on the horizon, then, oh man, we're really in for a dystopian future then with those robots. And, you know, you say about the robots in car manufacturing, many of them being kind of caged off
Starting point is 00:15:27 from the rest of the kind of workforce or whatever. You also see that in Amazon warehouses, right? Where a lot of these robots are in use is also in a very kind of caged off area of the workforce where a lot of this stuff is happening and where these robots are being used. It's not on the same floor or in the same areas as where the human workers are going to collect things
Starting point is 00:15:46 and whatnot, right? So I think that's an important point to make. And so I wonder, you know, if we look back at that period, just keeping on this for a little bit longer, obviously we didn't have these technologies like eliminate a whole load of jobs as were expected. But what did we see happen, you know, since kind of the mid 2010s
Starting point is 00:16:03 through the rest of that decade to jobs? And how was technology kind of affecting the quality of jobs? That's a really good question. So a lot of the point of my book, which came out in 2020, was about contradicting the robot and the AI hype of that time. And a lot of what I base my argument on is that the way we see technologies progress in the economy, the way we measure that is by looking at labor productivity growth rates. And people often find that confusing because they think, well, isn't that a measure of like the productivity of workers? We're interested in the productivity of robots. But the way those statistics are designed, they pick up all of the increases in efficiency
Starting point is 00:16:46 that come from augmenting human labor, even replacing human labor with robots. They're not a measure of what human beings contribute to production. They're just a measure of how much you produce per hour of human work in total. And so the decade of the 2010s, which were the decade of this incredible automation hype, turned out to be the decade that saw the lowest rates of productivity growth since the modern measure came about during World War II and its immediate aftermath. So there's this huge contrast between the hype story and what we know by looking at the data. And what that led me to say is that in reality, there was a lot of insecurity in that
Starting point is 00:17:26 decade. And that really affects a lot of workers. What I stressed in the book was that the recovery from the Great Recession was very weak. So it just took a long time for all the people who lost their jobs, who dropped out of the labor force to find a place in the labor force. And wage growth really didn't pick up until the very last years. It was one of the longest periods without a recession since they've been measuring it. But wage growth only picked up in the very last year or two before the COVID recession. Even then, it wasn't as significant as predicted. So at the time, the Fed was saying that we don't think that these low unemployment rates are actually reflective of tightness in labor markets because we don't see significant wage growth.
Starting point is 00:18:08 And I attributed that, and that's a whole long story, to this tendency of especially mature capitalist economies to just grow more and more slowly as more and more of their jobs are in services. And services tend to see lower rates of productivity growth. And I think that the tendency of jobs to kind of shift into services swamped out a lot of the technological effects. Although actually, even in manufacturing, they recently redid the numbers. They found that in the United States, at least, productivity growth in manufacturing was zero over that whole decade. There was just, you know, unbalance across all the firms. So there's just a real contrast
Starting point is 00:18:46 between what people are saying and what actually happened. I think the part of the story that I can talk about more, if you like, is also about how technologies were implemented in that context, because I think the use of technologies to surveil, watch over, collect all this data about workers, change power relations within the workplace. And I think that's a really important part of the story as well. Absolutely. And I think it is as well, right? And I think that if we look at kind of the hype and the narrative around robots taking all of our jobs, like my feeling is that those narratives actually distract us from what is actually happening with technology, how technology is actually used to change jobs
Starting point is 00:19:25 in these ways that you're talking about? Yeah, I think that there were a number of people who were analyzing this in terms of digital Taylorism, which I think is both, you know, a very good framework for initially thinking about it, and then also shows some limits with regard to what's happening today. But the basic idea is that what managers are doing is they're trying to collect information about how workers do their jobs, especially workers who are skilled, workers who have some trade that makes it necessary to pay them more money. And managers try to figure out, what is it that these workers are doing? How can we make it so less skilled workers can do these jobs with technology, right? Because if you can do the same job with less
Starting point is 00:20:05 skilled labor, you can pay everyone who does that job less money. And so I would say that key example of this was Uber versus taxi drivers, right? Like people talk about how taxi drivers in the past, they needed to know their area where they drove really well. They needed to know all of these back routes and, you know, special ways to get around traffic in order to make more money. And Google Maps was able to figure out what a lot of those special routes were. And also, I think because they were tracking the way people drive and the way people responded to traffic patterns, they were able to kind of give everyone the capacity to use these special routes that used to be available only to people who'd really driven for a long time and knew the area. So there's a kind of de-skilling that happens there. There's a way that this software, Google Maps, is collecting all this information that makes
Starting point is 00:20:54 it possible for anybody with a car and a license to start doing, maybe not as well as a taxi driver, but to kind of do the same work without the skills. You know, we were promised that Uber was going to automate away driving, but actually all it did is it kind of opened up the possibility for a lot more people to do this work, and especially to do it part-time, than before. And they did, of course, pay taxi drivers less. Now, when it turned out that the promises about self-driving cars didn't pan out, Uber used more and more its technologies and its platform to try to shape how workers worked. They started paying workers less for each ride. They invented all these bonus systems, and Veena Dubal and others have really talked about this a lot, and started to use these algorithmic techniques to manage the work and to figure
Starting point is 00:21:42 out ways to get workers to be more available while paying them less in total. And I think that's an important part of the story that takes it a little bit beyond the digital Taylorism account, because when these techniques of scientific management were invented in the 20th century, they required a lot more managers. Like, if you were going to use workers who didn't have the skills of your former workforce, you needed a lot more managers to be there watching what those workers do. And what these digital technologies clearly make possible is that you can extract all this information from workers, you can surveil them. On that basis, you can kind of have the algorithms watch them. There's all of these, you know, biases built into that and all of these disturbing features of the algorithmic management system,
Starting point is 00:22:24 but you don't need as many managers as before. There's another economic theory that I think is very powerful for thinking about this, which is called the efficiency wages theory, which just says, to summarize, I think it's got some issues, but the basic idea is that when you don't know exactly what workers are doing, you have to pay them a little bit more. You have to try to get them to identify with the firm and also make the threat of job loss more significant if you can't really observe what they do. And so what we know about a lot of these technologies as they're used by Amazon, Uber, everywhere else, is that they make it much easier for the firm to really identify the productivity of this specific worker and to threaten those workers if they don't do their jobs according to what Amazon or these other companies want for them.
Starting point is 00:23:12 So all those surveillance technologies, it wasn't just that they were like de-skilling workers, but they were making it easier to track the minute differences in contributions across workers. And that on the whole made it possible to pay workers less. So you see it with the truckers; anywhere where there's more capacity to, like, track and collect data on workers, you've been able, through this, like, decline in efficiency wages, to pay workers less.
Starting point is 00:23:38 And all these things, you know, they're just turning people, they're trying their best to use the technologies we have since we can't replace workers, to make workers more robotic, to attach all these wearable technologies to them, to try to make them more and more robotic in their activities at work. It's pretty bad. Yeah, that's the real kind of dystopian scenario, right? But it's not the one that you hear about with all of these kind of, you know, endless predictions of robots taking all of our jobs
Starting point is 00:24:04 and things like that, right? It sounds very different, but the kind of implications of it are very sinister and really suck for a lot of workers who lose, you know, a lot of their power in the workplace or have their jobs kind of redefined in this way, such that, you know, the employer has so much more power over them and, you know, they have less ability to kind of push back on working conditions to get better pay and all of these other things, which is what we've seen from companies like Uber and Amazon and the like over the past kind of decade. I want to kind of start bridging our conversation to what's going on now. And during the first year or two of the pandemic, there was another kind of wave of writing or articles that were arguing that we were about to see a massive
Starting point is 00:24:47 surge of investment in automation technologies and robots because of the experience of the pandemic and what employers had to experience when workers were in lockdown, couldn't come to work, all this sort of stuff. And now, of course, we're in this moment where we have a very tight labor market based on everything that we can see, even as, you know, the Fed and other central banks around the world keep raising rates. It seems like the economy is still keeping up and there's still quite a number of people in work. They're not losing jobs to the degree that, you know, that economists and central bankers
Starting point is 00:25:18 wanted to see. So what did we actually see in the aftermath of that period of the pandemic and the predictions that there was going to be a ton of new investment in automation as a result of that, that was going to take away a lot of jobs? Yeah. I mean, just the short answer is that no, that didn't happen. Right. And it didn't happen for many reasons. One of them is that, as I've just been explaining, the technologies are just not up to the task, right? The technologies just can't do what the proponents of those technologies claim. It's also just that, like, in the pandemic, the future became really uncertain, and getting robots to actually do work better than
Starting point is 00:25:56 the people currently doing those jobs requires massive investments. It requires all this tinkering with the work process to actually make it work well. And the pandemic was just not a period in which companies were like, let's massively invest and transform the whole way we do things. No one knew what the future was going to be. And so companies just didn't respond that way. And that's generally what we'd expect from pandemics. They create, and they create as a shock, even much further into the future, a lot more uncertainty about the future. And that tends to reduce the degree to which businesses invest. And so what was the story of the pandemic? It was that, you know, middle class and professional
Starting point is 00:26:35 workers were sometimes able to use new technologies to work from home and to protect themselves from being exposed to the virus. Whereas many human beings, poorly paid, so-called essential or, like, everyday hero workers, had to step up to the plate and put themselves in danger and continue to work during the pandemic. And I think the pandemic had two effects. One is that it made people appreciate, if only for a moment, that it's really true that we can't do the things we need to do without many other people. We have to trust other people, we have to depend on them. There's a kind of relationship of solidarity there that was only briefly felt, but I think is very important for
Starting point is 00:27:16 us to recognize. Um, the other thing it did was it kind of gave people a sense that there's other things that are more important than money, right? And you kind of see that a lot. I think that when we talk about what's going on with jobs today, it's really important to know that it's not necessarily that there's just this incredible demand for work. It's that because all these businesses shut down for a number of years, there's some pent up demand as a result. But it's also that like, you know, if you had a restaurant in a dense and wealthy neighborhood of a city like New York or Boston, the workers who used to work in those restaurants just like couldn't afford to live
Starting point is 00:27:56 there anymore. And they left, you know, they went back home, wherever they're from. They lived off of their pandemic savings, because there were these boosts to people's incomes and savings at that time. And so they estimate that there's about two and a half million workers who are missing from the labor force in the US. And the US and UK have been the countries that have been hardest hit by this. Part of it is older workers who are afraid of getting sick, who retired early. Part of it, which is why you see this, especially all this talk about restaurants that can't hire people. It's really in the wealthier neighborhoods and cities where there's a roaring back of all this restaurant jobs. But those areas have become so expensive that people just left and they're not living there anymore.
Starting point is 00:28:42 Or they took the time of the pandemic to find other things to do. A lot of people died, you know, or got long COVID. And that just meant that they weren't around. And of course, a big issue, especially in the US, is that there was just a lot more care work. So for people who had long COVID, for elderly people, for children, like, the amount of care needed went up, and especially as these things became more expensive, people couldn't afford to work. They had to drop out of the labor force to take care of the people around them. And that's been a big problem in the US. You have an overall lower labor force participation rate because the country never solved its problems with elder care and childcare. So it's a complicated story of what's going on. None of it fits with the robots are taking our jobs or the AI are taking our jobs story. But I think it's important to point out
Starting point is 00:29:29 those real things that have been going on and that have been creating a lot of tension in the economy and will continue to do so for some time. Yeah, it's also like a big difference from the kind of general narratives that we hear about why people aren't working, right? Like there's been a big push by financial publications, places like the Wall Street Journal, to say people just don't want to go back to work and whatever, when actually there are a lot of reasons why there's a tighter labor market right now
Starting point is 00:29:54 that if they really cared to understand, like it's very understandable. And they're putting children to work in markets again, so. Right, yeah, you know, they're very excited about opening up child labor in Alabama and some other places again, right, which is just terrible, right? And I think that also gives you quite a contrast from the narratives that, like, robots are taking our jobs or whatever, and then on the other hand, we're lessening the regulations on child labor so that more companies can hire
Starting point is 00:30:20 children. It's just wild, and I don't think we really need to go on more about it. That would be a disturbing headline, right? Like, the children are taking our jobs. Yeah, right. But now let's move on. You know, obviously, we're talking about how there's this tighter labor market right now. But also what we're seeing in this moment is all of this excitement around chat GPT and generative AI and these tools that have really kind of exploded over the past year as the kind of boom and the hype around cryptocurrencies and the metaverse have tailed off. And so now there needed to be something else. And this is where, you know, generative AI really comes in, right? You have your large language models, you have your kind of image generation tools, things like Midjourney, Stable Diffusion. And of course, the narrative
Starting point is 00:31:04 that we're hearing now is that these are the new technologies, these artificial intelligence technologies are what is going to wipe out a ton of jobs. As you said, OpenAI had a report that said that 49% of jobs are at risk because of this. So what are they actually saying about the potential impacts of these new technologies on work and whose jobs in particular do they imagine are in the crosshairs because of this? So in methodological terms, they just took over exactly what Frey and Osborne did. I mean, they do mention that there were critiques of that perspective. And I think it's very important to know what those critiques said. So a number of researchers did redo the Frey and Osborne numbers. They found that the machine learning algorithm had just miscategorized all these jobs,
Starting point is 00:31:50 as I'm sure the ChatGPT model did in the current paper. But also what further research showed, and there were some researchers at the OECD who really made this point most strongly, is just that the entire way of thinking about how jobs change, that Frey and Osborne adopted and that this new paper also adopted, is just wrong. Like, it's just not the case that there's some threshold of tasks in a job. And if a computer or a robot can do those tasks, then the job disappears. There's a false assumption there. And it leads the public, I think, when they hear, when these papers and this methodology gets translated by journalists and others into a
Starting point is 00:32:30 kind of larger media framework, you get asked these questions. Like I got asked this question a lot after I wrote the automation book. The first question is like, so what jobs have already disappeared? You know, what jobs have gone away? Like, and what's next? And when you listen to the automation hype people answer that question, it's very interesting, because the truth is, like, not that many jobs have gone away. You know, like, we still have wait staff at restaurants, we still have nurses, you know. And the jobs that have really gone away, like, they're gone, are jobs like, and even there it's not totally true, but like travel agents. You know, that's a really, I think, important one. There used to be a lot more travel agents. Now there's very few. There used to be a lot more, I don't know, people who manually read like utility meters, and those are increasingly being replaced. Like toll booth workers are increasingly being replaced. But even so, it's like, it's very hard to identify jobs that have like totally collapsed in terms of their employment. So
Starting point is 00:33:25 there's just something wrong about that methodology. And part of the reason is that as technologies change, the content of the work people do just changes, right? And it's just different. Like to be a school teacher today is just different than it was 20 years ago, 50 years ago, 100 years ago. It doesn't mean the number of school teachers has declined. Actually, the number keeps growing because productivity growth is low. But the kinds of tasks those workers do just changes with changes in the technology. And there's a few really important corollaries to that insight. One is that when you look at this database that the researchers from OpenAI were using, they were like kind of acting
Starting point is 00:34:06 as if every job is like a fixed set of tasks. And if you could just get rid of those tasks, then a computer could do the job. In reality, the way you do jobs is just very different across workplaces. That database, called O*NET, is an attempt to kind of figure out on average what a job requires. But in reality, jobs look very different in different places. And that's true across firms within one country. It's really true across all the different countries of the world. And there's a lot of reasons for that. One is that, you know, an example I like to think about is like if you're on a film crew and you're in Hollywood versus if you're in Bollywood in India or
Starting point is 00:34:46 Nollywood in Nigeria, probably the tasks that you do just look very different, having to do with different access to technology, right? But there's also all these legal frameworks. There's also all these collective bargaining agreements. There's worker power. All of those things shape how technology changes work. It's not just a story about technology. It's a story about economics. It's a story about politics. And it's a story about other social
Starting point is 00:35:11 factors. And all of that is kind of effaced in the approach that these researchers took. But in the end, the point is that the methodology was exactly the same in this paper as in the Frey and Osborne paper. They just took this database called O*NET and they categorized jobs in terms of the tasks they did. They asked a bunch of computer experts which of those jobs could chat GPT and related technologies accomplish. And they said 49% of jobs saw 50% or more of their tasks taken over by computers. And that's the headline statistic that they're giving. It's just that there's no way to know. Even with a job that had 50% of its tasks replaced by a machine, and we can talk about how bad these things are at predicting that, there's no reason to believe that a job that changes by 50% will necessarily go into
Starting point is 00:36:04 decline or disappear. It could just mean that what those workers do changes. I think it's really fascinating to hear you outline that, right? Because it gives us some good insight into what the company is actually saying and what is behind these kind of headlines and statistics that we're now seeing around chat GPT taking a ton of jobs. But I think we can also see how people who are very invested in this industry, who are invested in these kind of technologically determinist narratives, are really echoing and trying to push this notion that AI is going to eradicate all this work and all these jobs, right? Like, I saw venture capitalist Jason
Starting point is 00:36:40 Calacanis tweet the other day, you know, and he's been on a real kind of tweeting binge lately, people might remember all his tweets about Silicon Valley Bank there about a month or so ago. And he was tweeting, and I'm quoting here, AI is going to nuke the bottom third of performers in jobs done on computers, even creative ones in the next 24 months, white collar salaries are going to plummet to the average of the global workforce, and the speed at which the top performers can write prompts. So this is kind of what he's arguing. There was a long thread that followed that, that I didn't read the whole thing. And, you know, I'm certainly not going to read it out on this podcast. But then I also noticed in the replies, the first reply that came up for me
Starting point is 00:37:17 was from a guy named Scott Santens, who was kind of agreeing with this and pushing this notion that it was going to eradicate a ton of work and that this was going to have huge consequences. And for people who don't remember, Scott Santens is a big proponent of universal basic income. And back in the mid 2010s, he was really pushing this notion that robots and AI in that time were going to eradicate a ton of jobs, that all drivers, truck drivers in particular, were going to be out of work and this was going to transform economies, because it worked for his kind of argument that what we need is a basic income in order to respond to this, right?
Starting point is 00:37:52 And so I think it's interesting there just to see kind of the same sort of people gravitating toward this narrative once again, because it serves particular kind of agendas that they have in arguing in favor of the idea that technology is going to eradicate all this work and what are we going to do about it. So I would like your thoughts on that. But I also want to note, you know, you mentioned in your previous answer how being a teacher has changed over the past 100 years and will continue to change into the future. And one of the things that some of the proponents, people like Sam Altman, argue chat GPT and these generative AI tools are going to be able to do for us is to replace a ton of teachers and doctors and make education and medical advice so much easier for people to gain access to. So I guess, what do you make of
Starting point is 00:38:36 these kind of grand claims that are being made around chat GPT and generative AI based on the things that you've been telling us? I think it's a really good question, and it's very hard to predict the future, right? So what I focused on in my intervention is just showing how bad the methodology is on which these predictions are being made. I think it's harder to know what's actually going to happen, right, and how work will change. I think that something I focus on a lot is that over the past 100 years, productivity growth in services has been very low. So if you want to teach more children, you need to put more teachers in schools, right? Like everybody knows what is the metric of like how good a school is.
Starting point is 00:39:19 Isn't it often the student-teacher ratio? Like better schools have more teachers for every student, right? And that just shows you how low productivity growth has been, that we still think it's like this bespoke model like that. And it's possible that these technologies will make teaching easier. There's been claims like this, of course, for a long time, like Khan Academy, all of these internet services, we've already been traveling down this pathway. There are potentials in those technologies to make education better. Maybe there are potentials that, you know, will help poor performing students to get assistance that lets them rise up towards an average student, right? I mean, one of the big reasons why middle class people are so scared of these technologies is like, what if the tutoring that I'm able to get my kid because it costs like thousands and thousands and thousands of dollars is now available in a less good way, but it still does something at a cheaper price that's more available to working class families, right?
Starting point is 00:40:23 Like that's what they're really, that's a big fear that those people have. And it could happen. I really don't know. Like it could be the case that these technologies help educators in some way, or you could at least imagine a world in which that's possible. You can imagine many worse worlds in which these technologies are used for really nefarious purposes, even in education. But it seems very unlikely, at least to
Starting point is 00:40:46 me, given the limits of these technologies, that they'll actually be able to replace classes and replace teachers. They might make teaching slightly more efficient, which would be amazing because teaching is a field in which it's so inefficient and you need more and more teachers to teach more and more people. But I think if you look at the whole trajectory of change there, from the availability of like online courses to Khan Academy to all these things, you'll just see that the actual benefits have been much less in every round than what the proponents have claimed. So that's what I would say about teaching. It's interesting to me that we can see that there are potential ways that it can be used like in a positive way to help as long as we know like the parameters where it's useful and we're not kind of distracted by the hype around how these technologies work versus how people like Sam Altman might want us to think that they work because it benefits the company, right? But then I feel like we're also in this kind of environment in this economic system where we can see how technologies are actually deployed and how they're actually used, where I think you can very much see
Starting point is 00:41:51 the notion that chat GPT can be seen as a teacher or as a doctor or a nurse or something be used to say, oh, now we don't need to hire so many teachers because chat GPT is going to take over and do some of this stuff. And I feel like, unfortunately, that's the more likely scenario, just based on how things are going and the unfortunate state of the society that we live in, and, you know, how these technologies tend to roll out, right? They're always used in a way that kind of empowers, you know, capital, which is not something I have to tell you, you're very aware of that, instead of, you know, improving the world. So more than happy for you to comment on that. But also, yeah, let us know, how do you think ChatGPT might actually affect some of the work that's out there, even if, you know, it's not going to eliminate 49% of jobs or whatever, as we might be
Starting point is 00:42:40 misled to believe? Well, I think that what you just said is really important. And I think you can think about some of the nefarious ways it might change jobs or just, I don't know, standard capitalist ways that it might change jobs, which often tend to be nefarious, which is that I think if you think about that model I was mentioning before of like how Google Maps was able to collect all this information about routes and then kind of make any driver able to know how to get around traffic in the way that formerly skilled drivers used to. The internet is just full of information produced by skilled cognitive workers, right? Like, you know that one of the main tools that's used to build the translation software is the Canadian Parliament,
Starting point is 00:43:26 which produces, you know, and UN, like all of these bodies that have to produce the same text in multiple languages, and then have translators who are meticulously doing that work because it has to be perfect. These databases are able to take all that information. And what they might be able to do is more and more kind of like reduce the skill level that's needed to do that work, right? In a way that will ultimately be polarizing of all this information that comes from skilled workers that's already available in various kind of digitized forms and making it so workers who aren't as well trained could kind of like look at the version that ChatGPT has produced of a text and then edit it. They might be able to achieve levels of productivity that used to only be true of skilled workers. But then on
Starting point is 00:44:25 the other hand, as we know, like chat GPT just isn't up to the job. So there's still going to be tons of computer programmers, not just lower skilled programmers checking for mistakes in what chat GPT writes, but also all these programmers who are concerned with, like, yeah, building systems that chat GPT just isn't very good at building. And again, what it points to is that you can't think about these things just in terms of technology. You have to think about a wider set of economic and social factors. And just to consider the economic factors here, no one knows whether increasing programmers' productivity by like 10 or 20 percent will result in a loss of
Starting point is 00:45:07 employment for programmers of 10 or 20 percent. It's much more likely that if programming gets even a little easier and, like, a little cheaper as a result, the demand for this could explode, or it might just increase a little bit, right? But the point is that, like, you never know whether bringing down the price of something will mean that fewer people will be employed in it or whether, on the contrary, more people will end up being employed in it. That's how goods go from being luxury goods to kind of mass goods, right? It's that they become cheaper and then suddenly they're more available. I think it's really important to point out that, as Gary Marcus and Emily Bender and others have said, there's a lot about these technologies that could be used in really dangerous ways and could be used by scammers and could be used by people with really bad goals, bad actors, as they say, to do all kinds of things that could make our lives substantially worse and also really reduce workers' productivity.
Starting point is 00:46:02 And I think that's important to talk about as well. Yeah, absolutely. No, I completely agree with you on that point. I wonder, maybe as we start to wrap up our conversation, we've been talking about how when it comes to these big promises and bold statements that robots or AI are going to eliminate all of this work, that we see time and again, you know, we can go back very far, right to the 1800s. And we can see that those predictions don't tend to play out as people expect at the time; the kind of sensationalist headlines aren't realized
Starting point is 00:46:37 a few years down the line when we actually see the impacts of these technologies. And so I think it's very likely that once again, with chat GPT and generative AI, that we don't see the same things. But that doesn't mean that there won't be impacts, as you're saying, right? And so I want to put this question to you in the same way that we saw in the 2010s, where these technologies were not used to kind of eliminate work, but rather to improve algorithmic management to ensure less autonomy for workers, to move more workers into a gig economy where they're carved out of employment relations? Do you think that there's a risk that instead of eliminating a ton of jobs, that these technologies like generative AI are implemented in such a way that once again, employers get even more power over workers, or maybe workers in sort of different sectors of the economy than previous technologies allowed, that might be the ultimate impact of these technologies. And that is another reason why we shouldn't be distracted by these kind of
Starting point is 00:47:33 big, grander claims and should be paying more attention to what actually might be happening here. Yeah, I think that there are really a lot of ways you can imagine that happening. Microsoft is already releasing kind of like enterprise versions of chat GPT. And I think one of the things that they might allow employers to do, much like what I was talking about with Amazon warehouse workers or truck drivers, is that they might make it possible to like have a better surveillance infrastructure. The generative AI in an enterprise context might make it easier to know what particular workers are doing without having to ask them by like looking through all of the data they're generating and kind of like producing a summary of it. That might make it possible to like better track what individual workers are contributing and then to kind of lower the efficiency wages to
Starting point is 00:48:25 replace carrots, as it were, with sticks. And none of us work well when we are surveilled, right? Like, no one does good work when we feel like people are constantly watching us. We can't lose ourselves. Like, of the few joys that people have in work, one of the main ones is this feeling of, like, losing yourself in your work and not constantly reflecting on what you're doing. But the more we worry that we're being tracked as we work, the less we can kind of lose ourselves in what we're doing. And I think, yeah, these new technologies will probably have the same effect as old ones; it's kind of irresistible to employers to just start, you know, generating a lot of data, but also being able to understand more what that
Starting point is 00:49:05 data means with these tools. And that's why it's really important both for workers to organize themselves and fight back, because what you see is that in countries where workers are stronger, like in Sweden and in other Scandinavian countries, they have more of a say over how technologies are implemented. It's more possible to imagine with a stronger working class that we could ban certain forms of algorithmic management. Frank Pasquale has also talked about this, like just say, no, you're not allowed. A company is not allowed to gather this kind of information on its workers. So you can imagine legal changes. You can imagine workers through their own power negotiating how technologies are
Starting point is 00:49:45 implemented. And I think you can also imagine just a different world where research into these technologies just takes a different form. It's not based on move fast and break things and try to figure out the most profitable ways to use things, but ways to actually meet people's human needs and produce a better life where we all flourish. I think you can see in some of these technologies, like the threads that don't get followed that could actually lead to an improved life for people, but are just not the focus of the researchers who are getting paid a lot of money to try to figure out what you can do with these technologies. And again, I think that the generative AI stuff, I agree with Emily Bender and Gary Marcus. I just think it's just a lot more limited than what the proponents are saying. But that doesn't mean that it's going to be totally useless.
Starting point is 00:50:33 It is going to, I think, change work. It'll just do it a lot more gradually and a lot less severely than what the proponents are claiming. Yeah, no, I think that is an important point, right? And it's always something that we need to keep in mind with these technologies and how they are rolling out, and also how we're talking about them and thinking about them in the moment, right? I think a good rule of thumb is always not to be distracted by the hype and the PR narratives of the companies, and to actually try to get a good grasp on what is actually going on so you can understand the potential implications, instead of a few years down the line realizing that some really negative things have happened that you maybe could have
Starting point is 00:51:13 been able to curtail or lessen if you had realized those things earlier. And I would just say, you know, on your point about workers having some power around this, one thing that we're seeing right now is that in the United States, the Writers Guild is renegotiating with the Hollywood studios. And one of the things that they immediately put on the table was a clause in the contract around generative AI to ensure that even if it's used for some aspects of scriptwriting, the ultimate credit goes to the writer and the technology can't be credited in that way. So that's just one example of where, if you have that kind of power, you can potentially push back and try to get some wins on this early on. But, you know, if you're not in a
Starting point is 00:51:54 union or you don't have that kind of collective power, it becomes much more difficult to ensure that employers aren't implementing it in a way that takes away the rights of workers. So to close off our conversation, last time when we spoke, we talked a bit about what kind of a post-scarcity world might look like. You know, we talked about how fully automated luxury communism is probably not going to happen and is pretty unrealistic, because technologies are not really taking away jobs and work in the way that is often predicted. And so I wonder, you know, it's been two and a half years since we talked. I wonder how your thinking on this has evolved as you've continued to think about the ways
Starting point is 00:52:31 that technologies are deployed in society and how they might be deployed in a way that is actually beneficial for all of us, instead of just benefiting employers and making our work even worse and less well paid, as we too often see. When I was studying all of the AI hype and thinking about its proponents' own internal model or vision of a better world, I think that their vision is one where we can use technologies to meet everyone's wants, every last thing, every whim that people have. Like, they're trying to think about a world where we have these kinds of super-powered computers that can just do everything. And I think what's interesting is that those visions often draw from science fiction,
Starting point is 00:53:17 like Star Trek or the Culture series or Cory Doctorow's Down and Out in the Magic Kingdom. But as the title of that book suggests, if we actually read that literature, what it shows is that even if we lived in a world with limitless resources, there'd still be a lot of reasons why people have conflicts, why people are unhappy. And a lot of that has to do with the fact that human beings are meaning-making animals. So we care a lot about the meaning of things, right? And we fight over the meaning of things. And I think that insight from that literature, so much of it, is being lost in all of the generative AI talk, right? All of these efforts to say, no, no, no,
Starting point is 00:53:54 these machines do understand, they are making meaning, when in fact, as your last guest pointed out, that's a really bad way to understand what these technologies are doing. I think that if you read that literature and think about what's going on, we can try to save the future from Silicon Valley and from its vision, and realize that the much more viable future vision we could have, one that would radically transform our lives, is not a world where we try to meet everyone's last wish and whim, which you can't do anyway, no matter how many resources you have. There are just, you know, experiences that are unique, or,
Starting point is 00:54:29 yeah, there's a lot of reasons why you can't do that. But we could get to a world where we meet people's needs, right? And especially in a world where we're facing, you know, devastating climate change, where there are still many people who can't eat. We saw during the pandemic how few of our resources had really gone into healthcare, in this deeper sense of not just healthcare but also care for people in all these other senses: mental healthcare, childcare, elder care, and so on. Like, we could get to a world where we use our resources,
Starting point is 00:55:00 our human resources, our technologies, to really securely meet people's needs. Every hype is also a scare about how these technologies are going to take away your security, or even the tiny dream of security that you might have. And there's no reason why technologies have to do that. We could use technologies to improve our ability to meet people's needs with human labor. We could use them to create a world where no one has to worry anymore about going hungry, about not having a place to sleep, about the earth burning up due to climate change. And that would be a world where I think humanity would really be transformed, even if people still have all these wants and wishes that they can't fulfill.
Starting point is 00:55:41 And in fact, you know, as psychologists or psychiatrists will tell you, desire, unmet desires, that's a really important part of what it is to be a human being, right? Like, having everything at your fingertips isn't always the best thing for you mentally. So I think we can envision a world like that. And I think these technologies can contribute to that. While I was a critic of the fully automated luxury communism literature, and also of these different ideas about how we can use, I don't know, very rapid computer processing power to plan a whole economy with a computer like the one in Westworld, you know, a computer that just plans everything for everyone and then human beings are just cogs in its machine. Like, I think that stuff is really silly, but I think what you are seeing
Starting point is 00:56:26 with information communication technologies and even generative AI are more and more ways to imagine that people could coordinate themselves without bosses, with less of a role or even no role for markets. Like you could imagine a different research program that could lead to a world where people not only meet their needs, but feel like they have some say in the world that we live in and the future toward which
Starting point is 00:56:50 we're driving. And so I think it's really important not to get lost in Silicon Valley's utopia, which supports a whole silly venture capitalist strategy. Just watch these people: literally last year, everything was crypto, and now everything is generative AI. Like, it's just so obvious that this is just a part of this endless hype cycle, which is under a lot of pressure, it should be said, due to rising interest rates. You know, the real basis of the hype stuff is coming under pressure. We should reject both their utopias and the corresponding dystopias. We should worry about what these technologies are doing to us. And they're having really negative effects on people's mental health, especially for children, especially for young girls. So we should think about all of those negative effects.
Starting point is 00:57:35 We need to create our own positive visions to fight for. I think that's really important. And I worry sometimes that in the tech space there aren't enough of those kinds of counter-visions, counter-utopias, or realistic counter-visions of where we could go. I would like to contribute to that with my work. Yeah, you know, I'm totally on board for that. And I think that it's fantastic to have us thinking more in that way. You know, just pulling on what you said, and maybe to reconnect it and tie it all up into a bow for us, I think it's really interesting to think about
Starting point is 00:58:10 kind of the narratives that we have around technologies that we've been talking about through this entire conversation and around the notion that they're going to eliminate all these kinds of work that we know are even quite important, right? Like I was talking to James Wright recently about the efforts to automate care work,
Starting point is 00:58:26 elder care work in Japan and how that has not worked out, but how there was a moment in the mid 2010s when this was kind of the model that we were all gonna emulate and Japan was gonna show how this was gonna happen and then it was gonna roll out to the rest of the world. And I think that the risk there
Starting point is 00:58:41 and the risk with so many of these technologies is that they distract us from the recognition that these are things that do need to happen. And ultimately, if we're going to further incentivize this work, make this work better, and be able to deliver these kinds of care systems, this kind of healthcare, and all these other services that people rely on, the question is not whether we have good enough technology to do it so that we don't need workers. It's always inherently a political question. And I think that always framing it around the technology, and what the technology is going to do, kind of distracts us from
Starting point is 00:59:18 that more political conversation and that ability to think about where we're putting our resources and what kinds of things actually matter to us, instead of just asking whether we're going to develop technology that's good enough for us to have, you know, better healthcare or what have you. We don't need to wait for the technology; rather, we just need to have the political momentum and the political will to act on those sorts of things. And I think that's very much the type of thing you are talking about in your work, certainly when you talk about what a better kind of society might look like, even if we use technology to help us achieve that. Yeah, I thought that episode you did with James Wright about elder care in Japan was really great, because I think that that case
Starting point is 00:59:59 study really illustrates something that we can think about throughout the whole economy. And many case studies have generated the same results, which is that automation, a term that already has so much hype around it, works best when it's bottom up rather than top down. That is to say, you know, it works best when it's about solving problems that workers identify in their work process. And even in a society that is so hell-bent on using technology to replace workers, workers are going to have to be involved in that process, and that would be a world in which there's much more distributed and shared power across these different groups in society. And I think that that's exactly the kind of vision of technological change that OpenAI and these other Silicon Valley institutions do not want you to think about.
Starting point is 01:01:03 They want you to think about technology as something that comes from on high, from them, that will either save or destroy the world, and we're all just spectators, you know, in their mad scientist gambit. Yeah, in their grand projects and the world that they're trying to realize. No, I completely agree. And I think it's such an important insight to have when we think about technologies and who should really be behind technological development, pushing it forward and deciding the types of things that we're working on and trying to achieve. And obviously the whole venture capital system, the whole Silicon Valley model, is set up in such a way that it's the complete opposite of that, completely flipped on its head. It's all coming from above, from these wealthy people who are
Starting point is 01:01:40 choosing what the focus should be and how those technologies are actually implemented in many cases, as we've been talking about, in a way that is very much against workers and against the public to kind of further enrich the venture capitalists and the people at the very top. Aaron, it's been great to have you back on the show to get your insights on all of these questions. Thank you so much for taking the time.
Starting point is 01:02:00 Yeah, it's a real pleasure to talk to you again. Can't wait till I have more work to share so I can come back on another time. Looking forward to it. You can follow me at at Paris Marks, and you can follow the show at at Tech Won't Save Us. Tech Won't Save Us is produced by Eric Wickham and is part of the Harbinger Media Network. And if you want to support the work that goes into the show every week, you can go to patreon.com slash tech won't save us and become a supporter. Thanks for listening. Thank you.
