Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3X08: The Invisible Workers Behind the AI Algorithms with Alexandrine Royer
Episode Date: October 26, 2021

AI is spreading around the world, both in terms of technology and workforces. Many tasks that support artificial intelligence are being outsourced globally, with many workers exploited or mistreated as they take up the opportunities offered by the information economy. In this episode, Alexandrine Royer, Student Fellow at the Leverhulme Center for the Future of Intelligence, joins Chris Grundemann and Stephen Foskett to discuss the prospects for global AI workers. The situation is akin to the globalization of manufacturing or shipping, with powerful corporations exploiting differing regulations and approaches around the world. Given this situation, collective action and advocacy might be the only way for workers to improve their situation.

Three Questions
Stephen: How long will it take for a conversational AI to pass the Turing test and fool an average person?
Chris: Are there any jobs that will be completely eliminated by AI in the next five years?
Tony Paikeday, Senior Director of AI Systems at NVIDIA: Can AI ever teach us to be more human?

Links
The urgent need for regulating global ghost work (brookings.edu)
The wellness industry’s risky embrace of AI-driven mental health care (brookings.edu)

Guests and Hosts
Alexandrine Royer, Student Fellow at the Leverhulme Center for the Future of Intelligence. Connect with Alexandrine on LinkedIn.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 10/26/2021
Tags: @SFoskett, @ChrisGrundemann
Transcript
I'm Stephen Foskett.
I'm Chris Grundemann.
And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI,
a podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
Chris, we've talked in the past about the ethical and moral implications
of artificial intelligence
and the assumptions that some people have about the machine and who's actually doing the work.
A couple of recent episodes come to mind.
First of all, when we talked to Sophia Trejo about the impact of artificial intelligence on third world countries
and the changing demands there
and the different levels of access to AI technology.
We also talked to Saiph Savage
about how artificial intelligence
is creating opportunities,
but also challenges for so-called ghost workers.
Yeah, absolutely.
I think that the conversation about,
especially the global South's involvement in AI and technology in general and kind of the power centers within the United States and Africa, or sorry, the United States and Europe, was really interesting because it actually had a much more positive tint than I was expecting going into it.
Thinking about this idea of ghost work
and this idea of workers that are kind of working
on things that maybe are one, not that fun to work on,
potentially actually psychologically damaging
in enough quantity and also not being paid very well.
Whereas Saiph's message was that we can actually use AI
to give those workers a toolkit
to help fight back against that.
I think it's worth diving into the darker side, so to speak, of what those problems really are and those challenges and that other side of that blade.
Well, in terms of further investigation, we decided to follow on that with a recommendation from Sophia.
And we're joined today by Alexandrine Royer. Alexandrine is a student fellow at the
Leverhulme Center for the Future of Intelligence. Welcome to the podcast.
Hi, everyone. Thank you for having me.
So tell us a little bit about your field of study and your work on this whole topic of
ghost work and how AI is impacting regular people.
Yeah, sure.
So I'm currently a PhD student
at the University of Cambridge,
researching questions surrounding the digital economy.
And I had an illumination last year
when I read Mary Gray and Siddharth Suri's Ghost Work,
which kind of addresses the question
you guys were bringing up before about what are some of the darker sides of developing AI algorithms. And one of them
is sort of the labor inequities that kind of arise when you're developing and training these
AI or ML models. And so across the world, you find workers who operate from their computers,
who are doing tasks such as identifying images, retraining data sets,
recognizing images that have been flagged for inappropriate content and sort of
following through. They're doing sentiment analysis of reading sentences and trying to
sort of capture what's the emotion that's evoked by the sentence. So all these kinds of tasks that are
essential to building a well-functioning algorithm are being done by very cheaply sourced labor that's really dispersed in different parts of the world. So you find
a huge global workforce in the Philippines and different parts of Africa and different parts
of South America, but also in North America as well, you have a lot of workers in the U.S. that
are kind of turning to this form of digital labor. And so the kind of forms of sort of algorithmic based work
and algorithmic based management that are sort of arising
in this new digital AI driven economy
are sort of the questions I'm focusing on at the moment.
And in my own area of expertise,
it's more on the African context,
but it's really a global phenomenon.
Yeah, that was really what came from our conversation
with Sophia was really an eye-opening thought about how global inequality of AI access is driving different aspects of AI in different parts of the world, and how we have sort of these AI superpowers, and then these, I don't know, AI workforces that are
distributed globally. And it recalls as well a conversation we had with Saiph Savage about
bringing, empowering these workers in some ways, but also exploiting them. So you said that you're,
mostly focused on the African continent.
What kind of AI work ends up in Africa? So in East Africa, so in Kenya and Rwanda, you're finding a lot of investment into ICT infrastructure. So there's kind of this attempt
to recreate the Silicon Valley model, but in African cities. And what you're sort of observing
is that there's this kind of global discourse
of the digital nomad that you can do anything from anywhere
and all you need is internet connectivity and a computer.
But a lot of times, the tasks that workers find online
are these kind of micro tasks or sometimes larger tasks
like software development or graphic design
or other sort of computer-based skills
that people kind of turn to.
So this sort of idea
that anyone can be a digital entrepreneur
for many parts of the globe is sort of,
when you look at the reality on the ground,
a lot of the work that these people end up doing
is poorly paid and sort of clerical image identification
or dataset training or
even transcribing where, you know, they're competing against a global market. So the wages
aren't necessarily very high and they don't have any sort of labor protection because it's a
platform that's managing them. So they pick up these tasks on what's advertised and they don't
have any kind of form of checks and balances to mitigate
any kind of harsh treatment by employers. So, you know, if you pick up a contract and the person,
the company you're picking up a contract for, decides not to pay you, oftentimes these
workers have no recourse. So what we're seeing, I guess, against this kind of, like, you know, narrative of Africa rising and the benefits, which there are many, to enhancing ICT infrastructure, is that a lot of the work that people do there doesn't live up to that narrative. Especially with some of the hype, as you said, around many of these topics today.
You know, to me, it kind of calls back some scenes from the industrial revolution itself,
right? And maybe the last turn of the century, where we had, you know, in the United States,
anyway, miners and other kind of physical workers doing really dangerous jobs and not being treated
very well by their employers in many cases.
And then even like, you know, times when Pinkertons and then other, you know, quasi law
enforcement agencies were brought in to break up initial unions and things like that. And obviously
this doesn't sound quite as dire, hopefully. I don't think anyone's, you know, necessarily dying
by working on AI today, but I do think that, you know, the treatment sounds very, very similar.
And so maybe the recourse is the same, or how are you or how are groups you're working with
approaching this to, is it just shedding light for now? Or is there unionization? Or I mean,
what else are the options here to available to these workers, or maybe just more generally
available globally?
So I think the parallel you draw with the industrial revolution is sort of an apt metaphor or an apt historical example, because I think sometimes when there's a lot of discourse around
AI and labor, you're assuming that there's a big fear of a lot of people becoming redundant,
but labor won't disappear. It's just going to emerge in new forms. And the forms that we're
seeing is this online platform labor. And sort of what's happening is that our
conventional notions of how we sort of evaluate labor is often based around, you know, wages and
time, but this platform economy doesn't operate on the same, you know, nine to five
structure. So it's kind of difficult to sort of establish, you know, all the principles about a 40 hour working week or an eight hour working day. You don't find that in these kinds
of jobs and tasks that people are picking up. So it's more about establishing a fair wage and even
having that kind of waiting time when you're sort of waiting to pick up a contract or you're kind of
even with Uber drivers while they're waiting for a ride to have that waiting time be compensated as well. And in terms of traditional unions, it's not as easy because
you're really it's so global. So you could be competing for a contract with someone in the
Philippines and you're in Kenya. And so it's kind of a race to the bottom because you don't know who
your competitor is in terms of wages. So if they bid lower, you have no choice but to bid lower, even though, you know, the cost of internet might
be more expensive for you, which is why you don't necessarily want to accept tasks at a cheaper
rate. So what we're sort of seeing is a lot of activism by the workers themselves.
So the workers that pick up contracts on Amazon MTurk, they have Reddit forums where they kind of share tips on how to sort of game the system to their advantage.
So how to have kind of the highest paying contracts appear at the top, how to kind of modify their worker rating.
So, you know, if they've been unfairly rated by an employer.
So a lot of it is happening in sort of this sort of like algorithmic activism ways. And
there is a brilliant initiative by the University of Oxford that has the Fair Work Foundation that
sort of rates these platforms in terms of sort of different factors such as fair pay, fair conditions,
fair contracts, fair management, and fair representation. So those are sort of the five
principles they have, and they rate different platforms. But since this form of labor is kind of new, there hasn't been that much action in terms of, you know, governmental policies or regulations
to sort of ensure that people are being adequately paid and that there's a sort of living wage that's in place, or that's ensured by whoever's offering the contracts.
Another thing that this reminds me of is the flag of convenience and international shipping, where you have global crews working on globalized ships.
And effectively, every player is trying to exploit the system to maximum effect in order
to extract the most profit they
can. And so in a way, it's sort of the same thing, because I imagine that a lot of these tech
companies may have an international affiliate or subsidiary in some other country that gives them
certain legal or tax benefits. And then that subsidiary may be the one that's outsourcing
this through some other kind of international contract. And at the end of the chain is this worker
in a country that may not have appropriate laws and worker protections. It really kind of makes me
feel almost hopeless to think that there is any opportunity, even with well-meaning people, to help those workers.
And then I hear from you that the workers are organizing and using these platforms and kind
of turning the tables here to use these platforms to organize and help each other, which, you know,
really changes the story for me. Because otherwise, honestly, it feels very, very hopeless. Yeah, there is some hope and not to darken that hope, but it is very much
the platforms who at the end of the day get to decide. So Amazon MTurk, for example, decided to
change the way it was paying its workers. So now only certain places are paid in US dollars. And a lot of it now
is converted into Amazon gift cards. But for workers across the world that just don't have
Amazon delivered to them, who were doing all these contracts, it's as if their wages disappeared.
And they had no form of collective action against Amazon, you know, to actually compensate them for
the tasks they did. So as much as there can be
forms of like collective action by these laborers, the platforms kind of remain all powerful. And if
they want to change their terms and conditions within a 24 hour period, there's not that much
these workers can do. Yeah, I think it's further complicated, it sounds like by, you know, kind of
something that you had alluded to, and Stephen brought up there too which is that you know not only are these you know just workers
kind of competing against each other from different places where maybe the conditions
are different and so of course the way they do pricing from their perspective is different
but but to both your points they're doing it across various different countries that have
vastly different laws i'm sure and so you know, collective action becomes essentially the only way
to, I guess, fight back, for lack of a better term there. Because, you know, even if you went to,
you know, country by country and change regulation, I mean, that seems like one, a very,
very slow road, and two, one that would take a very long time to get to the point where you
finally got to that last country to change the laws. Because until you get to that one,
there's still someone there who potentially is willing to work under that
regime and kind of continues that race to the bottom,
as you mentioned. So is there any potential for,
you know,
and I think this is something that one of our previous guests,
Saiph Savage had talked about,
which is kind of providing toolkits for these workers.
And that was kind of using AI for good piece of things where potentially you can actually give people maybe another platform that kind of looks at the platforms that are out there to work on and can kind of give you feedback or can give you the tools you need to choose the right one. I know we're seeing some, you know, in the U.S. anyway, and
that's kind of where my visibility ends in a lot of these cases, but I know there's some folks who
are kind of doing platform arbitrage, so to speak, right, with the different driving apps. So there
may be someone who has a phone that they've got their Uber app on, and they've got a phone that
they've got their Lyft app on, and maybe they've got a couple others, depending on what city they
live in, and they switch back and forth depending on which rates are better. And so they're able to
kind of game the platforms a little bit by looking at, you know, kind of a cross platform work.
But of course, that's not always possible, right? For some jobs, there's only one platform,
or for some areas, maybe only one's accessible. But I wonder if there's work being done in that
realm to kind of empower workers to just see these kind of differences between the platforms and use the ones that are most beneficial for themselves?
I mean, it is also, I think that's an interesting comment. And it's also, there is variety within
the platforms themselves. So I think that's why the Fair Work kind of index is really interesting
to look at, because there are some platforms like Samasource, which I think now
goes only by Sama, which is committed to paying a livable wage for their workers,
reflecting the country that they're in. So it's not as if these kind of regulations are
insurmountable. You can kind of have a good estimate of what's a livable wage based on the
geographical location of where your workers are based. And so they're kind of committed to that sort of, you know, fairness and practice. And then you have
Amazon MTurk, which often falls as being one of the worst ones. And so does Clickworker. But you do
have other platforms that are kind of trying to sort of address some of these issues, especially
when it comes to algorithmic management and sort of workers being unfairly penalized by how the platform algorithm works. So, you know, if people on, for example,
Upwork get an unfair rating, to have measures implemented within that platform so that they
can, in response, rate their employer or leave comments that sort of reflect that. So a lot of
these solutions can also be embedded within the platform itself. And so that's also up to the developers,
so not just to wait for governmental regulation or for the workers themselves to kind of unionize
and put pressure on these platforms. Yeah, and we've talked a lot about kind of just the conditions
of payment and how folks are getting paid and how much work they're having to do and whether
they're getting paid for their downtime. But something we talked about, I think, at the very beginning of the podcast, I'd like to dive into just a little bit deeper,
is maybe some of the types of work that's going on.
My understanding is that in addition to kind of some of the, you know, the click work and, you know, mechanical Turk and things like that,
that there are potentially also folks who are working in environments where their job is to maybe tag images to help train machine
learning algorithms, and especially ones in particular for images that are supposed to be
removed algorithmically, that someone has to first identify those. And then so that could be
images of different types of death or other kinds of abuse, and then maybe language around those as
well. So I mean, obviously, that's not maybe not on these same platforms, but I think that's a big piece of kind of AI work
that's going on that maybe needs a light shined on it as well.
Definitely.
The content moderation jobs
are probably some of the most horrific.
And, you know, there's been exposés and articles
about Facebook employees dealing with post-traumatic stress
from, you know, the images they've had to repeatedly see.
And, you know, these workers don't necessarily get a forewarning of, you know, when a reported
image is brought to them, you know, they have to flag it. And so if you read Ghost Work,
there's this really interesting chapter about this worker having to, you know, sort of identify,
day in and day out, whether it's a thumb or whether it's male genitalia. And so
there's kind of no sort of, you know, consideration of what this sort of does psychologically to a
worker all day. So the kind of emotional labor and the sort of support that you need to be doing this
kind of work is really, you know, not considered at all. And there is a variety of work that people
can do. And so I think that some of
the stereotypes or some of maybe the misconceptions around this kind of click work is that you assume,
you know, maybe it's workers in low income countries who are kind of desperate for this
type of labor, but you also have, you know, workers in the US who have college degrees,
and even master's degrees, who for different reasons, either it's caring obligations for
family members, or it's just difficulty finding a job on the labor market, kind of turn to this work. And so you have highly qualified
people doing this kind of work as well. And yeah, there is definitely like, you know, it's not all
the same when it comes to online labor. There's some people who do enjoy, for example, there's
Transcribe Me as a platform, where people kind of enjoy the liberty of being able to, you know, watch a video and transcribe out the words, and that's, you know, a way to engage in digital platform labor. But it's really to recognize that
there's some work that's being done that's pretty horrific, that people who do it aren't being,
you know, looked after properly. And that's another aspect of sort of the shopping for
different international jurisdictions and things like that as well. I could totally see
a company trying to find a location that has more lax laws
about worker protections and so on as a way to get some of the worst of the worst tasks done.
But at the same time, as you point out, and as we heard when we talked to previous folks about
sort of who is responsible for doing a lot of the work behind the scenes,
it's not just people in third world countries. In many cases, it's people who are disadvantaged in Europe and in the United States who maybe don't
live near Silicon Valley and they need to work remotely, or maybe they have an unusual
schedule because of their family life or something. And in many cases, those workers might, as might the
workers in Africa or South America, they might say that they actually find the work to be an
opportunity, because otherwise they wouldn't be able to be part of the digital revolution at all. So I guess, how do we balance those
conflicting demands between exploitation, but also opportunity?
That's an excellent question. I think, you know, platform labor isn't bad in and of itself. It's
really the conditions in which it's being done. And I
think, you know, it's ensuring that people get an adequate wage, that they have sort of resources
if they need to go against, like, the way they're being managed algorithmically, that there's a
designated person at the platform that they can reach and contact, that their contracts are
fair, and that the conditions outlined in the contracts they're picking up, you know, are realistic and reflect, you know, the payment or have an appropriate time
schedule because some of these workers feel like they have to operate 24 seven in order to get
a contract done. And, you know, fair representation. So ensuring that, you know,
these workers have different forms of recourse outside of, you know, the platform itself, somebody else to go to and sort of,
you know, an overseeing board or agency that can kind of help these workers if needed.
So I don't think, you know, it's not just a dark future for platform labor in itself,
but it's just really ensuring that the way
it's being done, you know, is equitable and fair and ethical. So, yeah. So many of the people
listening to this particular podcast are in the IT space and actually working at some of the
companies that are deploying AI applications. What advice would you have for those people in terms of how to make sure
that they're not exploiting workers and how to make sure that things are equitable in terms of
those ghost workers behind the scenes? I think for one, it's to be kind of aware of where your
training data is coming from. So this data set that you have, that you, you know, if you purchase one from an external, you know, client or anything, you know, where is it coming from? Where are you
getting this data when the model is being deployed? What are the checks and balances?
Who's responsible, you know, if something goes wrong or if images are flagged or, you know, if
sentences are flagged, who's in charge of monitoring that and checking that? So I think
just being really conscious of, you know, if you're purchasing models and
if you're purchasing training data sets, just to kind of inquire as to where, you know,
how has this been produced and where is it being sourced from and who's kind of doing
the sort of brunt work of keeping this model running.
Which is honestly pretty similar to the advice that we might give somebody in terms of the topics that Chris brought up earlier in terms of, you know, globalized manufacturing or something like that.
I mean, it's worthwhile for a consumer to know, you know. I think that we
should make sure that these workers have good conditions or good protections, right?
No, I think that's absolutely true. And I do think that there are a lot of parallels there
between, you know, kind of, you know, being a good consumer, I think, right. And this plays
out into much broader areas than just AI. But I think that in my experience,
that voting with your wallet is probably the most effective way of voting on the planet,
especially in a globalized world where, you know, voting for, you know, my city councilor may not
have much effect on this, or even my president, but deciding what businesses to do business with
and how to do business with them and under what conditions, I think is something that can be very
powerful on a global scale for issues like this and many others.
And it's interesting to think too about another aspect
that gets a lot of negative press is the global impact
of the biggest social media companies, for example,
or the biggest tech companies.
But that also can cut both ways,
because if I look at it from what you just said, Chris, we have a situation where we can improve the ultimate environment for the end workers by putting
pressure on these companies. So, you know, for example, you know, I don't want to call them out
specifically as a bad example of this, but, you know, if somebody saw a Facebook or a Google or
an Apple exploiting workers in this way, consumers actually have a great deal of ability to pressure a big company
like that, whereas it might be more difficult if it was an unknown company in a foreign country.
And maybe that gives us hope as well, that we can maybe start looking at these things and
actually trying to put some pressure on these companies
when these situations occur.
Yeah.
And I think one thing is also to just demand more transparency from these big tech companies
because lots of consumers aren't aware that when you're using a search optimization engine
that sometimes it's an actual worker behind there that's kind of selecting or helping
you select these images, or, you know, when you're using your social media and you're flagging
something you don't realize it's an actual person who's like responsible then for the the next step
in removing this content so I think demanding that these big tech companies be transparent about
what their workers are doing and how they're doing that and who they're hiring is also an important step.
And again, let's think about as well the workers inside these companies, because,
and I am assuming that some of them are listening to this conversation, you know, you should be
asking these questions as well. And don't just assume that the magic algorithm is doing all the
work. Assume that the magic algorithm is doing the work with the help of content moderators and reviewers and so on in many parts of the world.
And ask yourself, are they being treated right?
And advocate for them within these companies, within the environments that are deploying this technology, because frankly,
a project manager or an IT operations specialist who's implementing these tools might have a great
deal of ability to change the direction of these companies by just doing a small thing, by saying,
you know, are these workers being
protected or, you know, what recourse do they have, or do they have adequate support to help them
as they're processing this data? And don't just assume that it's all computers and robots
somewhere because there are real people involved. So before we move to end the discussion, one of the aspects of this podcast
that we love to get to is our three questions segment. So this is an area where we ask our
guests three unexpected questions and see what kind of response they'll come up with off the cuff. This tradition started in season two,
and then as a twist in season three, we are actually bringing in questions from outside,
from folks who are listening, as well as previous podcast guests. And as a note, remember,
our guest has not been prepped, so we're going to just hear how they respond. So I'm going to ask a
question and then Chris is going to ask one and then we're going to bring in one from a previous
Season 3 Utilizing AI guest. So to start off with, one of the things that occurs to me is, as I said
there in the closing, it seems that a lot of people assume that AI is only computers all the time.
And effectively, the assumption is
that AI and machine learning technology
has already advanced so far
that it's passed the Turing test and we're done.
But that's probably not true.
In most cases, AI has not passed the Turing test.
When do you think it is
that an AI system will pass the Turing test
and will be able to fool the average person?
I think that's an interesting question.
I don't even know if AI will ever pass the Turing test
or if AI will ever actually need to,
because I think there are always going to be humans in the loop.
Because, you know, these kind of fantasies that we have
of super intelligent machines are still so far off. I mean, I think one of my favorites is, you know, machines still struggle to identify
a chihuahua from a blueberry muffin. So I think there will always be this kind of necessity.
And we should think in terms of having this human in the loop. And I think not only is it,
you know, a bit too forward thinking to assume that we'll get past the Turing test, but also question why we feel like we need to eliminate humans completely and human oversight and human overview from the process.
So I'd rather think in the future in terms of human machine collaborations rather than machines doing it all on themselves. I think that's a great answer. And I agree that I think that
it's probably helpful to think about machine intelligence and human intelligence as separate,
distinct paths that are intertwined and can be helpful to each other. And kind of speaking of
that, and maybe more to many of the things we talked about in this podcast, which is,
you know, the jobs and the labor market in general. From your perspective, can you think of any jobs
that currently exist that will be completely eliminated by AI in the next five years?
Completely eliminated? I have to admit, that's a hard one. I'm not too sure, but I think a lot of
these sort of, you know, clerical tasks are going to
be taken over, you know, just classifying and processing data such as images, that's going to
be taken out. And also just like, you know, filling in information sheets and things like that, that's
going to be taken over by natural language processing and different forms of algorithms.
I think, I don't know if jobs, then, will be completely eliminated. I just think it's a
different skill set. So, for example, you sort of think of, you know, when you're phoning up a company,
now you have chatbots and virtual assistants and all that's been kind of automated,
but you still have this kind of person that's sort of deciding what lines should be dictated
in response to, you know, consumers' inquiries and things like that. And somebody's thinking of the emotional labor that needs to be input into, you know, these
kind of customer services that, you know, people frequently bring up as something that's going to
be forever automated or, you know, when it comes to like grocery store clerks or things like that.
So there's just going to be sort of that job, but in a reinvented form
or, you know, a lower percentage of the population that's going to be doing it,
but nothing fully ever disappears. It's just kind of continuous cycle.
All right. Thank you for that. And then finally, our producer, Abby, came up with a great question from a previous guest. So the following question is
brought to us by Tony Paikeday, Senior Director of AI Systems at NVIDIA. Tony, take it away.
Hi, I'm Tony Paikeday, Senior Director of AI Systems at NVIDIA. And this is my question.
Can AI ever teach us how to be more human?
I think when it comes to AI, it's a good opportunity,
the sort of questions it raises, to sort of recenter what our human values are and what we kind of want the future to look like.
So really to reaffirm principles and to reflect on what those principles mean in practice.
So what does fair work mean?
What does fair labor mean? And, you know, the amazing thing about AI is that it could really
reduce a lot of the time we're spending on different sort of tasks tremendously. And so
with all that, why do we feel like we need to work harder when we have all these systems that
are there to assist us? So I think it presents a good opportunity
to kind of reflect on, you know,
what are sort of the values we hold
when it comes to, you know, all sorts of principles.
But, you know, in this case of this podcast episode,
specifically, what is like the future of fair work
and the future of fair labor?
Well, thank you so much for that answer.
And thank you so much for this wonderful discussion. It's been really interesting to hear your perspective, and that answer here at the end. And listeners,
if you want to be part of this, you can too. Just send an email to host at utilizing-ai.com and we'll record your question for a future guest. Alexandrine, thank you for joining us today.
Where can people connect with you and follow your thoughts on artificial intelligence and
other topics? You guys can follow me.
I occasionally write for Brookings.
I have a recent article out on affective computing
and mental health and apps,
which is a different realm of topic.
But if that's of interest to you,
you guys can check out my articles there.
And also just follow me on LinkedIn at Alexandrine Royer.
How about you, Chris?
Anything new going on with you?
Yeah, everything that's happening up to the moment
can be found on ChrisGrundemann.com.
You can also follow me at @ChrisGrundemann on Twitter.
And I'll always love to have a conversation
on LinkedIn as well.
And I'm trying to use LinkedIn a little bit more too.
So look for Stephen Foskett there
and at S Foskett on most other social media platforms.
And please also check out the Gestalt IT Rundown,
which is our weekly coverage of enterprise tech news
every Wednesday in your favorite podcast application.
So thank you very much for joining us
for the Utilizing AI podcast.
If you enjoyed this discussion,
remember to subscribe, rate, and review. It really does help. And please do share this show with your
friends, especially if you think they'd be interested in the conversations we've had today.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. For show notes and more episodes, go to utilizing-ai.com, or you can find us on Twitter at utilizing underscore AI.
Thanks, and we'll see you next time.