Big Technology Podcast - He Helped Train ChatGPT. It Was Traumatizing. – With Richard Mathenge
Episode Date: May 17, 2023

Richard Mathenge was part of a team of contractors in Nairobi, Kenya who trained OpenAI's GPT models. He did so as a team lead at Sama, an AI training company that partnered on the project. In this episode of Big Technology Podcast, Mathenge tells the story of his experience. During the training, he was routinely subjected to sexually explicit material, offered insufficient counseling, and his team members were paid, in some cases, just $1 per hour. Listen for an in-depth look at how these models are trained, and for a look at the human side of Reinforcement Learning from Human Feedback (RLHF).

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

---

OpenAI's response: We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We take the mental health of our employees and our contractors very seriously. One of the reasons we first engaged Sama was because of their commitment to good practices. Our previous understanding was that wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so. Upon learning of Sama worker conditions in February of 2021, we immediately sought to find out more information from Sama. Sama simultaneously informed us that they were exiting the content moderation space altogether. OpenAI paid Sama $12.50/hour. We tried to obtain more information about worker compensation from Sama, but they never provided us with hard numbers. Sama did provide us with a study they conducted across other companies that do content moderation in that region and shared that Sama's wages were 2–3x the competition.
Transcript
LinkedIn Presents.
Welcome to Big Technology Podcast,
a show for cool-headed, nuanced conversation of the tech world and beyond.
Richard Mathenge is our guest today,
and he's here to share the story of how large language models,
like OpenAI's GPT model, get trained,
because he and a team of colleagues in Africa actually did it.
Mathenge, who is based in Kenya,
is a former team lead at Sama,
a company that's trained AI models on behalf of companies like OpenAI,
and his is a story you really need to hear.
And fair warning, there are parts of it that just are not pretty.
While at Sama, Mathenge and his team reviewed and rated text
based on quality and explicitness to help make the product that you and I use today a pleasant experience.
This involved routine exposure to some extremely awful text, which Mathenge will describe here.
And in this conversation, he talks about the human side of reinforcement learning from human feedback,
which is the key advance that helps make these models so impressive.
Too often, the human element of that advance is left out.
And so today, Mathenge's going to share it.
He's going to share his story of how he and his colleagues, needing work in a pandemic-wrecked economy,
found themselves at the ugly center of one of this generation's
biggest technological advances. My conversation with Richard Mathenge is coming up right after this.
Richard, welcome to the show. Thank you very much, Alex. So to start, Richard, where in the world
am I finding you right now? I am a resident of Nairobi, in the outskirts, of a country called
Kenya, in Africa, a place by the name of Embakasi, and that's where you can find me.
And talk a little bit about the beginning of your career before you started working in tech projects,
or as you started working in tech projects.
How did you get there?
Right before I started doing tech, I was engaged with insurance.
I was a salesperson doing insurance.
Right after insurance, I moved to customer service.
Largely, that's where most of my experience lies.
I did this in an organization called Technobrain right before moving to Sama.
And what is Sama?
Sama is an organization that deals with artificial intelligence.
It has its headquarters in the U.S.
One of its branches is right here in Nairobi, where it has been operating
for the last couple of years.
So what drew you to Sama as a company?
What made you want to work there?
Well, right.
As I was doing customer service, as an ordinary human being, you will definitely want to grow in terms of career, just like any other individual.
It was a dream just to move to a different organization so that you can grow your career.
And as many will have done, you make an application and you pray that your application
will receive a positive response.
And hence, that's the reason why I tried my level best to make an application.
And that's how I found myself at Sama.
And it sounds like it was also an opportunity to work on something pretty cool,
some cutting-edge new technology, to dig into artificial intelligence and help train it.
Did you know initially that that was what you were going to be doing?
Absolutely.
Just like any other individuals, as I said it earlier,
it was something that you were looking forward to.
It was a euphoria, you know?
Right.
So you start at Sama in July 2021.
What did the work that you were doing look like in the beginning?
In July 2021, we were actually engaged in lidar projects.
They were doing lidar as well as content moderation.
And so for me, I was actually alongside my friends.
We were introduced to lidar projects.
And the lidar project that we were introduced to,
it was really engaging and it was really interesting at first.
But right as time went by, you know,
the relationship between the employees as well as the management was becoming
strained, and we found ourselves on the wrong foot, you know,
with the organization, where they didn't want anything to do with us.
And so basically that's how
they relate with people.
I want to get into the management and employee relationship first,
but I'd also like to understand a little bit more about the nature of your work before we jump in there.
So just to begin with the lidar project, is that like helping to train self-driving cars?
Or can you talk a little bit about what that was?
Absolutely. That's basically what, just as you put it, it is rightly what we were engaging.
It is self-driving.
Yeah, what does that work look like?
What are you doing when you're doing that training?
The training was all about annotating cars, something which, you know, will be seen in future.
Basically, annotating cars, moving objects, and so on and so forth.
So you have like the road and you're saying, okay, this is a car, this is a bicycle,
this is an obstacle, just identifying, labeling what's going on.
Absolutely. Absolutely. That's interesting.
Really, yes.
And so how do you move from there to start training text models or chatbots?
So right after lidar, just as I said, we had friction with the management.
The management didn't want anything to do with us.
They made, you know, commands and instructions which didn't go well with some of my colleagues and I,
and we found ourselves on the wrong side of the divide. And so the next thing we saw, we were
actually laid off. Sama has this
arrangement where if, you know, if your contract comes to an end, then they place you on the bench, which means you are not engaged for a period of
time, up until when they find it fit for you to get back to the organization, should they get another project. And so I was lucky enough
to be called for another project, which is the so-called ChatGPT,
where you are given a text, then you benchmark the text according to several benchmarks,
which are provided for.
So that's how I found myself leading a team of 11, sorry, 10 individuals.
And so did they tell you that this was going to be for a chatbot or just for
the GPT model that OpenAI was using?
And what did it look like when you were actually working to train these models?
First, we went through training.
The training was about large texts or small texts, sorry, short texts.
And for those texts, you're given several benchmarks.
I think it was six or seven.
When you read the text, you are actually instructed to pick
the closest benchmark based on the instructions, sorry, the specifications
given by the client. So for instance, if you're given a text and several benchmarks, I
believe some of them were illegal, erotic, non-erotic. I seem not to remember the others.
But if erotic is the most severe, the most severe in terms of the specifications given by the client,
then you choose that benchmark.
So it sounds like you basically went screen by screen looking at different pieces of text
that this model would generate.
And then you sort of had to rate it on a safety level.
Absolutely.
Absolutely.
So that's the case.
you pick the closest benchmark for the text that you have read.
So that's basically about ChatGPT.
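For readers skimming the transcript, here is a minimal sketch of the labeling step Mathenge describes: a rater reads a passage and records the single most severe benchmark that applies, following a severity order set by the client. The category names, the ordering, and the function below are hypothetical illustrations for this episode; the actual rubric, category list, and tooling used on the project are not public.

```python
# Hypothetical sketch of the rating step described above: pick the most severe
# applicable category for a passage. Names and ordering are assumptions, not
# OpenAI's or Sama's actual rubric.

# Most severe first; the episode mentions "illegal", "erotic", and "non-erotic"
# among roughly six or seven benchmarks.
SEVERITY_ORDER = ["illegal", "erotic", "non_erotic", "neutral"]

def rate_text(text: str, applicable: set[str]) -> str:
    """Return the most severe category that applies to the passage."""
    for category in SEVERITY_ORDER:
        if category in applicable:
            return category
    return "neutral"

# Example: a rater judges that both "erotic" and "non_erotic" could apply;
# the rubric says to record the more severe one.
print(rate_text("...", {"non_erotic", "erotic"}))  # -> "erotic"
```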
And Richard, I mean, we've all seen what these models can produce now.
But when you were training them, did you have a sense as to like what the product was that you were working on?
Or what did you think this product was?
I mean, lidar is obvious, right?
Like selecting images for self-driving cars.
Okay, simple, got it.
Rating text based off of these different guidelines.
I mean, what did you imagine this product was going to look like?
Looking at where it is right now, first of all, I could not, we could not even tell.
Remember, Alex, when we started engaging ourselves with ChatGPT,
it was during the COVID era.
We were just coming out of, we were just going through the COVID period.
And most of the individuals that were engaged in that project were looking forward to something which, you know, will make ends meet.
We were going through a recession.
Being in that situation, anything that
came our way, we would just take it, you know?
And that's how we found ourselves there.
We engaged ourselves in assignments or obligations which, you know, were very traumatizing,
texts which would make you think and overthink and rethink.
And a lot of things were going through our minds.
We didn't imagine we would be here.
We didn't imagine that we would get to this point where we read such things.
Some of the texts I am not even in a position to disclose to you,
just because of the nature of the text.
Yeah, I'm getting the sense that the text that they threw at you was pretty awful.
You're sitting at these screens, you're getting basically batch after batch of text,
trying to rate it. And I imagine this is what's trying to make these models ready for prime time,
ready for people to be able to use them and not see things that will horrify them. But in order
to do that, you need to go through this training. So talk a little bit about the type of text that
you would see, not to the point that you feel disgusted by what you're saying. But tell us as
much as you can.
The closest and the most traumatic text was one that, you know, described a situation which is unlikely
to happen, unimaginable, so to say, in our society, a situation where a father is
having sex with an animal and a child, an infant is there, just watching the scene.
It is something that we never imagined that we would get in contact with or get in touch with.
Some of these things, we tried as much as possible, inasmuch as Sama made it very clear
that we would be undergoing counseling.
Counseling was not forthcoming, even after engaging ourselves with these texts.
Something that is worth noting is the fact that from my position as the team lead,
for me, I am very good with interpersonal skills.
I can tell when my team is not doing well, I can tell when they're not
interested in, you know, in reporting to work, I can tell some of these things.
And these were very clear indications that, you know, my team was just sending signals that
they were not ready to engage with these sorts of things.
Was this all of the text that you were seeing or like one out of every 10 pieces of text
you had to read? How often would you see something, you know, as horrifying as the stuff
that you just described?
Out of 10, a large percentage, say, about 7 or 8,
more or less the same as what I just described.
Wow.
Three will, you know, will just be neutral,
but a large percentage will describe what I indicated earlier.
Basically, it sounds like they were trying to train this model
to leave out some of, like, the most horrifying stuff,
so they just kind of sent you guys
anything that would be along those lines, and you helped
basically determine what the line was that this chatbot or this
text generator would or would not cross.
Absolutely. That's the point.
Yeah, did you see the text getting, you know, better
as time went on? Like, did you see it become, for instance,
more coherent, more clean?
Did you think that the work
that you were doing
was having an impact?
Far from it.
The texts were not becoming any better.
We were not becoming any
holier, so to say.
They were just becoming worse and worse.
Really?
They were maintained. At some point, they maintained.
But they didn't become any better to a point where, you know, we could breathe a sigh of relief, you understand.
It was not something that you would want to engage with, you know.
And definitely, for us and the team, we were actually at a point where we were between a rock and a hard place.
Remember, we don't have anything;
should this contract be terminated, we don't know where to start from, and we don't
either want to proceed with this engagement.
Right.
I mean, you had experience training artificial intelligence.
You knew what you were doing.
So, you know, when you saw all this terrible stuff coming through, it's very different
than labeling images.
Are you just like, what the heck are we doing?
Like, what exactly are we training?
Like, did that thought come to mind?
No, no, no, not at all.
Not at all.
Remember, at some point, the impression that we received when we underwent training was,
you know, we will get these text messages.
But we didn't know the severity of it,
how extreme some of these texts were.
And again, you know, for us, it was,
it was a euphoria, just as I said,
because we were looking forward to starting work.
It is during the COVID season.
Getting work in a developing country is, you know,
it's a blessing in itself.
You understand.
Yeah, I think what you're saying is that it would basically even be a luxury to ask questions about the product. It was mostly like, you needed work, this was work.
And so you did it.
Absolutely.
Yeah.
Absolutely.
You're the team lead.
Do you go to Sama management and say, wait a second,
like, put us on a different project, or, this is pretty awful stuff?
What do the conversations look like after that?
Well, for me, diplomacy really was key.
So I could not even tell the management that we need a different project.
My conversation was centered on, you know, we need counseling,
the sooner the better,
because counseling is very pivotal in as far as what we are engaging with is concerned.
Right.
At some point, the counsellor reported one or two times,
but you could tell he was not professional.
He was not qualified, I'm sorry to say, asking basic questions
like, what is your name, and, you know, how do you find your work? And for us, we needed
something like, please assist us, you know, we need a remedy right now, we need
you to help us mitigate the text or the horrific situation that we
have just encountered, you know, and it's all through the week.
You understand.
But that was not forthcoming.
So my job was centered on, you know, engaging the management to put mechanisms in place
where psychological and social support would be forthcoming.
But, be that as it may, it was deemed as if, you know, I was becoming,
you know, authoritarian or not submissive, and it was not, it was not a friendly affair for them.
Right. Okay, let's go to break and come back right after this. We're here with Richard Mathenge.
He is a former team lead at Sama. So hearing that, you're going to want to know why he ended up
leaving, and what happened after he brought some of these concerns to management, and then also
what did it feel like to see some of the product of this work after all this awful stuff,
as ChatGPT becomes the most promising consumer app in a very long time,
and it won't spit this stuff out, and OpenAI now is at the top of the world. So we're going to
cover that and more in the second half, back right after this. Hey everyone, let me tell you about
The Hustle Daily Show, a podcast filled with business and tech news
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now they have a daily podcast called The Hustle Daily Show
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
And we're back on Big Technology Podcast with Richard Mathenge, calling in from Nairobi, Kenya.
And we're talking a little bit about the human part of reinforcement learning from human feedback.
A lot of the reinforcement learning and the feedback gets talked about.
Not so much the human side, but we're here doing it.
Richard, appreciate you being here.
So when you spoke with your superiors about needing the counseling, they delivered counseling that's sort of subpar.
What happens after that?
So right after that, just as I said, we are in friction with the management.
And, you know, my superior, or my boss, went ahead and wrote a report about me,
that I was insubordinate, that I was not quick in taking instructions.
This was very interesting and very ridiculous, so to say.
There was a project, just put that at the back of your mind.
There was a project that was coming.
It was a lidar project by the name of ZF.
And, you know, when they wanted to get resources to work on that project,
the entire team that worked with me
was picked. And my superior went ahead to write a report about me that I was not quick at
taking instructions, which was very, very interesting. I engaged them to inquire, you know,
how can this be possible? How can I, how can Richard be deemed as someone who is not
quick at taking instructions, yet the people who are taking instructions from him,
were picked for this new project.
It was very interesting because they never responded to my inquiry.
They never responded to my questions, but be it as it may, it never bothered me.
What was at stake was, you know, the psychological imbalance that my team was experiencing.
And I wanted that to be addressed as fast as it could be, so that by the time they were moving to this new project, they were able to, they were in a position to say, you know, this is where we were, this is where we are,
this is where we want to be by the time we are starting this new project.
Right. By the way, so how long did you end up working on training the large language models,
the text-based models?
That's very interesting. The period that we were engaged with ChatGPT was four months.
And for these four months, the client provided a contract for one year.
The contract between the client and Sama was one year.
For us, we had a contract between Sama and ourselves.
The client knew very well that we would be engaged for one year.
But Sama, because there was another prevailing story that was covered about content moderation that did not portray the image of Sama and OpenAI very well,
Sama found it fit to terminate the contract between them and the client,
so that they could protect their image,
so that they don't fight two wars at the same time.
What did your team say when they moved,
when they were selected to move on to the project,
but you were not?
They were very frustrated,
especially considering I was, you know,
every now and then I kept on fighting for them,
whether it is medical cover,
I stepped in to fight,
to ensure that they were covered medically.
For the deferred,
the deferred salaries that I mentioned earlier on,
I stepped in to ensure,
you know, to ensure that the salaries were actually paid on time
and not delayed.
For any issue that prevailed,
I stepped in to ensure that, you know, the issues were covered and they don't repeat.
We don't see a repeat of the same issue over and over again.
Because the things that I'm talking about were perennially happening, from month to month, from day to day, from week to week.
And you could see that there was a commitment in trying to frustrate the team and the agents to a point where they are pushed to the wall.
You know, but for us, our patience was expressed in a very, in a very mature way.
You know, my team was very mature in expressing their patience.
It was my responsibility, however, to face the management and tell them,
look, whatever you're doing is wrong, and it should not be happening in a day such as this.
It should not be happening to people who are vulnerable.
My team was vulnerable.
They were not being paid the salary that my supervisors
were being paid.
Yeah, how much were they being paid?
My team?
Yeah.
My team's salaries ranged between 18,000 Kenyan shillings, which is $180 per month, and $200.
$200.
$180 a month?
Yes.
But how many hours did they work?
That's nine hours.
Nine hours a day?
Nine hours a day.
Five days a week?
Five days a week.
Sometimes we will even report on Saturdays.
Four weeks a month?
Yes.
That comes out to 180 hours.
Absolutely.
Are you saying that the salary for the folks who were doing this was a dollar?
Dollar an hour?
Exactly.
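For reference, a rough sketch of the arithmetic behind this exchange, using the numbers as stated above (actual schedules and pay will have varied):

```python
# Rough arithmetic behind the wage figures discussed above.
hours_per_month = 9 * 5 * 4          # ~9 hours/day x 5 days/week x 4 weeks = 180 hours
monthly_pay_usd = 180                # lower end of the stated $180-$200 monthly range
print(monthly_pay_usd / hours_per_month)   # -> 1.0, i.e. roughly $1 per hour
```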
Just to give a sense, like, what is the purchasing power in Nairobi for a dollar?
Let's say you wanted to buy, like,
let's say you wanted to go out for a meal at a restaurant, for instance.
What does that cost?
A meal will cost, on average it will cost $5.
Okay.
So there is more purchasing power than a dollar has in the U.S., but not dramatically more.
I mean, not dramatically more.
So the equivalent in U.S. dollars is, I mean, basically like paying people $3 an hour in the U.S.,
which is, I mean, to me, is, anyway, it's unacceptable.
Okay, so what happens to you at Sama?
You left, right?
Yes.
So I left.
Talk us through it.
So I have a friend who is a director at Sama.
He's the director of quality.
I have had the privilege of growing up with him.
So, you know, we have that,
you know, rapport, we have that good and cordial relationship with him.
However, I came to know his true self right after engaging him
at Sama. You know, you would think that someone like him would fight for
me and, you know, and try and address issues related to agents very strongly.
But that was not the case.
So, you know, remember, earlier on, before all this happened, there was a prevailing case, sorry, there was a prevailing story that was being told by one of the journalists in the U.S., from a leading media house,
about the situation at Sama and Facebook.
Yeah, this was at the time.
Yes, so this was at that time.
So my, our story just came after,
came while this story was actually being told or being drafted.
So the director looked for me,
with the insinuation that he didn't want me
to talk to any journalists
because the journalist as well was looking for me.
The journalist who covered the story about Facebook and Sama
was looking for me.
So the director knew that the journalist was looking for me.
So he reached out to me, the director reached out to me to find out whether I got something or I got any engagement.
But all through I have been lying to him, playing around with his integrity, telling him that, you know, yes, I got something, you don't have to worry.
Then he later on told me that there is a journalist that is looking for guys who worked for your, for your project, that is, ChatGPT.
So I asked him whether the journalist is the same person I know.
Then he said, yes, he's the one.
Then he said, I can see you are actually on the bench,
but if something comes up, you will be the first person we will consider.
Then I said, at the back of my mind, I am saying this is a lie, because I've already
cleared with Sama.
I've already done my termination, my exit, yet I am appearing on the bench.
It is ironic.
It is, you know, it is something which is very, you know, very
funny and hilarious.
So you were basically out at that point?
Yes.
And they were trying to put you back on the bench to like mess with your severance
agreement or what was the idea there or just to intimidate you?
Yes.
So they wanted to place me on the bench so that I can, you know, relax, so that, you know,
I can be that person where anything that comes my way, it's a yes sir or a yes ma'am
kind of response, you know?
If you're on the bench, yeah.
If I'm on the bench, yes.
So that I will be, I will be submissive and, you know, take anything.
But that, that, that is not me.
Okay.
So at the end of the day, you decided to leave on your own.
Absolutely.
And when was that?
When were you actually, like, officially out at Sama?
So that was April 2021.
Okay.
Sorry, April 2022.
Okay.
And about six months later, in November 2022, ChatGPT comes out.
And next thing you know, it's the fastest growing consumer product in history.
A hundred million people are using it within two months.
What did you think when you saw that?
A lot of things were playing around in my mind.
You know, we need mechanisms put in place,
mechanisms that will protect content moderators all around the world,
whether it's ChatGPT or anything, you know.
Mechanisms that will help the
resource that is working on them become better and not worse than how they started,
you know, than how they were introduced to it in the first place.
We need problem solvers.
We need people to identify with content moderators just like any other professional.
You know, these are professionals.
These are individuals who are working, they are learned, they are working on such machines.
They are placing information on machines.
At the end of the day, they are chipping in enough hours to their work.
You understand.
Was there any sense of, like, oh my God, have you tried ChatGPT?
Was there a sense of, oh my God, that product that we worked on, you know, I guess, yeah.
I mean, was there any sense of, like, wow, this was the product we were working on, and a reaction to seeing how many people actually wanted to use it?
For me and for us, we are very proud, you know, to have worked on ChatGPT.
We are very proud because, first of all, as far as Sama is concerned,
our project, Sama has this arrangement where it ranks projects by the best performing
project.
And for me and the team, we have ranked, we have been ranking at Sama well for the period
that we were there as the number one project, you know, that
was ranked as the best performing in terms of meeting the KPIs.
So for me, even inasmuch as we cannot tell how the project went after the contract was terminated,
I can be proud of the fact that we were the best, you know, and we did our best. Our quality
was outstanding, our productivity was outstanding, our, you know, our KPIs, our attendance
was, you know, above board, Alex. Yeah. Have you used ChatGPT yourself? And if so, what have you
thought about the experience? Well, I have used it. Again, the content that I went through
is not as severe and it's not as obscene as the one that we did at Sama.
Yeah, so the experience is completely different from the one that we did at Sama.
Do you have a sense like the work that we did, it worked, it was productive?
Yes, we did.
Yes, yes.
My team was the best performing team.
It was the best that, you know, I could have ever
asked for or worked with.
They were productive.
They were hardworking.
They did that with, you know, all the diligence that they had, you know, and they tried their level best.
They did.
The only thing, the only challenge that they faced was, you know, little hurdles here and there about deferred payments, about being granted some
form of counseling. This was disturbing to me.
Right.
Yes.
So let me ask you this.
OpenAI has raised $11 billion.
Okay, $10 billion of that came after you had left, but $11 billion, $1 billion before.
Why do you think they're coming to Africa and paying people a dollar an hour to do this work?
It's very simple.
And your guess is as good as mine.
The thing about Africa is that you have, there is a ready resource, you know, there is a growing populace that is coming from the campus and is coming from the universities who are saying, we need something to engage ourselves with.
The society in Africa, if the population is not engaged, then there is higher level of crime.
You know, people start killing each other.
People start stealing from each other.
People start doing all sorts of crimes, you know.
And so the need to engage such minds,
the need to engage such a resource, is, you know, it's inevitable.
So that's the reason why there is a growing supply in as far as such obligations
or such work is concerned.
Yeah.
Last question for you.
How is your team doing, and how are you doing, in the months after
going through all this explicit material?
And do you feel like you've gotten the support that you needed after going through what you have?
Not at all.
Not at all.
The support that we were looking for, remember, we have not even received counseling ever since then.
No one from Sama has had the audacity of reaching out to us, trying to understand how things are or how things are going, ever since we left.
I can tell you for sure, one of our colleagues has had issues with his relationship, you know.
He had issues with his wife, and, you know, his wife left, just because of the impact
that the text that he was reading had on him. You know, it was not easy at all.
It was not something that, you know, you will go home and say, I am healed,
I am better than how I started.
Yeah.
Yeah.
And so I will try as much as possible to get to
know if all these individuals, all my team members, have gotten counseling and are
getting counseling. That will be my joy and my happiness.
Richard, thank you so much for sharing your story.
Appreciate you being here.
And I wish you well.
Thank you very much, Alex.
And looking forward.
And that'll do it for us here on Big Technology Podcast.
Thank you so much, Richard, for coming on and sharing your story.
I want to note that I've reached out both to Sama and OpenAI and have not heard back yet.
So if they do respond, I'll include it in the newsletter that I'm planning for Friday.
So hopefully we'll have a chance to hear from them there.
You can subscribe at bigtechnology.com.
And yeah, hopefully we'll have some updates there.
So more reporting is on the way.
So I hope you check out the newsletter.
I'm so glad that you were able to listen to the podcast.
If this is your first time here, please hit subscribe.
We do these twice a week, flagship interviews on Wednesday,
and then we break down the news on Friday with Ranjan Roy of Margins.
If you are a long-time subscriber and are willing,
a five-star rating on Apple Podcasts or Spotify would go a long way.
Thank you, Nick Guantney, for handling the audio.
Thank you, LinkedIn, for having me as part of your podcast network.
Thanks to all of you, the listeners.
I really appreciate you coming back week after week, sometimes twice a week.
And I can't thank you enough.
All right.
Thank you, Richard, again, for sharing your story.
This is what the podcast is here for.
This is what we're here for, right?
Bringing stories you're not going to read in the headlines,
and I'm hoping that that's what we've got today.
All right, that'll do it for us here today.
Thanks again for listening,
and we'll see you next time on Big Technology Podcast.