Software at Scale 54 - Community Trust with Vikas Agarwal

Episode Date: February 1, 2023

Vikas Agarwal is an engineering leader with over twenty years of experience leading engineering teams. We focused this episode on his experience as the Head of Community Trust at Amazon and dealing with the various challenges of fake reviews on Amazon products.

Apple Podcasts | Spotify | Google Podcasts

Highlights (GPT-3 generated)

[0:00:17] Vikas Agarwal's origin story.
[0:00:52] How Vikas learned to code.
[0:03:24] Vikas's first job out of college.
[0:04:30] Vikas's experience with the review business and community trust.
[0:06:10] Mission of the community trust team.
[0:07:14] How to start off with a problem.
[0:09:30] Different flavors of review abuse.
[0:10:15] The program for gift cards and fake reviews.
[0:12:10] Google search and FinTech.
[0:14:00] Fraud and ML models.
[0:15:51] Other things to consider when it comes to trust.
[0:17:42] Ryan Reynolds' funny review on his product.
[0:18:10] Reddit-like problems.
[0:21:03] Activism filters.
[0:23:03] Elon Musk's changing policy.
[0:23:59] False positives and appeals process.
[0:28:29] Stress levels and question mark emails from Jeff Bezos.
[0:30:32] Jeff Bezos' mathematical skills.
[0:31:45] Amazon's closed loop auditing process.
[0:32:24] Amazon's success and leadership principles.
[0:33:35] Operationalizing appeals at scale.
[0:35:45] Data science, metrics, and hackathons.
[0:37:14] Developer experience and iterating changes.
[0:37:52] Advice for tackling a problem of this scale.
[0:39:19] Striving for trust and external validation.
[0:40:01] Amazon's efforts to combat abuse.
[0:40:32] Conclusion.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev

Transcript
Starting point is 00:00:00 Welcome to Software at Scale, a podcast where we discuss the technical stories behind large software applications. I'm your host, Utsav Shah, and thank you for listening. Hey, welcome to another episode of the Software at Scale podcast. Joining me here today is Vikas Agarwal, an engineering leader with over 20 years of experience in the industry. He spent over 15 years at Amazon, and his last role at Amazon was head of community trust, which I think is an extremely interesting and relevant area recently. Welcome to the show. Thank you for inviting me. Utsav, how are you doing? Pretty good. I've been excited to talk to you for a while. Let's just start with your story. I know
Starting point is 00:00:47 you have a really interesting origin story. How did you get into tech? Oh, that's a very interesting story. So I grew up in different villages and countryside in one of the, you could say, least developed states in India. And I didn't really know much about what computers were. So what happened was I was going to a market with one of my friends and I grew up in villages, so I didn't really speak or read English too much. So we saw this magazine, it's called Computer Today. My friend who studied in English medium schools, he said, hey, Vikas, why don't you buy this magazine? These are great and it's amazing.
Starting point is 00:01:27 You should go in computers. I said, I don't even read this. Why would I buy this? But he convinced me to buy this. And he read it. And then I read it. And at the back of the magazine, there was this news clipping about Bill Gates, that Bill Gates is the richest man in the world.
Starting point is 00:01:48 And that kind of fascinated me a little bit. And I decided that day that, hey, I will work at Microsoft. And Microsoft was my first job. Wow. So within the next few years, I guess you just taught yourself English, or you learned it in college or something? What happened? So in India at that time, computer science was not as popular. I'm talking about 1994. So when we went to what they call counseling, where you choose the branch you want to take — because in India, you get your discipline based on your rank — the most desired branch was electronics, then mechanical, then electrical, and then computer science. So I actually wanted to choose electronics, but before I could choose it,
Starting point is 00:02:40 the last seat was gone. So computer science was my second choice. Maybe it was destiny. I ended up with computer science at the Institute of Technology, Kurukshetra, in India. And I didn't know any coding in my first year of college. Most of my classmates had been coding for a while, and we didn't have many programming classes in the first year. So what I did was, in my first-year summer break, I learned coding in Pascal, and I really loved it. I just coded that whole summer. And when I got to my second year of college, I was one of the best coders in the class. So that was interesting. It's so interesting, because I feel like I have a very similar story.
Starting point is 00:03:27 I certainly knew programming before, but it was really my freshman year summer is when I decided to do a bunch of projects for online hackathons so I can make some money on the side. One of the only legal ways to make money as an international student was to win competitions. So that's the summer I learned all of my JavaScript, essentially. Interesting. And then what was your first job out of college?
Starting point is 00:03:50 Did you end up at Microsoft? No. So I did my bachelor's from NIT in India. And I had a job offer from Satyam Computers. But I didn't really want to take the job. What I really wanted to do was a master's, right? And I did my master's there and I finished my master's in 2000. And that was also the time when Microsoft was looking to hire talent all over the world.
Starting point is 00:04:20 And they had a recruiter in our campus, and that recruiter contacted me, and I interviewed in New Delhi, and I got an offer. Okay, so you made it happen. It just took a few years. Seven years. Seven years, yeah. Yes. I want to fast forward to your experience. So you started off as a software engineer.
Starting point is 00:04:43 When did you join the review business or community trust? How much time after was that? So before community trust, I was leading customer reviews. And in customer reviews, I had a great time, right? I joined the team at a time when it wasn't thriving: it had a very high operational burden, and the amount of innovation coming out of the team was low. I was in that role for about one and a half years.
Starting point is 00:05:12 And during that one and a half years, things improved drastically. Our operational burden went down by over 95%, and we launched a lot of innovations. A lot of features in reviews were launched during my time there — such as, you must have seen that feature where if you click on a term, it shows the reviews related to that term. That was one of the features. At that time, we started noticing that the fake reviews problem was becoming big. There was also an article in the Washington Post around that time. And at that point, I was drafted into the role of community trust. My manager said, hey, you've got to take this, it's very critical. And so I took that role. We didn't really have a
Starting point is 00:05:53 big team at that time — a very small team — but the problem was growing very rapidly. So I got the role, along with a blank check to hire people. Okay. That's how important that team was. What is the mission of the community trust team? The mission of community trust is — see, if you look at reviews, for almost 60% of purchases made on the Amazon platform, customers read reviews before making the purchase. 60% of the time, they read reviews. Ask anybody, or even look at your own experience.
Starting point is 00:06:30 How much do you value reviews in making purchases, right? So if you have bad actors trying to manipulate customers into buying substandard products, it erodes customers' trust in Amazon and in reviews. So our mission was to make sure that we maintained the trust of customers in Amazon and in reviews, by eliminating the inauthentic contributions from bad actors. Okay. And when you have an ambiguous problem and a blank check, where do you begin? Did you just hire hundreds of people at once? What do you do when you start off with a problem like that? Yeah, so that's a problem pretty much every growing business faces,
Starting point is 00:07:18 right? Even if you look at startups: they get a blank check from VCs — what do you do? So thankfully, Amazon is a big company, and we have a vast supply of talent internally that is attracted to hard problems. A couple of things we did. One was to hire senior enough leaders, so I hired some really senior leaders in product, in data science, and in engineering. And we grew our engineering team very rapidly. When I joined, a lot of the review-abuse mitigation was quite manual. Engineers used scripts to suppress reviews.
Starting point is 00:07:56 So our first focus was on automation: how do we suppress inauthentic reviews in an automated way, and let business people — business analysts — write rules that can identify inauthentic contributions? That was our first priority, to automate. But with business rules, the problem is that because they have thresholds on certain behavior, bad actors can identify those thresholds through reverse engineering and adapt their behavior. So the moment you write a review rule, maybe within a week or two it is useless, because bad actors figure out the thresholds.
Starting point is 00:08:40 So you're constantly writing new rules or modifying the thresholds to suppress abusive reviews. But then you get to a point where, from the rule's point of view, there is no difference between an authentic submission and an inauthentic submission of a review. So you start suppressing a lot of authentic reviews, your false positive rate becomes very high, and at that point you have to retire that rule. So the next focus was to use as much ML as possible, because with ML you can be a lot more hands-off: you have a model, you keep training it on new data, and the model can self-adjust to the behavior of the bad actors.
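To make the reverse-engineering problem concrete, here is a minimal sketch of what a threshold-based suppression rule looks like in general. The field names and threshold values are hypothetical illustrations, not Amazon's actual rules:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_account_age_days: int
    reviews_last_24h: int
    verified_purchase: bool

# Hypothetical thresholds -- exactly the kind of hard-coded values bad
# actors can recover by probing: submit reviews, observe which ones get
# suppressed, and adjust behavior to sit just under each limit.
MAX_REVIEWS_PER_DAY = 5
MIN_ACCOUNT_AGE_DAYS = 30

def should_suppress(review: Review) -> bool:
    """Toy rule: flag bursty reviews from new, unverified accounts."""
    return (
        review.reviews_last_24h > MAX_REVIEWS_PER_DAY
        and review.reviewer_account_age_days < MIN_ACCOUNT_AGE_DAYS
        and not review.verified_purchase
    )
```

Once abusers learn to post at most five reviews a day from month-old accounts, a rule like this catches nothing; tighten the thresholds and it starts catching genuine reviewers instead — the false-positive spiral described above, and the reason for moving to ML models that retrain on fresh data.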
Starting point is 00:09:18 And just for some timeline — when was all this? When did this team start? So the team had been there for a while; I joined the team in Q4 2018. Okay. So that's when there were people still writing scripts to suppress bad reviews. Another question is about the bad actors. My assumption was there's a seller who will give you something like a gift card if you give them a review, like a five-star review. But it seems like you're talking about a different kind of bad actor, one who seems much more capable — they can figure things out and reverse engineer. Can you give
Starting point is 00:10:11 me whatever you can share publicly about these bad actors? Yeah — a lot of this I would still be very wary of talking about, even though it happened a long time ago, three or four years ago. But there are various flavors of review abuse. The one you talked about: you buy a product, and in the product you get a slip — hey, write a five-star review and we'll give you a free product, or $10. That's one flavor.
Starting point is 00:10:34 So we had a program for that as well. Then you have areas where people collude over social media, right? You have Facebook groups — review groups — where sellers are incentivizing buyers to write fake reviews. And then there is another aspect where you have, you could call them, hackers or black-hat tactics, where bad actors write reviews programmatically. And they have significant resources at their disposal, because it's like a marketplace: the reviews they sell are not written for a particular product, they are written for a particular category, because a lot of these products are undifferentiated, coming
Starting point is 00:11:18 mostly from China. You can have an earphone, right? And it's hard to differentiate 50 different types of earphones. You could have a review of one product apply equally to another product. So these are very generic reviews in that sense: hey, you know what, this product has great sound, looks good, it's waterproof, right?
Starting point is 00:11:40 So bad actors have a collection of these reviews, and a seller comes, and they just give them those reviews and associate them with the seller's product. So there were a lot of different flavors of review abuse. Even on social media, bad actors congregated over Facebook, over WhatsApp, over WeChat, over Telegram. Yeah, all of this is public knowledge; there are various media articles that have been written about it. Yeah. I'm curious about the program for that first kind, the gift card kind. How did you solve that? That seems particularly hard to solve, because it's a genuine person leaving a fake review,
Starting point is 00:12:20 essentially. Yeah, I don't want to give out too much, because a lot of these things may still be active. But really, if you have a card in a product that is incentivizing customers to write reviews — actually, there are two ways to find it. One is within the reviews themselves. Some people write: hey, I got an incentive to write reviews; I got this card, which is offering me $10 to write a review. Some people really take offense at that — they will write a one-star review and say, you know what, I got this thing. So you could write an ML model, or just a simple rule, to identify those products. And the second thing you can do is really go to the warehouse and have somebody open that package and see whether the card is in there or not.
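A minimal sketch of that first detection approach — a simple rule scanning review text for disclosed incentives. The patterns are illustrative only; a production system would presumably use a trained text classifier rather than a hand-written list:

```python
import re

# Illustrative phrases a self-disclosing reviewer might use; not the
# real detection logic.
INCENTIVE_PATTERNS = re.compile(
    r"gift\s*card|free\s+product|in\s+exchange\s+for\s+(a\s+)?review"
    r"|card\s+in\s+the\s+(box|package)|\$\d+\s+(for|to\s+write)\s+(a\s+)?review",
    re.IGNORECASE,
)

def mentions_incentive(review_text: str) -> bool:
    """Flag reviews whose authors mention being offered an incentive."""
    return bool(INCENTIVE_PATTERNS.search(review_text))

print(mentions_incentive(
    "There was a card in the box offering me $10 to write a review."
))  # True
```

Products whose reviews trip this kind of flag could then be queued for the second check described here: physically opening a package in the warehouse.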
Starting point is 00:13:17 Yeah, fascinating. I can't even imagine all of the interesting stories that you have, because it just feels like one of those problems. It's similar to Google search, again: there are always spammers trying to fake the ranking, trying to get their site up. It's an arms race, essentially. One of my managers, she said: if your product doesn't have abuse, it doesn't matter.
Starting point is 00:13:45 You've made it when your product starts seeing abuse. Yeah. Especially in these marketplace kinds of things. I can imagine even with FinTech, right? If you're working somewhere like Bolt, they might be just trying to — I don't know. It's such an interesting space. Yeah. In the FinTech space, what we have seen primarily is chargeback fraud.
Starting point is 00:14:04 There are a lot of companies that provide great solutions for this today. And if you're starting up your own business that accepts payments, I would highly recommend going with one of these providers. You have Riskified, Signifyd, Forter — there are multiple solution providers that you could go with. What matters the most is the amount of data a provider has, because as a new business, you're not going to have enough data to identify, or train your model on, fraudulent or stolen credit cards. But an aggregator such as, say, Riskified or Signifyd — because a lot of vendors have connected with them — has a lot more data. And the best thing is that they also provide indemnification.
Starting point is 00:14:49 So they could say: above, say, 30 or 40 basis points, any chargeback that happens, we'll pay for it. That's like buying insurance. Yeah, interesting. And then there's no reason not to use it, because you know you're not going to have a massive loss. Yeah. But they do charge quite a bit of money,
Starting point is 00:15:07 because they've got to run their own business too. It could be worthwhile. I really wonder how many tech companies are really just insurance companies: you're paying a premium every month to make sure that you're not going to have a massive loss at some point. I feel like there's a whole interesting thread over here.
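To make the basis-point arithmetic concrete, here is a small sketch of how indemnification above a threshold nets out. The numbers and contract terms are hypothetical, not any specific provider's actual offering:

```python
def merchant_chargeback_liability(sales_volume: float,
                                  chargebacks: float,
                                  threshold_bps: int = 30) -> float:
    """The merchant pays chargebacks only up to the agreed threshold
    (30 bps = 0.30% of sales volume); the provider covers the rest,
    much like an insurance deductible."""
    threshold = sales_volume * threshold_bps / 10_000
    return min(chargebacks, threshold)

# $1M in sales with $9,000 of chargebacks at a 30 bps threshold:
# the merchant pays $3,000 and the provider absorbs the other $6,000.
print(merchant_chargeback_liability(1_000_000, 9_000))  # 3000.0
```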
Starting point is 00:15:27 But, okay — so community trust goes from scripts to ML models. So that's fraud. What are the other things you have to think about in a team like this? What I was just asking was: fraud is one really large aspect of community trust — fake reviews from bad actors. I'm sure there are other things you have to think about when it comes to trust, especially since we were just discussing these policies around writing reviews for your own products, things like that.
Starting point is 00:15:51 Yeah, yes. So what we've seen is that sometimes sellers will write reviews on their own product, because, see, cold start is a big problem for any product, right? You want to launch a product, and if you're not a sophisticated abuser, for example, you would say: let me just write a couple of reviews of my own product from different machines. I think we were pretty sophisticated at
Starting point is 00:16:18 identifying when people were writing reviews on their own products, and we were able to quickly suppress those reviews. But in some cases, we didn't do that. There's a very interesting story of one such case. Ryan Reynolds — he's a major Hollywood actor — had a gin brand, Aviation, I think. He wrote a very funny review on his own product, and we didn't know. We didn't know that he wrote that review, so we didn't suppress it. But then he admitted: that funny review, I wrote that. So as per our policies, we should have suppressed it, right? But one of the tenets we had was: we value quirky reviews, we value funny contributions. So in that spirit, we didn't suppress the review, because this was a special case where, you know, you have
Starting point is 00:17:15 multiple tenets of Amazon a little bit in conflict. And that's where you have to apply judgment: which tenet do we favor? We had customer trust versus funny or quirky reviews. And we believed that because Ryan had admitted it, people would not be unduly influenced by the review into buying the product. So we valued the funny-contribution tenet and kept that review. Especially when it's a well-known name, and it does get a laugh out of you when you see it. Yeah. We even had a book published within Amazon — a book of funny reviews. Anybody who joined the reviews team, we gave them that book, which has the funniest reviews.
Starting point is 00:17:56 There's the banana slicer review. There's the Three Wolf Moon shirt. Those are some legendary funny reviews. You may also be aware — I think Jimmy Fallon used to have Amazon funny review readouts on his show. I think this is going to be the most hilarious set of show notes for this episode ever; you should find those reviews. Yes. It almost reminds me, in many ways, of Reddit, right? Reddit, for example, has this problem with people posting a bunch of their own spam links when they're trying to sell a product. It also reminds me of all the controversy around banning the Donald Trump subreddit, especially after January 6th. Did you have to deal with similar political problems? Yes. So there is an environment these days where you have very polarized views, and there's a lot of activism associated with those polarized views.
Starting point is 00:18:54 What we noticed was that sometimes there would be a book from some person seen as provocative, politically charged, politically controversial, or just somebody famous, and people would go and write extreme reviews just to hurt that product. We call it activism. So to protect such products and books, we built an activism mechanism. I think one of the first times we implemented this was when Hillary Clinton's book came out, and the mechanism worked perfectly. Whenever we noticed that a book or a product was under attack, we limited the reviews to only verified-purchase reviews, for a short duration, until things calmed down. And it worked very well for that book. We were politically neutral, but you always get accusations, right? Some of the accusations were: hey, you know what, Amazon is biased towards the liberal crowd, and they're preventing
Starting point is 00:20:12 one-star reviews on Hillary's book. But that wasn't the case, right? We were protecting the book from activism — from people who were writing non-verified one-star reviews without reading it. But then the tables were turned: Donald Trump Jr. had a book out, and you had people writing one-star reviews on that book as well. And we prevented the attack on that book as well. And the funny thing was that Donald Trump Jr. was speaking at the RNC convention two days later, and he was talking about his book, how it was a number one bestseller on Amazon. And we were like, if we didn't have that activism filter, he probably
Starting point is 00:20:50 would have been talking about something else at that convention. It feels good that the work your team does brings goodness into the world and does not add divisiveness. It was good to see the fruit of our work. I have two questions from that. One is a surprisingly boring technical question: how do you implement something like an activism filter? Is it another ML model that determines that a book or a product is being attacked? Is it something you preset one time — okay, this is going to be a controversial author? How do you implement it? I don't know how. Yeah.
Starting point is 00:21:26 So I'm not going to go into how we implemented it at Amazon, but I will give you a more generic answer. Suppose you are launching this; obviously, you want to build something as soon as possible, right? The number one thing you need to have is a policy around it. You need to define what an attack is, because at the end of the day, it might come down to manually reviewing the product — manually reviewing the reviews — and what do you do
Starting point is 00:21:59 then? So you need to have a robust policy around it: what does an activism attack look like? Then there are various degrees of automation. One thing you could do is have a simple rule which flags the products that are under attack, and you can have manual reviewers put limitations on the product and start suppressing the bad-faith reviews. And if you become pretty advanced, you can build automated ML, which can look at various factors to identify the attack.
Starting point is 00:22:37 And you could automatically take actions. But in our case, activism is not a very frequently occurring problem, and it requires quite a bit of human judgment. So my advice would be to definitely manually flag it in the beginning, review it, and then move more towards automation.
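Following that crawl-walk-run advice, a first-pass "flag for humans" rule might look something like this. It is a sketch under stated assumptions, with invented names and thresholds, not any real platform's mechanism:

```python
from dataclasses import dataclass

@dataclass
class ProductReviewStats:
    one_star_non_verified_last_24h: int
    baseline_daily_reviews: float  # trailing average for this product

def flag_possible_attack(stats: ProductReviewStats,
                         spike_factor: float = 10.0,
                         min_count: int = 50) -> bool:
    """Crude 'under attack' signal: a burst of non-verified one-star
    reviews far above the product's normal daily volume. A human
    reviewer then decides whether to restrict the product to
    verified-purchase reviews for a while, per the policy."""
    baseline = max(stats.baseline_daily_reviews, 1.0)
    return (
        stats.one_star_non_verified_last_24h >= min_count
        and stats.one_star_non_verified_last_24h > spike_factor * baseline
    )
```

Only once the policy and the manual loop are settled would it make sense to replace a hand-tuned rule like this with an ML model over richer signals.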
Starting point is 00:22:59 That makes sense. Like a definite crawl, walk, run approach, right? First, the key piece is the policy. And that makes me think about Elon Musk and the changing policy — whether you agree or disagree with his approach, it seems like nailing down that policy in the beginning is just so important.
Starting point is 00:23:22 Yeah, when Elon bought Twitter, I found it very interesting. And my first thought was: Elon, you should have stuck with building spaceships and colonizing Mars. This content moderation is just too hard. The thing is that there is no right answer with content moderation; you will always displease people. And I think Elon had a very good framework for it. What he said was: I will consider it a success when I have angered the extreme left and the extreme right to an equal degree. So he came up with a negative definition of success rather than a positive definition of success. What he said is necessary, but not sufficient. What makes it sufficient, then?
Starting point is 00:24:02 There is no sufficient with this. You would have to be omniscient — you have to guess the intention of a person. For example, even if you look at reviews: suppose I am launching a product on Amazon, and I know you, and I say, Utsav, can you buy this product and write a five-star review, and I'll give you the money over PayPal? And you wrote that review. Is it an authentic review? Is it an inauthentic review? How does Amazon determine that? How does anybody but you and I determine that? So there is no sufficient. Yeah. With all of these things, you also have to determine what you want your platform to be. With reviews on products,
Starting point is 00:24:50 your goal is authenticity, so that's what you work towards, and that's not a politically charged goal, right? You don't care that much. Whereas with social media — a social network — you want people to be engaged as much as possible.
Starting point is 00:25:05 Yeah. But here's the thing — I'll give you a funny example. One of our tenets was: we can tolerate losing authentic contributions in order to preserve customer trust. You have a model; you will have false positives. There is no way around it. And when you have a false positive — when you suppress somebody's
Starting point is 00:25:32 reviews — for example, if you're somebody who's been a long-time Amazon customer, and you wrote a review of something, and it's a false positive and Amazon suppressed it, you will feel very bad. It was a revelation to me how offended people can be when you suppress a genuine review. When I took over this role, within the first two weeks I had four or five Jeff Bezos question mark emails, because one of the rules had a very high false positive rate and we had suppressed some genuine contributions. And that really turned my worldview around: we have to be extremely careful in launching any models, to make sure that we're not exceeding our false positive rate target. And we are very humble, in a way: when we identify an error, we reverse the effect of it. For example, if an ML model or rule has some really unacceptable false positive rate, we review the entire outcome of that
Starting point is 00:26:46 model to see that we bring that contribution back. And then, finally, the appeals process. When you take an action which has an adverse impact on customers, you've got to make sure that customers can contact you and express their disappointment, frustration, complaint, or just communication to your company — and that you review it. See, the difference between, say, Amazon and Facebook is that Facebook has, I don't know, 3 billion users. Amazon doesn't have 3 billion users, and each Amazon customer spends a lot of money on Amazon. If their complaint is falling on deaf ears, you will lose that customer.
Starting point is 00:27:40 And Amazon is known for its customer obsession. So we took all of these complaints from customers very seriously. We did manual reviews of those complaints, applied high judgment, and took appropriate action — either to restore or not to restore. In many such cases, these things would escalate up to me, both from sellers and from reviewers. So there were some judgment calls I had to make, because we wanted to make sure that we respected the contributions from customers, we respected their outreach to us, and we were humble enough to admit our mistake when we made one. That's so fascinating. I cannot imagine the
Starting point is 00:28:32 stress levels of seeing question mark emails from this billionaire in the first two weeks of your job. That sounds rough. I'll tell you one thing: it was rough. My Thanksgiving and Christmas breaks were spent ramping up on this and working 10 to 12 hours every day, even on weekends. But I learned so much during that time. I think that was the best learning phase I had at Amazon. And one thing I can tell you about Jeff is that
Starting point is 00:29:03 he's a genius. That question mark email resulted in a lot of back and forth with me, and I thought I tried my best in answering his questions. He just saw right through me, and he schooled me. The ability of Jeff Bezos to deep dive like a laser into a small team's operations, find a flaw, and suggest the right solution is just uncanny. He literally dictated to us what we needed to do. At that stage, in like 2018, 2019? Yes, yes, yes. And the funny thing is that that email thread with Jeff got forwarded to a lot of leaders at Amazon.
Starting point is 00:29:56 And I heard a lot of aftereffects of that. For example, I went to a party and introduced myself to somebody I hadn't met before. I said, hey, I'm Vikas. He asked me, are you that Vikas? I said, yeah, that Vikas. And I went to meet my mentor and was telling him about our progress in ML, and he was saying, yeah, there was another team which Jeff eviscerated, which was doing a lot of stupid stuff. I said, Lalit, that was me. That was my team. It was a very humbling experience, and I enjoyed every minute of it.
Starting point is 00:30:32 Yeah, that is so interesting. I cannot even imagine going through that. And I can't even imagine how he would know enough of the details to be able to school you. I don't know if you've read this story: apparently there was a meeting, and a math paper was put in front of him. He read it and found an error — PhDs from top universities wrote that paper, and Jeff found a mathematical error in it. So yeah, you've got to give props to Jeff. Fascinating. But then, it's not like he took action — he just schooled you, but he still let you lead that role?
Starting point is 00:31:12 No — so that's not how Amazon works. Amazon — especially Jeff Bezos — gave a talk; it's called Good Intentions Don't Work, Mechanisms Do. So what are mechanisms? Mechanisms are closed-loop processes. And how do you close the loop? Through audit. Jeff Bezos is a very busy CEO; he couldn't keep auditing us, right? So the way it worked is that Jeff gave us a piece of his mind, then delegated inspecting us to his lieutenants. Like the S-team or whatever. Yes. So he sent it over to Jeff Wilke.
Starting point is 00:31:49 We presented a couple of documents to Wilke, and then Wilke handed us over to one of his reports. So we had quite a bit of scrutiny for a few months, until we fixed the things that Jeff wanted us to fix. And we had to convince very senior leaders of the company that we were on the right track. Got it. It's like trust, but verify. So it's a closed loop. Auditing is a very important aspect at Amazon.
Starting point is 00:32:17 And that's how you keep such a big company running without having all of these issues. Absolutely. It's just a wonder that a company like Amazon exists. Hundreds of thousands of decisions have to go right for it to exist. There are millions of parallel universes where Amazon doesn't exist, and very few where it does. And I think the biggest credit goes to Jeff Bezos and the leadership principles that Amazon has. Yeah. I just think
Starting point is 00:32:51 about all the antitrust scrutiny and all of that — the current chair of the FTC, her claim to fame was a paper called "Amazon's Antitrust Paradox." The fact that Amazon still exists and is going strong is amazing. I have a question around operationalizing appeals at scale. You mentioned that you have to manually verify these appeals and make sure that you're listening to customers. How do you even make that work? When I think about the scale — it's not three billion like Facebook, of course, but I'm sure there's a reasonable number of appeals and reviews coming in every day, to manage and to verify whether they're correct or not. You don't have to share numbers, but how do you operationalize a solution for a problem at that scale?
Starting point is 00:33:35 Yeah, I think another principle that is practiced a lot at Amazon is operational excellence. Operational excellence means you are iteratively reducing and removing defects in your processes. So our goal had been to keep reducing the waste. And when you suppress a genuine review — when you take an incorrect enforcement action — that's a waste. That's a waste because now the customer whose contribution was suppressed will have to appeal to Amazon. We wasted their time, more than our own time in processing it, which is how most of the
Starting point is 00:34:20 companies would operate. From Amazon's point of view, we wasted the customer's time, which is infinitely worse. So we had that empathy in improving our operations. And how did we improve them? Getting the right metrics, number one. Number two, measuring them as precisely as possible — you can have the right metric, but if you're measuring it incorrectly, you'll get incorrect results. So we obsessed over defining the right metrics, measuring them correctly, and driving them towards our desired goal.
Starting point is 00:35:01 Like maniacs — obsessively, right? We had weekly business reviews; we looked at hundreds of metrics in driving down those defects. And then the other thing was auto-correcting these defects. So when we found that, hey, we made mistakes — this model has been making quite a few mistakes — we identified those quickly.
Starting point is 00:35:21 And then we tried to remedy the output of those automatically. You want to make your best effort so that customers don't have to contact you — you fix it before customers have to contact you. This has also been, I think, Amazon's motto: the best customer service is when you don't need customer service. So what I'm hearing is this pretty amazing, complex system where you have several models that are independently evaluating things like reviews.
Starting point is 00:35:54 You review metrics on each one. You decide that a model is not good enough; not only do you turn that model off for new reviews, you also go back and automatically roll back any decisions that were made based on that model. And you just keep iterating until all of these systems keep getting better and better. So I'm imagining a large-ish data science and ML organization iterating on and improving these things.
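A miniature sketch of that rollback loop — recording which model version made each suppression so a faulty version's decisions can be reversed in bulk. The data model and names are invented for illustration; a real system would presumably persist this in a database and re-publish reviews asynchronously:

```python
from collections import defaultdict

# In-memory stand-in for a suppression log: review_id -> model version.
suppressed_by: dict[str, str] = {}
reviews_of_model: defaultdict[str, set[str]] = defaultdict(set)

def suppress(review_id: str, model_version: str) -> None:
    suppressed_by[review_id] = model_version
    reviews_of_model[model_version].add(review_id)

def roll_back(model_version: str) -> list[str]:
    """Reinstate every review that a faulty model version suppressed."""
    restored = []
    for review_id in reviews_of_model.pop(model_version, set()):
        del suppressed_by[review_id]
        restored.append(review_id)  # in practice: re-publish the review
    return restored

suppress("r1", "fraud-model-v7")
suppress("r2", "fraud-model-v7")
suppress("r3", "fraud-model-v8")
print(sorted(roll_back("fraud-model-v7")))  # ['r1', 'r2']
```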
Starting point is 00:36:35 Yes, we had excellent data science; it took us some time to build that wonderful team. But yeah, the best part was that we had hackathons pretty frequently, and the models that came out of those hackathons were amazing — we operationalized quite a few of them. So we had a culture which encouraged people to channel their creativity. And since you have this, what do you call it, safety net of metrics, evaluation, and measurement, we could evaluate all those ideas very rapidly. So it was like an innovation machine.
Starting point is 00:37:15 Yeah. It's like you want to improve the developer experience at your company so that you can iterate and keep making changes without each one of them being extremely expensive. And that seemed like it was key. Okay. That's right. Yeah. I think there's just so much to unpack here.
Starting point is 00:37:30 It's such an interesting area. Any closing thoughts on content moderation? We had this interesting discussion about how this is going to be impossible for someone like Elon to just come in and fix. What would be your advice to anyone trying to take on a problem like this, especially when they're at a smaller company? They don't have as
Starting point is 00:37:49 many resources, maybe, as an Amazon — how do you solve this problem? I think that's a great question, because you asked how do you solve this problem, and there are degrees of how well you can solve it. The most important aspect, I would say, is to get the right people on the problem: people who are competent, people who are passionate, and people who are dedicated. That's the number one thing. The second thing I would say is: be very humble. You will make mistakes. If you thrive on external validation, don't take this job. You will see no validation; you will only see discouragement, because you're not pleasing anybody here. So you have to be internally driven, self-motivated, and very humble — somebody who is very resilient.
Starting point is 00:38:47 So get the right team, get the right policy. I think those are my two top pieces of advice. Yeah. It almost sounds like you have to get folks who've been through experiences like a tough PhD, right? Especially when you're thinking about things like external validation — you're not launching features every three months or every two months
Starting point is 00:39:08 with billions of users; you're moving metrics, and nobody really sees the impact other than those metrics internally, or the improving trust of the platform. I see. Here's the thing: trust goes up by the escalator, and it
Starting point is 00:39:24 comes down by the elevator. Nobody notices that the platform is trustworthy, but they notice when it's not trustworthy. There's no positive aspect per se; the lack of the negative is what success is. I think it's a wonder that you can still trust things like a Yelp review or an Amazon review — I don't even know if I trust other review platforms online.
Starting point is 00:39:52 And I think it is a testament to how much work is put in to make sure you can remove bad actors, because there's such a high economic incentive. Amazon invested a lot of money, resources, and leadership into solving the abuse problem. People may not see it; people may not agree with it. But the team is top-notch, and they have done phenomenal work. They continue to do phenomenal work. I think this is a great point to close out the conversation.
Starting point is 00:40:17 Thank you so much for coming on the show. This was such an interesting conversation — with mentions of political figures, and of Elon, of course, who attracts so much attention to himself. There's so much to think about. Thank you for sharing all of your stories with us. Thank you so much, Utsav.
