Screaming in the Cloud - Episode 72: Data Security in AWS with Chris Vickery

Episode Date: August 7, 2019

About Chris Vickery

Chris Vickery is Director of Cyber Risk Research at UpGuard. His research has protected over two and a half billion private consumer and account records which would have otherwise remained at risk of malicious exploitation. He has been cited as a cyber security expert by The New York Times, Forbes, Reuters, BBC, LA Times, Washington Post, and many other publications. Some examples of his high-profile data discoveries involve entities such as Verizon, Facebook, Viacom, Donald Trump's campaign website, branches of the US Department of Defense, Tesla Motors, and many more.

Links Referenced:
https://www.upguard.com/
Twitter: @VickerySec

Transcript
Starting point is 00:00:00 Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. using Nagios at scale anymore because monitoring looks like something very different in a modern architecture where you have ephemeral containers spinning up and down, for example. How do you know how up your application
Starting point is 00:00:52 is in an environment like that? At scale, it's never a question of whether your site is up, but rather a question of how down is it? LightStep lets you answer that question effectively. Discover what other companies, including Lyft, Twilio, Box, and Jithub, have already learned. Visit lightstep.com to learn more.
Starting point is 00:01:11 My thanks to them for sponsoring this episode of Screaming in the Cloud. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Chris Vickery of UpGuard. Chris, welcome to the show. Thank you for having me. It's an honor to be here. Oh, likewise. I've been a follower of yours for a long time, trying to, I guess, keep abreast of the interesting stuff that you do, but we'll get there. First, what do you do at UpGuard? Well, my title is Director of Cyber Risk Research. I do a lot of
Starting point is 00:01:41 things. Probably the thing that I'm most well known for is for leading our breach site team, which is a platform where we kind of give the white glove approach to enterprise level clients that have large network footprints and want somebody to shepherd over them and make sure there aren't any obvious glaring problems that can be potentially taken advantage of by actual bad guys. I first became aware of you folks because it turns out there's a number of security companies out there. But I became familiar with you when I kept seeing the same type of breach announcements coming out.
Starting point is 00:02:17 Well, breach is sort of a lofty term, but I'm sure we'll unpack that at some point, where companies had not properly secured S3 buckets. And they had exposed varying amounts of customer data. These are effectively brand name companies in many cases, not random, a hole in the wall taxidermists. And every time I kept seeing discovered by UpGuard, discovered by UpGuard. And I was waiting inevitably for it to finally snap and say, all right, what's UpGuard? And the response to be, not much, what's up with you? But it turns out it's not a pun. You're actually a real company.
Starting point is 00:02:50 Yes, UpGuard is a real company. It was started about six years ago, I believe. I've been with UpGuard since 2017. And it was started by a couple of Australians. Our main CEO is Mike Box. He moved the company over to the US after getting it started over in Australia. And we're incorporated in Delaware and all nice and legal and headquartered in Mountain View. Things are picking up quite well. So I've become familiar with you folks as the research company that finds publicly exposed S3 buckets and then writes about them. But I'm going to guess that when you're more than a two-person company, that probably isn't where you folks start and stop.
Starting point is 00:03:40 UpGuard has about three, well, now three platforms that we offer. One we call core and it's the internal kind of on-premises watch your configurations and make sure everything's hunky-dory within your environment you know can discover all the the random stuff that you have plugged in that you maybe don't remember is plugged in and kind of keep everything going well. Then we have the cyber risk platform that watches your vendor risk ratings and aggregates the whole total for your company and scores your vendors on a scale from zero to 950. And we say anything under a 600 is pretty bad, and I could probably find some exposed data for them if I were to have enough time in the world to look at everybody that intensely.
Starting point is 00:04:33 And then we have breach site, which is the thing that I am in charge of the team for. give the white glove approach to the network footprint of large enterprise customers and make sure they aren't accidentally exposing anything that bad guys can take advantage of. I'm assuming that most of the public ones where you're cited in the newspaper articles is stuff that you've discovered as you walk through the internet. It's not one of those stories where, oh yeah, we just do this for our customers and then we write news articles about it that basically publicly shame them. Is that accurate? Well, the take that I have on it, and we do have automated systems these days, always looking, finding this stuff, alerting us to do more manual scans and look into things more intently. But I don't think of it as much shaming as it is raising public awareness of the problem of exposed data.
Starting point is 00:05:29 Whether it's involving Amazon S3 or just open rsyncs or anonymous FTPs or Azure or Google Cloud or any number of other hosts out there, people are going to misconfigure their stuff. It's just a statistical probability and human nature. So we would like the public to take that into consideration a little bit more when they're trusting companies with their data, as well as for companies when they're hiring people to work with these platforms and their customers' data. I mean, this stuff is complex. I don't think it's unfair to say that no one wakes up knowing this stuff. And it's easy to understand
Starting point is 00:06:09 how a lot of these mistakes got made, at least in the realm of S3 buckets a couple of years back. There was a default setting in the web browser, sorry, in the web console from AWS where any authenticated user could read the data. Sure, that makes sense. This is just company confidential. What people didn't realize was that was any authenticated user could read the data. Sure, that makes sense. This is just company confidential. What people didn't realize was that was any authenticated user globally. They since fixed
Starting point is 00:06:29 that and then in turn made it increasingly difficult to accidentally do this with are you sure dialogues and scary labels in the console and series of emails that go out and entire services that are designed to stop this. So my perception, at least from the outside public world, as I've found some of these myself over the years, has been that it feels like this is tapering off. You're not seeing open S3 buckets in the same volume as you used to. And I mentioned that on Twitter, and then you commented in, which is what started this entire podcast recording, and said, well, that's not what we see.
Starting point is 00:07:00 You are way better positioned to see how this industry is doing across the whole. What am I not seeing? Well, for starters, the global authenticated user setting is still an issue. I don't know what you're referring to with they fixed that, but we notified a fairly large entity just last week of a bucket that had been open for quite a while with that exact setting being the problem to it. Yes, for clarity, when I said they fixed that, I meant that in the console, it's no longer a checkbox just sitting there waiting there as a trap for the unwary. It's no longer there in the console. You can still set it, but you have to do it explicitly via an API call. Okay, that makes more sense to me me. Yeah that's still an issue but we we're seeing plenty of buckets exposed.
Starting point is 00:07:51 There's not as many low-hanging fruit hanging as low as it used to be perhaps because I'd like to think our you know efforts such as our own have made people and systems administrators more aware of the dangers of leaving data just exposed or using publicly accessible buckets. But there are still quite a number of them out there. My team at UpGuard has been focusing a lot more recently on supporting our clients and taking care of basic responsibilities that we have there, as well as the advanced new stuff that we're always finding. So we haven't been writing as many reports, but there are still quite a few out there that could fuel a lot of coverage of the issue still. Gotcha. For a while here in my office, I only had two real pieces of art because I have a very,
Starting point is 00:08:46 well, let's say crappy aesthetic sense. But one of them is a map on the wall of all the announced and active AWS regions and CloudFront Edge locations, mostly because I want to keep the small map pin industry in business. The other, it was a monitor just having a consistent ongoing scroll from the certificate transparency logs of S3 buckets that had been open and announced as far as they would there be an announcement of a new bucket. Great, okay, now it's time to, there's an automated system that checks and sees if a quick list bucket call against this. This is not a tool I built, this is something that was available on the larger internet and it would continue to
Starting point is 00:09:23 scan this and if it was available it would flag it. It also did a similar check for the authenticated user approach as well. But it seemed over time, from my perspective, that that became almost entirely noise as opposed to anything that was substantive as far as being clever and discovering these things. It almost seems like there, and again, this is also tied into the further problem where for many use cases, having an open S3 bucket is a desirable trait. That's something that people want to do. And there are financial reasons why they should continue going down that path. The problem is, is that that's not the bucket you want to store your database backups in, or your user database, or the credentials to access things that are expensive and important
Starting point is 00:10:04 to the company. And I completely agree. There are plenty of good use cases. If you just have an assets bucket that's just graphics, creatives, or whatever the heck, and you don't really want to mess around with authentication too much to have random web browsing behavior be able to pull them, there's no huge problem there. You can do that. You can even make them listable. Who cares if people know they're there? They're just little images or whatever the heck.
Starting point is 00:10:34 And your transparency log anecdote is kind of along the lines of what I was getting at when I said the low-hanging fruit isn't hanging as low anymore. As in, I don't know if Amazon changed some way that bucket name registering occurs, but I agree there's not as much of a stream coming from easy feeds like that. I am familiar with the scripts and the tools you're talking about there that, you know, have kind of made the rounds. But there are still plenty of them out there, as well as plenty that were discovered years ago that are still exposed, mostly in other countries that, you know, don't speak the same language that I do or anybody that I know does. So I still see it as a big problem, but perhaps the 13-year-old sitting in his parents' basement, whatever, wouldn't be able
Starting point is 00:11:32 to find him quite as easily. Absolutely. I think you're right when it's about raising the bar of low-hanging fruit. And there is an argument where at some point, if a state-level actor is working against you, you're probably going to lose for most values of you. And some people in my experience have taken that and say, oh, so why even try? Security is impossible. Well, not really. It's a spectrum. Most of us are not getting breached by the Mossad. We're getting breached by some random person running a script they found on the internet because you forgot to change a default password. It's like, raise the bar at least enough so that you're not one of those low-hanging fruit companies where it's just an easy mistake to make. Yeah, that's the whole idea behind security in my
Starting point is 00:12:14 mind is it's about resiliency and making yourself not an easy target. You know, raising it to the point that they're going to go after the next guy. Maybe Massad is targeting you, but there's other targets that are equally juicy that are less secure. So they'll go after them first, and maybe you'll get ahead of the game. That's the whole security thing. It's not about being 100% impenetrable. Anything that uses electricity, just that blanket statement there, anything that uses electricity can be manipulated in ways that you and I would not anticipate, I'm certain. One thing that I've seen with a number of these breaches that have come to public awareness has
Starting point is 00:12:55 been that the company will admit the breach as they're legally required to do. Sometimes they drag their feet, sometimes not, but they're always very quick to say that it was, oh, a third-party contractor did it. And I understand why they want to emphasize that. But on the other side of the coin, they picked that contractor. I do business with a company. I don't vet who they have business relationships with. Given that you do this for a living more than as someone who just sits there on the sidelines like I do and angrily observes things, where do you stand on that, I guess, responsibility breakdown?
Starting point is 00:13:26 I don't think that you can contract away the liability. I don't like that argument that companies try to toss out there and obfuscate things with, where they say, oh, look at this clause here. It says we are, you know, indemnified against mistakes that our subcontractor makes or whatever the heck, you can write anything you want in a contract. But doing business with a third party to handle your cloud stuff, if the third party screws up, you're not absolved of any responsibility there. It's a natural human reaction to try to put the blame on the other guy, but I'm not a fan of that argument, and I don't think there's much legal precedence to hold it up either. No, and that becomes a somewhat serious and questionable concern as far as companies think, oh, well, that's okay.
Starting point is 00:14:18 I'm just going to punt the responsibility to someone else. You can't. I don't think you can. It feels to me, soup to nuts, that you can outsource work, but never the responsibility. Yeah, I completely agree with that. I've advocated for a long time about creating, I mean, you can write anything you want in a contract still, but if you were to specify in contracts with third parties of where the work is going to take place and make it a kind of a neutral zone where let's say the names of the buckets that will be used
Starting point is 00:14:52 are known and written in the contract and those are the only ones that will be used and you being the first party can check and and see if they're open and exposed to the public anytime you want. It's verifiable. It's kind of what Reagan said about the Soviets, right? Trust but verify. If everybody would start doing that sort of approach where anybody can check it, it would possibly keep some of these problems from happening, I think. I strongly suspect you're right. I mean, it is possible to get this done properly. I remember that article in the Wall Street Journal about how the Pokemon company winds up inspecting the security practices of its business partners. And the reason that jumped out to me was that it called out in the article that a vendor they were debating doing business with had improper security controls around an S3 bucket. And their response was, cool, we'll use another vendor.
Starting point is 00:15:44 I mean, that is the, I think, only public example of something like that coming to the forefront. I sent them a polished bucket engraved with S3 bucket responsibility award on it to their office. And to my understanding, it's still on display there. They have a good sense not to let me in the building, but that's neither here nor there. But it is possible to make smart decisions. It just requires not assuming you'll double back and fix things later. Yes, it is possible to make smart decisions. It just requires not assuming you'll double back and fix things later. Yes, it is certainly possible to do. There's a certain level of human competency and human nature that goes into the equation. And that really shines a light. That really shines a light on the importance of hiring the right people,
Starting point is 00:16:26 making sure the people you have get the right training, and not just going with something because, you know, sales or marketing something, demonstration, looked fancy and cool to you. You really got to have the right people that understand and can integrate with this great new technology. Otherwise, you're potentially in for some surprises. And I think one of the hardest things to get across to folks who are new to the world of cloud, at least the world of AWS, has been their vaunted shared responsibility model,
Starting point is 00:16:56 which is an incredibly boring and droll way of saying that there are some things that AWS is responsible for, and there's other things that customers are responsible for. Easy example would be ensuring that the application doesn't have a bunch of bugs in it. That's the customer responsibility. Ensuring someone doesn't drive a truck into a data center, grab a bunch of drives, and take off. That's AWS's responsibility. I'm curious where in that divide you find S3 bucket permissions. My stance on that for a while has been that I agree with the premise that there are certain
Starting point is 00:17:31 responsibilities that you just can't strap on being Amazon's liability to worry about. Things that you customize and upload to their cloud space that you've rented from them, they have no control over what you're putting up there. So bugs in your application, yeah, they have no control over that. Where there's a fuzzy line is in the concept of, did they develop this platform in a way that is proper for the way that it's been marketed? If they're making it sound really, really super easy, anybody with a credit card can sign up and upload data and bam and go, but it really does take a little bit more knowledge than that
Starting point is 00:18:14 to do it right and not risk clicking on the wrong box or whatever the heck, there's an argument to be made that maybe that could be better architected. But that's a continuing goal of any business to improve their product and make it more user-friendly and less mistakes be made. So that's where I see the line kind of getting fuzzier. I would absolutely agree with that. Interesting, at the time of recording this, a couple days ago, there was a very public breach on the part of Capital One. And it initially, to some people, looked a lot
Starting point is 00:18:52 like an S3 bucket permissions problem. A little more digging turned out that it wasn't. I mean, this has been all over the news now, and I imagine most listeners have heard about it. But at a very high level, can you give a quick summary of what happened? Well, the details still are a little fuzzy when you get down to the nitty gritty. But in essence, there was an individual named Paige Thompson, I believe, who, through some digital trickery, was able to enumerate certain information about a lot of cloud accounts, but Capital One is the one that kind of is in the news right now, and was able to list buckets and access the data within them, not because, at least to my knowledge, not because they were improperly made public or anything, but because there were some side doors and little channels
Starting point is 00:19:47 that you can gain information from and use that information to gain a little bit more information. And then, you know, if somebody's misconfigured part of the chain, you may be able to gain some privileged information that allows you to get through the authentication wall. And it was not a simple thing that anybody on the street could probably do. It was something that required a little bit more advanced knowledge. Interestingly enough, not that this is, you know, been reported as a cause of the situation, but this person, Paige Thompson, was at one point an Amazon Web Services employee a little while ago. So I've brought up the concept of we need to ask the question, did this person already know how to do the types of things that were used to get to the data in this situation?
Starting point is 00:20:38 Or did this person have experience from being an employee at AWS and then kind of corrupted that knowledge into using it in this way or what? It's not answered right now and the affidavit that the DOJ filed with the charges doesn't do much to further illuminate that question. I would agree. Everything that I've read so far, to my mind, is the sort of thing that I would do if I dropped my sense of ethics and decided, you know what, let's see how much damage I could possibly do. These are all things that don't require any insider access. And I would be, in fact, very surprised if it came out that there was any insider access that even remotely came into play here. But you raise the excellent question of how much of this came from a baseline level of exposure and experience from working there. And that's one of the fun questions that
Starting point is 00:21:33 I think that a lot of companies haven't really asked is, who are the people building services at large cloud providers? I mean, far and away, almost all of them are decent, ethical, intelligent people. But as we see, it generally only takes one person going in a strange direction to start raising uncomfortable questions like this one. And the real answer, at least in the world of AWS, is we don't even know publicly how many employees of Amazon work in AWS, let alone the rest. Yes, that is absolutely true. And as I brought up before we started recording here, when you throw in the contractors and subcontractors as well,
Starting point is 00:22:11 it just throws a bunch of wrenches in the machine. And people are not taking this into account when they decide to move their data center into the cloud. There's no way that Capital One has done background checks on all the AWS employees that have access to administrative level things that could be abused. Not that that was the case in this situation, but it's just a good example of there should be a concern there that I don't think is being addressed very well. Yes. To my understanding, we've never yet had a public case of an insider at a cloud provider causing problems like this. And I still would argue that we haven't in this case, as you mentioned.
Starting point is 00:22:54 I mean, she left a couple of years before this wound up happening. And since then, we saw the giant S3 outage in 2017. AWS has very publicly rebuilt massive swaths of S3 in a customer transparent way. So even then, some of the knowledge around how the system functions internally is going to be out of date. It just comes down to the question now, even though we haven't seen this in years past, is this a vector for the future? And AWS is very up front and center about how most of their employees, in fact, in some cases, any of their employees won't have access to customer data, period, provided that the customer configures all of the various security apparatus correctly.
Starting point is 00:23:38 And that's kind of what leads us to where we are now. It's very hard, even for a company as incentivized to do that as a bank, to wind up getting all of the edge cases nailed down. Yes, that's very true. It illustrates the kind of balance here, the seesaw of, you know, we're going to, you know, the cloud services provider is going to hire reputable, you know, well-meaning people that aren't going to do anything to cause problems. But there's also responsibility on the client side, you know, the capital one in this situation side to configure everything correctly and have the employees on their side that are knowledgeable enough to configure things correctly and not expose data or vectors or, you know, side doors into data storage areas. So it's going to be a constant back and forth on,
Starting point is 00:24:27 you know, where the responsibility and the liability lies in these situations. And I, you know, I, like you said, I don't think there's a lot of, if any, public, you know, cases or situations where that sort of thing has been really sussed out to this point. Absolutely. The fun thing, though, that from everything that's been read and reported so far has been that the attack vector was more or less someone tricking an edge device of some sort to make a web request against its own local metadata endpoint, which, if you know where to look, it'll spit out a set of temporary credentials that are bounded to six hours of validity.
Starting point is 00:25:03 And then you can grab those credentials, start exploring what else those things have access to. And in this case, it turned out there was an overly broad role. Okay, great. Now I'll list 700 buckets and the contents of it and transfer them out. There's a lot of things that have to happen first to be able to pull off something like that. But also in order to permit that level of, I guess, oversight, why does a role assigned to a firewall have access to talk to 700 S3 buckets is sort of the biggest big one that I don't think anyone has a good answer for. Yeah. The initial like genesis of the techniques that were used here, that, how was that request made that went to the internal facing area, that's still up in the air.
Starting point is 00:25:50 Like you said, it's some sort of trick, we're assuming, but until it's known more widely and concretely how that was done, it's hard to say where you know the initial blame lies if uh it was just a completely misconfigured open something or other that uh capital one had had either run uh incorrectly or configured incorrectly then you know the the blame would lie more on that side and if this was you know something that was very cryptic and and hard catch and maybe affects a lot more clients of AWS, then that may be something that Amazon wants to take a look at about maybe putting a little flashing sign saying, do not click this button or something, unless you want to expose things. But we just need to know more details at this point. Oh, absolutely. And I guarantee you that there are other companies that are vulnerable to this because let's not kid ourselves. As easy as it is to make cheap
Starting point is 00:26:55 shot jokes at a company post-breach, Capital One has an awful lot of very intelligent technologists working there and they don't show up in the morning assuming they're going to do a crappy job today. If they can get this wrong, I assure you there are way more companies out there who have gotten it far more wrong. Yeah, that's probably very true. And I wouldn't have a job if these sorts of things were not widespread. It's weird. I mean, I'm in the same boat. I fix AWS bills, you handle cloud security. In an ideal world, neither one of us would have any sort of job that remotely resembles what we do. We'd have to go build things rather than fixing things other people have built. For better or worse, here in reality, that's not the way the world works. It's one of those in theory versus
Starting point is 00:27:39 in practice stories. In theory, there's no difference between theory and practice, and in practice, there is. Yeah, and if either one of us were very, very good and godlike at our jobs, we would put ourselves out of business. But, you know, it's human nature always fighting back against that. Oh, it's generally a requirement that this type of function be reactive. I think that cloud economics and cloud security are both in the same boat as far as it's a number one priority for a company. Immediately after, it really could have benefited from being a number one priority. It always feels like a trailing reactive function just because it's super hard to invest in this up front. An argument I made on Twitter a couple days ago was that someone could have charged Capital One a million dollars to go in and just fix all the scoping on their IAM roles. And they would have been laughed out of the room if they'd proposed it. But now because they didn't
Starting point is 00:28:29 do that, they're, according to their own statement, they're assuming this year's charge for this will be between 100 and 150 million dollars. The ROI would have been instant and immediate. But you need to have the pain first before justifying anywhere near that spend on a project like that. A couple years ago, I read an article that claimed some reputable polling company had talked to a bunch of chief technology officers, and the consensus was that the CTOs would pay out of company pocket, like, I don't know, $160,000 or something, just to not deal with the smallest of breaches.
Starting point is 00:29:07 They just, without even thinking twice, would toss that kind of money just to not deal with any type of breach whatsoever, no matter how small. So yeah, I agree that if somebody had proposed a million dollars to go in and fix all this stuff, most executives would have left them out of the room. But time has told the truth that they would have been better off doing something like that. Oh, absolutely. And I will say that it's easy to be angry and blame Capital One for this. I mean, they're a bank. They need to take responsibility and handle these things. But looking at everything we know so far, I'm not seeing this as someone just sort of phoned it in one day when
Starting point is 00:29:45 they were going about their job. This is a sophisticated attack that understood deeply how all of these systems work together. This is not, generally speaking, someone random off the internet who is bored in their dorm room somewhere. This is someone who has expertise in this area and a deep knowledge of how these parts all interplay together. I mean, this is the sort of thing I might come up with, but I'm almost 20 years into my career at this point, and I've been staring at this exact problem space for an awfully long time. It's not in the same realm to me as someone just inadvertently left all of their user data sitting around an open S3 bucket despite the increasingly frantic warnings from AWS over the last year or so?
Starting point is 00:30:32 Yeah, this was a bit more complicated, but it raises the question of how honest are the marketing and salespeople being when they go and they demo how great AWS or any other cloud provider is, and they say, this is totally secure as long as you don't misconfigure it, etc., etc. There's no way anybody can break in. And the executives or the people on the other end of that presentation may take it hook, line, and sinker without any grains of salt and believe that. But if there is something that a sophisticated attacker can chain together, if they're dedicated enough, that needs to be at least part of the fine print. Part of the, we do as best as we can, but nothing's foolproof. You're taking a risk here, blah, blah, blah.
Starting point is 00:31:20 But I get the feeling that's not being represented as realistic as it should be. I absolutely agree with you. It's one of those things where, oh, don't worry about it. Move it into the cloud. It'll be better. But it does raise the question that if a company hadn't been in the cloud, would this exposure have been worse? Would it have been something that they have a better security posture if they'd never gone in on the cloud in the first place? And sure, for this particular use case, probably. Would they have exposed instead by not having effectively some of the best technologists in the world at a public cloud provider building these things out? And how much profit would they have lost for not being as nimble as they can be by using cloud services. It's kind of an apples to oranges comparison.
Starting point is 00:32:07 People ask me that all the time, you know, would they have been better off hosting it in their own data center? But it brings up a host of other problems that you got to deal with then and, you know, unoptimized issues. So it really is whether you prefer the taste of apples or you prefer the taste of oranges here. They're hard to compare, but you know, you have a preference for one or the other and they each have their goods and their bads. I think that you've absolutely nailed the salient point on that. Chris, thank you so much for taking the time to speak with me today. I appreciate it. Thank you
Starting point is 00:32:39 for having me. If people want to hear more of your sage thoughts on these and other matters, where can they find you? Well, you can always check out the latest blog postings at UpGuard.com, or you can go to my Twitter handle. That's VickerySec, V-I-C-K-E-R-Y-S-E-C on Twitter, and read my various musings there. Thank you so much. Chris Vickery, UpGuard. I'm Corey Quinn. This is Screaming in the Cloud. This has been a HumblePod production. Stay humble.
