Screaming in the Cloud - Using the Cloud to Preserve the Future with Alex Chan

Episode Date: October 20, 2020

About Alex Chan
Alex is a software developer at Wellcome Collection, a museum in London that explores the history of human health and medicine. Their role primarily focuses on preservation, and building systems to store the Collection’s digital archive. They also help to run the annual PyCon UK conference, with a particular interest in the event’s diversity and inclusion initiatives.

Links Referenced
Wellcome Collection: https://wellcomecollection.org/
Twitter: https://twitter.com/alexwlchan
Blog: https://alexwlchan.net/

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. those boundaries. So it's difficult to understand what's actually happening. What Catchpoint does is makes it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we'll throw that one in too. And it's used by a bunch of
Starting point is 00:01:04 interesting companies you may have heard of, like, you know, Google, Verizon, Oracle, but don't hold that against them, and many more. To learn more, visit www.catchpoint.com and tell them Corey sent you. Wait for the wince. N-Ops will help you reduce AWS costs 15 to 50 percent if you do what it tells you. But some people do. For example, watch their webcast, how Uber reduced AWS costs 15% in 30 days. That is six figures in 30 days. Rather than a thing you might do, this is something that they actually did.
Starting point is 00:01:41 Take a look at it. It's designed for DevOps teams, and Ops helps quickly discover the root causes of cost and correlate that with infrastructure changes. Try it free for 30 days. Go to nops.io slash snark. That's nops.io slash snark. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Alex Chan, who, among many other things that we will get to, is, most notably, a code terrorist. Alex, welcome to the show.
Starting point is 00:02:13 Hi, Corey. Thanks for having me. So, you've built something wonderful and glorious. Well, that's my take on it. Most other folks are going to go in the opposite direction of that and start shrieking. Namely, you've discovered that AWS ships a calculator, something the iPad does not, but AWS does. And the name of that calculator is, of course, DynamoDB. Tell me a little bit more about how you made this wondrous discovery. So I was watching one of your videos where you were talking about some of the work you were doing in AWS land,
Starting point is 00:02:46 and you were talking about how you were starting to explore DynamoDB. And DynamoDB is the primary database that we use in my workplace on AWS. And I knew you could do a little bit of mathematics in AWS. You could do sort of simple addition using things like the update expression API, and you can also get conditional logic using conditional expressions. And then I decided to string all that together with a series of Python
Starting point is 00:03:11 and see if I could assemble those calls to get a basic working calculator. Some folks would say that you could just do the calculator bits without the DynamoDB at all. Python has a terrific series of math functions and other libraries you can import. What do you say to those people? The thing about running your code in Python is that you have to have somewhere to run it, and that's probably going to be something like a server. Whereas if
Starting point is 00:03:35 you run it in DynamoDB, that's serverless. And as we know, that's much better. Oh, absolutely. Because that's the whole point of modern architectures, where we wind up taking things that exist today and then calling them crap and reimagining them from first principles, just like we would on Hacker News, and turning them into far future ways that doesn't add much value but does let us re-architect everything we've done yet again and win points on the internet, which, as we all know, is what it's all about. And I'm extremely impressed by this. But my question now is, so as you've figured this out, what got you to a point where you looked at a database and said, you know what, I bet I can make that a calculator. How do you get there from here? So in this specific case, I'd done just a little bit of work with DynamoDB already. And I sort of had a vague notion that you could do something like this. But I'd never really understood. It was this API that I knew existed.
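For anyone curious what the trick Alex describes actually looks like on the wire, here is a minimal sketch of the two building blocks mentioned above: an update expression for addition, and a condition expression for branching. The table name and key are invented for illustration, and this is a reconstruction of the general technique rather than Alex's actual code.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table named "calculator" with a string partition key "id".
dynamodb.put_item(
    TableName="calculator",
    Item={"id": {"S": "accumulator"}, "value": {"N": "2"}},
)

# "Addition" is just an update expression: value = value + :x
# ("value" is a DynamoDB reserved word, hence the #v placeholder).
dynamodb.update_item(
    TableName="calculator",
    Key={"id": {"S": "accumulator"}},
    UpdateExpression="SET #v = #v + :x",
    ExpressionAttributeNames={"#v": "value"},
    ExpressionAttributeValues={":x": {"N": "3"}},
)

# Conditional logic comes from a condition expression: this write only
# succeeds if the current value is below a limit, otherwise DynamoDB
# raises ConditionalCheckFailedException -- a crude IF statement.
dynamodb.update_item(
    TableName="calculator",
    Key={"id": {"S": "accumulator"}},
    UpdateExpression="SET #v = #v + :x",
    ConditionExpression="#v < :limit",
    ExpressionAttributeNames={"#v": "value"},
    ExpressionAttributeValues={":x": {"N": "1"}, ":limit": {"N": "10"}},
)

result = dynamodb.get_item(
    TableName="calculator",
    Key={"id": {"S": "accumulator"}},
)
print(result["Item"]["value"]["N"])  # "6"
```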
Starting point is 00:04:31 And so I just started working through the documentation, pulling apart examples, and discovering, oh, yes, actually, this API can do this. And in the process, I actually came up with a much better understanding of the API than I had before. One of the interesting pieces to me is that there's a certain class of people, and I refer
Starting point is 00:04:48 to y'all as code terrorists, and I aspire to be one on some level. I look at something, and my immediate question is, how can I misuse this? Kevin Kuchta, for example, was able to build a serverless URL shortener with Lambda, just Lambda, no data store. It was a self-modifying Lambda function, which is just awesome and terrible and wonderful all at the same time. And you've done something similar. And I have in the sense of getting Route 53 to work as a relational database, sort of. And whenever I describe these things to people, they look at me in the strangest and arguably saddest ways that are just terrible, absolutely terrible. I love the idea.
Starting point is 00:05:27 I love the approach. I love the ethos, and I want to see more of it. But there's a long time that goes between me coming up with something like Route 53 as a database, and then there's a drought, and then eventually someone like you comes up with, now we're going to use Dynamo as a calculator. How do we get more of this?
Starting point is 00:05:43 I want to have these conversations more than once every six months. How do we get more of this? I want to have these conversations more than once every six months. How do we get more of this? I think, first of all, just coming up with the ideas and actually putting them into people's heads. We don't necessarily need to be the people who do them. Obviously, this DynamoDB as a calculator came about because of a throwaway comment you made on a video that I watched.
Starting point is 00:06:01 So if we can think of ways to, I don't know, use SQS as persistent storage or use SNS for two-way communication, and then you put those ideas out there and it will sit in someone's head, and hopefully someone who knows a bit more about these things, and they will start to think about it and they will know what the edge cases are and what the little bits of API they can exploit. And that's how these things come about, I think. But we've got to go out there and plant the idea in somebody else's head
Starting point is 00:06:29 and then let them sit on it for three months and then finally go, aha, I know how I'm going to do that. So I didn't realize I was one of the proximate causes of this. And I take a little bit of joy and I guess pride in the fact that I can inspire that. So there are a few different directions to take it in now. One, do you think that Texas Instruments has woken up yet to the fact that they have a new competitor in the form of AWS? I mean, I've not heard from any
Starting point is 00:06:55 of their lawyers yet, so I have to assume no. I mean, the TI-83 hasn't changed since I was in high school 20 years ago, and it feels like maybe Texas Instruments is not the best suited company as far as prepared to innovate rapidly goes. I kind of hope we'll finally break their ridiculous calculator cartel. I mean, certainly I think that's one of the greater injustices in the tech industry today is that, you know, Texas Instruments remains the undisputed king of graphing calculators. And obviously we look forward to DynamoDB breaking that hegemony. Well, that's the real question. How far does this go as a calculator? Basic arithmetic is sort of a gimme. What does it do beyond that? I implemented the basic arithmetic operations, addition, multiplication, subtraction, and division. And then along the way, as I was
Starting point is 00:07:42 trying to build those, I ended up coming up with a series of logical operators. So first a NOT, then OR, and then finally a NAND gate. Now, I don't have a computer science background, but I can read Wikipedia, which is almost as good. And Wikipedia tells me that if you have a NAND gate, you can essentially build a modern processor. And since we can do a NAND gate in DynamoDB, there's no reason we can't build virtual processors on top of DynamoDB as well. And once you can simulate a virtual processor, then really it's a few short steps from there to having the next EC2. Well, what I was wondering is, now that we have the scientific calculator
Starting point is 00:08:21 stuff potentially taken care of, the next step becomes clearly graphing calculators. Is that something that DynamoDB is going to be able to do, or are you going to have to switch over to Neptune, AWS's GraphDB, which presumably would be needed for a graph calculator? I confess I haven't used Neptune. I have thought a bit about how you might... Well, you and everyone else, but
Starting point is 00:08:39 that's beside the point, really. But I have thought about how you might use DynamoDB for a graphic calculator. And really the answer here, obviously, is that we're going to have to turn to the console. The console will show you the rows in your DynamoDB database. And so we just use very narrow column names, and then we fill them with ones or zeros,
Starting point is 00:08:59 and then that will allow you to draw shapes in the console. I'm thinking through the ramifications of that. If you're able to suddenly start draw shapes in the console. I'm thinking through the ramifications of that. If you're able to suddenly start drawing shapes in the console, that would put the database system, now a graphing calculator, significantly further ahead than other AWS services, like, for example, Amazon QuickSight, which ideally is a visualization tool and in practice serves as a sad punchline
Starting point is 00:09:22 that we're hoping they improve faster than Salesforce can ruin Tableau. But right now it's like watching a turtle race. Exactly. And I think this is the flexibility of having a general purpose compute platform. You can do arithmetic operations, you can do your scientific calculator, but then you can build on that
Starting point is 00:09:38 to do more sophisticated things like visualizations. And I think it's a shame that more people haven't tapped the power of DynamoDB already. I keep hoping to see further adoption. So changing gears slightly on this, you have an apparently full-time day job that is not building things like this, which first, may I just say, is a tragedy for all of us. But secondly, what you do is interesting in its own right. Tell me a little bit about it. So I work for an organization called Wellcome Collection, which is a museum and library in London.
Starting point is 00:10:09 And that is welcome with two Ls. Welcome with two Ls. It ruins your ability to spell the greeting. I'm waiting for AWS to acquire them. Anything that has a terrible name with extra letters, vowels, etc. seems like it is exactly on brand for them. I couldn't possibly comment. Of course not. So what does the Wellcome Collection do?
Starting point is 00:10:28 So we are a museum and library, and we primarily think about human health and medicine. So obviously we've got a lot to think about right now. And one of the things we do is we have a large digital archive. So that's a significant quantity of born-digital material. Somebody actually gives us their files, their documents, their presentations, their podcast recordings, and also a significant amount of digitized material where we've got some book in the archive, we take a photograph, we can put that photograph on the internet,
Starting point is 00:10:55 and then people can read the book without actually having to physically come to the museum. And what I work on currently is the system that's going to hold all of those files, because it turns out that if you have 60 terabytes of stuff, and you just put it on a hard drive and leave it in the closet, that's apparently not so great. It's great right up until magically it isn't, or so I'm told. But yeah, you're right. Every time I talk about long-term archival storage, it seems like there's a difference in terms. Oh, some of our old legacy archives are almost five years old. It's a very different story when we're talking in the context
Starting point is 00:11:28 of a library. People who believe the internet is forever, my counter-argument to them is, great, do me a favor. What is the oldest file on your computer? And that tends to sometimes be an instructive response. Absolutely. I mean, our digitization program goes back about a decade at this point, so we've been keeping files for that long. But then we're looking very far into the future with the archive in general. And in fact, rules vary around the world, but in the UK, certainly, the standard rule is that if you've got an archive about a living person, you typically close that archive for 70 years after their death. So that means they're gone, anyone who remembered them is gone,
Starting point is 00:12:06 and also probably their children and maybe their grandchildren as well are gone because particularly when you're dealing with medical records, people may not want to know that their grandfather was in this particular hospital. So we are planning very much decades or even in some cases centuries into the future. So when you're looking at solutions that need to transcend decades, does that mean that cloud services are on the table, off the table, part of the solution? How do you approach this?
Starting point is 00:12:39 So we, for the work we're doing, we very much do use cloud services. We're mostly running in AWS, and then we're going to start doing backups into Azure soon because you don't really want to rely on a single cloud provider for this sort of thing in case AWS hear what I'm doing with DynamoDB and close our account. And in part because organizations like AWS, like Microsoft Azure, they have much more expertise than we can have in-house on building very robust, reliable systems. So if you imagine a 60 terabyte archive and you're holding that locally, that's probably at the limit of your personal expertise on how to store that amount of data safely. If you give 60 terabytes
Starting point is 00:13:17 to the Amazon S3 team and say, this is a lot of data, they will laugh at you because their entire job is around storing large amounts of data, ensuring it lasts a very long time, ensuring that when disks fail, they get replaced and the data is replicated back onto the new system. So for us, we've really embraced using the cloud as a place to put all this stuff because a whole lot of problems around, is this disk still going to work in two years, are solved for us. There's another challenge too, in some ways, as you look at larger and larger data sets and looking at cloud providers, one of my favorite tools in Amazon's archival toolbox has been this idea of Glacier Deep Archive, where the economics are incredible. It's $1,000 per month per petabyte, which is just lunacy scale pricing.
Starting point is 00:14:06 But retrievals can take 12 to 24 hours, depending upon economics and how you want to prioritize that. And that works super well in scenarios where you're a company and you need to keep things around for audit or compliance purposes. And your responsiveness to those requests is going to be measured with a calendar rather than a stopwatch. But for a library where you need to have a lot of these things available online when people request them, 12 to 24 hours seems like an awfully long time to sit there and watch a spinner go around on a website. It is and it isn't. 12 to 24 hours is actually, in the context of some library things, quite fast. If you're requesting physical objects, certainly, there are a lot of things in the library you can't just go and pick up off a shelf. London real estate is expensive, so about half of our physical collection
Starting point is 00:14:54 actually lives in a salt mine in Cheshire, and if you want to see it, you make a special request to us, and a van drives to the salt mine and picks it up for you. In a digital context, we generally just refer to the salt mine as Twitter. Exactly. The way we actually handle this for most of our work is we have two copies of everything, because you never want to have just one copy of it, because then you're one fat-fingered delete away from losing your entire archive. So we have one copy that lives in standard IA, that's the warm copy. That's the copy that we can call up very quickly if someone wants to look at something on a website. And then we have a second copy
Starting point is 00:15:30 that lives in Glacier Deep Archive. And then that's separate. Nothing should be reading that. Nothing should be really touching that. But we know there's a second copy there if something terrible happens to the first copy. It comes down to the idea of what the expected tolerances and design principles going into a given system are. You're talking about planning things that can span
Starting point is 00:15:51 decades into centuries. And I'm curious as to how much the evolution of what it is you're storing is going to change, grow, and impact the decisions you make. For example, if I'm starting a library in the 1800s, the data that I care about is effectively almost entirely the printed word and ideally some paintings, but that's sort of hard to pull off. As we continue to move into the 20th century, now you have video to worry about
Starting point is 00:16:19 and audio recordings becoming a technology. And nowadays we're seeing much higher fidelity video and larger and larger objects while the cost of storage continues to get cheaper over time as well. So I'm curious as to how you're looking at this. Today's 60 terabyte archive could easily be 60 exabytes in 20 or 30 years. We don't necessarily know. How are you planning around that looking forward? Well, it's very hard to predict the future. And if I could, I would have made some very different life decisions. So what we've done instead is just try to build it in quite a generic way, in a way that doesn't tie us too strongly to the content that we're storing.
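As a rough illustration of the two-copy pattern Alex described a little earlier (one warm copy in S3 Standard-IA, one cold copy in Glacier Deep Archive), a minimal sketch might look like this. The bucket names and key are made up, and Wellcome's real storage service does considerably more (verification, versioning, and so on):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names and key, purely for illustration.
WARM_BUCKET = "example-storage-warm"
COLD_BUCKET = "example-storage-cold"
KEY = "digitised/b1234/page-0001.jp2"

with open("page-0001.jp2", "rb") as f:
    body = f.read()

# Warm copy: Standard-IA, cheap to store but still instantly readable
# when somebody views the item on the website.
s3.put_object(Bucket=WARM_BUCKET, Key=KEY, Body=body, StorageClass="STANDARD_IA")

# Cold copy: Glacier Deep Archive, effectively write-only in day-to-day
# operation; it exists as insurance if something happens to the warm copy.
s3.put_object(Bucket=COLD_BUCKET, Key=KEY, Body=body, StorageClass="DEEP_ARCHIVE")
```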
Starting point is 00:17:00 The storage service we've built mostly just treats the files entirely as opaque blobs. It puts them in S3, it puts them in that Glacier Deep Archive, it checks they're correct, but it doesn't care if you hand it a JPEG or a movie file or a Word document. It's just going to make sure that file is tracked and is sorted vaguely sensibly. And we're hoping that that will give us the flexibility
Starting point is 00:17:24 to continue to change the software that supports it as our requirements change. One of the things we were very conscious of is any software that we write in 2018 and 2019 to do this sort of thing is going to be obsolete and thrown away probably within a decade, certainly by 2040. And so we wanted to design something and store the data in a way that was not tied to a particular piece of software and that somebody could come along in the future and pull it back out again and understand how it was organized or start adding their own files to it if our software is long gone. If you're looking to wind up standing up infrastructure but don't want to spend six months going to cloud school first,
Starting point is 00:18:10 consider Linode. They've been doing this far longer than I have. They're straightforward to get started with. Their support is world-class. You'll get an actual human empowered to handle your problem rather than passing you off to someone else like some ridiculous game of ticket tennis. And they are cost competitive with any other provider out there with better performance in almost every case. Visit linode.com slash morning brief to learn more. That's
Starting point is 00:18:36 linode.com slash morning brief. Historically, when I was doing this stuff with longer term archival media, quote-unquote, you know, those special CD-Rs that are guaranteed to last over a decade, now the biggest problem is finding something to read them because technology moves on. Bit rot became a concern. The idea that the hard drive that you stored this on doesn't wind up working, or there's media damage, or it turns out that there was a magnet incident in the tape vault. Whatever it is, the idea that eventually the media that holds that data winds up eroding underneath it,
Starting point is 00:19:13 rendering whatever it stores completely unrecoverable. How do you think about that? We think about that a lot because we still have a lot of that magnetic media. We still have Betamax cassettes and VHS tapes and CD-ROMs. And one of the big things we're currently doing is a massive AV digitization project to digitize as much of that as possible before it becomes unreadable.
Starting point is 00:19:38 I think if I'm remembering correctly, Sony stopped making Betamax players a number of years ago. So the number of players left in the universe and the number of spare parts is now finite and is only going to get smaller. And even though those tapes are probably good, might be good for another 10 years in our temperature control vaults, we and a lot of similar organizations are really having to prioritize digitizing that and converting it to a format that can be stored in something like S3,
Starting point is 00:20:07 because otherwise it's just going to be lost forever. Do you find that having to retrieve the data every so often and validate that it's still good and rewrite it is something that is viable for this? Does that not solve the problem the way it needs to be solved? I've dabbled looking into a couple of options at this stuff years ago and never really took it much further than that.
Starting point is 00:20:25 So I'm coming at this with a very naive perspective. We've never looked at this in detail, but we have done exercises where we pull out large chunks of the archive and completely recheck some of them and validate the SHA-256 of the thing we wrote six months ago is indeed still the SHA-256 of the thing that's now sitting in S3.
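A minimal sketch of what one of those per-object checks might look like, assuming the expected digest is recorded somewhere alongside the file. The bucket, key, and digest here are placeholders, and the exercise Alex describes runs this sort of thing across the whole archive in parallel rather than one object at a time:

```python
import hashlib

import boto3

s3 = boto3.client("s3")


def s3_sha256(bucket: str, key: str) -> str:
    """Stream an object out of S3 and return its SHA-256 as a hex digest."""
    digest = hashlib.sha256()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    for chunk in body.iter_chunks(chunk_size=8 * 1024 * 1024):
        digest.update(chunk)
    return digest.hexdigest()


# Placeholder values, for illustration only.
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = s3_sha256("example-storage-warm", "digitised/b1234/page-0001.jp2")
assert actual == expected, f"Fixity check failed: {actual} != {expected}"
```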
Starting point is 00:20:44 We did this for a significant chunk of the archive recently. It was pretty cost effective. We were able to run it on Amazon Fargate, scaled it out massively, ran in parallel. It was very nice. The biggest cost was in fact the cost of all the GetObject calls we had to make against S3. But it was a couple of hundred dollars at most. So the sort of money where if we felt it was important to do, we'd just do it again. It feels like
Starting point is 00:21:12 on some level, that's what things like S3 have to be doing under the hood, where they have multiple copies, this idea of erasure coding or information dispersal. The idea that you can have certain aspects of it rot and it doesn't tarnish the entire thing. You only need some arbitrary percentage. And we've played with this on Usenet in years past
Starting point is 00:21:27 with parity files. Download enough objects and you have enough to reconstruct the whole. Exactly. And there are people at Amazon whose entire job revolves around making sure S3 doesn't lose files, which is part of why we use it,
Starting point is 00:21:39 because they're going to think about that problem much more than we can. And we basically trust that if we put something in S3, it's probably going to think about that problem much more than we can. And we basically trust that if we put something in S3, it's probably going to be fine there. The biggest thing we're worried about is making sure we put the right files into S3 in the first place. And that's always the other problem too, which is a, if you'll pardon the term, library problem. You have all this data living in various storage systems. That's great. How do you find it? It feels like it's that scene from Indiana Jones and one of the movies. I don't know what it was,
Starting point is 00:22:09 Indiana Jones and the Impossible Cloud Service, where they have that warehouse scene at the end where everything for miles is just this enormous warehouse and everything's in crates. How do you find it again? Which system did that live in? That always seems to become the big challenge. And we see it with everything, be it Lambda functions, DynamoDB tables, still a great calculator, and other things. Which account was that in? Which region was it in? Expanding that beyond that to data storage feels like unless you're very intentional at the beginning, you're never going to find that again. Yeah. So one of the things we did that we made quite a conscious decision to do early on was we tie everything back to an identifier in another system.
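To make that concrete, here is an invented example of the idea: the storage record is keyed by the same identifier the library catalog already uses (the kind of book number Alex mentions in a moment), rather than a freshly minted UUID. The field names are hypothetical, not Wellcome's actual schema:

```python
# Hypothetical storage-service record, keyed by a catalogue identifier
# ("b1234" is the style of book number discussed below) rather than a UUID.
record = {
    "external_identifier": "b1234",  # resolves to a record in the library catalogue
    "space": "digitised",
    "version": 1,
    "files": [
        {"path": "b1234/page-0001.jp2", "sha256": "<checksum goes here>"},
        {"path": "b1234/page-0002.jp2", "sha256": "<checksum goes here>"},
    ],
}

# Anyone holding this record can get back to the catalogue (and vice versa)
# just by reading the identifier, with no extra lookup table to maintain.
print(record["external_identifier"])
```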
Starting point is 00:22:47 So all of our files will be associated with at least one other library record. So they might be associated with a record in our book catalog. They might be something in the painting records. It might be something in the archive catalog. And then that's the identifier we use to hold the thing in the storage service. So you can look at a thing in the storage service and say,
Starting point is 00:23:08 oh, this has the identifier B1234. I know that's a book number. I can go and look in the library catalog and find the book that's associated with it, and vice versa. So essentially we're pushing out the problem of organizing that to the librarians and the archivists because they have very strong opinions and rules about that sort of thing. And it's much easier just to let them handle it in one place than to try and replicate all that logic again in a second place. Do you think that there is a common business tie-in as far as the library looking to store things on that kind of timeline?
Starting point is 00:23:46 Seems like it's a very specific use case and problem space that any random for-profit company is going to take one look at and say, oh, that's not really our area. We don't know what next quarter is going to hold, let alone the far distant future. Do you think that's a mistake? Do you think that there are lessons that can be learned here that map to everyone? And where do those live? Certainly a lot of what we've been doing, I think, is more widely applicable than just the libraries and archive space. And I've been writing about a lot of it. We've been sharing a lot of what we've been doing, both for the benefit of other libraries and archives, but also for people who
Starting point is 00:24:19 might find some of this stuff useful. And I think one of the big decisions we made early on that I think was really valuable and would serve a lot of companies well is that idea that we assumed all of our software would eventually be thrown away. That at some point, all of the code we've written is going to be thrown in the bin and someone is going to have to do a migration to a new service.
Starting point is 00:24:42 Whether it's using JavaScript or it's running on DynamoDB or we've progressed past cloud computing and we're into nebula computing. Whatever it is, we assume the software will become obsolete and we very intentionally optimized to make that easier for whoever's got to do that in future. And a lot of the time I think I see companies build something that works great right now, but the moment it breaks or it needs to be rewritten, it's going to be a huge amount of work. And just thinking a bit about that earlier on would have made it much easier to move away when they eventually need to do so. Part of me wonders on some level, though, that when I'm building up
Starting point is 00:25:21 a company that doesn't necessarily know whether it's going to exist in a week, it feels like that is such an early optimization. Like the things that I worry about, even in the course of my business, which is fixing AWS bills: what if Amazon dries up and blows away? That is fundamentally very core to my business as far as disruptive things that might happen. But if that happens, everyone is going to be having challenges. It's going to be a brave new world. And building out what I've done in a multi-cloud style environment or able to be pivoted easily to other providers
Starting point is 00:25:52 just hasn't been on the roadmap as a strategic priority. Maybe that's naive, but I honestly don't know at this point. No, and we're still, you know, in the grand scheme of human history, we're still very early in these things. And yeah, maybe AWS will go away next week.
Starting point is 00:26:08 I certainly hope not. But when we were doing this, we didn't, obviously we thought about this a lot more than a lot of people would because we really are expecting to optimize for that very long use case. So what I'm suggesting is not that you prepare yourself to pivot multi-cloud, that you prepare to be able to run workloads anywhere, you'd be able to shift your workloads around dynamically, but it's just taking a little look at your decisions saying,
Starting point is 00:26:32 is this decision going to lock me in in a really aggravating way? And is there just a slightly simpler way that I can do this that is going to be much easier to unpick from later? One of the big ones for us was, for a long time we were looking at using UUIDs to store everything because UUIDs are brilliant. You never have to worry about uniqueness or versioning.
Starting point is 00:26:53 It's just handled for you. But then we thought about what it would take to unpick those UUIDs later and work out what they meant. And we realized, well, alternatively, there's a great identifier over here sitting in the library catalog. Why don't we just use that instead and throw away all these UUIDs?
Starting point is 00:27:11 And that wasn't a huge amount of work, right? It was just a case of deciding which of these two strings do we put into the database. But I think long-term, that's going to make a massive difference to how portable this system ends up being. That's one more topic I wanted to get into before we call it a show, is it ties together the two
Starting point is 00:27:31 things that you've been doing, maybe. Namely, how did you get into looking at systems like DynamoDB and seeing a calculator, and possibly the archival stuff too, in such a weird and unusual way? It's not common, and it is far too rare of a skill. How did you get like this is the question I guess I'm trying to ask, but without the potentially insulting overtones. No insult taken. So I think, like a lot of people, my first community on the internet was primarily fannish. I grew up on the internet reading fan fiction. And for people of a certain age, they will remember sites like fanfiction.net, Wattpad, and the big one at the time was LiveJournal, and huge fannish discussions were conducted on LiveJournal
Starting point is 00:28:13 and I got to know a few people there and a friend of mine was friends with the head of LiveJournal Trust and Safety and if you've never come across it Trust and Safety is this fascinating role where you have to look at every aspect of a system and think about how terrible people will misuse it to hurt people and we're talking here things like stalkers like abusive exes like that co-worker who doesn't know what boundaries are and you've got to work out how for example a social media site is going to be completely ripped to shreds by these people and used to hurt users. Because
Starting point is 00:28:45 if you're not doing that in the design phase, those people will do that work for you when you deploy to production, and then people get hurt, and then you're extremely sad. And so that was the thing I was thinking about very early on on the internet, was I was talking to these people, I was hearing their stories, I was hearing how they designed their services to prevent this sort of abuse. And I got into this mindset of looking at a system and trying to think, well, okay, if I wanted to do something evil with this system, how would I do it? And in turn, when I'm building systems, I'm now thinking, what would somebody evil do with this, and how can I stop them doing it?
Starting point is 00:29:19 It almost feels like an InfoSec-style skill set. Yeah, there's definitely a lot of overlap there, and a lot of the people who end up doing that sort of trust and safety work are also in the InfoSec space. I think there's a lot of wisdom buried in there. And I think that, frankly, we've all learned a lot today. If nothing else, how to think longer term about calculator design. Thank you so much for taking the time to speak with me. If people want to hear more about what you have to say, where can they find you?
Starting point is 00:29:46 I'm on Twitter as AlexWLChan, that's W-L-C-H-A-N. And I blog about brilliant ideas in calculator design at alexwlchan.net. Excellent. And we will throw links to that in the show notes. Alex Chan, senior software developer and code terrorist. I am cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. Whereas if you've hated this podcast, please leave a five-star review on Apple Podcasts
Starting point is 00:30:17 and a comment telling me exactly why I'm wrong about Texas Instruments. This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold. This has been a HumblePod production. Stay humble.
