Screaming in the Cloud - Generating AI Laughs with Daniel Feldman

Episode Date: July 25, 2024

Social media snark and rightfully criticizing industry trends? This episode of Screaming featuring Daniel Feldman must be a match made in heaven for Corey. Daniel's wit is only matched by his engineering expertise, and his insights help us examine the current state of GenAI. This conversation breaks down how you can have fun with today's AI tools, why product excitement can often hinder advancement, and even carries a tune with a bit of karaoke. As a bonus, Daniel indulges Corey about his signature conference attire. If you ever wanted to learn more about Mr. Quinn's suit game, then this is the episode for you!

Show Highlights:
(0:00) Intro to episode
(0:48) Who is Daniel Feldman
(1:25) Backblaze sponsor read
(1:51) Sounding off on social media
(3:13) Having fun with AI
(5:27) The origins of quackernews.com
(6:36) Practical uses of AI
(8:40) Enthusiasm vs. progress
(12:30) AI's online backlash
(15:45) AI, slideshows, and karaoke
(18:14) Firefly sponsor read
(18:50) Tricks to prompting DALL-E
(21:32) Markdown and software update naming
(25:13) Exclusionary humor in tech
(26:19) re:Invent and Corey's sense of fashion
(28:32) Getting publicly recognized for Twitter posts
(30:43) Where to find Daniel

About Daniel Feldman:
Daniel Feldman is working on new projects in the service identity space. He previously was an early engineer at Scytale and deployed SPIFFE service identity at over a dozen companies. Before that, he was a principal software engineer at Veritas.

Links Referenced:
quackernews.com
Twitter: https://x.com/d_feldman
Bluesky: https://bsky.app/profile/dfeldman.bsky.social
Mastodon: https://hachyderm.io/@dfeldman
SPIFFE: https://spiffe.io/
Solving the Bottom Turtle: https://spiffe.io/book/

Sponsors:
Backblaze: https://www.backblaze.com/
Firefly: https://www.firefly.ai/

Transcript
Starting point is 00:00:00 Yeah, Markdown definitely is one of those technologies. It's sort of a meta technology that underlies a lot of different systems in a lot of different ways. And I have so much respect for the people who come up with stuff like that, because, you know, coming up with something that's used in a thousand different products is really impressive and really challenging. Welcome to Screaming in the Cloud. I'm Corey Quinn. I tend to spend a lot of time running my mouth on Twitter, but I spend even more time reading other people running their mouths. Daniel Feldman has been a recurring presence on my Twitter feed
Starting point is 00:00:34 with insightful things that I would not have expected to come out of someone who is not, you know, a Twitter celebrity of sorts. No, Daniel just is consistently funny and generally spot on, particularly when it comes to tech. Daniel, what's your story? From whence do you come? Sweet. Thank you so much, Corey. I also spend far too much time on Twitter running my mouth a little bit. And I've greatly enjoyed your antics over the last few years. My story is that I do actually work in technology. I've been working on security products for years, and I can talk quite a bit about that probably ad nauseum, but I don't post about that very much. Mostly I'm posting about interesting things I find in the world of AI, interesting tech stories related to security, that sort of thing. A few jokes, a few jokes at AWS's expense, and I've loved your podcast for a long time, so really happy to be here. Backblaze B2 Cloud Storage helps you scale applications and deliver services globally.
Starting point is 00:01:36 Egress is free up to 3x of the amount of data you have stored and completely free between S3 compatible Backblaze and leading CDN and compute providers like Fastly, Cloudflare, Vulture, and Coreweave. Visit backblaze.com to learn more. Backblaze. Cloud storage built better. Part of the problem that I've been running into, and I imagine you do too, when you work in security, or in my case, when you work in cost, a lot of the things you encounter in the course of your day-to-day work don't really make good fodder for social media because it turns out, among other things, clients have a certain expectation that you're not going to be airing their dirty laundry
Starting point is 00:02:08 to the outside world. I have a standing policy that I don't talk about individual things I encounter in costing environments until I see at least three companies suffering from it because then, okay, now this is a pattern, not one company finding itself in a weird position. If you can de-identify yourself through these things,
Starting point is 00:02:24 that tends to be a bit of a problem. And I still periodically get people asking, I didn't realize that we were a customer of yours. Nope, you're not. Just this is a problem that everyone tends to have. The collective suffering, the esprit de corps of what we all tend to experience in tech, but most people, for whatever reason,
Starting point is 00:02:42 don't shoot their mouths off about it on the internet as much as I'd like. Absolutely. It's the same in security. Of course, every company doesn't want to air its security dirty laundry, just like they don't want to air any of their dirty laundry. But there's always new security stories every single day. Someone's getting hacked almost all the time these days. So there's always something to talk about that's out there in public that I found. And unlike cost, a lot of what we work on is open source. There's a lot of new open source tooling all the time, a lot of discussion about open source tooling.
Starting point is 00:03:10 So there's plenty to talk about on my end, talking about cloud security. From where I sit, one of the problems that I've been, I guess, encountering recently lately is because I've been somewhat negative on Gen AI, which is, you know, a thing you might have heard about if you've, you know, not been living under a rock. And that's been interpreted as, oh, Corey hates AI. No, I think it's hilarious. I use it for a number of different things, and I don't want to go back to having to write code without Copilot. I am not opposed to the technology. What I'm opposed about is the breathless enthusiasm that companies are ramming it down our throats on. This will completely transform the way you run your business. All I do is run my mouth in
Starting point is 00:03:50 the breeze and it hasn't grossly transformed the way that I operate from that perspective. It's a useful tool, but math did not suddenly radically change across the board when calculators were invented. I'm right there with you. AI is an incredibly powerful tool. The enthusiasm is a little bit over the top. So I've been working on AI side projects for quite a long time now. Maybe one of my first things, a few people might remember this, is in 2019, I did an AI D&D Twitter bot. So you could actually play D&D socially with your friends, typing in tweets, replying to tweets, and it would simulate a little bit of a mud D&D type game. That was incredibly fun. I shut that down when it got a little weird and people started going a little dark on that. So I just killed it. that is the problem uh but since then i've done a number of different things just learning as much as i can i just suck up as much information as i can about this technology we started a little conference here in minneapolis called the applied ai conference it's twice a year spring and fall
Starting point is 00:04:53 we have different people from around town just talking about their applications for generative ai in the many fortune 500 enterprise e-corporate environments around here. Fantastic conference. I really recommend it. And we also have a meetup. I learn all kinds of different things at that meetup, monthly meetup on AI. I was doing AI art. I did a couple of presentations on AI art well before Dolly came out, just messing around with stuff like VQGAN. If you've heard of VQGAN, that was sort of the
Starting point is 00:05:25 predecessor technology to diffusion models. Learning as much as I can about this technology, like I said, lately, one of my side projects has been a website called quackernews.com. You might know Hacker News. Oh, yes. The horrible orange website. Yes. I hate reading Hacker News. It's just full of people who are trying very hard, I would say, which I'm a little bit more sarcastic. I like a little bit more of a fun tone to things I read. So I made quackernews.com. Every day it downloads the latest Hacker News headlines, downloads every website that's linked to on Hacker News, downloads the top 20 or so comments on each article, and just makes fun of
Starting point is 00:06:02 them. Just makes it a little bit more lighthearted. Like NGATE used to do manually. Exactly. I was inspired by NGATE. Unfortunately, NGATE died three or four years ago, but that was done manually. But quackernews.com, it works every day. So I do that. Lately, I've been messing around with AI-generated podcasts. So I just uploaded an episode of the Quacker News podcast. AI-generated just talks about Hacker News stuff. That has about two subscribers right now. It's not a very popular podcast, but just learning as much as I can about the technology and having fun with it, trying to have fun with it,
Starting point is 00:06:34 learn more, figure out what the applications are. Just in my own work, I found generative AI to be incredibly helpful with the tedious parts of coding. So I can now rapidly create API endpoints. A lot of what I used to do is write API endpoints by hand. Took forever. Checking values that come into your system,
Starting point is 00:06:54 unit tests, database queries. There's a wide variety of things that are incredibly time-consuming in software engineering that AI can automate reasonably well most of the time. You still have to check over the results. I catch it making mistakes on a fairly regular basis. When you get into more complicated things, it is not as helpful, but it can still sometimes
Starting point is 00:07:15 give you a decent outline. I've been using lately, I've been using the latest Anthropic Cloud 3.5, which is the most powerful AI model. It can do quite a bit of stuff. So I was actually working on a, it's a fun little side project. It's a Tetris game, just Tetris that everyone's familiar with. We actually play by moving your arms.
Starting point is 00:07:37 So it's designed to be, you know, just a little bit of a, not a workout, but get your blood flowing a little bit. So you move your left arm, the block goes left. You move your right arm, the block goes right. You're sitting here flailing like a wacky arm flailing inflatable tube man
Starting point is 00:07:48 outside of tire stores. Exactly, exactly. And you know what? It works pretty well, actually. And I was able to get the Tetris game. Claude knows how to write a Tetris game in WebGL. That took about an hour, maybe, to get a really nice, basic Tetris game that you play with the keyboard. And then I was able to get the pose detection. There's a pose detection library that Google released, able to download the docs for that, paste them into Cloud, and it could process the pose end points and did a fantastic job. I still had to connect them. That's the thing. And I'm still actually working on some of the details of that, just getting it to work really smoothly, but it's playable and that's in a couple of days.
Starting point is 00:08:27 So just the technology really eliminates a lot of the tedium from software engineering, which is what I love because I hate tedium. Who doesn't? I think the implications on the industry overall, I don't know. I'm not smart enough to predict that. If I could, I would be a lot richer than I am. Yeah, you have the same arguments
Starting point is 00:08:41 a generation ago about Stack Overflow. If people just show how to do all this stuff online, are people just going to copy and paste their way to success? It's like, well, I've been doing it for 15 years, so shut up. Yeah, there's always the old guard that turns against new technologies and says, eh, not so much. But there's a far cry between that and the breathless enthusiasm, this will transform every aspect of what you do. That clearly every problem that a cloud customer has today is, of course, a generative AI problem. The number one Gen AI problem companies have for real is that their cloud provider won't shut up about Gen AI long enough to fix the things that they actually need that are core and central to their businesses. Absolutely. Absolutely. And, you know, I do understand where it's coming from, because you talk to executives at these companies, and I do on a regular basis. I have friends who are VPs at various cloud providers, various to Claude and they type in, you know, write a script that does X, Y, and Z, write an API for X, Y, and Z, and it does that in two seconds. And then they start to wonder, well, can my software engineers just do this? Maybe I need fewer software engineers.
Starting point is 00:09:59 It's a very natural question to ask. In reality, the vast majority of my time that I've spent in the working world at large companies isn't writing code. The writing code is important. The vast majority of my time has been developing new ideas, advocating for new ideas, figuring out how to implement new ideas effectively and efficiently, figure out what the customers actually need. And that work is not automatable. That's very hands-on, human-centered work. Writing the code, if you automated all the writing code, okay, so I saved 10 hours a week. I mean, that's significant, but that's not the majority. Part of the challenge that I keep smacking into when I'm talking to folks about this is that when you talk, the executive level is excited, like, oh, wow, this will optimize and improve performance of a bunch of things.
Starting point is 00:10:46 But you start digging into how exactly it's, oh, great, we're going to be able to get rid of significant swaths of our customer support team. It's really, really, is that how you envision this going out? Honestly, the fact in some cases you have the work you do for your customer support team implies you need to fix your documentation. But even a step beyond that, by the time I reach out to a customer support person, my answer is not going to be on one of the boxes that pops up of the four options I can give.
Starting point is 00:11:10 Make me start interacting with it through a chat methodology. And okay, but even if you have that modality with a person who is on the other end of like one of those intercom boxes, who's actively empowered to fix the problem that I have, I still hate doing it because it feels like it's already a failure of whatever service or product I'm trying to use. Making me talk to a robot instead of a person is not likely to make the customer experience better.
Starting point is 00:11:34 Yes, it saves you some money, but on some level, when you start driving customers away because they get frustrated and decide to go to one of your competitors, have you saved anything at all? Absolutely. And customer support, maybe the frontline customer support is a little bit formulaic, but when you get to the backline engineering,
Starting point is 00:11:50 they are solving problems that involve multiple components that they've never seen before under a tight timeline, figuring out bugs that no one has ever encountered before because the combinatorial complexity of systems like AWS is just enormous.
Starting point is 00:12:06 And then it's interacting with all kinds of third-party services. There's a lot of degrees of freedom there. So I have enormous respect for the customer service people. I think that in some ways that's more challenging than the software engineering because software engineering, you just make something, but you generally know if it works or not. Customer support, it's an incredible challenge and I have enormous respect for anyone who chooses to do that. APIs are deterministic in a way that humans are not. And then the other thing is that
Starting point is 00:12:31 I'm just seeing this enormous backlash against AI. When I talk about AI online, like half the people roll their eyes. Actually, most of my most successful tweets have been failures of AI because people like making fun of AI. They like thinking about how it can fail, how it's not as good as a human. For example, last week I tweeted this. I didn't
Starting point is 00:12:51 create this. I found this on the internet. I tweeted this gymnastics competition that was made using a text-to-video service. It was just hilarious. People would disappear. People had four legs. People had eight legs. People would disappear into the gymnastics mat. Probably my most popular tweet of all time, actually, it got picked up by a number of different blogs. And a lot of people enjoyed that. And I think the reason is that there's just this backlash. People are sick of hearing about AI. They're a little bit afraid of AI. There is a genuine fear there that it will take their job. Not only is this amazing transformative technology, it's going to put you out of work too. Isn't that exciting?
Starting point is 00:13:29 Why aren't people happier about it? Exactly. It's a strange place to be in. I've certainly found that, like if I tell people I've made this cool website, Quacker News, that makes fun of Hacker News, they love it. And then if I tell them it's made automatically every day
Starting point is 00:13:43 using an LLM, they get a little, you know, I don't know. This is also, I think part of the honest answer is that there's this expectation that people will want to consume something you couldn't even be bothered to create. And like, it's going to write my marketing copy for me. Okay, great. Good for you. But if a human couldn't be bothered enough to craft the message you're sending out, why are you presuming that people will be happy and will be thrilled to read and internalize that? It's not lost on me that my newsletter, combined with this podcast, takes roughly a year of human time
Starting point is 00:14:15 every time that gets sent out. And I have to be cognizant of that just in terms of how long people spend reading it and listening to it. There's this idea that I have to be worthy of that investiture of human time. And yeah, I don't let AI write the dumb things out. I did do an experiment for a while,
Starting point is 00:14:32 letting it craft the placeholder text about things. And every once in a while, it hits it out of the park, not with its insight around what it's talking about, but with a hilarious turn of phrase that I will then capture and use possibly in a different arena. Like it never would have occurred to me to say that something was as melodious sounding as a bag of hammers being thrown down a spiral staircase during an earthquake. But I'm definitely going to
Starting point is 00:14:53 use that phrase now. Amazing. But yeah, a lot of the world is signaling. So, you know, why do we do anything? Why do we present at conferences? Why do we go to college? Why do we apply for jobs? So a lot of it is signaling. If I put the effort in, that shows that I'm serious about that effort and I'm serious about that idea, serious about that project. It doesn't necessarily say anything about my capabilities. I never use anything I learned in college or grad school. Of course, no one does. But what it does is it signals that I put in some effort at some point in my life where I was confident that I would be working in this field. And when you use AI, of course, it's sending the opposite signal. It's sending the signal that you're not putting any effort in. So that's definitely a challenge that I think every company in this space is going to have to
Starting point is 00:15:37 really think hard about how to solve that, how to fit AI into society. Because if you use AI to take shortcuts, people won't trust you. I found it's been great for generative work as far as building images to put in conference slides. Otherwise, I'm stuck with bad Photoshop or paying for stock photography. And I am not opposed to paying artists for their work. Don't get me wrong. But it also feels a little weird to be paying for a stock photo of a data center rack, like just a hallway in a data center. Like that is great. If I'd had the wherewithal to take a picture of the last time I was in a data center, that would have worked fine.
Starting point is 00:16:10 And especially when you want to put a giraffe in the middle, no stock photographer in the world is going to do a deal with a zookeeper for that photo shoot. But I could make a robot generate that super quickly. And even if I tell it to spit it out in 16 by nine aspect ratio, now it's the perfect size to put on a slide. And it works to accentuate the probably ludicrous point that I'm making in that moment of the talk
Starting point is 00:16:30 without me having to sit there and figure out the best way to express that sentiment. I mean, people don't seem to realize for a lot of conference talks, most of my slides are placeholders. So I have something to stand in front of, but it's a talk. I'm not reading my slides to you. If you can figure out what my point is from a bunch of pictures in my slide next, great. If you can get to that level, I just narrated something for you and that's not as engaging. Absolutely. So Corey, do you know what PowerPoint karaoke is? I love that. I've also heard it called Ignite Karaoke and a few other terms as well. I do well at it because I suck at preparing for things. Like for those who are unaware, it's you have a bunch of slides that show up and you're seeing them on screen
Starting point is 00:17:12 at the same time the audience is. And your job is to tell a story about it. One of the tricks as a general rule is to have a theme that you're going to go with and then roll with it, come hell or high water. One of the ones I've always used to do is this is why you should come work with me over at Google, which is terrific. It almost doesn't matter what you're going to see. There's something that you can turn it into a joke on some level there. It's a blast. But a lot of people aren't great at necessarily thinking on their feet, and they just stand there like, ah, and that's a picture of a cantaloupe. Like, yes, it is.
Starting point is 00:17:43 Good work. Thank you for narrating it. Anyway, I did a PowerPoint karaoke event last week, actually. And I have a script. You can find it on my GitHub that generates the slides automatically, randomly, just using ChatGPT, DALI. This is germane to my interest. Yes. And, you know, it takes two minutes. And that is an application of AI where no one expects you to put any work in.
Starting point is 00:18:04 The work is in delivering the slides. It's not generating the slides. And I gave a fantastic five-minute presentation on how lawn gnomes are taking over the world, because that's what ChatGPT decided I should be talking about. Are you running critical operations in the cloud? Of course you are. Do you have a disaster recovery strategy for your cloud configurations? Probably not, though your binders will say
Starting point is 00:18:25 otherwise. Your DevOps teams invested countless hours on those configs, so don't risk losing them. Firefly makes it easy. They continuously scan your cloud and then back it up using infrastructure as code and, most importantly, enable quick restoration. Because no one cares about backups. They care about restorations and recovery. Be DR ready with Firefly at firefly.ai. I have a question for you. When you're using this stuff programmatically with the chat GPT chat interface on the website, you can wind up putting it, I can say, give me a picture of a data center with a giraffe standing in the hot aisle. Great. It'll do that. But then I ask it to describe the prompt that would generate that image, and it gives me three paragraphs of text
Starting point is 00:19:07 that is much more clear. When I query DALI directly, I need to be a lot more verbose and a lot more specific at what I'm looking for unless I want lunacy. Do you find that, is there a decent API endpoint for that these days, or are you doing the multi-prompt approach
Starting point is 00:19:21 where step one, ask something like GPT-4 to say, great, turn this brief description into a good prompt that will generate a great image out of DALI, and then you submit that prompt. I don't have a good solution for that right now. That was not the right answer. I was looking for a better one that I could implement. Like, honestly, the best answer would have been, oh yeah, there's a script in my GitHub
Starting point is 00:19:41 at which point, clickety clickety, and I've saved myself some work here. You're a crappy AI assistant. I've saved myself some work here. You're a frappy AI assistant. I'll have to figure that out. But ChatGPT definitely develops the prompt considerably further. And then the mysterious thing is that it won't output the actual prompt it inputs into DALI. So if I say a picture of a giraffe in a data center, it then generates some paragraph-long description of a giraffe in a data center. And then DALI will be using those sentences and
Starting point is 00:20:07 do some kind of vector embedding and generate an image. But you can never actually get that middle layer prompt out of the system. And that's by design. When you have it generate a prompt, it would submit to DALI and then take that. Perhaps. I don't know. I'll have to check. This gets into the borderline
Starting point is 00:20:24 prompt injection. It's kind of awesome. Yes. Well, the prompt injection stuff, that circles back around to security because there's a lot of possibilities for hacking systems and really interesting ways through prompt injection. Just any system where you're taking inputs from a user and feeding them into an AI, there's probably a prompt injection attack that's possible if you haven't found it yet. There have been a few chat GPT system prompts that have been dumped from various places. And what's amazing is that these are the people at the absolute forefront of AI. I mean, don't get me wrong, Anthropic's doing great work too,
Starting point is 00:20:54 and Amazon hangs out with some people, but the OpenAI folks are great. And they are, and even their system prompts are like in all caps, under no circumstances. And I repeat, like, okay, even these people at the forefront of this are just like the rest of us. They're begging the computer to please do the thing and hoping for the best. It's like, wow, that really does humanize machine learning scientists. Absolutely. And you know what? They're still using bold and underlining in Markdown in their system prompts deep inside their system
Starting point is 00:21:27 that you don't even see, just to try to emphasize things to the AI and make sure it behaves in certain ways. It's funny you mentioned that. Just before we started this recording, I saw a news article in Ars Technica that I put up there because this is my cynical approach. I said, breaking news.
Starting point is 00:21:43 Google finally admits defeat on the Google Docs team and acquiesces to doing something customers actually want. Google Docs now will natively fully support Markdown. It's like, that is amazing. Although I don't, I do have some beef with the Ars Technica author who wrote this because they talked about it as an archaic style of writing. It's like, first, fuck you, buddy. That's how I write. But they're also not wrong, because that is how I used to express emotion in IRC back in the 90s. And yeah, you put asterisks around things because you couldn't embolden things. And smileys, as opposed to the modern-day emoji, or as I refer to them to piss people off, Gen Z hieroglyphics. Yeah, Markdown definitely is one of those technologies. It's sort of a meta technology that underlies a lot of different systems in a lot of different ways. And I have so much respect for the people
Starting point is 00:22:31 who come up with stuff like that because coming up with something that's used in a thousand different products is actually impressive and really challenging. Yeah, the original author of Markdown was John Gruber, daring fireball guy. And he's an Apple pundit, but that's one of the early things that he did. It's like, step one, create this amazing thing called Markdown was John Gruber, daring fireball guy. And he's an Apple pundit, but that's one of the early things that he did.
Starting point is 00:22:46 It's like, step one, create this amazing thing called Markdown, and step two, declare it complete and never touch it again. And then everyone else is taking the ball and run with it. It's like one of the best transition stories. Another one was the band LMFAO, where they wound up creating an album, the original one, like Party Rock.
Starting point is 00:23:03 Then the second one was Sorry for Party Rocking and then disappeared and never did anything else. It's like, oh, we did a thing. Now we're apologizing and we're done. Yes, there's something to be said for the performative art element of that. That's a beautiful aspect. Yeah, similar with tech, the typesetting system.
Starting point is 00:23:19 I used to use that quite a bit when I was writing papers and that sort of thing. You know, Donald Knuth created that. Not the most user-friendly thing in the world, eh? No, certainly not. But he started with version 3, and then he had 3.1, and then 3.14, and then 3.141, getting progressively closer to pi. And at some point after about 10 versions, he just stopped and said it was done. Don't get me started on the versioning aspect of things where it's, okay, great.
Starting point is 00:23:45 Like between version 1.2 to 1.3, there's a whole bunch of syntactical breaking changes in libraries and whatnot. It's, do you think that versions are just numbers that you pick based on vibes? And then you look into their history
Starting point is 00:23:56 of the project and yeah, that's exactly what it is. It's based on vibes and great. But semantic versioning is a thing. And yes, I know it's imperfect, but so is everything in this space. And for God's sake, give me a heads up when you're about to upset my apple cart. Absolutely. Those version numbers never really make too much sense. We actually just earlier this week released a IETF working document with a small group I'm
Starting point is 00:24:19 working with that was version 0.0 of the document because we couldn't agree on 0.1. So I'm right there with you. Oh like that you can wind up getting like it's like asymptotically close to things too which is also weird the thing that drives me nuts is when at one point there was i was following some project early on it was version 3.9 of something great okay so the next one is going to be 4.0 right nope 3.10 which is okay I get going from 9 to 10. Truly, I do, but that's not how numbers work. If you do a sort, something that might naively assume it's a number, great. You're going to wind up with, at that point, like two things being the same because 3.1 and 3.10 are clearly identical.
Starting point is 00:24:59 And then you have a 3.11 that's going to go right between 3.1 and 3.2. And no, I'm old. I'm grumpy. I don't like it. Well, that's when you go to semver.org and read about how certain people have very strong opinions on how versioning should work. Yeah. Honestly, it's one of those hills I'm willing to let other people die on. But I do like embracing jokes like that and humor around that and arguments around that because unlike a lot of the other stuff that I do, it becomes broadly accessible to people. When I make a joke that AMI is pronounced with three syllables instead of two, the way that Amazonians do due to, I don't know what, collective corporate traumatic brain injury or whatnot. But there's this sense that everyone
Starting point is 00:25:37 can participate in that joke and you can have a strong opinion. And of course, it is ultimately meaningless. But when you start talking about arguing about the nuances of deep programming considerations, that's where you need, you must at least be three years into your tech career before you even begin to have an opinion that's sensible on this topic. That's where it's, I don't like the exclusionary humor nearly as much. I like the things that everyone can come in and start sounding off about. Absolutely. And I'm right there with you about the naming thing. The first company I worked at that used AWS, everyone there, probably over 100 engineers, called it Oz, like Wizard of Oz. As in Wizard of?
Starting point is 00:26:13 Oz, yes. And then they actually paid for me to go to reInvent, and I was very confused and probably seemed like a fool that year. I would pay not to go to reInvent at this point. Yeah. Well, I think you don't actually have to pay to attend. You can just show up in Vegas and all the people are there not attending reInvent, wearing reInvent badges. I strongly consider doing that. The challenge is, is one, I love walking the Expo Hall
Starting point is 00:26:35 because there's a lot of stuff that happens there. And two, I find myself in a bunch of weird meetings with folks that is very useful to be able to traverse a hallway where they won't let you in without having the badge. And I also don't want to necessarily, when I'm meeting with clients and whatnot, give the impression, oh, I'm just cheap. I'm just here hanging out with all the other cool kids around the periphery, which increasingly is something I think should be normalized. But I have enough meetings where I just need to talk to people in areas that are tricky to get to. So I do have a question. How do you always have a suit without any wrinkles? That is,
Starting point is 00:27:09 that is just incredible. This is going to amaze people. I know, right? Because I am, I'm a very white man and a very techno forward city, but I learned how to use an iron back when I was a kid and you can use travel irons in hotels. I also know how to pack clothes, which helps. Okay. Well, I like everyone else in tech. I also know how to pack clothes, which helps. Okay. Well, like everyone else in tech, I don't think I've ever worn a suit except for a wedding or a funeral. So it's all news to me. That's why I started doing it, to be honest. I did this as an experiment back in the early noughts. I was working at a company in 2008, well, not early, 2006, 2007. And I showed up for a week every day wearing a button-down shirt that didn't really fit,
Starting point is 00:27:47 the tie I barely knew how to tie, and everyone made fun of me for that week while simultaneously taking me more seriously in meetings because there's something societally hardwired where if someone is in a suit, you give them undue levels of credence. And okay, I'll go ahead and roll with that if that's how people are socialized,
Starting point is 00:28:05 why not? It also amuses me to look like a salesperson that got lost when I'm walking into a conference stage. People start to tune out and then I wind up going off on some technical topic with my typical sense of humor because sarcasm is my first language. And people start to realize, oh wait, this isn't a sales pitch. This guy is an actual lunatic and I'm here for it. Yes, thank you. I am too. It's fun to be underest oh, wait, this isn't a sales pitch. This guy is an actual lunatic and I'm here for it. Yes. Thank you. I am too. It's fun to be underestimated, isn't it?
Starting point is 00:28:29 But also everyone should just have some kind of a trademark. I have to ask you, your Twitter following has gotten sizable enough that you're right around the point, I imagine, where I was when I started to get consistently recognized at conferences and whatnot from my Twitter nonsense. Has that started to happen to you yet? That has happened twice. Not very frequently. It doesn't get better from here.
Starting point is 00:28:51 Spoiler. No, very rarely. Fortunately, probably, because I like being a little bit anonymous. Mostly I attend these open source conferences. Everyone's a little bit countercultural. They don't really spend that much time on Twitter. So I haven't had too much of a problem there, but it is always kind of fun.
Starting point is 00:29:08 Once a guy actually recognized me at a coffee shop, like down the street from my house, which that was really strange because that was a total non-tech context. That's where it gets weird and creepy. It finally happened where one of the, like, so my kid, my oldest child is in elementary school. And I'm just some normal schmoo of a dad at these events by design.
Starting point is 00:29:30 Because if you start giving a tech talk at a dad barbecue, that does not go well for you, I imagine. And I don't want to hear about work outside of work, please. But I had a couple of dads corner me last night. Like, I looked you up recently. It turns out you really have a big audience. You're well known in this space. It's like, well, crap. There goes the solitude and peace of me being alone at a dad barbecue.
Starting point is 00:29:51 But yeah, it's weird when it starts breaching containment into other audiences you generally wouldn't wind up talking to. It's fun getting stopped by people in airports, faraway places, at conferences. But I also like the fact that I can go to get a carton of eggs at the store in the morning on the weekend and not get recognized when I'm schlumpy in my dad shirts and my sweatpants. No thanks. Yeah, that's another advantage of the suit look, I suppose. Exactly. Oh, and if I don't have my happy smile, my mouth wide open, no one's like, you look familiar, but I can't place it. I do the smile, which is also on my driver's license. And suddenly, I know you. There we go. We take what we can get.
Starting point is 00:30:31 Definitely good to have a trademark, you know, a suit, a pink beard, a blue beard, crazy glasses. Love it when people get to express themselves a little bit at work. We do what we must. I really want to thank you for taking the time to speak with me. If people want to follow along with your adventures as they should,
Starting point is 00:30:49 where's the best place for them to find you? Twitter, Blue Sky, Mastodon. I'm always posting. Posting way too much, honestly. Always have a bunch of different stuff going on. Like I said, crazy AI projects that are probably never going anywhere, but are sort of pushing the limits
Starting point is 00:31:02 of what I can get away with. So then in the security world, I work quite a bit on this open source project called Spiffy. We do pretty regular Zoom meetings, chatting about the technology with people from around the industry. That's what I'm really passionate about as a day job, I suppose. I did co-author a book, which is at spiffy.org slash book. It's called Solving the Bottom Turtle, which is about solving what we call the bottom turtle of security issues, which is the root of identity. How do you find a root of identity
Starting point is 00:31:32 for these large distributed systems? And the turtle is standing on top of a bike shed. Absolutely. Yeah, the turtle's not terribly steady these days, but wrote that with some amazing people from some of the biggest tech companies
Starting point is 00:31:45 in the world who are working on that problem. It's an awesome group. I even have paper copies, if you ever run into me in person. Just a self-published book. It's not a real book. Oh, it's a real book. Publishers don't have a stranglehold on it anymore. Anyway, it's a fun little exploration of the kinds of stuff I work on. I should probably follow it up with some updates because we're working on a number of big updates to Spiffy that I could talk about ad nauseum, but probably shouldn't. And we will make it a point to put those links in the show notes. Thank you so much for taking the time to speak with me. I appreciate it. Thank you, Corey. Daniel Feldman, amazing Twitter account follow. I'm cloud economist Corey Quinn,
Starting point is 00:32:22 and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment, probably about how we're calling it Twitter instead of X. And at that point, we're going to have a bunch of follow-up questions about where exactly you were on January 6th.
