Screaming in the Cloud - Tackling AI, Cloud Costs, and Legacy Systems with Miles Ward

Episode Date: October 22, 2024

Corey Quinn chats with Miles Ward, CTO of SADA, about SADA's recent acquisition by Insight and its impact on scaling the company's cloud services. Ward explains how Insight's backing allows SADA to take on more complex projects, such as multi-cloud migrations and data center transitions. They also discuss AI's growing role in business, the challenges of optimizing cloud AI costs, and the differences between cloud-to-cloud and data center migrations. Corey and Miles also share their takes on domain registrars, and Corey gives a glimpse into his Raspberry Pi Kubernetes setup.

Show Highlights
(00:00) Intro
(00:48) Backblaze sponsor read
(02:04) Google's support of SADA being acquired by Insight
(02:44) How the skills SADA invested in affect the cases they accept
(05:14) Why it's easier to migrate from one cloud to another than from data center to cloud
(07:06) Customer impact from the Broadcom pricing changes
(10:40) The current cost of AI
(13:55) Why the scale of AI makes it difficult to understand its current business impact
(15:43) The challenges of monetizing AI
(17:31) Micro and macro scale perspectives of AI
(21:16) Amazon's new habit of slowly killing off services
(26:55) Corey's policy to never use a domain registrar with the word "daddy" in their name
(32:46) Where to find more from Miles and SADA

About Miles Ward
As Chief Technology Officer at SADA, Miles Ward leads SADA's cloud strategy and solutions capabilities. His remit includes delivering next-generation solutions to challenges in big data and analytics, application migration, infrastructure automation, and cost optimization; reinforcing the company's engineering culture; and engaging with customers on their most complex and ambitious plans around Google Cloud.

Previously, Miles served as Director and Global Lead for Solutions at Google Cloud. He founded Google Cloud's Solutions Architecture practice, launched hundreds of solutions, built the Style-Detection and Hummus AI APIs, built CloudHero, and designed the pricing and TCO calculators. He helped thousands of customers, such as Twitter, which migrated the world's largest Hadoop cluster to the public cloud, and Audi USA, which re-platformed to k8s before it was out of alpha, and helped Banco Itaú design the intercloud architecture for the bank of the future.

Before Google, Miles helped build the AWS Solutions Architecture team. He wrote the first AWS Well-Architected framework, proposed Trusted Advisor and the Snowmobile, invented GameDay, worked as a core part of the Obama for America 2012 "tech" team, helped NASA stream the Curiosity Mars Rover landing, and rebooted Skype in a pinch.

Earning his Bachelor of Science in Rhetoric and Media Studies from Willamette University, Miles is a three-time technology startup entrepreneur who also plays a mean electric sousaphone.

Links
Professional site: https://sada.com/
LinkedIn: https://www.linkedin.com/in/milesward/
Twitter: https://twitter.com/milesward

Sponsor
Backblaze: https://www.backblaze.com/

Transcript
Starting point is 00:00:00 I think it's tough for me to see a trajectory where there isn't business impact for basically every job role in every vertical of company everywhere. So the scale of it is really, I think, difficult for a lot of business and analyst teams to sort of wrap their melon around. But how much impact and how many like AI calls, how many tokens that it takes to construct that business impact, I think is something we do not have a good handle on. Welcome to Screaming in the Cloud. I'm Corey Quinn. Joining me for the first time since the company got acquired is Miles Ward, CTO of SADA. Miles, how have you been?
Starting point is 00:00:44 Completely glorious. Thank you for having me back on. I'm really looking forward to it. Backblaze B2B cloud storage helps you scale applications and deliver services globally. Egress is free up to three times the amount of data you have stored
Starting point is 00:00:57 and completely free between S3 compatible Backblaze and leading CDN and compute providers like Fastly, Cloudflare, Vulture, and Coreweave. Visit backblaze.com to learn more. Backblaze, cloud storage built better. The logo on your site now says SADA, an insight partner, or insight company rather, an insight company. Partner means something different. User means third-party company. But I guess they went with that instead of SadSight as the unified company name for whatever reason.
Starting point is 00:01:30 Couldn't tell you why. We were just at the Google event last week and the emcee, who is spectacular, introduced us as Insight Asada Company. And I was like, oh, now that's like way better. The whole exec team was all excited. They were like, and business daddy was less than amused. Yes. They were like, no, no, hold on, hold on. It goes the other way. Uh, so it's, it's actually been really, really pleasant. The team there is, uh, is spectacular. And I don't say that just because I'm on recording.
Starting point is 00:01:59 Uh, they are, uh, they are really a fun crew. So it's, it's a, it's a fun thing to pursue. Prior to the acquisition, you folks were effectively, as best I could tell, the closest thing that Google Cloud had to something that would interface directly with customers. So seeing you be acquired by someone that wasn't Google was a bit eyebrow-raising. When you win Partner of the Year back to back to back to back to back to back to back, it kind of means we're doing an important part of the role. Google was not likely to acquire us, but I think they were in sort of incredible celebration about this acquisition
Starting point is 00:02:32 because Insight has something like 10 times as many customers as I do. So having the MSAs and the contractual relationships already in place for basically every major company is a huge lake for it. It means I can tackle a lot more customers faster. Which is a nice position to find yourself in, I would imagine. Are there any new use cases you can wind up getting to because of the acquisition? Can you do something now that you could not do previously? Though I'm sure if I had asked you this prior to
Starting point is 00:02:57 the acquisition, the answer would have been something a little bit different. No, that's not true at all. I felt bad about it. I always want to be able to say yes to everything. But there's a whole bunch of stuff that just doesn't make any sense, right? Like I was a part of the team that helped propose Anthos that's now on like name number four. I think it's called Google Distributed Cloud Edge or Google Distributed Cloud Air Gap now. And if you're going to deploy a system like that, you have to not only have the Kubernetes expertise and the Google Cloud general expertise, but you need computers and routers and switches and networking gear. And you've got to actually drive them someplace and turn them on and probably rummage around inside of a data center somewhere. And those are the skills that my company had not invested. So having, you know, this huge logistics and fulfillment machine, having thousands of engineers that do that kind of work all the time means I can make proposals
Starting point is 00:03:52 that are, that actually cover all the parts of those projects where I, instead of like doing some weird teaming agreement or asking Google to try to sort out who does what part that sucks, like nice to be able to just say, yeah, we can handle that. In the same kind of way, a lot of our migrations, you know, if I was going from AWS to GCP, let's call that the majority of migration that my business ran over the last three years. You know, I have a bunch of people
Starting point is 00:04:15 that do have AWS certifications. You kind of have to know where you're going from to be able to sort out where you're going to. What does that service do? Eh, couldn't be that important. Dynamo sounds Sounds weird. EC7 or what is it called again? Where an increasing fraction of the migrations
Starting point is 00:04:33 that we're doing are from data center. And there's not like one. There's like a zillion different ISVs and hardware vendors and all of the rest. Like the panoply of that stuff is humongous. So saying like, I have the certifications in is is fairly daunting if you're not quite a bit bigger so having insights whole huge book of credentials in that environment just means the discovery goes better the the accuracy they have this incredible there's like rv tools for vmware that's kind of
Starting point is 00:05:02 how many vms do you have basic they have have this much more advanced Snapstart that just really changes how quickly I can get to a binding proposal. This is how long it will take and what it will cost to make your stuff work. It's really been incredibly useful. I have never done a cloud migration from one cloud provider to another. I've done an awful lot of them from data centers into cloud. Do you find that it is easier to go from one cloud to another
Starting point is 00:05:28 than from data center to cloud? Absolutely. I mean, there are differences between the clouds, but call that 30%, right? Like once you've got, you know, we've done plenty of data center modernization stuff where they aren't on virtual machines yet,
Starting point is 00:05:43 let alone on containers. They aren't, you know, they don't have any kind of structured networking or software defined networking it's all manual configuration on devices like if you've gotten to any cloud you've already sorted through a whole bunch of that miasma and now it's just sort of transforming the primitives from one system to another and working out the plumbing between them that's just frankly a lot more tractable and only about about half of that is the technical work on your technical estate. Half of it is teaching the people. And so if you've got humans that have sorted out the difference between what it looks like to have a machine that you actually get the money back when you turn it off, miracle of all miracles, they will retain those insights as they move
Starting point is 00:06:25 becomes more of a syntax problem than an entire new way of thinking that's exactly right there's there's a couple spots right like you probably spend a little more time disabusing people of sort of horrible notions in in database stuff because google has some performance advantages there and uh you know there's places where uh like you know on the ai side uh we've done a whole bunch of refactoring from Azure where it's like, you have to glue together what to get things to go over there? Oh my god. Where, you know, on GCP a lot. Your account manager's hands, and then you offer the solvent only once they give you what you need. Yeah, yeah, there's, so there's certainly places where,
Starting point is 00:07:00 you know, we're able to give good news to customers as they take this on. But by and large, it's certainly a lot, a lot shallower. What have you seen as customer impact from the, we'll charitably call it the Broadcom pricing changes, instead of what I normally think of it as, as basically Broadcom wearing a VMware skin suit? It's tough because I really believe strongly that, especially given, you especially given what's happening in AI, every company has to get more comfortable with faster chain. But this is precisely the kind of change they really don't want to get comfortable with. Arbitrary, spontaneous, no notice price changes in the increase of 6x, 10x, something like that.
Starting point is 00:07:43 It's just never going to be a thing that businesses have. But you're getting more for that, what you're spending. Not if you're not using it, you're not. And most people have not budgeted for that kind of increase and are not able to absorb it. So suddenly it really has turned the cloud economics question on its head.
Starting point is 00:07:59 Like when you start doing a TCO analysis, like it becomes a very, well, depending on what you go into as a precept, you can wind up taking it in a few different directions. Okay, when you do 10x the price for running VMware that it does almost doesn't matter what you do in cloud, it's going to be cheaper. Right. It would have to perform incredible miracles as new features. And I heard no new feature growth as a result of those changes. It's absolutely extractive. A big motion for us.
Starting point is 00:08:27 I feel like I'm a fairly good negotiator. If I used VMware at all, maybe I would feel confident that I can go fight them about it. But I feel way better about having Google do that negotiation on my behalf. So using GCPE makes Google go fight with Broadcom on, you know, like they go do the negotiation. And there's no proof that they're even able to be negotiated with effectively. They and AT&T are slugging it out in the headlines about this stuff. And frankly, OK, if AT&T doesn't have the weight to throw around in that sort of situation, I sure don't.
Starting point is 00:08:59 Exactly. How do you expect you're going to go to your, you know, account manager that's been handed all of these changes? They're just they're just trying to hang on. do you expect you're going to go to your account manager that's been handed all of these changes? They're just trying to hang on. And look, I understand. They feel like they're in a position where there's not a lot of future in the model. If you can extract a bunch of that money up front, there's a time value of money. I've heard of that math. Yeah, there's no future relationship to it. It's the other side of the coin that I've always liked to be on, where you can either take a bunch of money from people right now, or you can take a little bit of money over time and build a working relationship and expand that in the fullness
Starting point is 00:09:28 of time. That seems to be something that they have given up on. Like we care about this quarter, next quarter, and then that's basically it. And once we have like the three customers left who are trapped for regulatory reasons on us, that's it. But you can only get so much blood from the stone. Let's give them the benefit of the doubt. Maybe they take all of this new largesse, plow it into product engineering, and make the product radically better over time. I suspect instead they will probably hand it to their shareholders. But that's their choice to make. I think businesses have a choice to make about how they react. And what I've seen, frankly, even more than I expected, is a willingness from businesses to get serious about a proactive approach there. Some way of getting into a relationship where they feel like they have much more control over their destiny, at least been fairly linear about prices and not done
Starting point is 00:10:25 totally ludicrous things like this. So fingers crossed, it would be nice if that was something that was guaranteed contractually. You know, there are some structures that enable that. But by and large, they've got at least a better reputation for pursuing cost savings for their customers than is the current competitive state. Speaking of cost, what have you seen happening with the world of cost and AI these days? I can tell a story that goes in almost any direction based upon that, depending on which stories I wind up picking. Folks who have tripled their cloud bill, folks who have seen no change,
Starting point is 00:10:56 folks who have saved money, et cetera, et cetera. But what are you seeing as the dominant narrative? I'll rewind way back. I was, I was, you know, I was the fifth solutions architect worldwide for AWS. I hear they eventually hired a sixth, but I can't prove it. Yeah. And then several thousand more after that, they were really some of my favorite times in my career. I literally had back to back 20 minute meetings. Lord knows you couldn't afford 30 minute meetings with customers. And you're just trying to get to a place where they're doing smart things with the platform. And I don't, I mean, I had several hundred conversations with customers asking for a larger virtual machine than the M1
Starting point is 00:11:35 large with its glorious seven and a half gigs of memory. And the reason they wanted a bigger virtual machine is because they were putting every conceivable thing in it, database, the queue and the storage, all of it, just fram all of that bad boy in there. So the recommendation to them that typically for most customers saved them an incredible amount of money was not, you need a bigger virtual machine, but you need to refactor what you're doing. You know, the queue goes over here and it costs a zillionth of a penny per queue. And what do you mean you put storage in the middle of your web server? Put that on S3. S3's great, right? You're re-architecting their system to take advantage of the primitives that are available. So fast forward 15 years, we're doing generative AI, and I'm watching
Starting point is 00:12:16 customers put unmitigated mayhem into their prompts, into that context window that's not big enough, right? And they're just like, I have customers, I swear to you, that are re-implementing like Boolean logic. They are working out chunks, what should be like one-liners in JavaScript. They're teaching the thing in English, how to do a bubble sort. I swear to you, I saw a bubble sort implementation
Starting point is 00:12:39 in English in a prompt because that's the tool you gave some developer and they want to be an ML developer. And so they're going to do everything they can possibly do inside that prompt. So I'm right back to the beginning. Oh, Hey, you know, I would use this computer. That's bad at math, right? Like over here, you can use this little chunk of terministic software that might possibly have a chance of giving you truth back. Uh, you know, and there, you know, there's this model over here called flash that costs a 20th of the model that you're using for every conceivable
Starting point is 00:13:04 thing. Maybe it makes sense to prototype some of this stuff in the big heavyweight models. So you kind of don't waste a bunch of cycles debugging, but, uh, but there is a generative solutions architecture that's being born at my company and companies all over the place. As we go through trying to help customers refactor these things into one systems that are, you know, more truthy and reliable and actually produce results rag is one sort of outcome of that but a lot of this is uh you know figuring out which kinds of model does what kind of thing at the right kind of price point so it actually makes sense to do it in production because we're watching our experiments seem to
Starting point is 00:13:39 turn into production a lot more than when we compare back to google what they say the averages are so i think some of that is a symptom of doing a better job at helping people navigate the production a lot more than when we compare back to Google what they say the averages are. So I think some of that is a symptom of doing a better job at helping people navigate the complexity of this whole new discipline of our application design. You'd like to hope so. I think that there's been a lot of noise around this. And whenever you have people seeing hype and, more importantly, money flowing toward a particular technology, well, they want to get exposure to that, even if for no other reason than it seems like this might be advantageous for their career. And I've made something of a career pattern out of avoiding the hype just because I've seen too many cycles where the hot next thing becomes yesterday's
Starting point is 00:14:20 legacy garbage that you have to wind up supporting. It's clear that AI is a useful tool, and I don't think we're going to see it going away. But I do question whether it's worth these multi-trillion dollar valuations we're starting to see talked about. I think it's tough for me to see a trajectory where there isn't business impact for basically every job role in every vertical of company everywhere. So the scale of it is really, I think, difficult for a lot of business and analyst teams to sort of wrap their melon around. But how much impact and how many like AI calls, how many tokens that it takes to construct that business impact, I think is something we do not have a good handle on. And there are architectures and implementations that are going to use one or two or three percent ai instead of
Starting point is 00:15:05 90 percent ai to produce the outcomes that we all want i think in a lot of ways the the innovation that happened is we got to a place where i can produce inputs and receive outputs in in in a language that i'm much more familiar with much more comfortable with that a lot more business people are much more familiar with and comfortable with, that doesn't mean that the actual legwork in most cases should be done by an LLM, right? I'm watching customers try to cram all of their raw data into a prompt in order to be able to do what would be seven or eight lines of SQL. It's just going to work better if you make the AI write the SQL for you. There is a future here.
Starting point is 00:15:45 And I think that the idea of getting value out of it is still a little early, but because it is so fiendishly expensive to build and run these things, especially at hyperscale, these companies are falling all over themselves to wind up monetizing it and at least getting it to break even as fast as possible.
Starting point is 00:16:00 But as a customer, we're still seeing use cases evolve and capabilities change week to week. It's a maybe I don't need to be trying to hit the moving target when it's going this quickly. I can instead focus on the myriad other concerns that my business has that, believe it or not, are not shaped like AI solvable problems. Yeah, I think there's a lot that at least that we're running into where my expectation is a lot. You know, Google has a gemini pro one five they name it like it's a samsung monitor but yeah yeah you can put two million characters into the context window right you can put up the works of shakespeare
Starting point is 00:16:37 in and start asking questions about it but they also have a monocle gem you can download stick it on your laptop and ask questions and stuff comes out and it doesn't cost you anything per call. And I think that there's, you know, the difference in the cost profile and the level of model that you need. They've been talking a lot with us internally about this concept of distillation, using what they call a teacher-student model to have the large models build the smaller models you need to actually do the work you're doing. And that has these multiple order of magnitude impacts in the cost profile of the executions that you're actually trying to do. Because not every chat box needs to be able to answer the open-ended questions of the entire world. Most use cases are just ever so slightly more narrow than that. So we are in very early days. I think that means that, yes, the total consumption of tokens through things like this is probably radically inflated. From a scale perspective, too, even just funny, yesterday, I was getting Lama running on
Starting point is 00:17:36 my laptop, and it's still now spitting inside my desktop at the moment. This morning, it was spitting out 98 tokens a second for Lama 2, 3.2. And that's great. It's wild to be running it locally just because for the first time in, I think, ever using computers, it's, wow, I can ask it questions and not have the FBI showing up on my doorstep if I ask the wrong question, as a sense. This has never happened to me. But I do wonder how many times I can ask Google for bomb-making instructions before the police would like to have some assistance with their inquiries. Not that that's the direction I am drawn within, obviously, but it's just the sense of not having a surveillance system looking over my shoulder
Starting point is 00:18:14 while I ask the computer to help with things. It's surprisingly freeing, even though all I'll ask to do is things like help me build a packing list for a camping trip. But suddenly nothing's trying to sell me ads for camping equipment either. You're describing it in the micro, but think about it in the macro. So you're a country that Google doesn't currently do business in. They don't have a region there. So that means necessarily any application that you build, you're shoving all of your data over the wire to some other jurisdiction. With Google Cloud AirGapped, you buy the racks, it will literally never connect to a Google service ever, period.
Starting point is 00:18:53 AirGapped. And you still get GKE and Vertex. You build a bunch of models, run a bunch of tests. Like, this thing functions. AWS Outposts will not do that. You can get them only in some countries because, again, there's always going to be regulatory things. Let's be clear here. You're going to have a bad time if you're Iran or if you're in North Korea trying to get any
Starting point is 00:19:13 large company to sell you things. But if you break the AWS Outposts Rack's connection to AWS, it will, over a non-deterministic period of time, stop working as IAM tokens expire, from the SDS tokens expire, so we can continue to work with IAM. Permissions models break down. It fails closed because that's what we want it to do. But it's absolutely not one of those things you can run disconnected forever. Yeah, this is a permanently disconnected Google-compatible cluster, which is, it's just, you know, I fought violently for this like nine years ago and they finally got there.
Starting point is 00:19:46 I'm stoked. It feels like Oracle's dedicated region with customer at cloud where they will deploy their stuff entirely into your facility. It might even be your hardware, but that's unclear to me. And it's every Oracle cloud service. Now that would be more useful if more people use them, but you know. Every one of the Oracle ones that I saw, they make you buy all of it. All new, all shiny, shrink wrap galore.
Starting point is 00:20:07 All of the air-gapped ones that I've seen so far from Google are the same, but they have also asserted that if you fit into this very narrow list of gear that they will support it. So that's not a hard rule. That's bold of them just because hell is customer hardware power network. Absolutely. The nice thing is like, I remember this is eight and a half, nine in the same kind of time window right after I started on the Google side or like, look, we need way more regions. This is lame. They should be all over the place. We should put in, I even wrote a proposal to stick them in every
Starting point is 00:20:39 embassy around the world. Then you have jurisdiction at the same time. It's like a smooth idea. The reason to do that based upon being close to customers or was it for data residency reasons? Absolutely jurisdiction and residency. Famously, Ben Traynor, the core operations lead for Google and Alphabet, had worked out the math for user experience and the 17 locations that Google was in so far covered their performance goals for 99% of humans online and not online. Once they all got online, it still worked. So they didn't need any more regions.
Starting point is 00:21:11 As of six years ago, because of jurisdiction, they continued to build them. One of the topics I want to get into as we're talking about comparative theology, for lack of a better term here, is a year ago, I would have basically more or less given you crap for fun about Google killing things. This year, that joke turns back around and Amazon is effectively, I don't want to
Starting point is 00:21:32 say that they're killing beloved services because you should not have been using almost any of these things. If you're still on SimpleDB, if you still are on SimpleDB, I'm sure there's an entire Amazonian team reaching out to see what they can do up to and including sending engineers to do rework to get you off of it onto something like Dynamo. These are not embedded services that people love and trust. They have not yet pulled the equivalent that I'm aware of of Google killing IoT Core where, well, hang on a second. That's embedded in devices we've shipped to customers, and we can't change the pricing on that. We've already sold them. How is that supposed to work? That was a weird one, but I have not yet seen that from Amazon. What I have seen is a drip, drip, drip, drip, drip all year of them killing things every week. So in the event that they're
Starting point is 00:22:18 afraid, we're going to stop talking about them killing things. Obviously a product strategy and not a PR strategy. And it's clear to me that they want to do a bit of a house cleaning. Anybody who's logged into the console on the AWS side anything recently seen would probably find that somewhat overdue. It is a daunting list of products. And I'm work on Google Cloud that has 187,000 SKUs or something possibility like that. So there is room to consolidate and focus. Google certainly got a bad rap for that. And I think deservedly so. They absolutely could have done a better job in a hundred different ways
Starting point is 00:22:48 on a dozen different products. I want to call out your evolution on this too, because back in the early days that I was making fun of this and had enough of a voice to matter, you would challenge me pretty heavily when I would call Google out for killing services. A lot of people did,
Starting point is 00:23:01 but those voices have gotten quieter and quieter over the years of them doing it because now it's become indefensible the way that they've been doing it. I remember working really early with the Stadia dudes. They were awesome. That thing was great. The technology remains great. It has been licensed to a whole bunch of other companies. It absolutely functions. Now chunks of it are available as an open source thing called Selkies if you ever want to build like an absolutely glorious remote streaming environment but uh but no they like i i think that there's plenty of places where where these evaluations are just they're just beyond the pale and uh and and i don't care if a business changes its mind
Starting point is 00:23:40 but it has a responsibility to to carry through that transition. And those are the places where, you know, a customer service function has to be mature enough and empowered enough and funded enough to be able to handle such a thing. And they've fallen short of that. What I haven't heard back, and, you know, this is praise for AWS, like, I'll agree with you that they're slaying stuff and they're doing it in this weird drip, drip, drip. But I don't have a whole bunch of customers that are like, oh my God, this ruins my business model. And I raised money on this. What are you talking about? Like they seem to have selected stuff that's not just unloved, but unused. And, uh, and maybe it wasn't, wasn't as scary a thing to get rid of.
Starting point is 00:24:17 Yeah. There's a, there's a way to do deprecations that doesn't aggravate people. And I think AWS largely has been doing most of those things. It's just rip the bandaid off at once, please. Especially when it's inside of sort of a business unit like that, like AWS for all of its weirdnesses, you know, absolutely contains the products inside of AWS in the same way that Alphabet contains the products inside of sort of broader Google
Starting point is 00:24:42 in a way that GCP maybe doesn't, right? Some of the things that GCP got made fun of for deprecating were not actually GCP products. They're over in different departments of Alphabet. Like the domain stuff. When I remember that hit, I was reaching out to people and asking this barely politer version of, what do you idiots think you're doing? And many of these people were finding out this had happened from my insulting question. It was, wait, they did what? It was their reaction, but they couldn't say that to me because, you know, it's you're supposed to at least pretend you know what's going on with your employer most of the time. But it was, yeah, it's the left hand never knows what the right tentacle is doing.
Starting point is 00:25:18 That is almost the definition of enterprise. The domains one continues to give me eyeball twitches. I get it, you know, especially with the kind of antitrust domains one continues to give me eyeball twitches. I get it, you know, especially with the kind of antitrust stuff that's going on. I suspect that there's there is more to lose there than there was to gain. But that's particularly a place where like you are not just pissing off customers, you are pissing off operators and you are doing stuff that is necessarily going to make them stay up late, do weird shit that they don't want to do, right? There's just no good outcome there.
Starting point is 00:25:48 And Squarespace has not made it easier or better. I don't know what their approach is. They are not an enterprise. That's the problem is because it's Google. They are simultaneously consumer and enterprise facing. And it's very hard to find a registrar that does that well. I've moved all my registration over to Cloudflare just because they not only can handle it all, but there's a single portal.
Starting point is 00:26:11 Every employee I need to log in to look at these things can be assigned the right permissions and can see this all in one place. Even in an AWS org, you get to play account whack-a-mole. Where was that particular domain registered? Recent trick I picked up is using an SCP to restrict it in other accounts. But that still creates friction for people. And any service that expects to be able to reach out directly to the zone record won't be able to do it. I had somebody, you know, pushing back, like, why would you, you know, why would you, you know, build an app in an environment where you can't buy a domain?
Starting point is 00:26:41 And so I was like, can you buy a domain and all the rest of them? And AWS, you absolutely can. But then it's inside your AWS account. So if you like screw up your build, does your domain implode? That like seems a little weird. Azure just delegates this to GoDaddy. Oh God, I have a standing policy
Starting point is 00:26:55 and most as most responsible infrastructure people do. You should never have a company on your infrastructure critical path with the word daddy embedded in their name. That is generally sound advice. I went with, you know, out of the possible options, like maybe this seems like this would
Starting point is 00:27:08 been my expectation as to your selection criteria, like funniest product marketing. So pork bun has been like unbelievably easy to use and half of the pages crack me up. It's great. It's perfect. Everyone keeps mentioning that when I haven't looked into it yet. I honestly, none of them have yet have what I'm looking for as the, I'm out having too much to drink with the, over a beer or six with you. I come up with a funny idea to repoint a domain to some other company's website. Like, I don't know, clownpenis.cloud is a purely hypothetical example of that. And it takes so many different services working together for the enterprise stuff.
Starting point is 00:27:43 You can just do it with a few clicks in the consumer stuff, but none of the enterprise stuff even has a website that works on mobile for those moments. WorkFun has done the, that's a great idea. That would be a good joke. I buy the domain and put up a silly picture and then text it to my buddy over beers. That's that workflow functions. And so at least checks that box. Yeah.
Starting point is 00:28:02 Let me check right now here. But sure. Yeah. Hey, wow. Their website works on an iPhone. They that box. Yeah. Let me check right now here. But sure. Yeah. Hey, wow. Their website works on an iPhone. They're done. You've convinced me. The bar, the bar is not hot out there, everybody. It's really not. And it's the challenge too, is so many domain registrars over years were so sketchy. Just they keep having the multiple upsells and all the rest and trying to do like the FUD moment. You sure you don't want an expensive certificate to go with it? Because, you know, hackers. This is the fifth interstitial page in the checkout flow.
Starting point is 00:28:32 Knock it off. I just want to buy an insulting name and report it somewhere. Not to go into, you know, what is a fairly hot region. No, I don't want to buy WordPress hosting from you right now. Oh, yeah. Let's not touch that one. That's a whole separate episode of nonsense and another drama to go into.
Starting point is 00:28:49 Maybe by the time we record next time, that will all be sorted out. Fingers crossed. So something Google did very well this year that I love is they moved next to April, which is basically a cloud dead zone. So I am still dealing with Q3 turning into Q4 being an absolute disaster.
Starting point is 00:29:08 And that is through the joy of dealing with planning for reInvent. Although this year I'm doing it surprisingly lightly and I'm looking forward to that like you wouldn't believe. But there's a strong feeling that I've got where by doing it in April, that's great because there's nothing else going on. They get all the attention. Now, this does blow back on them. When I started progressively putting on a clown suit in TK's keynote, because I didn't want him to be the only person in the room feeling like a clown as he continued to go down the AI path, I'll be clown myself too. But okay, that was humorous and got the right laugh at the right level, which I think is the way to play it. But that was just good, clean fun. I mean, AWS now has had, you know, additional six,
Starting point is 00:29:45 seven months to wind up responding to that in an effective way. Hopefully they'll talk about things that aren't just AI because most of the infrastructure problems customers have aren't. So maybe there's a balance in this. I feel really good in our position backing up, you know, the GCP AI practice, but the far side of that, you're exactly right. A majority of workloads have to do with making computers cheaper and get hosting to work right and get infrastructure to work right. And AWS has an incredible brand and incredible business there. They have every reason to be really leaned into that. I do know how hard it is to watch Tim Cook talking about maybe we won't do updates to iPhone every single year
Starting point is 00:30:28 because like, what are they on now? I was looking at M8G instances or NX8. They're still updating iPhones. It's other stuff that they're going to back up just for marketing reasons alone. How do you get excitement up about like computer version number 78, right? Like AWS also changed EC2 instances where now like to like from successive generations
Starting point is 00:30:48 is no longer less expensive. It's more expensive. So I have a, I have a single dev box that is running 24 seven because it does a few things for me that I just want to have my, my Unix server, my Linux servers hanging out there doing a thing, which is great. I don't need a lot of those, but I do need one. And if I upgrade to the latest generation, it just boosts the price by 10%. Okay. But there's a performance story maybe, but it's not a useful one for me. I don't care about its performance.
Starting point is 00:31:12 For my thing that runs at like 3% utilization, right? Like I don't care. I just need a persistent endpoint. So what does that computer do? 99% of the time it sits there very bored. Right. It's, it can, you know, can you outcompete a Raspberry Pi 5 is is the use case. Right. And, you know, I think that's that's actually a difficult target for cloud vendors. They have got to get into something that's a lot leaner while preserving all of the compatibility. You joke, but I have a Kubernetes cluster in my spare room running on top of Raspberry
Starting point is 00:31:39 Pi. And most of the stuff that I use these days, my RSS reader, the rest all tend to live on that cluster. One of the things I need that dev box, my RSS reader, the rest, all tend to live on that cluster. One of the things I need that dev box for is a public IP address that just does reverse proxy over tailscale. So I can have a consistent IP address to point a domain at and call it good because I already have the box there. That's not a critical thing. I could find other ways around that pretty easily if I had to. I don't think AWS is trying to lose to that architecture.
Starting point is 00:32:06 And I think they obviously spend a lot more time thinking about folks that are running a half million cores on various runs. But there's a part of meeting the developer and operator users where they are
Starting point is 00:32:19 and making the concessions to them that I think is useful. It is stunning to me that there is not multiple free ultra low utilization yet when I'm logged in relatively high performance endpoints for me to use at any time on all the providers. It's just crazy that they don't do that. It's not like it would consume this huge block of infrastructure that they're not already assigned. If they're not at 100% utilization on the fleets, it costs them nothing. That's a fun place to be at. I suppose. I really want to thank you for
Starting point is 00:32:47 taking the time to speak with me. If people want to learn more, where's the best place for them to find you? Sure. SADA.com. Nice and short, four letters, not registered on Porkbun, but it's okay. That's kind of the front end. If you want to get to me on Twitter, it's at Miles Ward or also Miles Ward on LinkedIn is pretty easy too, as Musk continues to do strange things to that environment. So happy to talk shop. It's also just that SADA.com, a complicated address. So happy to reach out. We'll definitely put all that in the show notes. Thanks so much for your time. I appreciate it. Great talk. Miles Ward, CTO at SADA. I'm cloud economist, Corey Quinn, and this is Screaming in the Cloud.
Starting point is 00:33:23 If you enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice. Along with an angry, insulting comment about how hard it is to migrate to a different podcast provider.
