Screaming in the Cloud - Writing Better Code to Optimyze Cloud Spend with Thomas Dullien
Episode Date: November 12, 2020

About Thomas Dullien
Thomas Dullien / Halvar Flake is a security researcher and entrepreneur known for his contributions to the theory and practice of vulnerability development and software reverse engineering. He built and ran a company for reverse engineering tools that got acquired by Google; he also worked on a wide range of topics, like turning security patches into attacks and turning physics-induced DRAM bitflips into useful attacks. After a few years at Google Project Zero, he is now co-founder of a startup called http://optimyze.cloud that focuses on efficient computation -- helping companies save money by wasting fewer cycles, and helping reduce energy waste in the process.

Links Referenced: optimyze.cloud, Quoted Tweet, Follow Thomas on Twitter, Connect with Thomas on LinkedIn, Thomas' personal site
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at Linode, who, unlike certain other providers, aren't being actively ridiculous by trying to throw everything at a wall and see what sticks. Their pricing winds up being a lot more transparent, not to mention lower.
Their performance kicks the crap out of most other things in this space,
and my personal favorite, whenever you call them for support,
you'll get a human who's empowered to fix whatever it is that's giving you trouble.
Visit linode.com slash screaminginthecloud to learn more
and get $100 in credit to kick the tires.
That's linode.com slash screaminginthecloud.
This episode has been sponsored in part by our friends at Veeam.
Are you tired of juggling the cost of AWS backups and recovery with your SLAs?
Quit the circus act and check out Veeam. Their AWS
backup and recovery solution is made to save you money, not that that's the primary goal,
mind you, while also protecting your data properly. They're letting you protect 10
instances for free with no time limits, so test it out now. You can even find them on the AWS Marketplace at snark.cloud slash back it up. Wait, did I just endorse something on the AWS Marketplace? Wonder of wonders, I did. It's also a realistic reality. So make sure that you're backing up data from everywhere with a single unified point of view.
Check them out at snark.cloud slash back it up.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by Thomas Dullien, the CEO of Optimyze Cloud.
That's optimyze, O-P-T-I-M-Y-Z-E, so we know it's a startup.
Thomas, welcome to the show.
Hey, nice to be here. Thank you.
Of course. So let's start with my, I guess, snarky comment at the beginning here,
which is, yeah, you misspell a word, so clearly you're a startup.
You have a trendy domain, in this case, .cloud, which is great.
What do you folks do as a company?
Well, originally we set out to try to help people
reduce their cloud bill by taking a bit of an unorthodox approach. That sounds like a familiar
story. Well, familiar perhaps, but our approach was that I had seen just a tremendous amount of
wasteful computation everywhere. And the hypothesis behind our company was, hey, with Moore's law
ending, software efficiency will become important again, meaning people will actually care about software being efficient.
The reason for this is, A, Moore's Law is ending.
The second reason is, now that everybody is a SaaS vendor instead of a software vendor, all of a sudden the software vendor pays for the inefficiency.
In the past, if you bought a copy of Photoshop, you had to buy a new Macintosh along with it.
Nowadays, it's the vendor of the software that actually pays for it. So our entire hypothesis was that there's got to be a way to optimize code and then make things run faster and
make people happy doing this. One of the strange things that I find is that every time I talk to
a company who's involved in the cloud cost optimization space,
and again, full disclosure, I work at the Duckbill Group.
That's what we do on the AWS side.
And we take a sort of marriage counseling-based approach
for finance and engineering,
so they stop talking past each other.
And that's all well and good,
but that's a relatively standard services story,
part tools, part services-based consultancy,
and it has its own appeal and drawbacks, of course. What I find interesting is that most
tooling companies always do what comes down to be more or less the same ridiculous thing,
which is, ah, the dashboards are crappy, so we're going to build a better dashboard.
Great. Awesome. What problem are you actually solving? And it becomes, well, we don't like cost explorer.
So here's something else that we did in Kibana instead or whatnot.
Great.
I don't necessarily see that that solves the customer pain points.
You have done something very odd in that view, which is that you're not building a restated dashboard of what people will already find from native tools.
You're looking at one very specific area. What is it?
Yeah. So in our case, we're really looking at where are people spending their computational
cycles? I mean, everybody knows that you can profile a single application on their laptop
or on their computer. But then once you get to a certain scale, that gets really, really weird.
And I like to think about software in the sense that we're building essentially an
operating system for an entire data center these days. That's really what's happening inside Google.
That's what's happening inside AWS. And you need to measure where your time is spent if you want
to make things faster. Everybody who's profiled their own software usually finds some really low
hanging fruit and then goes and fixes them. And to some extent, we have a huge
disconnect these days between what the developer is writing and what is actually the feedback loop
to tell the developer, hey, this change here just caused X dollars of extra cost. So in some sense,
what we want to build is something that tells the developer, this line of code here is generating
this amount of cost. And we kind of
think that developers will make better decisions once they know what the cost is. Full disclosure,
I used to work at Google for a fairly long time. And Google wrote a paper in 2010 about a system
they've built that's called the Google-Wide Profiler. And the results of that system inside
Google were quite hilarious. They figured out that they're spending, what, 15% of all cycles inside Google on GZip when they first started measuring.
So we thought that's got to be a useful thing for other people to have too.
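The idea Thomas describes, attributing a machine's hourly price to individual functions by their share of CPU time, can be sketched on a single machine with Python's built-in profiler. Everything concrete here is invented for illustration: the $0.10/hour price, the toy workload, and the simple proportional attribution rule. Fleet-wide systems like the Google-Wide Profiler do this continuously across many machines rather than per-process like this.

```python
import cProfile
import io
import pstats
import zlib

HOURLY_INSTANCE_PRICE = 0.10  # assumed $/hour for this machine; not a real quote

def compress_logs(data: bytes, rounds: int) -> int:
    """CPU-heavy work standing in for the gzip-style hot spot."""
    total = 0
    for _ in range(rounds):
        total += len(zlib.compress(data))
    return total

def checksum_logs(data: bytes, rounds: int) -> int:
    """Cheaper work, for contrast."""
    crc = 0
    for _ in range(rounds):
        crc = zlib.crc32(data, crc)
    return crc

def cost_per_function(profiler: cProfile.Profile, hourly_price: float) -> dict:
    """Share hourly_price across functions by their share of profiled CPU time."""
    stats = pstats.Stats(profiler, stream=io.StringIO())
    total = stats.total_tt or 1e-12  # total profiled CPU seconds
    costs: dict = {}
    # stats.stats maps (filename, lineno, name) -> (cc, ncalls, tottime, cumtime, callers)
    for (_, _, name), (_, _, tottime, _, _) in stats.stats.items():
        # Accumulate, in case two functions happen to share a name.
        costs[name] = costs.get(name, 0.0) + (tottime / total) * hourly_price
    return costs

if __name__ == "__main__":
    payload = b"some highly repetitive log line\n" * 2000
    prof = cProfile.Profile()
    prof.enable()
    compress_logs(payload, 200)
    checksum_logs(payload, 200)
    prof.disable()

    # Compression should dominate, echoing the gzip finding described above.
    ranked = sorted(cost_per_function(prof, HOURLY_INSTANCE_PRICE).items(),
                    key=lambda kv: -kv[1])
    for name, dollars in ranked[:3]:
        print(f"{name}: ${dollars:.4f}/hour")
```

The attribution step is deliberately naive: it assumes the machine is fully billed to this one process, which is exactly the assumption that breaks down on underutilized fleets, as the next part of the conversation points out.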
And I'm of a mixed opinion on that one.
Because first, congratulations on working for Google.
All things must end.
And you've since left Google.
And they're Google, so they're really good at turning things off, but that's beside the point. This on some level might feel like it's
a problem that a company like Google will experience where optimizing their code to
make it more cost-effective makes an awful lot of sense given how they operate and what they do.
But then I look at customers I work with where they have massive cloud bills,
but fleet-wide their utilization averages to something like 7%. So these instances are often
sitting there very bored. Optimizing that code base to run more efficiently saves them approximately
zero dollars in most cases. That is in many ways the reality of large-scale companies that are not, you know, the hyperscaling Googles of the world.
That said, I can see a use case for this sort of thing when you have specific scaled-out workloads that are highly optimized,
and being able to tweak them for better performance stories starts to be something that adds serious value.
Yeah, I mean, I won't even dispute that assessment. There's so much waste
in the cloud these days. And it starts from underutilized machines. It starts from data
that's not compressed and that just sits there and nobody's ever going to touch it again.
And I won't claim that everybody will have the problem of needing to optimize their code. That's
quite plainly not the case. Our calculus is pretty much,
Google needed to build a system like this internally. Facebook needed to build a system like this internally. At some point, there's a SaaS business of a certain size where you actually
want to know, hey, where is all my money going? And you want to enable your engineers to make
those decisions in a smarter way. So I guess the other side is I looked at my own skill set and tried to figure
out what can I do? Like, where is my skill set a good match for helping people save money on the
cloud? It turns out it's not in writing a better dashboard and it's not necessarily in right sizing
either. So I guess I took what I knew how to do and figured out how to apply it, if that makes sense.
That's, I think, a great approach. The challenge I always see is in translating things that work for Google, for example, into the, I guess, the public world, where I would argue that in the
early days, this was one of the big stumbling blocks that GCP ran into. And I don't know how
well this was ever communicated or if it's just my own weird perception, but it feels like in a lot of ways, Google took what it learned from running its own
massive global infrastructure and then turned it into a cloud. The problem is that when you're
internal at Google and running that infrastructure, you can dictate how software has to behave in
order to be supported in that environment. With customer workloads, you absolutely
cannot do that in any meaningful way. So it feels like the more googly your software was, the better
it would run on GCP. But the more, well, let's be very direct, the more it looked like something
you'd find in a bank, the less likely to find success it was. And that's changed to a large
degree as the product itself has evolved. But is that completely out to sea?
Is that an assessment that you would agree with? Or am I completely missing something?
I think you're kind of right, but not entirely right. I think you're completely correct in that inside Google, you need to rewrite things in a very specific manner in order for them to work.
And my personal experience there is,
I ran a small reverse engineering company from 2004 to 2011. And we got acquired by Google in 2011. And then we had to take an existing infrastructure that we had and port it to
Google's infrastructure, to the internal infrastructure, not to the Google Cloud
infrastructure. Because back then, Google Cloud was just App Engine, which was not...
I mean, it looks like ahead of its time now that everybody's talking about function
as a service. But back then, App Engine just looked weird. So we had to rewrite pretty much
our entire stack to conform to Google's internal requirements. And it's a super weird environment
because once you do rewrite it in the way that Google wants, everything scales to the sky
immediately. Like the number of cores essentially becomes a command-line parameter. It's like make -j 16, but you replace 16 with 22,000 and you have 22,000 cores doing your work.
Now, that said, GCP never externalized these internal systems.
So what you get on GCP to my great frustration as an ex-Googler is often a not so great approximation of the internal
infrastructure. So it's neither here nor there. But first of all, I completely agree that Google
historically in the cloud has had the problem that they had learned a lot of lessons internally from
scaling and were terrible at communicating these lessons properly to the outside world
and then gave people something that wasn't well explained. App Engine is a great example for this, right? Because you encounter
App Engine for the first time in 2011 and you look at this and you're like, why, how, and what is this?
It was brilliant in some ways, don't get me wrong. But privately, I always sort of viewed Google App
Engine as, cool, we're going to put this thing up and see who develops things super well with it. Then we're going to make job
offers to those people. It felt like it was more or less a project provided by Google recruiting.
I wouldn't know about that. And having been involved in interviewing, I think you're
overestimating Google's recruiting process. But that said, I mean, App Engine is a really interesting concept, but the average
developer does not, at least the average developer in 2011, does not appreciate what it does and why
it does these things, right? And you can talk about Borg and Kubernetes as well, where Google
just didn't explain very well why they decided to build things the way they did when they
externalized similar services on GCP. And that's certainly hurt them, right?
If you look at the history of AWS versus Google, Google gave people something like App Engine,
which was weird and strange and for a particular use case and not terribly well explained.
And AWS gave people VMs, which people understood.
And by and large, if I'm going to choose a product
and one looks really strange and one looks familiar, I'm going to take the familiar product.
And I think the strategic failing that Google did historically in cloud was when they did have a
technically superior solution, they did a very poor job at explaining why this is a technically
superior solution. So in some sense, Google never
had a, like historically, Google didn't have a culture of customer interaction. And to some
extent, what you need to do in cloud is you need to reach out to people, take them by the hand,
calm their nerves, and then help them walk to the cloud. And Google just didn't do that. They gave
people strange looking things and told them, hey, this is the better way,
but we don't tell you why.
This, of course, feels like a microcosm for Kubernetes. If I'm going to continue my bad take, and I absolutely will, when you're wrong, by all means, double down on it.
It feels like Kubernetes being rolled out was an effort to get the larger ecosystem to write
code and deploy it in ways that were slightly more aligned with
Google's view of the world. And credit where due, it worked. The entire world is going head over
heels for Kubernetes. Yeah, well, Kubernetes is an interesting thing to watch because
I'm a huge fan of Google's internal system called Borg. And there's a philosophical view at play, right? And the philosophical view
is that you really shouldn't treat a data center as a group of computers. You should treat a data
center as a computer that happens to be the size of a warehouse. Urs Hölzle co-wrote a book, The Datacenter as a Computer, about warehouse-scale computing. And a lot of Google's internal engineering philosophy is centered around
this thing that we really should be building something that treats an entire data center as if it was one computer.
And that's actually a very compelling viewpoint.
I think it's a very good viewpoint to take.
And then they built Borg, and Borg worked brilliantly internally for that purpose.
Well, there was a great tweet that you wrote back in April.
The trouble with Google's infra is sometimes you just want to slice some bread, but the only thing available is the continental saw, normally used for cutting continents, and the 2,000-page manual.
That's what Borg is for.
Not everyone needs that level of complexity and scale to deploy, you know, a blog.
No, that's entirely true and entirely fair. I guess if you're running a SaaS business of some size, let's say where you need 20, 30, 50 servers, you probably want to
have some way of administering these. And we've all been in the trenches enough to know that
administering 50 machines becomes a bit of a nightmare very quickly. So I think the view of
we really should be treating those 50 machines as if they were one big machine,
that's still a good view, even if you're not Google. To be fair, if you're running a blog,
99% of all workflows really don't need to be distributed systems. That's the reality of it.
We've had 40 years of Moore's law. A single cell phone can do fairly amazing things if you think
about the sheer computing power we have there. So I fully agree that in the majority of cases, you don't need the complexity.
And good engineering usually means keeping things simple. And you do pay a price in complexity for
insane scalability. And I stand by the tweet about the Intercontinental Saw because it happened so
often when I was at Google that they had these fantastically scalable systems,
but it took a long while to wrap your head around how to even use them,
and you really just wanted to cut a slice of bread.
That's one of the real problems, is that scale means different things to different people all the time.
And from my perspective, oh wow, I have a blog post that got an awful lot of hits,
that might mean, I don't know, in the first 24 hours it goes up,
it gets 80,000 clicks.
That's great and all.
Then you look at something that Google launches.
It doesn't even matter what it is,
but because they're Google and they have a brand
and people want to get a look at whatever it is they launch
before it gets deprecated 20 minutes later,
it'll wind up getting 20 million hits
in the first hour that it's up.
It's a radically different sense of scale
and there's a very different model that ties into it.
And understand that Google's built some amazing stuff,
but none of the stuff that they've built
that powers their own stuff
is really designed for small scale
because they don't do anything small scale.
Oh, that's entirely true.
Now, to counter that point a little bit, though,
I would argue that, I mean, if there's one thing to be
learned from Gangnam Style, it's that the strangest things can go viral these days, and you may find
yourself in a position where you need to scale rapidly within 24 hours or even shorter time
frames if whatever you're offering gets to be insanely popular and spreads through social media
because the reality is like in the year 2000, if your software got popular,
you noticed that it was sold out in stores, right?
You could produce more and that was fine.
But the reality of today is you may be in a situation
where you need to scale really rapidly
and then it may be good if you build things
in a way that they can scale.
Now, I'm not advocating that you should always pay
the complexity price
of making things scalable. I'm just saying that scalability may be more important today from a
business perspective than it was a couple of years ago, just because especially with software as a
service, people switch things around a lot. People try things out and it's quite possible that just
randomly you get 10 million hits on your service the first day, and then you probably don't want to show the famous fail whale that Twitter was
so famous for. This episode is sponsored by our friends at New Relic. Look, you've got a complex
architecture because they're all complicated. Monitoring it takes a dozen different tools.
Troubleshooting means jumping between all those dashboards and various silos.
New Relic wants to change that, and they're doing the right things.
They're giving you one user and 100 gigabytes a month completely free.
Take the time to check them out at newrelic.com.
Once again, that's newrelic.com.
Oh, yeah, and remember that there's an argument to be made for reliability and when it begins to make serious business sense.
You can wind up refactoring your existing code that has no customers until you run out of money,
but even bad code that doesn't scale super well, like the Twitter fail whale,
can get you to a point where you can afford a team of incredibly gifted people to come in and fix your problems for you.
There's validity there, but early optimization becomes a problem.
The things that I would write if I'm trying to target 20 million active users versus half an active user at any given point in time, because who really pays attention to me, would be a very different architectural pattern for the most part.
Yeah, I don't disagree.
Then again, I think the entire selling point of App Engine back in the days was you just write this thing in the way that App Engine tells you to do, and then whatever happens, you're insured, right?
But yeah, I fully agree.
I mean, there's the argument to be made that once you have traction,
you also have money to fix the scaling issues.
But then the question becomes, can you fix those quickly enough
so people don't get turned off by the unreliability?
And that's not a question anybody can answer in any good way upfront, right? Because you have to
try things and they'll fail and so forth. So what's interesting to me is that you don't come
from a cost optimization background historically. In fact, you come from one of the more interesting
things on the internet, which is fascinating to me at least, which is Google's Project Zero.
For those who haven't heard of it, what is Project Zero? So Project Zero is a Google internal team that tries to emulate government attackers,
essentially trying to find vulnerabilities in critical software. And by emulating the
thought process tries to nudge the industry in the direction of, well, making better security
decisions and fixing the glaring issues. And it arose from Google's experience in 2009, where the Chinese government attacked them and used a bunch of vulnerabilities.
And then Google at some point a couple of years later decided, hey, we've got all these people on the offensive side being paid to find vulnerabilities and then sell them to governments to hack everybody. Why don't we staff a team internally that tries to do the same thing, but then publishes all the techniques and publishes all the learnings and so forth,
so that the industry can be better informed? In some sense, the observation was that the
defensive side often made poor decisions by being not well informed about how attacks actually work.
And if you don't really understand how modern attacks work, you may misapply your resources.
So the thought process was, let's shine a light on how these things work so the defensive side can make better
decisions. One of the things I find neat about it, this is of course where Tavis Ormandy works,
and it's fun talking to him on Twitter, watching him do these various things. Every time he's like,
hey, can someone at some random company reach out to me on the security contact side? It's,
ooh, this is going to be good. And everyone likes to gather around because it's one of those rare moments where you get to
rubberneck at a car wreck before the accident. Yeah. Yeah. Tavis is a, like, I have an extremely
high opinion of Tavis. He's a person with great personal integrity and he's a lot of fun to
discuss with. And he's got a really good intuition for where things break. So in general,
the entire experience of having worked at Project Zero was pretty great. I spent a grand total of
eight years at Google, five of which in a team that did some malware related stuff. And then
two years in Project Zero and the two years in Project Zero were certainly a fantastic experience.
The thing that I find most interesting is that you have these almost celebrity bug hunters, for lack of a better term.
And what amazes me is how many people freaking seem to hate him.
And you do a little digging and, oh, you work at a company that had a massive vulnerability that was disclosed.
And one wonders why you have this ax to grind.
It's, again, in some levels, it's people doing you a favor.
I've never fully understood aspects of blaming people who point out your vulnerabilities to you in a responsible way.
Sure, I know you would prefer that they tell you and never tell anyone else, and you owe them maybe a t-shirt at most.
Some of us aren't quite that, I guess, willing to accept that price point for our skill sets.
Yeah, so the entire vulnerability disclosure debate is a very complicated and deep one,
and it also goes in circles over decades.
It's actually quite tiring after 20 years of going through the same cycle.
It feels like Groundhog Day.
But my personal view is that, to some extent, the software industry incurs risks on behalf of their users in order to make a profit.
Meaning you gather user data, you store it somewhere and you can, well, move fast and break things and nothing much will happen if that user data gets leaked and so forth.
So the incentives in the software industry are usually towards more complexity, more features,
and bad security architecture. And because there's no, I mean, there's no software liability,
there's no recourse for wider society against the risks that the software industry takes on
behalf of the users. The only thing that may happen is that you get an egg on your face because
somebody finds a really embarrassing vulnerability and then writes a blog about it. So in some sense, Project Zero and the people
that work at Project Zero, they wouldn't be doing their job if everybody loved them,
because to some extent, their job is to be an incentive to actually care. If people say,
oh, let's do a proper security architecture, otherwise Tavis will tweet at us, that's at least some incentive to have security.
It sounds a bit sad that this is the only thing that, not the only thing, but this is a thing that is necessary.
But part of the job of being a Project Zero researcher is not to be everybody's best friend, if that makes some sense.
Yeah, security is always a weird argument.
I started my career dabbling in it and got out of it because, frankly, the InfoSec community is a toxic shithole. Yes, I did say that. You did not mishear, if you're listening to this and take exception to what I just said. I said what I said. People who are new to the field were not welcome. So I found places to go where learning how this stuff
works was met with encouragement rather than derision. That may have changed since I was in
the space. It's been, what, nearly 15 years, but I'm not so sure about that. So I wouldn't know,
right? Because I grew into that community in Europe 20 years ago. And the community I grew
into 20 years ago in Europe was a very
different community from the community I encountered when I first came to the US
and interacted with the US InfoSec community. And also, you tolerate a lot of behavior when
you're 16 and you want to be part of a community that you wouldn't tolerate as an adult. So I'm
not sure whether I would have a very clear view
on these topics. Because the other thing is, once you reach some level of status,
everybody's incredibly nice to you all the time. And at least my experience in security
after I turned 18 or 19 was that people were by and large more friendly than justified to me.
Now, that doesn't
mean they weren't shitty to everybody else at the same time, right? So the reality is that I have a
skewed view of the security community because I got really lucky, if that makes any sense.
And then also, I guess I'm kind of picky about who I surround myself with. So the two dozen or
so people that I really like out of the greater security community
may just not have that same culture, if that makes any sense.
So help me understand your personal journey on this. You went from focusing on InfoSec to
cloud cost optimization. I have my own thoughts on that. And personally, I think that they're
definitely aligned from the right point of view. But I'm curious to hear your story.
How did you go from where you were to where you are?
Yeah, so we can call it perhaps a midlife crisis of sorts. But the background is,
after 20 years of security, you realize security is always at some level about human conflict.
It's always you do security for somebody against somebody in some sense. You're securing something against
somebody else. And it's very, well, I wouldn't say repetitive, but it's certainly a very difficult
job. And at some point I asked myself, why am I doing this? And for what am I doing this?
And I realized, hey, perhaps I want to do something that has a positive externality.
Like I don't necessarily want to participate in human-to-human conflict all the time. And I realized that my only credible chance
of dying with a negative CO2 balance or budget would be to help people compute more efficiently.
There's no amount of not eating meat or not driving a car that I can do that will erase all the CO2 I've emitted so far. But if I can help
people compute more efficiently, then I can actually have a positive impact. There's a
triple win to be had. If I do my work well, the customer saves money, I earn some money.
And in the meantime, I reduce human wastefulness. So that had a great appeal on a philosophical
basis. And then the other thing I realized is that when you do security work,
a lot of your work is reading existing legacy code and finding problems, and then having people mad at you because you found the problems in the legacy code and now they need to be fixed.
And it turns out that when it comes to optimization, the workflow is surprisingly similar.
The skills you need in terms of lower level machine stuff and so forth is also surprisingly
similar. And if you find a thing to optimize, anything to fix, then people are actually thankful, because you're saving money and making things faster. So it turned out that this was pretty much a match that worked out surprisingly well, and that's how I made the jump. And I have to admit, so far, I really haven't regretted it. The technical problems are super fascinating. To some extent, there's less politics even in the cost optimization
area. Because one of the issues with security is on the defensive side, a lot of good security work
is about convincing an organization to change the way they're doing things. So a lot of good
defensive work is actually political in nature.
And the purely technical gigs in security are often in some form of offensive role. For example,
the Project Zero stuff that's offensive for the defensive side, but still it's a sort of offensive role. And then the majority of these jobs are just in companies that sell exploits to governments.
And given that my forte happens to be more on the technical side than on the influencing-an-entire-organization side, and given that I didn't want to do offensive work in that sense anymore, I decided that this entire cost optimization thing has the beautiful property
of aligning good technical work with an actual business case. There's an awful lot of value there to aligning whatever you're doing with business case.
I would argue that security and cost optimization are absolutely aligned from a basis of cloud governance.
Of course, now here in reality, don't call it that because no one wants to deal with governance
and it always means something awful just from a different axis depending upon who you talk to. But that's the painful part. You ask a customer, hey, what's running in that region on the other side of the world?
Oh, we don't have anything there.
I believe you are being sincere when you say that.
However, the bill doesn't lie
and suddenly we're
in the middle of an incident.
It's funny that you mention this
because the number
of security incidents
that have been uncovered
by billing discrepancies
is large.
If you go back to
Cuckoo's Egg,
like Clifford Stoll's story about
finding a bunch of KGB-financed German hackers in the DoD networks, that was initially triggered by
an accounting discrepancy of, I think, 25 cents or something like this. So yeah, the interesting
thing about IT security versus banking, for example, is if you steal data, nobody normally knows because normally data isn't
accounted for properly, except when you cause large data transfer fees because you're exfiltrating
too much data out of AWS. Yeah, that's always fun when that happens. What's surprising to me,
and it makes perfect sense in hindsight, if you have a $75 AWS account every month and suddenly
you get a $60,000 bill, you sort of notice that.
But if you wind up getting compromised when you're spending, let's say, $10 million a month,
it takes an awful lot of Bitcoin mining before that even begins to make a dent in the bill.
At some point, it just disappears into the background noise.
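The point about small versus large bills can be made concrete with a toy anomaly check: flag a day whose spend exceeds the trailing mean by a few standard deviations. The dollar figures, the jitter patterns, and the three-sigma threshold below are all invented for the example, not drawn from any real billing system.

```python
import statistics

def is_anomalous(history, today, num_sigmas=3.0):
    """Flag today's spend if it exceeds the historical mean by num_sigmas stddevs."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return today > mean + num_sigmas * sd

MINING_BUMP = 500.0  # attacker adds the same $500/day of compute in both cases

# ~$75/day with small jitter, versus ~$10M/month with $5k-scale daily noise.
small_account = [75.0 + (i % 5) for i in range(30)]
big_account = [330_000.0 + 5_000.0 * ((i % 7) - 3) for i in range(30)]

print(is_anomalous(small_account, small_account[-1] + MINING_BUMP))  # True: the spike is obvious
print(is_anomalous(big_account, big_account[-1] + MINING_BUMP))      # False: lost in the noise
```

The identical $500 bump trips the alarm on the $75-a-day account and vanishes entirely inside the day-to-day variance of the large one, which is exactly the background-noise effect being described.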
Oh, yeah, definitely. But I guess that's always the case, right? If you look at a supermarket, they don't notice half of the shoplifting, right?
Yeah. Supposedly, anyway. I don't know. I tend to not spend most of my time shoplifting. I usually set my eyes on bigger game, you know, by exfiltrating data from people's open S3 buckets.
Is it even exfiltrating if their S3 buckets are open?
No, I've decided that people don't respond to polite notes about those things. Instead, I just copy a whole bunch
of data into those open buckets on the theory that, well, they might ignore my polite note.
They probably won't ignore a $4 million bill surprise. That's actually a fairly effective
sounding strategy. It's funny. Let's be very clear here. I'm almost certain that that could
be construed by an aggressive attorney as a felony.
And let's not kid ourselves. If you cost a company $4 million, their attorneys will always be aggressive.
This is not legal advice. Don't touch things you don't own. Please consult someone who knows what they're doing. It's not me.
Have I successfully disclaimed enough responsibility? Probably not, but we're going to roll with it.
All right. So if people want to hear more about what you're up to, how your journey is progressing, or hear
your wry but incredibly astute observations on this ridiculous industry in which we find ourselves,
where can they find you? Well, one option is clearly on Twitter. I run a Twitter account
under twitter.com slash Halvar Flake, H-A-L-V-A-R-F-L-A-K-E.
And that is not only about my professional work. I do have a fairly unfiltered Twitter account.
Like there's nobody goes writing my tweets and there's oftentimes things that I tweet that I
regret a day later, but that's the nature of Twitter, I guess. Oh, all of my tweets are ghost
written for me. Well, not all of them. Which ones specifically? The ones that you don't like. That's right. That's called plausible deniability. So yeah, and if you care
about questions like, I've got 50,000 machines and I would like to know which lines of code are
eating how many of my cores, then it's probably a good idea to head over to optimyze.cloud.
Remember, optimyze, M-Y-Z-E at the end, to make spelling more fun.
And sign up for our newsletter. You're disrupting the spelling of common words.
I'm sorry for that, but the regular domains were too expensive and trademarks are really hard to
get. They really are. Well, thank you so much for taking the time to speak with me today. I really
do appreciate it. Thank you very much for having me. Thomas Dullien, CEO of Optimyze Cloud.
I'm cloud economist, Corey Quinn,
and this is Screaming in the Cloud.
If you've enjoyed this podcast,
please leave a five-star review on Apple Podcasts.
Whereas if you've hated this podcast,
please leave a five-star review on Apple Podcasts anyway
and tell me why this should be deprecated as a show
along with what division of Google
you work in.
This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at Screaminginthecloud.com or wherever Fine Snark is sold.
This has been a HumblePod production. Stay humble.