Screaming in the Cloud - The Rise of Generative AI with Raj Bala
Episode Date: April 4, 2023
Raj Bala, Founder of Perspect, joins Corey on Screaming in the Cloud to discuss all things generative AI. Perspect is a new generative AI company that is democratizing the e-commerce space by making it possible to place images of products in places that would previously require expensive photo shoots and editing. Throughout the conversation, Raj shares insights into the legal questions surrounding the rise of generative AI and the potential ramifications of its widespread adoption. Raj and Corey also dig into the question, "Why were the big cloud providers beaten to the market by OpenAI?" Raj also shares his thoughts on why company culture has to be organic, and how he's hoping generative AI will move the needle for mom-and-pop businesses.
About Raj
Raj Bala, formerly a VP, Analyst at Gartner, led the Magic Quadrant for Cloud Infrastructure and Platform Services since its inception and led the Magic Quadrant for IaaS before that. He is deeply in tune with market dynamics in the US and Europe, but also extending to China, Africa, and Latin America. Raj is also a software developer and is capable of building and deploying scalable services on the cloud providers he wrote about as a Gartner analyst. As such, Raj is now building Perspect, which is a SaaS offering at the intersection of AI and e-commerce. Raj's favorite language is Python and he is obsessed with making pizza and ice cream.
Links Referenced:
Perspect: https://perspect.com
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at Thinkst Canary.
Most folks find out way too late that they've been breached.
Thinkst Canary changes this.
Deploy Canaries and Canary tokens in minutes and then forget about them.
Attackers tip their hand by touching them, giving you one alert when it matters.
With zero administrative overhead and almost no false positives, Canaries are deployed and loved on all seven continents.
Check out what people are saying at canary.love today.
This episode is sponsored in part by our friends at Chronosphere. When it costs more
money and time to observe your environment than it does to build it, there's a problem. With
Chronosphere, you can shape and transform observability data based on need, context,
and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance
at chronosphere.io slash corey-quinn. That's chronosphere.io slash corey-quinn. And my thanks
to them for sponsoring my ridiculous nonsense. Welcome to Screaming in the Cloud. I'm Corey
Quinn. Back again after a relatively brief period of time
since the last time he was on is Raj Bala, formerly a VP Analyst at Gartner, but now,
instead of talking about the past, we are talking instead about the future. Raj, welcome back. You're
now the founder at Perspect. What are you doing over there? I am indeed. I'm building a SaaS service around the generative AI space at the intersection of
e-commerce. So those two things are things that I'm interested in, and so I'm building
a SaaS offering in that space. This is the first episode in which we're having an in-depth
discussion about generative AI. It's mostly been a topic that I've avoided because
until relatively recently, it's all been very visual. And it turns into sort of the next
generation's crappy version of Instagram where, okay, well, Instagram's down, so can you just
describe your lunch to me? It's not compelling to describe a generated image on an audio-based
podcast. But with the advent of things like ChatGPT, where suddenly it's
muscling into my realm, which is the written word, suddenly it feels like there's a lot more attention
and effort being paid to it in a bunch of places where it wasn't getting a lot of coverage before,
including this one. So when you talk about generative AI, are you talking in the sense of
the visual, in terms of the written word, in terms of all of the above and more?
Where does your interest lie?
I think it's all of the above and more.
My interest is in all of it, but my focus right now is on the image aspect of things.
I'm pretty knee-deep in Stable Diffusion and all the things that it entails, and it is largely about images at this point. So talk to me more about how you're building
something that stands between the intersection of e-commerce and generative AI. Because when I go
to Perspect.com, I'm not staring at a web store in the traditional sense. I'm looking at something
that, again, early days, I'm not judging you based upon the content of your landing page,
but it does present as a bit more of a developer tool and a
little bit less of a, look how pretty it is. Yeah, it's very much a developer-focused e-commerce
offering. So as a software developer, if you want programmatic access to all things e-commerce and
generative AI that are related to e-commerce, you could do that on perspect.com. So yeah, it is about taking images of products
and being able to put them in AI-generated places, essentially.
Got it. So effectively, you're trying to sell, I don't know,
titanium jewelry, for the sake of argument.
And you're talking about now you can place it on a generated model's hand
to display this rather than having to either fake it or alternately have a whole bunch of very expensive product shoots and modeling sessions.
Exactly, exactly. If you want to put this piece of jewelry in front of the Eiffel Tower or the pyramids of Giza, you can do that in a few seconds as opposed to the expensive photo shoots that were once required.
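For a concrete sense of what Raj is describing, the technique maps loosely onto Stable Diffusion inpainting: keep the product pixels fixed and let the model repaint the background from a text prompt. The sketch below is not Perspect's implementation, just a minimal illustration using the open-source diffusers library; the checkpoint name, file names, and prompt are placeholders, and it assumes a CUDA-capable GPU.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a publicly available inpainting checkpoint (placeholder model ID).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The product photo and a mask: white pixels are repainted, black pixels are kept.
product = Image.open("jewelry.png").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("RGB").resize((512, 512))

# Generate a new scene around the product instead of staging a photo shoot.
result = pipe(
    prompt="a titanium ring resting on a stone ledge in front of the Eiffel Tower, golden hour",
    image=product,
    mask_image=mask,
    num_inference_steps=50,
).images[0]

result.save("jewelry_eiffel_tower.png")
```

In practice a production pipeline would add steps this sketch skips, such as segmenting the product automatically to build the mask and upscaling the output, but the core swap-the-background step really is a few seconds of inference rather than a photo shoot.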
On some level, given that I spend most of my time kicking around various SaaS products,
I kind of love the idea of stock photography modeling, I don't know, Datadog or whatnot.
I don't know how that would even begin to pan out, but I'm totally here for it.
That's funny.
Now, the hard part that I'm seeing right now is, I mean, you used to work at Gartner for years.
I did.
You folks are the origin of the Gartner hype cycle.
And given all of the breathless enthusiasm, massive amounts of attention, and frankly, hilarious slash more than horrifying missteps that we start seeing in public, it feels like we are very much in the heady early days of hype around generative
AI. No doubt about it. No doubt about it. But just thinking about what's possible and what
is being accomplished even week to week in this space is just mind-blowing. I mean,
this stuff didn't exist really six months ago. And now the open source frameworks are out there. The ecosystems are
developing around it. A lot of people have compared generative AI to the iPhone. And I
think it's actually maybe bigger than that. It's more internet scale disruption, not just a single
device like the iPhone. It's one of those things
that I have the sneaking suspicion is going to start showing up in a whole bunch of different
places, manifesting in a whole host of different ways. I've been watching academia largely freak
out about the idea that, well, kids are using it to cheat on their homework. Okay, I understand the
position that they're coming from, but it seems like
whenever a new technology is unleashed on the world, that is the immediate, instantaneous,
reflexive blowback. Not necessarily picking on academics in particular, but rather the way that
we've always done something is now potentially very easy to use thanks to this advance in technology. Oh, crap, what do we do?
And there's usually a bunch of scurrying around in futile attempts to put the genie back in the
bottle, which, frankly, never works. And you also see folks trying to sprint to sort of keep up with
this, and it never really pans out. Society adapts, adjusts, and evolves. And I don't think that that's an inherently terrible thing.
I don't think so either. I mean, the same thing existed with the calculator, right? Do you remember early days in school, they said you can't use a calculator, right?
Because remember, you will not always have a calculator with you.
And when the rubber meets the road in person,
during that test,
you're going to have to show your skills.
And the same thing will happen here.
We'll just have to have ground rules and ways to check and balance whether
people are cheating or not and adapt.
Just like you said.
Yeah.
On some level, you're only really cheating yourself past a certain point.
Exactly.
There's value in being able to tell a story in a cohesive, understandable way. Oh, the computer will do it for me? I don't know that you can necessarily
depend on that. Absolutely. Absolutely. You have to understand more than
just the inputs and outputs. You have to understand the black box in between
enough to show that you understand the subject.
One thing that I find interesting is the position of the cloud providers in this entire space.
We have Google, who has had a bunch of execs talking about how they've been working on things like this internally for years.
You get how that makes you look worse instead of better, right?
They're effectively tripping over one another on LinkedIn to talk about how they've been working on this for such a long time,
and they have something just like it.
Well, yeah, okay, you got beaten to market by a company less than a decade old.
Azure has partnered with OpenAI and brought a lot of this to Bing so rapidly they didn't have time to update their moribund Bing app away from the "use Bing and earn Microsoft coins" nonsense.
It's just, wow.
Talk about being caught flat-footed on this.
And Amazon, of course, has said effectively nothing. The one even slightly generative AI
service that they have come out with that I can think of that anyone could be forgiven for having
missed is they unleashed this one year at re:Invent's Midnight Madness, where they had Dr.
Matt Wood get on stage with the DeepComposer and play a little bit of a song, and it would in turn iterate on that.
And that was the last time any of us ever really heard anything about the DeepComposer.
I've got one on my shelf, and I do not hear about it mentioned even in passing, other than in trivia contests.
Yeah, it's pretty fascinating.
Amazon, with all their might, and AWS in particular,
I mean, AWS has Alexa,
and so the thing you give to Alexa is a prompt, right?
I mean, it is generative AI in a large way.
You're feeding it a prompt and saying,
do something, and it spits out something tokenized to you.
But the fact that OpenAI has upended all of these companies, I think, is massive.
And it tells us something else about Microsoft, too, is that they didn't have the wherewithal
internally to really compete themselves. They had to do it with someone else, right? They couldn't
muster up the effort to really build this themselves. They had to use OpenAI.
On some level, it's a great time to be a cloud provider because all of these experiments are
taking place on top of a whole bunch of very expensive, very specific compute.
For sure.
But that is necessary, but not sufficient as we look at it across the board. Because even AWS's
own machine learning powered services, it's only relatively recently that they seem to have
gotten above the step one, get a PhD in this stuff. Step two, here's all the nuts and bolts
you have to understand about how to build machine learning models. Whereas the thing that's really
caused OpenAI stuff to proliferate in the public awareness is, okay, you go to a webpage and you
tell it what to draw and it draws the thing. Or go ahead and
rename an AWS service if the naming manager had a sense of creativity and a slight bit of whimsy.
And it comes out with names that are six times better than anything AWS has ever come out with.
So funny. I saw your tweet on that, actually. Yeah. If you want to do generative AI on AWS today, it is hard. Oh my
gosh. That's if you can get the capacity. That's if you can get the GPU capacity. That's if you can
put together the ML ops necessary to make it all happen. It is extremely hard. Yeah,
so putting stuff into a chat interface is a thousand times easier. I mean,
doing something like containers on GPUs is just
massively difficult in cloud today. It's hard to get them
in many cases as well. I've had customers that ask, okay, what special
preferential treatment can we get to get access to more GPUs?
Can you break the laws
of physics or change global supply chain? Because if so, great, you've got this on lock. Otherwise,
good luck. I think us-east-2, a couple weeks ago, was out of
the necessary GPU capacity for the entire week. I haven't been really tracking a lot of the GPU-specific
stuff. Do you happen to know what a lot of OpenAI stuff is built on top of from a vendoring
perspective? I mean, it's got to be NVIDIA, right? Is that what you're asking me? Yeah,
I don't know a lot of it. Again, this is not my area. I am not particularly deep in the differences
between the various silicon manufacturers. I know that AWS has their
Inferentia chipset that's named after something that sounds like what my grandfather had.
You've got a bunch of AMD stuff coming out. Intel's been in the space for a while.
But NVIDIA has sort of been the gold standard based upon GPU stories. So I would assume it's
NVIDIA. At this point, they're the only game in town. No one else matters. The frameworks simply don't support anything other than NVIDIA. None of it, if you look through the
source code, none of it really relies on Inferentia or Trainium or AMD or Intel. It's all NVIDIA.
As you look across the current landscape, at least let me rephrase that, as I look across
the current landscape, I am very hard-pressed to identify any company
except probably OpenAI itself
as doing anything other than falling all over itself,
having been caught, what feels like, completely flat-footed.
We've seen announcements rushed.
We've seen people talking breathlessly about things
that are not yet actively available.
When does that stop? When do
we start to see a little bit of thought and consideration put into a lot of the things that
are being rolled out, as opposed to, we're going to roll this out as an assistant now to our search
engine and then having to immediately turn it off because it becomes deeply and disturbingly
problematic in how it responds to a lot of things. You mean Sam Altman saying he's got a lodge in
Montana with a cache of firearms in case AI gets out of control? You mean that doesn't alarm you
in any way? A little bit, just a little bit. And even now you're trying to do things that,
to be clear, I am not trying to push the boundaries of these things, but all right,
write a limerick about Elon Musk hurling money at things that are ridiculous. I am not going to make
fun of individual people. It's like, I get that, but there is a punching up story around these
things. You also want to make sure that it's not, write a limerick about the disgusting habits of
my sixth grade classmate. You don't want to basically automate
the process of cyberbullying, let's be clear here.
But finding that nuance,
it's a societal thing to wrestle with on some level,
but I don't think that we're anywhere near
having cohesive ideas around it.
Yeah, I mean, this stuff's going to be used
in nefarious ways,
and it's beyond just cyberbullying, too.
I think nation states are going to use this stuff
as a way to create disinformation. I mean, if we saw a huge flux of disinformation in 2020, just imagine what's going to happen in 2024 with AI-generated disinformation. It's going to be off the charts. It feels like we're at a point where you fundamentally have to go back to explicitly trusted sources as opposed to,
well, I saw a photo of it or a video of it or someone getting on stage and dancing about it.
Well, all those things can be generated now for effectively pennies. I mean, think about evidence
in a courtroom now. If I can generate an image of myself holding a gun to someone's head,
you have to essentially dismiss all sorts of
photographic evidence or video evidence soon enough in court because you can't trust the
authenticity of it. It makes provenance and chain of custody issues far more important than they
were before. And it was still a big deal. Photoshop has been around for a while. And I remember
thinking when I was younger, I wonder how long it'll be until videos become the next evolution of this.
Because there was,
we got to a point fairly early on in my life
where you couldn't necessarily take a photograph
at face value anymore.
Because look at some of the special effects
we see in movies.
Yeah, okay.
Theoretically, someone could come up
with an incredibly convincing fake
of whatever it is that they're trying to show.
But back then, it required massive render farms and significant investment to really want to screw someone over.
Now it requires drinking a couple glasses of wine, going on the Internet late at night, navigating to the OpenAI web page and typing in the right prompt.
Maybe you iterate on it a little bit, and it spits it out for basically free. That's one of the sectors, actually, that's going to adopt this stuff the soonest. That's
happening now, the film and movie industry. Stability AI actually has a film director on
staff, and his job is to be sort of the liaison to Hollywood. And they're going to help build
AI solutions into films and so forth. So yeah,
but that's, that's happening now. One of the more amazing things that I've seen has been the idea of
generative voice where it used to be that in order to get an even halfway acceptable model of
someone's voice, they had to read a script for the better part of an hour, and they had to make
sure that they did it with certain inflection points and certain tones. Now you can train these things on, all right, great. Here's
this person just talking for 10 minutes. Here you go. And the reason I know this, maybe I shouldn't
be disclosing this as publicly as I am, but the heck with it. We've had one of me on backup that
we've used intermittently on those rare occasions when I'm traveling, don't have my recording set
up with me, and this needs to go out in a short time period. And we've used it probably a dozen times over
the course of the 400-and-some-odd episodes we've done. Only one person has ever noticed.
Wow.
Now, having a conversation, going back and forth, start pairing some of those better
models with something like ChatGPT, and basically you're building your own best friend. Yeah. I mean, soon enough, you'll be able to do video AI, completely AI generated of your
podcast, perhaps. That would be kind of wild on some level. Like now we're going to animate the
whole thing. Yeah. On some level, I feel like we need more action sequences. I don't know about
you, but I don't have quite the flexibility I did when I was younger. I can't very well even do a pratfall without wondering if I just broke a hip.
You could have an action sequence where you kick off the CloudFormation task.
How about that?
One area where I have found that generative text AI, at least, has been lackluster, has
been write a parody of the following song around these particular dimensions.
Their meter is off.
The cleverness is missing. They at least understand what a parody is and they understand
the lyrics of the song, but they're still a few iterative generations away. That said,
I don't want to besmirch the work that people have put into these things. They are basically magic.
For sure. Absolutely. I mean, I'm in wonderment of some of the artwork that I'm able to generate with
generative AI. I mean, it is absolutely awe-inspiring, no doubt about it.
So what's gotten you excited about pairing this specifically with e-commerce?
That seems like an odd couple to wind up smashing together, but you have had one of the best
perspectives on this industry for a long time
now. So my question is not, what's wrong with you? But rather, what are you seeing that I'm missing?
I think it's the biggest opportunity from an impact perspective. Generating AI avatars of
yourself is pretty cool, but ultimately I think that's a pretty small market. I think the biggest
market you can go after right now is e-commerce in the generative AI space.
I think that's the one that's going to move the needle for a lot of people.
So it's a big opportunity for one.
I think there are interesting things you can do in it.
The technical aspects are equally interesting.
So there are a handful of compelling things that draw me to it.
I think you're right. There's a lot of interest and a lot of energy and a lot of focus
built around a lot of the neat flashy stuff, but it's okay. How does this wind up serving
problems that people will pay money for? Like right now, to get early access to ChatGPT and
not get rate limited out, I'm paying them 20 bucks a month, which fine,
whatever. I am also in a somewhat privileged position. If you're generating profile photos
that same way, people are going to be very hard pressed to wind up paying more than a couple bucks
for it. That's not where the money is, but solving business problems. And I think you're onto
something with the idea of generative photography of products that are for sale. That has the potential to be incredibly lucrative.
It tackles what to most folks is relatively boring, if I can say that, as far as business
problems go. And that's often where a lot of value is locked away. I mean, in a way, you can think of
generative AI in this space as similar to what the cloud providers themselves do.
So the cloud providers themselves afforded much smaller entities the ability to provision large-scale infrastructure without high fixed costs.
And in some ways, the same applies to this space too. So now mom-and-pop shop type people will be able to generate interesting product photos
without high fixed cost of photo shoots and Photoshop and so forth.
So I think in some ways it helps to democratize some of the bigger tools that people have
had access to.
That's really what it comes down to: these technologies have existed in labs, at least,
for a little while, but now they're really coming out as interesting, I guess, technical
demos, for lack of a better term.
But now the entire general public has access to these things.
There's not the requirement that we wind up investing an enormous pile of money in
custom hardware and the
rest. It feels evocative of the revolution that was cloud computing in its early days,
where suddenly, if I have an idea, I don't need to either build it on a crappy computer under my
desk or go buy a bunch of servers and wait eight weeks for them to show up in a rack somewhere.
I can just start kicking the tires on it immediately. It's about democratizing access. That, I think,
is the magic pill here. Exactly. And the entry point for people who want to do this as a business,
so like me, it is a huge hurdle still to get this stuff running. Lots of jagged edges,
lots of difficulty, and I think that ultimately is going to dissuade huge segments of the population
from doing it themselves. They're going to want completed services. They're going to want
finished product, at least in some consumable form for their persona.
What do you think the shaking out of this is going to look like from a cultural perspective?
I know that right now everyone's excited both in terms of excited about the possibility and shrieking that the sky is falling. That is fairly normal for technical cycles.
What does the next phase look like? The next phase, unfortunately, is probably going to be
a lot of litigation. I think there's a lot of that on the horizon already, right? Stability
AI is being sued. I think the courts are going to have to decide, is this stuff above board?
You know, the fact that these models have been trained on otherwise copyrighted data,
copyrighted images and music and so forth, that amounts to billions of parameters.
How does that translate?
How does that affect age-old intellectual property law?
I think that's a question that's an open question, and I don't think we know. Yeah, I wish on some level that
we could avoid a lot of the unpleasantness, but you're right. It's going to come down to
a lot of litigation, some of which clearly has a point on some level, but that is, frankly,
a matter for the courts. I'm privileged that I don't have to sit here and worry about this in quite the same way,
because I am not someone who makes the majority of my income through my creative works.
And I am also not on the other side of it, where I've taken a bunch of other people's
creative output and used that to train a bunch of models.
So I'm very curious to know how that is going to shake out as a society.
Yeah.
I think that regulation is also almost certainly
on the horizon on some level. I think that tech has basically burned through 25 years of goodwill
at this point, and nobody trusts them to self-regulate. And based upon their track record,
I don't think they should. And interestingly, I think that's actually why Google was caught so
flat-footed. Google was so afraid of the ramifications of
being first and the downside optics of that, that they got a little complacent. And so they weren't
sure how the market would react to saying, here's this company that's known for doing lots of,
you know, kind of crazy things with people's data, and suddenly they come out with this AI thing
that has these huge superpowers.
And how does that reflect poorly on them?
But it ended up reflecting poorly on them anyway
because they ended up being viewed
as being very, very late to market.
So yeah, they got pie in their face one way or the other.
For better or worse,
that's going to be one of those things that haunts them.
This is the classic example of the innovator's dilemma.
By becoming so focused on avoiding downside risk and revenue protection, they effectively let their lunch get eaten.
I don't know that there was another choice that they could have made.
I'm not sitting here saying, and that's why they're foolish, but it's painful. If I'm in the same position right now, if I decide I want to launch something
new and exciting, my downside risk is fairly capped. The world is my theoretical oyster.
Same with most small companies. I don't know about you, what you do right now as a founder,
but over here at the Duckbill Group, at no point in the entire history of this company,
going back six years now, have we ever
sat down for a discussion around, okay, if we succeed at this, what are the antitrust implications?
It has never been on our roadmap. That's very firmly in the category of great problems to have.
But really confident companies will eat their own lunch. So you, in fact, see AWS do this all the time. They will have no problem
disrupting themselves. And there are lots of data points we can talk about to show this.
They will disrupt themselves first because they're afraid of someone else doing it before them.
And it makes perfect sense. Amazon has always had a, I'd call it a strange culture, but that doesn't do it enough of a
service just because it feels like compared to virtually any other company on the planet,
they may as well be an alien organism that has been dropped into the world here. And we see a
fair number of times where folks have left Amazon and they wind up being so immersed in that culture
that they go somewhere else and, oh, I'm just going to bring the leadership principles with me.
And it doesn't work that way.
A lot of them do not pan out outside of the very specific culture that Amazon has fostered.
Now, I'm not saying that they're good or that they're bad, but it is a uniquely Amazonian culture that they have going on there.
And those leadership principles are a key part of it.
You can't transplant that in the same way to other companies.
Can I tell you one of the funniest things?
One of these cloud providers has said to me,
I'm not going to mention the cloud provider.
You may be able to figure out which one anyway, though.
No, I suspect I have a laundry list to go out of these various ridiculous things
I have heard from companies.
Please, I am always willing to add to the list.
Hit me with it. So a cloud provider, a big cloud provider, mind you, told me that they
wanted Amazon's culture so bad that they began a thing where during a meeting, before each meeting,
everyone would sit quietly and read a paper that was written by someone in the room.
So they all got on the same page.
And that is distinctly an Amazon thing, right?
And this is not Amazon that is doing this.
This is some other cloud provider. It speaks to that bit of weirdness that you mentioned inside of Amazon, that they're willing to replicate some
of the movements and the behaviors whole cloth, hoping that they can get that same level of
culture. But it has to be organic and it has to be at the root. You can't just take an arm and
stick it onto a body and expect it to work, right? My last real job before I started this place
was at a small scrappy startup for three months.
And then we were bought by an enormous financial company.
And one of their stated reasons for doing it was,
well, we really admire a lot of your startup culture
and we want to basically socialize that
and adopt that where we are.
Spoiler, this did not happen.
It was more or less coming from a perspective,
well, we visited your offices
and we saw that you had bikes in the entryway and dogs in the office.
And well, we went back to our office, we threw in some bikes and added some dogs,
but we didn't get any different result. What's the story here? You cannot cargo-cult
bits and pieces of a culture. It has to be something holistic. And let's be clear,
you're only ever going to be second best at being another
company. They're going to be first place. We saw this a lot in the early 2000s of,
we're going to be the next Yahoo. It's, why would I care about that? We already have
original Yahoo. Those fortunes faded, but here we are.
Yeah, agreed.
On our last recording, you mentioned that you would be building this out
on top of AWS. Is
that how it's panned out? You still are? For the most part, for the most part, I've dipped my toes
into trying to use GPU capacity elsewhere, using things like ECS Anywhere, which is an interesting
route. There's some promise there, but there's still lots of jagged edges there, too. But for the most part, there's not another cloud provider that really has everything I need,
from GPU capacity to serverless functions at the edge to CDNs to SQL databases.
That's actually a pretty disparate set of things.
And there's not another cloud provider that has literally all of that except AWS.
So far, positive experience or annoying? Let's put it that way.
Some of it's really, really hard.
So doing GPUs with generative AI with containers, for instance, is still really, really hard.
The documentation is almost non-existent.
Documentation is wrong. I've
actually submitted pull requests to fix AWS documentation because a bunch of it is just
wrong. So yeah, it's hard. Some of it's really easy, some of it's really difficult.
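As a concrete illustration of one of those sharp edges: running a container on a GPU in ECS requires the task definition to reserve the GPU explicitly, which is easy to miss in the documentation. The boto3 sketch below is a hypothetical, minimal example of that single step, not Raj's actual setup; the family name, image URI, region, and sizing are placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-2")

# Register a task definition that pins one GPU to the container.
# GPU tasks run on EC2 (or ECS Anywhere) capacity, not Fargate.
response = ecs.register_task_definition(
    family="stable-diffusion-worker",  # hypothetical name
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "inference",
            "image": "123456789012.dkr.ecr.us-east-2.amazonaws.com/sd-inference:latest",  # placeholder
            "cpu": 4096,
            "memory": 16384,
            "essential": True,
            "resourceRequirements": [
                # This is the piece that actually reserves a physical GPU on the instance.
                {"type": "GPU", "value": "1"}
            ],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```

Even with this in place, you still need a GPU-backed capacity provider, the NVIDIA drivers and container runtime on the hosts, and the GPU quota itself, which is the part Raj and Corey note is often the real bottleneck.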
I really want to thank you for taking time to speak about what you're up to over at Perspect.
Where can people go to learn more?
www.perspect.com.
And we will, of course, put a link to that in the show notes.
Thank you so much for being so generous with your time.
I appreciate it.
Anytime, Corey.
Raj Bala, founder at Perspect.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment that you got an AI generative system to write for you.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group. We help companies fix their
AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We
tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.