Screaming in the Cloud - The Complexities of Cloud Networking with William Collins
Episode Date: February 29, 2024

Corey is joined by William Collins, Alkira's head cloud architect, to discuss the obstacles and possibilities of cloud networking. They discuss the evolution, challenges, and necessity of cloud networking, highlighting why this fundamental part of cloud design often goes unrecognized yet truly deserves attention. From William's early days of cloud skepticism to the incredible influence of services such as AWS Transit Gateway, William shares his experiences and insights into how network planning can make a big difference in cloud installations in this episode of Screaming in the Cloud.

Show Notes:

About William Collins:
William Collins is a principal cloud architect at Alkira, where he plays a pivotal role in evangelizing the company's vision, building customer relationships, and leading thought in the network, security, and automation spaces within the cloud ecosystem. With a rich background in enterprise technology across financial services and healthcare, including a significant tenure as Director of Cloud Architecture at Humana, William has made substantial contributions to cloud adoption and network modernization. Beyond his professional pursuits, William is passionate about content creation, hosting The Cloud Gambit Podcast, and teaching as a LinkedIn Learning Instructor. His expertise spans automation, cloud computing, and network engineering. An advocate for continuous learning and innovation, William's outside interests include woodworking, playing ice hockey, and guitar. While his insights are influential, they reflect his personal views and not those of his employer.

Show Highlights:
(00:00) Introduction
(03:24) William Collins shares his initial skepticism towards cloud computing
(07:28) The evolution of cloud networking
(13:50) The role of upfront planning in cloud network deployment to avoid scalability and complexity issues
(21:10) The shift from complicated, manual network setups to simple, effective cloud systems
(24:13) William uses Netflix's network design as an example of how cloud networking powers seamless user experiences
(27:44) The future of cloud networking and the ongoing need for innovation
(30:23) Closing remarks

Links:
Alkira's Website: https://www.alkira.com/
The Cloud Gambit Podcast: https://www.thecloudgambit.com/
William Collins on X (Twitter): https://twitter.com/WCollins502
AWS Transit Gateway: https://aws.amazon.com/transit-gateway/
William Collins on LinkedIn: https://www.linkedin.com/in/william-collins-
Transcript
If you don't do any planning up front with the network, you can hit some weird stuff and
you find yourself in a corner that's really hard to get out of.
Welcome to Screaming in the Cloud. I'm Corey Quinn. The cloud means an awful lot of things
to an awful lot of people. But without networking, it's basically just a giant space
heater. My guest today has some opinions with a capital O on the idea of networking in the cloud.
William Collins is a principal cloud architect at Alkira. William, thank you for joining me.
Thanks for having me. Excited to be here.
We spoke about a month or so ago on your podcast, The Cloud Gambit. How's that going for you?
It's going really well.
By the way, thank you for coming on as a guest and showing me some wisdom, giving me some tips as well.
So getting to learn from you, a titan in the podcasting industry, Corey Quinn.
Solicited, to be clear, solicited.
Showing up like, well, this is crappy and that's awful and that's just ridiculous.
Yeah, that's just called being a sparkling jerk. Yeah. That's not wisdom or anything like that. No, but everyone
has their own style and other people's material fits about as well as other people's shoes.
Honestly, I learned what I know about this space, mostly from getting it wrong,
but try something enough times you stumble into a formula that works for you. Roll into it.
Yeah. Well, you made my life easier, which is a net
positive. So yeah, thank you for that. I'll take what I can get. So I talked to an awful lot of
people who are doing weird things in cloud, focusing on different aspects of it. It's not
particularly common that I talk to people who have taken an interest, one might say bordering
on obsession, with the networking side of it. How do you get there?
That's a good question. So I think understanding that a lot of the disciplines in technology had a...
I don't want to say the transition was seamless to cloud computing, but it was much easier than
the path that networking took and the path that security's
taken. So networking just historically has just been a harder problem to solve because
if you think about it, there's a big blast radius, a big failure domain, a big
attack surface, whatever it is that you want to call it, that all this stuff sits on top of.
So more impact, it's really hard to move.
So I took that interest a long time ago.
I was in data centers.
I was writing Perl, using Expect,
automating load balancers,
and racking and stacking things.
And I remember two developers came to me and said,
hey, there's this cloud thing called AWS.
And they started giving me this vision.
And I was like, you all are crazy.
Nobody's going to put their intellectual property
up in someone else's data center.
That's dead on arrival.
And boy, was I wrong.
So I ended up...
What year was this, give or take?
It would have been back in... I want to say, like, 2000-something, later than 2010,
but before 2014 or '15, somewhere in there.
So VPCs had come out by then;
it wasn't just EC2 Classic anymore.
But yeah, that was a very common refrain.
Be like, oh, who in the world is going to trust this?
Well, basically everyone, as it turns out.
But that was not obvious at the time.
Yeah, absolutely.
And that put me in a place to where
once I did start seeing the value,
and then maybe this isn't a good thing, but a lot of times when you work for... And to kind of back up, I work for a startup now, but I was grinding on the enterprise side of the house for multinational professional services firms and healthcare organizations for 15 years. So that's where the vast majority of my experience is: down in the weeds,
figuring out how to bring in these new technologies
to enterprise.
And going through and figuring out,
okay, you have this cloud thing now.
We see the value and as an organization,
okay, we see other organizations doing this now too.
And that's almost like a safety net.
Like, oh, there's this new thing.
Oh, this CIO summit happened and a lot of CIOs are talking about this popular thing.
Okay, so we better look at it. We're going to start prototyping it and see what we can do with
it. And of course, that's kind of the way that I got into cloud. And then once I started using it,
I started loving it for a lot of various reasons, many of which meant that I didn't have to go
in the data center and pull cables or do different things. It was just a new way of doing things, you know, and there's trade-offs, of course.
You have a control plane that takes an API. Suddenly you're not driving frantically across town because you forgot to set a cron or an at job to revert a firewall change in five minutes, if you didn't cancel it, because I'm about to make a change and I've locked myself out.
Yeah.
The second time is really when it feels terrible.
Like you'd think I would have learned the first time.
What, have you been here before?
You sound like you've been there.
Lots of times.
And I think that back in 2008, turning into 2009,
that was when the global financial crisis hit.
And I was at a job then that I was getting kind of bored with,
but no one was hiring.
So, all right, sit tight.
I got my CCNA that year.
It was one of those adjacent areas
where I felt it would make me
a better systems person.
Like, okay, what do I touch a lot
that I sort of hand wave over
from a knowledge perspective?
And I didn't know what the hell
a subnet mask was.
I just knew I had to type that one in
or things didn't work right.
And let's figure out why. And it made me a better systems person as a result.
And increasingly, I find that there's a lot of hand-waving going on over networking in the cloud.
There have been for a while. There definitely is. One big challenge, I'd say, like cloud networking,
like historically has had a pretty, I'd say like a troubled past. So I like to think about cloud networking in a sense of floating abstractions that are just floating atop the real network that you interact with.
And you have a limited amount of ways that you can interact with and actually manipulate things by design.
So if you look towards the beginning of this journey, the first time I set up connectivity to AWS,
it was a site-to-site VPN.
It's like, okay, we have these three VPCs out there.
Our organization has to connect them back to on-premises.
And at the time, it was like, okay,
we've got routers in the data center.
Let's terminate the VPNs there
to these magic public endpoints in the cloud
and these magical availability zones.
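For flavor, here's a rough sketch of what that early site-to-site setup looks like through the API today. This assumes boto3, and every ID, address, and ASN below is a placeholder rather than anything from the episode:

```python
import boto3

# Hedged sketch: terminate a site-to-site VPN from one VPC back to an
# on-premises router. All IDs, IPs, and the ASN are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"   # on-prem router's public IP
)["CustomerGateway"]

vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"],
                       VpcId="vpc-0123456789abcdef0")        # hypothetical VPC

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]

# Static route back to the on-prem range; repeat this per VPC, which is
# exactly how three VPCs quietly become dozens of tunnels' worth of toil.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"],
    DestinationCidrBlock="10.100.0.0/16",
)
```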
Great. Now, I remember doing this and thinking, I'm an expert at cloud. I got this down. I've
been doing this forever. What more is there to learn? And of course, famous last words there,
because as you grow, that three or four VPCs turns into 34 VPCs and 80 VPCs and you keep growing. So unless your aspirations are limited to
wanting to be a tunnel admin for VPN tunnels for the rest of your life, and don't get me started
on transitive routing, that's not a good design or approach right there. And then technology evolved
and AWS tried to make it easier with Transit VPC. And I guess I could go on a rant there.
I think Transit VPC hit in 2016.
I can't remember off the top of my head,
but with that design, like you,
Corey Quinn is the customer,
would actually have to go in.
You got to create the Transit VPC
with public internet reachability.
You need to, at a minimum,
have some appliances in there.
Usually, a lot of times, at the time, it was two Cisco CSRs. You've got to build IPsec tunnels everywhere between the spokes and the transit VPC, and then you've got to run BGP atop it.
Good luck with that one, full stack developers, because that's even hard for network folks
based on the cloud and how it worked at the time.
And then your cup of tea, the billing,
you're paying Cisco for the licenses, right?
You're paying AWS to host the instances.
You're paying for all the egress
in all the different places.
You have all these different costs...
And then you're doing all that for a lot of value
that isn't there
at your scale. And you're paying for the time your team is spending figuring all this out too,
which is always more expensive. A lot of time. A ton of time.
But then Transit Gateway rolled out. And that was like a... I got to say, it was magic.
When that was released, I can't think of a single product release that had an immediate fit like Transit Gateway did.
For scale, for usability, for Terraform provider integration, for all these things that you would like to have with the networking.
So there's not so much ad hoc and piecemeal.
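For contrast with the Transit VPC recipe described a moment ago, a minimal sketch of the Transit Gateway version with boto3; the IDs, ASN, and description are placeholders, and in practice you'd wait for the gateway to become available before attaching:

```python
import boto3

# Minimal sketch: one Transit Gateway, one VPC attachment. No appliances,
# no IPsec mesh, no BGP to babysit. All IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(
    Description="hub for the 34-and-counting VPCs",
    Options={
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]

# (In practice, poll until the TGW state is 'available' before attaching.)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",             # hypothetical spoke VPC
    SubnetIds=["subnet-0123456789abcdef0"],    # one subnet per AZ in practice
)
```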
Transit Gateway, kudos to AWS because it was a fantastic product. My theory is that back when I
started, if you didn't have the network working correctly, none of the stuff you're putting in
your data center is going to work. So networking was a clear competency that your company needed
to have. Contrast that with today. We're starting, we're building a startup. Not yours,
because to my understanding, Alkira does cloud networking. You kind of need that skill set there
too. But if I'm building Twitter for pets or whatnot, and I need to get the thing online,
I can do that while knowing effectively nothing at all about networking past some very basic things.
And that'll carry me through for a while until suddenly it very much won't.
Whereas day one in a data center,
you have to understand the idea of VLANs,
of having an out-of-band management network,
of figuring out about congestion at top of rack switches,
trying to understand exactly how these things
are going to scale.
Huh, maybe I shouldn't give everything a slash 24
and put them right next to each other.
Maybe I should space them out
so I can grow and expand things.
Heck, even here at home,
I recently renumbered the network.
I made the blunder, though, of starting off with a 192.168.1.1. Well, all right, I'm expanding it from a /24 to a /23. Did that, and because of the way network boundaries work, it's now including the dot zero. So my gateway is smack in the middle of the range. I've got to move the gateway, which means, although I've
already dropped the DHCP interval significantly, not everything, like, you know, the Amazon Echo,
ahem, ahem, respects new information when it comes in from DHCP. So I've got to clear a couple of
hours to go through the house and power cycle everything that didn't obey the change after I make the switch.
Like you still make silly mistakes like this.
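For the curious, here's the boundary math he's describing, sketched with Python's standard ipaddress module; the addresses match the story above:

```python
import ipaddress

# The original home network: a /24 with the gateway at .1
old_net = ipaddress.ip_network("192.168.1.0/24")
gateway = ipaddress.ip_address("192.168.1.1")

# Widening the prefix from /24 to /23 moves the network boundary:
# the supernet of 192.168.1.0/24 is 192.168.0.0/23, not 192.168.1.0/23.
new_net = old_net.supernet(new_prefix=23)
print(new_net)                    # 192.168.0.0/23

hosts = list(new_net.hosts())
print(hosts[0], hosts[-1])        # 192.168.0.1 192.168.1.254
print(hosts.index(gateway))       # 256 of 510 usable hosts --
# the old gateway now sits smack in the middle of the range instead of
# at the conventional first host.
```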
It gets complicated in a bunch of different ways.
But today to build a company, you don't have to know anything about networking until suddenly you're going to really wish that you did.
Yeah, 100 percent.
Like if you're a new company starting out today, of course, the path to success is a lot
easier. But you've still got to plan. But most, let's face it, most companies that have adopted cloud,
most companies that you work with and that I work with, that most people work with,
they had something before AWS. It's funny. Yeah, that example you gave with IP addressing,
true story. True story here. So I got brought in,
you know, I had this job.
I got brought in to fix and help
with this big AWS deployment.
They built all these VPCs
and they were using synthetic data
within the VPCs to do all the testing
and everything.
And then they needed to connect
this stuff back to a series
of data centers.
IP collisions because everyone uses
the same RFC 1918 space?
Well, you know what's funny about that?
They took the example in the AWS documentation
and they basically, they're like,
look, we follow the documentation line by line.
We use this CIDR.
It was like a 10 dot something dot 0.0 slash 8 or slash 16.
And they used the same CIDR for every single VPC.
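A minimal sanity check for that kind of addressing plan, using Python's ipaddress module; the VPC names and blocks are made up, with the duplicated /16 mirroring the story:

```python
import ipaddress
from itertools import combinations

# Hypothetical CIDR plan: each VPC copied straight from the docs example.
vpc_cidrs = {
    "vpc-app":  "10.0.0.0/16",
    "vpc-data": "10.0.0.0/16",   # same block again -- a collision waiting to happen
    "vpc-edge": "10.1.0.0/16",
}
on_prem = ipaddress.ip_network("10.0.0.0/8")  # classic RFC 1918 space back home

# Catch VPC-to-VPC overlaps before anything gets peered or attached.
for (name_a, a), (name_b, b) in combinations(vpc_cidrs.items(), 2):
    if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
        print(f"{name_a} and {name_b} overlap: {a} vs {b}")

# And catch collisions with the on-prem range you'll eventually connect to.
for name, cidr in vpc_cidrs.items():
    if ipaddress.ip_network(cidr).overlaps(on_prem):
        print(f"{name} ({cidr}) collides with on-prem {on_prem}")
```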
Oh, no.
Yeah, yeah. The company, you know, sort of had a short-sighted view, so they went out and they thought that hiring full-stack developers meant, like, 100% experts at every single discipline, which we all know is actually impossible.
At that stage, companies also tend to be very parsimonious,
and they tend to believe, usually through the hiring narrative,
that their hiring people are the best in the world at this.
In reality, they would do very well to pay someone a $5,000 consulting fee to come in for four hours and just look at what they're doing and say,
fix this, fix that, fix this other thing. Good. There you go.
And then leave. But those small changes that are just lines on a whiteboard in the planning stages
become year-long projects once you realize that you've hit the limits and have to do something
about it. I mean, AWS is now charging, we're recording this in the middle of February,
they're now charging per public IPv4 address starting on the 1st of this month.
So March 3rd is when people are going to get their bills and freak out about this.
I have customers who are,
well, what do we do here?
We are going to spend hundreds of thousands of dollars
a month on this.
And AWS says, well, you can bring your own IP blocks.
We don't charge you for that.
Yeah, but you've got to then renumber everything.
And these things aren't just sitting there for funsies.
Customers have whitelisted these things or allow listed these things in various firewalls at client sites
around the world. It will take years, if ever, to get these things taken care of. Who's going
to spend all that time and effort? A lot of customers are taking it on the chin.
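Some back-of-the-envelope math on that charge, assuming the announced rate of $0.005 per public IPv4 address per hour (worth verifying against current pricing):

```python
# Rough arithmetic on the public IPv4 charge. The rate is the announced
# figure at the time of the episode; confirm against current pricing.
hourly_rate = 0.005
hours_per_month = 730
per_ip_month = hourly_rate * hours_per_month      # ~$3.65 per address per month

for ip_count in (1_000, 10_000, 100_000):
    print(f"{ip_count:>7} public IPv4s -> ${ip_count * per_ip_month:,.0f}/month")
# 100,000 addresses works out to roughly $365,000/month -- the "hundreds of
# thousands of dollars a month" territory that makes renumbering tempting.
```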
Yeah, 100%. And those are all things that you have to keep in mind up front because networking,
especially in cloud, if I'm thinking like, let's say, take route limits, for instance,
you get 200 route tables, I think, per VPC. And each one of these has a default quota of
50 routes. I'd have to look; something like that. So it pays to know this stuff,
which limits are hard limits and which limits are soft limits.
And which are the third category, the limits that AWS support swears up and down don't exist
until three escalations later when suddenly, oh, wait, hang on. And suddenly everything works.
I may have some bitterness.
Somebody is doing something on the back end, it would seem, of course.
So, and yes, you can increase these limits, these quotas for a VPC, but you might, you know, want to increase the quota for subnets at the same time. Why, you might ask? Well, route tables can be shared with numerous subnets, but those subnets can only be associated with a single route table. You spin up a new account, you need to propagate some of those limits immediately. And if you want to migrate something, you need to replicate those limits, and hope you remember all of them.
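Rather than remembering them, a hedged sketch of auditing those quotas with boto3's Service Quotas client; the quota code in the commented increase call is a placeholder you'd take from the listing:

```python
import boto3

# List the VPC-related quotas in an account so you can compare them against
# the account you're migrating from, instead of trusting memory.
quotas = boto3.client("service-quotas", region_name="us-east-1")

paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="vpc"):
    for q in page["Quotas"]:
        name = q["QuotaName"].lower()
        if "route" in name or "subnet" in name:
            print(f'{q["QuotaName"]}: {q["Value"]} (adjustable: {q["Adjustable"]})')

# Raising one looks roughly like this; take the real quota code from the
# listing above (the code shown here is a placeholder).
# quotas.request_service_quota_increase(
#     ServiceCode="vpc", QuotaCode="L-XXXXXXXX", DesiredValue=100
# )
```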
And increasingly, the default limits for things get lowered with time.
It used to be that you would be able to get... like, now, new accounts, for example, get 10 concurrent Lambda executions; it used to be a thousand.
So, yeah, things that used to work in a new account, limits you never realized existed or had a problem with, are going to cause problems now.
The world changes and you have to figure this stuff out all over again.
Yeah. And I mean, this takes you back to design principles too. So, like, when does a large organization decide, okay, what is the thing that causes us to create a new AWS account? And then, okay, this has a direct impact on when an organization decides it's time to create
new VPCs. And then that feeds into, okay, well,
how many applications or services may live in a given VPC, which directly impacts, okay,
how many subnets and route tables you might need within that VPC. So there's this cascading
thing that happens. And it's really easy. If you don't do any planning upfront with the network,
you can hit some weird stuff
and you find yourself in a corner
that's really hard to get out of.
What also astonishes me,
and I mean, you still see a lot of it in AWS,
let's be clear,
but there's a better option
where in networking, for whatever reason,
it seems like configuration as code
has been very slow to emerge.
We still see sites having issues
because someone forgot to copy run start
on their router config or switch config.
Power outage takes the thing out
and now nothing works.
It's a, or at least with an AWS environment,
as long as you're not doing the ClickOps thing,
cool, you can reapply everything
wherever it needs to go
and you can reason about that.
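In that spirit, a toy sketch of network configuration as code with boto3: the desired routes live in code and whatever is missing gets pushed. The route table ID, gateway IDs, and CIDRs are all placeholders, not anything from the episode:

```python
import boto3

# Toy reconcile loop in the spirit of "reapply everything wherever it needs
# to go": desired routes are declared in code, missing ones get created.
ec2 = boto3.client("ec2")
route_table_id = "rtb-0123456789abcdef0"                             # hypothetical
desired = {
    "10.20.0.0/16": {"TransitGatewayId": "tgw-0123456789abcdef0"},   # hypothetical
    "0.0.0.0/0":    {"GatewayId": "igw-0123456789abcdef0"},          # hypothetical
}

current = ec2.describe_route_tables(RouteTableIds=[route_table_id])
existing = {
    r.get("DestinationCidrBlock")
    for r in current["RouteTables"][0]["Routes"]
}

for cidr, target in desired.items():
    if cidr not in existing:
        ec2.create_route(RouteTableId=route_table_id,
                         DestinationCidrBlock=cidr, **target)
        print(f"added {cidr} -> {target}")
```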
Yeah, and one thing about AWS,
one thing that I valued,
so when I was automating things,
doing like the screen scraping, using Expect,
using all this different stuff
to automate network and firewall stuff in the past.
God, do you remember Rancid?
Oh, I ran Rancid in production, my friend.
I had to patch Rancid in production
to teach it to manage Radware load balancers.
It was not fun.
Like, I should not have been doing any of that.
I was not qualified.
Well, who would be? But yeah, I was not qualified to be doing any of that nonsense at the time,
but it worked. It did, and it worked well. I mean, really, at the end of the day,
what are you automating? What you're automating doesn't have an API exposed to give you consistent
data, consistent payload. You don't have a consistent way to interact with things.
That's the beauty about when I started picking up cloud,
the APIs are so much more robust.
Like when you make a call,
you're getting the same responses.
Like, I don't want to make it sound like a,
you know, beautiful field of dreams
because there's problems, of course.
But for the most part,
it's a better world,
you know, with an API-driven world than where we came from.
And big network vendors are trying to catch up.
Everybody has a Terraform provider just to say they have it now.
When they're really in the back end, the core infrastructure wasn't meant to have any of those high-level abstractions for interactions.
So yeah, it's an interesting space.
It's always been weird seeing people cling to the old way of doing things when learning the new thing isn't a question of whether, but how, it's happening. And that is what the industry
is doing. People feel like it's eroding their sense of identity. I mean, we're having this at
an opportune time. The reason I got into this stuff again, somewhat recently is I have a talk
coming up next month. I have spent the last month and a half or so building a Kubernetes locally. And it reminds me of all
the things I'd forgotten about having to deal with in data centers, waiting for hardware to show up,
inconsistent hardware failures, dodgy cables. Huh. The switch says it's supposed to be able
to do this. Why isn't it? And so on and so forth. Whereas in cloud,
it's super easy to, okay, I'm going to spin up an entire second stack of this in another account
and see if the same thing happens. Oh, look, it does. Cool. Turn it off again. And you're out,
what, 20 cents for the experiment? Whereas you have to buy a whole second set of equipment.
Good luck. I just got rid of a 42U rack in my basement when I was studying for CCIE like 10,
you know, way long time ago, over 10 years ago. I've had this thing forever. And to get to
disassemble this sucker and get all the stuff unracked and get all of it out. I mean, it took
like probably over 48 hours. It took forever. How many times did you end up bleeding from the rack
nuts? I hate those things. I actually left a lot of them in, to be honest. And they're in different sizes too.
They're not compatible,
but you can't tell easily by looking.
Yeah, I, sorry.
We sound like old men complaining about the advent
of how hardware has shaped out.
Grinding through that actually shaped you.
You really appreciate some of the conveniences
of modern day infrastructure though.
And if you did have to go through that,
you know, it's, yeah, it's a different world. And I mean, there's some things with cloud that are not ideal, as you know, and you talk about quite a bit, but for the most part, you know, net positive. I think we're in a better place now than we were, you know, 15 years ago.
I agree. And I think some of that might lend itself to a partial explanation of the lack of widespread interest in networking. You take things that are of interest to folks.
It's newfangled technology, serverless, Kubernetes, et cetera,
things that are perceived to deliver business value
for better or worse, whether they do or not
is a separate argument for another time.
But networking is viewed as,
well, that just should work, right?
If it doesn't, you have a problem.
It's viewed like electrical work or plumbing,
whereas if this breaks,
we're definitely going to call in an expert.
We're going to complain about the cost
and the time spent while the toilet is gushing water
because we don't even know
there's a shutoff valve behind it.
But it's not considered high value.
It's commoditized.
And I think that people aren't falling all over themselves
to go work in an environment like that
when there's new exciting things
that frankly get a lot more press. Yeah, I could talk forever. I could go on a rant,
so I'll go on a mini rant here real quick. With these new technologies coming in, and the new technologists that are coming in to work these new technologies in cloud, the first thing they see is the shiny and pretty things like serverless, the, you know, AWS Heroes in the community events promoting serverless, containers, you know. Networking, since I've been working in this space for a long time, has never made it to first class. We are always in the back of the plane, until something's broke.
Everything runs on EC2 and that barely gets talked about these days.
A hundred percent. It's not the exciting part. Of course, now everything has to sit in the back of the plane because it's Gen AI. You ever notice
that AWS loves to talk about the things it feels insecure about slash is actively failing at?
They never talk about the things that they truly excel at. And I think they're doing everyone a
disservice through that. I agree. Yeah, I mean, it's the whole thing,
like these new hype cycles come and it's like, okay,
like one thing I've, you know,
acknowledged and I've realized is like,
especially when serverless came along,
I love serverless.
I've used it historically to solve many problems,
but practitioners take this approach
to where it's like the whole,
everything has to be serverless.
It can't just be like where it fits
or everything has to be,
like I was part of an organization that had a Kubernetes only strategy. The whole, everything has to be serverless. It can't just be like where it fits or everything has to be,
like I was part of an organization that had a Kubernetes only strategy.
Every app that we built from,
you know, some date
had to be built on Kubernetes clusters.
And I'm thinking like,
it just doesn't make any sense.
There are things for which it's just not fit for purpose.
And it isn't helped at all
by the ongoing bastardization of what serverless means.
Originally, when it launched, it said on the AWS website that scaling to zero was a characteristic of serverless.
The Wayback Machine confirms this.
Then they decided at some point to call a bunch of things serverless that scaled down to like 30 or 70 bucks a month minimum. It's still serverless?
No, that's a managed service.
If it doesn't scale to zero, it's not serverless
is the old school purist definition of this.
So congratulations,
you've taken an exciting promising technology
and devolved the term into effectively meaninglessness.
Yeah.
And the interest, I mean,
I think part of it is on the cloud providers.
AWS is really good at getting the masses excited.
If you look at the Hero categories, they have the community, they have serverless, data,
containers, machine learning.
I think they have security now, but where's networking?
Where's the focus on the bedrock that runs
underneath all this stuff? And in many environments that I've gone through and redesigned over the
years, it's really clear to me that application-centric folks, historically, they might
end up creating anti-patterns and ultimately a lot more work and complexity for themselves
as a result of not having a well-designed network in place. You know, why is that? I mean, I look at it. So funny story. So my brother-in-law got
hitched to a girl in Canada. So the next year, my wife and I found ourselves in,
when I say the middle of nowhere, I mean the middle of nowhere in Canada. Beautiful.
But good luck finding a gas station. So the church they got married in was out in like these woods.
There was no plumbing, no electricity. Like it was out in the sticks. You know what worked really well for me?
And I actually gave a talk about this at some community day a while back, but it was Netflix.
The streaming video. And I thought, you know what? That is a testament to their...
Ultimately, there's a lot of networking under the hood. There's a lot of different stuff
going on to make that available to me where I was at, where I had mediocre cell service,
and there was nothing else. And I thought, wow, that's a really well-designed network.
I appreciate that. But again, a lot of people don't understand the mechanics that go into that
underneath the hood. And it's really hard to build out a robust, highly available,
distributed network architecture like that.
Even the fundamental economic building block
of networking has been turned on its head by cloud.
In the days of building out data centers,
you pay for a port speed
and then you would have a bandwidth commit
that was 95th percentile of use.
So you'd take every five minute span
throughout the month,
chop off the
top 5%. The next one up was the one that you got billed for. So whether you had data piling through this thing or nothing at all, you paid the same.
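A tiny worked example of that 95th-percentile model, with made-up five-minute samples:

```python
import random

random.seed(1)

# One 30-day month of five-minute bandwidth samples (Mbps), hypothetical
# traffic: mostly quiet, with a bursty spell.
samples = [random.uniform(50, 200) for _ in range(8640)]
samples[:400] = [random.uniform(800, 950) for _ in range(400)]  # big bursts

samples.sort()
cutoff = int(len(samples) * 0.95)      # discard the top 5% of intervals
billable_mbps = samples[cutoff - 1]    # the highest remaining sample sets the bill

print(f"95th percentile commit: {billable_mbps:.0f} Mbps")
# Burst as hard as you like for up to ~36 hours a month (5% of the intervals);
# the bill is set by this one number, not by total bytes moved.
```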
And suddenly, metering everything by the gigabyte at, you know, 1993 prices leads to the impression that people have, which is that networking data is precious. And it's really, really not, not anymore. But the clouds make us think that way. And we act as rational economic actors as if that were true. And that really, I think, shuts the door on a bunch of really interesting networking-based startups that could have existed, but in a time of cloud and with those economics, never would have been financially viable.
Yeah, you hit the nail on the head. Such a good point. And even as it turns out,
like you said, so even experimenting with things, of course, tends to lead to new innovation.
So a lot of products out there, if they have a free tier or, hey, you can try this for 60 days
kind of thing, folks are inclined to go in there and try it and build things.
I've done that with every CICD platform out there.
CICD is one of my fun places I like to live.
It's awesome because I like automation.
And a lot of those products, whether it's GitHub Actions or whatever else,
they have free tiers and you can just try and do things.
And the more you do that, the more you experiment and you venture out of your direct
discipline to the adjacent disciplines, that's how new things are made. That's how new innovation
comes. Well, it's so hard. If you're going to rack up some giant bill using like a...
Sorry to pick on Cloud WAN, but like a Cloud WAN or, you know, one of the more expensive services like that. How are you going to experiment and build things? You've got to do it on your company's dime while you're going in production.
Yeah. Looking at it, even the base pricing for a single core network on Cloud WAN was like four or 500 bucks. And yeah, I have budget to experiment with a bunch of AWS stuff. But if it's going to take me like a few thousand dollars to do this,
Like I'm going to work on things that I find much more directly germane to what I'm working
on that given week and not these things that are useful to a subset of folks, but far from
all of them.
The barrier to entry for a lot of these things is a little high.
The honest truth is people still learn a lot of this stuff, especially in networking, by
tinkering in home labs. And if you price
yourselves out of it and don't offer free tiers that let
people at least kick the tires on this,
you're doing yourself a disservice.
I mean, there were problems with it, sure, but Cisco's Packet Tracer was incredible in that it let you, for effectively free or next to it, run emulated versions of all these things in an imaginary network and see what routing and switching configuration changes did and how traffic would flow through.
It sure had its bugs,
but it gave people the sense of wonder that,
oh, it can do that.
That's fascinating.
Just sit down and tinker with it.
Yeah.
And a lot of these things, honestly, a lot of these disciplines are kind of converging, like especially network and security. If you look at a lot of these giant enterprises,
the network security and network engineering teams or tiers are converging into the same team,
same leader. This is happening, especially within cloud. And as it turns out, it pays more and more
to learn, to go outside of your discipline,
to figure out these technologies,
to experiment,
and try to educate yourself so that...
And not to say that everybody has to be a network expert,
but knowing the constructs,
being able to put them in the right context
for a particular design,
and knowing when and how to bring in, you know, whoever the network
heavy hitters might be, whether they're in your org or whether they're like a consultant,
that's going to save you and your organization time. Or even your own network. I'm trying to
deal with this thing. I'm not seeing how it works. Can you help? Like I solved a problem with that
gateway migration by talking to some pfSense folks on IRC, of all places.
That's awesome. IRC, of all places.
Still going strong.
There's always someone who knows this stuff.
We don't have to solve it alone, but
if you don't even know that the issues are
there, the only way they find out is by
blundering into them.
If you do a little due diligence
here, you will save your organization a ton
of time, a ton of technical debt,
and a ton of resource efforts. If you think about these things upfront, like shift the network left a
little bit, if you will. Get it earlier on in the planning process. And that's my goal with the
community and the outreach is trying to... Because I've been, I've seen a lot of these environments,
I've fixed a lot of these environments and just save yourself the time, you know, bring in some of these core infrastructure pieces earlier.
If nothing else, that's what the hallway track is for: like, I'm about to do this. What am I going to regret the most in six months?
Right.
People will give you 30-second answers. It'll save you weeks. But outreach is important to that end.
If people want to learn more about how you view these things,
where's the best place for them to find you?
So you can find me on LinkedIn, William-Collins, Twitter, WCollins502.
And then everywhere else, it is The Cloud Gambit, including I have a podcast, cloud-related.
Which I recommend. It's fun.
Yeah, we've had a lot of good guests, a lot of good conversations.
Yeah, and Corey's on there too. If you don't come for anything else, come for Corey. It was a great
episode and I thank you for coming on. Highly entertaining. Made me laugh. Awesome. We'll of
course put links to all of that in the show notes. Thank you so much for taking the time to speak
with me. I appreciate it. Absolutely. Thank you. William Collins, Principal Cloud Architect at
Alkira. I'm cloud economist, Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment that won't be delivered successfully
because that platform did not, in fact, understand how networks are supposed to work. TCP now terminates on the floor.