Grey Beards on Systems - 168: GreyBeards Year End 2024 podcast
Episode Date: December 30, 2024
Our YE GreyBeards podcast is often one of our most popular. In this edition, we discuss the impacts AI is having on IT infrastructure and how we see it playing out over the next year....
Transcript
Hey everybody, Ray Lucchese here.
Jason Collier here.
With Keith Townsend.
Welcome to another sponsored episode of the GreyBeards on Storage podcast,
a show where we get GreyBeards bloggers together with storage system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
Welcome to another edition of the GreyBeards on Storage podcast.
Today is our year-end technology discussion.
We have with us everybody that matters in this world. Howard Marks is joining us for this session.
Jason Collier and Keith Townsend are here as well.
You know, we were talking earlier about the topics to discuss.
It's all about AI.
Keith, you want to start us off with that?
Yeah, so during the pre-show banter, we were talking
about how AI is driving all things.
First off, there's a massive difference between
2023 AI and 2024 AI. 2023 AI was all about, you know, how many GPUs can you buy?
How do we get LLMs smarter? That conversation has continued, but we don't have to look any further than AWS re:Invent, this past re:Invent.
If you asked Adam Selipsky what his three priorities outside of AI were, he didn't have an answer to that.
You ask the CEO of AWS today, there's an answer.
AI is all integrated in that plan. So we're marching
to a point where AI is becoming much more practical and used. I think there's still a huge,
and we can get into this discussion, a huge bifurcation between people who are training
models and people who are implementing the technology in infrastructure. It's been a fascinating year for AI
and its downstream effect in the enterprise.
Hey, Jason, are you seeing actually AI adoption
in the enterprise or is it still mostly hyperscalers,
big training labs and that sort of thing?
So the hyperscalers are clearly a big thing still.
But yes, we're seeing it in enterprise as well. My entire year, so I'm part of the data center, you know, strategy group at AMD, right? But my entire year has been fully surrounded by how do I support doing AI deployments, period. That has been my year. And seeing that grow and seeing that actually grow
in the enterprise space as well has been something to witness, right? Because this used to be just a
hyperscaler type of issue. And now very, very quickly, we're seeing it adopted in larger enterprises as
well.
And mostly from an inferencing perspective rather than training?
A little bit of both.
It's a little bit of everything. But yeah, the inferencing is a strong piece
of what we're seeing out there. And I think we're starting to see some of the, quote, killer apps coming out from an inferencing perspective, on how those can actually help, you know, enable enterprises to do business more efficiently.
I mean, that just starts with chatbots.
Yeah.
And it does.
It doesn't take a lot.
I mean, a chatbot doesn't take a lot,
but, you know, every website has one.
Yep.
I was using LLMs to do some coding the other day. It was, you know,
like a Three Stooges, "Who's on First" kind of thing.
It was kind of interesting.
It wasn't as successful as I would have liked, but...
So, Howard, what are you seeing in the market out there?
But even the most basic RAG stuff, you know.
Yeah, yeah, exactly. You know, we have VAST growing fast, and product marketing has
grown from me to six folks, and we need to bring them up to speed fast. And we built a ChatGPT and put all the technical documentation that we'd written into it.
And they ask it questions.
And it's much faster than any other ways we found to bring people up to speed.
I've got, it's not quite a thousand PDFs.
I just put them in a RAG just to try to understand them.
Trying to find something in that morass was a pain in the butt before.
Now, with a RAG, just a RAG, without a ChatGPT.
Yeah, just a RAG. It's perfect.
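To make "just a RAG, without a ChatGPT" concrete, here is a minimal retrieval-only sketch in Python. It assumes the PDFs have already been extracted to plain-text files, and it uses scikit-learn's TF-IDF rather than any particular vector database; the directory, chunk size, and query are illustrative, not anything from the episode.

```python
# Minimal "RAG without the chat": index a pile of extracted PDF text and
# pull back the most relevant chunks for a query.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk(text: str, size: int = 1000) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Build the corpus: one entry per chunk, remembering the source file.
corpus, sources = [], []
for doc in Path("docs").glob("*.txt"):            # hypothetical directory
    for piece in chunk(doc.read_text(errors="ignore")):
        corpus.append(piece)
        sources.append(doc.name)

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(corpus)         # sparse TF-IDF index

def retrieve(query: str, k: int = 5) -> list[tuple[str, str]]:
    """Return the top-k (source file, chunk) pairs for a query."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return [(sources[i], corpus[i]) for i in scores.argsort()[::-1][:k]]

for src, text in retrieve("how do snapshots interact with replication?"):
    print(src, "->", text[:80])
```

Finding something in a thousand-PDF morass really is mostly a retrieval problem; the chat model on top is optional polish.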
So you guys are echoing what I'm seeing in the enterprise.
So to bridge the gap between what Howard is seeing and kind of talk a little bit about what Jason is seeing,
where RAG has its limits is when we start to get past
chatbots and we get into this new area that rose in 2024, which is agentic AI, this desire for AI
to start to do functional stuff beyond just knowledge work or retrieving knowledge, summarizing knowledge or curating knowledge, this ability
to say, hey, based on the weather prediction in Florida, shift the number of air conditioners
we ship from California to Florida, making these types of recommendations or decisions. That requires, for lack
of a better term, agents to be able to think like humans.
And in order to do that, you need more data.
In order to do that, in support of the overall agentic AI vision that enterprises
have, you need to get the models to think like your organization.
And that requires retraining. That requires fine-tuning. That requires RAG.
And I'm seeing these next-level challenges. So people are doing everything from running chatbots on CPUs
to buying small clusters to do retraining.
It is a fascinating time.
You see a lot of enterprises doing fine-tuning of LLMs to do this sort of
stuff. I mean, that seems like a major step.
I'm actually seeing specialization happen.
You're getting certain, you know,
verticalized markets that are doing actually very specialized
training. And, you know, I think that's going to be the magic of where AI is going
to be the most useful.
Yeah.
You know, at VAST, we're still mostly seeing the training market.
And a lot of that's going to specialized cloud providers who are buying GPUs and storage
to support them in staggering amounts.
Yeah, and then that becomes where do you do the retraining?
If you only need to do occasional fine-tuning, why buy a cluster?
Why not do that and why not let the cloud providers take that capital hit
and then you just do your fine-tuning in the cloud?
To some extent. Yeah, I agree. I mean,
I think we're seeing a lot of that actually starting to happen.
But I think that's where you're starting to see those little,
you know, magic killer apps start to come out.
Well, for the large people who are training models,
it's not how do you buy a cluster,
it's where do you build a data center.
Even at mid-size, we're talking about systems
that draw 50 or 80 kW per rack
and a corporate data center that's set up
for 8 or 16 kW per rack.
You can't just buy Jason's GPU servers
and put them in the same data center
you've been running VMware in.
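To put Howard's numbers side by side, here is the back-of-the-envelope version in Python. The rack budgets are the ones he just gave; the 10 kW server figure is the one Jason quotes later in the episode.

```python
# How many 10 kW GPU servers fit in a rack's power budget? Figures are from
# the conversation: a dual-CPU, 8-GPU box draws ~10 kW; a corporate data
# center is provisioned for 8-16 kW per rack; AI racks run 50-150 kW.
SERVER_KW = 10

for rack_kw in (8, 16, 50, 80, 150):
    servers = rack_kw // SERVER_KW
    print(f"{rack_kw:>3} kW rack budget -> {servers:>2} GPU server(s)")
# An 8 kW corporate rack can't power even one of these boxes;
# a 150 kW AI rack holds 15 of them, if you can cool it.
```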
Oh yeah, we've got people talking
150 kW a rack.
And that means you need
to build the data center and
it only makes sense if you're building that
data center for 10
or 12 rows.
If you're only going to need one row
of that, you need somebody to host it.
So there's two levels of
the gold rush, right? There's the
how can I
get enough GPUs,
CPUs, etc.
There's the land grab. There's all
of these components when you're thinking
about it. But at the end of the day, this is
in service of some
value to the business. And I was
listening to an A16Z podcast. The software market is a $330-billion-a-year market. The labor market
in the US is a multi-trillion-dollar market. So business owners are looking at the fact that if they can improve productivity by 10% using AI,
whatever that means, whether that's reducing labor costs or increasing their output of work and
increasing their top-line number. And that's not even the promise. The promise is much bigger than that.
The resulting market absolutely changes. So everyone from SAP, Salesforce, ServiceNow,
you name the software vendor, they are trying to stuff in as much AI that moves the needle
as they can. And that has a downstream market effect.
Oh yeah.
That whole class of software-as-a-service providers are buyers.
You know, they are building AI into their products,
and they and the specialized cloud CSPs are where we're seeing the big action
going on.
Yeah.
And that training and that inferencing,
all of that has to happen somewhere.
Someone has to pay for it.
And we're seeing, so the killer apps are actually not
net new apps.
They are the apps that we already have today.
Like that scenario that I mentioned earlier,
SAP by themselves can do this inside of Joule without you having
to build anything on-prem. I mean, if you look at any of the millions of paperwork workflows that
we've automated over the past few decades, you know, those paperwork workflows have five or six
"stop at a human for them to do something" stages, right? And if you can replace half of
those stages with an AI that, you know, does a copy edit, then there's a huge savings. And, you know,
it's not necessary to have the AI be HAL and run the whole ship.
Right.
If you just replace a couple of steps in your common workflows that are now done by people.
Yeah, the whole coding stuff I talked about earlier.
Yeah, the nice thing about it is it provides a framework.
It provides interfaces.
It tells you what APIs you need to use.
It gives you all that stuff and how it works.
Whether it's perfect or not is another question,
but it's a major step forward.
It's like having a guide that dumps out a bunch of code you can do stuff with.
It gets you to the first debugging stage
four times faster than if you had to be looking up
what commands were someplace.
And that's just, it gets you like 80% of the way there, right?
I don't know if it's 80%. For me, it
was more like 40%, you know. So that's that first debugging stage, where you run it and it does
something, and it's not the right thing.
Well, yeah. And guess what? It's writing all the
crap code that you didn't want to write in the first place, right, when it comes down to it. And, you know, I have found
that it's gotten me, you know, probably 80% of the way there when I want to do stuff. I have
it write Ansible playbooks for me all the time, right? I'm just like, hey, write
me an Ansible playbook that does this. I was doing pre-CAD the other day with it. It's got no limit.
You know, it gets close. It doesn't get you all the way there, but the reality is it gets you
past a lot of the crap coding that you don't have to do.
Yeah. And most of the stuff that we're
doing this for, performance is not all that important. So, you know, whether that's optimized code or not, I don't care.
Yeah. Now think about this in different verticalized industries.
Think about this in legal, right?
Like when you're writing like a legal brief or something like that.
Guess what?
80% of that crap is like stuff that you don't need.
It's been done a thousand times before.
And so those are industries that can be revolutionized by this. If it wasn't for politics, you know, the ability to have, you know, a cut-and-paste build 80% of common legal documents would be easy.
You know, that's not even AI.
That's just, you know, WillMaker.
Yeah.
But, you know, people are going to
find ways around this, entrepreneurs. There's going to be an Uber of legal. When I say Uber,
I don't mean someone who's going to, you know, kind of do legal-as-a-service, that exists.
What it's going to be is someone who thumbs their nose at the politics and the laws of it,
and they do the thing, and they force adoption that's so tempting from a financial perspective
that it can't be ignored. And we're going to see that across so many industries.
I was thinking about that.
I was talking to one guy on a Slack channel the other day, and they were looking for some tasks to do.
I said, well, go look at the Code of Federal Regulations, load that up in a RAG, fine-tune an LLM on it, and offer tax services based on that, or legal services based on that.
This is a major business.
You look at the CFR.
It's crazy, crazy stuff.
Avoid the regulatory problem and just sell it to accountants and lawyers.
A law firm would much rather bill $1,000 an hour for a partner than $500 an hour for an associate.
And Ray, you hit on an earlier point through a question, which is, are we actually
going to see organizations fine-tuning?
One guy fine-tuned Llama to have it self-correct.
That was one person.
So if one person can do that with Llama 7B, what can a whole team of folks do in an enterprise?
Yeah, well, I realize that the fine-tuning you're talking about requires reinforcement learning with human feedback.
It's not easy, but it can be done.
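For a sense of what a one-person fine-tune looks like in practice, here is a heavily condensed sketch using Hugging Face transformers with a LoRA adapter via peft, the usual trick for making a 7B-parameter model trainable on modest hardware. The model name, dataset file, and hyperparameters are placeholders, not anything from the episode.

```python
# LoRA fine-tune sketch: train small adapter matrices instead of all 7B weights.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"              # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token      # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# Your organization's text, one {"text": ...} record per line (hypothetical file).
data = load_dataset("json", data_files="corp_docs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False makes the collator copy inputs to labels for causal-LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```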
And, you know, I guess it's not easy, but if you want the advantages,
if you want to reduce your labor costs by 10%, you're going to do it.
And if you have the kind of enterprise
that does optimize their operations through IT,
you know, UPS or Walmart,
by comparison to the general run of companies,
they have the resources to throw at it,
and they're all about process optimization, and they've got the
data.
Yeah. All right. You know, so we can say, well, it's going to start there,
and it's not going to trickle down to the Fortune 500th for a couple of years. But, you know, leaders are leaders.
I don't know, it's the innovation thing too. I mean, the people that want
to innovate in some space and take on the leaders, they've got to start somewhere.
And the only way they can start is with agentic AI and LLMs that are fairly available and stuff
like that. So it's something that you as an entrepreneur can use to go take on this stuff and
take on the big guys, you know?
So, you know,
and I think this is a good transitional example of what we're seeing across
the industry.
I'm going to give a household name that we don't think of when it comes to
technology: GEICO.
GEICO has hired Rebecca Weekly, who was
a VP and senior director at Intel, then a VP of infrastructure at Cloudflare,
to build what's essentially a private cloud. And they're on a 10-year journey to re-platform
every application so that they can use generative AI in ways that their competitors cannot, because they have not replatformed,
and to be able to answer some of these difficult regulatory questions and challenges that their
competitors can't, because they're not on the right tech stack.
So when we talk about smart organizations and huge companies taking advantage of it and making the decisions,
this is a great example of where the downstream impact is happening and where I'm seeing a lot of activity,
which you might not think is related, is folks building OpenStack clouds to take advantage of generative AI.
Yeah.
We need to get Rebecca
on this show. She's awesome, by the way.
She's incredible.
She's probably the smartest person I know.
I got to say, insurance
underwriting?
I have no idea how to do it, but it
is kind of the obvious place AI
is going to change things and
make it more accurate.
Did you guys know that GEICO is actually an acronym?
Government Employees Insurance Company.
I did not know that.
They used to have a business model like USAA has now.
Right.
So USAA, the next GEICO.
So that brings up a topic.
I think, Keith, you were the guy that identified it: at AWS re:Invent, every announcement was an AI announcement.
And I don't mean AI in the sense that, you know, this is how you use AI.
This is in the sense of, oh, we're Amazon and we too do AI, generative AI.
This year at AWS re:Invent, the thing was, okay, this is how you build AI applications, or applications that, let me rephrase that, applications that leverage AI. So, right, the multi-zone writing capabilities with S3.
We need to march along, but we need to think about how does this impact the ability for customers to access the data,
use their data, and apply that data with AI.
And that's what it was about.
So we're starting to see more intelligent services. S3 Tables is an example of that,
and probably a hat tip to VAST.
I'll save you the trouble, Howard, of saying:
Parquet and Iceberg support, native vector database support
and preparation, and your ability to train models
and use your data via RAG,
et cetera,
ready-made in the data lake that you're creating in your storage pools.
S3 Tables takes a lot of the headache out of managing data. You know,
it's managed Parquet, essentially. And that's a good thing.
But Parquet on S3 is a huge compromise for database storage, because it's storing data in immutable objects, and databases generally don't do that.
So, you know, you have to do things like Iceberg and layer it on: well, if I want to delete a record from a table, I have to create another object that's the deletion, and then Spark or Trino
has to read that and do the deletion in memory. And we decided to integrate storage and the
table format even tighter, so that we don't have the
limitations of S3 as an abstraction between them.
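A toy illustration of the merge-on-read compromise Howard is describing: the Parquet data objects are immutable, deletes land in separate (also immutable) delete files, and the query engine reconciles the two in memory at scan time. The structures below are conceptual, not the actual Iceberg spec.

```python
# Immutable data objects, written once, never modified in place.
data_objects = [
    {"object": "part-000.parquet", "rows": [{"id": 1}, {"id": 2}]},
    {"object": "part-001.parquet", "rows": [{"id": 3}, {"id": 4}]},
]
# A later delete writes a *new* object recording what is gone.
delete_files = [
    {"object": "delete-000", "deleted_ids": {2, 3}},
]

def scan(table, deletes):
    """Merge-on-read: the engine filters deleted rows in memory."""
    tombstones = set().union(*(d["deleted_ids"] for d in deletes))
    for obj in table:
        for row in obj["rows"]:
            if row["id"] not in tombstones:
                yield row

print(list(scan(data_objects, delete_files)))   # [{'id': 1}, {'id': 4}]
```

Integrating storage with the table format, as Howard describes VAST doing, removes that read-time reconciliation step.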
But the move to object as a whole is a big thing.
You know,
NVIDIA is shifting their focus for training from, you know,
you need a very fast file system to you're going to train on all of your data
and your data is probably in an
object store, so we should make that fast too.
Yeah. So, one of the
things I want to highlight from what Howard said, and this is a side benefit of S3 Tables: I've talked
to no less than three customers, immediately after that announcement, that went to replatform their
applications to not use DynamoDB, because DynamoDB was overkill for
their use case. They were simply taking objects and then having the object be the key in a key-value store that had immutable attributes for that object.
And they needed a way to look up those attributes.
So they paired S3 with DynamoDB with an immutable use case.
Wow, Dynamo's overkill for that.
Yeah, and Dynamo's overkill for that. So now they're
replatforming. And interestingly, one of the customers was actually in China, just coincidentally.
The idea is to say, oh, I don't need this middle layer, this overkill,
this expensive option, to do what Amazon is doing, which is the bigger deal: faster
query access to my object storage.
So again, this is a follow-on with them chasing AI use cases.
That's been an advantage to other applications that have nothing to do with AI.
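A rough sketch of the before-and-after Keith is describing, using boto3. The bucket, table, and attribute names are hypothetical, and plain S3 object metadata stands in here for the richer S3-side lookup features that motivated the replatforming.

```python
import boto3

s3 = boto3.client("s3")

# Before: immutable attributes kept out-of-band in DynamoDB, keyed by object key.
table = boto3.resource("dynamodb").Table("object-attrs")   # hypothetical table
attrs = table.get_item(Key={"s3_key": "photos/cat.jpg"}).get("Item", {})

# After: store the immutable attributes on the object itself at write time...
s3.put_object(Bucket="my-bucket", Key="photos/cat.jpg", Body=b"...",
              Metadata={"camera": "x100", "license": "cc-by"})

# ...and read them back with a HEAD request; no second database to pay for.
head = s3.head_object(Bucket="my-bucket", Key="photos/cat.jpg")
print(head["Metadata"])   # {'camera': 'x100', 'license': 'cc-by'}
```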
Well, you know, once you get to the enterprise, a large fraction of the data the enterprise wants the AI to learn from is in the data lake.
Yeah, yeah, yeah.
And so the integration of AI models and data lakes is heavily going on right now.
And we are in a position where it's like, well, okay, we'll hold both and they can cross-reference each other in interesting ways.
All right, Jason. Besides the obvious software stack implications of AI moving out to the enterprise,
there are hardware infrastructure challenges as well.
You want to talk to some of that, Jason?
You know what? I will actually highlight what you said about the software challenges.
I 100% see the challenges are software:
what stacks are being utilized for doing the AI deployment,
and then what hardware components can be run on that. You know, that said, the
hardware has challenges too. The sheer power constraints of these machines that we're talking about are ludicrous by most standards. You know,
one of our standard AI boxes is going to have, you know, dual CPUs and eight GPUs
strapped onto an OAM board that sucks 10,000 watts of power. 10 kilowatts.
That's a whole rack.
That's a system.
That's a rack.
To you.
No, no, no.
It's about 6U.
Let's call it 6U.
That's 6U out of 42U, right?
The average rack size is 42U.
Add that up and then think of all the networking
that you need to power that stuff.
By the way, each one of those things is usually hooked into a 400 gig Ethernet NIC.
Each GPU.
So we're talking now.
That switch is not an underpowered switch.
No, no, no, no.
So we're talking Arista.
We're talking Arista big stuff here.
So you've got basically 10 of those 400 gig connections per machine.
Yeah, as part of a test for Juniper,
I got in some Juniper 10K switches, and I think these things are like 4U or 6U each. They're massive.
They're just as big as the server. And they suck,
believe me, for somebody who had to pay a data center bill, they suck some power.
This technology uses juice, gentlemen. You look at
one of these machines, and one of these machines is going to be
a quarter-million-dollar machine, right? This is a server we're talking about, right? Yeah, or a Ferrari, you know, it's like one of the two, right? Yeah, six or eight of them in a rack.
So, you know, you're in a rack, and then, honestly,
think about the cabling. How much does it cost to cable these things into one of those switches
when you've got basically
10 400-gig
connections per machine?
So, Jason, I have this
magical 80 kW,
100 kW rack.
That's cute.
That's magical.
That's cute.
I have
a problem when my air conditioning goes out now
with my 8 to 16 kW racks.
How do I cool that?
Yeah.
Water?
Yeah.
You have to liquid cool one way or another.
Yeah, and, you know, right now, like, we're doing air cooling on a lot of those pieces,
but at the same time, it's still sucking 10,000 watts of power.
You got to cool 10,000 watts of power.
So it's liquid in the door on the back of the rack.
Yeah.
Or it's liquid in the server.
There's a lot of cold plate. But at the same time,
you look at what we're doing on the high-performance computing part, right? All of that's
liquid cooled, right? That's the reason that we can get 150 kW in a rack, right?
I was at OCP Summit earlier this year, and they were talking about megawatt racks, and I was just blown away.
But yeah, you're right, a 150-kilowatt rack is du jour today. A megawatt is not far off.
Kevin O'Leary is building a data center up in, like, nowhere, Canada, that's going to be seven gigawatts.
Oh, God.
And there was an article in the Washington Post this morning that, you know, the Biden administration is considering an executive order to let people build power plants and data centers on federal land.
Because that way you don't need the grid, which can't support the power, to be in the middle.
Well, and the reality is, right now there's so much talk about this
in the data center space, basically doing the whole micro nuke plants to support it.
But the actual short-term solution to help this out is probably going to be natural gas, which
is why O'Leary's wanting to build up in Canada for that stuff. And, you know, the reality is it makes sense. The short-term
solution is, okay, how do we tap into natural gas grids to be able
to power and cool this thing while, you know, we figure out how we get these micro nuke plants up and going,
because that's the only thing that's going to feed the beast right now.
And I think, by the time you're siting a plant, you're using the waste heat from the liquid cooling
to heat a neighborhood of houses. Right, right.
And we have to talk about the short-term impact.
Short-term, people need solutions, so they're doing old-school, bringing in diesel generators to create
small megawatt data centers.
Yeah.
Like, it's a big impact.
So at AMD, we were building out the MET Center,
which is down by the airport, and, you know,
it's down by the airport for a reason, because you've got the good power stuff.
We were building that out. And I remember,
when we were talking to Austin Energy, it was like, oh,
and by the way, we're going to pull this much power into the center.
And they're just like, oh, the hell you are.
So it's been an interesting conversation. Hey, we got it in there, but it was interesting.
Howard, you mentioned earlier, before the call, power being a zero-sum game. You want
to talk about that?
Oh, yeah. So, traditionally in HPC, all you worry about is going fast.
And so you do storage-y things like use a small SSD so you get more performance per gigabyte
and tier up into that scratch so you can process things as fast as possible.
But with AI, you're dealing with much bigger data sets.
And inside any given data center, power is a zero-sum game.
If I go from 15 terabyte SSDs to 120 terabyte SSDs,
then the power consumption in storage goes down by half a megawatt.
And that means I can run another three racks of Jason's
servers and get more work done, because, you know, a data center only has so much power.
So, you know, you've got to understand that every watt you use one place keeps
you from using it someplace else. And luckily, the storage has gotten to the point
where it's got higher capacity in the same power envelope,
or relatively more efficient.
That 120 terabyte SSD uses the same amount of power
as the 15 terabyte SSD.
That's insane.
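Howard's zero-sum arithmetic, roughly reconstructed below. The 20 W per drive and the 400 PB footprint are assumptions chosen to land near his half-a-megawatt figure; the 15 TB and 120 TB capacities are the ones he quotes.

```python
# Same data footprint, two drive capacities, same watts per drive.
FOOTPRINT_TB = 400_000        # ~400 PB of data (assumed)
WATTS_PER_DRIVE = 20          # typical enterprise NVMe draw (assumed)

def storage_kw(capacity_tb: int) -> float:
    drives = -(-FOOTPRINT_TB // capacity_tb)   # ceiling division
    return drives * WATTS_PER_DRIVE / 1000

small, big = storage_kw(15), storage_kw(120)
print(f"15 TB drives: {small:.0f} kW, 120 TB drives: {big:.0f} kW")
print(f"freed for GPU racks: {small - big:.0f} kW")   # ~467 kW, about half a MW
```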
Yeah.
I couldn't get Solidigm to let me have the one that they were showing me,
because they give you the 122 terabytes.
They said, Keith, you do not need that, we'll get you the four terabyte one.
So you do not need the 122 terabytes of storage.
I'm like, I don't need it, but hey.
Hey, if I got it, I'll use it.
My Plex server will greatly benefit
from that power reduction.
I have people
lined up for those things.
In my
data center, in my CTO
Advisor legacy data center,
I still have a management box that does
backup for my environment.
And we did an inventory. It has eight 6-terabyte HDDs in it. And that comes up to a
whopping 36 terabytes. I can go and get a Solidigm 61-terabyte SSD and double my capacity.
And not just that, but to this story, this is a big two-CPU box. I can get what used to be an Intel NUC,
put that in there, and with a 10 gig link it will outperform my management
server that's doing backup. That's a massive savings in power and space.
Yep. And this, you know, obviously this is somewhat being driven by AI activity and the data explosion or tsunami or whatever the hell you want to call it.
But it's really just the technology.
It's just moving down this technology roadmap, higher density.
Technology enables the higher density.
Well, it depends what you're talking about, because there's always the technology makes higher density possible, and therefore doing the same job gets smaller
and more efficient.
But with SSDs,
we reached the point where,
you know,
at somewhere around four terabytes where for the average user,
bigger wasn't actually better.
And so the flash density got higher,
so there were fewer chips in each four terabyte SSD.
But, you know, 120 terabyte boot drive doesn't make any sense at all.
And in fact, 120 terabyte SSDs don't make sense
unless you have 40 or 50 of them in an array.
Yeah.
And the sophistication to manage them properly.
Right.
And I think the list price for those is up around $20K each.
They are not inexpensive, but we have people going, where is our allocation, please?
Yeah, yeah. Because it's that AI space where, you know, you take a million photographs and you throw them into your facial recognition model, and they get tokenized, and it becomes a billion small files or objects.
And we just laugh at your power consumption. And so, you know, you need a lot of space and it's going to be accessed randomly.
So you can't be putting it on 7,200 RPM hard drives that can do 100 IOPS.
Right.
But I will say, when you're on a cluster of those AI machines, when they're just cranking, and it's like a 128-node cluster,
and you realize that you're sucking like 10,000 watts per machine,
and there's 128 machines going.
That's a lot of power.
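How much power, exactly? Here is Jason's cluster math with the standard kilowatt-to-BTU cooling conversion tacked on; the node count and per-node draw are the figures from the conversation.

```python
NODES = 128
KW_PER_NODE = 10

total_kw = NODES * KW_PER_NODE              # 1,280 kW of compute
btu_per_hr = total_kw * 3412                # 1 kW = 3,412 BTU/hr of heat
tons = btu_per_hr / 12_000                  # 1 ton of cooling = 12,000 BTU/hr
print(f"{total_kw} kW in -> {btu_per_hr:,} BTU/hr out ({tons:.0f} tons of cooling)")
# 1280 kW in -> 4,367,360 BTU/hr out (364 tons of cooling)
```

Every one of those watts has to come back out again, which is Howard's zero-sum point seen from the cooling side.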
In those things, yeah, you can just
pull one out of the rack and just throw an egg on top of it because
you can cook one.
It's impressive.
It's impressive, but like, wow.
They're also kind of loud.
A bit, yeah.
There's one data center where I put the noise-canceling earbuds in
and then shooting muffs over those.
So the air-cooled versions of these things now are running 25,000 RPM fans.
Oh, high-pitched.
And yeah, it's like a jet inside a hangar kind of loud.
Yeah, it goes right through your skull kind of loud.
Yeah. Yeah. Oh yeah. It's, it's not good.
Yeah. It's insane.
Yeah. I hope,
I hope the backup generator doesn't come on at the same time that you do not
want that in the CTO Advisor lab, I'm telling you right now.
No, I do not want that in the CTO Advisor lab. Not at all.
So the whole cooling side of this discussion we touched on a little bit.
HPC is kind of leading
the charge, but the hyperscalers
are seeing the problem. A lot of
these guys with the specialized clouds
and even the SaaS players are starting to
have the same problem.
The cooling is also a zero-sum game.
Wouldn't you say, Howard?
Oh, well, I mean, cooling
is kind of the other side of power. Every watt that
goes in has got to be removed somehow.
Yeah. I got into an interesting, I don't know if it was a
debate, but I had this conversation on social media. This guy said none of these cooling concerns
are an issue, the power is the most important problem. I'm like, you know what?
If you've ever been in a data center where cooling has gone inactive, I don't care how much power you get in.
You're not going to do any work.
It's thermodynamics.
A data center is a closed system.
If you're pumping heat in and you're not taking it out, it's going to melt.
Yep.
Oh, we should do a thing where we have thermite.
Oh, wait, never mind. I think somebody did that.
It has been several years since I've done that. Yes.
Thank God. Thank God.
All right, gents, let's bring us down to the last topic of discussion. I am sort of moving on from Silverton Consulting as a storage industry analyst, and moving to a role where I'll be doing much more work in creating space infrastructure or ocean-going infrastructure. And although Silverton Consulting will still exist, Silverton Space is a subsidiary of
Silverton Consulting.
The whole GreyBeards on Storage podcast will probably continue for the next couple of years
for sure.
But it may undergo a transition to be GreyBeards on Space or GreyBeards on Space Infrastructure
and things like that.
So it's been a long ride for me.
I've been doing this thing for over 20 years now.
I didn't realize whether I was going to be successful or not early on,
but it turned out to be a good ride.
Vaya con Dios, Ray.
Yeah, really, really.
And I really want to thank you guys,
and I'm sure we'll invite
you on future GreyBeards on
Storage or GreyBeards on Space
podcasts
as they come out and stuff like
that. Sounds great. As long as
I don't have to go into space, I'm too old for that.
Well, you know, there's always a
possibility, Howard.
There's always a possibility, buddy.
You may have to get on one of these SpaceX ships.
I was going to say, I can get you on a short list,
but I'm on one of the short lists for the SpaceX Mars mission.
Yeah, there you go.
We'll see how that goes.
That's it.
They're going to say, like, yeah, no, you're not healthy enough to go on a spaceship.
Or maybe you're just right.
No, we're just right.
And I have lost a bunch of weight.
We don't have to bring you back.
And I have lost a bunch of weight, but I don't think the SpaceX suits come in 3X.
It's probably okay for you.
All right, gents. Well, this has been great. Thanks again for being
on the show today, and we'll do something here one way or another in the next
couple of months, and we'll look forward to that. Any final comments, Keith?
You know, it's been an amazing year. I've enjoyed these podcasts. It's really great to be on a podcast with Howard again.
It's been a really long time.
I appreciate you bringing us all together.
All right, Jason.
Yep.
It's the same.
You know, echo what he said.
It's been great having everybody here.
Howard, it's especially good having you here.
Good to talk to you again, brother.
I have missed GreyBeards on Storage so
much. Having a job
just gets in the way of everything.
So many things.
Tell me about it. If it wasn't for that pesky
job. If it wasn't for that
pesky job, that paycheck, and
the stock options, I'm not supposed to talk
about.
Alright, gents.
Thanks a lot. That's been great.
And this is a wrap until next time.
Next time we will talk to another system storage technology person.
Any questions you want us to ask,
please let us know.
And if you enjoy our podcast,
tell your friends about it.
Please review us on Apple Podcasts,
Google Play,
Spotify,
as this will help get the word out.