Grey Beards on Systems - 127: Annual year end wrap up podcast with Keith, Matt & Ray
Episode Date: January 6, 2022. [Ray's sorry about his audio, it will be better next time he promises, The Eds] This was supposed to be the year where we killed off COVID for good. Alas, it was not to be and it's going to be with us for some time to come.
Transcript
Hey everybody, Ray Lucchesi here. Welcome to another episode of the Greybeards on Storage podcast, a show where we get Greybeards bloggers together with storage and system vendors to discuss upcoming products, technologies, and trends affecting the data center today.
This is our annual year-end podcast where we discuss the year's technology trends and what to look forward to for the next year.
Matt and Keith, what would you like to talk about?
You know what, Ray?
I would be extremely upset if we don't start the conversation all with Kubernetes.
Like, I'm just going to be, I'm going to explode.
That's good.
I think Kubernetes is the biggest story going here this year, but some would say not. I would say not. I think there's a lot of vendors that have entered the space. In the past few months, we've probably talked to, what, about five storage vendors that are all in on Kubernetes.
So I think from a storage vendor perspective, it is definitely a hot topic.
Is it a hot topic from a customer perspective?
You know, I think there's some nuance there.
I think the enterprise is starting to wake up and notice what's going on there and are
starting to, you know, all these guys that we talked to about storage,
the reason that they're talking storage and Kubernetes is because the enterprise is driving them there.
Yeah.
And let's face it, you know, the migration of an app or workload or even a cluster from
site to site is relatively trivial.
Exactly, but it's the storage associated with those apps, and migrating that storage so that, you know, you reduce your downtime and your ability, in a quote-unquote multi-cloud world, to maintain those dynamics.
There's a lot of interesting solutions on the storage side.
The trouble is not the apps. The trouble is the data. How to get the data to some disaster
recovery location. I mean, after the recent AWS failure, I'm getting calls from people all over the place who want to talk about what the solution to that is. The solution is disaster recovery.
And how do you get disaster recovery? The key is the data.
Yeah, so this is where I have a problem with Kubernetes.
Like, as Kubernetes developed over the past few years, the absolute message from the experts, the people who wrote Kubernetes the Hard Way, the Kelsey Hightowers of the world, is that Kubernetes is not for
persistent applications or persistent data. We have shoehorned it in because that's where the
hot thing is at. There are umpteen dozen solutions to replicate data.
I think the solutions that we all looked at over the past few weeks or the past few months are bending over backwards to do it in a way that benefits not necessarily the people who run Kubernetes solutions, but the people who sell Kubernetes.
Yeah, and I would agree.
I think that, you know, we've looked at this sort of from a granular perspective, but from a macro, we have to recognize that there's a lot of ways to skin a cat.
And a lot of enterprises are used to, I wouldn't call them server huggers,
but maybe virtual server huggers. Excuse my tongue there. But it may just be that continuing to leverage virtual machines, which are, if not the lion's share of today's architecture, close to it, may be right for a lot of workloads moving forward that a Kubernetes architecture may not be ideal for.
I'm struggling here, right? I use Docker containers and all that stuff in some of my personal stuff. And containers are interesting, they're useful, and they're very flexible and can run just about anywhere. And that's the advantage here. And the other advantage is the scalability thing. It's got everything the cloud has to some extent with respect to scale, with respect to
you know
functionality
with respect to
portability. It doesn't have
the hardware. It's not like you could spin
up a 20 node
Kubernetes cluster and then tomorrow
decide I want to go to 100 nodes.
But everything else, it
seems to be there. It's that automation.
Yeah. So Ray, I would agree with you if you're in the business of building clouds. If you're
in the business of building clouds and you want to compete against the AWS, the Azures,
the Googles, and the Oracles of the world, go and build it on Kubernetes. If you're in the business of building applications and you need
application platforms, go and consume cloud. Now, if that cloud just happens to be built on
Kubernetes, great. If that cloud is built on something else, also great. But the enterprise
has gotten into this whole contest of: can I build cloud better than the cloud providers, or even on par with the cloud providers? Interesting stat from, I think it was IDC: 88% of enterprises want cloud native
applications on-prem so they can repatriate static workloads from the public cloud.
So these are not things built on VMs. This is stuff built
with cloud interfaces,
but scale breaks
all things, and I don't think
most enterprises have any
business rolling Kubernetes
of their own, period.
Or even consuming it directly.
Ah, my God.
Keith.
That's a bold statement.
Yeah, and I've been pretty consistent with it. This is what I said last year. It's what I said the year before.
And it may be because, to a greater extent, Keith, the Kubernetes architectures are nascent. And I think that it's maturing.
Nascent?
Yes, you did say it last year.
Non-enterprise.
Well, I think maybe next year we may find that, and it involves a lot of things.
Orchestration is a big part of it. I think you're going to find that more companies are building their apps, more enterprise customers are either building in the cloud or even building their Kubernetes architecture in their own data center.
But what they really want is the ability to migrate it to the cloud provider that makes the most sense. So it may reside in their data center, but they may want it to sit on GCP or AWS or whomever.
And that's where the bugaboo is here.
When you think about this and about the ability to migrate your workloads in multi-cloud,
you've got networking issues.
You've got storage issues.
And none of it is as simple as it would be
in a hybrid cloud environment.
All of it gets a degree of complexity
that goes far beyond what we were talking about last year.
So Matt, why don't you define the distinction
between hybrid cloud and multi-cloud in your mind?
Well, in my mind, and I'm not sure that I'm the be-all and end-all answer to that question, but I think a hybrid situation would be we have an AWS infrastructure that we're consuming and we have an on-prem infrastructure that we're consuming. And that's hybrid, right? And the ability to move your
workloads in the data center and out of the data center to another hyperscaler is valuable.
But in a multi-cloud situation, we add AWS, GCP, Azure, who the heck knows?
IBM and all those other guys.
Exactly. And that multi-cloud. And on-prem or no?
Or is on-prem an option there?
I still think that due to latency issues, we're going to find that on-prem is a valid solution in many workload cases.
And even for the multi-cloud.
So on-prem is still a place in the multi-cloud.
It's just a lot more cloud vendors in that solution space.
Oh, God, I view it slightly different.
I partially agree.
Cloud is an operating model, not a place.
So if we accept that cloud is an operating model and not a place, a hybrid cloud is where I have a cloud that goes across different providers, but a cloud that is consistent across multiple providers, whether that's private data center or public cloud.
You're talking VMware Cloud Foundation and those sorts of things.
So you could run this thing everywhere, right?
Yeah, VMware Cloud could be an example.
But another example is that I have Kubernetes.
I'm using EKS in AWS, and I'm using VMware Tanzu,
but I'm managing it via VMware Tanzu Mission Control.
A hybrid multi-cloud is that I have multiple cloud control planes.
So I have AWS, I have Azure, I have my on-premises cloud, and I have three different operating models for how I do cloud.
And then we define hybrid infrastructure as simply having two different operating models. I have an operating model where I take a legacy, VM-like approach, whether that VM approach is in the public cloud or on premises. And then I'm consuming a cloud control plane like an AWS, et cetera.
Where does OpenShift fit into?
So OpenShift would be a hybrid cloud because I'm using OpenShift across, yeah, I can run it across multiple clouds,
but I have a consistent operating model across those clouds.
So your distinction then, if I read this correctly, is hybrid cloud is one operating model regardless
of where it runs.
Right.
And multi-cloud is multiple operating models that span cloud and on-prem.
Yes, and hybrid infrastructure. So your multi-cloud or your hybrid cloud can ride
on top of a hybrid infrastructure, whereas I'm sharing responsibility. I have a colo,
I have private cloud, and I have public data center, and I'm just using that as the underlay
hybrid infrastructure. So my multi-cloud or hybrid cloud runs on top of that infrastructure.
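If it helps to pin Keith's taxonomy down, here's a rough sketch in code. The type names and fields are my own illustration of what he's describing, not any standard model:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    locations: set       # e.g. {"on-prem", "aws", "azure"} -- the hybrid infrastructure underlay
    control_planes: set  # the operating models in play, e.g. {"tanzu-mission-control"}

def classify(d: Deployment) -> str:
    """Rough version of Keith's hybrid-cloud vs. multi-cloud distinction."""
    if len(d.control_planes) > 1:
        return "multi-cloud"   # a separate operating model per provider
    if len(d.locations) > 1:
        return "hybrid cloud"  # one consistent operating model across providers
    return "single cloud"

# Keith's Tanzu example: EKS in AWS plus on-prem Tanzu, one control plane.
print(classify(Deployment({"aws", "on-prem"}, {"tanzu-mission-control"})))  # hybrid cloud

# Three providers, three different operating models.
print(classify(Deployment({"aws", "azure", "on-prem"}, {"aws", "azure", "on-prem"})))  # multi-cloud
```

The underlay, Keith's hybrid infrastructure, is just where `locations` points; the classification only looks at how many operating models you're juggling.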
I truly don't believe that your average IT staff is going to embrace multiple operating
models for multiple different platforms.
But they have. They have Azure. They have it because they need to. They may not want to, but they have.
So the goal is to simplify that.
And the question is, can you simplify it?
Kubernetes is the answer.
Yes, you can.
Kubernetes does not simplify it.
Kubernetes gives you a consistent infrastructure, a consistent operating model, but that by far isn't simple.
It's not simple.
You're solving a complex problem with a complex solution.
So what's the brass ring, though?
The goal that we're all shooting for.
The brass ring is a simplified Kubernetes.
Is OpenShift the answer?
I think it might be, at least for an orchestration layer.
Maybe Ansible for, you know for your composition side of the equation.
Also a Red Hat solution.
Well, not that I'm biased. I'm not quick to say that there's a solution. I'm saying that this is a problem that's unsolved
right now.
Well, yes. And that's the way the industry is going. I think there's a lot of ideas.
And I think there's just no, if somebody said, you know what,
I want to buy the brass ring, the brass ring is still being molded.
Yeah, but we're closer and closer.
But here's the challenge.
I've got my set of enterprise applications that I've been running for 5, 10, 15 years that depend on databases, that depend on, you know, transaction processing, depends on, you know, VMware infrastructure and those sorts of things.
And I see the cloud and all this stuff and I say, you know, I really want to have the portability.
I really want to have the scalability.
And if anything, that's where I think the world needs to go.
Of course.
And how do I get there?
And I'm telling you, you can chase it now if you want,
but you're going to spin your tires at this moment.
It's not finished.
It's like when VMware, eight years ago,
when VMware sold the idea of a software-defined data center.
It's aspirational.
Could you buy it
eight years ago? No. Is it there today? Yes. The vision is right. The maturity isn't there.
But Kubernetes is running millions of applications already today. It's been running for a decade
almost. Would it be fair to say Kubernetes is not bigger
than AWS? Kubernetes is bigger than AWS. No, I'm talking about the number of workloads.
The number of workloads running on Kubernetes around the world versus number of workloads
running on AWS? I think they're almost equivalent. If they're almost equivalent, AWS, Amazon has said that they don't believe that more than 5% to 15% of the workloads
in the world are in public cloud. There's an overlap between those running in Kubernetes
and public cloud. So by definition, less than 15%, and I think way less than 15%, of the world's workloads are running on Kubernetes. Right now, Kubernetes is a blip, a pimple on the butt of enterprise IT. From a size perspective, it is a blip.
Are you saying there's no Kubernetes running on-prem?
No, I'm saying it's a blip on the butt.
It's a pimple on the butt.
It's there, but it's not more than 5% of the workloads in enterprise IT.
It's just not.
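Keith's back-of-envelope math can be written out explicitly. The 15% bound is the figure he cites; the other shares below are my own illustrative assumptions, not measurements:

```python
# Amazon's claim, as Keith cites it: at most 5-15% of the world's
# workloads run in public cloud. Take the high end.
public_cloud_share = 0.15

# Assumptions for illustration: even if half of public-cloud workloads
# ran on Kubernetes, plus a small on-prem Kubernetes slice...
k8s_share_of_cloud = 0.50
on_prem_k8s_share = 0.02

# ...Kubernetes still ends up a single-digit slice of all workloads.
k8s_upper_bound = public_cloud_share * k8s_share_of_cloud + on_prem_k8s_share
print(f"Kubernetes upper bound: ~{k8s_upper_bound * 100:.1f}% of world workloads")
```

However you tune the assumed shares, the product stays well under the 15% public-cloud ceiling, which is the shape of Keith's argument.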
Well, what we do have is a ton of legacy stuff.
And the enterprise IT organization is going to say, this works for us.
Why are we messing with it? And so you're probably right. There are very few Lotus Notes environments being migrated to Kubernetes.
I've got to take this blip apart here. Where do you think all these,
let's call it Salesforce and Dropbox and Box
and the thousand other as a service solutions,
where do you think they're running internally?
Well, the thousands of as a service solutions,
some of them are running, not all of them,
some of them are running on native AWS services.
Some of them are running on native Azure services.
Some of them are running on Kubernetes.
The millions of IT systems running out there are not running on Kubernetes.
So, yes, I agree with you.
The thousands are running on Kubernetes.
The millions.
All right.
So now we have to talk about how you define it. Are you talking about the number of unique workloads, or the amount of processing power devoted to those workloads?
I guess the number of unique, the overall effort. The overall effort to run enterprise IT. And our audience is primarily enterprise IT folks, and that's what they care about. Chances are you're not working at Dropbox and listening to this podcast.
The chances are you're running enterprise, you're running SAP, you're running some Oracle databases, and then you have some new stuff that you have to deal with.
And the new stuff is interesting, but it's not the majority of your job.
You know, I think I would agree with that, Keith.
Not that I want to be ameliorating for the audience, but I think that there is a differentiation here and that the differentiation is legacy versus new.
I agree with all that.
And I agree with the number of unique workloads.
We've been doing data processing since literally the 50s, and all those unique workloads are significant. But the amount of processing that's going on and doing actual work today,
I got to believe that Kubernetes is running more processing power than just about anything else in the world.
I wouldn't argue against you, right?
The question is, when a CIO looks at his resource budget, does he say: OK, my people resources, because Kubernetes is running more processing than anything else in the world, I'm magically going to get more Kubernetes people than people for the 95% that's the rest of my environment?
So the reality is, yes, the new stuff is faster, bigger, stronger, and it's more stuff.
When I talk to CIOs, CTOs, they're talking about all their stuff.
They're not just talking about Kubernetes.
They're talking about all their stuff.
And the stuff that they're in Kubernetes is interesting and it's part of their stuff and they're not just talking about Kubernetes. They're talking about all their stuff and the stuff that they're in. Kubernetes is interesting and it's part of their stuff and
they're building new stuff on there. But that deck alpha system that on the floor manufacturing
is just as important as the Kubernetes system. That deck alpha system goes down. They're not
making money. So you're saying we're not getting rid of mainframes anytime soon?
We're not getting rid of mainframes.
We're not getting rid of VMs.
You know what?
I just talked to Walmart, and they have plenty of OpenStack.
And they're not planning on moving off of it.
Are they running it on their VAXes, though?
The Kubernetes is just going to be another one of the things over the next five to 10 years that's running in your data center.
All right.
That's fair.
The old stuff never goes away.
What happens is that the new stuff moves to whatever the latest technology is.
And that stuff ultimately, you know, processing-wise, it provides more and more power. And let's face it, it's easier to migrate off of an AS/400 onto Linux than it is to migrate off a System/370.
You don't want me to start talking about old architecture.
We'll never get off this discussion.
All right.
So we kind of bandied Kubernetes to death here.
So the next thing I thought was kind of interesting that seems to be another one of these emergent topics for the enterprise
is AI. It's just every time I look
I'm seeing yet another solution that's based on AI.
It used to be it was just AI washing, like cloud washing, but nowadays we're actually
talking real AI. We're talking deep neural nets. We're talking data
up the kazoo in order to train these models and stuff.
Well, the key differentiator there and the enabling technology there is the advent of the GPU and the multi-GPU-per-server architectures that we're able to see. We're able to process in a very different paradigm
than we were on standard x86 or Unix systems.
The key moment in my mind was VMware at VMworld this year talking about adopting NVIDIA's AI suite and multi-GPUs and MIG and all this other stuff, which is
pretty sophisticated stuff that they're starting to come out with.
Yeah, it's been a big year for NVIDIA.
Oh, yeah, it's always been a big year for NVIDIA.
Yeah, they're just figuring out, I did some content with them this year, and they're trying
to figure out a couple of things.
One, they're trying to figure out how to make AI accessible downstream. So when you're thinking about data scientists at Georgia Tech and that training program, how do you take one of their big multi-GPU boxes and carve it up virtually? Basically solve old problems in new ways.
Yeah, they've got this hardware architecture that's coming, and how do they carve it up and virtualize it so everybody can use it?
Exactly. And then
the second problem is
this stuff simply moves too fast.
I'm not going to recommend a customer
go out and buy a 32
GPU HPC cluster and
then maintain that. You absolutely
need to figure out how you're going to rent it and then use it and then do with it as you please.
And then there's the problem of data movement. How do you get the data close enough to that compute?
You know, this data gravity problem. I'm doing something with Dave McCrory next month.
And we're talking about his Data Gravity Index. Maybe that's a better podcast for the Greybeards on Storage. We'll talk about that, Ray. But you know, the Data Gravity Index, and how do you deal with this stuff from a practical perspective?
I think the other thing that's apparent is that where it used to be, you know, I go in and I train a model and I kind of deploy it by hand and all this stuff.
All these solutions are starting to come out of the woodwork that have been there for a while.
Kubeflow, MLOps, SageMaker, Vertex or whatever the thing on Google is.
I mean, these things have been there for a while, but now they're starting to become real.
Yeah, I think Google just announced, I think it's called SageMaker IQ, in which the
target for that product is literally the business user being able to say, okay, I don't need to
hire a data scientist. Matter of fact, SageMaker IQ tells me what the interesting data is, similar to how a data scientist would tell me what interesting information is in the data.
And that's like, that's the nirvana. I don't know how good it is, but that's the problem.
Look at what NVIDIA is doing there. They've got their own library of models. They've got their own hardware.
They've got everything that you as a user would need to do AI.
And their management software is really powerful as well.
You know, as I mentioned to you before the podcast, I'm moving, you know, steadfastly
and deliberately into this space. And, you know, the question really is how does one gain knowledge from data that's already in the system?
That's what AI is. And that's what machine learning is. But in addition to that, you know,
as far as the architecture goes, Keith, you're talking about a consumption-based model. And, you know, yeah, so that's a different piece of the equation. But how does VMware and
how does Kubernetes fit into this as well? Kubernetes is the underlayment for Kubeflow.
And a lot of these ML app solutions are all based on Kubernetes.
Yeah, you beat me to the punch.
But it's really quite interesting what's been going on and the maturation of what is, again, a truly nascent architecture.
Where is it going?
You know, I don't know if we need the three laws of robotics.
You know, Asimov doesn't yet enter into this. But, you know, I think it's interesting
and it's potentially scary to a certain extent, right? We've got to oversee it and see what we
can control in these architectures as well. There are plenty of enabling technologies that
really made this a reality. I mean,
you mentioned GPUs, and GPUs are certainly an important aspect of this. But the data, man,
it's just, we are getting to a point where we're accumulating so much data. We never knew, we never
even knew we had this much data in the past. I mean, petabytes, we were talking to customers that,
you know, set up a petabyte of storage on their floor, right?
These guys are insane.
You couldn't do that 10, 15 years ago.
You'd be insane.
You'd have to have, you know, a data center, right?
We're not talking four racks.
We're talking a data center.
Nowadays, the petabytes of data are coming in, and the question is, what do you do with it?
And AI is the answer.
Exactly. You have to gain leverage. And Splunk could be a big part of that as well.
Certainly, that was the answer to these equations a few years ago. But the infrastructure for a petabyte of storage is a whole lot less than it used to be in terms of sheer rack space.
And again, in terms of IOPS, all of these things are relevant conversations to have. But again, as I say, and to echo your point, Ray, it's
about leveraging that technology and that data to gain useful information.
Right, right. And, you know, that's what's happening. I mean, every time I'm looking at
these, I try to look at research every week or so. And it used to be every once in a while,
there'd be a new ML version of an opportunity
in this sort of discussion.
And now all of a sudden now it's like half a dozen,
if not more.
Every time I go down this path to look at what's going on.
Oh, we're using MLOps in microscopy.
We're using MLOps in satellite picture understanding.
We're using MLOps in COVID diagnostics and stuff like that.
It's just amazing all this stuff that's coming out of the woodwork.
Yeah, I think it's going to be very intriguing to see what happens, which is really the reason that I'm diving into this space. The critical thing, I think, from my perspective is that, you know, all this stuff is happening,
but it's happening to some extent outside the enterprise.
When do we think to see the enterprise start adopting some of the machine learning and AI technologies?
I mean, I think it's happening. It's slowly but surely, but it's still pretty early on that adoption cycle.
Yeah, I think it is. And I'm looking forward to seeing what this means, and there are
a lot of sort of architectures in a box that are pervading the space much in the way that the
VMware-based converged architectures did in the past, the V-blocks and the, who knows?
Yeah, I mean, so certainly NVIDIA has taken this bull by the horn with their own solutions,
their own hardware solutions and stuff like that.
And the storage vendors are playing to that crowd to some extent by packaging their own
storage with NVIDIA.
And NetApp was an early adopter of that, but certainly not the last.
I was talking to a storage provider not too long ago, and they're not sold. Some of them are just not sold on the challenges and the viability of these sort of roll-your-own AI solutions, even when it comes to the bleeding-edge storage providers that, yeah, can present those kinds of IOPS.
But I am finding that the differentiations for these solutions are in two areas.
Certainly, the storage is definitely one of them, but it's also the management layer.
And the NVIDIA pod stuff has a robust management layer, and those are being rolled into a lot of solutions.
In fact, I would go so far as to say very few of the solutions don't involve the NVIDIA architectures on the server side.
But the storage, the storage is going to be what makes, you know, Pure's AIRI different from HPE's solution.
Right, right.
I mean, so to a large extent, I think there are two places where this is happening.
And on-prem with NVIDIA pods and those sorts of solutions, there's certainly one approach
to that.
And there are going to be customers out there that do that sort of stuff.
But the other place is the cloud.
Yeah, right. Well, if you don't want to dive into, let's just, I'm throwing a number out there. I
don't know, I don't have any idea really what the cost is of the various solutions, but
let's just say a million dollar architecture just for AI. And if you can rent space, as Keith was talking about earlier, on a Google platform
solution that does the job for you, or if you can rent space on an Azure solution, AWS solution,
well, that might be your entry point, but it might not be where you end up. And the sheer reliance on horsepower, on processing power, and on storage could place one of these cloud-based solutions way out of reach financially when you stretch it out over the life of a product. That's what makes the VMware announcement so interesting in my mind,
because now they're starting to think, okay,
AI is starting to be more consumed in the enterprise.
We have to have some serious support for these things.
So by moving towards GPU and multi-GPU support in a VMware server, by moving to MIG, which is Multi-Instance GPU, effectively GPU virtualization, by moving to adopt NVIDIA's, and it's not clear in my mind if they're adopting the whole NVIDIA management framework or what they're bringing, but they're bringing a lot of NVIDIA pieces and processes into the internals of VMware.
Right, right.
And, but where does that leave AMD in this equation?
AMD, you know, AMD is a large organization that has their own, you know, their own stuff.
They have got their own GPUs and a lot of those GPUs can be used in this sort of framework
as well.
It's, you know, it's obviously a question of CUDA compatibility or that sort of stuff.
Exactly right.
Real questions.
Where does that leave Intel?
Intel is not really a GPU main player in any of this space.
Oh, I think we were talking about Intel's deep pockets recently.
I think we're going to find that Intel has answers.
I think what Intel, the answer that Intel has is really by making their CPUs more GPU-like and by making, you know, more functionality.
They've done stuff with their DL Boost and things of that nature to try to make it more deep learning eligible.
But, you know, there's two parts of this coin.
There's the training side and there's the inferencing side.
If they could tackle the inferencing side, they'd be fine.
Yeah, so VMware, I mean, I'm sorry, Intel will be fine on the,
they have a very solid story of talking about generalization of inference
and doing kind of building frameworks and being able to do AI on general compute.
Not everyone needs the processing power of a GPU.
Not every problem is GPU size.
So Intel has a very good story to tell around that.
Going a little bit back to the Kubernetes story, and I was listening to
the NetApp, some of the NetApp story. And NetApp and NVIDIA together actually have a really good
story. But NetApp has actually worked on some really cool stuff. As I've dealt with HPC and AI, HPC version control and CI/CD, containerization of the algorithms and the process and even the data sets is a big deal. When you think about being able to take snapshots, storage snapshots, of results and data sets and combine them in a CI/CD workflow.
And you can look back in revisions and say, you know what?
We get this result with this version of the data, et cetera.
The whole versioning thing becomes a lot easier
if you could snapshot everything.
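Here's a minimal sketch of the versioning workflow Keith describes: fingerprint the data set and record it next to the code version and the result, so any result can be traced back to the exact data that produced it. A real pipeline would use storage-level snapshots (as on NetApp) rather than a content hash, and every name below is illustrative, not any vendor's API:

```python
import hashlib
import json

def dataset_fingerprint(path: str) -> str:
    """Content hash standing in for a storage snapshot ID."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()[:12]

def record_run(manifest: list, dataset: str, code_version: str, result: float) -> None:
    """Append one CI/CD run: which data, which code, what came out."""
    manifest.append({
        "data": dataset_fingerprint(dataset),
        "code": code_version,
        "result": result,
    })

# One entry per training run; diffing entries answers
# "we got this result with this version of the data."
manifest = []
with open("dataset.csv", "w") as f:
    f.write("x,y\n1,2\n")
record_run(manifest, "dataset.csv", code_version="v1.4.2", result=0.91)
print(json.dumps(manifest, indent=2))
```

Because the fingerprint changes whenever the data does, looking back through revisions becomes a lookup in the manifest rather than guesswork.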
Yeah, it is a big, big deal.
Oh God, yeah, yeah, absolutely.
I mean, the whole ML Ops to some extent is, you know, keeping the model, the data, and the inferencing in lockstep, right, and all that stuff.
And versioning is critical to that.
Critical to that.
But we could talk about AI all day, I think.
But we got one more topic to discuss.
Yes.
What is that topic?
I think it's work from home.
And why is that important, Matt?
Because the pandemic has changed everybody's lives, particularly in this industry. For the enterprise employee, you know, we're kind of there when we look at things like delivering desktops via VMware or Citrix. Our remote access troubles, old-school VPN solutions, are, you know, as archaic as any other old solution. I think that where we have to start considering
things is what's the future going to look like? And are people ever going to go back into the
office as a full-time role? Or is that going to, forgive the expression, remain hybrid?
I think the challenge is, you know, we've kind of gotten
beyond the technology.
Yeah, there's still some nuances there
that could be better.
Zoom and things exist.
But I think the challenge now
is what happens to the rest of the world.
Offices, downtown office space,
conventions,
does the convention still make sense?
I mean, I was reading an article
the other day that said,
you know, the nice thing about
virtual conventions, it opens up the world.
People from Africa, people from Asia, people from the U.S., all these guys can now attend
these conferences without having to really spend the money and time to do it.
Not only that, the diversity it adds to those sorts of things is just impressive.
So I have mixed feelings about it.
Our virtual event back in 2020, we had 50 percent gender representation, equal gender representation.
Fifty percent. Yeah. From a speaker perspective, not from an attendee perspective.
So my keynote speaker was Karen Lopez, and I had, from a sponsor's perspective, I pushed the sponsors to make
sure that we had gender diversity from a sponsor perspective. I was able to get first-time speakers
who never even dreamed of speaking at a conference. We were able to coach them through their sessions
because it was pre-recorded, so they could put their best foot forward.
But I literally just dropped off of an AWS analyst session for their storage announcements because it was just non-engaging.
It was something where I literally said, you know what, this could have been an email. This could have been an email with a link to a YouTube video. If you have a captive audience, you have to find a way to engage that audience. And I have not seen a virtual event that is engaging an audience. I don't know what the appeal is for me to give four to eight hours of my day to engage at a virtual event.
So the question you have to ask is, what's the alternative?
If I'm sitting in Las Vegas, God forbid, for four to eight hours,
listening to vendors talk about their product,
is that more engaging or less engaging than doing that virtually?
What's more engaging is that, you know, we'll look at each other, Ray, like, man, this is ridiculous.
And we'll go out into the hallway and have a conversation.
Right.
Well, that's the problem is the lack of human interaction.
And I agree with you, Keith.
For me, for years now,
that's been the best part of these trade shows.
Yeah, the face-to-face
meeting. Yeah, I went to AWS
re-invent, and I didn't even have a ticket.
It was a full show for me.
So that part of the virtual thing,
until we figure that out...
I just got an email today.
HPE Discover is the 28th to the 30th, in person.
So we're going to keep kind of fiddling around until we figure it out, because
without the human interaction, this content is just noise.
I saw a new technology that's like a phone booth.
It's a holographic display of a person.
And you can look, and it'll look at people and stuff.
And maybe that's what's needed.
I don't know.
I'm not sure.
It's probably a million bucks to put on per person or something like that.
I can't envision standing in a phone booth for an entire trade show, though.
Well, you know, and I just bought a Quest.
I mean, I have a Quest 2.
And you know what?
It's just the community isn't there.
The network is not there.
It's Facebook-centric.
And I don't have any desire to give my business community over to Facebook.
I'm with you.
I agree.
But I mean, the work from home thing, the other part of this is, you know, what's going on with offices?
You know, so downtowns are all, you know, to a large extent based on offices, right?
I mean, you look at someplace like Chicago, there's not a lot of manufacturing happening downtown.
Yeah.
There's, you know, a gazillion feet of office space.
What's going to happen to someplace like that?
Yeah, that's a very good question.
I don't know if I'm too worried about it, because eventually I would have moved out of Chicago anyway to a more rural area.
So it was, you know, Chicago's problem.
But that is a very significant question, what happens to all of this office space. I forget who it was, Anheuser-Busch or one of the beer companies.
They've gone back to three days in office, two days out. We'll never go back to being in the office 100 percent, but I don't think all-virtual works either.
It's a great option, but I don't think it's sustainable.
We need human interaction.
And the challenge is getting that human interaction virtualized.
And that's, yeah, I agree.
It's a real problem.
I've been here since 2004.
I've been doing work from home.
And it's great.
It works fine.
But every once in a while, I just want to meet people.
I want to go have breakfast with people or lunch or dinner or something like that.
That is a challenge.
I agree.
We should all get together in Austin or something.
Or maybe St. Louis would work.
I don't know.
Someplace.
I don't know about that.
Or maybe Chicago.
All right.
I'm a Cubs fan.
What can I say?
Chicago would work.
Chicago works for the three of us.
Yeah, yeah.
I know.
I know.
It would be nice to do this live.
Maybe next year's wrap-up will be live.
Oh, that would be so nice.
We'll be at re-invent, and we'll do a re-invent recap at the end of the year.
Maybe. Maybe.
All right. Well, listen, this has been great. Thanks a lot for being on my show, as always.
Of course. Thank you.
You know what? I thought we kind of brought down the energy level. We should have ended on Kubernetes. That way we could have ended on a high note.
I don't know if I could have handled that, Keith.
I mean, there was so much discussion there.
We could probably, yeah.
We'd probably have to take half the time on Kubernetes alone.
I think we did.
We did, but it was, yeah.
You're right.
All right.
That's it for now.
Bye, Matt.
Bye, Keith.
Bye, Ray.
Bye, Ray.
Until next time.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast, tell your friends about it.
Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out. Thank you.