Storage Developer Conference - #151: Redfish Ecosystem for Storage
Episode Date: August 17, 2021...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org slash podcasts. You are listening to SDC Podcast, Episode 151.
Hi, welcome to a session on the Redfish ecosystem for storage and HPE's perspective on the open standards in Redfish storage.
My name is Jeff Hilland. I'm president of the DMTF, as well as a distinguished technologist with HPE in the CTO's organization.
With me today is Scott Bunker, a server storage technologist with HPE.
Quick disclaimer before we go on, this information really is a snapshot of work in progress.
For the latest information, make sure you go out and check the respective websites.
Agenda today is a quick background on the DMTF, and then we're going to go into Redfish.
What is Redfish? Why do we do it?
A little bit about the general structure of basic Redfish,
and then we'll go into storage support for local storage,
and then how RDE supplies that information for local storage, as well as some changes in the latest release,
2020.3, to support NVMe and Swordfish, and then the fabric model changes that help assist in all of
that. And then Scott Bunker will go through HPE experience and direction as far as storage goes.
So quick background on the DMTF.
We're an industry standards organization led by industry-leading companies.
It's kind of a who's who of the IT infrastructure and cloud space.
We make a wide variety of standards, everything from virtualization,
cloud-based, network storage, servers,
a complete list you can find on our website.
We're nationally and internationally recognized by ANSI and ISO.
And we think our standards help enable the ecosystem to provide a more integrated and cost-effective approach to manageability through interoperable
solutions.
We're able to do simultaneous development of open source and open standards.
And we appreciate the support of the DMTF board,
which really is a tremendous group of people that are interested in seeing this
kind of work done in an interoperable and open way.
Quick plug for the rest of the work in the DMTF.
We've got a lot of standards besides Redfish.
SMBIOS is everywhere. It's been in every x86 machine shipped over the last 15 years. There's another group called PMCI that's handling the infrastructure inside the quote-unquote box, if you will, down at the very low level. Everything from FRU data to the Security Protocol and Data Model, which has just come out and is being leveraged by PCIe, OCP, JEDEC, HDBaseT, and others.
There's all kinds of information down there about sensors and firmware update and monitoring
and control.
So you can go check those out at the DMTF website as well.
And then historically, we had the Common Information Model and WBEM for doing systems management in the past.
So a quick background on what is Redfish.
It really represents industry standard software defined management
for converged hybrid IT and is defined by the DMTF. It's a
RESTful interface and we chose a JSON format in order to
make sure that we all had one format that was
interoperable. It's schema-backed but human-readable, with a payload usable by GUIs. That's because it's in JSON format. Most of your GUIs, at least when we started, are run like JavaScript applications pulling JSON data, and by making those web browser components and GUIs work with JSON, we were able to do it with the same API.
It's extensible, secure, and interoperable.
It's already an ISO standard, and we're updating that now.
And we've got a developer hub at redfish.dmtf.org where you can find more information,
whether you're a business leader or a developer.
We've got multiple tracks out there you can dive into
and learn more about it.
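To make that concrete, here is a minimal sketch, not from the talk itself, of what a Redfish client interaction looks like in Python. The BMC hostname, credentials, and printed values are hypothetical; the service-root URI and JSON shape follow the published Redfish schemas.

```python
# Minimal Redfish client sketch using Python's requests library.
# The host, credentials, and returned values below are hypothetical.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")          # hypothetical credentials

# Every Redfish service exposes a service root at /redfish/v1/.
resp = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False)
root = resp.json()

# The payload is plain, schema-backed JSON, so it is equally usable
# from a GUI (JavaScript) or a script like this one.
print(root.get("RedfishVersion"))      # e.g. "1.11.0"
print(root["Systems"]["@odata.id"])    # e.g. "/redfish/v1/Systems"
print(root["Chassis"]["@odata.id"])    # e.g. "/redfish/v1/Chassis"
print(root["Managers"]["@odata.id"])   # e.g. "/redfish/v1/Managers"
```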
We just celebrated our fifth anniversary with Redfish; the initial release was in August of 2015. And that initial release of Redfish was really aimed as a secure multi-node replacement for IPMI-over-LAN.
Since then, we've been doing releases about every four months, three to four times a year.
And we've added a huge amount of additional support beyond rack-mount, blade, and HPC servers.
The scope now covers storage and networking and fabrics and data center infrastructure, power and cooling and facilities and alarms and liquid cooling and all kinds of other stuff.
So it really has expanded the scope over the last five years to cover most of
the rest of it.
It's shipping on virtually every industry-standard server shipped today. And we think there's somewhere around 35 million nodes out there with it on them.
We've worked with a bunch of alliance partners to extend the scope of Redfish. You know, at DMTF we've got a bunch of people that are, you know, good at one particular part of the ecosystem, but we don't know everything.
We came up with a pretty good data model.
So we've worked with experts at SNIA to cover more advanced storage, and that's what Swordfish
is.
You'll hear more about that later on today.
We've worked with OCP and ASHRAE to cover facilities.
We've done some adaptation of the YANG models to cover Ethernet
switching. We've worked with Gen Z and others to cover fabrics, and we've got more partners to work
with as the fabric definition continues to expand in the industry. We've got the parts that I talked
about earlier with the PMCI work group, where we're bridging that gap between the northbound
interface, which would be Redfish that goes outside the box,
with all the interfaces inside the box to make sure that we've developed a tight ecosystem for getting that information out and making it cost effective for those that support our standards.
We're working on replacing the host interface.
IPMI KCS was the traditional host interface for servers.
Instead, we replaced it with basically a NIC,
because your tools that work
outside the box should work inside the box just as well. Why write the code twice? And then we've
added on profiles and test tools to do scope and integration performance. We've got vertical use
case checkers. We've got emulators. We've got schema converters. There's document generators.
There's a whole bunch more open source test tools that have been leveraged by various standards to help them extend the Redfish ecosystem with us.
One of the successes for Redfish was this initial resource map.
If you think about it in logical views, you've got a bunch of stuff on the left there, tasks, sessions, accounts, events, registries, all the things that an application or a client would need for overhead that's kind of applicable to anything,
whether you're building a fabric manager or a storage system or, you know, a server or an
aggregation of servers, a bunch of enclosures. Those kind of services are going to be needed
by anything. And then we broke up the way you represent a server into three logical components.
There's a collection of systems, a collection of chassis, and a collection of managers.
The managers are the things that manage the chassis and systems.
They kind of don't fall into those other categories.
And yet they represent either software, firmware, or actual chips down in the box that aren't part of the data plane or the sheet metal. They're part of the infrastructure, and you don't really want to represent those to everybody.
Then you've got the systems, which is there on top, and that's, you know, the processor, storage, NIC, data-plane view of the world. It's kind of a logical view, because if you've got a composable system, well, it may be in different parts of sheet metal, or a bladed enclosure, or a pizza box. You want that same code written for that system to work no matter how it's put together. But you still need to know how it's put together. You still need that sheet-metal representation to know power and thermal. And, you know, if I lost power, what does it affect? If I need to prepare for resiliency, what does that look like? And so the chassis model extends all of that to provide that level of functionality for clients.
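As a hedged illustration of that three-way split, the sketch below walks the Systems, Chassis, and Managers collections; the host, credentials, and the specific properties printed are assumptions for illustration.

```python
# Sketch: walking the three top-level collections described above.
# Host and credentials are hypothetical; member names vary by vendor.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False).json()

# Logical (data-plane) view: processors, memory, NICs, storage.
for member in get("/redfish/v1/Systems")["Members"]:
    system = get(member["@odata.id"])
    print("System:", system["Id"], system.get("PowerState"))

# Physical (sheet-metal) view: power, thermal, containment.
for member in get("/redfish/v1/Chassis")["Members"]:
    chassis = get(member["@odata.id"])
    print("Chassis:", chassis["Id"], chassis.get("ChassisType"))

# The managers (BMCs) that manage the systems and chassis.
for member in get("/redfish/v1/Managers")["Members"]:
    manager = get(member["@odata.id"])
    print("Manager:", manager["Id"], manager.get("ManagerType"))
```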
So I'm going to go quickly into Redfish storage. The storage model, also known as local storage or server storage or Redfish storage, is just what you get in the base server. It's really made of three components. There's the storage. That's the representation of the storage system. It's got drives, volumes, and storage controllers.
And your storage controller, at least for traditional storage, is really that set of
protocols used by the controller, the speed of the controller interfaces, manufacturer information,
that thing that controls access to and from the drives.
When you hear about Swordfish, you'll hear that that definition changed just a little bit.
So please make sure you attend those sessions to understand the distinction.
And then you've got drives and volumes.
And drive represents the physical media that holds the data. It's got manufacturer and part number and size and protocol and the blinky light and secure erase, those kinds of things that are around what used to be a spinning piece of hardware. And then you've got the logical construct used and presented to the actual workload that's running on the system. Right, that's your volume, that's your LUN, that's whatever name you use for it, that's what Redfish uses as a volume.
And that really does have the things that allow the client to access the system, encryption settings
and things like that. So a volume can span multiple drives or be part of a drive. And so there's that
logical aspect of it represented by volume. Here's what it looks like laid out under a computer system and under a chassis.
Typically, although it's not required to be, drives can be under a chassis.
You could just as well have them under a storage controller.
They can kind of live anywhere as far as URIs go.
So that gives flexibility to the people that are implementing our standards.
Typically, there is a storage element under a computer system, and that's a storage collection because you may have one or more storage subsystems in your computer system.
That storage subsystem is the one that contains the controller's information,
as well as any redundancy information, either at the controller level or the storage system level.
And then you've got the volumes under a volume collection directly off the storage.
And then the drives, you know, they're going to be in the chassis typically, although they could be hung off the storage directly if you did not want to represent a chassis. But if you want to contain and convey that information about power and cooling domains and what drives get affected by what temperature alerts,
then putting the drives in chassis is the way to go for that.
Note that volumes are in collections off the storage resource.
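A hedged sketch of traversing that layout follows, assuming a hypothetical system with Id "1"; the property names come from the Redfish Storage, Volume, and Drive schemas, while the host, credentials, and IDs are made up.

```python
# Sketch of the local-storage layout described above.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False).json()

# Storage collection off the computer system (one or more subsystems).
for member in get("/redfish/v1/Systems/1/Storage")["Members"]:
    storage = get(member["@odata.id"])

    # Controller information lives on the storage subsystem.
    for ctrl in storage.get("StorageControllers", []):
        print("Controller:", ctrl.get("Model"),
              ctrl.get("SupportedControllerProtocols"))

    # Volumes are a collection hung directly off the storage resource.
    for v in get(storage["Volumes"]["@odata.id"])["Members"]:
        vol = get(v["@odata.id"])
        print("Volume:", vol["Id"], vol.get("RAIDType"), vol.get("CapacityBytes"))

    # Drives are typically links into a chassis, where power/thermal live.
    for d in storage.get("Drives", []):
        drive = get(d["@odata.id"])
        print("Drive:", drive.get("SerialNumber"), drive.get("Protocol"),
              drive.get("IndicatorLED"))
```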
So now, how do you fill all that storage stuff out without creating a lockstep firmware dependency
between the management controller firmware and the storage firmware?
And that's what Redfish Device Enablement is: the PMCI work group worked hand in hand with the folks in Redfish to enable a server management controller to present all of that storage information without needing to change the firmware of the Redfish management controller, that BMC. So previously, all the firmware had to be in lockstep with whatever change went into the BMC. You change the storage firmware, you had to change the BMC. And that became
unsustainable because it really became an M times N problem. You know, the more controllers you have
and the more adapters you have and the more ecosystem you have, the harder and harder the
problem is either in a development or a support environment.
So what we did was we created an adapter piece of firmware
that's self-contained and self-describing,
and that includes value-add OEM properties,
and you can plug in, and it doesn't matter if it's a network adapter
or a storage adapter or any adapter, a GPU.
If it supports RDE and the management controller supports RDE,
then when the request comes in, the management controller just turns it from JSON into this RDE
binary encoded JSON, sends it down to the adapter. The adapter cracks that packet,
does whatever action it needs to, sends it back to the management controller. The management
controller just turns it back from binary to JSON and sends it on its way.
The management controller needs to do very little with that information.
And because of that, it really is just a broker.
And all of these individual components can really contain their own destiny and do whatever
they need to do to support the product in place.
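The broker role can be summarized in a few lines of Python. To be clear, the encode, decode, and transport functions below are placeholders, not the actual BEJ encoding or PLDM/MCTP transport defined in the RDE specification; the point is only to show how little the management controller itself has to understand.

```python
# Conceptual skeleton of the RDE "broker" flow described above.
# This is NOT the real BEJ encoding or PLDM transport, just an
# illustration of the pass-through role the BMC plays.
import json

def bej_encode(payload: dict) -> bytes:
    """Stand-in for the binary-encoded-JSON (BEJ) encoder."""
    return json.dumps(payload).encode()      # placeholder, not real BEJ

def bej_decode(blob: bytes) -> dict:
    """Stand-in for the BEJ decoder."""
    return json.loads(blob.decode())         # placeholder, not real BEJ

def send_to_adapter(blob: bytes) -> bytes:
    """Stand-in for the PLDM/MCTP exchange with the RDE-capable adapter.
    The adapter cracks the packet, performs the action, and answers."""
    return blob                              # echo, for illustration only

def handle_redfish_request(request_json: dict) -> dict:
    # The BMC does very little: encode, forward, decode, return.
    encoded = bej_encode(request_json)
    reply = send_to_adapter(encoded)
    return bej_decode(reply)

print(handle_redfish_request({"IndicatorLED": "Blinking"}))
```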
So now I'm going to go quickly over the changes in 2020.3 that we really put in place for NVMe.
So first, we have this new connection schema.
We added it to the Fabric model, and as we were going through Fabric,
we were looking for where do you put the type of – we had endpoints,
and that was what was originally in the model.
But how do you know which endpoints are really connected with each other?
And how do you know the access rights that that connection represents?
And so if it's read or write or whatever access, and we realized real quickly whether it was storage access to a volume or Gen Z access for IPC,
we needed a way to show that connection to convey the information about
the access rights. And so we added connection to the schema. We also broke out storage controller
as an individual resource. Traditionally, it was an array inside the storage object.
And what we realized is, when things like HBAs were around, that worked really well because it was one storage controller
per storage ecosystem.
But when we started to look at NVMe and NVMe-OF,
oh man, now the controllers are created
and retired as host connect and disconnect
from NVMe targets.
And that array would expand and contract
with different elements inside of it
potentially changing over time.
And that's really better off represented by a collection of storage controllers.
And so we made that change as well.
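For illustration, here is roughly what those two 2020.3 additions look like as resources; the URIs and identifiers are hypothetical, and the property names follow the published Connection and StorageController schemas.

```python
# Illustrative shapes (not captures from a real system) of the new
# Connection resource and the broken-out controller collection.
connection_example = {
    "@odata.id": "/redfish/v1/Fabrics/NVMeoF/Connections/1",
    "ConnectionType": "Storage",
    "VolumeInfo": [
        {
            # The access rights this connection conveys.
            "AccessCapabilities": ["Read", "Write"],
            "Volume": {"@odata.id": "/redfish/v1/Systems/1/Storage/1/Volumes/1"},
        }
    ],
    "Links": {
        "InitiatorEndpoints": [
            {"@odata.id": "/redfish/v1/Fabrics/NVMeoF/Endpoints/Host1"}
        ],
        "TargetEndpoints": [
            {"@odata.id": "/redfish/v1/Fabrics/NVMeoF/Endpoints/Subsys1"}
        ],
    },
}

# Storage controllers moved from a fixed array inside Storage to their
# own collection, so NVMe-oF controllers can come and go over time.
controllers_collection = {
    "@odata.id": "/redfish/v1/Systems/1/Storage/1/Controllers",
    "Members": [
        {"@odata.id": "/redfish/v1/Systems/1/Storage/1/Controllers/NVMeIOC1"}
    ],
    "Members@odata.count": 1,
}
```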
We added a bunch of other stuff too that I'm not really going to go into.
You can go out on the DMTF website and take a look at them.
We added endpoint groups to Fabric.
That was really something that turned out to be handy that Swordfish did for mapping and masking
and was
really more scalable as you got a connection. You can do a connection between endpoint groups,
and that way your members of your group can come and go, and you don't have to go create, you know,
a whole bunch of collections, so it's much more scalable. Storage is also off the service root.
If I'm doing a JBOF, I don't have a computer system, so how do I represent my storage enclosure?
And so we decided to hang storage off the service root.
We added indicator LED on drive because drives have blinky lights too.
We added Ethernet for fabric, and that was in the works before we started NVMe-oF,
but it turned out to be a pretty handy thing to have in place.
Port. So everything in the ecosystem, switches and everything else, used a class called Port, except for our advanced controllers that were NICs: network device function, network interface, network adapter. They used something called NetworkPort that had almost the same properties. And it started to become an interoperability issue where some properties were on Port and they were named different things on NetworkPort. And we decided, you know what, let's just make everything Port. And then we had added InfiniBand to the advanced NIC model.
One last change we did in 2020.3 was we extended the Redfish fabric model. We wanted the same fabric model whether it was PCIe, SAS, Gen Z, CXL, Ethernet, or InfiniBand. We didn't care, and they all kind of are represented the same way, or can be,
and that's a really simple representation with a fabric hanging off of the root that has switches
and switches have ports and fabrics can have endpoints
and endpoints are mapped to ports. And then you've got zones where I've got a collection of endpoints
that are allowed to talk to each other. And so zones kind of represent routes and endpoints
represent the logical endpoints, not necessarily where the cable ends, more like where the protocol
stack ends, right? Because, you know, it's one thing to go port to port to port,
but I also need to know where I'm ending up inside of my infrastructure,
logical infrastructure.
You know, am I ending up in a VM?
And so when you start looking at the way things are really managed on fabrics,
they kind of go up to an L2 to L4 layer.
And so we needed a fabric model that did the same thing.
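A hedged sketch of walking that common fabric model is below; the host, credentials, and fabric name are assumptions, and the properties shown come from the standard Fabric, Switch, Endpoint, and Zone schemas.

```python
# Sketch: switches with ports, endpoints mapped to ports, and zones
# grouping the endpoints that are allowed to talk to each other.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False).json()

fabric = get("/redfish/v1/Fabrics/Fabric1")
print("Fabric type:", fabric.get("FabricType"))   # e.g. "PCIe", "NVMeOverFabrics"

# Switches and their ports (the physical connectivity).
for s in get(fabric["Switches"]["@odata.id"])["Members"]:
    switch = get(s["@odata.id"])
    ports = get(switch["Ports"]["@odata.id"])["Members"]
    print("Switch", switch["Id"], "has", len(ports), "ports")

# Endpoints: where the protocol stack terminates, not just where cables end.
for e in get(fabric["Endpoints"]["@odata.id"])["Members"]:
    endpoint = get(e["@odata.id"])
    print("Endpoint", endpoint["Id"], endpoint.get("EndpointProtocol"))

# Zones: collections of endpoints allowed to talk to each other (routes).
for z in get(fabric["Zones"]["@odata.id"])["Members"]:
    zone = get(z["@odata.id"])
    members = zone.get("Links", {}).get("Endpoints", [])
    print("Zone", zone["Id"], "endpoints:", [m["@odata.id"] for m in members])
```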
Quick side plug: we're working with the OpenFabrics Alliance on an open fabric manager framework to use the Redfish common fabric model as the northbound interface for that fabric management framework. And so if you have any interest in anything like that, go ahead and take a look at the OpenFabrics Alliance. That work is just now kicking off. And they're totally open, so it's a really interesting way to run a standard.
So we did a few additions to the fabric model. We added the endpoint groups that I talked about
earlier, and that really is just a collection of endpoints. It's got a few properties
with respect to like mapping and masking, and we're adding some for memory access as well.
And really, it's a much better way. If I have a connection to an individual endpoint, every time I want to add something to that access, I've got to create yet another connection and another connection. And really, that's not the way things are really managed, especially not in Swordfish. They're managed through endpoint groups.
So we added that as well.
And then we added this thing called address pools.
And that was done for a different reason.
That was really to show address allocation, subnet, everything, every one of these endpoints
in this address pool or everything in the zone, I want it to have this kind of an IP
address or this kind of a Gen Z address or, you know, here's how I'd manage my subnets or other addressing groups.
And so that's all represented with address pool.
So we added the address pools, connections, and endpoint groups.
And endpoint groups was donated from SNIA, again,
to address scalability and mapping and masking for storage.
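As a rough illustration, hypothetical endpoint-group, connection, and address-pool payloads might look like the following; the identifiers are invented, and the exact address-pool sub-properties depend on the fabric type.

```python
# Illustrative shapes (hypothetical identifiers, not real captures).
endpoint_group_example = {
    "@odata.id": "/redfish/v1/Fabrics/Fabric1/EndpointGroups/Hosts",
    "GroupType": "Initiator",
    "Links": {
        "Endpoints": [
            {"@odata.id": "/redfish/v1/Fabrics/Fabric1/Endpoints/Host1"},
            {"@odata.id": "/redfish/v1/Fabrics/Fabric1/Endpoints/Host2"},
        ]
    },
}

# A single connection between groups scales better than one connection
# per endpoint pair: group members can come and go without new connections.
connection_between_groups = {
    "@odata.id": "/redfish/v1/Fabrics/Fabric1/Connections/HostsToTargets",
    "ConnectionType": "Storage",
    "Links": {
        "InitiatorEndpointGroups": [
            {"@odata.id": "/redfish/v1/Fabrics/Fabric1/EndpointGroups/Hosts"}
        ],
        "TargetEndpointGroups": [
            {"@odata.id": "/redfish/v1/Fabrics/Fabric1/EndpointGroups/Targets"}
        ],
    },
}

# Address pools convey how addressing (subnets, IDs) is allocated to a
# zone or set of endpoints; the fabric-specific details are omitted here.
address_pool_example = {
    "@odata.id": "/redfish/v1/Fabrics/Fabric1/AddressPools/Pool1",
    "Description": "Addressing for the storage zone",
}
```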
If you have any need for more information, go ahead and check out the Redfish Developer Hub at redfish.dmtf.org. That's where all the schemas and the schema index are. There's documents out there: the schema guide, the schema supplement, as well as the specification, pointers to the GitHub for all the open source tools, message registries, privilege registries, and other documentation. We've revamped
the website lately so that when you get there, if you're a CXO and looking for high level
information, that's easier to find. If you're a developer and you want a deep dive, you can do
that too. We've got multiple mock-ups of different kinds of systems out there. More are being added
all the time. And then we've got a user forum that you can connect to. There's both a Redfish forum and a Swordfish forum that you can get to where if you are not engaged with the standards, or even if you are, put your questions out there.
We address them every week.
We give them the highest priority.
And we're really, really pretty happy to have that communication with our user community.
And there's white papers and presentations and YouTube videos and much more information.
So that's the next logical step to go for more information.
So in summary, Redfish, along with the DMTF working groups and our alliance partners,
really is working to define interoperable software-defined hybrid IT management.
Whether it's servers, storage, networking, power, cooling, facilities, buildings,
it's all being driven around that model.
We're really solving the problems for modern composition for resource managers.
I didn't even go into the composability model.
And then we're plumbing the system inside the box to get all that information,
as well as enabling through SPDM and other protocols,
a zero trust model within the platform.
That does it for my portion of the talk. Next up, I'd like to
introduce Scott Bunker, HPE Technologist. Hey, thank you, Jeff. Welcome, folks. Thanks again for
watching this talk today. Jeff, that's an impressive amount of work going on in the DMTF work groups.
I can speak for Hewlett Packard Enterprise and our customers: we are absolutely delighted with the work and the effort that the community has put into not only the Redfish subgroup, but also the PMCI subgroup.
They value the contribution in terms of ease of use for the Redfish data model, for the commonality. We see a growing number of
not only customers and partners, third-party integrators using Redfish to manage
systems in their data center. So, you know, we at Hewlett Packard Enterprise are, you know,
fully behind the DMTF efforts around Redfish.
And what I'd like to do today is share with you the journey that we've been on and kind of give you our personal corporate perspective on why we see the value and advantages for
adopting these standards.
Okay, so to share with you the perspective that we have, we have to start from the beginning.
When I started in the computer industry many years ago, there weren't a whole lot of platform vendors out there.
We would have ideas around management of the options that plug into our servers.
And a lot of times the ideas around management come in from the BMC team and we try to figure out a way to harvest that through our suppliers.
I definitely want to do things like thermal monitoring, but also produce alerts.
And so over time, the number of ideas around management have grown in the industry.
So our original thought was we would create platform specs.
We would create a non-disclosure agreement.
We would share them with suppliers.
Sometimes it could involve NRE payments with the supplier in order to implement a spec that we created.
And it scaled very well for us in the original days across multiple,
not only suppliers, but technologies in the industry.
So Jeff, you can advance to the next slide.
Over time, we have noticed that the number of platform vendors that are out there has also grown phenomenally, and we observed what this has caused for our suppliers. And so the downside is, if every platform provider like Hewlett Packard Enterprise behaved this way, we would overtax the resources that our suppliers have in order to develop a quality product that satisfies the entire industry. So, the next slide. So, you know, the natural thinking is, well,
why not have the supplier own that technology and create their own spec and send it to the platforms under NDA.
And a lot of times this could be a spec that defines an API between, like, a BMC and an option card,
or it could be a number of other things.
But the thinking here is if the supplier provided the spec and the technology,
we would solve the problem with the supplier scaling across multiple platform vendors,
and we could accommodate the interface in that it would scale well with our suppliers.
Next slide.
But the problem that that creates is just like how we saw earlier with platform proprietary specs being sent to the supplier,
the same holds true for supplier proprietary specs being sent to the platform.
We at Hewlett Packard Enterprise have lots of suppliers that span across multiple technologies, technologies from RAID controllers to networking adapters to fiber channel adapters and GPUs,
accelerators. There's lots of technologies scaled almost as a matrix across multiple suppliers.
And so if we got into this model of receiving a custom spec that is radically different from supplier to supplier,
we would have to have a very large BMC development team in order to make sure that we keep a quality,
you know, interface. And the end result, from the customer experience, is they would see supplier-based nuances, you know, in feature sets coming through our platform.
We don't think that this is necessarily scalable or sustainable for the industry.
Next slide.
What we see as a way to move forward as an industry is working together.
Hewlett Packard Enterprise works well with our competitors, with our suppliers.
We all work together within the DMTF forum in order to create open standards.
The open standards, both out of the Redfish working group and the PMCI working group, scale well. We kind of get out of the model of either party creating a proprietary spec and going around knocking on doors to find developers that could develop to those specs, which doesn't scale well. If we all support the open standard,
then we all benefit together. If the platform in purple supports an open standard,
then all the suppliers that support that open standard get natural support
through the open standard and they don't have to necessarily ask the platform vendor to
connect to the standard in order for their value to flow through.
Same is true for a supplier. It doesn't matter if it's an old supplier, a new supplier that's
emerging. If they develop their product, their option card, if you will, to the open standard,
then they get to scale across all the platforms that support that open standard. So it really is
a time point where the industry as a whole can come together
and develop these ideas together.
And we see our customers benefiting
because the outcome of this is standardization.
They see less differences from platform to platform.
And so it makes our customers' lives easier.
As we see the world tilting into open, a lot of option cards are going to be built based upon OCP standards. We're seeing that already with OCP; those cards that are, you know, designed toward a standard can't have any sort of proprietary interface in them. You know, they would design toward an open standard.
And then similarly, platforms built upon open standards like OpenBMC need to be looking for open standards in order to integrate.
So DMTF is kind of, you know, playing in the middle
to, you know, bridge these two worlds together
and provide a standard by which open option cards developed through OCP could work with open
platforms developed with OpenBMC in mind. Jeff, next slide. So, to give an example of, you know, the types of standards that play out in the DMTF that HPE and our customers find value in, that kind of fit this model of collaboration and work within the industry.
We talked about the two working groups within DMTF.
We've got the Redfish working group, which I think recently hit its five-year anniversary, matured its model.
You know, we've seen a lot of adoption with that, and it supports our HP platforms very well. On the bottom side
of the slide in blue you see a lot of the efforts coming out of the PMCI
working group. This is a very active working group. There's, you know, not only a transport layer that can be layered on top of multiple physical media like SMBus or PCIe VDM, but there's also these data models that have proven to be very useful, and they work.
I'll kind of highlight a few of them.
Hewlett Packard Enterprise manages our NVMe drives, along with everybody else, using NVMe-MI.
That is a way in which we do health monitoring.
We inventory the drive.
We can create Redfish resources using the NVMe-MI model.
And, you know, it also would work well, you know, for flashing drives,
flashing drive firmware.
But this is really meant for direct-attached NVMe drives.
As we shift to the right, you see the PLDM,
all the various PLDM models.
So to walk through that, you know,
Hewlett Packard Enterprise has found value in the PLDM type 2,
which is otherwise known as DSP0248.
That model is very useful.
It's a sensor model that we use for collecting temperatures
of the option cards of drives
or even sub-enclosures that are inside of our servers.
PLDM Type 2 is a great industry standard model for collecting those sensors within the server
and doing something with them for fan control or feeding into various layers of our BMC
for health monitoring.
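Those PLDM Type 2 readings stay inside the box, but the temperatures they collect typically end up surfacing through the Redfish thermal resources on the chassis. A minimal sketch of reading them from outside, assuming a hypothetical host, credentials, and chassis ID:

```python
# Sketch: reading the thermal data that sensor models like PLDM Type 2
# ultimately feed. Host, credentials, and the chassis ID "1" are hypothetical.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

thermal = requests.get(f"{BASE}/redfish/v1/Chassis/1/Thermal",
                       auth=AUTH, verify=False).json()

for temp in thermal.get("Temperatures", []):
    print(temp.get("Name"), temp.get("ReadingCelsius"), "C")

for fan in thermal.get("Fans", []):
    print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
```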
As we shift over to the right, PLDM type 5, otherwise known as DSP0267,
is a great standard for flashing the firmware on the option cards.
Recently, there is a version 1.1 of that spec that allows us to flash firmware of downstream devices
like drive firmware, enclosure firmware. SNIA created a device called UBM, the Universal Backplane Manager.
HP has fully adopted UBM as our standard for our backplanes, and we could use the PLDM Type 5
model for flashing those UBM devices. The PLDM Type 5 model for flashing works well
with the Redfish firmware update service, so we see a natural connection between the Redfish
firmware update service that the Redfish team is working on with the PLDM Type 5 model that the
PMCI working group is working on.
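As a hedged illustration of that connection, kicking off an update through the Redfish update service might look like the sketch below; the host, credentials, image location, and target URI are hypothetical, and the PLDM Type 5 transfer to the card or its downstream devices would happen behind this interface.

```python
# Sketch: initiating a firmware update via the Redfish update service.
# Host, credentials, image location, and target URI are hypothetical;
# behind this call the BMC would use PLDM Type 5 (DSP0267) to move the
# image to the option card or, with v1.1, to downstream devices like
# UBM backplanes.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

payload = {
    "ImageURI": "http://repo.example.com/firmware/controller_fw.bin",
    "TransferProtocol": "HTTP",
    # Optional: limit the update to a specific component.
    "Targets": ["/redfish/v1/UpdateService/FirmwareInventory/RAIDController1"],
}

resp = requests.post(
    f"{BASE}/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate",
    json=payload, auth=AUTH, verify=False,
)
print(resp.status_code)   # typically 202 with a task monitor to poll
```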
As we shift over to the right, Jeff mentioned PLDM Type 6, this notion where option cards in general, no matter if you're a NIC, a fiber channel adapter,
or array controller, or a GPU, you could host your own resources and unlock your own value.
You know, so Redfish is, you know, and we call it an API, but it's a universe of things.
It not only supports resources so that you can take a look at your server and look at the physical inventory as Jeff talked about earlier,
but you can configure your server using common APIs that are well-defined.
It also supports alerting, this notion that you can have a registry.
Recently, the Redfish working group hosted a network device registry and a storage device registry that provide the common set of alerts that can be pushed through the Redfish subscription service, alerts that are pushed from the option card through PLDM Type 6, known as Redfish device enablement.
And then there's also metrics.
So Redfish has a telemetry service,
which works well for hosting metric reports and metric definitions
and metric report definitions and trigger functions.
And so Redfish device enablement is a great way for an option card to host metrics that can be reported through the Redfish telemetry service.
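A minimal sketch of pulling those metric reports back out through the telemetry service, with a hypothetical host and credentials:

```python
# Sketch: reading metric reports from the Redfish telemetry service.
# Host, credentials, and report contents are hypothetical.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

def get(path):
    return requests.get(BASE + path, auth=AUTH, verify=False).json()

telemetry = get("/redfish/v1/TelemetryService")

# Metric report definitions describe what gets collected and how often;
# metric reports hold the collected values.
for m in get(telemetry["MetricReports"]["@odata.id"])["Members"]:
    report = get(m["@odata.id"])
    for value in report.get("MetricValues", []):
        print(report["Id"], value.get("MetricProperty"), value.get("MetricValue"))
```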
And finally, there's SPDM, which offers security for option cards that plug into our servers.
The SPDM spec has tackled some complex things in the industry, such as how to perform authentication of an option card,
how to do certificate management so you can prove the identity of the option card,
and attestation so you can determine if anything has changed in the option card since it was
manufactured or last checked. There are further improvements to SPDM that HPE is valuing,
which is this idea that the BMC and the option card can pass session keys in
order to create a secure transport using MCTP between the BMC and the option card. I can tell
you that each one of these PLDM standards and SPDM standards that you see below work. The DMTF
is producing quality specs that we have actually implemented on our systems.
We fundamentally require them across all of our suppliers, and we see it as a way to unlock the supplier value so that the suppliers can now step up their game in competing. They could host Redfish configuration. They could offer new services,
have it piped right through the BMC. Our BMC here at HPE is called ILO. They can provide their
services right through our BMC interface without our ILO development team having to do additional work to support that.
Because as Jeff said, the Redfish device enablement model is transparent, which means a supplier
can transparently provide their value without us having to keep the BMC, which is our ILO,
kept in lockstep with the firmware version of the option card. Next slide, Jeff.
So to kind of share with you a picture of how this PLDM type 6 with Redfish device
enablement plugs into the ILO Redfish management hierarchy. You can see that there is a storage instance
that plugs into the existing storage collection
for our platform.
Everything in gray here is hosted by the option card.
So this is all part of the Redfish device enablement.
These are PDRs that are,
each one has an RDE dictionary, if you will, and it's a way to just plug in to ILO. And then, you know, all the properties that are inside each one of these resources, with verbs such as GET, POST, PATCH, and DELETE, transparently flow through our ILO and interact directly with the option card. So for storage, there is a drive,
there's volume collection of volumes
where you can create volume elements
and you can interact with it
to flex your storage over time.
The storage controller collection,
as Jeff mentioned, is now brought out separately
as a separate resource.
There's the port collection.
And the way that we model the backplanes
that these option cards are attached to,
the UBM backplanes, if you will,
is by modeling the backplane as a chassis
and plugging that into the chassis collection
and modeling the cabling between the storage controller
and the backplane as a fabric
and allowing the RDE device to plug that into our fabric collection.
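To illustrate the kind of interaction that flows through transparently, here is a hedged sketch of creating a volume by POSTing to the volume collection the option card hosts; the storage ID, drive URIs, and property values are hypothetical, and the request shape follows the Redfish/Swordfish Volume schema.

```python
# Sketch: creating a volume through the RDE-hosted volume collection.
# The BMC simply brokers the request down to the controller via RDE.
# URI, drive links, and property values below are hypothetical.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

new_volume = {
    "DisplayName": "DataVolume1",
    "RAIDType": "RAID1",
    "CapacityBytes": 500 * 1024**3,
    "Links": {
        "Drives": [
            {"@odata.id": "/redfish/v1/Chassis/Backplane1/Drives/0"},
            {"@odata.id": "/redfish/v1/Chassis/Backplane1/Drives/1"},
        ]
    },
}

resp = requests.post(
    f"{BASE}/redfish/v1/Systems/1/Storage/DE00A000/Volumes",
    json=new_volume, auth=AUTH, verify=False,
)
print(resp.status_code, resp.headers.get("Location"))
```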
So we have tested out this technology.
It works.
It didn't seem to take a tremendous amount of effort,
but there's a lot of great tools out on DMTF that help with creating the dictionaries, with assisting both our internal ILO team and
the suppliers to develop. The beauty of this was we developed the interface within our ILO code
base to support NICs and fiber channel and local storage and several other types of options.
And by developing that once, we now see the benefit of scaling. So, you know, suddenly all of our suppliers are able to plug in to Redfish device enablement and it naturally works, and we see it scaling across not only the suppliers but also the technology types. And what's not shown here is the other elements, such as metrics and alerting, which are also supported through the RDE specification and also work equally well. Next slide.
So this is, you know, the Hewlett Packard Enterprise vision of what's next. You know, we look at the needs of our customers and we want to work within the community. We've already completed the initial storage device message registry.
That is a great specification
that helps us to generate industry standard storage alerts.
This is going to work well for local RAID controllers, for locally attached NVMe drives.
It's got all of the basics that are needed in order to send an alert about storage.
There are enhancements that we could do as a community to that storage device message registry.
And HPE welcomes the community working together to create alerts that our customers
mutually want. Hot spare management. We need the ability to, you know, today we have the ability
to create a spare drive and map it to a volume, but we really need a more dynamic way to add and
delete spares that are already assigned to a volume so that we can do spare management after
the server's been initially deployed. We also see a need to be able to decommission our servers.
Our customers, you know, not only want to deploy their servers,
but oftentimes when they reach end of life, they want to decommission the server.
And so we want to look at all of the elements that are within Redfish
and make sure there's some kind of action or command that could tell the product to erase all the
data and go back to factory defaults, if you will, to facilitate providing the customers
with that security that all of their configuration details and personal information has been
removed from the platform.
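Today Redfish already defines a per-drive secure-erase action; the broader, whole-system decommission described here is still on the wish list. A minimal sketch of the existing action, with a hypothetical host and drive URI:

```python
# Sketch: invoking the existing per-drive secure erase action.
# Host, credentials, and the drive URI are hypothetical.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

drive_uri = "/redfish/v1/Chassis/Backplane1/Drives/0"
resp = requests.post(
    f"{BASE}{drive_uri}/Actions/Drive.SecureErase",
    json={}, auth=AUTH, verify=False,
)
print(resp.status_code)   # 202 Accepted with a task is a typical response
```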
We also see the need to enhance SED encryption within the storage model.
Today, there's already a lot of great properties and attributes within Redfish storage,
but we need to do some enhancements such that we could specify remote SED support.
In addition to the encryption password, we need to be able to provide attributes like the password hint that goes along with that.
And so HPE would like to work with the community to enhance SED encryption.
And finally, we see the need to work together to provide storage device metrics.
For local storage, a lot of these metrics are about volumes or drives. For drives, we want them to be fundamentally rooted in some of the log pages that we see naturally coming out of SAS drive technology or ATA drive or NVMe drive technology, where the drive supplier has its own metrics. We'd like to work together to create storage device metrics where we could then transparently provide those metrics to the customer, again using Redfish device enablement, and ultimately map to the Redfish telemetry service for delivering those metrics to external clients.
So that's a quick perspective on Hewlett Packard Enterprise and how we have seen value and made great use out of the efforts of multiple DMTF work groups.
And I'd like to turn it back over to Jeff to close this out.
Thanks, Scott.
Thanks, everybody, for watching.
I'd like to point you to a few other resources.
You can head to snia.org slash swordfish to get the latest on the Swordfish standards.
There's the Redfish Swordfish Forum out there.
If you have any questions, then, of course, you can always join and help drive the standard
through the SSM TWG in SNIA.
And there's a lot of other things that SNEA offers as well.
I'm sure you'll see those in other sessions.
So thanks again for watching and hope you enjoy the conference.
Thanks for listening. Here you can ask questions and discuss this topic further with your peers in the Storage Developer community.
For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.