Storage Developer Conference - #93: Redfish Ecosystem Update
Episode Date: April 30, 2019...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcasts. You are listening to SDC Podcast Episode 93.
Welcome to SDC 18. My name is Jeff Hilland. I'm one of the people that brought Redfish to the DMTF.
I'm also the president and have chaired a whole bunch of stuff in it.
How many of y'all know anything about Redfish or Swordfish?
All right.
So we may go through a little bit, some of this pretty quick.
Yeah, some of you I know.
So just real quick, some of the stuff I'm showing
particularly at the end
the spec hasn't been fully released yet
so some of this information will change
probably not profoundly
although that's always possible
you never know
but just keep in mind
that the latest information
is always going to be on the DMTF website
when it comes to Redfish.
So when you look at what we were trying to do when we first started Redfish: we had a bunch of old silicon,
really 8-bit micro stuff that was, you know, running IPMI and real good at doing bitwise interfaces.
But everybody out there in the industry was seeing the need for something a little more
modern, a little more object-oriented.
We tried SMASH, which was sort of like SMI-S. We used WBEM and WS-Man and the whole CIM infrastructure model, and we did all these profiles and all this heavyweight stuff.
What it required was you had to build a whole bunch of layers. You had to build a protocol stack, and then you had to build something to eat the data model, and then you had to build something on the other side of the wire to provide all that information. It was a heavy lift, and one vendor did it. So IPMI was still king, and we kind of failed. And so we started looking around, and I was in a TC meeting. We were talking all this, and it kind of hit me that, you know, we're making everybody do a heavy lift.
What are all the cool kids doing, and why are they doing it?
And, well, they're glomming on to HTTP and HTTPS.
Why?
Well, it's secure.
It's good enough for your bank account.
Unlike IPMI, which invented its own security model, which is antiquated, wouldn't work, wouldn't scale anymore, and certainly wouldn't pass modern crypto.
So, all right, let's pick what
they're using, you know, and what is everybody
else using? So you start looking at the programming
languages and how you fit a data model to them. It's like, well, let's find a schema
language that fits that programming language,
you know, that everybody's using. And so we picked something where you could do JSON representation,
because if you start looking at the curves, JSON was on its way up, and all the SOAP-based and XML
and everything else was on its way down. So let's pick something modern that just plugs right into
the infrastructure where libraries are already created, and you don't really have to do a whole lot.
Microsoft convinced us to do OData, OData v4.
You'll see that the vestiges of OData are becoming less and less important in Redfish.
If you ever saw OData.context, that was really only used for programming languages.
The Olingo library would grab the right part of the schema data model and go grab it.
And it turns out we pared down so much of the OData stuff that nobody could even use the Olingo library and eat Redfish data anymore.
So, gee, if you've got to have your own version of a standard library, let's just get rid of the rest.
And so we're kind of going the way of Swagger now. And so you'll find, I guess it was released last week, Swagger definitions, or sorry, OpenAPI, as it's named now. You'll find libraries out there that are all OpenAPI-based.
So, yes, we started with CSDL and JSON schema.
JSON schema was kind of a funny one. It was never really a solid standard; draft 4 became a de facto standard, and a lot of people started using that, and so that's what we picked up. But we did need a schema language, and we knew CIM wasn't it. So we started with CSDL and JSON schema, and we're working our way over to YAML and those OpenAPI versions. Because honestly, it's all about the
payload.
You should be able to read the bits on the wire and not have to go look it up in a schema language.
And that was something fairly new for the DMTF to go off and do.
Versus all the rest of the CIM stuff, you kind of had to know, oh, what's a value, what's a value map?
Because I'd get a two.
Well, this was a value map.
What did two mean?
Well, two meant on. Okay, great. Why don't we just put "On" in there? And why don't we make the property something readable, like PowerState? You know, so you don't have to figure out, oh, look, I'm in a
computer system, the power is on. So, you know, trying to deliver that customer satisfaction of
the poor guy in the IT trench who, you know, may or may not have gotten a two-year degree
trying to figure out why his boss is on his neck
because half the data center is down.
So we're just trying to make that easier
and yet stick to the modern tool chain.
So if you wanted to build something programmable out of it,
some programmable interface, some tool, upper-level thing,
you could do it and have a schema-backed experience and do whiz-bang, policy-based kind of stuff.
So try and hit two audiences at once.
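To make that concrete, here is a minimal sketch of what reading that value looks like in practice. The host, credentials, and system ID are placeholders, not anything from the talk; it just shows that the payload is readable on the wire without a value-map lookup.

```python
import requests

BASE = "https://bmc.example.com"   # placeholder BMC address

resp = requests.get(f"{BASE}/redfish/v1/Systems/1",
                    auth=("admin", "password"),
                    verify=False)  # lab only; use real certificates in production
system = resp.json()

# The value is readable right on the wire: no ValueMap lookup required.
# A CIM-style interface would have handed back something like "2" and left
# you to resolve it against the schema's value map.
print(system.get("PowerState"))    # e.g. "On"
```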
If you would, save the questions to the end because the thing is being recorded.
So for posterity, let's kind of keep this going.
We also created some new modeling tenets. And Joe White from Dell coined the phrases afterwards, after we'd already done them. It was like, look, one of the pet peeves is an include of an include of an include of an include.
So that if you have to go trace a definition down, you're running through four or five files.
And we didn't want anybody to do that.
It needs to be in the schema file that I look at.
And so what we invented was something called inheritance by copy.
Oh, if it's somewhere else, we're going to copy it and put it over here.
Well, as a programmer, they're just like, gee, now that puts me in the maintenance mode
of a standards writer, right?
I've got to maintain those and keep them locked, synced forever.
Okay, our pain, we'll do that. We'll make sure we do that. We'll put programming tools in place to make sure we do that, just to make everybody else's life easier. And then the other
one was polymorphism by union, which is a kind of way, kind of a nice way of saying, you know,
it's not really polymorphism at all. It's one object, not an object that it inherits. It's a
bunch of stuff. So like a computer system is of type physical or virtual
or composed or
some things may be in it.
The fabric model is a good example of this.
If you look at endpoint, well is my endpoint a
PCI thing? It's going to have a whole set
of different things than if it's an
InfiniBand object or a Gen Z object or
whatever. So the properties are going to be different based on that, but let's just have one thing. Let's not create four or five different ones of them. Let's just put them all in one object and only use the ones we care about. And so you'll see a lot of data patterns in Redfish that are the same.
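A rough sketch of that "polymorphism by union" pattern, as two instances of one object type where each fills in only the properties that apply. The property names here are illustrative, not the literal Redfish Endpoint schema.

```python
# One object type carries the superset of properties; each instance
# populates only the subset that applies. Names are illustrative.
pcie_endpoint = {
    "Id": "Endpoint1",
    "EndpointProtocol": "PCIe",
    "PciId": {"VendorId": "0x1234", "DeviceId": "0xABCD"},
    # InfiniBand- or Gen-Z-specific properties simply stay absent here.
}

genz_endpoint = {
    "Id": "Endpoint2",
    "EndpointProtocol": "GenZ",
    "GCID": "0x0042",
    # No PCIe properties on this instance; same schema, different subset.
}
```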
So really, what is Redfish? It's industry-standard-defined management for converged, hybrid IT: HTTPS and JSON format, based on OData v4. It's schema-backed but human-readable.
Usable for apps, GUI.
In fact, when we kind of did this,
everybody makes web servers, right?
In your BMC, you got a little web server
engine, and you got one object for
your user interface and another one for IPMI.
It's like, no, we want the GUI to use
the Redfish objects.
So let's make it JSON because everybody's making their
web GUIs out of JavaScript.
Version 1 was focused on servers.
The version submitted to the DMTF
had a whole lot more in it.
We thought out the data model pretty good.
And that's one of the reasons
it's broadly applicable
because of kind of everybody
gets their own sandbox
because of the way we did it.
But we pared version one back to roughly IPMI-over-LAN functionality,
but it's grown a whole lot since then.
So we kind of tested out that, look,
you can represent racks and blades and standalone equipment
and all kinds of future equipment with it.
We played with it a whole bunch before we took it in.
And then we knew we wanted to meet OCP's requirements
because they were an increasing market share.
And then expand that scope to the rest of IT over time, right?
Additional features are coming out about every four months or so.
We get them out on kind of a release-train model. We're doing all our development in GitHub, so if a feature's not ready to go, we just don't merge the pull request into the source tree.
We're working with SNIA to cover more on Swordfish.
You'll hear about that later.
We're working with the Green Grid and ASHRAE.
ASHRAE is the people that do air conditioning and heating in buildings.
So they're looking at managing real power and cooling, CRAC units and all that, with this.
We're working with the IETF and some other standards organizations
to cover some level of Ethernet switching.
That one's becoming more and more complicated as things go along.
We've got a pretty good group of people.
I won't concentrate on this too much.
It's pretty much if there's people building equipment at the device level
or the system level or the integration level,
that slide wasn't supposed to animate, but that's okay.
Really, it's the other standards bodies that we're working with.
One of the things that DMTF has is an alliance partner relationship with a whole bunch of companies.
And people started coming out of the woodwork to work with Redfish and adapt it to their environment.
And the reason is pretty straightforward, right?
Look at it.
Industry standard servers are kind of a commodity thing, right?
You've got a processor and chipset and all this and maybe a management processor, some kind of micro to turn the whole thing on, and management I2C buses and all that kind of thing.
And people are getting pretty creative about what you can do with that kind of thing, right?
I can build a switch out of it.
I can build a storage box out of it.
I can build a this box out of it or an IoT thing, industrial IoT thing, whatever.
Well, what they don't do well is firmware management
and power and cooling and what's my system,
what's my memory, what's my load, what's my, you know.
What they do do well is I do switches.
I represent Ethernet.
Or I do storage as a service, network-attached storage.
So we kind of did all the rest of that infrastructure stuff
and everybody else just creates their own sandbox.
And that's why we've started working with so many different people.
And we'll show up to your meetings and help you out.
So that's kind of new.
It's had an interesting impact.
So we've kind of expanded our scope slowly over time.
You can see the whole list here.
But pretty much what you read out of this is a couple of things. We've pretty much completed the server definition, and we're down to the edges now, where things are more interesting.
You know, role-based authorization
and redoing the sensor model
so industrial IoT can use it
and keeping that aligned with the things
that ASHRAE and the Green Grid are doing.
We've got works in progress out for Ethernet switching, and that's working. We've got composability, and I break composability up into three different kinds: there's little-C, middle-C, and big-C compose. The first compose we had was, you've got to enumerate the individual resources you're going to use from your pool and say, put this together. Whether it's a resource block or computer system or stuff, we don't care, but you've got to be very purposeful. Middle-C is kind of more of a swag of: give me somewhere between two and four CPUs from this pool over here, and I've got to have these things, I've got to have that thing. It's not the full-blown, and that's what I call big-C compose, kind of like what Swordfish did for storage. We haven't gotten there yet with the DMTF because we're not sure what that looks like when you start doing pools of servers or connections. Connections start to become real important when you start doing big-C compose.
We don't really have that modeled very well right now.
We've got a fabric model in there.
We've got telemetry.
We've not only done a WIP, but that became final, so there's a whole telemetry model.
SSE, server-sent eventing, so you can actually get a metric report sent to you when it gets generated now.
Assemblies, errata, query parameters are in there.
OpenAPI was the big one that just got released, as well as job schedules, and we've enhanced the messaging. So we work with SNIA on that as well.
And then, you know, we're aligning the standards bodies,
and you can read the list right there,
but it is becoming more and more the central nexus
of manageability for hybrid IT infrastructure.
So this is kind of the Redfish resource map,
and I'm not going to go over it very much.
It's just we got some services off of the root,
things like tasks and sessions and accounts
and things that everybody needs, right?
I've got to give users access,
and I've got to have a place to go get my schemas
and message registries and all this accounting
and overhead kind of stuff.
And then this is kind of what we started with
was systems, chassis, and managers.
And we wanted to separate the management ecosystem out
from the computer system.
And the system view is kind of the logical view,
the data plane view of it.
So you basically got a data plane view of the world, a physical sheet-metal view of the world, and a management plane view of the world. And if you've got a switch-based view or a fabric view or a power and buildings facilities kind of view, you just throw another collection off of the root.
And that kind of tells you what to do.
And there's a composition service and all of that, which is missing here. But this is kind of the simplified model of what we started with.
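As a simplified sketch of that resource map, here is roughly what the service root looks like, trimmed for illustration; real services expose more collections and service objects than this.

```python
# Simplified sketch of a Redfish service root; details trimmed.
service_root = {
    "@odata.id": "/redfish/v1/",
    "Id": "RootService",
    "Systems":  {"@odata.id": "/redfish/v1/Systems"},    # logical / data-plane view
    "Chassis":  {"@odata.id": "/redfish/v1/Chassis"},    # physical / sheet-metal view
    "Managers": {"@odata.id": "/redfish/v1/Managers"},   # management-plane view
    "SessionService": {"@odata.id": "/redfish/v1/SessionService"},
    "AccountService": {"@odata.id": "/redfish/v1/AccountService"},
    "Tasks":          {"@odata.id": "/redfish/v1/TaskService"},
    "Registries":     {"@odata.id": "/redfish/v1/Registries"},
}
```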
IPMI also defines something called KCS, which was the host interface to the OS. Actually, IPMI had three: there was BT, the block transfer interface, a third one that never got implemented, and KCS, which was the most popular one.
And it was a bitwise interface that was, you know,
register mapped straight down,
and they were trying to pass packets back and forth on it.
And it got to be really intractable when you did large things.
So some of us had the idea about, oh, I don't know, 10, 15 years ago,
and finally put it into the standard of, what if you just had a NIC?
Right?
My BMC has a USB controller on it.
All my virtual media is just firmware.
My NIC, I can buy one off the shelf that's just firmware and make it show up because it's USB,
flash my firmware, and I've got a NIC that's showing up.
Now, what does that do to you?
Well, my out-of-band interface that's all HTTPS
and that whole protocol stack, I've just plumbed it from the OS.
So the same application that I'm running out-of-band management with,
I can now run inside the system.
Now, you've got to solve the whole how does the kernel get to it
and how does that special privilege work.
And so we made a nonce that shows up in SMBIOS
that the system firmware can grab and get a special account.
But other than that, all the code you write out-of-band works in-band.
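A minimal sketch of that point: the same client code runs out-of-band against the BMC's management NIC and in-band against the NIC the BMC exposes through the host interface. The addresses and credentials below are placeholders; in-band, the account could come from the SMBIOS-delivered credential mentioned above.

```python
import requests

def get_system(base_url, auth):
    # Identical client logic either way; only the base URL and credentials change.
    return requests.get(f"{base_url}/redfish/v1/Systems/1",
                        auth=auth, verify=False).json()

# Out-of-band: hit the BMC's management NIC over the network.
out_of_band = get_system("https://bmc.example.com", ("admin", "password"))

# In-band: hit the host-interface address (placeholder link-local address),
# using an account bootstrapped from the SMBIOS-provided secret.
in_band = get_system("https://169.254.0.17", ("bootstrap-account", "bootstrap-secret"))
```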
We also have something we're working on called profiles.
Like our predecessor, everything is optional.
There's very little required because you don't know what kind of system you're building out of it.
So make a profile that describes your system.
Am I a front-end server?
Am I a NAS box?
Am I an enterprise class database server?
Am I an OCP rack mount thing?
Am I a blade?
Am I a, you know, so really this is,
what is the common set of industry features
expected for a certain class of product?
We didn't want customers coming to us and saying,
down to the property level, thou shalt
support all this kind of thing. Instead, you'd really like them in their RFQs to just say, you
know, support this profile. And so it's as much about interoperability as it is about customers
being able to just not go to vendors down to the property level. So we do have an interoperability
test suite that we built.
There's a whole bunch of tools, and I'll give you pointers to those later.
But we do have the ability for interop tools to eat a profile
and run an interop set based on that profile.
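As a hedged sketch of the kind of requirement a profile expresses, something like the following: "for this class of product, these resources, properties, and actions must be there." The field names and the profile itself are illustrative; the authoritative syntax is the DMTF's published interoperability profile format.

```python
# Illustrative profile sketch, not the normative format.
front_end_server_profile = {
    "ProfileName": "ExampleFrontEndServer",   # hypothetical profile
    "ProfileVersion": "1.0.0",
    "Resources": {
        "ComputerSystem": {
            "PropertyRequirements": {
                "PowerState":   {"ReadRequirement": "Mandatory"},
                "SerialNumber": {"ReadRequirement": "Recommended"},
            },
            "ActionRequirements": {
                "Reset": {"ReadRequirement": "Mandatory"},
            },
        },
    },
}
```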
So what Redfish did do, we didn't really tackle the storage stuff that Swordfish did,
but we have the physical server side of it.
We started out with something called simple storage, which was really just a collection of disks, and it was nothing more than that.
But we knew we had to do a better job of local storage, you know, that server class storage, storage light, whatever you want to call it.
So we developed a data model for that, and it consists of three objects. There's this thing called storage, because we couldn't call it controller. What if I'm doing HA storage? Because in that model, in that storage box, you can have more than one controller, and that's how my reliability is split across those two controllers. I could be doing software RAID. And so we just came up with the name: let's just call it storage, and it's got controller objects in it. It's also got references to volumes and disks, and a disk is physical media, exactly what you kind of expect. What's interesting about the disk is where it can live in the data model. You can either hang it off a chassis, or you can hang it off of the controller, the storage object. So really, do I have a JBOD? Well, hang it off a chassis. When we originally did this model, we were thinking of just doing /redfish/v1/Disk. Just don't hang it off anything. Because we did a whole hypermedia thing, it was like, hey, just hang it off of wherever you need to. None of the URIs were defined. Who cares? Well, when we went down the OpenAPI path, we had to define normative URIs for OpenAPI. So we nailed those two different places down for the drive. And then the volume, you know, that's your LUN, right? So that's the thing you create out of disks.
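Here is a hedged sketch of that three-object local storage model: a storage resource with controllers in it, a volumes collection under it, and drives that are referenced but physically live under a chassis. URIs and properties are simplified for illustration.

```python
# Sketch of the local storage model; simplified, not a complete payload.
storage = {
    "@odata.id": "/redfish/v1/Systems/1/Storage/1",
    "Id": "1",
    "StorageControllers": [
        {"MemberId": "0", "SupportedDeviceProtocols": ["SAS"]},
        {"MemberId": "1", "SupportedDeviceProtocols": ["SAS"]},  # HA pair in one storage object
    ],
    "Volumes": {"@odata.id": "/redfish/v1/Systems/1/Storage/1/Volumes"},
    "Drives": [
        {"@odata.id": "/redfish/v1/Chassis/1/Drives/Bay0"},  # physical media hangs off the chassis
        {"@odata.id": "/redfish/v1/Chassis/1/Drives/Bay1"},
    ],
}
```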
Now, this same data model is completely analogous
to how we did memory.
Now, I don't have the memory model up here,
but when you go poking through the memory,
because there is storage class memory out there,
memory domain is storage.
Memory is drive, and memory chunk is volume.
Why memory chunk?
Well, we couldn't call it LUN.
We couldn't call it this.
We couldn't call it that.
Nobody knows what a memory chunk is.
Perfect.
That's what we'll call it.
So a memory chunk is your block or your interleave set.
Well, you couldn't call it interleave set.
Not everything is interleave, and not everything is block.
So once again, we try and find a name
that has no baggage associated with it.
As far as how it's all mapped in the data model,
you'll see dashed lines on this and solid lines.
And the solid lines are what we call subordinate resources.
And the other ones, we've never been consistent on the terminology.
Sometimes it's a related item.
Sometimes it's, you know, anyway.
But we are very strict on what a subordinate object is.
And a subordinate object, particularly in OpenAPI,
is you can tack it onto the URI you're at. And it gets funny in our definition, because basically, if it's a dotted line, you'll find it in a links section; the reference to it is in the Links section of the object. And if it's a solid line, it's not; it's right off of the resource. So every computer system has a storage collection. We throw things in collections.
So your storage collection could have more than one storage thingy. I've got four controllers: two of them are in one storage object because they're RAIDed, and I've got two others that are separate because there's no HA between those controllers. They've got volumes off of them, and the volumes are pointing to the disk drives, and then the drives are hanging off a chassis. Really, the drives that you'll see in Swordfish, the resources backing up a Swordfish representation, you're probably going to find them on chassis more than you are hanging off of the storage object.
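As a sketch of those two reference styles, a computer system might look roughly like this: subordinate resources (solid lines) extend the parent's URI and sit directly on the resource, while related resources (dashed lines) are pointed to from the Links section. Details simplified for illustration.

```python
computer_system = {
    "@odata.id": "/redfish/v1/Systems/1",
    "Id": "1",
    # Subordinate: the storage collection lives under this system's own URI.
    "Storage": {"@odata.id": "/redfish/v1/Systems/1/Storage"},
    # Related but not subordinate: referenced from the Links section.
    "Links": {
        "Chassis":   [{"@odata.id": "/redfish/v1/Chassis/1"}],
        "ManagedBy": [{"@odata.id": "/redfish/v1/Managers/BMC"}],
    },
}
```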
I'm going to be fine on time. Going through this a lot quicker than I thought. See, I did have room for all that detail.
So the DMTF has been working on this thing called Redfish Device Enablement.
How many people have heard of MCTP or the PMCI Working Group? Awesome. Two, three, four. Well, yeah. Sometimes.
So what happens to that signal when it comes inside the box?
We're doing industry standard servers.
We're trying to enable a community that's building solutions out of industry standard servers and industry standard components.
What's wrong with specifying the manageability interface inside the box? So that Redfish client, that RESTful client, is sitting outside the server, and he's sending a RESTful JSON Redfish or Swordfish object straight down to the Redfish management controller. And on the backplane, I've got an Ethernet set of objects, the ACD model or simple Ethernet. I've got a storage,
simple storage or storage controller,
you know, the storage object.
What about any adapter that comes along?
We've got a problem in the industry
where, and customers hate it,
my BIOS version has to match,
my BMC version has to match
my device version and my storage one
and my, all of these things are being sent
in bundles and then
a customer has a problem and you send him a firmware
update and he flashes it with Redfish and everything else breaks.
Or they
need one thing or the other. And so all of this
stuff, why is that?
Because in order to understand how
to communicate with that
storage controller, that storage
vendor will make something like a library
that's compiled into the BMC,
into the management controller.
And so to take and represent all that storage,
you are literally hard-coded
to a specific version of firmware.
And if anything hiccups, anything changes,
God forbid I do mixed vendor
or multiple things from multiple vendors,
or worse, multiple things from the same vendor
that don't use the same API,
all of it kind of breaks.
So we had the thought of, you know what?
How long has the provider architecture been around in CIM?
More than a decade, getting close to 20 years?
20 years?
Yeah, it's 20 years.
Yeah, 98, so okay.
And Moore's Law has caught up on those storage controllers and those things.
You know, the BMC we started out with 20 years ago
is far, far more underpowered than any of these devices out there.
So if we can come up with a way of encapsulating that Redfish
in a binary format,
Redfish payload in a binary format,
and registering them when they come up
as providers in architecture,
then I can break the interdependence
between my versions of firmware.
We're all speaking a common protocol
on the different physical media types.
That's all it should take.
We've got I2C plumbed to the boxes.
We've got PCIe plumbed to the boxes.
And that pretty much covers 95% of everything out there.
There's other stuff coming along.
And it turns out there's this thing called MCTP, the Management Component Transport Protocol, that the PMCI work group did 15 years ago. What they were
trying to do is the same thing, but at the server management level. They had this thing
that they were doing called monitoring and control. I got a sensor out there. How do
I get my data in a standard format down to the BMC so I can just show you temperature?
And then they added to it. They added firmware update just recently. But they've been sitting
on monitoring and control for quite a while. The NVMe guys looked at that with NVMe MI
and mapped to MCTP their own way and did it totally different than everybody else in the industry.
And you can already see it coming.
Great.
Now I've got the same problem.
In order for me to support a change in the NVMe MI firmware down on a set of devices,
all of which could be different and support different versions, I've got to change my BMC code.
So what we did was there's this thing called PLDM, platform level data model.
It's not a data model.
It's really yet another transport if you think about it.
We did this thing called Redfish device enablement on top of it.
And that's really what it was all about was a provider level architecture taking that JSON, turning it into binary, and getting it to be a provider
architecture for the BMC, breaking the firmware interdependence between the two.
So what does it take to do that? Discovery, it turns out, when you power on the machine,
that micro becomes something called an MCTP bus master. And it doesn't matter where it
lives or what it is. Sometimes it's in a Southbridge. Sometimes it's on its own chip. It doesn't matter.
It goes out there and starts getting all these endpoint IDs and assigning them to every device
it can find on I2C, on PCIe, on future things coming. I can't talk about that one, but I can
talk about Gen Z because we signed a work register with them already.
There's one up for vote on Thursday I can't talk about,
but it's a successor for one of those things I just mentioned.
Anyway, we've got MCTP mappings to all those low-level media types.
You add a new media type, you just do an MCTP mapping for it. And as soon as you
support that, you can do all kinds of stuff. So devices are discovered using MCTP, and then we
go through and discover them using PLDM. It negotiates parameters, right? When you think
about a provider, what does it really take? Well, look at a Redfish object. How do I fill it out?
There's kind of three classes of objects inside that Redfish thing.
There's the stuff the BMC knows about, like the URI.
The device is going to have no clue how any of that works.
My OData context or my OData ID.
So you've got to figure out a way to when that thing comes back
because you want the BMC to not do any deep packet inspection.
You want it just to regurgitate that packet.
So I need some substitution variables like the URI and the OData ID that the BMC just
has in the table.
And when that thing comes up, it just looks them up and shoves them in there.
What's my schema language?
What's my, you know, some of that stuff.
The rest of it, you want the BMC to just be able to look things up in another way.
And so we created this thing called a dictionary.
We take the Redfish schema language, schema definitions,
and we alphabetize it and come up with the same order of sequence numbers
so that all versions of the dictionary are backwards compatible.
And the device carries its own dictionary.
So the dictionary is going to take that property name and a nested property name
and be able to just go through a recursive algorithm and substitute that
property name for a sequence number.
And so, okay, now I've got a consistent way of always encoding something in binary
and always spitting it back.
Well, what about enums?
What about strings?
What about numbers?
What about nulls?
Well, nulls are always null, so that's not a problem.
Numbers are numbers.
That's not a problem.
That's easy in binary.
Enums, translate them to a number.
Use a dictionary.
Same thing.
Strings, I can't do that.
I could maybe make a dictionary... no, no, no, too complicated. Just do strings as strings. And so that's kind of what's inside of them. So we can go through and negotiate all this overhead kind of stuff: max chunk size, because MCTP doesn't have a very big packet size, so I'm going to be chunking; I'm going to do segmentation and reassembly. Nothing I can do about that. I2C has got a very small packet format.
Some of these Redfish packets can be pretty darn big.
So we need a way of doing all that stuff we've been doing.
You know, you think about Ethernet and all the fabric
and network management kind of stuff that you've done over the years.
It's all been done before.
We're just doing it inside the box.
So what do I need to know?
I need to get my dictionary, retrieve my dictionary,
retrieve my schema, find all the instances you've got out there,
storage and disks and all that kind of stuff,
and come up with a number with them,
and the MC has to know where they fit in the tree
and be able to make a tree.
Most of that is pushed down onto the device
except for top of tree, and it just says,
hey, this is the top of tree.
I'm of schema type storage.
You might want to throw me in a storage collection and do the right thing.
And that's kind of as complicated as it gets.
So we did this thing called BEJ, pronounced "beige." And I guess I kind of already went into this. Basically, BEJ is binary-encoded JSON.
It is inseparable from the dictionary.
You can't do it without a dictionary, right?
How do I take that sequence number, that number,
because it's a lookup table in a dictionary.
So you can't do it without a dictionary.
So that was why dictionaries are really the key.
And those dictionaries,
we're going to be doing an open source tool
that basically takes any schema
and throws it into a dictionary.
That way all your OEM extensions can be in a dictionary
and everything will work just fine.
It includes how to nest objects.
So it basically describes an algorithm
for how to turn anything coming across
in a Redfish-compatible JSON object into a binary form.
And so when you see beige, think,
okay, beige is an algorithm that uses a dictionary
to turn JSON into binary.
We get roughly a 10 to 1 compression with it.
And if you're trying to pull packets across I2C,
there's some other cool stuff in it, too.
Some optimizations I won't go over, but there's some cool stuff in there.
Most importantly, even though we're publishing an open source dictionary generator,
there's no value in everybody creating their own dictionary.
So we're just going to go ahead and compress all the dictionaries
and throw them out there just with the schemas so folks can just go grab them.
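As a toy illustration only, and not the real BEJ wire format, here is the dictionary-substitution idea in a few lines. The dictionary contents, payload, and function are invented for the example; the authoritative encoding lives in the DMTF RDE specification.

```python
# Toy illustration of the dictionary idea behind BEJ ("beige"), NOT the real
# wire format: property names and enum values become small numbers from a
# shared, versioned dictionary; plain numbers, free-form strings, and nulls
# pass through. (In the real format, enum values are scoped to their property;
# this toy flattens everything into one table for brevity.)
DICTIONARY = {"Id": 0, "Name": 1, "PowerState": 2, "On": 3, "Off": 4}

def encode(obj):
    """Recursively swap known property names and enum values for sequence numbers."""
    if isinstance(obj, dict):
        return {DICTIONARY[key]: encode(value) for key, value in obj.items()}
    if isinstance(obj, str) and obj in DICTIONARY:
        return DICTIONARY[obj]          # enum string -> number
    return obj                          # numbers, other strings, None unchanged

payload = {"Id": "437XR1138R2", "Name": "WebFrontEnd", "PowerState": "On"}
print(encode(payload))                  # {0: '437XR1138R2', 1: 'WebFrontEnd', 2: 3}
```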
So the other thing you've got to do is operations, right?
Well, basically, we just kind of, if you look at HTTP,
we came out with RDE operations that mirror all the HTTP operations.
They've got different names for a reason. That's because of all this junk on the right that I
talked about. How do I handle multiple outstanding operations? How do I handle tasks if something
takes a long time? What do I do about segmentation and reassembly? So there's all this state machine that you'll find in the spec on,
okay, you transmitted a message, and the response, if it's short,
the whole response comes back on the response.
If it's long, there's all this get, get, and churning through.
And it had to work both ways.
So rather than go through what those are, you can always read that later. But for those three or four people in the room that are actually going to be implementing this, dive into the spec.
So how does it really all fit together, right? Before a client ever contacts the Redfish
service, the management controller uses MCTP to enumerate all the devices on the bus. He's
the MCTP bus master. There's all these mapping specs. He does it according to the I2C spec
and the PCI spec and
all that. So it's all medium dependent. Then once he's got all the endpoint IDs mapped for everything
in MCTP, he starts going using PLDM for the next phase of discovery. Okay. I know what MCTP types
you supported. One of them was PLDM. Now I'm going to use PLDM discovery to say what are the PLDM
types you support. Okay. I do monitoring and control.
I do firmware update.
And I do RDE.
Awesome.
You do RDE.
So now I'm going to do the RDE discovery takes place.
Okay, get me your device registration.
What are all your resources?
What are all the actions?
Give me your association.
Give me your dictionary.
Give me your schema language, max segment type,
all this kind of stuff,
multiple outstanding operations, all this other stuff goes in there. There are detail-oriented versions of this on the DMTF website if you want to see it in detail. So finally,
I'm all registered. I got all my URIs mapped. I'm showing them up in the data model. Somebody
does a request on one of those and does a GET. The management controller gets that Redfish request, looks at the URI, and says, ah, this particular object, I don't have it, I'm not going to cache it. There's a controller out there that will respond to this request.
So it takes the HTTP headers and encodes them and passes them down. It takes that JSON body
and using the dictionaries and its substitution variables,
encapsulates all that into binary encoded JSON, shoves that down.
The MC initiates the operation to the device.
There's all this sequence number and junk that goes on in that whole operation table, depending on does it spawn a task and all that.
So the device processes the request.
Then when it's done, the MC takes the response,
does the exact same thing in reverse: takes that binary data, uses the dictionaries to re-expand it back out into a JSON response, and then puts the HTTP headers and all that back together and sends the response to the client.
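Here is a hedged pseudocode sketch of that flow from the management controller's side. Every function and message name below is a hypothetical stand-in; the real commands and state machine live in the DMTF RDE specification.

```python
def bej_encode(json_body, dictionary):    # stand-in for the real BEJ encoder
    return ("bej", json_body)

def bej_decode(bej_body, dictionary):     # stand-in for the real BEJ decoder
    return bej_body[1]

def rde_operation(endpoint_id, op, bej):  # stand-in for the PLDM/MCTP exchange
    return bej                            # pretend the device echoed a payload

def handle_redfish_get(uri, routing_table):
    device = routing_table.get(uri)       # which RDE device owns this URI?
    if device is None:
        return (404, {})                  # or serve it from MC-owned resources

    # Encode the request with the device's dictionary (plus the substitution
    # variables the MC keeps, like the URI and @odata.id), then run an RDE
    # read over PLDM/MCTP, chunking if it exceeds the negotiated transfer size.
    request = bej_encode({}, device["dictionary"])
    response = rde_operation(device["eid"], "READ", request)

    # Decode with the same dictionary and hand the JSON back to the client;
    # the MC never interprets the device's properties along the way.
    return (200, bej_decode(response, device["dictionary"]))
```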
It sounds like a lot,
but it's not nearly as bad
as having all those firmware dependencies,
and besides, it was doing all this anyway
over the same buses using proprietary protocols
and proprietary anything.
What we've just created
is a self-contained, self-describing data model
on the adapter's firmware.
I don't know what device this is as an MC.
It can just show up.
If I can figure out where to put it in the tree,
that's all I need to know.
I don't need to know about your properties.
I don't need to know about your schema levels.
I don't need to know anything.
It's literally self-contained and self-describing,
and a future thingy can come along, and as long as that MC knows where to shove it in the tree, it can just show up and just work.
So just some other
information. RDE also handles tasks.
It specifies how events
are handled. Eventing is
something that was already defined in PLDM.
So we had to figure out a way
of mapping those PLDM events
inside the Redfish data model
and get it to show up.
There are state machine examples and tables if you go out and look in the spec. And the spec that's out there is fairly old, but the one that's coming out, hopefully really soon; we're in the last days of balloting the spec, hopefully.
There's examples on, one of the things we do with specs is sometimes we just write normative stuff in there and don't write anything else. And we kind of felt with this spec that, boy, somebody's going to get lost and do it wrong.
So there's examples on how a dictionary gets formed. And sure, the algorithm is there and
all the instructions are there, but there's also, gee, a program should make it look like this as
I'm going through and organizing things and creating the dictionary. There's also examples on encoding and decoding a packet from Redfish JSON to Beige and back.
And so there's all this.
You can go in there and look, and it's literally, okay, here's the dictionary that we created from the previous encoding,
and this is how you would use it to look it up and substitute all the variables.
So there's really good state machine examples and tables in the spec.
Binary format for the dictionary is specified
obviously and then there
are plenty of examples and so they
really did a good job there.
So in summary, and I'm way ahead: Redfish, along with DMTF Alliance Partners, is working to define interoperable, software-defined hybrid IT management. For servers, storage, networking, power, cooling, fabrics, anybody who will talk to us in the industry. The coolest one, I thought, was the Broadband Forum.
They do what's called user equipment.
That's your cable modem.
That's your set-top box.
It's like, wow, well, they're building a little thingy.
It's got a CPU and memory,
and they need to know if it overheats
and congestion control and metrics.
So, yeah, that one's scary.
So it's all composition for resource managers.
We've got aggregation engines.
That was thought into the model, right?
We know, sure, we're doing an MC,
but what about an aggregator?
How do I do pools and pools of servers?
Well, they're just members of collections.
And then we've got the plumbing.
We're plumbing the mechanisms inside the box as well.
We're doing more MCTP mappings any time anything comes up.
The latest thing we're working on at the PLDM layer is security. We spawned a security task force to look at
attestation, firmware measurement,
authorization
inside the box.
There's a bunch of other people working at it right
now, but none of them invented MCTP.
There's things that we can do because
we own the spec. We can go in and
change bits at the core level that nobody else can do.
It's really the right way to do it.
So one quick plug.
Tonight, there's a BoF. Is it seven to eight or seven to nine? We have the room from seven to nine. Okay, it may not last that long. The BoF is really seven to eight. It's about just general discussions about Swordfish adoption and questions from...
And we'll talk about it more in the next two sessions as well.
But any questions folks have about how are things going
with adoption, integration,
what's going on in the Swordfish ecosystem? And then tomorrow there's a hands-on workshop, 2:50 in the afternoon. That's a weird time. It's happening all afternoon, so it starts after the lunch session is over. We'll be set up out in the mezzanine area out here. So basically from 2:50 it'll be set up, and it'll also be running from five to seven in the... what's that session called, the five to seven? The reception, thank you. But you can come out and look and see.
It's kind of halfway between vendor demos and a plugfest, because stuff will actually be set up. You can come out and see some of what the various vendors actually have up and working in the Swordfish space. You can see tools, kind of a little bit of the current state of what's going on with Swordfish implementations and the ecosystem, and get hands-on with various stations.
So come on out and get your hands dirty.
And I'll repeat that twice more.
Right.
There's two more repeated plugs for this.
I think you'll be plugged.
Four or five plugs for this one today.
So this is just the first of many.
One last slide. We've got a Redfish
developer hub out there. Redfish.dmtf.org
We're adding more and more stuff
out there. There's a schema index, specs. We've got a program that now takes all those schemas and makes a human-readable doc out of it, called the Schema Guide. Look for that. Man, I use that now. After spending three years of digging through CSDL, I'm glad we finally got that.
The registries, standard message registries, how to build BIOS registries.
We use the same term, registries.
Not all these registries are the same.
A BIOS registry is not equivalent to a message registry, which, anyway, unfortunate naming convention.
There's a bunch of mock-ups out there where you can actually click down and look through the data.
There's this little "i" on here. When you go click through the different mock-ups, you click on that "i," and you can find more about the resource.
We've got simple rack-mounted servers, bladed systems, OCP profile.
There's, I think, five or six of them out there now.
And then there's a whole educational community.
There's the Redfish user forum, which is that thing on the right.
You can click to there, ask questions.
We go through it in work group meetings every week and decide on the answers so that we're agreeing.
Sometimes people respond right away.
There's a Swordfish section to that where the Swordfish people are watching that.
So make sure you go take a look at that.
White papers, presentations.
The version of this slide deck that's got the gory details on RDE is published out there.
And all the stuff they made me pull out because it was way more complicated than what I presented.
And then there's a whole bunch of YouTubes.
There's a bunch more being added.
We've got local storage out there. I think the ones coming up
are sessions,
advanced communication device,
messaging one and two,
and there's another one I can't remember. So we're
adding to those as quickly as we can.
All right. Any questions?
Yes?
I have a question about
the OpenAPI schema. It's a very good addition for the DMTF. We're trying to build documentation based on this, right? Yeah, we know. So which version of the YAML files are you using?
Okay.
Post questions on the user forum.
When we publish those, we notice some
problems with them right away. The guys
at Texas Tech, we have a relationship with
the
Cloud and Autonomic Computing Center
at Texas Tech. And they've been
going in there and running all the open
API tools that they could
and posting things
and we're fixing them
as quick as we can.
So this was our first foray
into the YAML files, you know,
and they've only been out.
So it depends.
If you got them more than
five or six days ago,
they might not be the latest version.
The ones that came...
Yeah, yeah, yeah.
The ones that came out after the 20th, I think,
were ones that had some fixes in them.
Now, is it going to fix everything?
I bet you there's bugs in them.
It's all programmatically generated.
And we will respond as quick as we can on fixing them
because one of the things that DMTF has
is a real lightweight policy for getting things out.
We've got the work-in-progress stuff,
which in a week to 10 days,
we can get something turned around. There's also a loophole in our standards body we call editorial.
Oh, it's an editorial fix. Honestly, if it's something a tech writer or a non-schema person
could fix, if it's a problem with format, if it's a problem with something like that, man,
we'll voice vote it and get it out the door so quick your head will spin. Because we want you to have success with these files.
We don't want you to have problems.
You know?
There's one more question.
Okay.
Because mockups are very useful, you know,
even to realize what you want to, you know, modify.
And then mockups you can put in a text file,
which is, and then reuse it in your documentation.
It's great.
Yeah.
But having the OpenAPI schema, right, and writing the mockups, it's not that you can include an index file; you need to, you know, specifically write an example, trying to, like, initialize the schema with the parameters. Right. Which is troublesome. Do you have any BKMs for how to work with, let's say, the mockups and OpenAPI explicitly?
You know, I, again, post it to the forum.
I've got some ideas.
I don't know that everybody else agrees with the same ideas.
I will say one of the things we've done not very well
because they don't fit up in the mock-up explorer very well
are things like establishing a session
and those post kind of things.
We don't really show POST because that's all GET data. And those, we've discovered through the plugfest, is where we've hit some of our interoperability holes, and we're trying to figure out a way. But we want people to beat on us to make it a priority.
That really does help, you know.
Him first, if you don't mind.
You mentioned OData. One of the questions I had was around the OData query parameters and which ones need to be supported. Well, what needs to be supported is a funny thing.
You know, some of it doesn't make sense to me,
and I can't give you an answer on that,
but I'll tell you my preference, personal preference.
I think expand is the bomb.
I think expand is great,
because if you look at a collection
and it's got 10 members in it,
and I'm really trying to find
the system that's powered off,
I've got to go through each member of that collection and do it individually. However, with, and I'm going to get the syntax wrong, $expand, and you've got to have the depth part of it, $levels: $expand with $levels equal to something. And then you've got to find that system, and I think it's $filter, not $select. But then you use $select to get just the property you want.
We had our guy implement $select, and I think he said it took him five minutes.
And it reduced his buffer size.
So it actually turned out to be a benefit
for the implementation to implement it.
There's nothing to it.
So it's the $filter one that gets a little hard: look for the property name equal to "Off." So to me, those are the ones to do. There's two more optimizations
we just put in. One's called only.
If I have a collection, and it's a trivial collection, and our definition of a trivial collection means there's one member, then by specifying "only" as a client, I don't get the collection, I get the member. That saves me from doing another round-trip I/O. I like that one. There's another one that you haven't seen yet called "excerpt";
you'll see it in the sensor model
which really is an interesting thing
but I'm not going to go into it because I've only got five more minutes
and I want to get to his question.
So that's my preference list. Really, I'd love to see clients start using them more.
And your support for them is in the root, right?
You indicate which things you support so the clients can know.
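For reference, here is an illustrative sketch of those query parameters in use. Which ones a service supports is advertised on the service root (ProtocolFeaturesSupported), and the exact syntax can vary a little between versions; the host and credentials are placeholders.

```python
import requests

BASE = "https://bmc.example.com/redfish/v1"
AUTH = ("admin", "password")          # placeholders

# $expand: pull collection members back in one response instead of N GETs.
requests.get(f"{BASE}/Systems",
             params={"$expand": ".($levels=1)"},
             auth=AUTH, verify=False)

# $filter: find the system that's powered off.
requests.get(f"{BASE}/Systems",
             params={"$expand": ".($levels=1)", "$filter": "PowerState eq 'Off'"},
             auth=AUTH, verify=False)

# $select: return just the property you care about.
requests.get(f"{BASE}/Systems/1", params={"$select": "PowerState"},
             auth=AUTH, verify=False)

# only: for a one-member collection, return the member itself and save a round trip.
requests.get(f"{BASE}/Systems?only", auth=AUTH, verify=False)
```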
You had a question?
Yeah.
So is there any clients that support JSON patch?
No.
Because that's scripting on the MC side.
Remember me telling you about us starting with 8-bit micros? Well, we're now the state of the art of computing in about 1988 on those little devices.
So they're still, you know, they don't have gigabytes of data.
They've got megabytes, right?
Four, eight meg. They're maybe 386-class; they're not 486-class processors. They don't have it.
Honestly, what I want, catch me afterwards at the BoF or I'll tell you what I'd like to see them do. But JSON Patch ends up being a real hard thing to implement in an MC. Because
just the requirements. We looked at it and it was like,
okay, even if I could
do it right, the probability
of me doing it wrong and creating
yet another attack point makes it not
worth it. And what you'd have
to support is that, only on specific objects, in specific cases. You couldn't do a general engine; you just don't have the, I mean, these objects are so big and so large that that little MC is struggling to keep up as it is. That's why we had to make beige
the way we did it. So the MC has
no knowledge of what's in that package.
It's just encoding it and decoding it and letting it
fly through.
I have a question.
Hello.
Okay.
How about the JSON format?
How do you handle the OEM
extension?
So
there's an OEM section clearly labeled
OEM and then your pattern properties
then says anything can be there.
That OEM like OEM
Contoso, you'll see it in one of those examples.
I think it's server one.
You'll see OEM Contoso has an @odata, not the ID, the type: its own schema definition with the URI to find that schema file. Now, technically, you're supposed to put it in the OData service document and apply it on the link header for the JSON version of all that. But you've got to put that little piece of the OEM data in there and then make sure it goes on the link header on its way out.
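Here is a sketch of how such an OEM extension typically sits inside a standard payload. "Contoso" is the example vendor name used in Redfish materials; the @odata.type and the vendor property shown are illustrative.

```python
system_with_oem = {
    "@odata.id": "/redfish/v1/Systems/1",
    "Id": "1",
    "PowerState": "On",
    "Oem": {
        "Contoso": {
            # Points at the vendor-published schema for this extension (illustrative).
            "@odata.type": "#ContosoComputerSystem.v1_0_0.ContosoComputerSystem",
            "SlotLocation": "Rack4-U12",   # vendor-defined property
        }
    },
}
```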
So, is that, additionally to the metadata for this OEM extension, you need to provide
also a dictionary to convert it to the OEM? Yes.
Yep.
And that's part of the...
I don't think it's explicitly called out
because it doesn't need to be,
but when you look at the new version of Beige,
there's dictionary type is in there
and one of them is OEM.
So the OEM needs to develop and publish
a fixed dictionary and a fixed schema that corresponds to anything that's in those extensions the same way that we do standard dictionaries.
DMTF has a...
They can't change.
Right.
They have to be versioned and fixed.
One quick thing.
The DMTF has a republication portal.
If you don't want to publish yours on your site, you can just give us the right to republish. There's a portal that says, look, I'm not giving you copyright, just the right to republish. We'll put it in our schema repo.
One last question, then we got to go.
Yes.
Okay.
This is Leo from Steka. We make web, server, and storage products. My question is, the Redfish interface was originally meant for the BMC, right? That's what we started with. That's not where we ended up. But nowadays, we see comments from customers, and they ask whether we support Redfish. But what I'm concerned about is the schema, and Swordfish, which they are building on the Redfish. Redfish or Swordfish?
It's not an or.
Yes.
So I think you'll get a better picture of that in her presentation.
So watch it.
Let her finish.
And then if you still have a question, let's ask it again.
Because really there's no difference.
You can shove swordfish in a redfish implementation.
And you can't tell.
Okay, so I'm out of time.
It's her time to start.
So ask that question again when she's done.
All right.
Thank you very much.
Thanks for listening.
If you have questions about the material presented in this podcast, be sure and join our developers mailing list by sending an email to developers-subscribe at snia.org.
Here you can ask questions and discuss this topic further with your peers in the storage developer community. For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.