Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x07: Evolving Connectivity for AI Applications with Ultra Ethernet with J Metz
Episode Date: April 1, 2024. Ultra Ethernet promises to tune Ethernet for the needs of specialized workloads, including HPC and AI, from the lowest hardware to the software stack. This episode of Utilizing Tech features Dr. J Metz, Steering Committee Chair of the Ultra Ethernet Consortium, discussing this new technology with Frederic Van Haren and Stephen Foskett. The process of tuning Ethernet begins with a study of the profile and workloads to be served to identify the characteristics needed to support them. The group focuses on scale-out networks for large-scale applications like AI and HPC. Considerations include security, latency, ordering, and scalability. The goal is not to replace PCIe, CXL, or fabrics like NVLink but to extend Ethernet to address the connectivity and performance needs in an open, standardized way. But Ultra Ethernet is more than hardware; the group is also building software features, including a libfabric interface, and is working with OCP, DMTF, SNIA, and other industry groups. Hosts: Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/ Frederic Van Haren, CTO and Founder of HighFens, Inc.: https://www.linkedin.com/in/fredericvharen/ Guest: J Metz, Chair of the Ultra Ethernet Consortium and SNIA, Technical Director at AMD: https://www.linkedin.com/in/jmetz/
Transcript
Ultra Ethernet promises to tune Ethernet for the needs of specialized workloads, including HPC and AI,
from the lowest hardware level all the way up the software stack.
This episode of Utilizing Tech features Dr. J Metz, Steering Committee Chair for the Ultra Ethernet Consortium,
discussing this new technology with Frederic Van Haren and myself.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group.
This season of Utilizing Tech is returning to the topic of artificial intelligence,
where we will explore the practical applications and impact of AI and the technology needed to support it.
I'm your host, Stephen Foskett, organizer of the Tech Field Day event series.
And joining me today as co-host is Mr. Frederic Van Haren. Welcome, Frederic.
Thanks for having me.
It's always nice to have you here.
So I want to begin by relating a story.
So years ago, I worked for a company called U.S. Robotics, and they got bought by a company called 3Com.
And 3Com brought in an emeritus person to talk to us
all and give us a bit of a rah-rah. And that gentleman was Bob Metcalfe, inventor of
Ethernet. And one of the things Bob said to me that day has stuck with me ever since.
We were talking about all the different things, and I was a smart-ass kid. And so, of course,
I said, man, Ethernet sure has changed since you invented it.
And his response was, I don't know what the future of networking will be technically,
but it's going to be called Ethernet. And I thought that was fantastic, because he went on
to explain basically this long history of backward compatibility and forward compatibility and all
the amazing things that they've brought to this pretty basic technology.
Frederic, I assume that you've encountered this technology a few times in your career.
Yeah, indeed. When I started working on AI, InfiniBand was very expensive, but was also kind of considered the place to be. But we did everything with Ethernet. So I really do think that innovations around Ethernet are helping out in AI.
And even today, the story repeats itself, right?
People assume that InfiniBand is the way to go,
but Ethernet is still going strong, with a lot of innovation.
Absolutely.
And we've seen so many attempts at killing Ethernet over the years.
I mean, I remember, you know, FDDI, Token Ring, ATM, Fibre Channel, all of these things were supposed to knock it off.
And now, of course, like you say, we're hearing, oh, well, it's all going to be InfiniBand.
It's all going to be NVLink. It's going to be PCI Express. CXL is going to be the end of this.
It seems not to be the end of it yet. And that's why today we've decided to have a special guest,
a longtime friend of mine who also happens to be involved in this Ultra Ethernet world.
Dr. J Metz, welcome to the show. Thank you. Thank you very much for having me. I appreciate it.
So tell me a little bit more about the hats that you wear and specifically the Ethernet one.
Well, you're right. I've got a number of hats.
I'm pretty sure you can see a few of them behind me. I'm a technical director at AMD. I work on the strategic direction for both high-performance networking and storage. I also have the hat of
being the chair of SNIA, the Storage Networking Industry Association, as well as the chair of the Ultra Ethernet Consortium, a new consortium of about 55 companies, as of right now,
working on developing workload-specific enhancements to Ethernet, specifically for AI and HPC.
Now, J, you and I go way back, and you've been involved in a lot of these enhancements to Ethernet for many years. I mean, I remember having conversations with you about data center Ethernet and the emergence of basically NVMe over Ethernet, all of these things. They required Ethernet to change, but it did change, and it became incredibly relevant. And that's what's going on now. So what is Ultra Ethernet? Ultra Ethernet is an ambitious project that attempts to tune
the different stacks of Ethernet so that you have workload-specific performance behaviors.
So for instance, we focus on AI and we focus on HPC from the networking perspective.
But like most Ethernet-related technologies, you have multiple layers
that have to kind of align. We did that way back in the past when we were looking at Fibre Channel
over Ethernet, for example, where you had to, you know, tune the link layer, then you had the Fibre
Channel stack on top of the, you know, the L2 Ethernet. Now we're actually taking a much more
ambitious approach by stacking all the way
up into software, from hardware all the way up into the software layer. So the physical layer,
the link layer, the transport layer, software APIs, all tuned to specific profiles that are
designed to address the different networking considerations that both AI and HPC have,
which are not exactly the same. So it's this alignment of the layers that we're
working on specifically. And it's a very, I mentioned the word ambitious, and it is,
but it's also involved. And we have an awful lot of different companies and members who are looking
to try to solve that particular problem. So it's interesting, you talk about profile. So
not all use cases are using the same profiles.
Are you then looking at a way where you can define and select a profile for your network
and configure it such that it's ideal for that profile?
Or how do you address the different profiles?
Yeah, and that's an excellent question.
We have actually partitioned and subdivided the problem.
So in other words, what we're looking to do is say, if you've got a particular characteristic
that you need for AI, or you have it for HPC, we are looking to try to solve that issue by
addressing how the transport is ordered, whether or not it's reliable, whether it's
in-order delivery or flexible delivery, whether you need security or not, those kinds of things.
And then you have effectively a negotiation period in advance. One of the things that I
think I should probably address is the fact that we actually talk at Ultra Ethernet in terms of three
different networks. And so it's probably a good idea to break those down because then you apply the profiles based upon the different network types.
We creatively call them Network 1, Network 2, and Network 3. This is what happens when you
have engineering doing the naming process, right? So Network 1 is your traditional LAN,
WAN, internet-based network, and we're not doing anything with that, right? So what you would
normally consider to be an enterprise-level network is not what we're looking to do.
We are not looking to change the general purpose nature of Ethernet as is typically applied in
systems today. Instead, we're looking at network two, which is effectively a scale-out network for
these processes. Generally, when you're doing these kinds of large scale
networks, you have certain types of considerations that you have to address that are very different
than what you would do in general purpose networks. The third type of network is one that we've got on
the back burner, and that's the scale-up GPU or accelerator-based network. So if you're going to
have rack-scale or row-scale types of approaches for accelerators
like GPUs, you're going to have different types of considerations for latency, bandwidth,
and data consumption.
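As a way to keep the three network types straight, here is a minimal, purely illustrative sketch; the class, field names, and notes are invented for orientation and simply restate the conversation, not anything from a UEC specification.

```python
from dataclasses import dataclass

# Hypothetical summary of the three network types described above.
# Nothing here comes from a UEC document; it only restates the discussion as data.

@dataclass
class NetworkType:
    name: str
    scope: str
    current_uec_focus: bool
    notes: str

NETWORK_TYPES = [
    NetworkType("Network 1", "Traditional LAN/WAN/internet, general-purpose enterprise",
                False, "Ultra Ethernet is not changing general-purpose Ethernet"),
    NetworkType("Network 2", "Scale-out network for AI/HPC workloads",
                True, "Current focus; the AI and HPC profiles apply here"),
    NetworkType("Network 3", "Scale-up GPU/accelerator network (rack or row scale)",
                False, "On the back burner; different latency/bandwidth considerations"),
]

if __name__ == "__main__":
    for n in NETWORK_TYPES:
        print(f"{n.name}: {n.scope} (current UEC focus: {n.current_uec_focus})")
```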
So, one thing at a time, we're addressing the scale-out part of it.
We're looking to go from where we are right now: a typical large-scale network probably has a couple of tens of thousands of nodes, maybe 40,000 at most.
We're looking at going up to a million, right?
We're looking at very large scales, you know, not quite an order of magnitude difference, but pretty close, when we start talking about the broad scope of things and future-proofing that scale. We'll get to Network 3 at a future date
because we're looking at trying to blend those principles together between a more fractal
approach to the scale-up and the scale-out so that you're not creating hard changeovers. But
that's a future problem to solve. Within Network 2, we've got these different profiles for AI
and HPC, which have, like I said before,
they have different requirements. For instance, in AI, you'll have different security requirements
that you have in HPC. You have different latency requirements. You have different bandwidth
requirements. You have different ordering requirements and so on and so forth. All of
these have to be aligned vertically as well as horizontally across the network.
And so those are the kinds of things that we're doing in terms of addressing the profiles perspectives for these different workloads.
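To make the idea of a profile more tangible, here is a hedged sketch of what such a bundle of characteristics might look like, along with the "negotiation period in advance" mentioned above. Every name and value below is an illustrative assumption, not a UEC definition.

```python
from dataclasses import dataclass

# Illustrative only: a "profile" bundling the characteristics discussed above
# (ordering, reliability, security, latency). Field names and values are
# placeholders invented for this sketch, not actual UEC profile contents.

@dataclass(frozen=True)
class WorkloadProfile:
    name: str
    ordered_delivery: bool      # strict in-order vs. flexible delivery
    reliable: bool              # reliable vs. unreliable transport
    encryption_required: bool   # security needs differ per workload (placeholder)
    target_latency_us: float    # illustrative latency target, not a spec value

AI_PROFILE = WorkloadProfile("ai-training", ordered_delivery=False,
                             reliable=True, encryption_required=True,
                             target_latency_us=2.0)
HPC_PROFILE = WorkloadProfile("hpc-mpi", ordered_delivery=True,
                              reliable=True, encryption_required=False,
                              target_latency_us=1.0)

def negotiate(requested: WorkloadProfile,
              supported: list[WorkloadProfile]) -> WorkloadProfile:
    """Toy stand-in for the up-front negotiation of a profile between endpoints."""
    for profile in supported:
        if profile.name == requested.name:
            return profile
    raise ValueError(f"No matching profile for {requested.name}")
```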
It seems like a lot of the work on Ethernet in basically the past decade focused on converging Ethernet with various traffic types in the data center, like you mentioned, figuring out the right way to use it for interconnect. I mentioned, for example, NVMe
over Fabrics and building Ethernet fabrics and so on. Is this very much related to that,
or is this very much different from what the Ethernet world has worked on for the last decade?
It's different. And the reason why it's different, let's take NVMe over Fabrics as a really good
example. So NVMe over Fabrics, especially in an Ethernet-based approach, has two different types of components. You've got
a TCP component and you've got an RDMA component. But let's just bundle those together as a single
a TCP component and you've got an RDMA component. But let's just bundle those together as a single
entity for the sake of argument. Both RDMA and TCP, when it comes to Ethernet, are effectively upper layer protocols, and they are bound to the transport system or the link system based upon the NVMe over fabric specification.
There's not a lot of changes being made to Ethernet at all in those.
We're effectively sitting it on top of TCP or RDMA-based Ethernet networking. So, you know, for RDMA-based approaches,
you've got your priority flow control at the link layer.
And then, of course, you've got TCP at the layer four.
And you've got a binding shim layer that sits on top of both of those
for the NVMe over Fabrics to work.
No real changes being made to Ethernet.
What we're doing is actually considerably more robust.
We're actually making some changes inside of the link layer for credit-based flow control, which is a little bit finer granularity than what you'd
get with priority flow control, but you effectively get the same kind of ordered reliability at the link
layer. At the transport layer, which is really where the keys to the kingdom are inside of
Ultra Ethernet, we've got additional semantics that we're approaching for
ordered delivery and unordered delivery, both reliable and unreliable, to be able to address
those kinds of needs, not just sticking an RDMA-based approach or a special software-based
approach or API-based approach on top of it as an upper layer protocol. We're actually making
changes inside of the Ethernet specifications to address those kinds of needs for ordering and semantic understanding.
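To picture those ordering and reliability combinations, here is a minimal sketch; the enum names and the toy selection logic are illustrative assumptions, not UEC terminology.

```python
from enum import Enum

# Illustrative only: the four combinations of ordering and reliability
# discussed above. These names are not taken from the UEC specification.

class DeliveryMode(Enum):
    RELIABLE_ORDERED = "reliable, in-order"
    RELIABLE_UNORDERED = "reliable, out-of-order"
    UNRELIABLE_ORDERED = "unreliable, in-order"
    UNRELIABLE_UNORDERED = "unreliable, out-of-order"

def choose_mode(needs_ordering: bool, needs_reliability: bool) -> DeliveryMode:
    """Pick a delivery mode from workload needs (toy selection logic)."""
    if needs_reliability:
        return (DeliveryMode.RELIABLE_ORDERED if needs_ordering
                else DeliveryMode.RELIABLE_UNORDERED)
    return (DeliveryMode.UNRELIABLE_ORDERED if needs_ordering
            else DeliveryMode.UNRELIABLE_UNORDERED)
```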
So it's a really different approach that we're taking at UltraEthernet all the way down into the physical layer for bit error rate, forward error correction.
We've even got some future things that we're trying to address that may involve different types of materials, you know, different types of silicon photonics and the like.
Those are futures.
But we're trying to figure out how do we squeeze every nanosecond of latency out of an Ethernet-based stack.
And that's not what we did for FCoE.
It's not what we did for NVMe over RDMA or TCP.
This is really going into the core guts of the protocol to try to tweak for these kinds of
workloads. Yeah. So with all the changes within the Ethernet specifications, what does that mean
for an end user? Does that mean that whatever they have today, they can't use anymore? I mean,
how should people look at it? Well, no, because this is why we decided to go with Ethernet,
right? Because Ethernet is one of those things that allows us to be able to do backwards compatibility with existing environments.
Now, the reality of it is, though, you have to remember that when you're developing these kinds of networks, they tend to be greenfield anyway, right?
You don't typically put a Network 2, specifically for accelerator-based scale-out, onto a general-purpose network, right?
Now, that means that you can start off with ultra-Ethernet in existing types of switches
by having the endpoints with these types of updates to the transport layer, for instance.
That's cool.
You can actually create this inside of an existing brownfield type of switching environment. But you also want to remember that best practice
is to create a ubiquitous, homogenous, you know, or homogeneous, I can never say that word right,
right? I've heard it both ways. You really tend to find yourself with this type of environment
as a relatively greenfield situation, but you don't necessarily need to have super fancy types of ultra, no pun
intended, you know, networking equipment to do this end to end. And it's designed specifically
to be able to kind of ease yourself into ultra Ethernet approaches. As long as you've got that,
you know, like I said, the keys to the kingdom are in the transport layer, which you can do at
the endpoints inside of DPUs and specialized NICs. And that actually brings me to a question of where this fits in the overall system.
So kind of walk us through an HPC or an ML training environment.
What parts of the system would be connected with UltraEthernet versus PCI or InfiniBand or NVLink or whatever else?
That's an excellent question.
So let's start off at the inside the server kind of a space, okay?
And then we'll work our way up into the more remote things.
Inside of a server, you've got different components.
You've got your DPUs.
You've got your GPUs.
You've got your CPUs.
You know what I'm saying?
Just come on.
You're going to work with me here.
So what we need to do is connect these things, these things, and these things together.
Wow.
That's what I get for trying to be funny.
When you try to connect these things together, you have different types of interconnects
that make the components work together.
So one of the things you have that we use quite often is PCIe.
It's a very common thing.
CXL is based on PCIe.
It adds a couple of things that you can do for certain elements, like memory pooling
and hierarchical switching, and devices that can connect into CPUs and so on,
multi-host and fabric environments.
NVIDIA has NVLink.
It is their interconnect between their CPUs and GPUs, right?
AMD has one called Infinity Fabric, right?
So it's kind of that equivalent.
So those don't tend to go over the network, over a remote network.
So they're inside of the core technology.
Now, there are PCI switches.
There are CXL switches developing right now that are going to be out in the market.
But they are effectively what we would consider to be small by Ethernet standards, you know, scaling, right? We're only talking about a rack
because once you start getting out
beyond the 700 nanosecond timeframe,
you cannot do any kind of load stores, right?
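As a rough way to follow the latency-budget reasoning here, the sketch below encodes the roughly 700-nanosecond load/store threshold mentioned in the conversation; the helper function and the example numbers are illustrative assumptions only.

```python
# Back-of-the-envelope check based on the figure cited above: memory-style
# load/store semantics stop making sense somewhere around a ~700 ns reach,
# which is roughly rack scale. Not a formal spec value.

LOAD_STORE_LIMIT_NS = 700

def load_store_feasible(estimated_latency_ns: float) -> bool:
    """Return True if memory-style load/store access is plausible at this latency."""
    return estimated_latency_ns <= LOAD_STORE_LIMIT_NS

# Example: an in-rack hop might fit the budget, a cross-row hop does not.
print(load_store_feasible(300))   # True  (illustrative in-rack estimate)
print(load_store_feasible(2500))  # False (illustrative cross-row estimate)
```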
Well, on that note, if I can just jump in.
So we did a whole season on CXL here as well.
And absolutely on that, yes, CXL switching is coming,
but that's really going to be rack scale
or even maybe half rack scale.
Half rack scale.
Not even, you know, necessarily top of rack.
Right. And I think that's where things are getting interesting, as we start to answer questions about how do we shave off a nanosecond. We're not playing around at the microsecond level, right?
We're looking to try to get down to below 200 nanoseconds in our
environment. And unfortunately, neither PCIe nor, by extension, CXL,
for the kind of training and inference
that we're looking for can deal with the bandwidth,
let alone the latency for these kinds of applications
and workloads at this scale.
And that's a very real problem.
That's why NVLink is so good.
That's why Infinity Fabric is so good: because it addresses the bandwidth issue, right? It addresses the latency issue for these types of things. So you wouldn't necessarily want to extend PCIe; it's a very impractical bus-level technology for these kinds of things, right? So, when you're talking about scale, now you're reaching into the InfiniBand and Ethernet approach.
Now, the reason why you would use InfiniBand is because of the fact that it is the gold standard for the last 25 years for handling high-speed interconnects at scale.
The issue that we're trying to address is that InfiniBand is based on an already-made technology, which creates, for lack of a better way of putting it, pinned pathing for
endpoints. What we want to do is we want to try to expand upon the ability to do packet spraying
across every available link. The issue with RDMA is that you have load balancing approaches that
wind up being a problem at very large scales, and there's a maximum. You start hitting what is
effectively a tractor pull the further out you go.
And we're trying to address this scalability for the type of multi-pathing that we call
packet spraying, efficient use of all available links.
And then the reason why that happens at an InfiniBand layer or an RDMA layer is because
of the fact that the RDMA verbs API restricts the ability to resequence packets on the endpoint, right? And so what we want to do is create the transport layer to be able to handle the
semantic reordering in the transport layer itself, which means that you could actually wind up with
an open, Ethernet-based approach for that packet spraying, because you don't have to worry,
if you use this particular profile, about the ability to do the reassembly of the packets in order at the other end of the line.
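Here is a deliberately simplified sketch of the packet-spraying idea just described: split a message into sequenced packets, spray them round-robin across every available link, and let the receiving transport reassemble them by sequence number. It is a toy model of the concept, not the UEC transport itself.

```python
import itertools

# Toy model of packet spraying: each packet carries a sequence number, packets
# are sprayed round-robin across all available links, and the receiver reorders
# by sequence number. Illustration only; not the actual UEC transport.

def spray(message: bytes, links: list[str], mtu: int = 4):
    packets = [(seq, message[i:i + mtu])
               for seq, i in enumerate(range(0, len(message), mtu))]
    link_cycle = itertools.cycle(links)
    return [(next(link_cycle), seq, payload) for seq, payload in packets]

def reassemble(received):
    # Receiver-side reordering: packets may arrive in any order, on any link.
    return b"".join(payload for _, seq, payload in sorted(received, key=lambda p: p[1]))

sprayed = spray(b"all-reduce gradient chunk", ["link0", "link1", "link2", "link3"])
assert reassemble(reversed(sprayed)) == b"all-reduce gradient chunk"
```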
Yeah. So basically we're outside the realm of PCIe and fabrics, like NVLink.
Full network type, yeah.
And so you've contrasted it a bit with InfiniBand. So in the ultra-Ethernet vision,
is there a world for InfiniBand still,
or is this a competitor? That's a very good question. I think that the total addressable
market for these kinds of solutions has more than just the technology at stake,
right? So right now, there are no admins for Ultra Ethernet. Zero. They don't exist. There are lots of
admins for InfiniBand, right? There are a lot of people out there who understand InfiniBand, who
can implement InfiniBand, who are familiar with InfiniBand, and that demographic is a very real
thing. There are all kinds of things that go along with these kinds of services, too, that, you know,
happen when you start to choose these types of solutions.
The reason why we are taking this particular approach is that there is an entire cottage industry for Ethernet.
We do believe that you will find administrators for Ethernet once they start to understand
how UltraEthernet works and the training will come and the, you know, the approach will
come, the support will come.
I do think that, you know, the existence of, you know, a vertical solution like InfiniBand definitely has a place for those companies and customers who want to have that ecosystem as part of their solution.
And this is just an alternative.
And I believe very, very fervently, and you know me, Stephen, I mean, I believe
competition is a huge plus to the end consumer, right? And so we're taking an open approach.
We believe in an open approach with a large group of contributors as the way that, you know,
certain companies and certain customers are going to find their future AI and HPC needs.
And so, as you know, I mean, as you probably are aware, you know, I think several of the
top 10 HPC environments are Ethernet, including the number one and two slot, I think.
And so the goal then is to basically bring competition, to bring interoperability,
all those things that have made Ethernet great for all these years, the fact that, yeah,
you have multiple suppliers and so on.
For the most part, you know, traditionally,
these technologies, you've not seen a lot of mixing and matching.
You've not seen a lot of multiple suppliers.
And frankly, on the CXL side as well,
I don't expect it to be sort of a mix-and-match
technology like Ethernet.
I expect it to be very much approved solutions only.
Would the goal of UltraEthernet be that somebody could basically buy
components from their favorite supplier at different spots
and they would all work together?
I mean, we're certainly working under that presumption.
You know, so, like I said, the Ultra Ethernet
Consortium has eight different work groups. And so if you think about those work groups,
you can think of them in terms of the horizontal layers that we've got for ISO, right? We've got
the physical layer, the link layer, the transport layer, and the software layer. And then we've got
vertical layers that go cut across those. We've got storage, management, compliance and tests, and performance and debug.
Right?
So because of the fact that we've got this matrix-based system, you know, all points are touching all points.
It's almost like, you know, a way to make sure that you do certain things.
If you want to reach certain performance goals, you have to be able to handle the compliance and test side, so that if you have an Ultra Ethernet device,
an end user will be able to say, ah, this is an Ultra Ethernet compliant device, because of the fact that it's actually gone through these, you know, these tests and these
measurements. And that's all built in from the word go, right? And the good news is that we have,
I mean, we're the fastest growing project inside of the Linux Foundation.
I mean, in four months, we went from 10 companies to 55 companies.
We went from 60 people to 750 people. Everybody who is involved in Ultra Ethernet has expressed this openness as a key goal.
And at the beginning of the year, I wrote a blog for Ultra Ethernet about what my goals
were as chair. And I am intending, hopefully by the end of this year, when we're anticipating
the 1.0 specification to be released, to also be able to have compliance tests in place, even if
the 1.0 specification to be released, to also be able to have compliance tests in place, even if
they're, you know, depending upon when the timing is, you want to make sure that you have stable
drafts to be able to do the tests. But it's all part and parcel of what we're looking to do and make it open and
accessible and freely available for everybody who wants it. And that's the key thing that we're
doing. We will have open and freely available, you know, downloadable specifications when 1.0 is ready.
So who are the drivers behind UltraEthernet? Is it the network people? Is it the application people?
The people that have the
accelerators? Or all of the above, maybe? D, all of the above. I mean, you know, I think one of
the things that, you know, when we started Ultra Ethernet, it was really six companies that were
looking to, you know, just basically try to improve the network, you know, the Ethernet based network.
And then six became 10 and 10 became 55.
And we still have more coming.
And I think we probably didn't really think that it was going to be as big as it is.
Right. I mean, just even a year ago, you know, we were just not really sure that this was going to, you know, take hold at all. It happened very, very quickly.
And I think we tapped into something that is an I-got-to-have-it moment, right? I mean,
people are looking at this and look, how am I going to solve this particular problem? And I've
got software that I have to work on. I've got hardware that I'm working on. I've got network
protocols. I've got storage and so on and so forth.
You know, Stephen, you said you had guests on previously who were talking about this very thing as well.
So, I mean, it touches so many different things.
And I think part of the problem is that, and, you know, we were talking about this outside of the video camera, I honestly think we're at a stage right now where most people
say the word AI and they're not saying the same thing, even though it's two little letters.
You know, I think that when people talk about using AI and building AI, they're munging them
together and it's not quite that straightforward. And so what we're looking to do is we're trying to tease out the building blocks that we're using to try to make this work.
Then all the other things, the ethics, the morality, all the conversations that go on as a result of that, then that can happen with some certainty because then you have a better idea of actually what's going on underneath the hood.
But until that happens, it's just speculation as far as I can tell.
Right. So you talked a little bit about the ultra Ethernet stack.
And I believe that it looks like there is a software component or let's say a much bigger software component to it than with traditional Ethernet.
Can you talk a little bit about it?
Yes.
So one of the things that we're looking to do is adopt the libfabric approach to the software solutions.
So UEC has a tight integration between its transport layer and the software layer,
very tight relationship between them.
And so one of the things that is going on inside of the software group is the discussion about creating a libfabric provider for Ultra Ethernet that
also entails things like in-network collectives. These are optional features for UEC. As I said
before, we've got different profiles that have different requirements based upon what your
current needs are.
So you don't want to require certain things that aren't necessary based upon your implementation.
And so one of the things that we're looking to do is create, you know, this tight coupling between the semantic layer and the libfabric provider software that can be used in-network or at the edges.
So I'll give you an example.
One of the things that is a key component for the way that we're taking the approach
is to have sender-based congestion control, right?
And then what happens up at the top,
how that affects the upper-layer protocols and the software elements.
So both this transport layer and the software layer groups effectively have co-located meetings specifically to try to address that synchronization between the different components.
So that's unusual from an Ethernet perspective, traditionally speaking. So the congestion side, the telemetry side, the signaling side,
all of that gets fed back up into the software stack,
and it comes back down into these discussions at the transport and link layers as well.
So it is a key component, and I have to be careful because of the fact that
the confidentiality rules, while it's still in draft form,
mean I can't go too much into the specifics, but the integration is there, built into the discussions.
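Since the draft specifics are confidential, the sketch below is only a generic illustration of sender-based congestion control in principle: the sender owns the decision and adjusts its own window from telemetry fed back by the network. Nothing here reflects the actual UEC algorithm; the class and parameters are invented for illustration.

```python
# Generic, textbook-style sketch of sender-based congestion control: the sender
# adjusts its send window from feedback/telemetry signals. Illustration only;
# not the (still-draft) UEC mechanism.

class SenderCongestionControl:
    def __init__(self, window_packets: int = 64,
                 min_window: int = 1, max_window: int = 4096):
        self.window = window_packets
        self.min_window = min_window
        self.max_window = max_window

    def on_feedback(self, congestion_signal: bool, delivered_packets: int) -> int:
        """Update the send window from receiver/network telemetry."""
        if congestion_signal:
            # Multiplicative decrease when the fabric reports congestion.
            self.window = max(self.min_window, self.window // 2)
        else:
            # Additive increase while the path looks clean.
            self.window = min(self.max_window, self.window + delivered_packets)
        return self.window
```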
Well, I think it's great that you're supporting libfabric. And I assume that that means that
this is sort of how you're going to be looking at software interfaces generally, right? That
you're going to be trying to basically embrace what people are using and connect Ethernet into all the various software interfaces. Because what you mentioned with host-driven control of the network, well, that's something that we've tried to do for a long, long time, but it's been very difficult to actually get it done, simply because there are so many different software interfaces. But how are you going to work in that software API-driven modern world?
Well, it's not straightforward, as you point out, right?
I mean, one of the big problems that we're having to face
is the licensing issues that come along with software.
It's actually probably one of the biggest banes of my existence at the moment.
Threading that needle, that series of needles is not easy.
But we are working on
creating relationships and alliances with the important groups that are affecting these types
of solutions. So we already have a memorandum of understanding with IEEE and with OCP. We're
developing ones with OFA, which does libfabric, and with SNIA. And we've got,
you know, our management group is looking
to try to incorporate things with Redfish and Swordfish, which of course are DMTF and SNIA.
So many of these things are in progress. We haven't gotten them completed yet, but they are
in the works. But the key point to keep in mind is that we don't want to reinvent the wheel. We
want to work with and keep the industry ecosystem alive and well
by making those contributions and solutions that are already there part of our makeup and,
no pun intended, inside of our fabric. So what we want to do is we want to make sure that,
you know, if these things align and they work together, then we should do everything we can
to try to reinforce that and not try to break it, work with the flow and not against it.
So the software piece is more complex than just the code, but we are working on trying to get the organizational elements out of the way so that our coders can get right to work, which is what they're champing at the bit to do.
They don't want to work with licenses any more than I do.
Unfortunately, that's why they pay me the big coins, right?
Nickels, quarters, chocolate cookies. I almost thought you said bitcoins, and that opens up a whole other can of worms. Yeah, I ain't touching that with a 10-foot pole.
But in terms of working with those working groups, I think the fact that you're leading this and also SNIA helps, because, you know, it shows that you're the kind of person who wants to work with these organizations and not oppose them.
I mean, I think that gives a real leg up. And it also suggests a bright future for Ultra Ethernet.
When you look at the membership, basically, this is a who's who of the networking
community. I think that the companies in the networking space, but also in the server space
and HPC, they're all there. And that's really good. That really shows that this is an industry
effort. And again, back to the thesis, the premise of this whole discussion, that's how Ethernet has done everything that it's done thus far. It's not that it was some kind of genius technology. It's that it stayed true to the vision. I mean, you know, compared to even 10 or 15 years ago,
the Ethernet of today is basically unrecognizable, but it still maintains compatibility,
especially on the software side, and it is still a strong contender. So it makes a lot of sense.
Thank you so much for being part of this conversation. We could probably talk to you all day about this, as well as about SNIA and all the other
things that you're working on.
But since we can't do that, tell us where can we connect with you?
Where can we learn more about Ultra Ethernet, and where can we continue to talk with Dr.
J Metz?
Well, you can find me, of course, on Twitter at Dr. J Metz, and my blog
is jmetz.com. Ultra Ethernet does have a page on LinkedIn, at Ultra Ethernet, and
then of course on Twitter it's Ultra Ethernet. But of course, there's UltraEthernet.org. And I would recommend
that if anybody is interested in finding more of the specifics as to what it is that we're doing that we couldn't talk about, there is a white paper right smack dab on the front page of the UltraEthernet.org homepage.
It goes into all the details about why we're doing what we're doing, and hopefully shortly we'll discuss even further what we're looking to do
and dig deeper into some of the mechanics of these different transport protocols.
Very good. And, you know, Frederic, how about you? What's new with you?
Yeah, you can find me as Frederic Van Haren on LinkedIn or on highfens.com, our company website. Or you can find me at conferences talking to people and enterprises about AI
and what AI is all about.
And as for me, you'll see me as well here on the podcast,
but also at Tech Field Day events.
We just announced that we're going to be doing an App Dev-focused Field Day,
so a Field Day focused on really the next generation of modern applications.
That's very
exciting. And we are currently in the process of planning our next season of Utilizing Tech. So
keep an eye out. We'll be announcing that very, very shortly. This podcast is brought to you by
Tech Field Day, part of the Futurum Group. Thank you so much for listening. You can find this podcast in your favorite applications
or on YouTube.
If you enjoyed this,
please do subscribe,
give us a rating,
give us a review.
For show notes and more episodes, though,
head over to our dedicated website,
which is utilizingtech.com,
or you can find us on X, Twitter,
and Mastodon at Utilizing Tech.
Thank you very much for listening,
and we will see you next time.