Grey Beards on Systems - 68: GreyBeards talk NVMeoF/TCP with Ahmet Houssein, VP of Marketing & Strategy @ Solarflare Communications
Episode Date: August 8, 2018. In this episode we talk with Ahmet Houssein, VP of Marketing and Strategic Direction at Solarflare Communications (@solarflare_comm). Ahmet's been in the industry forever and has a unique view on where NVMe-oF needs to go. Howard had talked with Ahmet at last year's FMS, and Ahmet will also be speaking at this year's FMS (this week).
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks.
Welcome to the next episode of Greybeards on Storage podcast,
the show where we get greybeard storage system bloggers
to talk with storage system vendors to discuss upcoming products, technologies, and trends affecting the data center today.
This Greybeards on Storage episode was recorded on August 1st, 2018.
We have with us here today Ahmet Houssein, VP of Marketing and Strategic Direction
at Solarflare Communications.
Ahmet, why don't you tell us a little bit about yourself and what Solarflare is doing these days?
Sure. Thank you for having me on the program. Solarflare, as you may or may not be familiar, has been in TCP IP
networking, network interface controllers, and the software that goes with them for the last
decade plus. We've been delivering 10 gig, and now we're starting to deliver 25, 50, and 100 gig adapters into the industry.
We've focused our capabilities around low latency on standard TCP IP networks.
We have a fairly healthy business in the fintech world supplying our solutions there.
So that's where Solarflare has been to date. In the last two years, we've been
looking at how we can take our TCP IP-based technologies to the broader market.
And one of the areas that we've been focusing on is taking flash storage and making it available
as a fabric-connected device, NVMe over TCP.
It's NVMe over Fabric over TCP?
We shortened it.
I apologize.
NVMe over Fabrics TCP.
I got you.
Most of what we hear about NVMe over Fabrics is either token ring,
excuse me, Fibre Channel.
Jesus.
Or RDMA Ethernet. And while I don't have anything conceptually against RDMA,
it complicates things when the LAN-on-motherboard chip talks iWARP
and the other vendor talks RoCE.
So TCP lets me run over standard NICs and standard switches
and no special config?
Absolutely correct.
And it does scare me a little bit when you said token ring, and I knew what it was.
Yes, absolutely, yes.
I am a founding member of the Fibre Channel over token ring committee.
Yeah, yeah, yeah.
So without additional hardware,
it's just TCP running NVMe over Fabrics.
What about latency and all the other hardware niceties that RDMA provides and stuff like that?
How do you get by without that?
Well, I think this is one of the misconceptions
in the industry around the need for those things.
Let me be very explicit.
We run low latencies.
Our shipping adapter today runs around 300 nanoseconds from memory to wire.
So if you want to talk real latency, I think it's a misconception to say there's a TCP IP latency issue. So I think that's one
of the key things. The other thing is that the TCP IP protocol itself has been highly optimized for
many, many years now. It's gotten to a stage where its ability to move data from an application
to the wire and back, from a throughput and latency perspective, has
significantly improved over the years.
And specialized hardware beyond the basic, what we call stateless, offloads is
not really required to achieve that performance level.
And I think that's one of the premises we started with: look, we think we can run NVMe over Fabrics on standard TCP without having to add any of these special protocols like RoCE, which is really RDMA running over UDP, which is running on top of Ethernet. So there are some accelerations that certain adapter
vendors provide for making the translations from RDMA to UDP work more efficiently. But when you
have a protocol that already exists that runs at the latencies we described, the need for that
is really not there. So our premise was we can actually make this go over TCP
just as efficiently and just as effectively.
And why complicate it?
Are you saying that your latency is equal to the RDMA model
or that there's some additional latency but it's not significant?
So as of today, the current product that we've released and are shipping, which is pre-standard
but will be compliant to the standard when it comes out, is within 5% of RDMA
latency on TCP.
Okay. So RDMA latency today might be on the order of, let's say, 10 to 50
microseconds. You'd be 5% on top of that. It's almost in the noise.
Yeah, that's exactly right. And in fact, we have three independent vendors, Supermicro, Dell, and Red Hat, who ran these performance benchmarks recently
and validated exactly what we said, within a hair's breadth of RDMA. So they did a comparison of RDMA, iWARP or RoCE, versus your standard
Solarflare NIC running NVMe over Fabrics TCP? And it was within the noise kind of thing?
Yeah, within the noise. I mean, Dell did a specific set of tests with Intel Flash storage devices,
and we were within 5% on read-write 4K block mixed traffic.
Iometer kinds of things?
Yeah.
Well, Iometer is the other one; I think this one's called FIO.
FIO. Yeah, yeah.
Got it?
Old age getting to me.
Can't remember the acronyms anymore.
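For anyone who wants to run that kind of 4K mixed read/write comparison themselves, here's a minimal fio-from-Python sketch; the device path, read/write mix, and queue depth are placeholders, not the parameters Dell actually used.

```python
import json
import subprocess

# Minimal sketch of a 4K mixed read/write fio run (placeholder device and settings).
cmd = [
    "fio", "--name=4k-mixed", "--filename=/dev/nvme1n1", "--direct=1",
    "--rw=randrw", "--rwmixread=70", "--bs=4k", "--ioengine=libaio",
    "--iodepth=32", "--numjobs=4", "--runtime=60", "--time_based",
    "--group_reporting", "--output-format=json",
]
out = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
job = out["jobs"][0]
print("read IOPS :", round(job["read"]["iops"]))
print("write IOPS:", round(job["write"]["iops"]))
print("read mean completion latency (µs):", job["read"]["clat_ns"]["mean"] / 1000)
```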
Gosh, you should give me links to those reports because I think the listening audience would be very interested in those things.
Yeah, absolutely.
And next week at the Flash Memory Summit, it's a big splash for us.
And I'm going to tell you about something else we've done that significantly changes the game plan again. But I mean,
at the end of the day, we've established that TCP
as a native transport for NVMe
is within a hair's breadth, and we think we can
be equal or better over the next several years as we tune it
and optimize it. And you don't require any special switches or things of that nature in the environment?
That was the motivation for us, and exactly right.
We wanted to work on standard Layer 2 switches, top of rack switches, and yeah, nothing special.
I think more than standard top of rack switches,
it's standard top of rack switches in the default configuration.
Absolutely right.
Because data center bridging isn't rare in the 10 gig switch market.
If you're going to put that in your data center,
the switches you would put in your data center support it.
But it's like jumbo frames.
There's things you have to twiddle with and you have to twiddle with them everywhere. And
if you miss one spot, bad things happen. No. And our motivation was that we saw that
and all the customers that we engage with, especially in the hyperscalers, all had the same
issues that they've been addressing, trying to fix.
With our solution, you don't need to do that.
Yeah, I would imagine some of the hyperscalers don't have things like DCB
in their white box switches either.
No, and I think they've got other problems to fix
rather than putting data center bridging into the infrastructure.
So, Ahmet, I'm just trying to understand the protocol, I guess.
NVMe over Fabrics over TCP can work with anybody's NICs or only Solarflare's NICs? I mean,
does it require a software change? Yeah, I love it because you just set me up again.
We actually have a plugin that goes into the standard NVMe transport,
which will be part of the NVMe.org standard when the spec gets ratified,
so that anybody's NIC can be used.
It's just a standard Linux plugin for the NVMe transport.
Oh, my God.
But I'm sure that the 5% you're quoting is with your NICs that do partial offload of this.
So I'm going to give you another number.
We ran this on top of a Mellanox NIC as a standard NIC, and we got it to within 10% of RoCE.
Okay.
Oh, my Lord.
And the Mellanox NICs do RoCE offload. So that's kind of that.
That's the comparison I was really looking for.
I like that.
Yeah.
Absolutely.
And then when you look at sustained reads,
4K reads from a latency perspective (I was talking about IOs per second previously),
when you do latency, it lays right on top of each other.
You can hardly see the difference in the bar charts.
And we'll share this with you guys.
You can have a look at this.
But we'll be publishing it next week at the Flash Memory Summit.
And as much as I'll denigrate 4K reads as a benchmark
for measuring storage performance,
it's the right thing for measuring network latency.
Yeah, it's the way they do NVMe latency today.
So after that, it starts looking a little bit different
from an RDMA frame and stuff.
Yep, yep.
So we're extremely excited about this.
I mean, we want to enable the industry.
That's why we're taking a little bit of a different approach.
This plugin will be made available to the industry.
And if you run on
our adapters, you get the benefits of our latency acceleration. And in fact, we've
got technologies in our adapters, and we'll talk about that next, that allow you to scale the
number of virtual instances or number of streams you can support into the thousands on a standard network interface controller, versus, you know, 256 SR-IOV
streams. Oh, I see. Okay. So one of the questions I have is that the hardware at the
other end of this is a standard Red Hat server with NVMe SSDs running. What sort of software runs
at that point? I mean, you have to be an NVMe client, right? Or, I'm sorry,
target, right? Yeah. So the software that comes with it is a target and an initiator. So if you load it into
your server, a standard server with a bunch of SSDs, both target and initiator, any server
that has the initiator software plugin enabled will see that storage, and any server
that has both running, all storage is visible to all servers.
And is that your target or is that the Intel SPDK driver with TCP added?
Yeah, so just to clarify that: at both ends, you have a TCP IP stack talking to a NIC,
then you have the NVMe driver and transport layer for the SSDs on both
ends, right? Our plugin, which is analogous to the RoCE plugin, plugs into the
NVMe transport, and we have two versions of it. One is a target version and one is an initiator version.
Both of them are loaded at the same time.
And so the target one basically grabs the command off the wire,
on top of TCP, and pushes it directly into the SSD through the NVMe transport.
Right.
So you can read and write from the SSD devices remotely.
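For context, here's what the initiator side looks like with today's upstream Linux NVMe/TCP transport and nvme-cli; this is a sketch only, since the episode predates ratification and Solarflare's pre-standard plugin had its own packaging. The address, port, and NQN are placeholders, and the far end is assumed to already be exporting its SSDs (the vendor's target plugin or the upstream kernel nvmet target).

```python
import subprocess

def run(args):
    """Run a command, echoing it first (requires root and the nvme-cli package)."""
    print("$", " ".join(args))
    subprocess.run(args, check=True)

run(["modprobe", "nvme-tcp"])                          # load the TCP transport
run(["nvme", "discover", "-t", "tcp", "-a", "192.0.2.10", "-s", "4420"])
run(["nvme", "connect", "-t", "tcp",
     "-n", "nqn.2018-08.example:flash-shelf",          # hypothetical subsystem NQN
     "-a", "192.0.2.10", "-s", "4420"])
run(["nvme", "list"])                                  # remote namespaces show up as /dev/nvmeXnY
```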
You know, I think I just heard the storage industry crash here.
You've got software that runs with anybody's NICs that can do NVMe over Fabrics TCP at roughly maybe 10% more overhead than a normal RoCE or iWARP NIC?
Yeah, so I'd like to get to the next story with you guys.
Yeah, what the hell's the next story?
Well, before we get to the next story, it's like, Ray, remember,
this really does sound a lot like iSCSI when we'd all been used to Fibre Channel, doesn't it? Yeah, yeah. I guess... that wasn't... No, no, no, no. Fibre Channel still had a performance advantage when you introduced iSCSI.
Yes.
This is coming across with a performance advantage, and it's a brand new product.
Yeah, but you can't argue that there's any substantial performance advantage of Fibre Channel versus iSCSI today.
I agree.
I agree.
But this is 10 years later.
Yeah.
And just to give you. This is accelerating the whole thing.
Just to give you another reference point, we did run iSCSI to the remote systems using the iSCSI initiator and target.
And when you get to about a queue depth of 16, the iSCSI flattens out on IOs per second. The local flash device, of course, just scales
all the way up; for this particular device, it was about 350,000 IOs per second.
And when you look at NVMe-oF with TCP, you'll see it scale all the way up to that range. So iSCSI flattens at queue depth
16, but... Because of all the overhead.
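A simple way to see that kind of knee for yourself is to sweep the queue depth and watch where IOPS stop scaling; a rough sketch, with a placeholder device path, run against whichever block device (iSCSI or NVMe-oF) you want to test:

```python
import json
import subprocess

# Sweep queue depth on a block device and print IOPS at each depth;
# the knee is where IOPS stop increasing. Placeholder device path.
for qd in (1, 2, 4, 8, 16, 32, 64):
    cmd = [
        "fio", f"--name=qd{qd}", "--filename=/dev/nvme1n1", "--direct=1",
        "--rw=randread", "--bs=4k", "--ioengine=libaio", f"--iodepth={qd}",
        "--runtime=30", "--time_based", "--output-format=json",
    ]
    out = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
    print(f"QD={qd:>2}  read IOPS={out['jobs'][0]['read']['iops']:,.0f}")
```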
So if you think about this, we've actually optimized the equivalent of iSCSI block
with NVMe block to a better scaling protocol.
Oh, yeah, absolutely.
There's no doubt in my mind that NVMe is a much more serious scale protocol.
In fact, we're of the opinion that NVMe is going to replace SCSI
as the lingua franca of performance storage.
Absolutely. And so, you know, you don't have to sell us on NVMe. We're all for it. Why isn't it completely baked yet?
Yeah, where's the TCP? Where's the multi-pathing? Why can't I buy it at the local Best Buy? Okay,
that's a slight exaggeration. No, but I agree. I mean, this is an enabler. In many instances, we've seen this throughout the industry.
Once you create this sort of enabler, then the adoption goes crazy because people are able to use it and get the value.
And the plugin's available for free?
It will be, from the NVMe organization, when they ratify it, which is coming up for a vote in a couple of
months, I believe. And it will, of course, be put out to the open source community.
People will be able to enhance it and move it forward. And just to stay on this discussion,
you know, at the end of the day, this is actually running on the kernel TCP IP stack.
It is. And by the way, RoCE runs inside the kernel operating system as well.
It's on top of sockets, so there's a lot of room for people to change it and utilize it.
Room to squeeze all sorts of stuff out of that. Okay. So that's all well and good for Linux, which, of course, opens up the whole hyperscaler market for you.
Have you been at all successful at twisting Microsoft or VMware's arm?
We can get VMware to utilize this solution because all you need to do is, of course, stick Linux in there under your hypervisor. But to make it native in the driver,
we think it's a fairly straightforward effort
for VMware to take hold of it.
But I think both Microsoft and VMware
are conflicted a little bit
in enabling this right now.
For VMware, it's the vSAN discussion.
How does it play with vSAN?
And for Microsoft, it's SMB Direct and the approach that they've taken to that.
To me, this is a matter of time that they will come on board because the value here is so strong.
Absolutely, yeah.
Today, I think we're nibbling at the edges of Microsoft and VMware, but I believe that they will soon be on board.
If you can convince the hyperscalers to go this way, I don't think there's any doubt that the rest of these guys will move there sooner or later.
So what's the other shoe to drop here?
You mentioned earlier that there was another story coming out next week.
Can I guess?
Go ahead. Go guess.
You're doing NVMe over Fabrics,
over TCP, Target, and Silicon.
Well, that's something coming in about nine months from us.
But let me try something else.
Okay.
So one of the key things is,
with NVMe versus SCSI,
or SCSI block storage in Linux and various operating systems, the problem was the
inefficiencies. So we removed that by making it a highly efficient protocol and structure in terms
of the number of queues and how simple it was, directly over PCIe buses. Well, the problem is
that now you scale that single node up with lots and lots of
high-performance devices. Say, for example, you take the new Intel Optane SSD, which is around 10
microseconds of latency, and then you scale up a single system with lots and lots of SSDs,
you start to see that the operating system bottlenecks and starts to not be able to
efficiently service the requests.
So in that direction, Intel, of course, and various other people in the industry have
worked on SPDK, which is moving block storage into user space.
When you do this, you automatically gain a significant advantage.
You can actually achieve the IOs-per-second and latency scaling within a single node
much more easily.
Removing the overhead associated with the kernel,
context switching, and all those things
that you would have to do to coordinate stuff.
But if you move it into user space effectively,
now your data is moving straight out from the application
straight to the wire or straight to the SSDs.
Yep.
We got a good presentation from Intel
on all of this at Tech Field Day a year or so ago.
So now we've taken that same approach and said, what if we move TCP IP into user space?
TCP IP stack?
Yes. If you're not familiar, Solarflare is one of the leading companies here.
In fact, in the hyperscaler world, we're starting to make significant inroads by
moving TCP IP out of the kernel into user space for containerized load balancers, etc. So we
have a technology, Onload, which is a TCP IP stack in user space, and we utilize that. We plugged it underneath SPDK,
and we did the target and initiator in user space.
And to the 10 microsecond average latency of the Optane,
we only added 15 microseconds overhead.
So 25 microseconds to the operating system.
So an application doing a read and write
remotely will take about
25 microseconds to get the data back.
My lord.
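Just to spell out the latency budget being described (the numbers are from the episode; the arithmetic is mine):

```python
# Latency budget for a remote read, using the figures from the discussion.
device_us = 10          # Intel Optane SSD read latency, roughly
fabric_us = 15          # added overhead: user-space TCP stack + SPDK target/initiator
total_us = device_us + fabric_us

print(f"remote read ≈ {total_us} µs ({fabric_us / device_us:.1f}x added on top of the device)")
print(f"≈ {1_000_000 // total_us:,} serialized reads/sec per queue at one outstanding I/O")
```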
As you started talking about the
SPDK,
all of a sudden it came to me where you were
going. It's like,
yes, why not?
Context.
And you're doing that in order to reduce the overhead or to be able to scale to higher numbers of devices being supported.
So both are true.
Overhead associated within the system to be able to get data in and out of storage.
In user space. Yeah.
Yeah. And then the ability to now scale that to a large number of
initiators. So let's say you have a cluster of 500 servers. Can you scale a single namespace to that sort of size?
With this sort of a solution, there is no barrier to that.
So I'm just trying to understand what you just said.
So a single namespace on an NVMe device sitting in a server running your target software
could be connected to 500 separate user spaces and be shared across those?
That's one of the ways that we're driving that scalability.
The number 500 is actually arbitrary.
I just picked it up.
No, no, no.
It could be thousands.
It could be two.
But once you get out past 100 or so,
you're moving beyond the usual enterprise
into the hyperscaler world.
Yes. That's right. So, I mean, even if you're just doing a small cluster of 20-odd servers,
scalability with NVMe over Fabrics using RoCE and others is still problematic. TCP in kernel
space is good, but it's still problematic, like RoCE. Now you put it into
user space, and you get the efficiency and scalability. And one of the factors we don't look at enough is that in
the operating system path, you create a lot of jitter on your data. That means not every request
goes through with the same response time. And when you load a system up, you start to get a lot of jitter.
When you move it into user space,
where the data is moving directly
in and out of the adapter
and in and out of the SSD device
without any interference,
your jitter goes away,
your scalability increases
within the server and across the network.
You mean by putting it in user space,
you reduce the variability of the access overhead?
Yes. I don't understand that. So think about this. I have a piece of data that I want to put on the
wire. The wire has got lots of frames running at whatever the rate you're running at. So if I take
that piece of data and put it through the kernel and I have hundreds of applications trying to do
the same thing, you're going to get variability and jitter.
Because of the kernel bottleneck.
Okay.
Yeah.
What if I could move that to the adapter?
So if I move that to the adapter directly and this comes back to our technology, which
is we're able to scale a single adapter into thousands of virtual instances.
So we can go up to 2,000 virtual instances on our current product.
Yeah, because you still have
context switches between users and stuff like that.
I mean, it's not like the context switches go
away. No, but user space
context switches are much less costly
than user space to
kernel space and back.
Yeah, absolutely.
It's night and day.
In fact, we do a load balancer with NGINX reverse proxy
in user space versus the traditional kernel. And we can scale linearly up to 400 queries per second,
where a standard x86 server sort of bottlenecks at around 20 cores and about 100 queries per second.
Okay.
So in silicon, you're presenting a very large number of virtual NICs.
And then each one of those virtual NICs has a user space TCP IP stack.
Dedicated to a user space.
Yes. Okay. And the telco guys are dying for this, because that's what they want to do. Oh, God, yes. Oh yeah, no, no, I mean, the storage guys are dying for
this stuff too, if this is for real. Unfortunately, it brings up one of my pet peeves. Go for it.
VMware supports SR-IOV.
Yeah.
But only for VMs, not for the VMkernel adapters.
And sometimes what I really want to do is say,
give me this physical NIC, split it up into four virtual NICs,
and use one of them for this VMkernel adapter,
and they won't let me do that.
That's right.
And you ask them why, and they go, it's for this.
It's like, no, that's the way you use it.
That's not the only thing it's for.
Yeah.
So.
Sounds like a VMware issue.
Oh, yeah.
No, it's definitely a VMware issue.
So think about containers, Linux and containers, and these solutions.
Okay, so the Linux container environment running under Kubernetes or something like that,
how many user spaces are they?
If I've got like 100 containers running, is it one user space or is it 100?
All the applications run in user space.
So each container runs as a separate user space under Kubernetes?
No.
So we're going to get into a deeper discussion here.
Sorry. I'll just say that if you think about it, the application is running in a container, in a pod, in an instance.
And in this model, each container has a TCP IP stack.
Yep.
Okay.
In the kernel, that is. Now move the TCP IP stack into the pod, inside the instance.
Each one of them has its own TCP stack running as a library within their application, without modification, by the way.
Now you have something really scalable. And you're saying the pod is effectively an application running multiple containers?
Yeah, well, I'm just using the Kubernetes terminology.
Yeah, it could be master-slave thing.
I just wanted to give you a hint.
User space is something that's really taken off in the hyperscaler environments
because they're finding if you move libraries into user space,
make it a part of your application, your application efficiency increases.
Because they've always been able to scale users,
but haven't been able to scale kernel functionality.
That's right.
More or less.
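As an aside for readers, the library approach being described is typically an LD_PRELOAD-style intercept of the socket calls. Here's a minimal sketch assuming a Solarflare OpenOnload-style library; the library name and the nginx command are assumptions for illustration, not details from the episode.

```python
import os
import subprocess

# Minimal sketch: run an unmodified network application with a user-space
# TCP/IP stack loaded as a library (OpenOnload-style LD_PRELOAD intercept).
env = dict(os.environ)
env["LD_PRELOAD"] = "libonload.so"   # socket calls get serviced in user space

# The same idea applies per container: set LD_PRELOAD in the container image
# or pod spec so each container carries its own stack as a library.
subprocess.run(["nginx", "-g", "daemon off;"], env=env, check=True)
```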
And this only becomes more true as core counts and processor power increases.
Now you've hit the crux.
Core counts are a big game changer for kernels.
I think it's more of a kernel issue than, well, I understand how this would make user space scalability important and effective, but kernels not being able to handle their functionality
across all 20 cores properly without minimizing context switches, I guess is a harder problem,
maybe. I don't know. Yeah, but think about it. I think this is something that really gets us
into the ether a little bit, but think about microkernels. Think about the architecture changing from a monolithic kernel to a
microkernel architecture.
Yeah.
Now you start to see that there is a way you can take kernels forward with a
micro architecture.
Microkernels are a path to that.
And that translates into user space as well.
So I'm going to stop there because I think we've gone off NVMeOF.
Well, coming back to NVMeOF,
our friend Jay Metz presented to us at Storage Field Day not all that long ago.
And he said that for NVMe over TCP,
there were discovery and flow control issues to be addressed.
Can you fill us in on what's going on there a little bit? No, I think that's been well addressed, and the code that we've
generated has taken some of those issues into account. Without
knowing too much of the detail, the working committee has addressed these issues well.
And I think that you're going to find that it's not as big a problem as people thought.
So I don't know the details behind it, but I know that it's not a major issue right now. And the other issue with respect to the current state of the standards is,
I call it multi-pathing or high availability.
How is that addressed with
NVMe over TCP? Well, we've got multi-pathing and high availability built into the TCP stack, and it's
been there for decades now, right? Yeah, yeah. So why solve another problem when you already have it in TCP IP today?
This is the fundamental point, because TCP IP is not a standalone, stationary technology.
It's been evolving, and keeps evolving, on a daily basis.
Now, if you abstract yourself to the top of that, you get those values. But there are pieces and places where, you know, like multipathing, I want the storage layer to manage it, not the network layer to manage it.
And, you know, in iSCSI, we typically do SCSI multipathing, not TCP multipathing.
So, you know, it's easy to say, well, we'll take care of that in the network, but that means I can't load balance across those links
as easily, because the storage devices don't have any control over which paths to take.
Sure. And I think this is where, and we get a little bit religious here, some people believe
having that control is important for a storage application, and some people say, if the
primitive's there, I'll use it and I don't need to worry about it.
So we get into that sort of dialogue and discussion.
Well, it is Greybeards on Storage.
Yeah, yeah, yeah.
Well, this presents all sorts of opportunities.
I mean, you see, you know, you could put a server with, you know,
let's say 24 NVMe SSDs at the bottom of a rack
and then fill it up with 39 other servers with the software,
and this thing would be a screaming machine.
Yep.
That's our prediction, and that's our demonstration.
And we think it's going to move very rapidly.
Or worse yet, you take a whole rack, 40 servers, plug it in with 24 SSDs, and then you have this server that's
just a massive IO producer.
I mean, God, it would be, I hesitate to say it.
It's millions and millions and millions of IOPS with 10 to 50 microsecond
latencies sitting out there in a corner.
Yep.
Okay, so it doesn't have any data services.
It doesn't have any snapshots or thin provisioning or any of this other stuff.
But if I can do a…
No, but think about the composable infrastructure where that box with 24 NVMe SSDs…
It doesn't even have to be composable anymore.
And 400 gig ports gets attached.
Some of that storage gets attached by VMs running software-defined storage that does the snapshots and the erasure coding and all of that stuff for you.
Yeah.
And now you throw in some things like erasure coding sitting at that layer.
Yeah.
So, yeah, I think this is what we're thinking in terms of our position here.
We're thinking about how to move this forward to enable the industry.
And the dialogue we're just having is the types of dialogues we'd like to see.
Oh, yeah.
It's going to be an interesting couple of years as
NVMe over Fabrics, which
I think is pretty much
inevitable,
settles out to, what's the fabric?
I'm trying to do the math here.
It's roughly 8 million IOPS
per 24 NVMe
SSDs times 40 is 320 million IOPS in a rack.
Now, obviously, you need some sort of networking to get in and out of that rack very quickly.
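As a back-of-envelope check on why the network matters at that scale (the 320 million figure is from the conversation; the rest of the arithmetic is mine):

```python
# Rough bandwidth needed to feed 320 million 4K IOPS in and out of one rack.
iops = 320e6                      # IOPS for the whole rack, from the discussion
block_bytes = 4 * 1024            # 4 KiB per I/O
gbits_per_sec = iops * block_bytes * 8 / 1e9

print(f"≈ {gbits_per_sec:,.0f} Gbit/s of payload")
print(f"≈ {gbits_per_sec / 100:,.0f} fully loaded 100GbE ports, before any protocol overhead")
```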
So let's talk about the speeds, right? I mean, for a long while it was Fibre Channel at the edge.
Now, most of the demos and POCs we're building are on 100 gig ports,
because that's where people want to go, you know, ultimately. Absolutely, yeah, 100 gig ports.
And, you know, we being an Ethernet company, we're already planning on 200, to 400, to terabit
ports. Oh my Lord. So the spigot is open on the performance bottlenecks of the network.
And at the end of the day, the commoditization of Ethernet and TCP IP at that scale is now going to pay a lot of dividends for a
lot of people, because now you can get rid of all those barriers. You think about a rack of servers,
and multiply that number of servers by the number of cores you're going to see next year, when they're
going to be averaging around 48 cores, maybe a little bit
more depending on whose server you use. The number of cores and number of
threads running on a single rack of servers is in the 50 to 60 thousand
range. I'm still having trouble conceiving of a rack that can support 320 million
IOPS. It's a brave new world, right?
Brave? It's gone beyond brave. It's almost, I would hesitate to say, inconceivable.
I don't think that means what you think that means.
Yeah, you're right. You're right. Well, this is extremely impressive. I have a fear that this is going to have some serious ramifications for the storage industry. I mean, what's happening today is we have a lot of specialty boxes
with their own hardware and software trying to implement NVMe over Fabrics
with some modest amount of data services.
This is going to come along with effectively free software, right?
And anybody's NICs and almost anybody's server,
and provide almost
equivalent
IO performance. Yes, there's no data services,
but if you want data services, slap
software-defined storage in front of it.
And the challenge starts to become for the
software-defined storage to add services
without screwing up the low latency
world. Yeah, well, erasure coding is
going to hurt. I mean, but, you know, and everything,
and, you know, all the metadata updates are going to hurt,
but all that stuff, I don't know.
I don't know.
I think we'll go out and develop a new storage box.
Haven't you done that enough, Ray?
I probably have, but this is a whole other game.
Well, we're changing the lingua franca, and therefore everything else changes with it.
It's really, we live in interesting times.
This is orders of magnitude changes in performance.
I mean, you know, SSDs came along okay.
It went from, you know, five milliseconds to one millisecond. It's not
quite an order of magnitude, but it's close.
This is multiple orders of
magnitude.
Yeah, when you're down to 100 microseconds
going down to 10 microsecond
latency out of a
storage device,
it changes the game significantly.
As we're
going to see even more when we start moving this stuff to the memory bus, it changes the application model from, well, if it's only 25 microseconds to go get that data from storage, why do I have to cache it in memory?
Yeah.
I mean, all these in-memory databases and stuff like that, all that stuff can almost go away if the data's not that far away.
Especially within the first layer of a rack.
You think about three or four racks connected at the top of a rack
with only one hop between any point.
Then you're into a scaling world that's much more manageable.
Maybe I want to become a hyperscaler.
I may need more money, though.
Yeah, if you're going to compete with Google, I think you need a couple billion dollars
for the A round.
Yeah, it'll take me a while to raise, apparently.
So what else you got going on here?
I mean, it's like you're upending this.
Wait, this wasn't enough?
Yeah, yeah. This wasn't enough? Yeah.
Well, I mean, I think this is enough for us right now.
And, you know, I look forward to seeing people if they come next week to the Flash Memory Summit.
God, this should be interesting. I'd be happy to talk to people.
All right.
Howard, any last questions for Ahmet?
No, I think we covered it.
Ahmet, is there anything you would like to say to our listening audience?
Actually, you know, it only takes time for us to get it right.
I think we got this one, not just us, but the industry is moving in the right direction on this one.
Yeah, I agree.
Well, I agree.
Well, this has been great.
Thank you very much, Ahmet, for being on our show today.
Appreciate it.
Next time, we'll talk to another system storage technology person.
Any question you want us to ask, please let us know.
And if you enjoy our podcast, please tell your friends about it.
And please review us on iTunes as this will help get the word out.
Please pay particular attention to the column marked excellent.
Yeah, that's it for now.
Bye, Howard.
Bye, Ray.
Bye, Ahmet.
Bye.
Bye.
Until next time.