Not Your Father’s Data Center - Data Analytics Across Industries
Episode Date: February 14, 2023

Jonathan Friedmann knew the semiconductor business would find its way into his life someday. Friedmann, CEO and founder of Speedata, spoke to Raymond Hawkins about his career and his latest venture, Speedata, which provides a state-of-the-art Analytics Processing Unit (APU) to optimize big data and analytics workloads.

While data is growing at an exponential rate, processing speeds are lagging behind. Multiple cores on a single chip were the industry's first-step solution; data centers were the next. Many data companies use clusters of hundreds and thousands of nodes to meet their complex processing needs. Friedmann's Speedata offers a different approach. "Speedata is looking at a workload that is arguably the biggest workload in the data center today, databases and analytics," Friedmann said. "Essentially, you have a database, and multiple industries hold their information in databases, and then they want to extract information from them. And you look at the public clouds; they are giving multiple managed services to handle that. The biggest and most important managed services in the world you find are all databases and analytics."

With the knowledge that big data is a large part of the processing need, Friedmann and Speedata designed a chip to target this specific workload. "Today, 99% of big data is processed by the CPU," Friedmann said. That's a large slice of the pie for Speedata to tap into and lessen the burden. "You look at what's happening in analytics and databases; the first revolution did not happen yet."

Data isn't just coming from the outside. Computers are generating synthetic data, creating even more need for processing solutions. Friedmann likens what Speedata is doing to the work of plumbers: building the tools and making the pipes wider so companies can better extract value from their data.
Transcript
Welcome to another edition of Not Your Father's Data Center.
I am your host, Raymond Hawkins, and today we are joined by the CEO and founder of Speedata, Jonathan Friedmann.
Jonathan, how are you today, my friend?
I'm doing very well. How are you doing today?
I'm good. So to set everybody's expectations, I, like always, am in Dallas, Texas, here at our headquarters. Jonathan, where are you joining us from?
I'm located in Israel, in Netanya, not far away from Tel Aviv.
All right. Netanya, just north of Tel Aviv. Is that right?
Yeah, that's correct.
All right. Well, for those of you who don't know, our friends at Speedata are in the processor business.
We can do a little bit of homework and a little bit of setup around computers and how they work.
But before we get into that, Jonathan, why don't you tell us a little bit about yourself?
Where did you grow up?
Where were you born?
Where did you go to school?
Give us a little bit of the history of Jonathan Friedmann and how you came to decide to found your own processor business.
Sure. First of all, thank you for having me on your podcast.
Glad to have you.
Looking back, I grew up in Israel like a standard Israeli kid. I will say that my father is a professor of law, and as such he was going on sabbatical every three to four years, so I had the privilege of seeing the world as a child. I've been in England, a couple of times in the U.S., and also in France, among other places. So I had a great journey as a kid.
After that, I went to study in Israel, basically electrical engineering, doing a first degree, a second degree, and a PhD in electrical engineering. I'm a mathematical, logical person, so I dug deeply into that.
And then I came out to the semi-, actually to the Israeli high-tech world. And right away, I got hooked by the semiconductor business. And I've been doing semiconductors for all my professional life.
So we can call you Dr. Friedmann if we want for the rest of the show. Is that right? Okay, we'll stick with Jonathan.
All right, you said you got out to multiple trips around the world and got to visit lots of places.
You said you went to the U.S. a couple times. Give us a couple U.S. either highlights or lowlights,
places in the U.S. you either loved or are glad you don't have to live in.
So my father was a professor at Harvard, a visiting professor at Harvard.
So I was in Boston for a year.
I'd say I was quite small.
I was seven years old.
My deepest memory from that time is just freezing, coming back from school,
hardly being able to lace my shoes. So that was a pretty bad memory from Boston.
Yeah, Boston winters are not like Israel winters, that's for sure.
The second time, I was already a sophomore in high school. I'll tell you two things about that. First of all, I was already then a Philly fan, a 76ers fan. So that was the last year Dr. J played, and I got to see him. That was really a great year for me.
And I actually got to see Charles Barkley up close. My mother was working at a hospital where he had some sort of injury, and I got to meet him.
And I don't know,
his arm was like the size of my whole body.
But that was really one of the
greatest things I remember
from being a sophomore in Philadelphia.
All right, Jonathan, we are going to go totally off script here because you, for people who
listen to our podcast regularly, know that I enjoy sports a good bit. And you've brought up
two things. First of all, you don't often get a Julius Erving reference while talking about the microprocessor business. Number one. Number two,
Charles Barkley and I went to school together. So I am going to grab my camera and see if we
can get this. And I know they're going to yell at me when they produce this. Can you see that
poster up there on the wall? That's Charles Barkley's signed poster in my office. He and
I went to school together. So I'm going to get yelled at for moving that camera, but hey, it's my show. You probably didn't take it well when he left Philly?
When he left Philly, the 76ers, I could not take it at that time.
Yeah. I mean, for those of us who remember Chuck, Chuck was a 76er. That's who he was in the NBA, before the whole Arizona thing. And, you know, we just identify with him as a forward for the 76ers. And you're right, Chuck is not only a bigger-than-life personality, he's just physically a big guy. So great, great to have references to my friend Charles Barkley on a show about processors. Good stuff.
Well, good. Well, great to hear about your time here in the States and
your visits. Your dad was on sabbatical. Pretty interesting stuff. All right. So you get an
electrical engineering degree and you dive in. For those of us in the U.S., we probably don't have the appropriate appreciation for how big the tech business is in Israel. We think of Silicon Valley out here
in the Bay Area, but there's a very similar gravitational pull there in Israel around
technology. Is that correct?
Yeah, definitely. I would say that when you look into Israel, which is a much smaller country than the U.S., the number of engineers and high-tech workers per capita is probably much higher. It's much more concentrated and intensive.
And it's also something that Israel is so proud about.
So it's constantly on the news and everybody talks about it.
That's the Jewish mother.
That's what it used to be.
They want their kids to be doctors and lawyers.
And now it's high tech.
Doctors, lawyers, and now electrical engineers.
I got it.
All right.
Very good.
All right.
So you get in the degree in electrical engineering.
You get in the tech business.
Do you start right off in the processor business?
Or did you do any other things?
So actually my PhD is in signal processing, from when it used to be a very important part of electrical engineering. It still is, but I guess most of the innovation has shifted away from that area.
So even if you look at Israeli high tech, the first decade of the 2000s was
all about communications. And that's what I was doing in that first decade. There was a lot of innovation around communications. Together with one of our co-founders, who is our chairman, Dan Charash, I was part of a company called Provigent, which developed not only a communications SoC, but also an accelerator, a high-speed modem for cellular infrastructure. The company became a global leader in its market, selling to nine out of the ten biggest OEMs. Actually, even today, I think one third of the world's cellular users are going through Provigent's chips. We were finally acquired by Broadcom in one of Israel's largest acquisitions at the time, in 2011.
So in that part of my life I was accelerating communications. After that, I started looking into other workloads to accelerate and what additional things could be done. Together with the whole Israeli high-tech world, I made a switch to processors, and I've been doing processors for the last eight or nine years of my professional life.
So I'm guessing after the sale, you and Dan spent a little bit of time in the south of France counting your money and then decided, hey, let's get back in and make a living.
Very good.
Good stuff.
All right.
Awesome.
So now we're in the processor
business. So before we get too deep in the processor business, I do want to set the stage
a little bit for our listeners, because I think we're going to get, especially as we talk about
what you guys build specifically. So when we think about a computer, there are a few devices.
There's input/output: your keyboard, your monitor, your printer, the I/O devices. There's memory, where your computer works on stuff immediately in its purview, the active space where an application runs.
There's storage, where you write your information after you're done working on it.
And then there's a processor that talks to all of those devices and collaborates between all of those devices.
There's some other stuff, but at a high level, that's what we're doing, right?
We got I.O., we got storage, and we got memory.
And playing quarterback for all of that is a central processing unit. And then I think there are kinds of extra layers of microprocessors that do things to help the CPU, and specifically to do parts of the tasks that the CPU might take more time to do. The CPU is more of a general processing function, and there are processors that are built to do very specific tasks. And that's what we're here to talk about today: not the central processing unit, but processors that do very specific workloads, often referred to as accelerators. Is that a fairly useful,
high-level description of what you guys do and how you fit in the picture, Jonathan?
Yes, definitely. Let me just add a few more
things around that. So your description actually was a very good description of computers, but mainly around home computers and PCs.
Yeah, personal. Yes, absolutely.
I'll add to that, that everybody's talking today about the growth of data and the cliche that data
is the new oil.
And really, data is growing at a huge rate.
And when you look at what commercial applications are trying to do, so instead of having a personal
computer at home, they would have what's typically called a server, which is essentially very
similar to a home PC, but maybe a little bit stronger in its capabilities.
And then when you look at what happened in maybe the last decade, two extremely important
trends, which everybody's familiar with, I think in the tech business,
the growth of data, actually the explosion of data: data today is approximately doubling every two years, really exponential growth, on the one hand. And then on the other hand, you have general-purpose processors progressing, but not at
the rate they used to.
So there used to be something called Moore's Law, under which, let's say in common language, processors would get faster, would give better performance, doubling their performance every year and a half.
You can argue what exactly is the state of that rule as well,
but that definitely does not happen today.
I would approximate that general purpose processors are improving
by approximately 5% every year now.
So these two trends cause a huge gap,
which actually boosted the whole industry to one
solution after another.
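To put rough numbers on the gap Friedmann describes, here is a back-of-the-envelope sketch in Python. Only the two rates come from the conversation (data doubling every two years, CPUs improving about 5% per year); the ten-year horizon is an illustrative assumption.

```python
# Data volume doubles every two years; general-purpose CPUs improve ~5%/year.
years = 10
data_growth = 2 ** (years / 2)   # 2x every 2 years -> ~32x over a decade
cpu_growth = 1.05 ** years       # ~5% per year     -> ~1.6x over a decade

print(f"data: {data_growth:.0f}x, CPU: {cpu_growth:.1f}x, "
      f"gap: {data_growth / cpu_growth:.0f}x")  # gap: ~20x in ten years
```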
It first came with the processor companies building what's called a multi-core solution.
So instead of having a single processor, they would put multiple, what they call cores, on a single chip.
And essentially putting multiple CPUs on a single chip.
And that's boosting performance.
And then the next step was what today is called data centers.
So you have suddenly a big problem.
Typically not a problem you would have at home. but a problem that a company wants to solve.
They want to extract information from data.
And maybe we can give a few examples for that later.
And then suddenly a single server, a single chip, it takes it too long to process and solve that problem. So the next step would be, okay, why take a single chip?
Let's take two of them.
Let's take four of them and connect them together
so they would have communications.
They can communicate between one another.
And if the problem can be parallelized in some way, you would gain, hopefully, close to four times the performance if you have four processors.
And these are essentially data centers, right? And today, I would say there are probably hundreds and thousands of companies which are using clusters of hundreds and thousands of nodes. So you have not only what you described as a single server; you have on top of that a full system, which is called a data center, with multiple servers talking to one another and communicating with one another in order to solve the problem.
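Why only "hopefully close to" four times, and not exactly four? The textbook way to quantify that caveat is Amdahl's Law, which is my addition here (Friedmann doesn't name it): speedup is capped by whatever fraction of the job stays serial.

```python
def amdahl_speedup(n, parallel_fraction):
    """Best-case speedup on n processors when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

# Four servers on a job that is 95% parallelizable: ~3.5x, not a full 4x.
print(amdahl_speedup(4, 0.95))
```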
Yeah, Jonathan, I appreciate you expanding on that, because, yes, without a doubt, it's a challenge trying to get folks to get their arms around a single computer and how that extrapolates out into servers and nodes, and then whole arrays of compute that are solving large, complex problems and taking up buildings' worth of compute.
Can I get us to take one quick detour?
I love that you referenced Moore's Law.
As a guy who got to grow up in technology, I started getting paid to work in the tech business in the 80s, as painful as that is to admit, you understand this so much better than I do. Why do you think, because I agree with you, Moore's Law doesn't really apply anymore? We're not seeing that. You know, we used to talk about that drive, that tech refresh, because processors had taken such a leap in a two-year period, everybody needed to do a refresh.
What's caused that, the reality in the processor business that Moore's Law no longer applies?
Why don't we make those 100% capability improvements every 12, 18, 24 months?
What's changed?
So there are two main drivers for Moore's Law.
One relates to the process, namely how you manufacture the chips, and one relates to
architecture improvements.
The first one, which is actually the dominant one: as time went by, the transistors, which are the building blocks of the chips, became smaller and smaller. And essentially, they were able to work faster, at a higher frequency, from one generation to the next.
And that by itself, without making the processor any better, gave a lot of performance boost.
We are now approaching the limits. First of all, moving from one process to another becomes technically very, very complex. Transistors are already made of very few atoms, and making them smaller becomes a very complex thing to do. And furthermore, now that we have gone up to such high frequencies, there are multiple other things that do not allow the frequency to go higher. So essentially, there is hardly any growth in frequency anymore.
As for the architecture, general-purpose processors have been developed for three or four decades now, and I'd say all the low-hanging fruit for improving processors has already been picked. It's not clear how much progress can be made there, and it's really, I think in some sense, not that important for the moment, because it is clear nothing going through that channel can compete with the growth of data.
And it's not just the growth of data. Today, humanity is capable of extracting information from data in really fantastic ways, very different from what we used to be able to do. So there's a huge need for processing that data.
Actually, let me give you just an example. One of our customers is a global ad-tech company. And they tell us they can connect their processing power to their revenue. Essentially, the revenue is built on how well they can match users with the ads they want to serve, how well they can make that correlation between the right user and the right ad. And they tell us that every time they double their processing power, it correlates directly to revenues, in the ballpark of 20 to 25% additional revenue.
So, of course, that makes sense only if you can double your compute without paying more than 20 to 25% more for it.
Right, right. But this is just one example; it's like that in whatever you want to do.
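To see why that caveat matters, here is the break-even arithmetic as a small sketch. The dollar figures are hypothetical, mine alone; only the "double the compute, gain roughly 20 to 25% revenue" relationship comes from Friedmann.

```python
revenue = 100_000_000      # assumed annual revenue (hypothetical)
compute_cost = 15_000_000  # assumed annual compute spend (hypothetical)
uplift = 0.20              # low end of the 20-25% figure from the episode

extra_revenue = revenue * uplift  # gained by doubling processing power
extra_cost = compute_cost         # naively, doubling compute doubles its cost

print(extra_revenue > extra_cost)  # doubling compute pays off only while True
```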
But of course, look at what's happening in the health industry: the ability to analyze DNA in ways which were not possible just a few years ago can really do tremendous things. And we're still so far away from really being able to do all the processing that we want. So there's a huge hunger today for processing power.
So Jonathan, let me see if I can, in layperson, non-electrical engineer PhD speak, say what I think is our setup.
So we talk about, you know, we did this simple description of a personal computer.
We extrapolate that out to a server, which is the same basic construct, but multiple processors and more capabilities.
But we've run the processor as fast as we can, right?
Everybody thinks about, to date myself, megahertz, now gigahertz, clock speed on the processor. We've put multiple cores on a single wafer now. So we've done all the things we can do in the CPU portion of the compute world. And now what we're trying to do is, hey,
how do we get more compute power? Hey, let's do processors that are doing specific tasks to help
the CPU. And that's how we get to this accelerator world, this world that Speedata is in. As your customer example just showed: if I can get greater compute power, not necessarily more out of my central processing unit, to match up my ads with the appropriate customer, that's a better fit. The better the fit, the more my customer will pay; the greater the compute power I have to make that better fit, the more revenue I can make. So there's a real business reason for being able to compute more efficiently, faster, on more specific kinds of problems.
And that leads us to this world of a whole other family of processors, which is the business you guys are in today, correct?
Exactly.
When you look at general purpose processors, a lot of the silicon, the power is spent on being general purpose.
So they have to bring in an instruction, they have to decode it, understand what it has
to do. Then they have to configure the execution unit in order to do what the instruction has told it to do.
And later on, they have to execute it.
So actually, when you look at it from a silicon or power point of view, I would say anywhere between 5% and possibly 10% is spent on the execution itself.
The rest of it is just for being generic.
And then, if you are willing to work on a specific workload, you can gain a lot by saying: okay, I won't be general purpose. I won't let just any instruction come into me. I have specific things I know how to do, and those things I know how to do extremely well. Then I can build an architecture which is very well adapted to that specific workload, and I can do things which are essentially orders of magnitude better.
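A toy software analogy of that trade-off (a sketch of the general idea, not Speedata's architecture): a general-purpose loop pays fetch-and-decode overhead on every single step, while a specialized path does only the one operation it was built for.

```python
def run_generic(program, acc):
    """'General purpose': fetch, decode, and dispatch every instruction."""
    for op, arg in program:  # fetch
        if op == "add":      # decode + dispatch: overhead, not useful work
            acc += arg       # execute: the only useful work
        elif op == "mul":
            acc *= arg
        else:
            raise ValueError(op)
    return acc

def run_specialized(values, k):
    """Specialized: no per-instruction decoding, just one known operation."""
    return [v * k for v in values]

print(run_generic([("add", 2), ("mul", 3)], 1))  # 9
print(run_specialized([1, 2, 3], 10))            # [10, 20, 30]
```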
When you look into a…
Sorry, yeah, so you don't need the overhead that the CPU has to understand the instruction, to deconstruct it, to translate it, to put it in execution mode. You take out all of that overhead that comes with being general purpose.
You just go, hey, as I like to think about, an expert is somebody who knows more and more about less and less.
So your processors are experts at specific requests.
Exactly. I think the nice analogy for that is the difference between a chef and a cookie cutter. The chef can do anything you like, and he's going to do it very well. But if you want to make a lot of cookies, and that's the only thing you want to do, you do not want to put the chef on it, because he's going to work on each cookie for a long time. You're going to take a cookie cutter, and that's going to do the job substantially more efficiently than a chef.
All right, I'm stealing that, Jonathan. Chef and cookie cutter. That's a good word picture. I like that. Okay. All right. So we're into cookie-cutter processors at Speedata. That was a lot of setup.
So tell us what you guys do. Tell us what your processors are specifically. And if you would,
I think when people think about specialty processors, I think people think about
video accelerators, right? That's popular in the gaming space. They think about, you know, mathematical,
what do we call those? Yeah, those math chips. I think those are the ones people are familiar with. Tell us what Speedata is doing and how you fit in that stack of specialty processors.
So first, on the example you just gave: look at AI accelerators for a second. Essentially, AI is multiplying floating-point matrices and doing that extremely efficiently.
Relating to our cookie cutter, that's what they do.
You look at these accelerators, they do it extremely efficiently.
Speedata is looking at another workload,
which is arguably the biggest workload
in the data center today,
that is databases and analytics.
Essentially, you have a database
and multiple industries hold their information
in databases, and then they wanna extract information
from it.
And you look at the public clouds, they are giving multiple services,
managed services to handle that.
And you look at the biggest and most important managed services in the world, you would find that they are all databases and analytics.
I will mention a few of them. Redshift by AWS,
which is probably the biggest managed service
in the world today.
BigQuery by Google.
That's an analytical tool.
SQL Server, which is not only a managed service,
but also an on-prem tool,
which is probably the biggest tool that Microsoft has.
Oracle, their main business is databases and analytics.
And of course there are Snowflake and Databricks.
And all the big tech companies in the world today, that's their biggest managed services.
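For something concrete, here is a minimal illustration (my example, not one from the episode) of the kind of query those engines spend their cycles on, using Python's built-in sqlite3 as a stand-in for a data warehouse:

```python
import sqlite3

# A tiny table standing in for a data-warehouse fact table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 120.0), ("EMEA", 80.0), ("US", 200.0)])

# Scans, filters, joins, and aggregations like this GROUP BY are the
# database-and-analytics workload being described here.
for region, total in con.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
```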
That's what they do. So we actually looked into that workload
and designed a chip from the ground up
to target this specific workload,
which actually today is completely dominated by CPUs.
So processing today, 99% of that workload, is done by CPUs. Very different from the revolution which happened in AI.
So hold on a second, Jonathan. 99% of big data processing is being done by the general-purpose CPU?
So big data is a big word. Yeah. I would say, you look at what happened in the last five years,
the AI revolution, you would find that five years ago,
the AI just began.
It was done in CPUs.
You look at AI in the data center today,
it's completely controlled by the GPUs.
So they had their revolution there, the first wave of acceleration. And actually there is a big war there now, for a second wave of acceleration, between multiple companies who are trying to do AI acceleration on top of the GPU. You look at what happens in analytics
and databases, the first revolution did not happen yet. The analytics is still completely controlled
by CPUs for multiple
reasons. The main reason, I would say, is that the hardware which was on the table during the AI revolution, namely the GPU, is simply not a good fit for analytics. And so the world is waiting for a grand solution for that.
So, Jonathan, just to relate it back to names that people can get, that GPU business, I think of NVIDIA.
Who else is in that space that names people would recognize?
So GPUs, AMD also has a GPU, but their main focus in GPU is graphics. So AI GPU is basically NVIDIA.
There are two candidates, AMD and now Intel is also coming out with their own GPU.
But NVIDIA is the king of AI today in the data center.
Right, right.
And so as you talk about that first revolution, we think about how that happened with NVIDIA in AI. And your opportunity, it's actually bigger?
Exactly. From a workload point of view, it's bigger. And look, NVIDIA is doing multiple things, but I believe more than 50% of its revenues come from AI in the data center.
So really, we have a huge opportunity here to build a huge Israeli semiconductor company. And that's a big dream.
Yeah, that is a great setup. And I appreciate you taking time to walk us through.
This helps us understand why Speedata matters, where it fits in the problem, and how the problem gets set up. This has been really, really helpful. We get asked a lot in the
data center business, right? We build buildings where all of this takes place. And we get asked a lot, oh, you're not going to need
any more buildings, right? Computers are getting smaller and faster. This is, you know, why do you
need to keep building buildings? You know, aren't you worried about the future of your industry?
And I always give the exact same thing you did. I said, you know, folks, I don't think realize
the speed at which our data is growing, right? And there's lots of studies,
there's lots of numbers, but I think it's safe to say all the data in the world doubles about every 24 months. So that means that at the end of 2024, two years from now, we will have twice as much data as we have today. And so there's just so many things that are causing people to write
data. The thing that we love to say here, people don't delete their ones and zeros.
They want to keep that data.
They want to look at that data.
They want to replicate that data.
They want copies of it so they can slice it and dice it and look at it.
And it's how you dig into that data; having the data is not that interesting. It's what the data tells you and what you can do by going and looking at it.
And when I think about speed data, that's what you're saying is, hey, let me go dig into your data. Let me go dig into that
SQL and that database and let's go find out what's in there and what can you learn from it.
Exactly. The data is really growing at a staggering rate. I actually met a company just a few weeks ago that generates synthetic data.
And that's, again, in order to make better analyses. They're synthetically generating data which used to take months to generate, and they now generate it in hours.
So really, it's not just people walking around and taking photographs or stuff like that. Data is already generated by computers. I do not see that stopping, definitely not in the near future.
And what Speedata is doing, essentially: we'd like to say that we are the plumbers. We are just building the tools and making the pipes wider, allowing other very smart companies to extract information from the data they have, giving them the tools and the ability to do that.
Gotcha. Yeah. So Speedata's job really is to be the infrastructure layer that
applications would sit on top of and go, hey, I'm going to get this data to you and present it in
such a way faster than you can get it if you had to go through that central processing unit. And
now you have it and it's available. Now you can do things with it because it's here faster and consumable for your application in a way that's useful.
Exactly.
Fascinating.
So does this business, Jonathan, end up – because I think about – I watch the NVIDIA business and the GPU business, right?
And it started on a card.
And you mentioned – I don't remember if it was early in this call or in another call we had where a lot of what you deliver is actually on a card.
That's how the GPU business started, right?
You just added that accelerator into your compute environment.
Is that how this business is?
And as I said, that was the start.
And now you can buy entire racks of GPUs, right?
Not just a card, but a whole system that is an array of GPUs.
Is that where this is headed?
Yes, definitely.
So we are actually building our own cards, and these are, I'll say, standard cards with a standard interface, called the PCIe interface. And they fit in the vast majority of existing servers. So you can simply add them to either existing or new servers, and then to multiple racks of servers. And essentially, in that sense, you were talking about fighting both power consumption and the growth in size of the data center: you would add our cards inside these racks and essentially get between an order of magnitude and two orders of magnitude improvement in performance without paying in space, basically improving your performance-to-power, performance-to-cost, or performance-to-space by an order of magnitude.
So that is one consideration I'd love for you to talk about a little bit.
When we have racks of GPUs, they eat a bunch of energy.
And in the data center business, how much electricity is in a rack, how many kilowatts we run through a certain rack,
how much heat that rack produces, how much heat we have to reject then to operate the data center, it all matters. Tell me from your perspective: Speedata's analytics processors, how are they on power consumption?
I love the space savings, but how are they on power consumption?
Our PCIe card would give anywhere between a multiple of 20 and a multiple of 100 in terms of performance compared to the CPU. Again, it's in some sense similar to what NVIDIA has done in AI. So when you look at CPUs or any kind of processor, typically, not always, but typically, it's not a big deal to get
more performance if you double the power, okay? I'll take two CPUs and get approximately double
performance, but also double power. So there's not much benefit in getting a lot of performance
without improving power. Again, since we save, as I mentioned earlier,
probably 85% or 90% of what's in a standard
general purpose processor.
So we're not doing all the activities
the general purpose processor is doing.
So we're saving huge amounts of power.
And you're actually able to do the same thing with much less power and at much higher speeds. And you gave the number: depending on the application, 20 to 100x the speed of the analysis running through an APU versus through a general-purpose processor.
Yes, we're actually working with multiple customers
and it depends on the workload,
what exactly you're doing
and also actually on the data itself.
So depending on the exact case, we're anywhere between a multiple of 20 and a multiple of 100.
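As a rough illustration of what those multiples mean for the rack-power question (the 20x figure is the low end Friedmann gives; the wattages are my assumptions, not Speedata specifications):

```python
cpu_power_w = 200  # assumed draw of the server CPUs being offloaded (hypothetical)
apu_power_w = 75   # assumed draw of the accelerator card (hypothetical)
speedup = 20       # low end of the 20x-100x range from the conversation

perf_per_watt_gain = speedup * cpu_power_w / apu_power_w
print(f"~{perf_per_watt_gain:.0f}x performance per watt at the low end")  # ~53x
```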
Awesome.
Well, Jonathan, this has been a super
helpful understanding of what accelerators and special processors do. Give us, if you would,
a few minutes on Speedata's roadmap. You guys are a couple of years old, if I remember right.
You've raised a good bit of funding. Where are you at in the roadmap? Where are you headed? What
does the future look like? Tell us where you are and where you're headed.
Okay, so our company is three years old,
and we are currently working with multiple big high-tech companies in the world
to make sure that we can accommodate all their requirements in our chip.
We expect to have our chip within several quarters.
And with that chip, we'll put it on a PCIe card
and basically deliver it to the customers we're currently working with.
So PCIe first. Do you see getting to the point where there are Speedata arrays, for lack of a better term, a whole solution that would combine a series of cards? Is that on the roadmap? I'm thinking of, like, an NVIDIA box that does all kinds of acceleration.
So I think we have multiple options. We have not decided where the path will lead us. As a first guess, today we are working with multiple OEMs. I do not see us, as a first step, making something like the DGX; NVIDIA not only has the PCIe card, they have their own server. I don't think we'll do that in our first steps. But definitely I can see us doing software on top of the solution, looking into how we can help our customers and make their lives easier, not just giving them the processors themselves, but possibly software layers to make their lives easier in processing and extracting information from their data.
Well, Jonathan, this has been super enlightening for me.
I really, really appreciate, you know, when PhDs in electrical engineering can make guys like me who barely got out of college understand it.
So I appreciate you going slow for me and helping me follow along.
Really, really fascinating hearing what Speed Data is doing, hearing what the technology industry in Israel is doing, and hearing a little bit about your story.
And from us at Compass, we wish you guys all the success in the world.
We just want more of this, right?
More data, more people succeeding, more people solving problems because that means that we need more data centers.
And that at least makes it so I can buy groceries for next week. So we appreciate it.
Okay. Thank you very much. Thank you for having me. We'd love to have you here in Israel.
Jonathan, on my next trip to Israel, I'm coming to Netanya and coming to see you. So thank you for joining us. We appreciate it.