In The Arena by TechArena - PEAK:AIO on Smarter Storage for AI—Data Insights @ OCP

Episode Date: January 8, 2026

From OCP in San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity....

Transcript
Starting point is 00:00:00 Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein. Now, let's step into the arena. Welcome to the arena. My name's Allyson Klein. We're coming to you from the OCP Summit in San Jose, California. And it's a Data Insights episode, which means I'm with Jeniece Wnorowski. Hey, Jeniece. How's it going? Hey, Allyson, doing well. I see that you brought an old friend back for another episode. Tell me who we're talking to today. I did. I did bring back an old friend, but not just an old friend, an organization that continues to morph and evolve and grow.
Starting point is 00:00:42 Very excited today to be speaking with PEAK:AIO. Today we have Roger Cummings, the CEO of PEAK:AIO. Welcome back, Roger. Thank you for having me. I appreciate it very, very much. So why don't we just start, because some of our listeners might have missed our first episode together. Why don't we just start with an introduction of PEAK:AIO and your focus on the market. Yeah, sounds great. So PEAK:AIO really initially addressed the challenge of new customers getting into AI and not understanding the infrastructure necessary. They bought the GPUs, they had this innovation, and all of a sudden they saw it crawling.
Starting point is 00:01:18 So PQAO really started around how do we gain the most highest performance in the smallest footprint? And meaning we can provide. And that's kind of our secret sauce very early, is getting lines speed performance on a single server. Wow. Okay. So, Roger, one of PQAO's innovations, right, is your PFNS, a new open source tool. Yeah. So file system, rather. And you just did a lot of announcing around this.
Starting point is 00:01:44 Yeah, we're very excited about it. Tell us a little bit more about that. Yeah. So as I mentioned, we started software to find, put software on a commodity server, turn that serving to a rocket ship for AI. That was great. And we had some great success together. providing that performance on a single node.
Starting point is 00:01:59 Well, now customers want to, obviously for training and expansion of AI applications, is scale that across the file system. But the challenge out there is there's a lot of proprietary code out there. So we wanted to open up. We work very closely with, we'll say almost national labs, to build this open source PNFS solution that will match the performance of storage as well as the scale of the file system that people need today. Now, you've also employed a modular framework to drive improved successes of AI applications.
Starting point is 00:02:26 Can we talk about that a bit? Yeah. Actually, one of your earlier presentations I remember, Solomon mentioned about failures is way too expensive today. And it really doesn't need to be. And that's where we took our architecture, our modular form of high performance data store, put the file system on top,
Starting point is 00:02:44 and it's automatically going to recognize as you add more and more notes. So to scale in a linear fashion across data storage as well as file performance. So it doesn't have to be cost prohibitive to take risks. and build innovation, and you need to fail sometimes, right? You learn more from doing it. Because very true. And your new to UAI data server, right, gives you tremendous throughput and density. But tell us a little bit about how you make the innovation possible around the tradeoffs of power, thermal, and energy.
Starting point is 00:03:16 So, yeah, absolutely. And you're going to see that as we grow across the file system, too, which is really exciting. So together with Solonime, we've built a tremendous amount of success around. around this modular footprint because we're able to go very dense with the power of the Solidine innovation, and then it's less infrastructure. A lot of our competitors start at 12 to 10 to 15 type of nodes just to get the same level of performance. We can get the same low performance in a sixth of the infrastructure necessary. Amazing. Now, the last thing that I saw hit my radar on PKK was your latest round of investment. Tell us what that opens the door to and how did that come
Starting point is 00:03:54 together. Really, we're taking the success that we've had. And we're a little bit older dogs. We've done this for a while, right? So we had the company and we're, this is our fifth year of the company. And we've had great success in the UK. Mark Larsinsky, who is our founder, chief strategy officer, did incredible job building company. The reference is some incredible accounts in the UK. I'm on board now about a year and a half, brought the investment in. Now it's about going global around the world to take this innovation of this modular approach to it and its economic approach to being successful along the way. So we're real excited about integrating with partners,
Starting point is 00:04:29 both at the OEM and ODM level. So we're real excited about what that funding allows us to do. Awesome. Now, we're obviously at OCP today. Can you tell us a little bit about your collaboration and alignment with OCP standard? Yeah, and I think Peak is perfectly aligned to what they stand for. You think about our open key NFS solution.
Starting point is 00:04:47 Customers need, you need to make things simple. So I spent the entire week today talking about building what we call T-shirt sizes of offerings, make things simple. In order for us to allow customer to be successful with AI, we got to make things packaged for them, both packaged at the partner level that we do, as well as the systems integrator and the resellers out there. So bringing that together on top of an open standard is really what customers need. Now, I think that we're at OCP Summit, everybody's talking about how do you increase
Starting point is 00:05:17 GPU utilization? And we all design this technology to serve customer requirements. Can you give me some examples of how customers have benefited? Yeah, absolutely. We're kind of benefiting from the maturity of AI, right? Before you have these large scale training models, well, now inference is coming to play. And we provide a really nice, high performance, scalable in performance and in file systems, solution to address the inference needs that they have both at the edge.
Starting point is 00:05:47 So things like, and I'm hearing more and more of this federated training. Sure. So getting closer to that edge, we fit really, really nice. nicely into that scenario. Can you say more about federated training? Yeah. There's so many applications. We're building this infrastructure to gain intelligence.
Starting point is 00:06:02 So let's move that, let's capture that intelligence as close we can to the data. And we're a great fit for that capturing that data, that intelligence, the data that's going to improve the inference models and then move that to the master model and prove the overall architecture. Yeah. Close loop. We've got some great examples for customers across many, many verticals doing that. So looking ahead, what trends or challenges do you see?
Starting point is 00:06:23 PKIO tackily with your software. And I would also love to know your positioning and how you're different from other competitors in this space. Yeah. So I think the biggest challenges that we're seeing is, again, there is a lot of, I think, repatriation of resources that need to take place, right? We need to have less infrastructure and more success. And I think that we're a great partner to do that with how we differentiate it from our competitors is exactly that. We're now going to take our economies of scale at a smaller level. and then also put a file system on that that takes advantage of that as they scale forward. I think we have some great innovation coming up above and beyond the file system
Starting point is 00:07:02 that's going to really allow folks to have a really deep and rich memory tier and working with solid and as well to become more and more intelligent about those AI workloads. So we can move those specific AI workloads. And that's where you're going to see pink really advanced is in the understanding of those workloads. So we can move those to the appropriate tiers within a solidine environment as well as others. Do you have a specific industry set you're really kind of going after, or is it broad? You know, in the beginning, we focused on life science, health care, university research, as well as government. Now that's really, the file system really allows everybody to get in at the very early stages of AI and grow.
Starting point is 00:07:41 So it's really open to any organization that wants to get the performance and the simplicity in the scale. That's so cool. It's always a pleasure to talk to you. Today was no different, really insightful conversation. So for those of us who are watching online, if you want to learn more, where do you send some more information? Yep. You can reach out to me on LinkedIn, but also, of course, our company website is piqueaio.com, and
Starting point is 00:08:05 we'll happy to talk to you a little bit more about our infrastructure and importantly match you up with one of our partners, it's a growing network of partners. The thing of it is, let's deliver a solution, a package solution that you can be successful with and grow. Thank you so much, Roger. And with that wraps another episode of Data Insights. much to use for the collaboration. Thank you, Alison.
Starting point is 00:08:24 Thank you, Roger. Thank you. Appreciate your time very much. Thanks for joining Tech Arena. Subscribe and engage at our website, Techarina. All content is copyright by Techarena.
