In The Arena by TechArena - Hybrid AI at Scale: Celestica’s Playbook

Episode Date: November 10, 2025

From AI Infra Summit, Celestica’s Matt Roman unpacks the shift to hybrid and on-prem AI, why sovereignty/security matter, and how silicon, power, cooling, and racks come together to deliver scalable AI infrastructure.

Transcript
Starting point is 00:00:00 Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allison Klein. Now, let's step into the arena. Welcome in the arena. My name is Allison Klein, and we are coming to you from the AI Infra Summit in Santa Clara, California. This is a Data Insights episode, which means I'm joined by my co-host, Janice Norowski. Welcome to the show, Janice. Oh, thank you, Allison. It's great to be here. Janice, we are having such amazing conversations at AI Infra from across the value chain, from those who are making infrastructure and solutions to those who are actually driving the innovation of AI adoption within the enterprise. Tell me about the topic today and who we're talking to. We are actually going to be
Starting point is 00:00:53 talking quite a bit about Celestica. And today we have Matt Roman, who is the senior director of product line management. Welcome. Great to be here. Thank you for having me. So, Matt, this is the first time Celestica's been on Tech Arena. So why don't we just take a step back and have you introduce the company, how you fit into the compute value chain for AI, and what your role is at the company? I'd love to. So Celestica is a large technology company, approximately $10 billion in revenue, headquartered in Toronto.
Starting point is 00:01:24 We have a number of different businesses and segments, but certainly what's relevant for AI and what we're talking about here is our CCS division. We do a number of different things, but really the biggest thing fueling the growth for our division and for Celestica as a whole is that we design and manufacture AI-focused technology solutions, particularly networking, compute, and storage, for the world's largest hyperscale customers. So that's the big part of the growth. We also have a set of products that we bring to market under the Celestica brand for, again, networking, storage, and compute, and then software and services that wrap around that.
Starting point is 00:02:02 Wow. So, Matt, with AI adoption accelerating today, and we're hearing that all over the show floor, how has the business opportunity for Celestica really shifted? And what kinds of infrastructure innovations are your customers, the hyperscalers, asking for? We've been working with hyperscalers for probably more than a decade, and we also do design work for OEM customers as well. And certainly when it started out, it was much more around traditional, large-scale data center deployments. But really in the last four or five years, the adoption of AI has really shifted things, certainly in terms of
Starting point is 00:02:38 the amount of technology these large companies are consuming and the types of technologies. There's certainly the speeds-and-feeds angle of it, just more and more capacity. But the massive adoption of accelerated silicon like GPUs and DPUs has changed the designs. It's changed the design cycles in terms of just the pace at which new silicon is being released.
Starting point is 00:03:02 And it's also changing things like the design, the power, and the widespread adoption of liquid cooling and other technologies. You've talked about your business with hyperscalers, and I wanted to ask you this question. You know, you're sitting at the center of AI deployments. How do you see the rate of deployment today? And where do you think that evolution curve is going?
Starting point is 00:03:23 Great question. I mean, the business is just booming. Obviously, all of these companies are highly competitive, and they all want that edge. They want all of that next-generation technology. I'm sure at some point there's a tail-off, but certainly from our perspective, it's all in high-growth mode.
Starting point is 00:03:38 So there doesn't seem to be a slowdown in sight. You know, as the deployment models expand from cloud to neocloud, which is one of my favorite topics, how do you see this changing the way infrastructure is being utilized, going from edge to, say, neocloud? Yeah, so that's definitely an interesting area for us. So obviously, again, the large hyperscalers are making big investments in infrastructure, and that's ongoing. But in that neocloud segment,
Starting point is 00:04:03 you know, we've had some really good new customer acquisition in that space. Some of it is from household-name neoclouds that get a lot of press and a lot of attention, but there's also a whole host of much smaller companies that are doing GPU as a service, and they're servicing enterprises that either don't want to or don't have the funding to be able to make that investment in infrastructure. Frankly, a lot of them don't know exactly what the architecture needs to be and what that investment is. So they go to these as-a-service providers, and those as-a-service providers come to suppliers like us. The large household-name neoclouds are looking at custom designs and are very prescriptive in terms of the technology they want. But the smaller
Starting point is 00:04:45 neoclouds are coming to us and purchasing full turnkey solutions from Celestica: a Celestica-branded product with our open-source software stack installed, and they're getting service and support from Celestica directly. Now, I've been in the data center arena for a long time, more time than I want to admit. But, you know, the rack densities and the compute densities that we're seeing in these racks are something that I never thought I would see in my lifetime. This is putting design pressure on the overall compute and driving new cooling approaches as well. How is Celestica tackling that with your customers?
Starting point is 00:05:22 Yeah, so we've got a big investment in liquid cooling for some of our cloud customers. We're on our third-generation designs for liquid cooling. I know that seems relatively new, but behind the scenes, Celestica has been doing it for a very long time. And you're right, the density is just amazing. The power from the accelerators, the GPUs, DPUs, CPUs, as well as, as an example, the network switching silicon, is just continuing to increase, and it's driving a need to move to liquid cooling. So we've been doing it for a number of years for hyperscale customers. We'll be releasing our first 100-terabit switch platform
Starting point is 00:05:56 that we'll bring to market through our channel and directly to end users. And that will be our first commercial off-the-shelf liquid-cooled platform. So we're really excited about it. That's awesome. That's really awesome. I love to hear it all the way up the stack, from the hyperscalers to the neocloud guys to now cooling. But can you share some examples of use cases that highlight some of the biggest opportunities you guys are seeing within AI infrastructure?
Starting point is 00:06:22 I think one great example, actually: yesterday, Celestica hosted a panel with one of our partners, Hedgehog, and an end customer, Zipline, and it was all about edge AI. And certainly the large data centers get a lot of press and a lot of attention in terms of what's going on, but in that use case, it's edge AI. This is a drone delivery company. Oh, cool.
Starting point is 00:06:41 It's using AI technology to do deliveries, and one of their big customers is Walmart. So if you live in the Dallas-Fort Worth area, you can order something from Walmart online and have a drone deliver it. And the technology behind it is small-scale GPUs, obviously, that are in the drone. It's programmed to navigate and go around power lines
Starting point is 00:07:00 and figure out where to drop the package. So I think that's an interesting example that's new and emerging, where it's AI at the edge. And then, of course, they've got AI in the back end of their IT shop as well. That's really cool. I was just thinking I really love the name Hedgehog for a brand, too. So the fact that you guys are working with them,
Starting point is 00:07:16 that's cool. Now, when you think about where this is going, obviously the hyperscalers have invested an incredible amount in training these models to go out and do fantastic work. And enterprises, and I think this has really been the theme of the week, enterprises are now starting to integrate AI, generative AI, agentic AI, more comprehensively in their environments. How are you seeing the proliferation of AI across a broader landscape? When an enterprise is thinking, I want to run some of this on-prem, I want to run some of this with my trusted provider, how do you think that's going to influence the infrastructure demands moving forward? Yeah, I mean, we definitely see this hybrid approach that you mentioned, both on-prem infrastructure as well as cloud. We're engaged with a number of customers that are actually moving from public cloud,
Starting point is 00:08:11 hosted public cloud infrastructure for AI, for GPU, and they're actually, you know, moving that on-prem. It's obviously not a rip and replace; there's a phased approach, and in some cases that's for sovereignty, security, or other reasons. But we definitely are seeing that hybrid approach and, you know, an end goal for a lot of these large enterprises to basically move all that stuff back on-prem, particularly for AI. You guys, you talked about this in some of the responses you just gave us, but you work really closely with the overall ecosystem. How do you see that collaboration for Celestica kind of evolving to support scalable AI infrastructure? Yeah, so the ecosystem obviously is super important. And for us, it starts certainly with the silicon vendors, right?
Starting point is 00:08:54 So that's the engine that's powering this technology. And we have strategic relationships with the world's leading semiconductor companies. We get early access to do designs. And we certainly try to be first to market wherever possible. So that's on the silicon piece. When we think about AI infrastructure racks, we do a lot of integrated rack solutions. And we don't make all of that technology. So we have strong partnerships with the cooling vendors, with the power distribution vendors,
Starting point is 00:09:22 the structured cabling that goes into that, and the physical racks. So, yeah, it takes a lot of collaboration with the ecosystem to be able to put forward a solution that meets our customer requirements. Well, Matt, such an impressive conversation. Thank you so much for taking the time. This is the first time that Celestica has been on the show, but I hope it's not the last. We would love to have you back sometime. And one final question for you: where can folks go to find more information
Starting point is 00:09:46 about Celestica, connect with you and the team, and find out about the solutions you're delivering to market? Certainly number one would be Celestica.com; there's a ton of information up there. And second, I would say, follow us on LinkedIn. That's awesome. Well, Janice, that wraps another edition of Data Insights. And thank you so much to both of you
Starting point is 00:10:03 for spending the time today. Thanks for having me. Thank you, Matt. Thanks for joining Tech Arena. Subscribe and engage at our website, TechArena. All content is copyright by TechArena.
